/configuration/#limits_config) is configured and push requests are blocked, the endpoint will return the status code configured in `block_ingestion_status_code` (`260` by default)
+along with an error message. If the configured status code is `200`, no error message will be returned.
+
### Examples
The following cURL command pushes a stream with the label "foo=bar2" and a single log line "fizzbuzz" using JSON encoding:
diff --git a/docs/sources/release-notes/v2-9.md b/docs/sources/release-notes/v2-9.md
index 74d83e2303c05..c1d72e622144b 100644
--- a/docs/sources/release-notes/v2-9.md
+++ b/docs/sources/release-notes/v2-9.md
@@ -5,7 +5,9 @@ weight: 50
---
# V2.9
-Grafana Labs is excited to announce the release of Loki 2.9.0 Here's a summary of new enhancements and important fixes:
+Grafana Labs is excited to announce the release of Loki 2.9.0 Here's a summary of new enhancements and important fixes.
+
+For a full list of all changes and fixes, refer to the [CHANGELOG](https://github.com/grafana/loki/blob/release-2.9.x/CHANGELOG.md).
## Features and enhancements
@@ -34,6 +36,15 @@ Grafana Labs is excited to announce the release of Loki 2.9.0 Here's a summary o
## Bug fixes
+### 2.9.10 (2026-08-09)
+
+- Update dependencies versions to remove CVE ([#13835](https://github.com/grafana/loki/pull/13835)) ([567bef2](https://github.com/grafana/loki/commit/567bef286376663407c54f5da07fa00963ba5485)).
+
+### 2.9.9 (2024 -07-04)
+
+- **Ingester:** Add `ingester_chunks_flush_failures_total` [12925](https://github.com/grafana/loki/pull/12925).
+- **Ingester:** Add backoff to flush op [13140](https://github.com/grafana/loki/pull/13140).
+
### 2.9.8 (2024-05-03)
- **deps:** update module golang.org/x/net to v0.23.0 [security] (release-2.9.x) ([#12865](https://github.com/grafana/loki/issues/12865)) ([94e0029](https://github.com/grafana/loki/commit/94e00299ec9b36ad97c147641566b6922268c54e)).
diff --git a/docs/sources/release-notes/v3.0.md b/docs/sources/release-notes/v3-0.md
similarity index 94%
rename from docs/sources/release-notes/v3.0.md
rename to docs/sources/release-notes/v3-0.md
index ea3c7603ff820..c04dcda82638e 100644
--- a/docs/sources/release-notes/v3.0.md
+++ b/docs/sources/release-notes/v3-0.md
@@ -54,6 +54,11 @@ The path from 2.9 to 3.0 includes several breaking changes. For important upgrad
## Bug fixes
+### 3.0.1 (2024-08-09)
+
+- **deps:** Bumped dependencies versions to resolve CVEs ([#13833](https://github.com/grafana/loki/pull/13833)) ([e13011d](https://github.com/grafana/loki/commit/e13011d91a77501ca4f659df9cf33f23085d3a35)).
+- Fixed nil pointer dereference in bloomstore initialization ([#12869](https://github.com/grafana/loki/issues/12869)) ([167b468](https://github.com/grafana/loki/commit/167b468598bc70bbed6eed44826d3f9b85e1e0b8)), closes [#12270](https://github.com/grafana/loki/issues/12270).
+
### 3.0.0 (2024-04-08)
- All lifecycler configurations reference a valid IPv6 address and port combination ([#11121](https://github.com/grafana/loki/issues/11121)) ([6385b19](https://github.com/grafana/loki/commit/6385b195739bd7d4e9706faddd0de663d8e5331a)).
diff --git a/docs/sources/release-notes/v3-1.md b/docs/sources/release-notes/v3-1.md
index ab4f0f7c3c999..917f8d9ecfd02 100644
--- a/docs/sources/release-notes/v3-1.md
+++ b/docs/sources/release-notes/v3-1.md
@@ -20,6 +20,8 @@ Key features in Loki 3.1.0 include the following:
- **LogQL:** Support negative numbers in LogQL ([#13091](https://github.com/grafana/loki/issues/13091)) ([6df81db](https://github.com/grafana/loki/commit/6df81db978b0157ab96fa0629a311f919dad1e8a)). Improve performance of `first_over_time` and `last_over_time` queries by sharding them ([#11605](https://github.com/grafana/loki/issues/11605)) ([f66172e](https://github.com/grafana/loki/commit/f66172eed17f9418ab22615537c7b65b09de96e5)). Improve syntax parser for pattern ([#12489](https://github.com/grafana/loki/issues/12489)) ([48dae44](https://github.com/grafana/loki/commit/48dae4417cca75a40d6a3bf16b0d976714e8db81)).
+- **Loki:** Add ability to disable AWS S3 dual stack endpoints usage ([#13795](https://github.com/grafana/loki/issues/13795)) ([464ac73](https://github.com/grafana/loki/commit/464ac736a6fb70b673ee3cec21049b18d353cadb)).
+
- **lokitool:** Add `lokitool` to replace `cortextool`. ([#12166](https://github.com/grafana/loki/issues/12166)) ([7b7d3d4](https://github.com/grafana/loki/commit/7b7d3d4cd2c979c778d3741156f0d765a9e531b2)). Introduce `index audit` to `lokitool` ([#13008](https://github.com/grafana/loki/issues/13008)) ([47f0236](https://github.com/grafana/loki/commit/47f0236ea8f33a67a0a1abf6e6d6b3582661c4ba)).
- **Explore Logs:** Explore Logs, which lets you explore your Loki data without writing LogQL queries, is now available in public preview. If you are a Grafana Cloud user, you can access Explore Logs in the Grafana Cloud main navigation menu. If you are not a Grafana Cloud user, you can install the [Explore Logs plugin](https://grafana.com/docs/grafana-cloud/visualizations/simplified-exploration/logs/access/). For more information, refer to the [Explore Logs documentation](https://grafana.com/docs/grafana-cloud/visualizations/simplified-exploration/logs/).
@@ -77,6 +79,10 @@ Out of an abundance of caution, we advise that users with Loki or Grafana Enterp
## Bug fixes
+## 3.1.1 (2024-08-08)
+
+- **deps:** Bumped dependencies versions to resolve CVEs ([#13789](https://github.com/grafana/loki/issues/13789)) ([34206cd](https://github.com/grafana/loki/commit/34206cd2d6290566034710ae6c2d08af8804bc91)).
+
### 3.1.0 (2024-07-02)
diff --git a/docs/sources/setup/install/helm/reference.md b/docs/sources/setup/install/helm/reference.md
index ff9eca098eb6b..f9028da643849 100644
--- a/docs/sources/setup/install/helm/reference.md
+++ b/docs/sources/setup/install/helm/reference.md
@@ -698,9 +698,9 @@ null
- bloomCompactor
+ bloomBuilder
object
- Configuration for the bloom compactor
+ Configuration for the bloom-builder
{
"affinity": {
@@ -709,7 +709,369 @@ null
{
"labelSelector": {
"matchLabels": {
- "app.kubernetes.io/component": "bloom-compactor"
+ "app.kubernetes.io/component": "bloom-builder"
+ }
+ },
+ "topologyKey": "kubernetes.io/hostname"
+ }
+ ]
+ }
+ },
+ "appProtocol": {
+ "grpc": ""
+ },
+ "autoscaling": {
+ "behavior": {
+ "enabled": false,
+ "scaleDown": {},
+ "scaleUp": {}
+ },
+ "customMetrics": [],
+ "enabled": false,
+ "maxReplicas": 3,
+ "minReplicas": 1,
+ "targetCPUUtilizationPercentage": 60,
+ "targetMemoryUtilizationPercentage": null
+ },
+ "command": null,
+ "extraArgs": [],
+ "extraContainers": [],
+ "extraEnv": [],
+ "extraEnvFrom": [],
+ "extraVolumeMounts": [],
+ "extraVolumes": [],
+ "hostAliases": [],
+ "image": {
+ "registry": null,
+ "repository": null,
+ "tag": null
+ },
+ "maxUnavailable": null,
+ "nodeSelector": {},
+ "podAnnotations": {},
+ "podLabels": {},
+ "priorityClassName": null,
+ "replicas": 0,
+ "resources": {},
+ "serviceLabels": {},
+ "terminationGracePeriodSeconds": 30,
+ "tolerations": []
+}
+
+
+
+
+ bloomBuilder.affinity
+ object
+ Affinity for bloom-builder pods.
+
+Hard node anti-affinity
+
+
+
+
+ bloomBuilder.appProtocol
+ object
+ Adds the appProtocol field to the queryFrontend service. This allows bloomBuilder to work with istio protocol selection.
+
+{
+ "grpc": ""
+}
+
+
+
+
+ bloomBuilder.appProtocol.grpc
+ string
+ Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"
+
+""
+
+
+
+
+ bloomBuilder.autoscaling.behavior.enabled
+ bool
+ Enable autoscaling behaviours
+
+false
+
+
+
+
+ bloomBuilder.autoscaling.behavior.scaleDown
+ object
+ define scale down policies, must conform to HPAScalingRules
+
+{}
+
+
+
+
+ bloomBuilder.autoscaling.behavior.scaleUp
+ object
+ define scale up policies, must conform to HPAScalingRules
+
+{}
+
+
+
+
+ bloomBuilder.autoscaling.customMetrics
+ list
+ Allows one to define custom metrics using the HPA/v2 schema (for example, Pods, Object or External metrics)
+
+[]
+
+
+
+
+ bloomBuilder.autoscaling.enabled
+ bool
+ Enable autoscaling for the bloom-builder
+
+false
+
+
+
+
+ bloomBuilder.autoscaling.maxReplicas
+ int
+ Maximum autoscaling replicas for the bloom-builder
+
+3
+
+
+
+
+ bloomBuilder.autoscaling.minReplicas
+ int
+ Minimum autoscaling replicas for the bloom-builder
+
+1
+
+
+
+
+ bloomBuilder.autoscaling.targetCPUUtilizationPercentage
+ int
+ Target CPU utilisation percentage for the bloom-builder
+
+60
+
+
+
+
+ bloomBuilder.autoscaling.targetMemoryUtilizationPercentage
+ string
+ Target memory utilisation percentage for the bloom-builder
+
+null
+
+
+
+
+ bloomBuilder.command
+ string
+ Command to execute instead of defined in Docker image
+
+null
+
+
+
+
+ bloomBuilder.extraArgs
+ list
+ Additional CLI args for the bloom-builder
+
+[]
+
+
+
+
+ bloomBuilder.extraContainers
+ list
+ Containers to add to the bloom-builder pods
+
+[]
+
+
+
+
+ bloomBuilder.extraEnv
+ list
+ Environment variables to add to the bloom-builder pods
+
+[]
+
+
+
+
+ bloomBuilder.extraEnvFrom
+ list
+ Environment variables from secrets or configmaps to add to the bloom-builder pods
+
+[]
+
+
+
+
+ bloomBuilder.extraVolumeMounts
+ list
+ Volume mounts to add to the bloom-builder pods
+
+[]
+
+
+
+
+ bloomBuilder.extraVolumes
+ list
+ Volumes to add to the bloom-builder pods
+
+[]
+
+
+
+
+ bloomBuilder.hostAliases
+ list
+ hostAliases to add
+
+[]
+
+
+
+
+ bloomBuilder.image.registry
+ string
+ The Docker registry for the bloom-builder image. Overrides `loki.image.registry`
+
+null
+
+
+
+
+ bloomBuilder.image.repository
+ string
+ Docker image repository for the bloom-builder image. Overrides `loki.image.repository`
+
+null
+
+
+
+
+ bloomBuilder.image.tag
+ string
+ Docker image tag for the bloom-builder image. Overrides `loki.image.tag`
+
+null
+
+
+
+
+ bloomBuilder.maxUnavailable
+ string
+ Pod Disruption Budget maxUnavailable
+
+null
+
+
+
+
+ bloomBuilder.nodeSelector
+ object
+ Node selector for bloom-builder pods
+
+{}
+
+
+
+
+ bloomBuilder.podAnnotations
+ object
+ Annotations for bloom-builder pods
+
+{}
+
+
+
+
+ bloomBuilder.podLabels
+ object
+ Labels for bloom-builder pods
+
+{}
+
+
+
+
+ bloomBuilder.priorityClassName
+ string
+ The name of the PriorityClass for bloom-builder pods
+
+null
+
+
+
+
+ bloomBuilder.replicas
+ int
+ Number of replicas for the bloom-builder
+
+0
+
+
+
+
+ bloomBuilder.resources
+ object
+ Resource requests and limits for the bloom-builder
+
+{}
+
+
+
+
+ bloomBuilder.serviceLabels
+ object
+ Labels for bloom-builder service
+
+{}
+
+
+
+
+ bloomBuilder.terminationGracePeriodSeconds
+ int
+ Grace period to allow the bloom-builder to shutdown before it is killed
+
+30
+
+
+
+
+ bloomBuilder.tolerations
+ list
+ Tolerations for bloom-builder pods
+
+[]
+
+
+
+
+ bloomGateway
+ object
+ Configuration for the bloom-gateway
+
+{
+ "affinity": {
+ "podAntiAffinity": {
+ "requiredDuringSchedulingIgnoredDuringExecution": [
+ {
+ "labelSelector": {
+ "matchLabels": {
+ "app.kubernetes.io/component": "bloom-gateway"
}
},
"topologyKey": "kubernetes.io/hostname"
@@ -773,16 +1135,16 @@ null
- bloomCompactor.affinity
+ bloomGateway.affinity
object
- Affinity for bloom compactor pods.
+ Affinity for bloom-gateway pods.
Hard node anti-affinity
- bloomCompactor.appProtocol
+ bloomGateway.appProtocol
object
Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"
@@ -793,7 +1155,7 @@ Hard node anti-affinity
- bloomCompactor.command
+ bloomGateway.command
string
Command to execute instead of defined in Docker image
@@ -802,61 +1164,61 @@ null
- bloomCompactor.extraArgs
+ bloomGateway.extraArgs
list
- Additional CLI args for the bloom compactor
+ Additional CLI args for the bloom-gateway
[]
- bloomCompactor.extraContainers
+ bloomGateway.extraContainers
list
- Containers to add to the bloom compactor pods
+ Containers to add to the bloom-gateway pods
[]
- bloomCompactor.extraEnv
+ bloomGateway.extraEnv
list
- Environment variables to add to the bloom compactor pods
+ Environment variables to add to the bloom-gateway pods
[]
- bloomCompactor.extraEnvFrom
+ bloomGateway.extraEnvFrom
list
- Environment variables from secrets or configmaps to add to the bloom compactor pods
+ Environment variables from secrets or configmaps to add to the bloom-gateway pods
[]
- bloomCompactor.extraVolumeMounts
+ bloomGateway.extraVolumeMounts
list
- Volume mounts to add to the bloom compactor pods
+ Volume mounts to add to the bloom-gateway pods
[]
- bloomCompactor.extraVolumes
+ bloomGateway.extraVolumes
list
- Volumes to add to the bloom compactor pods
+ Volumes to add to the bloom-gateway pods
[]
- bloomCompactor.hostAliases
+ bloomGateway.hostAliases
list
hostAliases to add
@@ -865,43 +1227,43 @@ null
- bloomCompactor.image.registry
+ bloomGateway.image.registry
string
- The Docker registry for the bloom compactor image. Overrides `loki.image.registry`
+ The Docker registry for the bloom-gateway image. Overrides `loki.image.registry`
null
- bloomCompactor.image.repository
+ bloomGateway.image.repository
string
- Docker image repository for the bloom compactor image. Overrides `loki.image.repository`
+ Docker image repository for the bloom-gateway image. Overrides `loki.image.repository`
null
- bloomCompactor.image.tag
+ bloomGateway.image.tag
string
- Docker image tag for the bloom compactor image. Overrides `loki.image.tag`
+ Docker image tag for the bloom-gateway image. Overrides `loki.image.tag`
null
- bloomCompactor.initContainers
+ bloomGateway.initContainers
list
- Init containers to add to the bloom compactor pods
+ Init containers to add to the bloom-gateway pods
[]
- bloomCompactor.livenessProbe
+ bloomGateway.livenessProbe
object
liveness probe settings for ingester pods. If empty use `loki.livenessProbe`
@@ -910,34 +1272,34 @@ null
- bloomCompactor.nodeSelector
+ bloomGateway.nodeSelector
object
- Node selector for bloom compactor pods
+ Node selector for bloom-gateway pods
{}
- bloomCompactor.persistence.annotations
+ bloomGateway.persistence.annotations
object
- Annotations for bloom compactor PVCs
+ Annotations for bloom-gateway PVCs
{}
- bloomCompactor.persistence.claims
+ bloomGateway.persistence.claims
list
- List of the bloom compactor PVCs
+ List of the bloom-gateway PVCs
- bloomCompactor.persistence.enableStatefulSetAutoDeletePVC
+ bloomGateway.persistence.enableStatefulSetAutoDeletePVC
bool
Enable StatefulSetAutoDeletePVC feature
@@ -946,16 +1308,16 @@ false
- bloomCompactor.persistence.enabled
+ bloomGateway.persistence.enabled
bool
- Enable creating PVCs for the bloom compactor
+ Enable creating PVCs for the bloom-gateway
false
- bloomCompactor.persistence.size
+ bloomGateway.persistence.size
string
Size of persistent disk
@@ -964,7 +1326,7 @@ false
- bloomCompactor.persistence.storageClass
+ bloomGateway.persistence.storageClass
string
Storage class to be used. If defined, storageClassName: . If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
@@ -973,34 +1335,34 @@ null
- bloomCompactor.podAnnotations
+ bloomGateway.podAnnotations
object
- Annotations for bloom compactor pods
+ Annotations for bloom-gateway pods
{}
- bloomCompactor.podLabels
+ bloomGateway.podLabels
object
- Labels for bloom compactor pods
+ Labels for bloom-gateway pods
{}
- bloomCompactor.priorityClassName
+ bloomGateway.priorityClassName
string
- The name of the PriorityClass for bloom compactor pods
+ The name of the PriorityClass for bloom-gateway pods
null
- bloomCompactor.readinessProbe
+ bloomGateway.readinessProbe
object
readiness probe settings for ingester pods. If empty, use `loki.readinessProbe`
@@ -1009,34 +1371,34 @@ null
- bloomCompactor.replicas
+ bloomGateway.replicas
int
- Number of replicas for the bloom compactor
+ Number of replicas for the bloom-gateway
0
- bloomCompactor.resources
+ bloomGateway.resources
object
- Resource requests and limits for the bloom compactor
+ Resource requests and limits for the bloom-gateway
{}
- bloomCompactor.serviceAccount.annotations
+ bloomGateway.serviceAccount.annotations
object
- Annotations for the bloom compactor service account
+ Annotations for the bloom-gateway service account
{}
- bloomCompactor.serviceAccount.automountServiceAccountToken
+ bloomGateway.serviceAccount.automountServiceAccountToken
bool
Set this toggle to false to opt out of automounting API credentials for the service account
@@ -1045,54 +1407,54 @@ true
- bloomCompactor.serviceAccount.imagePullSecrets
+ bloomGateway.serviceAccount.imagePullSecrets
list
- Image pull secrets for the bloom compactor service account
+ Image pull secrets for the bloom-gateway service account
[]
- bloomCompactor.serviceAccount.name
+ bloomGateway.serviceAccount.name
string
- The name of the ServiceAccount to use for the bloom compactor. If not set and create is true, a name is generated by appending "-bloom-compactor" to the common ServiceAccount.
+ The name of the ServiceAccount to use for the bloom-gateway. If not set and create is true, a name is generated by appending "-bloom-gateway" to the common ServiceAccount.
null
- bloomCompactor.serviceLabels
+ bloomGateway.serviceLabels
object
- Labels for bloom compactor service
+ Labels for bloom-gateway service
{}
- bloomCompactor.terminationGracePeriodSeconds
+ bloomGateway.terminationGracePeriodSeconds
int
- Grace period to allow the bloom compactor to shutdown before it is killed
+ Grace period to allow the bloom-gateway to shutdown before it is killed
30
- bloomCompactor.tolerations
+ bloomGateway.tolerations
list
- Tolerations for bloom compactor pods
+ Tolerations for bloom-gateway pods
[]
- bloomGateway
+ bloomPlanner
object
- Configuration for the bloom gateway
+ Configuration for the bloom-planner
{
"affinity": {
@@ -1101,7 +1463,7 @@ null
{
"labelSelector": {
"matchLabels": {
- "app.kubernetes.io/component": "bloom-gateway"
+ "app.kubernetes.io/component": "bloom-planner"
}
},
"topologyKey": "kubernetes.io/hostname"
@@ -1130,13 +1492,7 @@ null
"nodeSelector": {},
"persistence": {
"annotations": {},
- "claims": [
- {
- "name": "data",
- "size": "10Gi",
- "storageClass": null
- }
- ],
+ "claims": [],
"enableStatefulSetAutoDeletePVC": false,
"enabled": false,
"size": "10Gi",
@@ -1165,16 +1521,16 @@ null
- bloomGateway.affinity
+ bloomPlanner.affinity
object
- Affinity for bloom gateway pods.
+ Affinity for bloom-planner pods.
Hard node anti-affinity
- bloomGateway.appProtocol
+ bloomPlanner.appProtocol
object
Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"
@@ -1185,7 +1541,7 @@ Hard node anti-affinity
- bloomGateway.command
+ bloomPlanner.command
string
Command to execute instead of defined in Docker image
@@ -1194,61 +1550,61 @@ null
- bloomGateway.extraArgs
+ bloomPlanner.extraArgs
list
- Additional CLI args for the bloom gateway
+ Additional CLI args for the bloom-planner
[]
- bloomGateway.extraContainers
+ bloomPlanner.extraContainers
list
- Containers to add to the bloom gateway pods
+ Containers to add to the bloom-planner pods
[]
- bloomGateway.extraEnv
+ bloomPlanner.extraEnv
list
- Environment variables to add to the bloom gateway pods
+ Environment variables to add to the bloom-planner pods
[]
- bloomGateway.extraEnvFrom
+ bloomPlanner.extraEnvFrom
list
- Environment variables from secrets or configmaps to add to the bloom gateway pods
+ Environment variables from secrets or configmaps to add to the bloom-planner pods
[]
- bloomGateway.extraVolumeMounts
+ bloomPlanner.extraVolumeMounts
list
- Volume mounts to add to the bloom gateway pods
+ Volume mounts to add to the bloom-planner pods
[]
- bloomGateway.extraVolumes
+ bloomPlanner.extraVolumes
list
- Volumes to add to the bloom gateway pods
+ Volumes to add to the bloom-planner pods
[]
- bloomGateway.hostAliases
+ bloomPlanner.hostAliases
list
hostAliases to add
@@ -1257,43 +1613,43 @@ null
- bloomGateway.image.registry
+ bloomPlanner.image.registry
string
- The Docker registry for the bloom gateway image. Overrides `loki.image.registry`
+ The Docker registry for the bloom-planner image. Overrides `loki.image.registry`
null
- bloomGateway.image.repository
+ bloomPlanner.image.repository
string
- Docker image repository for the bloom gateway image. Overrides `loki.image.repository`
+ Docker image repository for the bloom-planner image. Overrides `loki.image.repository`
null
- bloomGateway.image.tag
+ bloomPlanner.image.tag
string
- Docker image tag for the bloom gateway image. Overrides `loki.image.tag`
+ Docker image tag for the bloom-planner image. Overrides `loki.image.tag`
null
- bloomGateway.initContainers
+ bloomPlanner.initContainers
list
- Init containers to add to the bloom gateway pods
+ Init containers to add to the bloom-planner pods
[]
- bloomGateway.livenessProbe
+ bloomPlanner.livenessProbe
object
liveness probe settings for ingester pods. If empty use `loki.livenessProbe`
@@ -1302,34 +1658,34 @@ null
- bloomGateway.nodeSelector
+ bloomPlanner.nodeSelector
object
- Node selector for bloom gateway pods
+ Node selector for bloom-planner pods
{}
- bloomGateway.persistence.annotations
+ bloomPlanner.persistence.annotations
object
- Annotations for bloom gateway PVCs
+ Annotations for bloom-planner PVCs
{}
- bloomGateway.persistence.claims
+ bloomPlanner.persistence.claims
list
- List of the bloom gateway PVCs
+ List of the bloom-planner PVCs
-
+[]
- bloomGateway.persistence.enableStatefulSetAutoDeletePVC
+ bloomPlanner.persistence.enableStatefulSetAutoDeletePVC
bool
Enable StatefulSetAutoDeletePVC feature
@@ -1338,16 +1694,16 @@ false
- bloomGateway.persistence.enabled
+ bloomPlanner.persistence.enabled
bool
- Enable creating PVCs for the bloom gateway
+ Enable creating PVCs for the bloom-planner
false
- bloomGateway.persistence.size
+ bloomPlanner.persistence.size
string
Size of persistent disk
@@ -1356,7 +1712,7 @@ false
- bloomGateway.persistence.storageClass
+ bloomPlanner.persistence.storageClass
string
Storage class to be used. If defined, storageClassName: . If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
@@ -1365,34 +1721,34 @@ null
- bloomGateway.podAnnotations
+ bloomPlanner.podAnnotations
object
- Annotations for bloom gateway pods
+ Annotations for bloom-planner pods
{}
- bloomGateway.podLabels
+ bloomPlanner.podLabels
object
- Labels for bloom gateway pods
+ Labels for bloom-planner pods
{}
- bloomGateway.priorityClassName
+ bloomPlanner.priorityClassName
string
- The name of the PriorityClass for bloom gateway pods
+ The name of the PriorityClass for bloom-planner pods
null
- bloomGateway.readinessProbe
+ bloomPlanner.readinessProbe
object
readiness probe settings for ingester pods. If empty, use `loki.readinessProbe`
@@ -1401,34 +1757,34 @@ null
- bloomGateway.replicas
+ bloomPlanner.replicas
int
- Number of replicas for the bloom gateway
+ Number of replicas for the bloom-planner
0
- bloomGateway.resources
+ bloomPlanner.resources
object
- Resource requests and limits for the bloom gateway
+ Resource requests and limits for the bloom-planner
{}
- bloomGateway.serviceAccount.annotations
+ bloomPlanner.serviceAccount.annotations
object
- Annotations for the bloom gateway service account
+ Annotations for the bloom-planner service account
{}
- bloomGateway.serviceAccount.automountServiceAccountToken
+ bloomPlanner.serviceAccount.automountServiceAccountToken
bool
Set this toggle to false to opt out of automounting API credentials for the service account
@@ -1437,45 +1793,45 @@ true
- bloomGateway.serviceAccount.imagePullSecrets
+ bloomPlanner.serviceAccount.imagePullSecrets
list
- Image pull secrets for the bloom gateway service account
+ Image pull secrets for the bloom-planner service account
[]
- bloomGateway.serviceAccount.name
+ bloomPlanner.serviceAccount.name
string
- The name of the ServiceAccount to use for the bloom gateway. If not set and create is true, a name is generated by appending "-bloom-gateway" to the common ServiceAccount.
+ The name of the ServiceAccount to use for the bloom-planner. If not set and create is true, a name is generated by appending "-bloom-planner" to the common ServiceAccount.
null
- bloomGateway.serviceLabels
+ bloomPlanner.serviceLabels
object
- Labels for bloom gateway service
+ Labels for bloom-planner service
{}
- bloomGateway.terminationGracePeriodSeconds
+ bloomPlanner.terminationGracePeriodSeconds
int
- Grace period to allow the bloom gateway to shutdown before it is killed
+ Grace period to allow the bloom-planner to shutdown before it is killed
30
- bloomGateway.tolerations
+ bloomPlanner.tolerations
list
- Tolerations for bloom gateway pods
+ Tolerations for bloom-planner pods
[]
@@ -1623,6 +1979,56 @@ true
5
+
+
+
+ chunksCache.persistence
+ object
+ Persistence settings for the chunks-cache
+
+{
+ "enabled": false,
+ "mountPath": "/data",
+ "storageClass": null,
+ "storageSize": "10G"
+}
+
+
+
+
+ chunksCache.persistence.enabled
+ bool
+ Enable creating PVCs for the chunks-cache
+
+false
+
+
+
+
+ chunksCache.persistence.mountPath
+ string
+ Volume mount path
+
+"/data"
+
+
+
+
+ chunksCache.persistence.storageClass
+ string
+ Storage class to be used. If defined, storageClassName: . If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
+
+null
+
+
+
+
+ chunksCache.persistence.storageSize
+ string
+ Size of persistent disk
+
+"10G"
+
@@ -3637,7 +4043,7 @@ null
string
The gateway image tag
-"1.24-alpine"
+"1.27-alpine"
@@ -9035,6 +9441,56 @@ true
{}
+
+
+
+ resultsCache.persistence
+ object
+ Persistence settings for the results-cache
+
+{
+ "enabled": false,
+ "mountPath": "/data",
+ "storageClass": null,
+ "storageSize": "10G"
+}
+
+
+
+
+ resultsCache.persistence.enabled
+ bool
+ Enable creating PVCs for the results-cache
+
+false
+
+
+
+
+ resultsCache.persistence.mountPath
+ string
+ Volume mount path
+
+"/data"
+
+
+
+
+ resultsCache.persistence.storageClass
+ string
+ Storage class to be used. If defined, storageClassName: . If set to "-", storageClassName: "", which disables dynamic provisioning. If empty or set to null, no storageClassName spec is set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
+
+null
+
+
+
+
+ resultsCache.persistence.storageSize
+ string
+ Size of persistent disk
+
+"10G"
+
diff --git a/docs/sources/setup/upgrade/_index.md b/docs/sources/setup/upgrade/_index.md
index de3e38a4548b3..1b68d61828973 100644
--- a/docs/sources/setup/upgrade/_index.md
+++ b/docs/sources/setup/upgrade/_index.md
@@ -46,6 +46,36 @@ parameter contains a log selector query instead of returning inconsistent result
Loki changes the default value of `-ruler.alertmanager-use-v2` from `false` to `true`. Alertmanager APIv1 was deprecated in Alertmanager 0.16.0 and is removed as of 0.27.0.
+### Experimental Bloom Filters
+
+{{% admonition type="note" %}}
+Experimental features are subject to rapid change and/or removal, which can introduce breaking changes even between minor version.
+They also don't follow the deprecation lifecycle of regular features.
+{{% /admonition %}}
+
+The bloom compactor component, which builds bloom filter blocks for query acceleration, has been removed in favor of two new components: bloom planner and bloom builder.
+Please consult the [Query Acceleration with Blooms](https://grafana.com/docs/loki//operations/query-acceleration-blooms/) docs for more information.
+
+CLI arguments (and their YAML counterparts) of per-tenant settings that have been removed as part of this change:
+
+* `-bloom-compactor.enable-compaction`
+* `-bloom-compactor.shard-size`
+* `-bloom-compactor.shard-size`
+* `-bloom-compactor.shard-size`
+
+CLI arguments of per-tenant settings that have been moved to a different prefix as part of this change:
+
+* `-bloom-compactor.max-page-size` changed to `-bloom-builder.max-page-size`
+* `-bloom-compactor.max-block-size` changed to `-bloom-builder.max-block-size`
+* `-bloom-compactor.ngram-length` changed to `-bloom-builder.ngram-length`
+* `-bloom-compactor.ngram-skip` changed to `-bloom-builder.ngram-skip`
+* `-bloom-compactor.false-positive-rate` changed to `-bloom-builder.false-positive-rate`
+* `-bloom-compactor.block-encoding` changed to `-bloom-builder.block-encoding`
+
+Their YAML counterparts in the `limits_config` block are kept identical.
+
+All other CLI arguments (and their YAML counterparts) prefixed with `-bloom-compactor.` have been removed.
+
## 3.0.0
{{% admonition type="note" %}}
diff --git a/docs/sources/shared/configuration.md b/docs/sources/shared/configuration.md
index 12aa2eecf5ad4..a563a24198ad1 100644
--- a/docs/sources/shared/configuration.md
+++ b/docs/sources/shared/configuration.md
@@ -188,12 +188,14 @@ ingester_rf1:
# Configuration for a Consul client. Only applies if the selected
# kvstore is consul.
- # The CLI flags prefix for this block configuration is: ingester-rf1
+ # The CLI flags prefix for this block configuration is:
+ # ingester-rf1.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected
# kvstore is etcd.
- # The CLI flags prefix for this block configuration is: ingester-rf1
+ # The CLI flags prefix for this block configuration is:
+ # ingester-rf1.etcd
[etcd: ]
multi:
@@ -444,12 +446,14 @@ pattern_ingester:
# Configuration for a Consul client. Only applies if the selected
# kvstore is consul.
- # The CLI flags prefix for this block configuration is: pattern-ingester
+ # The CLI flags prefix for this block configuration is:
+ # pattern-ingester.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected
# kvstore is etcd.
- # The CLI flags prefix for this block configuration is: pattern-ingester
+ # The CLI flags prefix for this block configuration is:
+ # pattern-ingester.etcd
[etcd: ]
multi:
@@ -812,73 +816,18 @@ pattern_ingester:
# CLI flag: -pattern-ingester.connection-timeout
[connection_timeout: | default = 2s]
+ # The maximum length of log lines that can be used for pattern detection.
+ # CLI flag: -pattern-ingester.max-allowed-line-length
+ [max_allowed_line_length: | default = 3000]
+
# The index_gateway block configures the Loki index gateway server, responsible
# for serving index queries without the need to constantly interact with the
# object store.
[index_gateway: ]
-# Experimental: The bloom_compactor block configures the Loki bloom compactor
-# server, responsible for compacting stream indexes into bloom filters and
-# merging them as bloom blocks.
-[bloom_compactor: ]
-
-bloom_build:
- # Flag to enable or disable the usage of the bloom-planner and bloom-builder
- # components.
- # CLI flag: -bloom-build.enabled
- [enabled: | default = false]
-
- planner:
- # Interval at which to re-run the bloom creation planning.
- # CLI flag: -bloom-build.planner.interval
- [planning_interval: | default = 8h]
-
- # Newest day-table offset (from today, inclusive) to build blooms for.
- # Increase to lower cost by not re-writing data to object storage too
- # frequently since recent data changes more often at the cost of not having
- # blooms available as quickly.
- # CLI flag: -bloom-build.planner.min-table-offset
- [min_table_offset: | default = 1]
-
- # Oldest day-table offset (from today, inclusive) to compact. This can be
- # used to lower cost by not trying to compact older data which doesn't
- # change. This can be optimized by aligning it with the maximum
- # `reject_old_samples_max_age` setting of any tenant.
- # CLI flag: -bloom-build.planner.max-table-offset
- [max_table_offset: | default = 2]
-
- # Maximum number of tasks to queue per tenant.
- # CLI flag: -bloom-build.planner.max-tasks-per-tenant
- [max_queued_tasks_per_tenant: | default = 30000]
-
- retention:
- # Enable bloom retention.
- # CLI flag: -bloom-build.planner.retention.enabled
- [enabled: | default = false]
-
- builder:
- # The grpc_client block configures the gRPC client used to communicate
- # between a client and server component in Loki.
- # The CLI flags prefix for this block configuration is:
- # bloom-gateway-client.grpc
- [grpc_config: ]
-
- # Hostname (and port) of the bloom planner
- # CLI flag: -bloom-build.builder.planner-address
- [planner_address: | default = ""]
-
- backoff_config:
- # Minimum delay when backing off.
- # CLI flag: -bloom-build.builder.backoff.backoff-min-period
- [min_period: | default = 100ms]
-
- # Maximum delay when backing off.
- # CLI flag: -bloom-build.builder.backoff.backoff-max-period
- [max_period: | default = 10s]
-
- # Number of times to backoff and retry before failing.
- # CLI flag: -bloom-build.builder.backoff.backoff-retries
- [max_retries: | default = 10]
+# Experimental: The bloom_build block configures the Loki bloom planner and
+# builder servers, responsible for building bloom filters.
+[bloom_build: ]
# Experimental: The bloom_gateway block configures the Loki bloom gateway
# server, responsible for serving queries for filtering chunks based on filter
@@ -1082,6 +1031,278 @@ metastore_client:
# Configures the gRPC client used to communicate with the metastore.
[grpc_client_config: ]
+kafka_config:
+ # The Kafka backend address.
+ # CLI flag: -kafka.address
+ [address: | default = "localhost:9092"]
+
+ # The Kafka topic name.
+ # CLI flag: -kafka.topic
+ [topic: | default = ""]
+
+ # The Kafka client ID.
+ # CLI flag: -kafka.client-id
+ [client_id: | default = ""]
+
+ # The maximum time allowed to open a connection to a Kafka broker.
+ # CLI flag: -kafka.dial-timeout
+ [dial_timeout: | default = 2s]
+
+ # How long to wait for an incoming write request to be successfully committed
+ # to the Kafka backend.
+ # CLI flag: -kafka.write-timeout
+ [write_timeout: | default = 10s]
+
+ # The consumer group used by the consumer to track the last consumed offset.
+ # The consumer group must be different for each ingester. If the configured
+ # consumer group contains the '' placeholder, it is replaced with
+ # the actual partition ID owned by the ingester. When empty (recommended),
+ # Mimir uses the ingester instance ID to guarantee uniqueness.
+ # CLI flag: -kafka.consumer-group
+ [consumer_group: | default = ""]
+
+ # How long to retry a failed request to get the last produced offset.
+ # CLI flag: -kafka.last-produced-offset-retry-timeout
+ [last_produced_offset_retry_timeout: | default = 10s]
+
+ # Enable auto-creation of Kafka topic if it doesn't exist.
+ # CLI flag: -kafka.auto-create-topic-enabled
+ [auto_create_topic_enabled: | default = true]
+
+ # The maximum size of a Kafka record data that should be generated by the
+ # producer. An incoming write request larger than this size is split into
+ # multiple Kafka records. We strongly recommend to not change this setting
+ # unless for testing purposes.
+ # CLI flag: -kafka.producer-max-record-size-bytes
+ [producer_max_record_size_bytes: | default = 15983616]
+
+ # The maximum size of (uncompressed) buffered and unacknowledged produced
+ # records sent to Kafka. The produce request fails once this limit is reached.
+ # This limit is per Kafka client. 0 to disable the limit.
+ # CLI flag: -kafka.producer-max-buffered-bytes
+ [producer_max_buffered_bytes: | default = 1073741824]
+
+kafka_ingester:
+ # Whether the kafka ingester is enabled.
+ # CLI flag: -kafka-ingester.enabled
+ [enabled: | default = false]
+
+ # Configures how the lifecycle of the ingester will operate and where it will
+ # register for discovery.
+ lifecycler:
+ ring:
+ kvstore:
+ # Backend storage to use for the ring. Supported values are: consul,
+ # etcd, inmemory, memberlist, multi.
+ # CLI flag: -kafka-ingesterstore
+ [store: | default = "consul"]
+
+ # The prefix for the keys in the store. Should end with a /.
+ # CLI flag: -kafka-ingesterprefix
+ [prefix: | default = "collectors/"]
+
+ # Configuration for a Consul client. Only applies if the selected
+ # kvstore is consul.
+ # The CLI flags prefix for this block configuration is:
+ # kafka-ingesterconsul
+ [consul: ]
+
+ # Configuration for an ETCD v3 client. Only applies if the selected
+ # kvstore is etcd.
+ # The CLI flags prefix for this block configuration is:
+ # kafka-ingesteretcd
+ [etcd: ]
+
+ multi:
+ # Primary backend storage used by multi-client.
+ # CLI flag: -kafka-ingestermulti.primary
+ [primary: | default = ""]
+
+ # Secondary backend storage used by multi-client.
+ # CLI flag: -kafka-ingestermulti.secondary
+ [secondary: | default = ""]
+
+ # Mirror writes to secondary store.
+ # CLI flag: -kafka-ingestermulti.mirror-enabled
+ [mirror_enabled: | default = false]
+
+ # Timeout for storing value to secondary store.
+ # CLI flag: -kafka-ingestermulti.mirror-timeout
+ [mirror_timeout: | default = 2s]
+
+ # The heartbeat timeout after which ingesters are skipped for
+ # reads/writes. 0 = never (timeout disabled).
+ # CLI flag: -kafka-ingesterring.heartbeat-timeout
+ [heartbeat_timeout: | default = 1m]
+
+ # The number of ingesters to write to and read from.
+ # CLI flag: -kafka-ingesterdistributor.replication-factor
+ [replication_factor: | default = 3]
+
+ # True to enable the zone-awareness and replicate ingested samples across
+ # different availability zones.
+ # CLI flag: -kafka-ingesterdistributor.zone-awareness-enabled
+ [zone_awareness_enabled: | default = false]
+
+ # Comma-separated list of zones to exclude from the ring. Instances in
+ # excluded zones will be filtered out from the ring.
+ # CLI flag: -kafka-ingesterdistributor.excluded-zones
+ [excluded_zones: | default = ""]
+
+ # Number of tokens for each ingester.
+ # CLI flag: -kafka-ingesternum-tokens
+ [num_tokens: | default = 128]
+
+ # Period at which to heartbeat to consul. 0 = disabled.
+ # CLI flag: -kafka-ingesterheartbeat-period
+ [heartbeat_period: | default = 5s]
+
+ # Heartbeat timeout after which instance is assumed to be unhealthy. 0 =
+ # disabled.
+ # CLI flag: -kafka-ingesterheartbeat-timeout
+ [heartbeat_timeout: | default = 1m]
+
+ # Observe tokens after generating to resolve collisions. Useful when using
+ # gossiping ring.
+ # CLI flag: -kafka-ingesterobserve-period
+ [observe_period: | default = 0s]
+
+ # Period to wait for a claim from another member; will join automatically
+ # after this.
+ # CLI flag: -kafka-ingesterjoin-after
+ [join_after: | default = 0s]
+
+ # Minimum duration to wait after the internal readiness checks have passed
+ # but before succeeding the readiness endpoint. This is used to slowdown
+ # deployment controllers (eg. Kubernetes) after an instance is ready and
+ # before they proceed with a rolling update, to give the rest of the cluster
+ # instances enough time to receive ring updates.
+ # CLI flag: -kafka-ingestermin-ready-duration
+ [min_ready_duration: | default = 15s]
+
+ # Name of network interface to read address from.
+ # CLI flag: -kafka-ingesterlifecycler.interface
+ [interface_names: | default = []]
+
+ # Enable IPv6 support. Required to make use of IP addresses from IPv6
+ # interfaces.
+ # CLI flag: -kafka-ingesterenable-inet6
+ [enable_inet6: | default = false]
+
+ # Duration to sleep for before exiting, to ensure metrics are scraped.
+ # CLI flag: -kafka-ingesterfinal-sleep
+ [final_sleep: | default = 0s]
+
+ # File path where tokens are stored. If empty, tokens are not stored at
+ # shutdown and restored at startup.
+ # CLI flag: -kafka-ingestertokens-file-path
+ [tokens_file_path: | default = ""]
+
+ # The availability zone where this instance is running.
+ # CLI flag: -kafka-ingesteravailability-zone
+ [availability_zone: | default = ""]
+
+ # Unregister from the ring upon clean shutdown. It can be useful to disable
+ # for rolling restarts with consistent naming in conjunction with
+ # -distributor.extend-writes=false.
+ # CLI flag: -kafka-ingesterunregister-on-shutdown
+ [unregister_on_shutdown: | default = true]
+
+ # When enabled the readiness probe succeeds only after all instances are
+ # ACTIVE and healthy in the ring, otherwise only the instance itself is
+ # checked. This option should be disabled if in your cluster multiple
+ # instances can be rolled out simultaneously, otherwise rolling updates may
+ # be slowed down.
+ # CLI flag: -kafka-ingesterreadiness-check-ring-health
+ [readiness_check_ring_health: | default = true]
+
+ # IP address to advertise in the ring.
+ # CLI flag: -kafka-ingesterlifecycler.addr
+ [address: | default = ""]
+
+ # port to advertise in consul (defaults to server.grpc-listen-port).
+ # CLI flag: -kafka-ingesterlifecycler.port
+ [port: | default = 0]
+
+ # ID to register in the ring.
+ # CLI flag: -kafka-ingesterlifecycler.ID
+ [id: | default = ""]
+
+ # Path where the shutdown marker file is stored. If not set and
+ # common.path_prefix is set then common.path_prefix will be used.
+ # CLI flag: -kafka-ingester.shutdown-marker-path
+ [shutdown_marker_path: | default = ""]
+
+ # The interval at which the ingester will flush and commit offsets to Kafka.
+ # If not set, the default flush interval will be used.
+ # CLI flag: -kafka-ingester.flush-interval
+ [flush_interval: | default = 15s]
+
+ # The size at which the ingester will flush and commit offsets to Kafka. If
+ # not set, the default flush size will be used.
+ # CLI flag: -kafka-ingester.flush-size
+ [flush_size: | default = 314572800]
+
+ partition_ring:
+ # The key-value store used to share the hash ring across multiple instances.
+ # This option needs be set on ingesters, distributors, queriers, and rulers
+ # when running in microservices mode.
+ kvstore:
+ # Backend storage to use for the ring. Supported values are: consul, etcd,
+ # inmemory, memberlist, multi.
+ # CLI flag: -ingester.partition-ring.store
+ [store: | default = "memberlist"]
+
+ # The prefix for the keys in the store. Should end with a /.
+ # CLI flag: -ingester.partition-ring.prefix
+ [prefix: | default = "collectors/"]
+
+ # Configuration for a Consul client. Only applies if the selected kvstore
+ # is consul.
+ # The CLI flags prefix for this block configuration is:
+ # ingester.partition-ring.consul
+ [consul: ]
+
+ # Configuration for an ETCD v3 client. Only applies if the selected
+ # kvstore is etcd.
+ # The CLI flags prefix for this block configuration is:
+ # ingester.partition-ring.etcd
+ [etcd: ]
+
+ multi:
+ # Primary backend storage used by multi-client.
+ # CLI flag: -ingester.partition-ring.multi.primary
+ [primary: | default = ""]
+
+ # Secondary backend storage used by multi-client.
+ # CLI flag: -ingester.partition-ring.multi.secondary
+ [secondary: | default = ""]
+
+ # Mirror writes to secondary store.
+ # CLI flag: -ingester.partition-ring.multi.mirror-enabled
+ [mirror_enabled: | default = false]
+
+ # Timeout for storing value to secondary store.
+ # CLI flag: -ingester.partition-ring.multi.mirror-timeout
+ [mirror_timeout: | default = 2s]
+
+ # Minimum number of owners to wait before a PENDING partition gets switched
+ # to ACTIVE.
+ # CLI flag: -ingester.partition-ring.min-partition-owners-count
+ [min_partition_owners_count: | default = 1]
+
+ # How long the minimum number of owners are enforced before a PENDING
+ # partition gets switched to ACTIVE.
+ # CLI flag: -ingester.partition-ring.min-partition-owners-duration
+ [min_partition_owners_duration: | default = 10s]
+
+ # How long to wait before an INACTIVE partition is eligible for deletion.
+ # The partition is deleted only if it has been in INACTIVE state for at
+ # least the configured duration and it has no owners registered. A value of
+ # 0 disables partitions deletion.
+ # CLI flag: -ingester.partition-ring.delete-inactive-partition-after
+ [delete_inactive_partition_after: | default = 13h]
+
# Configuration for 'runtime config' module, responsible for reloading runtime
# configuration file.
[runtime_config: ]
@@ -1356,7 +1577,7 @@ backoff_config:
# CLI flag: -s3.max-backoff
[max_period: | default = 3s]
- # Maximum number of times to retry when s3 get Object
+ # Maximum number of times to retry for s3 GetObject or ObjectExists
# CLI flag: -s3.max-retries
[max_retries: | default = 5]
@@ -1466,152 +1687,67 @@ The `azure_storage_config` block configures the connection to Azure object stora
[max_retry_delay: | default = 500ms]
```
-### bloom_compactor
+### bloom_build
-Experimental: The `bloom_compactor` block configures the Loki bloom compactor server, responsible for compacting stream indexes into bloom filters and merging them as bloom blocks.
+Experimental: The `bloom_build` block configures the Loki bloom planner and builder servers, responsible for building bloom filters.
```yaml
-# Defines the ring to be used by the bloom-compactor servers. In case this isn't
-# configured, this block supports inheriting configuration from the common ring
-# section.
-ring:
- kvstore:
- # Backend storage to use for the ring. Supported values are: consul, etcd,
- # inmemory, memberlist, multi.
- # CLI flag: -bloom-compactor.ring.store
- [store: | default = "consul"]
-
- # The prefix for the keys in the store. Should end with a /.
- # CLI flag: -bloom-compactor.ring.prefix
- [prefix: | default = "collectors/"]
-
- # Configuration for a Consul client. Only applies if the selected kvstore is
- # consul.
- # The CLI flags prefix for this block configuration is: bloom-compactor.ring
- [consul: ]
-
- # Configuration for an ETCD v3 client. Only applies if the selected kvstore
- # is etcd.
- # The CLI flags prefix for this block configuration is: bloom-compactor.ring
- [etcd: ]
-
- multi:
- # Primary backend storage used by multi-client.
- # CLI flag: -bloom-compactor.ring.multi.primary
- [primary: | default = ""]
-
- # Secondary backend storage used by multi-client.
- # CLI flag: -bloom-compactor.ring.multi.secondary
- [secondary: | default = ""]
-
- # Mirror writes to secondary store.
- # CLI flag: -bloom-compactor.ring.multi.mirror-enabled
- [mirror_enabled: | default = false]
-
- # Timeout for storing value to secondary store.
- # CLI flag: -bloom-compactor.ring.multi.mirror-timeout
- [mirror_timeout: | default = 2s]
-
- # Period at which to heartbeat to the ring. 0 = disabled.
- # CLI flag: -bloom-compactor.ring.heartbeat-period
- [heartbeat_period: | default = 15s]
-
- # The heartbeat timeout after which compactors are considered unhealthy within
- # the ring. 0 = never (timeout disabled).
- # CLI flag: -bloom-compactor.ring.heartbeat-timeout
- [heartbeat_timeout: | default = 1m]
-
- # File path where tokens are stored. If empty, tokens are not stored at
- # shutdown and restored at startup.
- # CLI flag: -bloom-compactor.ring.tokens-file-path
- [tokens_file_path: | default = ""]
-
- # True to enable zone-awareness and replicate blocks across different
- # availability zones.
- # CLI flag: -bloom-compactor.ring.zone-awareness-enabled
- [zone_awareness_enabled: | default = false]
-
- # Number of tokens to use in the ring per compactor. Higher number of tokens
- # will result in more and smaller files (metas and blocks.)
- # CLI flag: -bloom-compactor.ring.num-tokens
- [num_tokens: | default = 10]
-
- # Instance ID to register in the ring.
- # CLI flag: -bloom-compactor.ring.instance-id
- [instance_id: | default = ""]
-
- # Name of network interface to read address from.
- # CLI flag: -bloom-compactor.ring.instance-interface-names
- [instance_interface_names: | default = []]
-
- # Port to advertise in the ring (defaults to server.grpc-listen-port).
- # CLI flag: -bloom-compactor.ring.instance-port
- [instance_port: | default = 0]
-
- # IP address to advertise in the ring.
- # CLI flag: -bloom-compactor.ring.instance-addr
- [instance_addr: | default = ""]
-
- # The availability zone where this instance is running. Required if
- # zone-awareness is enabled.
- # CLI flag: -bloom-compactor.ring.instance-availability-zone
- [instance_availability_zone: | default = ""]
-
- # Enable using a IPv6 instance address.
- # CLI flag: -bloom-compactor.ring.instance-enable-ipv6
- [instance_enable_ipv6: | default = false]
-
-# Flag to enable or disable the usage of the bloom-compactor component.
-# CLI flag: -bloom-compactor.enabled
+# Flag to enable or disable the usage of the bloom-planner and bloom-builder
+# components.
+# CLI flag: -bloom-build.enabled
[enabled: | default = false]
-# Interval at which to re-run the compaction operation.
-# CLI flag: -bloom-compactor.compaction-interval
-[compaction_interval: | default = 10m]
-
-# Newest day-table offset (from today, inclusive) to compact. Increase to lower
-# cost by not re-writing data to object storage too frequently since recent data
-# changes more often at the cost of not having blooms available as quickly.
-# CLI flag: -bloom-compactor.min-table-offset
-[min_table_offset: | default = 1]
-
-# Oldest day-table offset (from today, inclusive) to compact. This can be used
-# to lower cost by not trying to compact older data which doesn't change. This
-# can be optimized by aligning it with the maximum `reject_old_samples_max_age`
-# setting of any tenant.
-# CLI flag: -bloom-compactor.max-table-offset
-[max_table_offset: | default = 2]
-
-# Number of workers to run in parallel for compaction.
-# CLI flag: -bloom-compactor.worker-parallelism
-[worker_parallelism: | default = 1]
-
-# Minimum backoff time between retries.
-# CLI flag: -bloom-compactor.compaction-retries-min-backoff
-[compaction_retries_min_backoff: | default = 10s]
+planner:
+ # Interval at which to re-run the bloom creation planning.
+ # CLI flag: -bloom-build.planner.interval
+ [planning_interval: | default = 8h]
+
+ # Newest day-table offset (from today, inclusive) to build blooms for.
+ # Increase to lower cost by not re-writing data to object storage too
+ # frequently since recent data changes more often at the cost of not having
+ # blooms available as quickly.
+ # CLI flag: -bloom-build.planner.min-table-offset
+ [min_table_offset: | default = 1]
+
+ # Oldest day-table offset (from today, inclusive) to compact. This can be used
+ # to lower cost by not trying to compact older data which doesn't change. This
+ # can be optimized by aligning it with the maximum
+ # `reject_old_samples_max_age` setting of any tenant.
+ # CLI flag: -bloom-build.planner.max-table-offset
+ [max_table_offset: | default = 2]
+
+ # Maximum number of tasks to queue per tenant.
+ # CLI flag: -bloom-build.planner.max-tasks-per-tenant
+ [max_queued_tasks_per_tenant: | default = 30000]
+
+ retention:
+ # Enable bloom retention.
+ # CLI flag: -bloom-build.planner.retention.enabled
+ [enabled: | default = false]
-# Maximum backoff time between retries.
-# CLI flag: -bloom-compactor.compaction-retries-max-backoff
-[compaction_retries_max_backoff: | default = 1m]
+builder:
+ # The grpc_client block configures the gRPC client used to communicate between
+ # a client and server component in Loki.
+ # The CLI flags prefix for this block configuration is:
+ # bloom-gateway-client.grpc
+ [grpc_config: ]
-# Number of retries to perform when compaction fails.
-# CLI flag: -bloom-compactor.compaction-retries
-[compaction_retries: | default = 3]
+ # Hostname (and port) of the bloom planner
+ # CLI flag: -bloom-build.builder.planner-address
+ [planner_address: | default = ""]
-# Maximum number of tables to compact in parallel. While increasing this value,
-# please make sure compactor has enough disk space allocated to be able to store
-# and compact as many tables.
-# CLI flag: -bloom-compactor.max-compaction-parallelism
-[max_compaction_parallelism: | default = 1]
+ backoff_config:
+ # Minimum delay when backing off.
+ # CLI flag: -bloom-build.builder.backoff.backoff-min-period
+ [min_period: | default = 100ms]
-retention:
- # Enable bloom retention.
- # CLI flag: -bloom-compactor.retention.enabled
- [enabled: | default = false]
+ # Maximum delay when backing off.
+ # CLI flag: -bloom-build.builder.backoff.backoff-max-period
+ [max_period: | default = 10s]
- # Max lookback days for retention.
- # CLI flag: -bloom-compactor.retention.max-lookback-days
- [max_lookback_days: | default = 365]
+ # Number of times to backoff and retry before failing.
+ # CLI flag: -bloom-build.builder.backoff.backoff-retries
+ [max_retries: | default = 10]
```
### bloom_gateway
@@ -2119,12 +2255,14 @@ ring:
# Configuration for a Consul client. Only applies if the selected kvstore is
# consul.
- # The CLI flags prefix for this block configuration is: common.storage.ring
+ # The CLI flags prefix for this block configuration is:
+ # common.storage.ring.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected kvstore
# is etcd.
- # The CLI flags prefix for this block configuration is: common.storage.ring
+ # The CLI flags prefix for this block configuration is:
+ # common.storage.ring.etcd
[etcd: ]
multi:
@@ -2298,12 +2436,13 @@ compactor_ring:
# Configuration for a Consul client. Only applies if the selected kvstore is
# consul.
- # The CLI flags prefix for this block configuration is: compactor.ring
+ # The CLI flags prefix for this block configuration is:
+ # compactor.ring.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected kvstore
# is etcd.
- # The CLI flags prefix for this block configuration is: compactor.ring
+ # The CLI flags prefix for this block configuration is: compactor.ring.etcd
[etcd: ]
multi:
@@ -2382,46 +2521,48 @@ compactor_ring:
Configuration for a Consul client. Only applies if the selected kvstore is `consul`. The supported CLI flags `` used to reference this configuration block are:
-- `bloom-compactor.ring`
-- `common.storage.ring`
-- `compactor.ring`
-- `distributor.ring`
-- `index-gateway.ring`
-- `ingester-rf1`
-- `pattern-ingester`
-- `query-scheduler.ring`
-- `ruler.ring`
+- `common.storage.ring.consul`
+- `compactor.ring.consul`
+- `consul`
+- `distributor.ring.consul`
+- `index-gateway.ring.consul`
+- `ingester-rf1.consul`
+- `ingester.partition-ring.consul`
+- `kafka-ingesterconsul`
+- `pattern-ingester.consul`
+- `query-scheduler.ring.consul`
+- `ruler.ring.consul`
```yaml
# Hostname and port of Consul.
-# CLI flag: -.consul.hostname
+# CLI flag: -.hostname
[host: | default = "localhost:8500"]
# ACL Token used to interact with Consul.
-# CLI flag: -.consul.acl-token
+# CLI flag: -.acl-token
[acl_token: | default = ""]
# HTTP timeout when talking to Consul
-# CLI flag: -.consul.client-timeout
+# CLI flag: -.client-timeout
[http_client_timeout: | default = 20s]
# Enable consistent reads to Consul.
-# CLI flag: -.consul.consistent-reads
+# CLI flag: -.consistent-reads
[consistent_reads: | default = false]
# Rate limit when watching key or prefix in Consul, in requests per second. 0
# disables the rate limit.
-# CLI flag: -.consul.watch-rate-limit
+# CLI flag: -.watch-rate-limit
[watch_rate_limit: | default = 1]
# Burst size used in rate limit. Values less than 1 are treated as 1.
-# CLI flag: -.consul.watch-burst-size
+# CLI flag: -.watch-burst-size
[watch_burst_size: | default = 1]
# Maximum duration to wait before retrying a Compare And Swap (CAS) operation.
-# CLI flag: -.consul.cas-retry-delay
+# CLI flag: -.cas-retry-delay
[cas_retry_delay: | default = 1s]
```
@@ -2526,12 +2667,14 @@ ring:
# Configuration for a Consul client. Only applies if the selected kvstore is
# consul.
- # The CLI flags prefix for this block configuration is: distributor.ring
+ # The CLI flags prefix for this block configuration is:
+ # distributor.ring.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected kvstore
# is etcd.
- # The CLI flags prefix for this block configuration is: distributor.ring
+ # The CLI flags prefix for this block configuration is:
+ # distributor.ring.etcd
[etcd: ]
multi:
@@ -2603,56 +2746,58 @@ otlp_config:
Configuration for an ETCD v3 client. Only applies if the selected kvstore is `etcd`. The supported CLI flags `` used to reference this configuration block are:
-- `bloom-compactor.ring`
-- `common.storage.ring`
-- `compactor.ring`
-- `distributor.ring`
-- `index-gateway.ring`
-- `ingester-rf1`
-- `pattern-ingester`
-- `query-scheduler.ring`
-- `ruler.ring`
+- `common.storage.ring.etcd`
+- `compactor.ring.etcd`
+- `distributor.ring.etcd`
+- `etcd`
+- `index-gateway.ring.etcd`
+- `ingester-rf1.etcd`
+- `ingester.partition-ring.etcd`
+- `kafka-ingesteretcd`
+- `pattern-ingester.etcd`
+- `query-scheduler.ring.etcd`
+- `ruler.ring.etcd`
```yaml
# The etcd endpoints to connect to.
-# CLI flag: -.etcd.endpoints
+# CLI flag: -.endpoints
[endpoints: | default = []]
# The dial timeout for the etcd connection.
-# CLI flag: -.etcd.dial-timeout
+# CLI flag: -.dial-timeout
[dial_timeout: | default = 10s]
# The maximum number of retries to do for failed ops.
-# CLI flag: -.etcd.max-retries
+# CLI flag: -.max-retries
[max_retries: | default = 10]
# Enable TLS.
-# CLI flag: -.etcd.tls-enabled
+# CLI flag: -.tls-enabled
[tls_enabled: | default = false]
# Path to the client certificate, which will be used for authenticating with the
# server. Also requires the key path to be configured.
-# CLI flag: -.etcd.tls-cert-path
+# CLI flag: -.tls-cert-path
[tls_cert_path: | default = ""]
# Path to the key for the client certificate. Also requires the client
# certificate to be configured.
-# CLI flag: -.etcd.tls-key-path
+# CLI flag: -.tls-key-path
[tls_key_path: | default = ""]
# Path to the CA certificates to validate server certificate against. If not
# set, the host's root CA certificates are used.
-# CLI flag: -.etcd.tls-ca-path
+# CLI flag: -.tls-ca-path
[tls_ca_path: | default = ""]
# Override the expected name on the server certificate.
-# CLI flag: -.etcd.tls-server-name
+# CLI flag: -.tls-server-name
[tls_server_name: | default = ""]
# Skip validating server certificate.
-# CLI flag: -.etcd.tls-insecure-skip-verify
+# CLI flag: -.tls-insecure-skip-verify
[tls_insecure_skip_verify: | default = false]
# Override the default cipher suite list (separated by commas). Allowed values:
@@ -2685,20 +2830,20 @@ Configuration for an ETCD v3 client. Only applies if the selected kvstore is `et
# - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-# CLI flag: -.etcd.tls-cipher-suites
+# CLI flag: -.tls-cipher-suites
[tls_cipher_suites: | default = ""]
# Override the default minimum TLS version. Allowed values: VersionTLS10,
# VersionTLS11, VersionTLS12, VersionTLS13
-# CLI flag: -.etcd.tls-min-version
+# CLI flag: -.tls-min-version
[tls_min_version: | default = ""]
# Etcd username.
-# CLI flag: -.etcd.username
+# CLI flag: -.username
[username: | default = ""]
# Etcd password.
-# CLI flag: -.etcd.password
+# CLI flag: -.password
[password: | default = ""]
```
@@ -3075,12 +3220,14 @@ ring:
# Configuration for a Consul client. Only applies if the selected kvstore is
# consul.
- # The CLI flags prefix for this block configuration is: index-gateway.ring
+ # The CLI flags prefix for this block configuration is:
+ # index-gateway.ring.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected kvstore
# is etcd.
- # The CLI flags prefix for this block configuration is: index-gateway.ring
+ # The CLI flags prefix for this block configuration is:
+ # index-gateway.ring.etcd
[etcd: ]
multi:
@@ -3172,10 +3319,12 @@ lifecycler:
# Configuration for a Consul client. Only applies if the selected kvstore
# is consul.
+ # The CLI flags prefix for this block configuration is: consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected
# kvstore is etcd.
+ # The CLI flags prefix for this block configuration is: etcd
[etcd: ]
multi:
@@ -3540,7 +3689,7 @@ The `limits_config` block configures global and per-tenant limits in Loki. The v
# list to service_name. If none of the configured labels exist in the stream,
# label is set to unknown_service. Empty list disables setting the label.
# CLI flag: -validation.discover-service-name
-[discover_service_name: | default = [service app application name app_kubernetes_io_name container container_name component workload job]]
+[discover_service_name: | default = [service app application name app_kubernetes_io_name container container_name k8s_container_name component workload job k8s_job_name]]
# Discover and add log levels during ingestion, if not present already. Levels
# would be added to Structured Metadata with name
@@ -3959,27 +4108,22 @@ shard_streams:
# CLI flag: -bloom-gateway.cache-key-interval
[bloom_gateway_cache_key_interval: | default = 15m]
-# Experimental. The shard size defines how many bloom compactors should be used
-# by a tenant when computing blooms. If it's set to 0, shuffle sharding is
-# disabled.
-# CLI flag: -bloom-compactor.shard-size
-[bloom_compactor_shard_size: | default = 0]
-
-# Experimental. Whether to compact chunks into bloom filters.
-# CLI flag: -bloom-compactor.enable-compaction
-[bloom_compactor_enable_compaction: | default = false]
+# Experimental. Maximum number of builders to use when building blooms. 0 allows
+# unlimited builders.
+# CLI flag: -bloom-build.max-builders
+[bloom_build_max_builders: | default = 0]
-# Experimental. The maximum bloom block size. A value of 0 sets an unlimited
-# size. Default is 200MB. The actual block size might exceed this limit since
-# blooms will be added to blocks until the block exceeds the maximum block size.
-# CLI flag: -bloom-compactor.max-block-size
-[bloom_compactor_max_block_size: | default = 200MB]
+# Experimental. Maximum number of retries for a failed task. If a task fails
+# more than this number of times, it is considered failed and will not be
+# retried. A value of 0 disables this limit.
+# CLI flag: -bloom-build.task-max-retries
+[bloom_build_task_max_retries: | default = 3]
-# Experimental. The maximum bloom size per log stream. A log stream whose
-# generated bloom filter exceeds this size will be discarded. A value of 0 sets
-# an unlimited size. Default is 128MB.
-# CLI flag: -bloom-compactor.max-bloom-size
-[bloom_compactor_max_bloom_size: | default = 128MB]
+# Experimental. Timeout for a builder to finish a task. If a builder does not
+# respond within this time, it is considered failed and the task will be
+# requeued. 0 disables the timeout.
+# CLI flag: -bloom-build.builder-response-timeout
+[bloom_build_builder_response_timeout: | default = 0s]
# Experimental. Whether to create blooms for the tenant.
# CLI flag: -bloom-build.enable
@@ -3991,41 +4135,36 @@ shard_streams:
# CLI flag: -bloom-build.split-keyspace-by
[bloom_split_series_keyspace_by: | default = 256]
-# Experimental. Maximum number of builders to use when building blooms. 0 allows
-# unlimited builders.
-# CLI flag: -bloom-build.max-builders
-[bloom_build_max_builders: | default = 0]
-
-# Experimental. Timeout for a builder to finish a task. If a builder does not
-# respond within this time, it is considered failed and the task will be
-# requeued. 0 disables the timeout.
-# CLI flag: -bloom-build.builder-response-timeout
-[bloom_build_builder_response_timeout: | default = 0s]
-
-# Experimental. Maximum number of retries for a failed task. If a task fails
-# more than this number of times, it is considered failed and will not be
-# retried. A value of 0 disables this limit.
-# CLI flag: -bloom-build.task-max-retries
-[bloom_build_task_max_retries: | default = 3]
-
# Experimental. Length of the n-grams created when computing blooms from log
# lines.
-# CLI flag: -bloom-compactor.ngram-length
+# CLI flag: -bloom-build.ngram-length
[bloom_ngram_length: | default = 4]
# Experimental. Skip factor for the n-grams created when computing blooms from
# log lines.
-# CLI flag: -bloom-compactor.ngram-skip
+# CLI flag: -bloom-build.ngram-skip
[bloom_ngram_skip: | default = 1]
# Experimental. Scalable Bloom Filter desired false-positive rate.
-# CLI flag: -bloom-compactor.false-positive-rate
+# CLI flag: -bloom-build.false-positive-rate
[bloom_false_positive_rate: | default = 0.01]
# Experimental. Compression algorithm for bloom block pages.
-# CLI flag: -bloom-compactor.block-encoding
+# CLI flag: -bloom-build.block-encoding
[bloom_block_encoding: | default = "none"]
+# Experimental. The maximum bloom block size. A value of 0 sets an unlimited
+# size. Default is 200MB. The actual block size might exceed this limit since
+# blooms will be added to blocks until the block exceeds the maximum block size.
+# CLI flag: -bloom-build.max-block-size
+[bloom_max_block_size: | default = 200MB]
+
+# Experimental. The maximum bloom size per log stream. A log stream whose
+# generated bloom filter exceeds this size will be discarded. A value of 0 sets
+# an unlimited size. Default is 128MB.
+# CLI flag: -bloom-build.max-bloom-size
+[bloom_max_bloom_size: | default = 128MB]
+
# Allow user to send structured metadata in push payload.
# CLI flag: -validation.allow-structured-metadata
[allow_structured_metadata: | default = true]
@@ -4660,12 +4799,14 @@ scheduler_ring:
# Configuration for a Consul client. Only applies if the selected kvstore is
# consul.
- # The CLI flags prefix for this block configuration is: query-scheduler.ring
+ # The CLI flags prefix for this block configuration is:
+ # query-scheduler.ring.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected kvstore
# is etcd.
- # The CLI flags prefix for this block configuration is: query-scheduler.ring
+ # The CLI flags prefix for this block configuration is:
+ # query-scheduler.ring.etcd
[etcd: ]
multi:
@@ -4966,12 +5107,12 @@ ring:
# Configuration for a Consul client. Only applies if the selected kvstore is
# consul.
- # The CLI flags prefix for this block configuration is: ruler.ring
+ # The CLI flags prefix for this block configuration is: ruler.ring.consul
[consul: ]
# Configuration for an ETCD v3 client. Only applies if the selected kvstore
# is etcd.
- # The CLI flags prefix for this block configuration is: ruler.ring
+ # The CLI flags prefix for this block configuration is: ruler.ring.etcd
[etcd: ]
multi:
@@ -5298,7 +5439,7 @@ backoff_config:
# CLI flag: -.storage.s3.max-backoff
[max_period: | default = 3s]
- # Maximum number of times to retry when s3 get Object
+ # Maximum number of times to retry for s3 GetObject or ObjectExists
# CLI flag: -.storage.s3.max-retries
[max_retries: | default = 5]
@@ -5517,11 +5658,7 @@ grpc_tls_config:
# CLI flag: -server.grpc.stats-tracking-enabled
[grpc_server_stats_tracking_enabled: | default = true]
-# If true, gGPC's buffer pools will be used to handle incoming requests.
-# Enabling this feature can reduce memory allocation, but also requires
-# disabling GRPC server stats tracking by setting
-# `server.grpc.stats-tracking-enabled=false`. This is an experimental gRPC
-# feature, so it might be removed in a future version of the gRPC library.
+# Deprecated option, has no effect and will be removed in a future version.
# CLI flag: -server.grpc.recv-buffer-pools-enabled
[grpc_server_recv_buffer_pools_enabled: | default = false]
diff --git a/go.mod b/go.mod
index 28e682757255d..9ebf69afe05f7 100644
--- a/go.mod
+++ b/go.mod
@@ -17,7 +17,7 @@ require (
github.com/alicebob/miniredis/v2 v2.30.4
github.com/aliyun/aliyun-oss-go-sdk v2.2.10+incompatible
github.com/aws/aws-sdk-go v1.54.19
- github.com/baidubce/bce-sdk-go v0.9.187
+ github.com/baidubce/bce-sdk-go v0.9.189
github.com/bmatcuk/doublestar v1.3.4
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500
github.com/cespare/xxhash v1.1.0
@@ -34,7 +34,7 @@ require (
github.com/fatih/color v1.16.0
github.com/felixge/fgprof v0.9.4
github.com/fluent/fluent-bit-go v0.0.0-20230731091245-a7a013e2473c
- github.com/fsouza/fake-gcs-server v1.47.7
+ github.com/fsouza/fake-gcs-server v1.7.0
github.com/go-kit/log v0.2.1
github.com/go-logfmt/logfmt v0.6.0
github.com/go-redis/redis/v8 v8.11.5
@@ -49,14 +49,14 @@ require (
github.com/gorilla/mux v1.8.1
github.com/gorilla/websocket v1.5.3
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2
- github.com/grafana/dskit v0.0.0-20240819131358-463219e80ea0
+ github.com/grafana/dskit v0.0.0-20240905221822-931a021fb06b
github.com/grafana/go-gelf/v2 v2.0.1
github.com/grafana/gomemcache v0.0.0-20240229205252-cd6a66d6fb56
github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc
github.com/grafana/tail v0.0.0-20230510142333-77b18831edf0
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645
- github.com/hashicorp/consul/api v1.29.2
+ github.com/hashicorp/consul/api v1.29.4
github.com/hashicorp/golang-lru v0.6.0
github.com/imdario/mergo v0.3.16
github.com/influxdata/telegraf v1.16.3
@@ -67,7 +67,7 @@ require (
github.com/klauspost/pgzip v1.2.6
github.com/leodido/go-syslog/v4 v4.1.0
github.com/mattn/go-ieproxy v0.0.12
- github.com/minio/minio-go/v7 v7.0.75
+ github.com/minio/minio-go/v7 v7.0.76
github.com/mitchellh/go-wordwrap v1.0.1
github.com/mitchellh/mapstructure v1.5.0
github.com/modern-go/reflect2 v1.0.2
@@ -79,7 +79,7 @@ require (
github.com/opentracing/opentracing-go v1.2.0
github.com/oschwald/geoip2-golang v1.11.0
// github.com/pierrec/lz4 v2.0.5+incompatible
- github.com/pierrec/lz4/v4 v4.1.18
+ github.com/pierrec/lz4/v4 v4.1.21
github.com/pkg/errors v0.9.1
github.com/prometheus/client_golang v1.19.1
github.com/prometheus/client_model v0.6.1
@@ -89,7 +89,7 @@ require (
github.com/shurcooL/httpfs v0.0.0-20230704072500-f1e31cf0ba5c
github.com/shurcooL/vfsgen v0.0.0-20200824052919-0d455de96546
github.com/sony/gobreaker v0.5.0
- github.com/spf13/afero v1.10.0
+ github.com/spf13/afero v1.11.0
github.com/stretchr/testify v1.9.0
github.com/uber/jaeger-client-go v2.30.0+incompatible
github.com/xdg-go/scram v1.1.2
@@ -113,20 +113,20 @@ require (
github.com/Azure/go-autorest/autorest v0.11.29
github.com/DataDog/sketches-go v1.4.6
github.com/DmitriyVTitov/size v1.5.0
- github.com/IBM/go-sdk-core/v5 v5.17.4
+ github.com/IBM/go-sdk-core/v5 v5.17.5
github.com/IBM/ibm-cos-sdk-go v1.11.0
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27
github.com/buger/jsonparser v1.1.1
github.com/coder/quartz v0.1.0
github.com/d4l3k/messagediff v1.2.1
github.com/dolthub/swiss v0.2.1
- github.com/efficientgo/core v1.0.0-rc.2
+ github.com/efficientgo/core v1.0.0-rc.3
github.com/fsnotify/fsnotify v1.7.0
github.com/gogo/googleapis v1.4.1
github.com/grafana/jsonparser v0.0.0-20240425183733-ea80629e1a32
github.com/grafana/loki/pkg/push v0.0.0-20231124142027-e52380921608
github.com/hashicorp/golang-lru/v2 v2.0.7
- github.com/hashicorp/raft v1.7.0
+ github.com/hashicorp/raft v1.7.1
github.com/hashicorp/raft-wal v0.4.1
github.com/heroku/x v0.0.61
github.com/influxdata/tdigest v0.0.2-0.20210216194612-fc98d27c9e8b
@@ -137,7 +137,13 @@ require (
github.com/richardartoul/molecule v1.0.0
github.com/schollz/progressbar/v3 v3.14.6
github.com/shirou/gopsutil/v4 v4.24.0-alpha.1
- github.com/thanos-io/objstore v0.0.0-20240722162417-19b0c0f0ffd8
+ github.com/thanos-io/objstore v0.0.0-20240818203309-0363dadfdfb1
+ github.com/twmb/franz-go v1.17.1
+ github.com/twmb/franz-go/pkg/kadm v1.13.0
+ github.com/twmb/franz-go/pkg/kfake v0.0.0-20240821035758-b77dd13e2bfa
+ github.com/twmb/franz-go/pkg/kmsg v1.8.0
+ github.com/twmb/franz-go/plugin/kotel v1.5.0
+ github.com/twmb/franz-go/plugin/kprom v1.1.0
github.com/willf/bloom v2.0.3+incompatible
go.opentelemetry.io/collector/pdata v1.12.0
go4.org/netipx v0.0.0-20230125063823-8449b0a6169f
@@ -147,7 +153,7 @@ require (
google.golang.org/protobuf v1.34.2
gotest.tools v2.2.0+incompatible
k8s.io/apimachinery v0.29.3
- k8s.io/utils v0.0.0-20230726121419-3b25d923346b
+ k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3
)
require (
@@ -164,14 +170,12 @@ require (
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/goccy/go-json v0.10.3 // indirect
- github.com/gorilla/handlers v1.5.2 // indirect
- github.com/hashicorp/go-msgpack/v2 v2.1.1 // indirect
+ github.com/hashicorp/go-msgpack/v2 v2.1.2 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/ncw/swift v1.0.53 // indirect
github.com/pires/go-proxyproto v0.7.0 // indirect
- github.com/pkg/xattr v0.4.10 // indirect
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/rivo/uniseg v0.4.7 // indirect
@@ -323,7 +327,7 @@ require (
github.com/prometheus/exporter-toolkit v0.11.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
- github.com/rs/xid v1.5.0 // indirect
+ github.com/rs/xid v1.6.0 // indirect
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 // indirect
github.com/sercand/kuberesolver/v5 v5.1.1 // indirect
github.com/shopspring/decimal v1.2.0 // indirect
@@ -346,9 +350,9 @@ require (
go.opentelemetry.io/collector/semconv v0.105.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
- go.opentelemetry.io/otel v1.28.0 // indirect
+ go.opentelemetry.io/otel v1.28.0
go.opentelemetry.io/otel/metric v1.28.0 // indirect
- go.opentelemetry.io/otel/trace v1.28.0 // indirect
+ go.opentelemetry.io/otel/trace v1.28.0
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.21.0 // indirect
golang.org/x/mod v0.19.0 // indirect
diff --git a/go.sum b/go.sum
index 1c1dff53797db..28ca7bfdf1dcc 100644
--- a/go.sum
+++ b/go.sum
@@ -7,7 +7,6 @@ cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSR
cloud.google.com/go v0.41.0/go.mod h1:OauMR7DV8fzvZIl2qg6rkaIhD/vmgk4iwEw/h6ercmg=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
-cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
@@ -20,7 +19,6 @@ cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOY
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
-cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPTY=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
@@ -130,7 +128,6 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
-cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y=
cloud.google.com/go/storage v1.23.0/go.mod h1:vOEEDNFnciUMhBeT6hsJIn3ieU5cFRmzeLgDvXzfIXc=
cloud.google.com/go/storage v1.43.0 h1:CcxnSohZwizt4LCzQHWvBf1/kvtHUn7gk9QERXPyXFs=
@@ -253,8 +250,8 @@ github.com/DmitriyVTitov/size v1.5.0 h1:/PzqxYrOyOUX1BXj6J9OuVRVGe+66VL4D9FlUaW5
github.com/DmitriyVTitov/size v1.5.0/go.mod h1:le6rNI4CoLQV1b9gzp1+3d7hMAD/uu2QcJ+aYbNgiU0=
github.com/HdrHistogram/hdrhistogram-go v1.1.2 h1:5IcZpTvzydCQeHzK4Ef/D5rrSqwxob0t8PQPMybUNFM=
github.com/HdrHistogram/hdrhistogram-go v1.1.2/go.mod h1:yDgFjdqOqDEKOvasDdhWNXYg9BVp4O+o5f6V/ehm6Oo=
-github.com/IBM/go-sdk-core/v5 v5.17.4 h1:VGb9+mRrnS2HpHZFM5hy4J6ppIWnwNrw0G+tLSgcJLc=
-github.com/IBM/go-sdk-core/v5 v5.17.4/go.mod h1:KsAAI7eStAWwQa4F96MLy+whYSh39JzNjklZRbN/8ns=
+github.com/IBM/go-sdk-core/v5 v5.17.5 h1:AjGC7xNee5tgDIjndekBDW5AbypdERHSgib3EZ1KNsA=
+github.com/IBM/go-sdk-core/v5 v5.17.5/go.mod h1:KsAAI7eStAWwQa4F96MLy+whYSh39JzNjklZRbN/8ns=
github.com/IBM/ibm-cos-sdk-go v1.11.0 h1:Jp55NLN3OvBwucMGpP5wNybyjncsmTZ9+GPHai/1cE8=
github.com/IBM/ibm-cos-sdk-go v1.11.0/go.mod h1:FnWOym0CvrPM0nHoXvceClOEvGVXecPpmVIO5RFjlFk=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
@@ -386,8 +383,8 @@ github.com/aws/smithy-go v1.11.1 h1:IQ+lPZVkSM3FRtyaDox41R8YS6iwPMYIreejOgPW49g=
github.com/aws/smithy-go v1.11.1/go.mod h1:3xHYmszWVx2c0kIwQeEVf9uSm4fYZt67FBJnwub1bgM=
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27 h1:60m4tnanN1ctzIu4V3bfCNJ39BiOPSm1gHFlFjTkRE0=
github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27/go.mod h1:k08r+Yj1PRAmuayFiRK6MYuR5Ve4IuZtTfxErMIh0+c=
-github.com/baidubce/bce-sdk-go v0.9.187 h1:QeiuVWqZrxovoYsgR5CQFXCL9txOg/h97LveCOllAng=
-github.com/baidubce/bce-sdk-go v0.9.187/go.mod h1:zbYJMQwE4IZuyrJiFO8tO8NbtYiKTFTbwh4eIsqjVdg=
+github.com/baidubce/bce-sdk-go v0.9.189 h1:Dz0EAdM9wgbGHY4jeXFyQea8KeOdrRlbqwCbDLudcYw=
+github.com/baidubce/bce-sdk-go v0.9.189/go.mod h1:zbYJMQwE4IZuyrJiFO8tO8NbtYiKTFTbwh4eIsqjVdg=
github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f/go.mod h1:AuiFmCCPBSrqvVMvuqFuk0qogytodnVFVSN5CeJB8Gc=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3/go.mod h1:CIWtjkly68+yqLPbvwwR/fjNJA/idrtULjZWh2v1ys0=
@@ -585,8 +582,8 @@ github.com/eclipse/paho.mqtt.golang v1.2.0/go.mod h1:H9keYFcgq3Qr5OUJm/JZI/i6U7j
github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/edsrzf/mmap-go v1.1.0 h1:6EUwBLQ/Mcr1EYLE4Tn1VdW1A4ckqCQWZBw8Hr0kjpQ=
github.com/edsrzf/mmap-go v1.1.0/go.mod h1:19H/e8pUPLicwkyNgOykDXkJ9F0MHE+Z52B8EIth78Q=
-github.com/efficientgo/core v1.0.0-rc.2 h1:7j62qHLnrZqO3V3UA0AqOGd5d5aXV3AX6m/NZBHp78I=
-github.com/efficientgo/core v1.0.0-rc.2/go.mod h1:FfGdkzWarkuzOlY04VY+bGfb1lWrjaL6x/GLcQ4vJps=
+github.com/efficientgo/core v1.0.0-rc.3 h1:X6CdgycYWDcbYiJr1H1+lQGzx13o7bq3EUkbB9DsSPc=
+github.com/efficientgo/core v1.0.0-rc.3/go.mod h1:FfGdkzWarkuzOlY04VY+bGfb1lWrjaL6x/GLcQ4vJps=
github.com/efficientgo/e2e v0.13.1-0.20220922081603-45de9fc588a8 h1:UFLc39BcUXahSNCLUrKjNGZABMUZaS4M74EZvTRnq3k=
github.com/efficientgo/e2e v0.13.1-0.20220922081603-45de9fc588a8/go.mod h1:Hi+sz0REtlhVZ8zcdeTC3j6LUEEpJpPtNjOaOKuNcgI=
github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
@@ -645,8 +642,8 @@ github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA=
github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM=
-github.com/fsouza/fake-gcs-server v1.47.7 h1:56/U4rKY081TaNbq0gHWi7/71UxC2KROqcnrD9BRJhs=
-github.com/fsouza/fake-gcs-server v1.47.7/go.mod h1:4vPUynN8/zZlxk5Jpy6LvvTTxItdTAObK4DYnp89Jys=
+github.com/fsouza/fake-gcs-server v1.7.0 h1:Un0BXUXrRWYSmYyC1Rqm2e2WJfTPyDy/HGMz31emTi8=
+github.com/fsouza/fake-gcs-server v1.7.0/go.mod h1:5XIRs4YvwNbNoz+1JF8j6KLAyDh7RHGAyAK3EP2EsNk=
github.com/fullstorydev/emulators/storage v0.0.0-20240401123056-edc69752f474 h1:TufioMBjkJ6/Oqmlye/ReuxHFS35HyLmypj/BNy/8GY=
github.com/fullstorydev/emulators/storage v0.0.0-20240401123056-edc69752f474/go.mod h1:PQwxF4UU8wuL+srGxr3BOhIW5zXqgucwVlO/nPZLsxw=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
@@ -985,7 +982,6 @@ github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hf
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
-github.com/google/pprof v0.0.0-20201218002935-b9804c9f04c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
@@ -1025,7 +1021,6 @@ github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsC
github.com/googleapis/gnostic v0.1.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.2.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4=
-github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/gopcua/opcua v0.1.12/go.mod h1:a6QH4F9XeODklCmWuvaOdL8v9H0d73CEKUHWVZLQyE8=
github.com/gophercloud/gophercloud v0.0.0-20190126172459-c818fa66e4c8/go.mod h1:3WdhXV3rUYy9p6AUW8d94kr+HS62Y4VL9mBnFxsD8q4=
github.com/gophercloud/gophercloud v0.1.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
@@ -1034,9 +1029,8 @@ github.com/gophercloud/gophercloud v1.13.0 h1:8iY9d1DAbzMW6Vok1AxbbK5ZaUjzMp0tdy
github.com/gophercloud/gophercloud v1.13.0/go.mod h1:aAVqcocTSXh2vYFZ1JTvx4EQmfgzxRcNupUfxZbBNDM=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
-github.com/gorilla/handlers v1.5.2 h1:cLTUSsNkgcwhgRqvCNmdbRWG0A3N4F+M2nWKdScwyEE=
-github.com/gorilla/handlers v1.5.2/go.mod h1:dX+xVpaxdSw+q0Qek8SSsl3dfMk3jNddUkMzo0GtH0w=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.7.1/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
@@ -1048,8 +1042,8 @@ github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aN
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2 h1:qhugDMdQ4Vp68H0tp/0iN17DM2ehRo1rLEdOFe/gB8I=
github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2/go.mod h1:w/aiO1POVIeXUQyl0VQSZjl5OAGDTL5aX+4v0RA1tcw=
-github.com/grafana/dskit v0.0.0-20240819131358-463219e80ea0 h1:iMShjkEYATnBMbEa2wV4QiK5PU2trw24FOCON3v7+K4=
-github.com/grafana/dskit v0.0.0-20240819131358-463219e80ea0/go.mod h1:c4ASJAo1QFmXGydDzNed2o0+Fncx+x4YmQ1r9HfYU3c=
+github.com/grafana/dskit v0.0.0-20240905221822-931a021fb06b h1:x2HCzk29I0o5pRPfqWP/qwhXaPGlcz8pohq5kO1NZoE=
+github.com/grafana/dskit v0.0.0-20240905221822-931a021fb06b/go.mod h1:SPLNCARd4xdjCkue0O6hvuoveuS1dGJjDnfxYe405YQ=
github.com/grafana/go-gelf/v2 v2.0.1 h1:BOChP0h/jLeD+7F9mL7tq10xVkDG15he3T1zHuQaWak=
github.com/grafana/go-gelf/v2 v2.0.1/go.mod h1:lexHie0xzYGwCgiRGcvZ723bSNyNI8ZRD4s0CLobh90=
github.com/grafana/gocql v0.0.0-20200605141915-ba5dc39ece85 h1:xLuzPoOzdfNb/RF/IENCw+oLVdZB4G21VPhkHBgwSHY=
@@ -1092,8 +1086,8 @@ github.com/hashicorp/consul-awsauth v0.0.0-20220713182709-05ac1c5c2706/go.mod h1
github.com/hashicorp/consul-net-rpc v0.0.0-20220307172752-3602954411b4/go.mod h1:vWEAHAeAqfOwB3pSgHMQpIu8VH1jL+Ltg54Tw0wt/NI=
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/api v1.18.0/go.mod h1:owRRGJ9M5xReDC5nfT8FTJrNAPbT4NM6p/k+d03q2v4=
-github.com/hashicorp/consul/api v1.29.2 h1:aYyRn8EdE2mSfG14S1+L9Qkjtz8RzmaWh6AcNGRNwPw=
-github.com/hashicorp/consul/api v1.29.2/go.mod h1:0YObcaLNDSbtlgzIRtmRXI1ZkeuK0trCBxwZQ4MYnIk=
+github.com/hashicorp/consul/api v1.29.4 h1:P6slzxDLBOxUSj3fWo2o65VuKtbtOXFi7TSSgtXutuE=
+github.com/hashicorp/consul/api v1.29.4/go.mod h1:HUlfw+l2Zy68ceJavv2zAyArl2fqhGWnMycyt56sBgg=
github.com/hashicorp/consul/proto-public v0.2.1/go.mod h1:iWNlBDJIZQJC3bBiCThoqg9i7uk/4RQZYkqH1wiQrss=
github.com/hashicorp/consul/proto-public v0.6.2 h1:+DA/3g/IiKlJZb88NBn0ZgXrxJp2NlvCZdEyl+qxvL0=
github.com/hashicorp/consul/proto-public v0.6.2/go.mod h1:cXXbOg74KBNGajC+o8RlA502Esf0R9prcoJgiOX/2Tg=
@@ -1133,8 +1127,8 @@ github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iP
github.com/hashicorp/go-msgpack v1.1.5 h1:9byZdVjKTe5mce63pRVNP1L7UAmdHOTEMGehn6KvJWs=
github.com/hashicorp/go-msgpack v1.1.5/go.mod h1:gWVc3sv/wbDmR3rQsj1CAktEZzoz1YNK9NfGLXJ69/4=
github.com/hashicorp/go-msgpack/v2 v2.0.0/go.mod h1:JIxYkkFJRDDRSoWQBSh7s9QAVThq+82iWmUpmE4jKak=
-github.com/hashicorp/go-msgpack/v2 v2.1.1 h1:xQEY9yB2wnHitoSzk/B9UjXWRQ67QKu5AOm8aFp8N3I=
-github.com/hashicorp/go-msgpack/v2 v2.1.1/go.mod h1:upybraOAblm4S7rx0+jeNy+CWWhzywQsSRV5033mMu4=
+github.com/hashicorp/go-msgpack/v2 v2.1.2 h1:4Ee8FTp834e+ewB71RDrQ0VKpyFdrKOjvYtnQ/ltVj0=
+github.com/hashicorp/go-msgpack/v2 v2.1.2/go.mod h1:upybraOAblm4S7rx0+jeNy+CWWhzywQsSRV5033mMu4=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
@@ -1190,8 +1184,8 @@ github.com/hashicorp/raft v1.1.0/go.mod h1:4Ak7FSPnuvmb0GV6vgIAJ4vYT4bek9bb6Q+7H
github.com/hashicorp/raft v1.1.1/go.mod h1:vPAJM8Asw6u8LxC3eJCUZmRP/E4QmUGE1R7g7k8sG/8=
github.com/hashicorp/raft v1.2.0/go.mod h1:vPAJM8Asw6u8LxC3eJCUZmRP/E4QmUGE1R7g7k8sG/8=
github.com/hashicorp/raft v1.3.11/go.mod h1:J8naEwc6XaaCfts7+28whSeRvCqTd6e20BlCU3LtEO4=
-github.com/hashicorp/raft v1.7.0 h1:4u24Qn6lQ6uwziM++UgsyiT64Q8GyRn43CV41qPiz1o=
-github.com/hashicorp/raft v1.7.0/go.mod h1:N1sKh6Vn47mrWvEArQgILTyng8GoDRNYlgKyK7PMjs0=
+github.com/hashicorp/raft v1.7.1 h1:ytxsNx4baHsRZrhUcbt3+79zc4ly8qm7pi0393pSchY=
+github.com/hashicorp/raft v1.7.1/go.mod h1:hUeiEwQQR/Nk2iKDD0dkEhklSsu3jcAcqvPzPoZSAEM=
github.com/hashicorp/raft-autopilot v0.1.6/go.mod h1:Af4jZBwaNOI+tXfIqIdbcAnh/UyyqIMj/pOISIfhArw=
github.com/hashicorp/raft-boltdb v0.0.0-20171010151810-6e5ba93211ea/go.mod h1:pNv7Wc3ycL6F5oOWn+tPGo2gWD4a5X+yp/ntwdKLjRk=
github.com/hashicorp/raft-boltdb v0.0.0-20210409134258-03c10cc3d4ea/go.mod h1:qRd6nFJYYS6Iqnc/8HcUmko2/2Gw8qTFEmxDLii6W5I=
@@ -1324,7 +1318,6 @@ github.com/kolo/xmlrpc v0.0.0-20220921171641-a4b6fa1dd06b/go.mod h1:pcaDhQK0/NJZ
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
-github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
@@ -1421,8 +1414,8 @@ github.com/miekg/dns v1.1.61/go.mod h1:mnAarhS3nWaW+NVP2wTkYVIZyHNJ098SJZUki3eyk
github.com/mikioh/ipaddr v0.0.0-20190404000644-d465c8ab6721/go.mod h1:Ickgr2WtCLZ2MDGd4Gr0geeCH5HybhRJbonOgQpvSxc=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
-github.com/minio/minio-go/v7 v7.0.75 h1:0uLrB6u6teY2Jt+cJUVi9cTvDRuBKWSRzSAcznRkwlE=
-github.com/minio/minio-go/v7 v7.0.75/go.mod h1:qydcVzV8Hqtj1VtEocfxbmVFa2siu6HGa+LDEPogjD8=
+github.com/minio/minio-go/v7 v7.0.76 h1:9nxHH2XDai61cT/EFhyIw/wW4vJfpPNvl7lSFpRt+Ng=
+github.com/minio/minio-go/v7 v7.0.76/go.mod h1:AVM3IUN6WwKzmwBxVdjzhH8xq+f57JSbbvzqvUzR6eg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
@@ -1585,8 +1578,8 @@ github.com/philhofer/fwd v1.1.1/go.mod h1:gk3iGcWd9+svBvR0sR+KPcfE+RNWozjowpeBVG
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pierrec/lz4 v2.5.2+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
-github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
-github.com/pierrec/lz4/v4 v4.1.18/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
+github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
+github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pires/go-proxyproto v0.7.0 h1:IukmRewDQFWC7kfnb66CSomk2q/seBuilHBYFwyq0Hs=
github.com/pires/go-proxyproto v0.7.0/go.mod h1:Vz/1JPY/OACxWGQNIRY2BeyDmpoaWmEP40O9LbuiFR4=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
@@ -1596,9 +1589,6 @@ github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
-github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg=
-github.com/pkg/xattr v0.4.10 h1:Qe0mtiNFHQZ296vRgUjRCoPHPqH7VdTOrZx3g0T+pGA=
-github.com/pkg/xattr v0.4.10/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 h1:GFCKgmp0tecUJ0sJuv4pzYCqS9+RGSn52M3FUwPs+uo=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/pmezard/go-difflib v0.0.0-20151028094244-d8ed2627bdf0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -1688,8 +1678,8 @@ github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
-github.com/rs/xid v1.5.0 h1:mKX4bl4iPYJtEIxp6CYiUuLQ/8DYMoz0PUdtGgMFRVc=
-github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
+github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
+github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.4.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
github.com/russross/blackfriday v0.0.0-20170610170232-067529f716f4/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
@@ -1759,8 +1749,8 @@ github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.1/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
-github.com/spf13/afero v1.10.0 h1:EaGW2JJh15aKOejeuJ+wpFSHnbd7GE6Wvp3TsNhb6LY=
-github.com/spf13/afero v1.10.0/go.mod h1:UBogFpq8E9Hx+xc5CNTTEpTnuHVmXDwZcZcE1eb/UhQ=
+github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
+github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
@@ -1805,8 +1795,8 @@ github.com/tedsuo/ifrit v0.0.0-20191009134036-9a97d0632f00/go.mod h1:eyZnKCc955u
github.com/tencentcloud/tencentcloud-sdk-go v1.0.162/go.mod h1:asUz5BPXxgoPGaRgZaVm1iGcUAuHyYUo1nXqKa83cvI=
github.com/tencentyun/cos-go-sdk-v5 v0.7.40 h1:W6vDGKCHe4wBACI1d2UgE6+50sJFhRWU4O8IB2ozzxM=
github.com/tencentyun/cos-go-sdk-v5 v0.7.40/go.mod h1:4dCEtLHGh8QPxHEkgq+nFaky7yZxQuYwgSJM87icDaw=
-github.com/thanos-io/objstore v0.0.0-20240722162417-19b0c0f0ffd8 h1:QAgAQPtOj3OTlNKrm7G/xPeuDa8xz7brfNHv3WTUq6I=
-github.com/thanos-io/objstore v0.0.0-20240722162417-19b0c0f0ffd8/go.mod h1:3ukSkG4rIRUGkKM4oIz+BSuUx2e3RlQVVv3Cc3W+Tv4=
+github.com/thanos-io/objstore v0.0.0-20240818203309-0363dadfdfb1 h1:z0v9BB/p7s4J6R//+0a5M3wCld8KzNjrGRLIwXfrAZk=
+github.com/thanos-io/objstore v0.0.0-20240818203309-0363dadfdfb1/go.mod h1:3ukSkG4rIRUGkKM4oIz+BSuUx2e3RlQVVv3Cc3W+Tv4=
github.com/tidwall/gjson v1.6.0/go.mod h1:P256ACg0Mn+j1RXIDXoss50DeIABTYK1PULOJHhxOls=
github.com/tidwall/match v1.0.1/go.mod h1:LujAq0jyVjBy028G1WhWfIzbpQfMO8bBZ6Tyb0+pL9E=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
@@ -1824,6 +1814,18 @@ github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1
github.com/transip/gotransip v0.0.0-20190812104329-6d8d9179b66f/go.mod h1:i0f4R4o2HM0m3DZYQWsj6/MEowD57VzoH0v3d7igeFY=
github.com/ttacon/chalk v0.0.0-20160626202418-22c06c80ed31/go.mod h1:onvgF043R+lC5RZ8IT9rBXDaEDnpnw/Cl+HFiw+v/7Q=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
+github.com/twmb/franz-go v1.17.1 h1:0LwPsbbJeJ9R91DPUHSEd4su82WJWcTY1Zzbgbg4CeQ=
+github.com/twmb/franz-go v1.17.1/go.mod h1:NreRdJ2F7dziDY/m6VyspWd6sNxHKXdMZI42UfQ3GXM=
+github.com/twmb/franz-go/pkg/kadm v1.13.0 h1:bJq4C2ZikUE2jh/wl9MtMTQ/kpmnBgVFh8XMQBEC+60=
+github.com/twmb/franz-go/pkg/kadm v1.13.0/go.mod h1:VMvpfjz/szpH9WB+vGM+rteTzVv0djyHFimci9qm2C0=
+github.com/twmb/franz-go/pkg/kfake v0.0.0-20240821035758-b77dd13e2bfa h1:OmQ4DJhqeOPdIH60Psut1vYU8A6LGyxJbF09w5RAa2w=
+github.com/twmb/franz-go/pkg/kfake v0.0.0-20240821035758-b77dd13e2bfa/go.mod h1:nkBI/wGFp7t1NJnnCeJdS4sX5atPAqwCPpDXKuI7SC8=
+github.com/twmb/franz-go/pkg/kmsg v1.8.0 h1:lAQB9Z3aMrIP9qF9288XcFf/ccaSxEitNA1CDTEIeTA=
+github.com/twmb/franz-go/pkg/kmsg v1.8.0/go.mod h1:HzYEb8G3uu5XevZbtU0dVbkphaKTHk0X68N5ka4q6mU=
+github.com/twmb/franz-go/plugin/kotel v1.5.0 h1:TiPfGUbQK384OO7ZYGdo7JuPCbJn+/8njQ/D9Je9CDE=
+github.com/twmb/franz-go/plugin/kotel v1.5.0/go.mod h1:wRXzRo76x1myOUMaVHAyraXoGBdEcvlLChGTVv5+DWU=
+github.com/twmb/franz-go/plugin/kprom v1.1.0 h1:grGeIJbm4llUBF8jkDjTb/b8rKllWSXjMwIqeCCcNYQ=
+github.com/twmb/franz-go/plugin/kprom v1.1.0/go.mod h1:cTDrPMSkyrO99LyGx3AtiwF9W6+THHjZrkDE2+TEBIU=
github.com/uber-go/atomic v1.3.2/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g=
github.com/uber/jaeger-client-go v2.30.0+incompatible h1:D6wyKGCecFaSRUpo8lCVbaOOb6ThwMmTEbhRwtKR97o=
github.com/uber/jaeger-client-go v2.30.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
@@ -2003,7 +2005,6 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201216223049-8b5274cf687f/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
-golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
@@ -2263,7 +2264,6 @@ golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -2272,7 +2272,6 @@ golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -2292,7 +2291,6 @@ golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -2427,7 +2425,6 @@ golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
-golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
@@ -2456,6 +2453,7 @@ gonum.org/v1/netlib v0.0.0-20181029234149-ec6d1f5cefe6/go.mod h1:wa6Ws7BG/ESfp6d
gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw=
gonum.org/v1/plot v0.0.0-20190515093506-e2840ee46a6b/go.mod h1:Wt8AAjI+ypCyYX3nZBvf6cAIx93T+c/OS2HFAYskSZc=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
+google.golang.org/api v0.3.2/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -2553,9 +2551,7 @@ google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210108203827-ffc7fda8c3d7/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
-google.golang.org/genproto v0.0.0-20210226172003-ab064af71705/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
@@ -2796,8 +2792,8 @@ k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVh
k8s.io/utils v0.0.0-20190221042446-c2654d5206da/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
k8s.io/utils v0.0.0-20190529001817-6999998975a7/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
-k8s.io/utils v0.0.0-20230726121419-3b25d923346b h1:sgn3ZU783SCgtaSJjpcVVlRqd6GSnlTLKgpAAttJvpI=
-k8s.io/utils v0.0.0-20230726121419-3b25d923346b/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3 h1:b2FmK8YH+QEwq/Sy2uAEhmqL5nPfGYbJOcaqjeYYZoA=
+k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
modernc.org/httpfs v1.0.0/go.mod h1:BSkfoMUcahSijQD5J/Vu4UMOxzmEf5SNRwyXC4PJBEw=
modernc.org/libc v1.3.1/go.mod h1:f8sp9GAfEyGYh3lsRIKtBh/XwACdFvGznxm6GJmQvXk=
modernc.org/mathutil v1.1.1/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
diff --git a/integration/bloom_building_test.go b/integration/bloom_building_test.go
index 46e8570c47717..2c4662eef4cb3 100644
--- a/integration/bloom_building_test.go
+++ b/integration/bloom_building_test.go
@@ -92,6 +92,7 @@ func TestBloomBuilding(t *testing.T) {
"-target=bloom-planner",
"-bloom-build.enabled=true",
"-bloom-build.enable=true",
+ "-bloom-build.builder.planner-address=localhost:9095", // hack to succeed config validation
"-bloom-build.planner.interval=15s",
"-bloom-build.planner.min-table-offset=0", // Disable table offset so we process today's data.
"-bloom.cache-list-ops=0", // Disable cache list operations to avoid caching issues.
diff --git a/integration/cluster/cluster.go b/integration/cluster/cluster.go
index 2ee68e15cc178..57bc182d0c8cb 100644
--- a/integration/cluster/cluster.go
+++ b/integration/cluster/cluster.go
@@ -89,7 +89,7 @@ storage_config:
bloom_gateway:
enabled: false
-bloom_compactor:
+bloom_build:
enabled: false
compactor:
diff --git a/integration/loki_micro_services_test.go b/integration/loki_micro_services_test.go
index 48f9123d96eb1..cfa46c1e9d85b 100644
--- a/integration/loki_micro_services_test.go
+++ b/integration/loki_micro_services_test.go
@@ -5,14 +5,11 @@ package integration
import (
"context"
"encoding/json"
- "fmt"
- "math/rand"
"strings"
"sync"
"testing"
"time"
- "github.com/go-kit/log/level"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/prometheus/model/labels"
@@ -1078,218 +1075,6 @@ func TestCategorizedLabels(t *testing.T) {
}
}
-func TestBloomFiltersEndToEnd(t *testing.T) {
- t.Skip("skipping until blooms have settled")
- commonFlags := []string{
- "-bloom-compactor.compaction-interval=10s",
- "-bloom-compactor.enable-compaction=true",
- "-bloom-compactor.enabled=true",
- "-bloom-gateway.enable-filtering=true",
- "-bloom-gateway.enabled=true",
- "-compactor.compaction-interval=1s",
- "-frontend.default-validity=0s",
- "-ingester.flush-on-shutdown=true",
- "-ingester.wal-enabled=false",
- "-query-scheduler.use-scheduler-ring=false",
- "-store.index-cache-read.embedded-cache.enabled=true",
- "-querier.split-queries-by-interval=24h",
- }
-
- tenantID := randStringRunes()
-
- clu := cluster.New(
- level.DebugValue(),
- cluster.SchemaWithTSDB,
- func(c *cluster.Cluster) { c.SetSchemaVer("v13") },
- )
-
- defer func() {
- assert.NoError(t, clu.Cleanup())
- }()
-
- var (
- tDistributor = clu.AddComponent(
- "distributor",
- append(
- commonFlags,
- "-target=distributor",
- )...,
- )
- tIndexGateway = clu.AddComponent(
- "index-gateway",
- append(
- commonFlags,
- "-target=index-gateway",
- )...,
- )
- tBloomGateway = clu.AddComponent(
- "bloom-gateway",
- append(
- commonFlags,
- "-target=bloom-gateway",
- )...,
- )
- )
- require.NoError(t, clu.Run())
-
- var (
- tIngester = clu.AddComponent(
- "ingester",
- append(
- commonFlags,
- "-target=ingester",
- "-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
- )...,
- )
- tQueryScheduler = clu.AddComponent(
- "query-scheduler",
- append(
- commonFlags,
- "-target=query-scheduler",
- "-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
- )...,
- )
- tCompactor = clu.AddComponent(
- "compactor",
- append(
- commonFlags,
- "-target=compactor",
- "-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
- )...,
- )
- tBloomCompactor = clu.AddComponent(
- "bloom-compactor",
- append(
- commonFlags,
- "-target=bloom-compactor",
- "-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
- )...,
- )
- )
- require.NoError(t, clu.Run())
-
- // finally, run the query-frontend and querier.
- var (
- tQueryFrontend = clu.AddComponent(
- "query-frontend",
- append(
- commonFlags,
- "-target=query-frontend",
- "-frontend.scheduler-address="+tQueryScheduler.GRPCURL(),
- "-common.compactor-address="+tCompactor.HTTPURL(),
- "-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
- )...,
- )
- _ = clu.AddComponent(
- "querier",
- append(
- commonFlags,
- "-target=querier",
- "-querier.scheduler-address="+tQueryScheduler.GRPCURL(),
- "-common.compactor-address="+tCompactor.HTTPURL(),
- "-tsdb.shipper.index-gateway-client.server-address="+tIndexGateway.GRPCURL(),
- )...,
- )
- )
- require.NoError(t, clu.Run())
-
- now := time.Now()
-
- cliDistributor := client.New(tenantID, "", tDistributor.HTTPURL())
- cliDistributor.Now = now
-
- cliIngester := client.New(tenantID, "", tIngester.HTTPURL())
- cliIngester.Now = now
-
- cliQueryFrontend := client.New(tenantID, "", tQueryFrontend.HTTPURL())
- cliQueryFrontend.Now = now
-
- cliIndexGateway := client.New(tenantID, "", tIndexGateway.HTTPURL())
- cliIndexGateway.Now = now
-
- cliBloomGateway := client.New(tenantID, "", tBloomGateway.HTTPURL())
- cliBloomGateway.Now = now
-
- cliBloomCompactor := client.New(tenantID, "", tBloomCompactor.HTTPURL())
- cliBloomCompactor.Now = now
-
- lineTpl := `caller=loki_micro_services_test.go msg="push log line" id="%s"`
- // ingest logs from 10 different pods
- // from now-60m to now-55m
- // each line contains a random, unique string
- // that string is used to verify filtering using bloom gateway
- uniqueStrings := make([]string, 5*60)
- for i := 0; i < len(uniqueStrings); i++ {
- id := randStringRunes()
- id = fmt.Sprintf("%s-%d", id, i)
- uniqueStrings[i] = id
- pod := fmt.Sprintf("pod-%d", i%10)
- line := fmt.Sprintf(lineTpl, id)
- err := cliDistributor.PushLogLine(
- line,
- now.Add(-1*time.Hour).Add(time.Duration(i)*time.Second),
- nil,
- map[string]string{"pod": pod},
- )
- require.NoError(t, err)
- }
-
- // restart ingester to flush chunks and that there are zero chunks in memory
- require.NoError(t, cliIngester.Flush())
- require.NoError(t, tIngester.Restart())
-
- // wait for compactor to compact index and for bloom compactor to build bloom filters
- require.Eventually(t, func() bool {
- // verify metrics that observe usage of block for filtering
- metrics, err := cliBloomCompactor.Metrics()
- require.NoError(t, err)
- successfulRunCount, labels, err := extractMetric(`loki_bloomcompactor_runs_completed_total`, metrics)
- if err != nil {
- return false
- }
- t.Log("bloom compactor runs", successfulRunCount, labels)
- if labels["status"] != "success" {
- return false
- }
-
- return successfulRunCount == 1
- }, 30*time.Second, time.Second)
-
- // use bloom gateway to perform needle in the haystack queries
- randIdx := rand.Intn(len(uniqueStrings))
- q := fmt.Sprintf(`{job="varlog"} |= "%s"`, uniqueStrings[randIdx])
- start := now.Add(-90 * time.Minute)
- end := now.Add(-30 * time.Minute)
- resp, err := cliQueryFrontend.RunRangeQueryWithStartEnd(context.Background(), q, start, end)
- require.NoError(t, err)
-
- // verify response
- require.Len(t, resp.Data.Stream, 1)
- expectedLine := fmt.Sprintf(lineTpl, uniqueStrings[randIdx])
- require.Equal(t, expectedLine, resp.Data.Stream[0].Values[0][1])
-
- // verify metrics that observe usage of block for filtering
- bloomGwMetrics, err := cliBloomGateway.Metrics()
- require.NoError(t, err)
-
- unfilteredCount := getMetricValue(t, "loki_bloom_gateway_chunkrefs_pre_filtering", bloomGwMetrics)
- require.Equal(t, float64(10), unfilteredCount)
-
- filteredCount := getMetricValue(t, "loki_bloom_gateway_chunkrefs_post_filtering", bloomGwMetrics)
- require.Equal(t, float64(1), filteredCount)
-
- mf, err := extractMetricFamily("loki_bloom_gateway_bloom_query_latency", bloomGwMetrics)
- require.NoError(t, err)
-
- count := getValueFromMetricFamilyWithFunc(mf, &dto.LabelPair{
- Name: proto.String("status"),
- Value: proto.String("success"),
- }, func(m *dto.Metric) uint64 {
- return m.Histogram.GetSampleCount()
- })
- require.Equal(t, uint64(1), count)
-}
-
func getValueFromMF(mf *dto.MetricFamily, lbs []*dto.LabelPair) float64 {
return getValueFromMetricFamilyWithFunc(mf, lbs[0], func(m *dto.Metric) float64 { return m.Counter.GetValue() })
}
diff --git a/integration/loki_rule_eval_test.go b/integration/loki_rule_eval_test.go
index e0898c50829a3..b3811ac624252 100644
--- a/integration/loki_rule_eval_test.go
+++ b/integration/loki_rule_eval_test.go
@@ -103,7 +103,7 @@ func testRuleEval(t *testing.T, mode string) {
// this is the function that will be called when the remote-write receiver receives a request.
// it tests that the expected payload is received.
- expectedResults := func(w http.ResponseWriter, r *http.Request) {
+ expectedResults := func(_ http.ResponseWriter, r *http.Request) {
wr, err := remote.DecodeWriteRequest(r.Body)
require.NoError(t, err)
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 2651831719e40..7727779251bcc 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,32 @@
## Main
+## [0.6.2](https://github.com/grafana/loki/compare/operator/v0.6.1...operator/v0.6.2) (2024-09-11)
+
+
+### Features
+
+* Ingester Stream Limit Improvements ([#13532](https://github.com/grafana/loki/issues/13532)) ([ec34aaa](https://github.com/grafana/loki/commit/ec34aaa1ff2e616ef223631657b63f7dffedd3cc))
+* **operator:** Add alert for discarded samples ([#13512](https://github.com/grafana/loki/issues/13512)) ([5f2a02f](https://github.com/grafana/loki/commit/5f2a02f14222dab891b7851e8f48052d6c9b594a))
+* **operator:** Add support for Loki OTLP limits config ([#13446](https://github.com/grafana/loki/issues/13446)) ([d02f435](https://github.com/grafana/loki/commit/d02f435d3bf121b19e15de4f139c95a6d010b25c))
+* **operator:** Add support for the volume API ([#13369](https://github.com/grafana/loki/issues/13369)) ([d451e23](https://github.com/grafana/loki/commit/d451e23225047a11b4d5d82900cec4a46d6e7b39))
+* **operator:** Enable leader-election ([#13760](https://github.com/grafana/loki/issues/13760)) ([1ba4bff](https://github.com/grafana/loki/commit/1ba4bff005930b173391df35248e6f58e076fa74))
+* **operator:** Update Loki operand to v3.1.0 ([#13422](https://github.com/grafana/loki/issues/13422)) ([cf5f52d](https://github.com/grafana/loki/commit/cf5f52dca0db93847218cdd2c3f4860d983381ae))
+* **operator:** Update Loki operand to v3.1.1 ([#14042](https://github.com/grafana/loki/issues/14042)) ([7ae1588](https://github.com/grafana/loki/commit/7ae1588200396b73a16fadd2610670a5ce5fd747))
+
+
+### Bug Fixes
+
+* **deps:** update k8s.io/utils digest to 702e33f ([#14033](https://github.com/grafana/loki/issues/14033)) ([b7eecc7](https://github.com/grafana/loki/commit/b7eecc7a693e96f4d0fe0dcd7583ecdc4dd7283f))
+* **operator:** add alertmanager client config to ruler template ([#13182](https://github.com/grafana/loki/issues/13182)) ([6148c37](https://github.com/grafana/loki/commit/6148c3760d701768e442186d4e7d574c7dc16c91))
+* **operator:** Allow structured metadata only if V13 schema provided ([#13463](https://github.com/grafana/loki/issues/13463)) ([3ac130b](https://github.com/grafana/loki/commit/3ac130b8a152169766cb173718f2312aeb4f694e))
+* **operator:** Don't overwrite annotations for LokiStack ingress resources ([#13708](https://github.com/grafana/loki/issues/13708)) ([f523530](https://github.com/grafana/loki/commit/f52353060dd936cff587ff2060c8616941695ece))
+* **operator:** Improve API documentation for schema version ([#13122](https://github.com/grafana/loki/issues/13122)) ([3a9f50f](https://github.com/grafana/loki/commit/3a9f50f5099a02e662b8ac10ddad0b36cd844161))
+* **operator:** Remove duplicate conditions from status ([#13497](https://github.com/grafana/loki/issues/13497)) ([527510d](https://github.com/grafana/loki/commit/527510d1a84a981250047dbabba8d492177b8452))
+* **operator:** Set object storage for delete requests when using retention ([#13562](https://github.com/grafana/loki/issues/13562)) ([46de4c1](https://github.com/grafana/loki/commit/46de4c1bc839ef682798bec5003123f7d5f4404b))
+* **operator:** Skip updating annotations for serviceaccounts ([#13450](https://github.com/grafana/loki/issues/13450)) ([1b9b111](https://github.com/grafana/loki/commit/1b9b11116b48fb37b7015d27104668412fc04937))
+* **operator:** Support v3.1.0 in OpenShift dashboards ([#13430](https://github.com/grafana/loki/issues/13430)) ([8279d59](https://github.com/grafana/loki/commit/8279d59f145df9c9132aeff9e3d46c738650027c))
+* **operator:** Watch for CredentialsRequests on CCOAuthEnv only ([#13299](https://github.com/grafana/loki/issues/13299)) ([7fc926e](https://github.com/grafana/loki/commit/7fc926e36ea8fca7bd8e9955c8994574535dbbae))
+
## [0.6.1](https://github.com/grafana/loki/compare/operator/v0.6.0...operator/v0.6.1) (2024-06-03)
diff --git a/operator/Makefile b/operator/Makefile
index 3be618f551b18..8dce06bbce797 100644
--- a/operator/Makefile
+++ b/operator/Makefile
@@ -21,7 +21,7 @@ LOKI_OPERATOR_NS ?= kubernetes-operators
# To re-generate a bundle for another specific version without changing the standard setup, you can:
# - use the VERSION as arg of the bundle target (e.g make bundle VERSION=0.0.2)
# - use environment variables to overwrite this value (e.g export VERSION=0.0.2)
-VERSION ?= 0.6.1
+VERSION ?= 0.6.2
CHANNELS ?= "alpha"
DEFAULT_CHANNEL ?= "alpha"
diff --git a/operator/apis/loki/go.mod b/operator/apis/loki/go.mod
index e71be82d62d2b..24d692d874e34 100644
--- a/operator/apis/loki/go.mod
+++ b/operator/apis/loki/go.mod
@@ -6,7 +6,7 @@ require (
github.com/stretchr/testify v1.8.2
k8s.io/api v0.26.9
k8s.io/apimachinery v0.26.9
- k8s.io/utils v0.0.0-20240102154912-e7106e64919e
+ k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3
sigs.k8s.io/controller-runtime v0.14.5
)
diff --git a/operator/apis/loki/go.sum b/operator/apis/loki/go.sum
index 2990aff037cce..95a57d02a1675 100644
--- a/operator/apis/loki/go.sum
+++ b/operator/apis/loki/go.sum
@@ -82,8 +82,8 @@ k8s.io/apimachinery v0.26.9 h1:5yAV9cFR7Z4gIorKcAjWnx4uxtxiFsERwq4Pvmx0CCg=
k8s.io/apimachinery v0.26.9/go.mod h1:qYzLkrQ9lhrZRh0jNKo2cfvf/R1/kQONnSiyB7NUJU0=
k8s.io/klog/v2 v2.80.1 h1:atnLQ121W371wYYFawwYx1aEY2eUfs4l3J72wtgAwV4=
k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
-k8s.io/utils v0.0.0-20240102154912-e7106e64919e h1:eQ/4ljkx21sObifjzXwlPKpdGLrCfRziVtos3ofG/sQ=
-k8s.io/utils v0.0.0-20240102154912-e7106e64919e/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3 h1:b2FmK8YH+QEwq/Sy2uAEhmqL5nPfGYbJOcaqjeYYZoA=
+k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
sigs.k8s.io/controller-runtime v0.14.5 h1:6xaWFqzT5KuAQ9ufgUaj1G/+C4Y1GRkhrxl+BJ9i+5s=
sigs.k8s.io/controller-runtime v0.14.5/go.mod h1:WqIdsAY6JBsjfc/CqO0CORmNtoCtE4S6qbPc9s68h+0=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 h1:iXTIw73aPyC+oRdyqqvVJuloN1p0AC/kzH07hu3NE+k=
diff --git a/operator/apis/loki/v1/lokistack_types.go b/operator/apis/loki/v1/lokistack_types.go
index 90cee75d94475..41f4ad95e6a8b 100644
--- a/operator/apis/loki/v1/lokistack_types.go
+++ b/operator/apis/loki/v1/lokistack_types.go
@@ -791,6 +791,140 @@ type IngestionLimitSpec struct {
PerStreamRateLimitBurst int32 `json:"perStreamRateLimitBurst,omitempty"`
}
+// OTLPAttributeAction defines the action to executed when indexing
+// OTLP resource attributes. Resource attributes can be either added
+// to the index, the chunk structured metadata or entirely dropped.
+type OTLPAttributeAction string
+
+const (
+ // OTLPAttributeActionIndexLabel stores a resource attribute as a label, which is part of the index identifying streams.
+ OTLPAttributeActionIndexLabel OTLPAttributeAction = "indexLabel"
+ // OTLPAttributeActionStructuredMetadata stores an attribute as structured metadata with each log entry.
+ OTLPAttributeActionStructuredMetadata OTLPAttributeAction = "structuredMetadata"
+ // OTLPAttributeActionDrop removes the matching attributes from the log entry.
+ OTLPAttributeActionDrop OTLPAttributeAction = "drop"
+)
+
+// OTLPAttributesSpec contains the configuration for a set of attributes
+// to store them as index labels or structured metadata or drop them altogether.
+type OTLPAttributesSpec struct {
+ // Action defines the indexing action for the selected attributes. They
+ // can be either added to structured metadata or drop altogether.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +kubebuilder:validation:Enum=structured_metadata;drop
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Action"
+ Action OTLPAttributeAction `json:"action"`
+
+ // Attributes allows choosing the attributes by listing their names.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Attribute Names"
+ Attributes []string `json:"attributes,omitempty"`
+
+ // Regex allows choosing the attributes by matching a regular expression.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Regular Expression"
+ Regex string `json:"regex,omitempty"`
+}
+
+// OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+// to store them as index labels or structured metadata or drop them altogether.
+type OTLPResourceAttributesConfigSpec struct {
+ // Action defines the indexing action for the selected resoure attributes. They
+ // can be either indexed as labels, added to structured metadata or drop altogether.
+ //
+ // +required
+ // +kubebuilder:validation:Required
+ // +kubebuilder:validation:Enum=index_label;structured_metadata;drop
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Action"
+ Action OTLPAttributeAction `json:"action,omitempty"`
+
+ // Attributes is the list of attributes to configure indexing or drop them
+ // altogether.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Attribute Names"
+ Attributes []string `json:"attributes,omitempty"`
+
+ // Regex allows choosing the attributes by matching a regular expression.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Regular Expression"
+ Regex string `json:"regex,omitempty"`
+}
+
+// OTLPResourceAttributesSpec contains the configuration for resource attributes
+// to store them as index labels or structured metadata or drop them altogether.
+type OTLPResourceAttributesSpec struct {
+ // IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ // indexed as labels.
+ //
+ // If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,xDescriptors="urn:alm:descriptor:com.tectonic.ui:booleanSwitch",displayName="Ignore Global Defaults"
+ IgnoreDefaults bool `json:"ignoreDefaults,omitempty"`
+
+ // Attributes contains the configuration for resource attributes
+ // to store them as index labels or structured metadata or drop them altogether.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Attributes"
+ Attributes []OTLPResourceAttributesConfigSpec `json:"attributes,omitempty"`
+}
+
+// GlobalOTLPSpec defines which resource, scope and log attributes to
+// be stored as index or structured metadata or drop altogether for all
+// tenants.
+type GlobalOTLPSpec struct {
+ // IndexedResourceAttributes contains the global configuration for resource attributes
+ // to store them as index labels or structured metadata or drop them altogether.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Indexed Resource Attributes"
+ IndexedResourceAttributes []string `json:"indexedResourceAttributes,omitempty"`
+
+ OTLPSpec `json:",omitempty"`
+}
+
+// OTLPSpec defines which resource, scope and log attributes to
+// be stored as index or structured metadata or drop altogether
+type OTLPSpec struct {
+ // ResourceAttributes contains the configuration for resource attributes
+ // to store them as index labels or structured metadata or drop them altogether.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Resource Attributes"
+ ResourceAttributes *OTLPResourceAttributesSpec `json:"resourceAttributes,omitempty"`
+
+ // ScopeAttributes contains the configuration for scope attributes
+ // to store them as index labels or structured metadata or drop them altogether.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Scope Attributes"
+ ScopeAttributes []OTLPAttributesSpec `json:"scopeAttributes,omitempty"`
+
+ // LogAttributes contains the configuration for log attributes
+ // to store them as index labels or structured metadata or drop them altogether.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Log Attributes"
+ LogAttributes []OTLPAttributesSpec `json:"logAttributes,omitempty"`
+}
+
// RetentionStreamSpec defines a log stream with separate retention time.
type RetentionStreamSpec struct {
// Days contains the number of days logs are kept.
@@ -844,6 +978,14 @@ type LimitsTemplateSpec struct {
// +kubebuilder:validation:Optional
QueryLimits *QueryLimitSpec `json:"queries,omitempty"`
+ // OTLP to configure which resource, scope and log attributes
+ // to store as labels or structured metadata or drop them altogether
+ // for all tenants.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ OTLP *GlobalOTLPSpec `json:"otlp,omitempty"`
+
// Retention defines how long logs are kept in storage.
//
// +optional
@@ -865,6 +1007,14 @@ type PerTenantLimitsTemplateSpec struct {
// +kubebuilder:validation:Optional
QueryLimits *PerTenantQueryLimitSpec `json:"queries,omitempty"`
+ // OTLP to configure which resource, scope and log attributes
+ // to store as labels or structured metadata or drop them altogether
+ // for a single tenants.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ OTLP *OTLPSpec `json:"otlp,omitempty"`
+
// Retention defines how long logs are kept in storage.
//
// +optional
@@ -1313,3 +1463,16 @@ func (t BlockedQueryTypes) String() string {
return strings.Join(res, ",")
}
+
+func (a OTLPAttributeAction) Value() string {
+ switch a {
+ case OTLPAttributeActionIndexLabel:
+ return "index_label"
+ case OTLPAttributeActionStructuredMetadata:
+ return "structured_metadata"
+ case OTLPAttributeActionDrop:
+ return "drop"
+ default:
+ return string(a)
+ }
+}
diff --git a/operator/apis/loki/v1/v1.go b/operator/apis/loki/v1/v1.go
index 8e86ea4b24985..a17e7244dfb53 100644
--- a/operator/apis/loki/v1/v1.go
+++ b/operator/apis/loki/v1/v1.go
@@ -84,6 +84,13 @@ var (
// ErrIPv6InstanceAddrTypeNotAllowed when the default InstanceAddrType is used with enableIPv6.
ErrIPv6InstanceAddrTypeNotAllowed = errors.New(`instanceAddrType "default" cannot be used with enableIPv6 at the same time`)
+ // ErrOTLPResourceAttributesEmptyNotAllowed when the OTLP ResourceAttributes are empty even though ignoreDefaults is enabled.
+ ErrOTLPResourceAttributesEmptyNotAllowed = errors.New(`resourceAttributes cannot be empty when ignoreDefaults is true`)
+ // ErrOTLPResourceAttributesIndexLabelActionMissing when OTLP ResourceAttributes does not contain at least one index label when ignoreDefaults is enabled.
+ ErrOTLPResourceAttributesIndexLabelActionMissing = errors.New(`resourceAttributes does not contain at least one attributed mapped to "index_label"`)
+ // ErrOTLPAttributesSpecInvalid when the OTLPAttributesSpec attibutes and regex fields are both empty.
+ ErrOTLPAttributesSpecInvalid = errors.New(`attributes and regex cannot be empty at the same time`)
+
// ErrRuleMustMatchNamespace indicates that an expression used in an alerting or recording rule is missing
// matchers for a namespace.
ErrRuleMustMatchNamespace = errors.New("rule needs to have a matcher for the namespace")
diff --git a/operator/apis/loki/v1/zz_generated.deepcopy.go b/operator/apis/loki/v1/zz_generated.deepcopy.go
index c7206c5ab6602..faab229b2a569 100644
--- a/operator/apis/loki/v1/zz_generated.deepcopy.go
+++ b/operator/apis/loki/v1/zz_generated.deepcopy.go
@@ -504,6 +504,27 @@ func (in *ClusterProxy) DeepCopy() *ClusterProxy {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *GlobalOTLPSpec) DeepCopyInto(out *GlobalOTLPSpec) {
+ *out = *in
+ if in.IndexedResourceAttributes != nil {
+ in, out := &in.IndexedResourceAttributes, &out.IndexedResourceAttributes
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+ in.OTLPSpec.DeepCopyInto(&out.OTLPSpec)
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GlobalOTLPSpec.
+func (in *GlobalOTLPSpec) DeepCopy() *GlobalOTLPSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(GlobalOTLPSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HashRingSpec) DeepCopyInto(out *HashRingSpec) {
*out = *in
@@ -579,6 +600,11 @@ func (in *LimitsTemplateSpec) DeepCopyInto(out *LimitsTemplateSpec) {
*out = new(QueryLimitSpec)
**out = **in
}
+ if in.OTLP != nil {
+ in, out := &in.OTLP, &out.OTLP
+ *out = new(GlobalOTLPSpec)
+ (*in).DeepCopyInto(*out)
+ }
if in.Retention != nil {
in, out := &in.Retention, &out.Retention
*out = new(RetentionLimitSpec)
@@ -1057,6 +1083,102 @@ func (in *OPASpec) DeepCopy() *OPASpec {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *OTLPAttributesSpec) DeepCopyInto(out *OTLPAttributesSpec) {
+ *out = *in
+ if in.Attributes != nil {
+ in, out := &in.Attributes, &out.Attributes
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OTLPAttributesSpec.
+func (in *OTLPAttributesSpec) DeepCopy() *OTLPAttributesSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(OTLPAttributesSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *OTLPResourceAttributesConfigSpec) DeepCopyInto(out *OTLPResourceAttributesConfigSpec) {
+ *out = *in
+ if in.Attributes != nil {
+ in, out := &in.Attributes, &out.Attributes
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OTLPResourceAttributesConfigSpec.
+func (in *OTLPResourceAttributesConfigSpec) DeepCopy() *OTLPResourceAttributesConfigSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(OTLPResourceAttributesConfigSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *OTLPResourceAttributesSpec) DeepCopyInto(out *OTLPResourceAttributesSpec) {
+ *out = *in
+ if in.Attributes != nil {
+ in, out := &in.Attributes, &out.Attributes
+ *out = make([]OTLPResourceAttributesConfigSpec, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OTLPResourceAttributesSpec.
+func (in *OTLPResourceAttributesSpec) DeepCopy() *OTLPResourceAttributesSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(OTLPResourceAttributesSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *OTLPSpec) DeepCopyInto(out *OTLPSpec) {
+ *out = *in
+ if in.ResourceAttributes != nil {
+ in, out := &in.ResourceAttributes, &out.ResourceAttributes
+ *out = new(OTLPResourceAttributesSpec)
+ (*in).DeepCopyInto(*out)
+ }
+ if in.ScopeAttributes != nil {
+ in, out := &in.ScopeAttributes, &out.ScopeAttributes
+ *out = make([]OTLPAttributesSpec, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ if in.LogAttributes != nil {
+ in, out := &in.LogAttributes, &out.LogAttributes
+ *out = make([]OTLPAttributesSpec, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OTLPSpec.
+func (in *OTLPSpec) DeepCopy() *OTLPSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(OTLPSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ObjectStorageSchema) DeepCopyInto(out *ObjectStorageSchema) {
*out = *in
@@ -1162,6 +1284,11 @@ func (in *PerTenantLimitsTemplateSpec) DeepCopyInto(out *PerTenantLimitsTemplate
*out = new(PerTenantQueryLimitSpec)
(*in).DeepCopyInto(*out)
}
+ if in.OTLP != nil {
+ in, out := &in.OTLP, &out.OTLP
+ *out = new(OTLPSpec)
+ (*in).DeepCopyInto(*out)
+ }
if in.Retention != nil {
in, out := &in.Retention, &out.Retention
*out = new(RetentionLimitSpec)
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml
index d15e930efe88e..eca4a2b99b5d3 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml
@@ -3,9 +3,9 @@ kind: ServiceAccount
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-controller-manager-metrics-reader
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
index 03115d44fdad5..83b327cf24d0d 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
@@ -6,11 +6,11 @@ metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: metrics
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-controller-manager-metrics-service
spec:
ports:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-token_v1_secret.yaml b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-token_v1_secret.yaml
index 109b3393f9982..2a1160859812b 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-token_v1_secret.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-token_v1_secret.yaml
@@ -4,10 +4,10 @@ metadata:
annotations:
kubernetes.io/service-account.name: loki-operator-controller-manager-metrics-reader
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-controller-manager-metrics-token
type: kubernetes.io/service-account-token
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
index 97d00df9c4efb..1af7f354c22e3 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
@@ -3,11 +3,11 @@ kind: ClusterRoleBinding
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-controller-manager-read-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml b/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml
index 868ea5ffb8a59..33f2b19ed8dcb 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml
@@ -60,9 +60,9 @@ data:
kind: ConfigMap
metadata:
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-manager-config
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml b/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
index 53b6d9edb87e9..698498c81b6c8 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
@@ -2,11 +2,11 @@ apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator
name: loki-operator-metrics-monitor
spec:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml b/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
index 6b4e2e91a201a..4455d646e6c8e 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
@@ -3,11 +3,11 @@ kind: ClusterRole
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-metrics-reader
rules:
- nonResourceURLs:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
index 79175acf55831..25fa29ce78669 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-prometheus
rules:
- apiGroups:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
index 32cc553937813..939d6973181a9 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml b/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml
index 2fe1edca6fd22..56581d01bbc2d 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml
@@ -3,11 +3,11 @@ kind: Service
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-webhook-service
spec:
ports:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
index 8468d36696af4..5f6c30e2cf4f2 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -149,8 +149,8 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:0.6.1
- createdAt: "2024-07-05T08:22:21Z"
+ containerImage: docker.io/grafana/loki-operator:0.6.2
+ createdAt: "2024-09-09T09:16:58Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
features.operators.openshift.io/disconnected: "true"
@@ -167,7 +167,7 @@ metadata:
labels:
operatorframework.io/arch.amd64: supported
operatorframework.io/arch.arm64: supported
- name: loki-operator.v0.6.1
+ name: loki-operator.v0.6.2
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -361,6 +361,66 @@ spec:
path: limits.global.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: IndexedResourceAttributes contains the global configuration for
+ resource attributes to store them as index labels or structured metadata
+ or drop them altogether.
+ displayName: Indexed Resource Attributes
+ path: limits.global.otlp.indexedResourceAttributes
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.global.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.global.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.global.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.global.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.global.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.global.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.global.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.scopeAttributes[0].regex
- description: CardinalityLimit defines the cardinality limit for index queries.
displayName: Cardinality Limit
path: limits.global.queries.cardinalityLimit
@@ -457,6 +517,61 @@ spec:
path: limits.tenants.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.tenants.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.tenants.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.tenants.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.tenants.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.tenants.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.scopeAttributes[0].regex
- description: Blocked defines the list of rules to block matching queries.
displayName: Blocked
path: limits.tenants.queries.blocked
@@ -1699,11 +1814,11 @@ spec:
serviceAccountName: loki-operator-controller-manager
deployments:
- label:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
control-plane: controller-manager
name: loki-operator-controller-manager
spec:
@@ -1737,7 +1852,7 @@ spec:
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
value: quay.io/observatorium/opa-openshift:latest
- image: docker.io/grafana/loki-operator:0.6.1
+ image: docker.io/grafana/loki-operator:0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -1862,8 +1977,8 @@ spec:
name: gateway
- image: quay.io/observatorium/opa-openshift:latest
name: opa
- replaces: loki-operator.v0.6.0
- version: 0.6.1
+ replaces: loki-operator.v0.6.1
+ version: 0.6.2
webhookdefinitions:
- admissionReviewVersions:
- v1
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
index 64298e8d50629..1ab9360ef7ac6 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: alertingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
index 7cfeec6d074f6..6d32f047bc094 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: lokistacks.loki.grafana.com
spec:
conversion:
@@ -164,6 +164,127 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for all tenants.
+ properties:
+ indexedResourceAttributes:
+ description: |-
+ IndexedResourceAttributes contains the global configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ type: string
+ type: array
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
@@ -307,6 +428,120 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for a single tenants.
+ properties:
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
index cdef169a4cede..433f205ec8205 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: recordingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
index d7af1c24bad25..bb3edbfd454a5 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: rulerconfigs.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml b/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml
index d15e930efe88e..eca4a2b99b5d3 100644
--- a/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml
+++ b/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-reader_v1_serviceaccount.yaml
@@ -3,9 +3,9 @@ kind: ServiceAccount
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-controller-manager-metrics-reader
diff --git a/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml b/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
index 8738be99aa872..e82a7c02ef414 100644
--- a/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
+++ b/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
@@ -4,11 +4,11 @@ metadata:
creationTimestamp: null
labels:
app.kubernetes.io/component: metrics
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-controller-manager-metrics-service
spec:
ports:
diff --git a/operator/bundle/community/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml b/operator/bundle/community/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
index 242c0978035a0..0a1c320c138ae 100644
--- a/operator/bundle/community/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
+++ b/operator/bundle/community/manifests/loki-operator-controller-manager-read-metrics_rbac.authorization.k8s.io_v1_clusterrolebinding.yaml
@@ -3,11 +3,11 @@ kind: ClusterRoleBinding
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-controller-manager-read-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
diff --git a/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml b/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml
index 98efac218bb48..8679061570153 100644
--- a/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml
+++ b/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml
@@ -24,9 +24,9 @@ data:
kind: ConfigMap
metadata:
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-manager-config
diff --git a/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml b/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
index 6b4e2e91a201a..4455d646e6c8e 100644
--- a/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
+++ b/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
@@ -3,11 +3,11 @@ kind: ClusterRole
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-metrics-reader
rules:
- nonResourceURLs:
diff --git a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
index 79175acf55831..25fa29ce78669 100644
--- a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
+++ b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-prometheus
rules:
- apiGroups:
diff --git a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
index 32cc553937813..939d6973181a9 100644
--- a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
+++ b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
diff --git a/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml b/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml
index 2fe1edca6fd22..56581d01bbc2d 100644
--- a/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml
+++ b/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml
@@ -3,11 +3,11 @@ kind: Service
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: loki-operator-webhook-service
spec:
ports:
diff --git a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
index d0cd0a035dee8..09119acd77819 100644
--- a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
@@ -149,8 +149,8 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:0.6.1
- createdAt: "2024-07-05T08:22:19Z"
+ containerImage: docker.io/grafana/loki-operator:0.6.2
+ createdAt: "2024-09-09T09:16:56Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
operators.operatorframework.io/builder: operator-sdk-unknown
@@ -160,7 +160,7 @@ metadata:
labels:
operatorframework.io/arch.amd64: supported
operatorframework.io/arch.arm64: supported
- name: loki-operator.v0.6.1
+ name: loki-operator.v0.6.2
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -354,6 +354,66 @@ spec:
path: limits.global.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: IndexedResourceAttributes contains the global configuration for
+ resource attributes to store them as index labels or structured metadata
+ or drop them altogether.
+ displayName: Indexed Resource Attributes
+ path: limits.global.otlp.indexedResourceAttributes
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.global.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.global.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.global.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.global.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.global.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.global.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.global.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.scopeAttributes[0].regex
- description: CardinalityLimit defines the cardinality limit for index queries.
displayName: Cardinality Limit
path: limits.global.queries.cardinalityLimit
@@ -450,6 +510,61 @@ spec:
path: limits.tenants.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.tenants.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.tenants.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.tenants.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.tenants.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.tenants.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.scopeAttributes[0].regex
- description: Blocked defines the list of rules to block matching queries.
displayName: Blocked
path: limits.tenants.queries.blocked
@@ -1679,11 +1794,11 @@ spec:
serviceAccountName: loki-operator-controller-manager
deployments:
- label:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
control-plane: controller-manager
name: loki-operator-controller-manager
spec:
@@ -1712,12 +1827,12 @@ spec:
- /manager
env:
- name: RELATED_IMAGE_LOKI
- value: docker.io/grafana/loki:3.1.0
+ value: docker.io/grafana/loki:3.1.1
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
value: quay.io/observatorium/opa-openshift:latest
- image: docker.io/grafana/loki-operator:0.6.1
+ image: docker.io/grafana/loki-operator:0.6.2
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -1824,14 +1939,14 @@ spec:
provider:
name: Grafana Loki SIG Operator
relatedImages:
- - image: docker.io/grafana/loki:3.1.0
+ - image: docker.io/grafana/loki:3.1.1
name: loki
- image: quay.io/observatorium/api:latest
name: gateway
- image: quay.io/observatorium/opa-openshift:latest
name: opa
- replaces: loki-operator.v0.6.0
- version: 0.6.1
+ replaces: loki-operator.v0.6.1
+ version: 0.6.2
webhookdefinitions:
- admissionReviewVersions:
- v1
diff --git a/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
index 6a8d8c78d1039..a1761382e4b00 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: alertingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
index 234ec782eb1bf..c11f81be9c55f 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: lokistacks.loki.grafana.com
spec:
conversion:
@@ -164,6 +164,127 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for all tenants.
+ properties:
+ indexedResourceAttributes:
+ description: |-
+ IndexedResourceAttributes contains the global configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ type: string
+ type: array
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
@@ -307,6 +428,120 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for a single tenants.
+ properties:
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
diff --git a/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
index 8018b4a152072..91df47a6c6802 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: recordingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml b/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
index 71b690e14a632..594a2d724d991 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.14.0
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.6.1
+ app.kubernetes.io/instance: loki-operator-v0.6.2
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.6.1
+ app.kubernetes.io/version: 0.6.2
name: rulerconfigs.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
index f1fc360f9d4ea..7081facf6e91e 100644
--- a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: quay.io/openshift-logging/loki-operator:0.1.0
- createdAt: "2024-07-05T08:22:23Z"
+ createdAt: "2024-09-09T09:17:00Z"
description: |
The Loki Operator for OCP provides a means for configuring and managing a Loki stack for cluster logging.
## Prerequisites and Requirements
@@ -374,6 +374,66 @@ spec:
path: limits.global.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: IndexedResourceAttributes contains the global configuration for
+ resource attributes to store them as index labels or structured metadata
+ or drop them altogether.
+ displayName: Indexed Resource Attributes
+ path: limits.global.otlp.indexedResourceAttributes
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.global.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.global.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.global.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.global.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.global.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.global.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.global.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.scopeAttributes[0].regex
- description: CardinalityLimit defines the cardinality limit for index queries.
displayName: Cardinality Limit
path: limits.global.queries.cardinalityLimit
@@ -470,6 +530,61 @@ spec:
path: limits.tenants.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.tenants.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.tenants.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.tenants.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.tenants.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.tenants.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.scopeAttributes[0].regex
- description: Blocked defines the list of rules to block matching queries.
displayName: Blocked
path: limits.tenants.queries.blocked
@@ -1717,7 +1832,7 @@ spec:
- /manager
env:
- name: RELATED_IMAGE_LOKI
- value: quay.io/openshift-logging/loki:v3.1.0
+ value: quay.io/openshift-logging/loki:v3.1.1
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
@@ -1841,7 +1956,7 @@ spec:
provider:
name: Red Hat
relatedImages:
- - image: quay.io/openshift-logging/loki:v3.1.0
+ - image: quay.io/openshift-logging/loki:v3.1.1
name: loki
- image: quay.io/observatorium/api:latest
name: gateway
diff --git a/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
index 4ab2f8aaba2ad..2084694a6db8f 100644
--- a/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
@@ -164,6 +164,127 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for all tenants.
+ properties:
+ indexedResourceAttributes:
+ description: |-
+ IndexedResourceAttributes contains the global configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ type: string
+ type: array
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
@@ -307,6 +428,120 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for a single tenants.
+ properties:
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
diff --git a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
index 2429338bd3a60..0180fe98e7ae4 100644
--- a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
+++ b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
@@ -146,6 +146,127 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for all tenants.
+ properties:
+ indexedResourceAttributes:
+ description: |-
+ IndexedResourceAttributes contains the global configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ type: string
+ type: array
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
@@ -289,6 +410,120 @@ spec:
format: int32
type: integer
type: object
+ otlp:
+ description: |-
+ OTLP to configure which resource, scope and log attributes
+ to store as labels or structured metadata or drop them altogether
+ for a single tenants.
+ properties:
+ logAttributes:
+ description: |-
+ LogAttributes contains the configuration for log attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ resourceAttributes:
+ description: |-
+ ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ attributes:
+ description: |-
+ Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected resoure attributes. They
+ can be either indexed as labels, added to structured metadata or drop altogether.
+ enum:
+ - index_label
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: |-
+ Attributes is the list of attributes to configure indexing or drop them
+ altogether.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ type: object
+ type: array
+ ignoreDefaults:
+ description: |-
+ IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+ indexed as labels.
+
+
+ If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+ type: boolean
+ type: object
+ scopeAttributes:
+ description: |-
+ ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ items:
+ description: |-
+ OTLPAttributesSpec contains the configuration for a set of attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ properties:
+ action:
+ description: |-
+ Action defines the indexing action for the selected attributes. They
+ can be either added to structured metadata or drop altogether.
+ enum:
+ - structured_metadata
+ - drop
+ type: string
+ attributes:
+ description: Attributes allows choosing the attributes
+ by listing their names.
+ items:
+ type: string
+ type: array
+ regex:
+ description: Regex allows choosing the attributes
+ by matching a regular expression.
+ type: string
+ required:
+ - action
+ type: object
+ type: array
+ type: object
queries:
description: QueryLimits defines the limit applied on querying
log streams.
diff --git a/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
index b655b250aea30..04021971986e5 100644
--- a/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
@@ -6,7 +6,7 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:0.6.1
+ containerImage: docker.io/grafana/loki-operator:0.6.2
createdAt: "2022-12-22T13:28:40+00:00"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
@@ -274,6 +274,66 @@ spec:
path: limits.global.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: IndexedResourceAttributes contains the global configuration for
+ resource attributes to store them as index labels or structured metadata
+ or drop them altogether.
+ displayName: Indexed Resource Attributes
+ path: limits.global.otlp.indexedResourceAttributes
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.global.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.global.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.global.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.global.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.global.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.global.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.global.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.scopeAttributes[0].regex
- description: CardinalityLimit defines the cardinality limit for index queries.
displayName: Cardinality Limit
path: limits.global.queries.cardinalityLimit
@@ -370,6 +430,61 @@ spec:
path: limits.tenants.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.tenants.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.tenants.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.tenants.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.tenants.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.tenants.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.scopeAttributes[0].regex
- description: Blocked defines the list of rules to block matching queries.
displayName: Blocked
path: limits.tenants.queries.blocked
@@ -2277,5 +2392,5 @@ spec:
minKubeVersion: 1.21.1
provider:
name: Grafana Loki SIG Operator
- replaces: loki-operator.v0.6.0
+ replaces: loki-operator.v0.6.1
version: 0.0.0
diff --git a/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
index 7d12fc8ddaad8..e1a74389c40f7 100644
--- a/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
@@ -6,7 +6,7 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:0.6.1
+ containerImage: docker.io/grafana/loki-operator:0.6.2
createdAt: "2022-12-22T13:28:40+00:00"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
@@ -267,6 +267,66 @@ spec:
path: limits.global.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: IndexedResourceAttributes contains the global configuration for
+ resource attributes to store them as index labels or structured metadata
+ or drop them altogether.
+ displayName: Indexed Resource Attributes
+ path: limits.global.otlp.indexedResourceAttributes
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.global.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.global.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.global.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.global.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.global.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.global.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.global.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.scopeAttributes[0].regex
- description: CardinalityLimit defines the cardinality limit for index queries.
displayName: Cardinality Limit
path: limits.global.queries.cardinalityLimit
@@ -363,6 +423,61 @@ spec:
path: limits.tenants.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.tenants.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.tenants.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.tenants.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.tenants.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.tenants.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.scopeAttributes[0].regex
- description: Blocked defines the list of rules to block matching queries.
displayName: Blocked
path: limits.tenants.queries.blocked
@@ -2257,5 +2372,5 @@ spec:
minKubeVersion: 1.21.1
provider:
name: Grafana Loki SIG Operator
- replaces: loki-operator.v0.6.0
+ replaces: loki-operator.v0.6.1
version: 0.0.0
diff --git a/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml
index d55686c3addc3..3d25c9f4c2441 100644
--- a/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml
@@ -286,6 +286,66 @@ spec:
path: limits.global.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: IndexedResourceAttributes contains the global configuration for
+ resource attributes to store them as index labels or structured metadata
+ or drop them altogether.
+ displayName: Indexed Resource Attributes
+ path: limits.global.otlp.indexedResourceAttributes
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.global.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.global.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.global.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.global.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.global.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.global.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.global.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.global.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.global.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.global.otlp.scopeAttributes[0].regex
- description: CardinalityLimit defines the cardinality limit for index queries.
displayName: Cardinality Limit
path: limits.global.queries.cardinalityLimit
@@ -382,6 +442,61 @@ spec:
path: limits.tenants.ingestion.perStreamRateLimitBurst
x-descriptors:
- urn:alm:descriptor:com.tectonic.ui:number
+ - description: LogAttributes contains the configuration for log attributes to
+ store them as index labels or structured metadata or drop them altogether.
+ displayName: Log Attributes
+ path: limits.tenants.otlp.logAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.logAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.logAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.logAttributes[0].regex
+ - description: ResourceAttributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Resource Attributes
+ path: limits.tenants.otlp.resourceAttributes
+ - description: Attributes contains the configuration for resource attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Attributes
+ path: limits.tenants.otlp.resourceAttributes.attributes
+ - description: Action defines the indexing action for the selected resoure attributes.
+ They can be either indexed as labels, added to structured metadata or drop
+ altogether.
+ displayName: Action
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].action
+ - description: Attributes is the list of attributes to configure indexing or
+ drop them altogether.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.resourceAttributes.attributes[0].regex
+ - description: "IgnoreDefaults controls whether to ignore the global configuration
+ for resource attributes indexed as labels. \n If IgnoreDefaults is true,
+ then this spec needs to contain at least one mapping to a index label."
+ displayName: Ignore Global Defaults
+ path: limits.tenants.otlp.resourceAttributes.ignoreDefaults
+ x-descriptors:
+ - urn:alm:descriptor:com.tectonic.ui:booleanSwitch
+ - description: ScopeAttributes contains the configuration for scope attributes
+ to store them as index labels or structured metadata or drop them altogether.
+ displayName: Scope Attributes
+ path: limits.tenants.otlp.scopeAttributes
+ - description: Action defines the indexing action for the selected attributes.
+ They can be either added to structured metadata or drop altogether.
+ displayName: Action
+ path: limits.tenants.otlp.scopeAttributes[0].action
+ - description: Attributes allows choosing the attributes by listing their names.
+ displayName: Attribute Names
+ path: limits.tenants.otlp.scopeAttributes[0].attributes
+ - description: Regex allows choosing the attributes by matching a regular expression.
+ displayName: Regular Expression
+ path: limits.tenants.otlp.scopeAttributes[0].regex
- description: Blocked defines the list of rules to block matching queries.
displayName: Blocked
path: limits.tenants.queries.blocked
diff --git a/operator/config/overlays/community-openshift/kustomization.yaml b/operator/config/overlays/community-openshift/kustomization.yaml
index bf76d87727c81..af5e40aac80b8 100644
--- a/operator/config/overlays/community-openshift/kustomization.yaml
+++ b/operator/config/overlays/community-openshift/kustomization.yaml
@@ -11,8 +11,8 @@ labels:
app.kubernetes.io/managed-by: operator-lifecycle-manager
includeSelectors: true
- pairs:
- app.kubernetes.io/instance: loki-operator-v0.6.1
- app.kubernetes.io/version: "0.6.1"
+ app.kubernetes.io/instance: loki-operator-v0.6.2
+ app.kubernetes.io/version: "0.6.2"
configMapGenerator:
- files:
@@ -27,4 +27,4 @@ patchesStrategicMerge:
images:
- name: controller
newName: docker.io/grafana/loki-operator
- newTag: 0.6.1
+ newTag: 0.6.2
diff --git a/operator/config/overlays/community/kustomization.yaml b/operator/config/overlays/community/kustomization.yaml
index c6db762d38a57..ed910555da2c5 100644
--- a/operator/config/overlays/community/kustomization.yaml
+++ b/operator/config/overlays/community/kustomization.yaml
@@ -22,8 +22,8 @@ labels:
app.kubernetes.io/managed-by: operator-lifecycle-manager
includeSelectors: true
- pairs:
- app.kubernetes.io/instance: loki-operator-v0.6.1
- app.kubernetes.io/version: "0.6.1"
+ app.kubernetes.io/instance: loki-operator-v0.6.2
+ app.kubernetes.io/version: "0.6.2"
generatorOptions:
disableNameSuffixHash: true
@@ -43,7 +43,7 @@ patchesStrategicMerge:
images:
- name: controller
newName: docker.io/grafana/loki-operator
- newTag: 0.6.1
+ newTag: 0.6.2
# the following config is for teaching kustomize how to do var substitution
vars:
diff --git a/operator/config/overlays/community/manager_related_image_patch.yaml b/operator/config/overlays/community/manager_related_image_patch.yaml
index 5e454e400cc87..52681b95a4b1c 100644
--- a/operator/config/overlays/community/manager_related_image_patch.yaml
+++ b/operator/config/overlays/community/manager_related_image_patch.yaml
@@ -9,7 +9,7 @@ spec:
- name: manager
env:
- name: RELATED_IMAGE_LOKI
- value: docker.io/grafana/loki:3.1.0
+ value: docker.io/grafana/loki:3.1.1
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
diff --git a/operator/config/overlays/development/manager_related_image_patch.yaml b/operator/config/overlays/development/manager_related_image_patch.yaml
index 3ae62832b1500..21aa79f99a7d7 100644
--- a/operator/config/overlays/development/manager_related_image_patch.yaml
+++ b/operator/config/overlays/development/manager_related_image_patch.yaml
@@ -9,6 +9,6 @@ spec:
- name: manager
env:
- name: RELATED_IMAGE_LOKI
- value: docker.io/grafana/loki:3.1.0
+ value: docker.io/grafana/loki:3.1.1
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
diff --git a/operator/config/overlays/openshift/manager_related_image_patch.yaml b/operator/config/overlays/openshift/manager_related_image_patch.yaml
index 675a31564035f..3bb2b65cf4617 100644
--- a/operator/config/overlays/openshift/manager_related_image_patch.yaml
+++ b/operator/config/overlays/openshift/manager_related_image_patch.yaml
@@ -9,7 +9,7 @@ spec:
- name: manager
env:
- name: RELATED_IMAGE_LOKI
- value: quay.io/openshift-logging/loki:v3.1.0
+ value: quay.io/openshift-logging/loki:v3.1.1
- name: RELATED_IMAGE_GATEWAY
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
diff --git a/operator/docs/operator/api.md b/operator/docs/operator/api.md
index 3e2cb6a850ac5..ee4ee5c5a62b7 100644
--- a/operator/docs/operator/api.md
+++ b/operator/docs/operator/api.md
@@ -1146,6 +1146,51 @@ a secret. This mode is only supported for certain object storage types in certai
+## GlobalOTLPSpec { #loki-grafana-com-v1-GlobalOTLPSpec }
+
+(Appears on: LimitsTemplateSpec )
+
+
+
GlobalOTLPSpec defines which resource, scope and log attributes to
+be stored as index or structured metadata or drop altogether for all
+tenants.
+
+
+
+
+Field
+Description
+
+
+
+
+
+indexedResourceAttributes
+
+[]string
+
+
+
+(Optional)
+IndexedResourceAttributes contains the global configuration for resource attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+OTLPSpec
+
+
+OTLPSpec
+
+
+
+
+
+
+
+
+
## HashRingSpec { #loki-grafana-com-v1-HashRingSpec }
(Appears on: LokiStackSpec )
@@ -1473,6 +1518,22 @@ QueryLimitSpec
+otlp
+
+
+GlobalOTLPSpec
+
+
+
+
+(Optional)
+OTLP to configure which resource, scope and log attributes
+to store as labels or structured metadata or drop them altogether
+for all tenants.
+
+
+
+
retention
@@ -2601,6 +2662,262 @@ string
+## OTLPAttributeAction { #loki-grafana-com-v1-OTLPAttributeAction }
+(string
alias)
+
+(Appears on: OTLPAttributesSpec , OTLPResourceAttributesConfigSpec )
+
+
+
OTLPAttributeAction defines the action to executed when indexing
+OTLP resource attributes. Resource attributes can be either added
+to the index, the chunk structured metadata or entirely dropped.
+
+
+
+
+Value
+Description
+
+
+"drop"
+OTLPAttributeActionDrop removes the matching attributes from the log entry.
+
+"indexLabel"
+OTLPAttributeActionIndexLabel stores a resource attribute as a label, which is part of the index identifying streams.
+
+"structuredMetadata"
+OTLPAttributeActionStructuredMetadata stores an attribute as structured metadata with each log entry.
+
+
+
+
+## OTLPAttributesSpec { #loki-grafana-com-v1-OTLPAttributesSpec }
+
+(Appears on: OTLPSpec )
+
+
+
OTLPAttributesSpec contains the configuration for a set of attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+Field
+Description
+
+
+
+
+
+action
+
+
+OTLPAttributeAction
+
+
+
+
+Action defines the indexing action for the selected attributes. They
+can be either added to structured metadata or drop altogether.
+
+
+
+
+attributes
+
+[]string
+
+
+
+(Optional)
+Attributes allows choosing the attributes by listing their names.
+
+
+
+
+regex
+
+string
+
+
+
+(Optional)
+Regex allows choosing the attributes by matching a regular expression.
+
+
+
+
+
+## OTLPResourceAttributesConfigSpec { #loki-grafana-com-v1-OTLPResourceAttributesConfigSpec }
+
+(Appears on: OTLPResourceAttributesSpec )
+
+
+
OTLPResourceAttributesConfigSpec contains the configuration for a set of resource attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+Field
+Description
+
+
+
+
+
+action
+
+
+OTLPAttributeAction
+
+
+
+
+Action defines the indexing action for the selected resoure attributes. They
+can be either indexed as labels, added to structured metadata or drop altogether.
+
+
+
+
+attributes
+
+[]string
+
+
+
+(Optional)
+Attributes is the list of attributes to configure indexing or drop them
+altogether.
+
+
+
+
+regex
+
+string
+
+
+
+(Optional)
+Regex allows choosing the attributes by matching a regular expression.
+
+
+
+
+
+## OTLPResourceAttributesSpec { #loki-grafana-com-v1-OTLPResourceAttributesSpec }
+
+(Appears on: OTLPSpec )
+
+
+
OTLPResourceAttributesSpec contains the configuration for resource attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+Field
+Description
+
+
+
+
+
+ignoreDefaults
+
+bool
+
+
+
+(Optional)
+IgnoreDefaults controls whether to ignore the global configuration for resource attributes
+indexed as labels.
+If IgnoreDefaults is true, then this spec needs to contain at least one mapping to a index label.
+
+
+
+
+attributes
+
+
+[]OTLPResourceAttributesConfigSpec
+
+
+
+
+(Optional)
+Attributes contains the configuration for resource attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+
+## OTLPSpec { #loki-grafana-com-v1-OTLPSpec }
+
+(Appears on: GlobalOTLPSpec , PerTenantLimitsTemplateSpec )
+
+
+
OTLPSpec defines which resource, scope and log attributes to
+be stored as index or structured metadata or drop altogether
+
+
+
+
+Field
+Description
+
+
+
+
+
+resourceAttributes
+
+
+OTLPResourceAttributesSpec
+
+
+
+
+(Optional)
+ResourceAttributes contains the configuration for resource attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+scopeAttributes
+
+
+[]OTLPAttributesSpec
+
+
+
+
+(Optional)
+ScopeAttributes contains the configuration for scope attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+logAttributes
+
+
+[]OTLPAttributesSpec
+
+
+
+
+(Optional)
+LogAttributes contains the configuration for log attributes
+to store them as index labels or structured metadata or drop them altogether.
+
+
+
+
+
## ObjectStorageSchema { #loki-grafana-com-v1-ObjectStorageSchema }
(Appears on: LokiStackStorageStatus , ObjectStorageSpec )
@@ -2953,6 +3270,22 @@ PerTenantQueryLimitSpec
+otlp
+
+
+OTLPSpec
+
+
+
+
+(Optional)
+OTLP to configure which resource, scope and log attributes
+to store as labels or structured metadata or drop them altogether
+for a single tenants.
+
+
+
+
retention
diff --git a/operator/docs/operator/compatibility.md b/operator/docs/operator/compatibility.md
index 5887c7a68cfa4..382e7e6a7673f 100644
--- a/operator/docs/operator/compatibility.md
+++ b/operator/docs/operator/compatibility.md
@@ -27,17 +27,5 @@ Due to the use of apiextensions.k8s.io/v1 CustomResourceDefinitions, requires Ku
The versions of Loki compatible to be run with the Loki Operator are:
-* v2.7.1
-* v2.7.2
-* v2.7.3
-* v2.7.4
-* v2.8.0
-* v2.8.3
-* v2.9.0
-* v2.9.1
-* v2.9.2
-* v2.9.3
-* v2.9.4
-* v2.9.6
-* v2.9.8
* v3.1.0
+* v3.1.1
diff --git a/operator/go.mod b/operator/go.mod
index 664104d7705cd..2f14eb4b884f8 100644
--- a/operator/go.mod
+++ b/operator/go.mod
@@ -25,7 +25,7 @@ require (
k8s.io/apiserver v0.28.7
k8s.io/client-go v0.28.7
k8s.io/component-base v0.28.7
- k8s.io/utils v0.0.0-20240102154912-e7106e64919e
+ k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3
sigs.k8s.io/controller-runtime v0.16.5
sigs.k8s.io/yaml v1.4.0
)
diff --git a/operator/go.sum b/operator/go.sum
index 65901a948caa1..7d51e72970523 100644
--- a/operator/go.sum
+++ b/operator/go.sum
@@ -1966,8 +1966,8 @@ k8s.io/klog/v2 v2.110.1 h1:U/Af64HJf7FcwMcXyKm2RPM22WZzyR7OSpYj5tg3cL0=
k8s.io/klog/v2 v2.110.1/go.mod h1:YGtd1984u+GgbuZ7e08/yBuAfKLSO0+uR1Fhi6ExXjo=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 h1:LyMgNKD2P8Wn1iAwQU5OhxCKlKJy0sHc+PcDwFB24dQ=
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9/go.mod h1:wZK2AVp1uHCp4VamDVgBP2COHZjqD1T68Rf0CM3YjSM=
-k8s.io/utils v0.0.0-20240102154912-e7106e64919e h1:eQ/4ljkx21sObifjzXwlPKpdGLrCfRziVtos3ofG/sQ=
-k8s.io/utils v0.0.0-20240102154912-e7106e64919e/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3 h1:b2FmK8YH+QEwq/Sy2uAEhmqL5nPfGYbJOcaqjeYYZoA=
+k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
lukechampine.com/uint128 v1.1.1/go.mod h1:c4eWIwlEGaxC/+H1VguhU4PHXNWDCDMUlWdIWl2j1gk=
lukechampine.com/uint128 v1.2.0/go.mod h1:c4eWIwlEGaxC/+H1VguhU4PHXNWDCDMUlWdIWl2j1gk=
modernc.org/cc/v3 v3.36.0/go.mod h1:NFUHyPn4ekoC/JHeZFfZurN6ixxawE1BnVonP/oahEI=
diff --git a/operator/hack/addons_dev.yaml b/operator/hack/addons_dev.yaml
index feb9781d269d4..320743bc37325 100644
--- a/operator/hack/addons_dev.yaml
+++ b/operator/hack/addons_dev.yaml
@@ -29,7 +29,7 @@ spec:
spec:
containers:
- name: logcli
- image: docker.io/grafana/logcli:3.1.0-amd64
+ image: docker.io/grafana/logcli:3.1.1-amd64
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -73,7 +73,7 @@ spec:
spec:
containers:
- name: promtail
- image: docker.io/grafana/promtail:3.1.0
+ image: docker.io/grafana/promtail:3.1.1
args:
- -config.file=/etc/promtail/promtail.yaml
- -log.level=info
diff --git a/operator/hack/addons_ocp.yaml b/operator/hack/addons_ocp.yaml
index 59b942c9a4050..6e49cbd847c8a 100644
--- a/operator/hack/addons_ocp.yaml
+++ b/operator/hack/addons_ocp.yaml
@@ -29,7 +29,7 @@ spec:
spec:
containers:
- name: logcli
- image: docker.io/grafana/logcli:3.1.0-amd64
+ image: docker.io/grafana/logcli:3.1.1-amd64
imagePullPolicy: IfNotPresent
command:
- /bin/sh
@@ -70,7 +70,7 @@ spec:
spec:
containers:
- name: promtail
- image: docker.io/grafana/promtail:3.1.0
+ image: docker.io/grafana/promtail:3.1.1
args:
- -config.file=/etc/promtail/promtail.yaml
- -log.level=info
diff --git a/operator/internal/manifests/internal/config/build_test.go b/operator/internal/manifests/internal/config/build_test.go
index 5e119e6661d5e..016e42253f21b 100644
--- a/operator/internal/manifests/internal/config/build_test.go
+++ b/operator/internal/manifests/internal/config/build_test.go
@@ -6012,3 +6012,587 @@ overrides:
require.YAMLEq(t, expCfg, string(cfg))
require.YAMLEq(t, expRCfg, string(rCfg))
}
+
+func TestBuild_ConfigAndRuntimeConfig_OTLPConfigGenerated(t *testing.T) {
+ expCfg := `
+---
+auth_enabled: true
+chunk_store_config:
+ chunk_cache_config:
+ embedded_cache:
+ enabled: true
+ max_size_mb: 500
+common:
+ storage:
+ s3:
+ endpoint: http://test.default.svc.cluster.local.:9000
+ bucketnames: loki
+ region: us-east
+ access_key_id: ${AWS_ACCESS_KEY_ID}
+ secret_access_key: ${AWS_ACCESS_KEY_SECRET}
+ s3forcepathstyle: true
+ compactor_grpc_address: loki-compactor-grpc-lokistack-dev.default.svc.cluster.local:9095
+ ring:
+ kvstore:
+ store: memberlist
+ heartbeat_period: 5s
+ heartbeat_timeout: 1m
+ instance_port: 9095
+compactor:
+ compaction_interval: 2h
+ working_directory: /tmp/loki/compactor
+distributor:
+ otlp_config:
+ default_resource_attributes_as_index_labels:
+ - foo.bar
+ - bar.baz
+frontend:
+ tail_proxy_url: http://loki-querier-http-lokistack-dev.default.svc.cluster.local:3100
+ compress_responses: true
+ max_outstanding_per_tenant: 4096
+ log_queries_longer_than: 5s
+frontend_worker:
+ frontend_address: loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local:9095
+ grpc_client_config:
+ max_send_msg_size: 104857600
+ingester:
+ chunk_block_size: 262144
+ chunk_encoding: snappy
+ chunk_idle_period: 1h
+ chunk_retain_period: 5m
+ chunk_target_size: 2097152
+ flush_op_timeout: 10m
+ lifecycler:
+ final_sleep: 0s
+ join_after: 30s
+ num_tokens: 512
+ ring:
+ replication_factor: 1
+ max_chunk_age: 2h
+ wal:
+ enabled: true
+ dir: /tmp/wal
+ replay_memory_ceiling: 2147483648
+ingester_client:
+ grpc_client_config:
+ max_recv_msg_size: 67108864
+ remote_timeout: 1s
+# NOTE: Keep the order of keys as in Loki docs
+# to enable easy diffs when vendoring newer
+# Loki releases.
+# (See https://grafana.com/docs/loki/latest/configuration/#limits_config)
+#
+# Values for not exposed fields are taken from the grafana/loki production
+# configuration manifests.
+# (See https://github.com/grafana/loki/blob/main/production/ksonnet/loki/config.libsonnet)
+limits_config:
+ ingestion_rate_strategy: global
+ ingestion_rate_mb: 4
+ ingestion_burst_size_mb: 6
+ max_label_name_length: 1024
+ max_label_value_length: 2048
+ max_label_names_per_series: 30
+ reject_old_samples: true
+ reject_old_samples_max_age: 168h
+ creation_grace_period: 10m
+ # Keep max_streams_per_user always to 0 to default
+ # using max_global_streams_per_user always.
+ # (See https://github.com/grafana/loki/blob/main/pkg/ingester/limiter.go#L73)
+ max_streams_per_user: 0
+ max_line_size: 256000
+ max_entries_limit_per_query: 5000
+ max_global_streams_per_user: 0
+ max_chunks_per_query: 2000000
+ max_query_length: 721h
+ max_query_parallelism: 32
+ tsdb_max_query_parallelism: 512
+ max_query_series: 500
+ cardinality_limit: 100000
+ max_streams_matchers_per_query: 1000
+ max_cache_freshness_per_query: 10m
+ split_queries_by_interval: 30m
+ query_timeout: 1m
+ volume_enabled: true
+ volume_max_series: 1000
+ per_stream_rate_limit: 5MB
+ per_stream_rate_limit_burst: 15MB
+ shard_streams:
+ enabled: true
+ desired_rate: 3MB
+ allow_structured_metadata: true
+ otlp_config:
+ resource_attributes:
+ ignore_defaults: true
+ attributes_config:
+ - action: index_label
+ attributes:
+ - res.foo.bar
+ - res.bar.baz
+ regex: .*
+ - action: structured_metadata
+ attributes:
+ - res.service.env
+ regex: .*
+ scope_attributes:
+ - action: index_label
+ attributes:
+ - scope.foo.bar
+ - scope.bar.baz
+ regex: .*
+ - action: structured_metadata
+ attributes:
+ - scope.service.env
+ regex: .*
+ log_attributes:
+ - action: index_label
+ attributes:
+ - log.foo.bar
+ - log.bar.baz
+ regex: .*
+ - action: structured_metadata
+ attributes:
+ - log.service.env
+ regex: .*
+memberlist:
+ abort_if_cluster_join_fails: true
+ advertise_port: 7946
+ bind_port: 7946
+ join_members:
+ - loki-gossip-ring-lokistack-dev.default.svc.cluster.local:7946
+ max_join_backoff: 1m
+ max_join_retries: 10
+ min_join_backoff: 1s
+querier:
+ engine:
+ max_look_back_period: 30s
+ extra_query_delay: 0s
+ max_concurrent: 2
+ query_ingesters_within: 3h
+ tail_max_duration: 1h
+query_range:
+ align_queries_with_step: true
+ cache_results: true
+ max_retries: 5
+ results_cache:
+ cache:
+ embedded_cache:
+ enabled: true
+ max_size_mb: 500
+ parallelise_shardable_queries: true
+schema_config:
+ configs:
+ - from: "2024-01-01"
+ index:
+ period: 24h
+ prefix: index_
+ object_store: s3
+ schema: v13
+ store: tsdb
+ruler:
+ enable_api: true
+ enable_sharding: true
+ evaluation_interval: 1m
+ poll_interval: 1m
+ external_url: http://alert.me/now
+ external_labels:
+ key1: val1
+ key2: val2
+ alertmanager_url: http://alerthost1,http://alerthost2
+ enable_alertmanager_v2: true
+ enable_alertmanager_discovery: true
+ alertmanager_refresh_interval: 1m
+ notification_queue_capacity: 1000
+ notification_timeout: 1m
+ alertmanager_client:
+ tls_cert_path: "custom/path"
+ tls_key_path: "custom/key"
+ tls_ca_path: "custom/CA"
+ tls_server_name: "custom-servername"
+ tls_insecure_skip_verify: false
+ basic_auth_password: "pass"
+ basic_auth_username: "user"
+ credentials: "creds"
+ credentials_file: "cred/file"
+ type: "auth"
+ for_outage_tolerance: 10m
+ for_grace_period: 5m
+ resend_delay: 2m
+ remote_write:
+ enabled: true
+ config_refresh_period: 1m
+ client:
+ name: remote-write-me
+ url: http://remote.write.me
+ remote_timeout: 10s
+ proxy_url: http://proxy.through.me
+ follow_redirects: true
+ headers:
+ more: foryou
+ less: forme
+ authorization:
+ type: bearer
+ credentials: supersecret
+ queue_config:
+ capacity: 1000
+ max_shards: 100
+ min_shards: 50
+ max_samples_per_send: 1000
+ batch_send_deadline: 10s
+ min_backoff: 30ms
+ max_backoff: 100ms
+ wal:
+ dir: /tmp/wal
+ truncate_frequency: 60m
+ min_age: 5m
+ max_age: 4h
+ rule_path: /tmp/loki
+ storage:
+ type: local
+ local:
+ directory: /tmp/rules
+ ring:
+ kvstore:
+ store: memberlist
+server:
+ graceful_shutdown_timeout: 5s
+ grpc_server_min_time_between_pings: '10s'
+ grpc_server_ping_without_stream_allowed: true
+ grpc_server_max_concurrent_streams: 1000
+ grpc_server_max_recv_msg_size: 104857600
+ grpc_server_max_send_msg_size: 104857600
+ http_listen_port: 3100
+ http_server_idle_timeout: 30s
+ http_server_read_timeout: 30s
+ http_server_write_timeout: 10m0s
+ log_level: info
+storage_config:
+ tsdb_shipper:
+ active_index_directory: /tmp/loki/tsdb-index
+ cache_location: /tmp/loki/tsdb-cache
+ cache_ttl: 24h
+ resync_interval: 5m
+ index_gateway_client:
+ server_address: dns:///loki-index-gateway-grpc-lokistack-dev.default.svc.cluster.local:9095
+tracing:
+ enabled: false
+analytics:
+ reporting_enabled: true
+`
+ expRCfg := `
+---
+overrides:
+ test-a:
+ otlp_config:
+ resource_attributes:
+ ignore_defaults: true
+ attributes_config:
+ - action: index_label
+ attributes:
+ - res.foo.bar
+ - res.bar.baz
+ regex: .*
+ - action: structured_metadata
+ attributes:
+ - res.service.env
+ regex: .*
+ scope_attributes:
+ - action: index_label
+ attributes:
+ - scope.foo.bar
+ - scope.bar.baz
+ regex: .*
+ - action: structured_metadata
+ attributes:
+ - scope.service.env
+ regex: .*
+ log_attributes:
+ - action: index_label
+ attributes:
+ - log.foo.bar
+ - log.bar.baz
+ regex: .*
+ - action: structured_metadata
+ attributes:
+ - log.service.env
+ regex: .*
+`
+ opts := Options{
+ Stack: lokiv1.LokiStackSpec{
+ Replication: &lokiv1.ReplicationSpec{
+ Factor: 1,
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Global: &lokiv1.LimitsTemplateSpec{
+ IngestionLimits: &lokiv1.IngestionLimitSpec{
+ IngestionRate: 4,
+ IngestionBurstSize: 6,
+ MaxLabelNameLength: 1024,
+ MaxLabelValueLength: 2048,
+ MaxLabelNamesPerSeries: 30,
+ MaxGlobalStreamsPerTenant: 0,
+ MaxLineSize: 256000,
+ PerStreamRateLimit: 5,
+ PerStreamRateLimitBurst: 15,
+ PerStreamDesiredRate: 3,
+ },
+ OTLP: &lokiv1.GlobalOTLPSpec{
+ IndexedResourceAttributes: []string{
+ "foo.bar",
+ "bar.baz",
+ },
+ OTLPSpec: lokiv1.OTLPSpec{
+ ResourceAttributes: &lokiv1.OTLPResourceAttributesSpec{
+ IgnoreDefaults: true,
+ Attributes: []lokiv1.OTLPResourceAttributesConfigSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ Attributes: []string{
+ "res.foo.bar",
+ "res.bar.baz",
+ },
+ Regex: ".*",
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{
+ "res.service.env",
+ },
+ Regex: ".*",
+ },
+ },
+ },
+ ScopeAttributes: []lokiv1.OTLPAttributesSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ Attributes: []string{
+ "scope.foo.bar",
+ "scope.bar.baz",
+ },
+ Regex: ".*",
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{
+ "scope.service.env",
+ },
+ Regex: ".*",
+ },
+ },
+ LogAttributes: []lokiv1.OTLPAttributesSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ Attributes: []string{
+ "log.foo.bar",
+ "log.bar.baz",
+ },
+ Regex: ".*",
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{
+ "log.service.env",
+ },
+ Regex: ".*",
+ },
+ },
+ },
+ },
+ QueryLimits: &lokiv1.QueryLimitSpec{
+ MaxEntriesLimitPerQuery: 5000,
+ MaxChunksPerQuery: 2000000,
+ MaxQuerySeries: 500,
+ QueryTimeout: "1m",
+ CardinalityLimit: 100000,
+ MaxVolumeSeries: 1000,
+ },
+ },
+ },
+ },
+ Namespace: "test-ns",
+ Name: "test",
+ Compactor: Address{
+ FQDN: "loki-compactor-grpc-lokistack-dev.default.svc.cluster.local",
+ Port: 9095,
+ },
+ FrontendWorker: Address{
+ FQDN: "loki-query-frontend-grpc-lokistack-dev.default.svc.cluster.local",
+ Port: 9095,
+ },
+ GossipRing: GossipRing{
+ InstancePort: 9095,
+ BindPort: 7946,
+ MembersDiscoveryAddr: "loki-gossip-ring-lokistack-dev.default.svc.cluster.local",
+ },
+ Querier: Address{
+ Protocol: "http",
+ FQDN: "loki-querier-http-lokistack-dev.default.svc.cluster.local",
+ Port: 3100,
+ },
+ IndexGateway: Address{
+ FQDN: "loki-index-gateway-grpc-lokistack-dev.default.svc.cluster.local",
+ Port: 9095,
+ },
+ Ruler: Ruler{
+ Enabled: true,
+ RulesStorageDirectory: "/tmp/rules",
+ EvaluationInterval: "1m",
+ PollInterval: "1m",
+ AlertManager: &AlertManagerConfig{
+ Notifier: &NotifierConfig{
+ TLS: TLSConfig{
+ ServerName: ptr.To("custom-servername"),
+ CertPath: ptr.To("custom/path"),
+ KeyPath: ptr.To("custom/key"),
+ CAPath: ptr.To("custom/CA"),
+ InsecureSkipVerify: ptr.To(false),
+ },
+ BasicAuth: BasicAuth{
+ Username: ptr.To("user"),
+ Password: ptr.To("pass"),
+ },
+ HeaderAuth: HeaderAuth{
+ CredentialsFile: ptr.To("cred/file"),
+ Type: ptr.To("auth"),
+ Credentials: ptr.To("creds"),
+ },
+ },
+ ExternalURL: "http://alert.me/now",
+ ExternalLabels: map[string]string{
+ "key1": "val1",
+ "key2": "val2",
+ },
+ Hosts: "http://alerthost1,http://alerthost2",
+ EnableV2: true,
+ EnableDiscovery: true,
+ RefreshInterval: "1m",
+ QueueCapacity: 1000,
+ Timeout: "1m",
+ ForOutageTolerance: "10m",
+ ForGracePeriod: "5m",
+ ResendDelay: "2m",
+ },
+ RemoteWrite: &RemoteWriteConfig{
+ Enabled: true,
+ RefreshPeriod: "1m",
+ Client: &RemoteWriteClientConfig{
+ Name: "remote-write-me",
+ URL: "http://remote.write.me",
+ RemoteTimeout: "10s",
+ Headers: map[string]string{
+ "more": "foryou",
+ "less": "forme",
+ },
+ ProxyURL: "http://proxy.through.me",
+ FollowRedirects: true,
+ BearerToken: "supersecret",
+ },
+ Queue: &RemoteWriteQueueConfig{
+ Capacity: 1000,
+ MaxShards: 100,
+ MinShards: 50,
+ MaxSamplesPerSend: 1000,
+ BatchSendDeadline: "10s",
+ MinBackOffPeriod: "30ms",
+ MaxBackOffPeriod: "100ms",
+ },
+ },
+ },
+ StorageDirectory: "/tmp/loki",
+ MaxConcurrent: MaxConcurrent{
+ AvailableQuerierCPUCores: 2,
+ },
+ WriteAheadLog: WriteAheadLog{
+ Directory: "/tmp/wal",
+ IngesterMemoryRequest: 4 * 1024 * 1024 * 1024,
+ },
+ ObjectStorage: storage.Options{
+ SharedStore: lokiv1.ObjectStorageSecretS3,
+ S3: &storage.S3StorageConfig{
+ Endpoint: "http://test.default.svc.cluster.local.:9000",
+ Region: "us-east",
+ Buckets: "loki",
+ ForcePathStyle: true,
+ },
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2024-01-01",
+ },
+ },
+ AllowStructuredMetadata: true,
+ },
+ Shippers: []string{"tsdb"},
+ EnableRemoteReporting: true,
+ HTTPTimeouts: HTTPTimeoutConfig{
+ IdleTimeout: 30 * time.Second,
+ ReadTimeout: 30 * time.Second,
+ WriteTimeout: 10 * time.Minute,
+ },
+ Overrides: map[string]LokiOverrides{
+ "test-a": {
+ Limits: lokiv1.PerTenantLimitsTemplateSpec{
+ OTLP: &lokiv1.OTLPSpec{
+ ResourceAttributes: &lokiv1.OTLPResourceAttributesSpec{
+ IgnoreDefaults: true,
+ Attributes: []lokiv1.OTLPResourceAttributesConfigSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ Attributes: []string{
+ "res.foo.bar",
+ "res.bar.baz",
+ },
+ Regex: ".*",
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{
+ "res.service.env",
+ },
+ Regex: ".*",
+ },
+ },
+ },
+ ScopeAttributes: []lokiv1.OTLPAttributesSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ Attributes: []string{
+ "scope.foo.bar",
+ "scope.bar.baz",
+ },
+ Regex: ".*",
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{
+ "scope.service.env",
+ },
+ Regex: ".*",
+ },
+ },
+ LogAttributes: []lokiv1.OTLPAttributesSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ Attributes: []string{
+ "log.foo.bar",
+ "log.bar.baz",
+ },
+ Regex: ".*",
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{
+ "log.service.env",
+ },
+ Regex: ".*",
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+ cfg, rCfg, err := Build(opts)
+ require.NoError(t, err)
+ require.YAMLEq(t, expCfg, string(cfg))
+ require.YAMLEq(t, expRCfg, string(rCfg))
+}
diff --git a/operator/internal/manifests/internal/config/loki-config.yaml b/operator/internal/manifests/internal/config/loki-config.yaml
index dfdf5c9003258..01ee6fe9b7075 100644
--- a/operator/internal/manifests/internal/config/loki-config.yaml
+++ b/operator/internal/manifests/internal/config/loki-config.yaml
@@ -107,6 +107,16 @@ compactor:
{{- end }}
delete_request_store: {{.ObjectStorage.SharedStore}}
{{- end }}
+{{- if $l := .Stack.Limits.Global }}
+{{- with $l.OTLP }}{{- with .IndexedResourceAttributes }}
+distributor:
+ otlp_config:
+ default_resource_attributes_as_index_labels:
+ {{- range . }}
+ - {{ . }}
+ {{- end }}
+{{- end }}
+{{- end }}{{- end }}
frontend:
tail_proxy_url: {{ .Querier.Protocol }}://{{ .Querier.FQDN }}:{{ .Querier.Port }}
{{- if .Gates.HTTPEncryption }}
@@ -222,6 +232,58 @@ limits_config:
shard_streams:
enabled: true
desired_rate: {{ . }}MB
+{{- end }}
+{{- with .Stack.Limits.Global.OTLP }}
+ otlp_config:
+ {{- with .ResourceAttributes }}
+ resource_attributes:
+ ignore_defaults: {{ .IgnoreDefaults }}
+ {{- with .Attributes}}
+ attributes_config:
+ {{- range . }}
+ - action: {{ .Action.Value }}
+ {{- with .Regex }}
+ regex: {{ . }}
+ {{- end }}
+ {{- with .Attributes }}
+ attributes:
+ {{- range . }}
+ - {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- with .ScopeAttributes }}
+ scope_attributes:
+ {{- range . }}
+ - action: {{ .Action.Value }}
+ {{- with .Regex }}
+ regex: {{ . }}
+ {{- end }}
+ {{- with .Attributes }}
+ attributes:
+ {{- range . }}
+ - {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- with .LogAttributes }}
+ log_attributes:
+ {{- range . }}
+ - action: {{ .Action.Value }}
+ {{- with .Regex }}
+ regex: {{ . }}
+ {{- end }}
+ {{- with .Attributes }}
+ attributes:
+ {{- range . }}
+ - {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
{{- end }}
allow_structured_metadata: {{ .ObjectStorage.AllowStructuredMetadata }}
{{- with .GossipRing }}
diff --git a/operator/internal/manifests/internal/config/loki-runtime-config.yaml b/operator/internal/manifests/internal/config/loki-runtime-config.yaml
index 7d5b5e2421085..21b75508dc7aa 100644
--- a/operator/internal/manifests/internal/config/loki-runtime-config.yaml
+++ b/operator/internal/manifests/internal/config/loki-runtime-config.yaml
@@ -73,6 +73,58 @@ overrides:
{{- end }}
{{- end}}
{{- end -}}
+ {{- if $l := $spec.OTLP }}
+ otlp_config:
+ {{- with $l.ResourceAttributes }}
+ resource_attributes:
+ ignore_defaults: {{ .IgnoreDefaults }}
+ {{- with .Attributes}}
+ attributes_config:
+ {{- range . }}
+ - action: {{ .Action.Value }}
+ {{- with .Regex }}
+ regex: {{ . }}
+ {{- end }}
+ {{- with .Attributes }}
+ attributes:
+ {{- range . }}
+ - {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end}}
+ {{- with $l.ScopeAttributes }}
+ scope_attributes:
+ {{- range . }}
+ - action: {{ .Action.Value }}
+ {{- with .Regex }}
+ regex: {{ . }}
+ {{- end }}
+ {{- with .Attributes }}
+ attributes:
+ {{- range . }}
+ - {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- with $l.LogAttributes }}
+ log_attributes:
+ {{- range . }}
+ - action: {{ .Action.Value }}
+ {{- with .Regex }}
+ regex: {{ . }}
+ {{- end }}
+ {{- with .Attributes }}
+ attributes:
+ {{- range . }}
+ - {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
{{- with $spec.Retention }}
retention_period: {{ .Days }}d
{{- with .Streams }}
diff --git a/operator/internal/manifests/var.go b/operator/internal/manifests/var.go
index b250734dd45f4..1fed3e1422982 100644
--- a/operator/internal/manifests/var.go
+++ b/operator/internal/manifests/var.go
@@ -59,7 +59,7 @@ const (
EnvRelatedImageGateway = "RELATED_IMAGE_GATEWAY"
// DefaultContainerImage declares the default fallback for loki image.
- DefaultContainerImage = "docker.io/grafana/loki:3.1.0"
+ DefaultContainerImage = "docker.io/grafana/loki:3.1.1"
// DefaultLokiStackGatewayImage declares the default image for lokiStack-gateway.
DefaultLokiStackGatewayImage = "quay.io/observatorium/api:latest"
diff --git a/operator/internal/validation/lokistack.go b/operator/internal/validation/lokistack.go
index 3246b7ada106f..e6c84458c0802 100644
--- a/operator/internal/validation/lokistack.go
+++ b/operator/internal/validation/lokistack.go
@@ -78,6 +78,16 @@ func (v *LokiStackValidator) validate(ctx context.Context, obj runtime.Object) (
allErrs = append(allErrs, errors...)
}
+ if stack.Spec.Limits != nil {
+ if stack.Spec.Limits.Global != nil && stack.Spec.Limits.Global.OTLP != nil {
+ allErrs = append(allErrs, v.validateGlobalOTLPSpec(stack.Spec.Limits.Global.OTLP)...)
+ }
+
+ if stack.Spec.Limits.Tenants != nil {
+ allErrs = append(allErrs, v.validatePerTenantOTLPSpec(stack.Spec.Limits.Tenants)...)
+ }
+ }
+
if v.ExtendedValidator != nil {
allErrs = append(allErrs, v.ExtendedValidator(ctx, stack)...)
}
@@ -93,6 +103,100 @@ func (v *LokiStackValidator) validate(ctx context.Context, obj runtime.Object) (
)
}
+func (v *LokiStackValidator) validateGlobalOTLPSpec(s *lokiv1.GlobalOTLPSpec) field.ErrorList {
+ basePath := field.NewPath("spec", "limits", "global")
+
+ return v.validateOTLPSpec(basePath, &s.OTLPSpec)
+}
+
+func (v *LokiStackValidator) validatePerTenantOTLPSpec(tenants map[string]lokiv1.PerTenantLimitsTemplateSpec) field.ErrorList {
+ var allErrs field.ErrorList
+
+ for key, tenant := range tenants {
+ basePath := field.NewPath("spec", "limits", "tenants").Key(key)
+ allErrs = append(allErrs, v.validateOTLPSpec(basePath, tenant.OTLP)...)
+ }
+
+ return allErrs
+}
+
+func (v *LokiStackValidator) validateOTLPSpec(parent *field.Path, s *lokiv1.OTLPSpec) field.ErrorList {
+ var allErrs field.ErrorList
+
+ if s.ResourceAttributes != nil && s.ResourceAttributes.IgnoreDefaults {
+ switch {
+ case len(s.ResourceAttributes.Attributes) == 0:
+ allErrs = append(allErrs,
+ field.Invalid(
+ parent.Child("otlp", "resourceAttributes"),
+ []lokiv1.OTLPAttributesSpec{},
+ lokiv1.ErrOTLPResourceAttributesEmptyNotAllowed.Error(),
+ ),
+ )
+ default:
+ var indexLabelActionFound bool
+ for _, attr := range s.ResourceAttributes.Attributes {
+ if attr.Action == lokiv1.OTLPAttributeActionIndexLabel {
+ indexLabelActionFound = true
+ break
+ }
+ }
+
+ if !indexLabelActionFound {
+ allErrs = append(allErrs,
+ field.Invalid(
+ parent.Child("otlp", "resourceAttributes"),
+ s.ResourceAttributes.Attributes,
+ lokiv1.ErrOTLPResourceAttributesIndexLabelActionMissing.Error(),
+ ),
+ )
+ }
+
+ for idx, attr := range s.ResourceAttributes.Attributes {
+ if len(attr.Attributes) == 0 && attr.Regex == "" {
+ allErrs = append(allErrs,
+ field.Invalid(
+ parent.Child("otlp", "resourceAttributes").Index(idx),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ )
+ }
+ }
+ }
+ }
+
+ if len(s.ScopeAttributes) != 0 {
+ for idx, attr := range s.ScopeAttributes {
+ if len(attr.Attributes) == 0 && attr.Regex == "" {
+ allErrs = append(allErrs,
+ field.Invalid(
+ parent.Child("otlp", "scopeAttributes").Index(idx),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ )
+ }
+ }
+ }
+
+ if len(s.LogAttributes) != 0 {
+ for idx, attr := range s.LogAttributes {
+ if len(attr.Attributes) == 0 && attr.Regex == "" {
+ allErrs = append(allErrs,
+ field.Invalid(
+ parent.Child("otlp", "logAttributes").Index(idx),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ )
+ }
+ }
+ }
+
+ return allErrs
+}
+
func (v *LokiStackValidator) validateHashRingSpec(s lokiv1.LokiStackSpec) field.ErrorList {
if s.HashRing == nil {
return nil
diff --git a/operator/internal/validation/lokistack_test.go b/operator/internal/validation/lokistack_test.go
index e0419cd39565a..06f31507cef9a 100644
--- a/operator/internal/validation/lokistack_test.go
+++ b/operator/internal/validation/lokistack_test.go
@@ -390,6 +390,315 @@ var ltt = []struct {
},
),
},
+ {
+ desc: "enabling global limits OTLP IgnoreDefaults without resource attributes",
+ spec: lokiv1.LokiStack{
+ Spec: lokiv1.LokiStackSpec{
+ Storage: lokiv1.ObjectStorageSpec{
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2020-10-11",
+ },
+ },
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Global: &lokiv1.LimitsTemplateSpec{
+ OTLP: &lokiv1.GlobalOTLPSpec{
+ OTLPSpec: lokiv1.OTLPSpec{
+ ResourceAttributes: &lokiv1.OTLPResourceAttributesSpec{
+ IgnoreDefaults: true,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "LokiStack"},
+ "testing-stack",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "resourceAttributes"),
+ []lokiv1.OTLPAttributesSpec{},
+ lokiv1.ErrOTLPResourceAttributesEmptyNotAllowed.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "enabling global limits OTLP IgnoreDefaults without index label action for resource attributes",
+ spec: lokiv1.LokiStack{
+ Spec: lokiv1.LokiStackSpec{
+ Storage: lokiv1.ObjectStorageSpec{
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2020-10-11",
+ },
+ },
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Global: &lokiv1.LimitsTemplateSpec{
+ OTLP: &lokiv1.GlobalOTLPSpec{
+ OTLPSpec: lokiv1.OTLPSpec{
+ ResourceAttributes: &lokiv1.OTLPResourceAttributesSpec{
+ IgnoreDefaults: true,
+ Attributes: []lokiv1.OTLPResourceAttributesConfigSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{"test"},
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "LokiStack"},
+ "testing-stack",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "resourceAttributes"),
+ []lokiv1.OTLPResourceAttributesConfigSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ Attributes: []string{"test"},
+ },
+ },
+ lokiv1.ErrOTLPResourceAttributesIndexLabelActionMissing.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "enabling global limits OTLP IgnoreDefaults with invalid resource attributes config",
+ spec: lokiv1.LokiStack{
+ Spec: lokiv1.LokiStackSpec{
+ Storage: lokiv1.ObjectStorageSpec{
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2020-10-11",
+ },
+ },
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Global: &lokiv1.LimitsTemplateSpec{
+ OTLP: &lokiv1.GlobalOTLPSpec{
+ OTLPSpec: lokiv1.OTLPSpec{
+ ResourceAttributes: &lokiv1.OTLPResourceAttributesSpec{
+ IgnoreDefaults: true,
+ Attributes: []lokiv1.OTLPResourceAttributesConfigSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "LokiStack"},
+ "testing-stack",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "resourceAttributes"),
+ []lokiv1.OTLPResourceAttributesConfigSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ },
+ },
+ lokiv1.ErrOTLPResourceAttributesIndexLabelActionMissing.Error(),
+ ),
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "resourceAttributes").Index(0),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "enabling global limits OTLP with invalid resource attributes config",
+ spec: lokiv1.LokiStack{
+ Spec: lokiv1.LokiStackSpec{
+ Storage: lokiv1.ObjectStorageSpec{
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2020-10-11",
+ },
+ },
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Global: &lokiv1.LimitsTemplateSpec{
+ OTLP: &lokiv1.GlobalOTLPSpec{
+ OTLPSpec: lokiv1.OTLPSpec{
+ ResourceAttributes: &lokiv1.OTLPResourceAttributesSpec{
+ IgnoreDefaults: true,
+ Attributes: []lokiv1.OTLPResourceAttributesConfigSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "LokiStack"},
+ "testing-stack",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "resourceAttributes").Index(0),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "invalid global OTLP scope attribute specs",
+ spec: lokiv1.LokiStack{
+ Spec: lokiv1.LokiStackSpec{
+ Storage: lokiv1.ObjectStorageSpec{
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2020-10-11",
+ },
+ },
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Global: &lokiv1.LimitsTemplateSpec{
+ OTLP: &lokiv1.GlobalOTLPSpec{
+ OTLPSpec: lokiv1.OTLPSpec{
+ ScopeAttributes: []lokiv1.OTLPAttributesSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "LokiStack"},
+ "testing-stack",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "scopeAttributes").Index(0),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "scopeAttributes").Index(1),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "invalid global OTLP log attribute specs",
+ spec: lokiv1.LokiStack{
+ Spec: lokiv1.LokiStackSpec{
+ Storage: lokiv1.ObjectStorageSpec{
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2020-10-11",
+ },
+ },
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Global: &lokiv1.LimitsTemplateSpec{
+ OTLP: &lokiv1.GlobalOTLPSpec{
+ OTLPSpec: lokiv1.OTLPSpec{
+ LogAttributes: []lokiv1.OTLPAttributesSpec{
+ {
+ Action: lokiv1.OTLPAttributeActionIndexLabel,
+ },
+ {
+ Action: lokiv1.OTLPAttributeActionStructuredMetadata,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "LokiStack"},
+ "testing-stack",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "logAttributes").Index(0),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ field.Invalid(
+ field.NewPath("spec", "limits", "global", "otlp", "logAttributes").Index(1),
+ []string{},
+ lokiv1.ErrOTLPAttributesSpecInvalid.Error(),
+ ),
+ },
+ ),
+ },
+ {
+ desc: "enabling per-tenant limits OTLP IgnoreDefaults without resource attributes",
+ spec: lokiv1.LokiStack{
+ Spec: lokiv1.LokiStackSpec{
+ Storage: lokiv1.ObjectStorageSpec{
+ Schemas: []lokiv1.ObjectStorageSchema{
+ {
+ Version: lokiv1.ObjectStorageSchemaV13,
+ EffectiveDate: "2020-10-11",
+ },
+ },
+ },
+ Limits: &lokiv1.LimitsSpec{
+ Tenants: map[string]lokiv1.PerTenantLimitsTemplateSpec{
+ "tenant-a": {
+ OTLP: &lokiv1.OTLPSpec{
+ ResourceAttributes: &lokiv1.OTLPResourceAttributesSpec{
+ IgnoreDefaults: true,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ err: apierrors.NewInvalid(
+ schema.GroupKind{Group: "loki.grafana.com", Kind: "LokiStack"},
+ "testing-stack",
+ field.ErrorList{
+ field.Invalid(
+ field.NewPath("spec", "limits", "tenants").Key("tenant-a").Child("otlp", "resourceAttributes"),
+ []lokiv1.OTLPAttributesSpec{},
+ lokiv1.ErrOTLPResourceAttributesEmptyNotAllowed.Error(),
+ ),
+ },
+ ),
+ },
}
func TestLokiStackValidationWebhook_ValidateCreate(t *testing.T) {
diff --git a/pkg/bloombuild/builder/batch.go b/pkg/bloombuild/builder/batch.go
index 9961aa23d7c74..d86111d2924a7 100644
--- a/pkg/bloombuild/builder/batch.go
+++ b/pkg/bloombuild/builder/batch.go
@@ -234,7 +234,7 @@ func (i *blockLoadingIter) init() {
// set "match all" filter function if not present
if i.filter == nil {
- i.filter = func(cbq *bloomshipper.CloseableBlockQuerier) bool { return true }
+ i.filter = func(_ *bloomshipper.CloseableBlockQuerier) bool { return true }
}
// done
diff --git a/pkg/bloombuild/builder/batch_test.go b/pkg/bloombuild/builder/batch_test.go
index 37109d0196af6..e0fe37a0e448f 100644
--- a/pkg/bloombuild/builder/batch_test.go
+++ b/pkg/bloombuild/builder/batch_test.go
@@ -16,7 +16,7 @@ import (
func TestBatchedLoader(t *testing.T) {
t.Parallel()
- errMapper := func(i int) (int, error) {
+ errMapper := func(_ int) (int, error) {
return 0, errors.New("bzzt")
}
successMapper := func(i int) (int, error) {
diff --git a/pkg/bloombuild/builder/builder.go b/pkg/bloombuild/builder/builder.go
index 045f96bc7f591..6cc2ecfa32f61 100644
--- a/pkg/bloombuild/builder/builder.go
+++ b/pkg/bloombuild/builder/builder.go
@@ -30,6 +30,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
utillog "github.com/grafana/loki/v3/pkg/util/log"
+ "github.com/grafana/loki/v3/pkg/util/ring"
)
type Builder struct {
@@ -47,6 +48,10 @@ type Builder struct {
chunkLoader ChunkLoader
client protos.PlannerForBuilderClient
+
+ // used only in SSD mode where a single planner of the backend replicas needs to create tasksQueue
+ // therefore is nil when planner is run in microservice mode (default)
+ ringWatcher *common.RingWatcher
}
func New(
@@ -59,6 +64,7 @@ func New(
bloomStore bloomshipper.Store,
logger log.Logger,
r prometheus.Registerer,
+ rm *ring.RingManager,
) (*Builder, error) {
utillog.WarnExperimentalUse("Bloom Builder", logger)
@@ -82,11 +88,20 @@ func New(
logger: logger,
}
+ if rm != nil {
+ b.ringWatcher = common.NewRingWatcher(rm.RingLifecycler.GetInstanceID(), rm.Ring, time.Minute, logger)
+ }
+
b.Service = services.NewBasicService(b.starting, b.running, b.stopping)
return b, nil
}
-func (b *Builder) starting(_ context.Context) error {
+func (b *Builder) starting(ctx context.Context) error {
+ if b.ringWatcher != nil {
+ if err := services.StartAndAwaitRunning(ctx, b.ringWatcher); err != nil {
+ return fmt.Errorf("error starting builder subservices: %w", err)
+ }
+ }
b.metrics.running.Set(1)
return nil
}
@@ -94,6 +109,12 @@ func (b *Builder) starting(_ context.Context) error {
func (b *Builder) stopping(_ error) error {
defer b.metrics.running.Set(0)
+ if b.ringWatcher != nil {
+ if err := services.StopAndAwaitTerminated(context.Background(), b.ringWatcher); err != nil {
+ return fmt.Errorf("error stopping builder subservices: %w", err)
+ }
+ }
+
if b.client != nil {
// The gRPC server we use from dskit expects the orgID to be injected into the context when auth is enabled
// We won't actually use the orgID anywhere in this service, but we need to inject it to satisfy the server.
@@ -137,16 +158,27 @@ func (b *Builder) running(ctx context.Context) error {
return nil
}
-func (b *Builder) connectAndBuild(
- ctx context.Context,
-) error {
+func (b *Builder) plannerAddress() string {
+ if b.ringWatcher == nil {
+ return b.cfg.PlannerAddress
+ }
+
+ addr, err := b.ringWatcher.GetLeaderAddress()
+ if err != nil {
+ return b.cfg.PlannerAddress
+ }
+
+ return addr
+}
+
+func (b *Builder) connectAndBuild(ctx context.Context) error {
opts, err := b.cfg.GrpcConfig.DialOption(nil, nil)
if err != nil {
return fmt.Errorf("failed to create grpc dial options: %w", err)
}
// nolint:staticcheck // grpc.DialContext() has been deprecated; we'll address it before upgrading to gRPC 2.
- conn, err := grpc.DialContext(ctx, b.cfg.PlannerAddress, opts...)
+ conn, err := grpc.DialContext(ctx, b.plannerAddress(), opts...)
if err != nil {
return fmt.Errorf("failed to dial bloom planner: %w", err)
}
@@ -310,8 +342,8 @@ func (b *Builder) processTask(
blockCt int
nGramSize = uint64(b.limits.BloomNGramLength(tenant))
nGramSkip = uint64(b.limits.BloomNGramSkip(tenant))
- maxBlockSize = uint64(b.limits.BloomCompactorMaxBlockSize(tenant))
- maxBloomSize = uint64(b.limits.BloomCompactorMaxBloomSize(tenant))
+ maxBlockSize = uint64(b.limits.BloomMaxBlockSize(tenant))
+ maxBloomSize = uint64(b.limits.BloomMaxBloomSize(tenant))
blockOpts = v1.NewBlockOptions(blockEnc, nGramSize, nGramSkip, maxBlockSize, maxBloomSize)
created []bloomshipper.Meta
totalSeries int
diff --git a/pkg/bloombuild/builder/builder_test.go b/pkg/bloombuild/builder/builder_test.go
index b04a34fb6eeb2..6197c6209974f 100644
--- a/pkg/bloombuild/builder/builder_test.go
+++ b/pkg/bloombuild/builder/builder_test.go
@@ -88,7 +88,7 @@ func Test_BuilderLoop(t *testing.T) {
}
flagext.DefaultValues(&cfg.GrpcConfig)
- builder, err := New(cfg, limits, schemaCfg, storageCfg, storage.NewClientMetrics(), nil, fakeBloomStore{}, logger, prometheus.DefaultRegisterer)
+ builder, err := New(cfg, limits, schemaCfg, storageCfg, storage.NewClientMetrics(), nil, fakeBloomStore{}, logger, prometheus.DefaultRegisterer, nil)
require.NoError(t, err)
t.Cleanup(func() {
err = services.StopAndAwaitTerminated(context.Background(), builder)
@@ -234,11 +234,11 @@ func (f fakeLimits) BloomNGramSkip(_ string) int {
panic("implement me")
}
-func (f fakeLimits) BloomCompactorMaxBlockSize(_ string) int {
+func (f fakeLimits) BloomMaxBlockSize(_ string) int {
panic("implement me")
}
-func (f fakeLimits) BloomCompactorMaxBloomSize(_ string) int {
+func (f fakeLimits) BloomMaxBloomSize(_ string) int {
panic("implement me")
}
diff --git a/pkg/bloombuild/builder/config.go b/pkg/bloombuild/builder/config.go
index deeeb951465ab..ddacfd884e10c 100644
--- a/pkg/bloombuild/builder/config.go
+++ b/pkg/bloombuild/builder/config.go
@@ -40,6 +40,6 @@ type Limits interface {
BloomBlockEncoding(tenantID string) string
BloomNGramLength(tenantID string) int
BloomNGramSkip(tenantID string) int
- BloomCompactorMaxBlockSize(tenantID string) int
- BloomCompactorMaxBloomSize(tenantID string) int
+ BloomMaxBlockSize(tenantID string) int
+ BloomMaxBloomSize(tenantID string) int
}
diff --git a/pkg/bloombuild/builder/spec_test.go b/pkg/bloombuild/builder/spec_test.go
index 5be2a0e1c61be..8ab6c2bba4f46 100644
--- a/pkg/bloombuild/builder/spec_test.go
+++ b/pkg/bloombuild/builder/spec_test.go
@@ -137,7 +137,7 @@ func TestSimpleBloomGenerator(t *testing.T) {
storeItr := v2.NewMapIter[v1.SeriesWithBlooms, *v1.Series](
v2.NewSliceIter[v1.SeriesWithBlooms](data),
func(swb v1.SeriesWithBlooms) *v1.Series {
- return swb.Series
+ return &swb.Series.Series
},
)
@@ -161,7 +161,9 @@ func TestSimpleBloomGenerator(t *testing.T) {
}
require.Equal(t, len(expectedRefs), len(outputRefs))
for i := range expectedRefs {
- require.Equal(t, expectedRefs[i].Series, outputRefs[i].Series)
+ // TODO(chaudum): For now we only compare the series
+ // but we should also compare meta.
+ require.Equal(t, expectedRefs[i].Series.Series, outputRefs[i].Series.Series)
}
})
}
diff --git a/pkg/bloombuild/common/ringwatcher.go b/pkg/bloombuild/common/ringwatcher.go
new file mode 100644
index 0000000000000..f5045354d2493
--- /dev/null
+++ b/pkg/bloombuild/common/ringwatcher.go
@@ -0,0 +1,119 @@
+package common
+
+import (
+ "context"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/services"
+)
+
+const (
+ RingKeyOfLeader = 0xffff
+)
+
+type RingWatcher struct {
+ services.Service
+ id string
+ ring *ring.Ring
+ leader *ring.InstanceDesc
+ lookupPeriod time.Duration
+ logger log.Logger
+}
+
+// NewRingWatcher creates a service.Service that watches a ring for a leader instance.
+// The leader instance is the instance that owns the key `RingKeyOfLeader`.
+// It provides functions to get the leader's address, and to check whether a given instance in the ring is leader.
+// Bloom planner and bloom builder use this ring watcher to hook into index gateway ring when they are run as
+// part of the `backend` target of the Simple Scalable Deployment (SSD).
+// It should not be used for any other components outside of the bloombuild package.
+func NewRingWatcher(id string, ring *ring.Ring, lookupPeriod time.Duration, logger log.Logger) *RingWatcher {
+ w := &RingWatcher{
+ id: id,
+ ring: ring,
+ lookupPeriod: lookupPeriod,
+ logger: logger,
+ }
+ w.Service = services.NewBasicService(nil, w.updateLoop, nil)
+ return w
+}
+
+func (w *RingWatcher) waitForInitialLeader(ctx context.Context) error {
+ syncTicker := time.NewTicker(time.Second)
+ defer syncTicker.Stop()
+
+ for {
+ select {
+ case <-ctx.Done():
+ return ctx.Err()
+ case <-syncTicker.C:
+ w.lookupAddresses()
+ if w.leader != nil {
+ return nil
+ }
+ }
+ }
+}
+
+func (w *RingWatcher) updateLoop(ctx context.Context) error {
+ _ = w.waitForInitialLeader(ctx)
+
+ syncTicker := time.NewTicker(w.lookupPeriod)
+ defer syncTicker.Stop()
+
+ for {
+ select {
+ case <-ctx.Done():
+ return nil
+ case <-syncTicker.C:
+ w.lookupAddresses()
+ }
+ }
+}
+
+func (w *RingWatcher) lookupAddresses() {
+ bufDescs, bufHosts, bufZones := ring.MakeBuffersForGet()
+ rs, err := w.ring.Get(RingKeyOfLeader, ring.WriteNoExtend, bufDescs, bufHosts, bufZones)
+ if err != nil {
+ level.Error(w.logger).Log("msg", "failed to get replicationset for key", "key", RingKeyOfLeader, "err", err)
+ w.leader = nil
+ return
+ }
+
+ for i := range rs.Instances {
+ inst := rs.Instances[i]
+ state, err := w.ring.GetInstanceState(inst.Id)
+ if err != nil || state != ring.ACTIVE {
+ return
+ }
+ tr, err := w.ring.GetTokenRangesForInstance(inst.Id)
+ if err != nil && (len(tr) == 0 || tr.IncludesKey(RingKeyOfLeader)) {
+ if w.leader == nil || w.leader.Id != inst.Id {
+ level.Info(w.logger).Log("msg", "updated leader", "new_leader", inst)
+ }
+ w.leader = &inst
+ return
+ }
+ }
+
+ w.leader = nil
+}
+
+func (w *RingWatcher) IsLeader() bool {
+ return w.IsInstanceLeader(w.id)
+}
+
+func (w *RingWatcher) IsInstanceLeader(instanceID string) bool {
+ res := w.leader != nil && w.leader.Id == instanceID
+ level.Debug(w.logger).Log("msg", "check if instance is leader", "inst", instanceID, "curr_leader", w.leader, "is_leader", res)
+ return res
+}
+
+func (w *RingWatcher) GetLeaderAddress() (string, error) {
+ if w.leader == nil {
+ return "", ring.ErrEmptyRing
+ }
+ return w.leader.Addr, nil
+}
diff --git a/pkg/bloombuild/planner/planner.go b/pkg/bloombuild/planner/planner.go
index f65fdf59c9acb..f66748f1832b8 100644
--- a/pkg/bloombuild/planner/planner.go
+++ b/pkg/bloombuild/planner/planner.go
@@ -27,9 +27,13 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
"github.com/grafana/loki/v3/pkg/util"
utillog "github.com/grafana/loki/v3/pkg/util/log"
+ "github.com/grafana/loki/v3/pkg/util/ring"
)
-var errPlannerIsNotRunning = errors.New("planner is not running")
+var (
+ errPlannerIsNotRunning = errors.New("planner is not running")
+ errPlannerIsNotLeader = errors.New("planner is not leader")
+)
type Planner struct {
services.Service
@@ -52,6 +56,10 @@ type Planner struct {
metrics *Metrics
logger log.Logger
+
+ // used only in SSD mode where a single planner of the backend replicas needs to create tasksQueue
+ // therefore is nil when planner is run in microservice mode (default)
+ ringWatcher *common.RingWatcher
}
func New(
@@ -63,6 +71,7 @@ func New(
bloomStore bloomshipper.StoreBase,
logger log.Logger,
r prometheus.Registerer,
+ rm *ring.RingManager,
) (*Planner, error) {
utillog.WarnExperimentalUse("Bloom Planner", logger)
@@ -101,6 +110,12 @@ func New(
)
svcs := []services.Service{p.tasksQueue, p.activeUsers}
+
+ if rm != nil {
+ p.ringWatcher = common.NewRingWatcher(rm.RingLifecycler.GetInstanceID(), rm.Ring, time.Minute, logger)
+ svcs = append(svcs, p.ringWatcher)
+ }
+
p.subservices, err = services.NewManager(svcs...)
if err != nil {
return nil, fmt.Errorf("error creating subservices manager: %w", err)
@@ -112,6 +127,15 @@ func New(
return p, nil
}
+func (p *Planner) isLeader() bool {
+ if p.ringWatcher == nil {
+ // when the planner runs as standalone service in microserivce mode, then there is no ringWatcher
+ // therefore we can safely assume that the planner is a singleton
+ return true
+ }
+ return p.ringWatcher.IsLeader()
+}
+
func (p *Planner) starting(ctx context.Context) (err error) {
if err := services.StartManagerAndAwaitHealthy(ctx, p.subservices); err != nil {
return fmt.Errorf("error starting planner subservices: %w", err)
@@ -135,10 +159,9 @@ func (p *Planner) stopping(_ error) error {
func (p *Planner) running(ctx context.Context) error {
go p.trackInflightRequests(ctx)
- // run once at beginning
- if err := p.runOne(ctx); err != nil {
- level.Error(p.logger).Log("msg", "bloom build iteration failed for the first time", "err", err)
- }
+ // run once at beginning, but delay by 1m to allow ring consolidation when running in SSD mode
+ initialPlanningTimer := time.NewTimer(time.Minute)
+ defer initialPlanningTimer.Stop()
planningTicker := time.NewTicker(p.cfg.PlanningInterval)
defer planningTicker.Stop()
@@ -154,6 +177,12 @@ func (p *Planner) running(ctx context.Context) error {
level.Debug(p.logger).Log("msg", "planner context done")
return nil
+ case <-initialPlanningTimer.C:
+ level.Info(p.logger).Log("msg", "starting initial bloom build iteration")
+ if err := p.runOne(ctx); err != nil {
+ level.Error(p.logger).Log("msg", "initial bloom build iteration failed", "err", err)
+ }
+
case <-planningTicker.C:
level.Info(p.logger).Log("msg", "starting bloom build iteration")
if err := p.runOne(ctx); err != nil {
@@ -192,6 +221,10 @@ type tenantTable struct {
}
func (p *Planner) runOne(ctx context.Context) error {
+ if !p.isLeader() {
+ return errPlannerIsNotLeader
+ }
+
var (
wg sync.WaitGroup
start = time.Now()
@@ -901,6 +934,11 @@ func (p *Planner) BuilderLoop(builder protos.PlannerForBuilder_BuilderLoopServer
builderID := resp.GetBuilderID()
logger := log.With(p.logger, "builder", builderID)
+
+ if !p.isLeader() {
+ return errPlannerIsNotLeader
+ }
+
level.Debug(logger).Log("msg", "builder connected")
p.tasksQueue.RegisterConsumerConnection(builderID)
diff --git a/pkg/bloombuild/planner/planner_test.go b/pkg/bloombuild/planner/planner_test.go
index b44436860cd88..32f8d5798a7f2 100644
--- a/pkg/bloombuild/planner/planner_test.go
+++ b/pkg/bloombuild/planner/planner_test.go
@@ -532,7 +532,7 @@ func createPlanner(
bloomStore, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageCfg, storage.ClientMetrics{}, metasCache, blocksCache, &mempool.SimpleHeapAllocator{}, reg, logger)
require.NoError(t, err)
- planner, err := New(cfg, limits, schemaCfg, storageCfg, storage.ClientMetrics{}, bloomStore, logger, reg)
+ planner, err := New(cfg, limits, schemaCfg, storageCfg, storage.ClientMetrics{}, bloomStore, logger, reg, nil)
require.NoError(t, err)
return planner
diff --git a/pkg/bloombuild/planner/retention_test.go b/pkg/bloombuild/planner/retention_test.go
index 15118aeca70ae..6738ac336e749 100644
--- a/pkg/bloombuild/planner/retention_test.go
+++ b/pkg/bloombuild/planner/retention_test.go
@@ -621,7 +621,7 @@ func putMetasForLastNDays(t *testing.T, schemaCfg storageconfig.SchemaConfig, bl
}
}
-// getMetasForLastNDays returns groups of continuous metas for the last N days.
+// getGroupedMetasForLastNDays returns groups of continuous metas for the last N days.
func getGroupedMetasForLastNDays(t *testing.T, bloomStore *bloomshipper.BloomStore, tenant string, start model.Time, days int) [][][]bloomshipper.Meta {
metasGrouped := make([][][]bloomshipper.Meta, 0)
currentGroup := make([][]bloomshipper.Meta, 0)
diff --git a/pkg/bloomcompactor/batch.go b/pkg/bloomcompactor/batch.go
deleted file mode 100644
index c4e1043b44831..0000000000000
--- a/pkg/bloomcompactor/batch.go
+++ /dev/null
@@ -1,357 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "io"
- "math"
- "time"
-
- "github.com/grafana/dskit/multierror"
- "golang.org/x/exp/slices"
-
- "github.com/grafana/loki/v3/pkg/chunkenc"
- iter "github.com/grafana/loki/v3/pkg/iter/v2"
- "github.com/grafana/loki/v3/pkg/logproto"
- logql_log "github.com/grafana/loki/v3/pkg/logql/log"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/chunk"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
-)
-
-type Fetcher[A, B any] interface {
- Fetch(ctx context.Context, inputs []A) ([]B, error)
-}
-
-type FetchFunc[A, B any] func(ctx context.Context, inputs []A) ([]B, error)
-
-func (f FetchFunc[A, B]) Fetch(ctx context.Context, inputs []A) ([]B, error) {
- return f(ctx, inputs)
-}
-
-// batchedLoader implements `v1.Iterator[C]` in batches
-type batchedLoader[A, B, C any] struct {
- metrics *Metrics
- batchSize int
- ctx context.Context
- fetchers []Fetcher[A, B]
- work [][]A
-
- mapper func(B) (C, error)
- cur C
- batch []B
- err error
-}
-
-const batchedLoaderDefaultBatchSize = 50
-
-func newBatchedLoader[A, B, C any](
- ctx context.Context,
- fetchers []Fetcher[A, B],
- inputs [][]A,
- mapper func(B) (C, error),
- batchSize int,
-) *batchedLoader[A, B, C] {
- return &batchedLoader[A, B, C]{
- batchSize: max(batchSize, 1),
- ctx: ctx,
- fetchers: fetchers,
- work: inputs,
- mapper: mapper,
- }
-}
-
-func (b *batchedLoader[A, B, C]) Next() bool {
-
- // iterate work until we have non-zero length batch
- for len(b.batch) == 0 {
-
- // empty batch + no work remaining = we're done
- if len(b.work) == 0 {
- return false
- }
-
- // setup next batch
- next := b.work[0]
- batchSize := min(b.batchSize, len(next))
- toFetch := next[:batchSize]
- fetcher := b.fetchers[0]
-
- // update work
- b.work[0] = b.work[0][batchSize:]
- if len(b.work[0]) == 0 {
- // if we've exhausted work from this set of inputs,
- // set pointer to next set of inputs
- // and their respective fetcher
- b.work = b.work[1:]
- b.fetchers = b.fetchers[1:]
- }
-
- // there was no work in this batch; continue (should not happen)
- if len(toFetch) == 0 {
- continue
- }
-
- b.batch, b.err = fetcher.Fetch(b.ctx, toFetch)
- // error fetching, short-circuit iteration
- if b.err != nil {
- return false
- }
- }
-
- return b.prepNext()
-}
-
-func (b *batchedLoader[_, B, C]) prepNext() bool {
- b.cur, b.err = b.mapper(b.batch[0])
- b.batch = b.batch[1:]
- return b.err == nil
-}
-
-func (b *batchedLoader[_, _, C]) At() C {
- return b.cur
-}
-
-func (b *batchedLoader[_, _, _]) Err() error {
- return b.err
-}
-
-// to ensure memory is bounded while loading chunks
-// TODO(owen-d): testware
-func newBatchedChunkLoader(
- ctx context.Context,
- fetchers []Fetcher[chunk.Chunk, chunk.Chunk],
- inputs [][]chunk.Chunk,
- metrics *Metrics,
- batchSize int,
-) *batchedLoader[chunk.Chunk, chunk.Chunk, v1.ChunkRefWithIter] {
-
- mapper := func(c chunk.Chunk) (v1.ChunkRefWithIter, error) {
- chk := c.Data.(*chunkenc.Facade).LokiChunk()
- metrics.chunkSize.Observe(float64(chk.UncompressedSize()))
- itr, err := chk.Iterator(
- ctx,
- time.Unix(0, 0),
- time.Unix(0, math.MaxInt64),
- logproto.FORWARD,
- logql_log.NewNoopPipeline().ForStream(nil),
- )
-
- if err != nil {
- return v1.ChunkRefWithIter{}, err
- }
-
- return v1.ChunkRefWithIter{
- Ref: v1.ChunkRef{
- From: c.From,
- Through: c.Through,
- Checksum: c.Checksum,
- },
- Itr: itr,
- }, nil
- }
- return newBatchedLoader(ctx, fetchers, inputs, mapper, batchSize)
-}
-
-func newBatchedBlockLoader(
- ctx context.Context,
- fetcher Fetcher[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier],
- blocks []bloomshipper.BlockRef,
- batchSize int,
-) *batchedLoader[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier, *bloomshipper.CloseableBlockQuerier] {
-
- fetchers := []Fetcher[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier]{fetcher}
- inputs := [][]bloomshipper.BlockRef{blocks}
- mapper := func(a *bloomshipper.CloseableBlockQuerier) (*bloomshipper.CloseableBlockQuerier, error) {
- return a, nil
- }
-
- return newBatchedLoader(ctx, fetchers, inputs, mapper, batchSize)
-}
-
-// compiler checks
-var _ iter.Iterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
-var _ iter.CloseIterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
-var _ iter.ResetIterator[*v1.SeriesWithBlooms] = &blockLoadingIter{}
-
-// TODO(chaudum): testware
-func newBlockLoadingIter(ctx context.Context, blocks []bloomshipper.BlockRef, fetcher FetchFunc[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier], batchSize int) *blockLoadingIter {
-
- return &blockLoadingIter{
- ctx: ctx,
- fetcher: fetcher,
- inputs: blocks,
- batchSize: batchSize,
- loaded: make(map[io.Closer]struct{}),
- }
-}
-
-type blockLoadingIter struct {
- // constructor arguments
- ctx context.Context
- fetcher Fetcher[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier]
- inputs []bloomshipper.BlockRef
- overlapping iter.Iterator[[]bloomshipper.BlockRef]
- batchSize int
- // optional arguments
- filter func(*bloomshipper.CloseableBlockQuerier) bool
- // internals
- initialized bool
- err error
- iter iter.Iterator[*v1.SeriesWithBlooms]
- loader *batchedLoader[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier, *bloomshipper.CloseableBlockQuerier]
- loaded map[io.Closer]struct{}
-}
-
-// At implements v1.Iterator.
-func (i *blockLoadingIter) At() *v1.SeriesWithBlooms {
- if !i.initialized {
- panic("iterator not initialized")
- }
- return i.iter.At()
-}
-
-// Err implements v1.Iterator.
-func (i *blockLoadingIter) Err() error {
- if !i.initialized {
- panic("iterator not initialized")
- }
- if i.err != nil {
- return i.err
- }
- return i.iter.Err()
-}
-
-func (i *blockLoadingIter) init() {
- if i.initialized {
- return
- }
-
- // group overlapping blocks
- i.overlapping = overlappingBlocksIter(i.inputs)
-
- // set initial iter
- i.iter = iter.NewEmptyIter[*v1.SeriesWithBlooms]()
-
- // set "match all" filter function if not present
- if i.filter == nil {
- i.filter = func(cbq *bloomshipper.CloseableBlockQuerier) bool { return true }
- }
-
- // done
- i.initialized = true
-}
-
-// load next populates the underlying iter via relevant batches
-// and returns the result of iter.Next()
-func (i *blockLoadingIter) loadNext() bool {
- for i.overlapping.Next() {
- blockRefs := i.overlapping.At()
-
- loader := newBatchedBlockLoader(i.ctx, i.fetcher, blockRefs, i.batchSize)
- filtered := iter.NewFilterIter[*bloomshipper.CloseableBlockQuerier](loader, i.filter)
-
- iters := make([]iter.PeekIterator[*v1.SeriesWithBlooms], 0, len(blockRefs))
- for filtered.Next() {
- bq := filtered.At()
- i.loaded[bq] = struct{}{}
- itr, err := bq.SeriesIter()
- if err != nil {
- i.err = err
- i.iter = iter.NewEmptyIter[*v1.SeriesWithBlooms]()
- return false
- }
- iters = append(iters, itr)
- }
-
- if err := filtered.Err(); err != nil {
- i.err = err
- i.iter = iter.NewEmptyIter[*v1.SeriesWithBlooms]()
- return false
- }
-
- // edge case: we've filtered out all blocks in the batch; check next batch
- if len(iters) == 0 {
- continue
- }
-
- // Turn the list of blocks into a single iterator that returns the next series
- mergedBlocks := v1.NewHeapIterForSeriesWithBloom(iters...)
- // two overlapping blocks can conceivably have the same series, so we need to dedupe,
- // preferring the one with the most chunks already indexed since we'll have
- // to add fewer chunks to the bloom
- i.iter = iter.NewDedupingIter[*v1.SeriesWithBlooms, *v1.SeriesWithBlooms](
- func(a, b *v1.SeriesWithBlooms) bool {
- return a.Series.Fingerprint == b.Series.Fingerprint
- },
- iter.Identity[*v1.SeriesWithBlooms],
- func(a, b *v1.SeriesWithBlooms) *v1.SeriesWithBlooms {
- if len(a.Series.Chunks) > len(b.Series.Chunks) {
- return a
- }
- return b
- },
- iter.NewPeekIter(mergedBlocks),
- )
- return i.iter.Next()
- }
-
- i.iter = iter.NewEmptyIter[*v1.SeriesWithBlooms]()
- i.err = i.overlapping.Err()
- return false
-}
-
-// Next implements v1.Iterator.
-func (i *blockLoadingIter) Next() bool {
- i.init()
- return i.iter.Next() || i.loadNext()
-}
-
-// Close implements v1.CloseableIterator.
-func (i *blockLoadingIter) Close() error {
- var err multierror.MultiError
- for k := range i.loaded {
- err.Add(k.Close())
- }
- return err.Err()
-}
-
-// Reset implements v1.ResettableIterator.
-// TODO(chaudum) Cache already fetched blocks to to avoid the overhead of
-// creating the reader.
-func (i *blockLoadingIter) Reset() error {
- if !i.initialized {
- return nil
- }
- // close loaded queriers
- err := i.Close()
- i.initialized = false
- clear(i.loaded)
- return err
-}
-
-func (i *blockLoadingIter) Filter(filter func(*bloomshipper.CloseableBlockQuerier) bool) {
- if i.initialized {
- panic("iterator already initialized")
- }
- i.filter = filter
-}
-
-func overlappingBlocksIter(inputs []bloomshipper.BlockRef) iter.Iterator[[]bloomshipper.BlockRef] {
- // can we assume sorted blocks?
- peekIter := iter.NewPeekIter(iter.NewSliceIter(inputs))
-
- return iter.NewDedupingIter[bloomshipper.BlockRef, []bloomshipper.BlockRef](
- func(a bloomshipper.BlockRef, b []bloomshipper.BlockRef) bool {
- minFp := b[0].Bounds.Min
- maxFp := slices.MaxFunc(b, func(a, b bloomshipper.BlockRef) int { return int(a.Bounds.Max - b.Bounds.Max) }).Bounds.Max
- return a.Bounds.Overlaps(v1.NewBounds(minFp, maxFp))
- },
- func(a bloomshipper.BlockRef) []bloomshipper.BlockRef {
- return []bloomshipper.BlockRef{a}
- },
- func(a bloomshipper.BlockRef, b []bloomshipper.BlockRef) []bloomshipper.BlockRef {
- return append(b, a)
- },
- peekIter,
- )
-}
diff --git a/pkg/bloomcompactor/batch_test.go b/pkg/bloomcompactor/batch_test.go
deleted file mode 100644
index 09d595459b509..0000000000000
--- a/pkg/bloomcompactor/batch_test.go
+++ /dev/null
@@ -1,210 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "errors"
- "testing"
-
- "github.com/stretchr/testify/require"
-
- v2 "github.com/grafana/loki/v3/pkg/iter/v2"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
-)
-
-func TestBatchedLoader(t *testing.T) {
- t.Parallel()
-
- errMapper := func(i int) (int, error) {
- return 0, errors.New("bzzt")
- }
- successMapper := func(i int) (int, error) {
- return i, nil
- }
-
- expired, cancel := context.WithCancel(context.Background())
- cancel()
-
- for _, tc := range []struct {
- desc string
- ctx context.Context
- batchSize int
- mapper func(int) (int, error)
- err bool
- inputs [][]int
- exp []int
- }{
- {
- desc: "OneBatch",
- ctx: context.Background(),
- batchSize: 2,
- mapper: successMapper,
- err: false,
- inputs: [][]int{{0, 1}},
- exp: []int{0, 1},
- },
- {
- desc: "ZeroBatchSizeStillWorks",
- ctx: context.Background(),
- batchSize: 0,
- mapper: successMapper,
- err: false,
- inputs: [][]int{{0, 1}},
- exp: []int{0, 1},
- },
- {
- desc: "OneBatchLessThanFull",
- ctx: context.Background(),
- batchSize: 2,
- mapper: successMapper,
- err: false,
- inputs: [][]int{{0}},
- exp: []int{0},
- },
- {
- desc: "TwoBatches",
- ctx: context.Background(),
- batchSize: 2,
- mapper: successMapper,
- err: false,
- inputs: [][]int{{0, 1, 2, 3}},
- exp: []int{0, 1, 2, 3},
- },
- {
- desc: "MultipleBatchesMultipleLoaders",
- ctx: context.Background(),
- batchSize: 2,
- mapper: successMapper,
- err: false,
- inputs: [][]int{{0, 1}, {2}, {3, 4, 5}},
- exp: []int{0, 1, 2, 3, 4, 5},
- },
- {
- desc: "HandlesEmptyInputs",
- ctx: context.Background(),
- batchSize: 2,
- mapper: successMapper,
- err: false,
- inputs: [][]int{{0, 1, 2, 3}, nil, {4}},
- exp: []int{0, 1, 2, 3, 4},
- },
- {
- desc: "Timeout",
- ctx: expired,
- batchSize: 2,
- mapper: successMapper,
- err: true,
- inputs: [][]int{{0}},
- },
- {
- desc: "MappingFailure",
- ctx: context.Background(),
- batchSize: 2,
- mapper: errMapper,
- err: true,
- inputs: [][]int{{0}},
- },
- } {
- tc := tc
- t.Run(tc.desc, func(t *testing.T) {
- fetchers := make([]Fetcher[int, int], 0, len(tc.inputs))
- for range tc.inputs {
- fetchers = append(
- fetchers,
- FetchFunc[int, int](func(ctx context.Context, xs []int) ([]int, error) {
- if ctx.Err() != nil {
- return nil, ctx.Err()
- }
- return xs, nil
- }),
- )
- }
-
- loader := newBatchedLoader[int, int, int](
- tc.ctx,
- fetchers,
- tc.inputs,
- tc.mapper,
- tc.batchSize,
- )
-
- got, err := v2.Collect[int](loader)
- if tc.err {
- require.Error(t, err)
- return
- }
- require.NoError(t, err)
- require.Equal(t, tc.exp, got)
-
- })
- }
-}
-
-func TestOverlappingBlocksIter(t *testing.T) {
- t.Parallel()
- for _, tc := range []struct {
- desc string
- inp []bloomshipper.BlockRef
- exp int // expected groups
- }{
- {
- desc: "Empty",
- inp: []bloomshipper.BlockRef{},
- exp: 0,
- },
- {
- desc: "NonOverlapping",
- inp: []bloomshipper.BlockRef{
- genBlockRef(0x0000, 0x00ff),
- genBlockRef(0x0100, 0x01ff),
- genBlockRef(0x0200, 0x02ff),
- },
- exp: 3,
- },
- {
- desc: "AllOverlapping",
- inp: []bloomshipper.BlockRef{
- genBlockRef(0x0000, 0x02ff), // |-----------|
- genBlockRef(0x0100, 0x01ff), // |---|
- genBlockRef(0x0200, 0x02ff), // |---|
- },
- exp: 1,
- },
- {
- desc: "PartialOverlapping",
- inp: []bloomshipper.BlockRef{
- genBlockRef(0x0000, 0x01ff), // group 1 |-------|
- genBlockRef(0x0100, 0x02ff), // group 1 |-------|
- genBlockRef(0x0200, 0x03ff), // group 1 |-------|
- genBlockRef(0x0200, 0x02ff), // group 1 |---|
- },
- exp: 1,
- },
- {
- desc: "PartialOverlapping",
- inp: []bloomshipper.BlockRef{
- genBlockRef(0x0000, 0x01ff), // group 1 |-------|
- genBlockRef(0x0100, 0x02ff), // group 1 |-------|
- genBlockRef(0x0100, 0x01ff), // group 1 |---|
- genBlockRef(0x0300, 0x03ff), // group 2 |---|
- genBlockRef(0x0310, 0x03ff), // group 2 |-|
- },
- exp: 2,
- },
- } {
- tc := tc
- t.Run(tc.desc, func(t *testing.T) {
- it := overlappingBlocksIter(tc.inp)
- var overlapping [][]bloomshipper.BlockRef
- var i int
- for it.Next() && it.Err() == nil {
- require.NotNil(t, it.At())
- overlapping = append(overlapping, it.At())
- for _, r := range it.At() {
- t.Log(i, r)
- }
- i++
- }
- require.Equal(t, tc.exp, len(overlapping))
- })
- }
-}
diff --git a/pkg/bloomcompactor/bloomcompactor.go b/pkg/bloomcompactor/bloomcompactor.go
deleted file mode 100644
index 6f07389a0bb4a..0000000000000
--- a/pkg/bloomcompactor/bloomcompactor.go
+++ /dev/null
@@ -1,510 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "sync"
- "time"
-
- "github.com/go-kit/log"
- "github.com/go-kit/log/level"
- "github.com/grafana/dskit/backoff"
- "github.com/grafana/dskit/concurrency"
- "github.com/grafana/dskit/multierror"
- "github.com/grafana/dskit/ring"
- "github.com/grafana/dskit/services"
- "github.com/pkg/errors"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/common/model"
-
- "github.com/grafana/loki/v3/pkg/bloomutils"
- iter "github.com/grafana/loki/v3/pkg/iter/v2"
- "github.com/grafana/loki/v3/pkg/storage"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/config"
- "github.com/grafana/loki/v3/pkg/storage/stores"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- utillog "github.com/grafana/loki/v3/pkg/util/log"
- util_ring "github.com/grafana/loki/v3/pkg/util/ring"
-)
-
-var (
- RingOp = ring.NewOp([]ring.InstanceState{ring.JOINING, ring.ACTIVE}, nil)
-)
-
-/*
-Bloom-compactor
-
-This is a standalone service that is responsible for compacting TSDB indexes into bloomfilters.
-It creates and merges bloomfilters into an aggregated form, called bloom-blocks.
-It maintains a list of references between bloom-blocks and TSDB indexes in files called meta.jsons.
-
-Bloom-compactor regularly runs to check for changes in meta.jsons and runs compaction only upon changes in TSDBs.
-*/
-type Compactor struct {
- services.Service
-
- cfg Config
- schemaCfg config.SchemaConfig
- logger log.Logger
- limits Limits
-
- tsdbStore TSDBStore
- // TODO(owen-d): ShardingStrategy
- controller *SimpleBloomController
- retentionManager *RetentionManager
-
- // temporary workaround until bloomStore has implemented read/write shipper interface
- bloomStore bloomshipper.StoreBase
-
- sharding util_ring.TenantSharding
-
- metrics *Metrics
-}
-
-func New(
- cfg Config,
- schemaCfg config.SchemaConfig,
- storeCfg storage.Config,
- clientMetrics storage.ClientMetrics,
- fetcherProvider stores.ChunkFetcherProvider,
- ring ring.ReadRing,
- ringLifeCycler *ring.BasicLifecycler,
- limits Limits,
- store bloomshipper.Store,
- logger log.Logger,
- r prometheus.Registerer,
-) (*Compactor, error) {
- utillog.WarnExperimentalUse("Bloom Compactor", logger)
- c := &Compactor{
- cfg: cfg,
- schemaCfg: schemaCfg,
- logger: logger,
- sharding: util_ring.NewTenantShuffleSharding(ring, ringLifeCycler, limits.BloomCompactorShardSize),
- limits: limits,
- bloomStore: store,
- metrics: NewMetrics(r, store.BloomMetrics()),
- }
-
- tsdbStore, err := NewTSDBStores(schemaCfg, storeCfg, clientMetrics, logger)
- if err != nil {
- return nil, errors.Wrap(err, "failed to create TSDB store")
- }
- c.tsdbStore = tsdbStore
-
- chunkLoader := NewStoreChunkLoader(
- fetcherProvider,
- c.metrics,
- )
-
- c.controller = NewSimpleBloomController(
- c.tsdbStore,
- c.bloomStore,
- chunkLoader,
- c.limits,
- c.metrics,
- c.logger,
- )
-
- c.retentionManager = NewRetentionManager(
- c.cfg.RetentionConfig,
- c.limits,
- c.bloomStore,
- newFirstTokenRetentionSharding(ring, ringLifeCycler),
- c.metrics,
- c.logger,
- )
-
- c.Service = services.NewBasicService(c.starting, c.running, c.stopping)
- return c, nil
-}
-
-func (c *Compactor) starting(_ context.Context) (err error) {
- c.metrics.compactorRunning.Set(1)
- return err
-}
-
-func (c *Compactor) stopping(_ error) error {
- c.metrics.compactorRunning.Set(0)
- return nil
-}
-
-func (c *Compactor) running(ctx context.Context) error {
- // run once at beginning
- if err := c.runOne(ctx); err != nil {
- return err
- }
-
- ticker := time.NewTicker(c.cfg.CompactionInterval)
- defer ticker.Stop()
- for {
- select {
- case <-ctx.Done():
- err := ctx.Err()
- level.Debug(c.logger).Log("msg", "compactor context done", "err", err)
- return err
-
- case <-ticker.C:
- if err := c.runOne(ctx); err != nil {
- return err
- }
- }
- }
-}
-
-func runWithRetries(
- ctx context.Context,
- minBackoff, maxBackoff time.Duration,
- maxRetries int,
- f func(ctx context.Context) error,
-) error {
- var lastErr error
-
- retries := backoff.New(ctx, backoff.Config{
- MinBackoff: minBackoff,
- MaxBackoff: maxBackoff,
- MaxRetries: maxRetries,
- })
-
- for retries.Ongoing() {
- lastErr = f(ctx)
- if lastErr == nil {
- return nil
- }
-
- retries.Wait()
- }
-
- return lastErr
-}
-
-type tenantTableRange struct {
- tenant string
- table config.DayTable
- ownershipRange v1.FingerprintBounds
-
- finished bool
- queueTime, startTime, endTime time.Time
-}
-
-func (c *Compactor) tenants(ctx context.Context, table config.DayTable) (*iter.SliceIter[string], error) {
- tenants, err := c.tsdbStore.UsersForPeriod(ctx, table)
- if err != nil {
- return nil, errors.Wrap(err, "getting tenants")
- }
-
- return iter.NewSliceIter(tenants), nil
-}
-
-// ownsTenant returns the ownership range for the tenant, if the compactor owns the tenant, and an error.
-func (c *Compactor) ownsTenant(tenant string) ([]v1.FingerprintBounds, bool, error) {
- if !c.limits.BloomCompactorEnabled(tenant) {
- return nil, false, nil
- }
- tenantRing, owned := c.sharding.OwnsTenant(tenant)
- if !owned {
- return nil, false, nil
- }
-
- // TODO(owen-d): use .GetTokenRangesForInstance()
- // when it's supported for non zone-aware rings
- // instead of doing all this manually
-
- rs, err := tenantRing.GetAllHealthy(RingOp)
- if err != nil {
- return nil, false, errors.Wrap(err, "getting ring healthy instances")
- }
-
- ranges, err := bloomutils.TokenRangesForInstance(c.cfg.Ring.InstanceID, rs.Instances)
- if err != nil {
- return nil, false, errors.Wrap(err, "getting token ranges for instance")
- }
-
- keyspaces := bloomutils.KeyspacesFromTokenRanges(ranges)
- return keyspaces, true, nil
-}
-
-// runs a single round of compaction for all relevant tenants and tables
-func (c *Compactor) runOne(ctx context.Context) error {
- c.metrics.compactionsStarted.Inc()
- start := time.Now()
- level.Info(c.logger).Log("msg", "running bloom compaction", "workers", c.cfg.WorkerParallelism)
- var workersErr, retentionErr error
- var wg sync.WaitGroup
- input := make(chan *tenantTableRange)
-
- // Launch retention (will return instantly if retention is disabled or not owned by this compactor)
- wg.Add(1)
- go func() {
- retentionErr = c.retentionManager.Apply(ctx)
- wg.Done()
- }()
-
- tables := c.tables(time.Now())
- level.Debug(c.logger).Log("msg", "loaded tables", "tables", tables.TotalDays())
-
- tracker, err := newCompactionTracker(tables.TotalDays())
- if err != nil {
- return errors.Wrap(err, "creating compaction tracker")
- }
-
- wg.Add(1)
- go func() {
- workersErr = c.runWorkers(ctx, input, tracker)
- wg.Done()
- }()
-
- err = c.loadWork(ctx, tables, input, tracker)
-
- wg.Wait()
- duration := time.Since(start)
- err = multierror.New(retentionErr, workersErr, err, ctx.Err()).Err()
-
- if err != nil {
- level.Error(c.logger).Log("msg", "compaction iteration failed", "err", err, "duration", duration)
- c.metrics.compactionCompleted.WithLabelValues(statusFailure).Inc()
- c.metrics.compactionTime.WithLabelValues(statusFailure).Observe(time.Since(start).Seconds())
- return err
- }
-
- c.metrics.compactionCompleted.WithLabelValues(statusSuccess).Inc()
- c.metrics.compactionTime.WithLabelValues(statusSuccess).Observe(time.Since(start).Seconds())
- level.Info(c.logger).Log("msg", "compaction iteration completed", "duration", duration)
- return nil
-}
-
-func (c *Compactor) tables(ts time.Time) *dayRangeIterator {
- // adjust the minimum by one to make it inclusive, which is more intuitive
- // for a configuration variable
- adjustedMin := min(c.cfg.MinTableOffset - 1)
- minCompactionDelta := time.Duration(adjustedMin) * config.ObjectStorageIndexRequiredPeriod
- maxCompactionDelta := time.Duration(c.cfg.MaxTableOffset) * config.ObjectStorageIndexRequiredPeriod
-
- from := ts.Add(-maxCompactionDelta).UnixNano() / int64(config.ObjectStorageIndexRequiredPeriod) * int64(config.ObjectStorageIndexRequiredPeriod)
- through := ts.Add(-minCompactionDelta).UnixNano() / int64(config.ObjectStorageIndexRequiredPeriod) * int64(config.ObjectStorageIndexRequiredPeriod)
-
- fromDay := config.NewDayTime(model.TimeFromUnixNano(from))
- throughDay := config.NewDayTime(model.TimeFromUnixNano(through))
- level.Debug(c.logger).Log("msg", "loaded tables for compaction", "from", fromDay, "through", throughDay)
- return newDayRangeIterator(fromDay, throughDay, c.schemaCfg)
-}
-
-func (c *Compactor) loadWork(
- ctx context.Context,
- tables *dayRangeIterator,
- ch chan<- *tenantTableRange,
- tracker *compactionTracker,
-) error {
-
- for tables.Next() && tables.Err() == nil && ctx.Err() == nil {
- table := tables.At()
-
- level.Debug(c.logger).Log("msg", "loading work for table", "table", table)
-
- tenants, err := c.tenants(ctx, table)
- if err != nil {
- return errors.Wrap(err, "getting tenants")
- }
- nTenants := tenants.Remaining()
-
- type ownedTenant struct {
- tenant string
- ownershipRanges []v1.FingerprintBounds
- }
-
- // build owned tenants separately and load them all prior to compaction in order to
- // accurately report progress
- var ownedTenants []ownedTenant
-
- for tenants.Next() && tenants.Err() == nil && ctx.Err() == nil {
- c.metrics.tenantsDiscovered.Inc()
- tenant := tenants.At()
- ownershipRanges, owns, err := c.ownsTenant(tenant)
- if err != nil {
- return errors.Wrap(err, "checking tenant ownership")
- }
- if !owns {
- level.Debug(c.logger).Log("msg", "skipping tenant", "tenant", tenant, "table", table)
- c.metrics.tenantsSkipped.Inc()
- continue
- }
- c.metrics.tenantsOwned.Inc()
- ownedTenants = append(ownedTenants, ownedTenant{tenant, ownershipRanges})
- }
- if err := tenants.Err(); err != nil {
- level.Error(c.logger).Log("msg", "error iterating tenants", "err", err)
- return errors.Wrap(err, "iterating tenants")
- }
-
- level.Debug(c.logger).Log("msg", "loaded tenants", "table", table, "tenants", nTenants, "owned_tenants", len(ownedTenants))
- tracker.registerTable(table.DayTime, len(ownedTenants))
-
- for _, t := range ownedTenants {
- // loop over ranges, registering them in the tracker;
- // we add them to the tracker before queueing them
- // so progress reporting is aware of all tenant/table
- // pairs prior to execution. Otherwise, progress could
- // decrease over time as more work is discovered.
- var inputs []*tenantTableRange
- for _, ownershipRange := range t.ownershipRanges {
- tt := tenantTableRange{
- tenant: t.tenant,
- table: table,
- ownershipRange: ownershipRange,
- }
- tracker.update(tt.tenant, tt.table.DayTime, tt.ownershipRange, tt.ownershipRange.Min)
- inputs = append(inputs, &tt)
- }
-
- // iterate the inputs, queueing them
- for _, tt := range inputs {
- level.Debug(c.logger).Log("msg", "enqueueing work for tenant", "tenant", tt.tenant, "table", table, "ownership", tt.ownershipRange.String())
- tt.queueTime = time.Now() // accurrately report queue time
- select {
- case ch <- tt:
- case <-ctx.Done():
- return ctx.Err()
- }
- }
- }
-
- if err := tenants.Err(); err != nil {
- level.Error(c.logger).Log("msg", "error iterating tenants", "err", err)
- return errors.Wrap(err, "iterating tenants")
- }
-
- }
-
- if err := tables.Err(); err != nil {
- level.Error(c.logger).Log("msg", "error iterating tables", "err", err)
- return errors.Wrap(err, "iterating tables")
- }
-
- close(ch)
- return ctx.Err()
-}
-
-func (c *Compactor) runWorkers(
- ctx context.Context,
- input <-chan *tenantTableRange,
- tracker *compactionTracker,
-) error {
-
- // TODO(owen-d): refactor for cleanliness
- reporterCtx, cancel := context.WithCancel(ctx)
- var wg sync.WaitGroup
- wg.Add(1)
- go func() {
- ticker := time.NewTicker(30 * time.Second)
- for {
- select {
- case <-ticker.C:
- c.metrics.progress.Set(tracker.progress())
- case <-reporterCtx.Done():
- c.metrics.progress.Set(tracker.progress())
- wg.Done()
- ticker.Stop()
- return
- }
- }
- }()
-
- err := concurrency.ForEachJob(ctx, c.cfg.WorkerParallelism, c.cfg.WorkerParallelism, func(ctx context.Context, idx int) error {
-
- for {
- select {
- case <-ctx.Done():
- return ctx.Err()
-
- case tt, ok := <-input:
- if !ok {
- return nil
- }
- c.metrics.tenantsStarted.Inc()
- err := c.compactTenantTable(ctx, tt, tracker)
- duration := tt.endTime.Sub(tt.startTime)
- c.metrics.timePerTenant.WithLabelValues(tt.tenant).Add(duration.Seconds())
- progress := tracker.progress()
-
- if err != nil {
- c.metrics.tenantTableRanges.WithLabelValues(statusFailure).Inc()
- return errors.Wrapf(
- err,
- "compacting tenant table (%s) for tenant (%s) with ownership (%s)",
- tt.table,
- tt.tenant,
- tt.ownershipRange,
- )
- }
- level.Debug(c.logger).Log(
- "msg", "finished compacting tenant table",
- "tenant", tt.tenant,
- "table", tt.table,
- "ownership", tt.ownershipRange.String(),
- "duration", duration,
- "current_progress", progress,
- )
- c.metrics.tenantTableRanges.WithLabelValues(statusSuccess).Inc()
- }
- }
-
- })
- cancel()
- wg.Wait()
-
- return err
-
-}
-
-func (c *Compactor) compactTenantTable(ctx context.Context, tt *tenantTableRange, tracker *compactionTracker) error {
- level.Info(c.logger).Log("msg", "compacting", "org_id", tt.tenant, "table", tt.table, "ownership", tt.ownershipRange.String())
- tt.startTime = time.Now()
- err := c.controller.compactTenant(ctx, tt.table, tt.tenant, tt.ownershipRange, tracker)
- tt.finished = true
- tt.endTime = time.Now()
- tracker.update(tt.tenant, tt.table.DayTime, tt.ownershipRange, tt.ownershipRange.Max)
- level.Info(c.logger).Log("msg", "finished compacting", "org_id", tt.tenant, "table", tt.table, "ownership", tt.ownershipRange.String(), "err", err)
- return err
-}
-
-type dayRangeIterator struct {
- min, max, cur config.DayTime
- curPeriod config.PeriodConfig
- schemaCfg config.SchemaConfig
- err error
-}
-
-func newDayRangeIterator(min, max config.DayTime, schemaCfg config.SchemaConfig) *dayRangeIterator {
- return &dayRangeIterator{min: min, max: max, cur: min.Dec(), schemaCfg: schemaCfg}
-}
-
-func (r *dayRangeIterator) TotalDays() int {
- offset := r.cur
- if r.cur.Before(r.min) {
- offset = r.min
- }
- return int(r.max.Sub(offset.Time) / config.ObjectStorageIndexRequiredPeriod)
-}
-
-func (r *dayRangeIterator) Next() bool {
- r.cur = r.cur.Inc()
- if !r.cur.Before(r.max) {
- return false
- }
-
- period, err := r.schemaCfg.SchemaForTime(r.cur.ModelTime())
- if err != nil {
- r.err = errors.Wrapf(err, "getting schema for time (%s)", r.cur)
- return false
- }
- r.curPeriod = period
-
- return true
-}
-
-func (r *dayRangeIterator) At() config.DayTable {
- return config.NewDayTable(r.cur, r.curPeriod.IndexTables.Prefix)
-}
-
-func (r *dayRangeIterator) Err() error {
- return nil
-}
diff --git a/pkg/bloomcompactor/bloomcompactor_test.go b/pkg/bloomcompactor/bloomcompactor_test.go
deleted file mode 100644
index 2a82782b722c6..0000000000000
--- a/pkg/bloomcompactor/bloomcompactor_test.go
+++ /dev/null
@@ -1,284 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "flag"
- "fmt"
- "math"
- "testing"
- "time"
-
- "github.com/grafana/dskit/ring"
- "github.com/grafana/dskit/services"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/require"
-
- "github.com/grafana/loki/v3/pkg/bloomutils"
- "github.com/grafana/loki/v3/pkg/chunkenc"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/config"
- util_log "github.com/grafana/loki/v3/pkg/util/log"
- lokiring "github.com/grafana/loki/v3/pkg/util/ring"
- util_ring "github.com/grafana/loki/v3/pkg/util/ring"
- "github.com/grafana/loki/v3/pkg/validation"
-)
-
-func TestCompactor_ownsTenant(t *testing.T) {
- for _, tc := range []struct {
- name string
- limits Limits
- compactors int
-
- expectedCompactorsOwningTenant int
- }{
- {
- name: "no sharding with one instance",
- limits: mockLimits{
- shardSize: 0,
- },
- compactors: 1,
- expectedCompactorsOwningTenant: 1,
- },
- {
- name: "no sharding with multiple instances",
- limits: mockLimits{
- shardSize: 0,
- },
- compactors: 10,
- expectedCompactorsOwningTenant: 10,
- },
- {
- name: "sharding with one instance",
- limits: mockLimits{
- shardSize: 5,
- },
- compactors: 1,
- expectedCompactorsOwningTenant: 1,
- },
- {
- name: "sharding with multiple instances",
- limits: mockLimits{
- shardSize: 5,
- },
- compactors: 10,
- expectedCompactorsOwningTenant: 5,
- },
- } {
- t.Run(tc.name, func(t *testing.T) {
- var ringManagers []*lokiring.RingManager
- var compactors []*Compactor
- for i := 0; i < tc.compactors; i++ {
- var cfg Config
- cfg.RegisterFlags(flag.NewFlagSet("ring", flag.PanicOnError))
- cfg.Ring.KVStore.Store = "inmemory"
- cfg.Ring.InstanceID = fmt.Sprintf("bloom-compactor-%d", i)
- cfg.Ring.InstanceAddr = fmt.Sprintf("localhost-%d", i)
-
- ringManager, err := lokiring.NewRingManager("bloom-compactor", lokiring.ServerMode, cfg.Ring, 1, cfg.Ring.NumTokens, util_log.Logger, prometheus.NewRegistry())
- require.NoError(t, err)
- require.NoError(t, ringManager.StartAsync(context.Background()))
-
- shuffleSharding := util_ring.NewTenantShuffleSharding(ringManager.Ring, ringManager.RingLifecycler, tc.limits.BloomCompactorShardSize)
-
- compactor := &Compactor{
- cfg: cfg,
- sharding: shuffleSharding,
- limits: tc.limits,
- }
-
- ringManagers = append(ringManagers, ringManager)
- compactors = append(compactors, compactor)
- }
- defer func() {
- // Stop all rings and wait for them to stop.
- for _, ringManager := range ringManagers {
- ringManager.StopAsync()
- require.Eventually(t, func() bool {
- return ringManager.State() == services.Terminated
- }, 1*time.Minute, 100*time.Millisecond)
- }
- }()
-
- // Wait for all rings to see each other.
- for _, ringManager := range ringManagers {
- require.Eventually(t, func() bool {
- running := ringManager.State() == services.Running
- discovered := ringManager.Ring.InstancesCount() == tc.compactors
- return running && discovered
- }, 1*time.Minute, 100*time.Millisecond)
- }
-
- var compactorOwnsTenant int
- var compactorOwnershipRange []v1.FingerprintBounds
- for _, compactor := range compactors {
- ownershipRange, ownsTenant, err := compactor.ownsTenant("tenant")
- require.NoError(t, err)
- if ownsTenant {
- compactorOwnsTenant++
- compactorOwnershipRange = append(compactorOwnershipRange, ownershipRange...)
- }
- }
- require.Equal(t, tc.expectedCompactorsOwningTenant, compactorOwnsTenant)
-
- coveredKeySpace := v1.NewBounds(math.MaxUint64, 0)
- for i, boundsA := range compactorOwnershipRange {
- for j, boundsB := range compactorOwnershipRange {
- if i == j {
- continue
- }
- // Assert that the fingerprint key-space is not overlapping
- require.False(t, boundsA.Overlaps(boundsB))
- }
-
- if boundsA.Min < coveredKeySpace.Min {
- coveredKeySpace.Min = boundsA.Min
- }
- if boundsA.Max > coveredKeySpace.Max {
- coveredKeySpace.Max = boundsA.Max
- }
-
- }
- // Assert that the fingerprint key-space is complete
- require.True(t, coveredKeySpace.Equal(v1.NewBounds(0, math.MaxUint64)))
- })
- }
-}
-
-type mockLimits struct {
- shardSize int
-}
-
-func (m mockLimits) RetentionPeriod(_ string) time.Duration {
- panic("implement me")
-}
-
-func (m mockLimits) StreamRetention(_ string) []validation.StreamRetention {
- panic("implement me")
-}
-
-func (m mockLimits) AllByUserID() map[string]*validation.Limits {
- panic("implement me")
-}
-
-func (m mockLimits) DefaultLimits() *validation.Limits {
- panic("implement me")
-}
-
-func (m mockLimits) VolumeMaxSeries(_ string) int {
- panic("implement me")
-}
-
-func (m mockLimits) BloomCompactorShardSize(_ string) int {
- return m.shardSize
-}
-
-func (m mockLimits) BloomCompactorEnabled(_ string) bool {
- return true
-}
-
-func (m mockLimits) BloomNGramLength(_ string) int {
- panic("implement me")
-}
-
-func (m mockLimits) BloomNGramSkip(_ string) int {
- panic("implement me")
-}
-
-func (m mockLimits) BloomFalsePositiveRate(_ string) float64 {
- panic("implement me")
-}
-
-func (m mockLimits) BloomBlockEncoding(_ string) string {
- return chunkenc.EncNone.String()
-}
-
-func (m mockLimits) BloomCompactorMaxBlockSize(_ string) int {
- panic("implement me")
-}
-
-func (m mockLimits) BloomCompactorMaxBloomSize(_ string) int {
- panic("implement me")
-}
-
-func TestTokenRangesForInstance(t *testing.T) {
- desc := func(id int, tokens ...uint32) ring.InstanceDesc {
- return ring.InstanceDesc{Id: fmt.Sprintf("%d", id), Tokens: tokens}
- }
-
- tests := map[string]struct {
- input []ring.InstanceDesc
- exp map[string]ring.TokenRanges
- err bool
- }{
- "no nodes": {
- input: []ring.InstanceDesc{},
- exp: map[string]ring.TokenRanges{
- "0": {0, math.MaxUint32}, // have to put one in here to trigger test
- },
- err: true,
- },
- "one node": {
- input: []ring.InstanceDesc{
- desc(0, 0, 100),
- },
- exp: map[string]ring.TokenRanges{
- "0": {0, math.MaxUint32},
- },
- },
- "two nodes": {
- input: []ring.InstanceDesc{
- desc(0, 25, 75),
- desc(1, 10, 50, 100),
- },
- exp: map[string]ring.TokenRanges{
- "0": {10, 24, 50, 74},
- "1": {0, 9, 25, 49, 75, math.MaxUint32},
- },
- },
- "consecutive tokens": {
- input: []ring.InstanceDesc{
- desc(0, 99),
- desc(1, 100),
- },
- exp: map[string]ring.TokenRanges{
- "0": {0, 98, 100, math.MaxUint32},
- "1": {99, 99},
- },
- },
- "extremes": {
- input: []ring.InstanceDesc{
- desc(0, 0),
- desc(1, math.MaxUint32),
- },
- exp: map[string]ring.TokenRanges{
- "0": {math.MaxUint32, math.MaxUint32},
- "1": {0, math.MaxUint32 - 1},
- },
- },
- }
-
- for desc, test := range tests {
- t.Run(desc, func(t *testing.T) {
- for id := range test.exp {
- ranges, err := bloomutils.TokenRangesForInstance(id, test.input)
- if test.err {
- require.Error(t, err)
- continue
- }
- require.NoError(t, err)
- require.Equal(t, test.exp[id], ranges)
- }
- })
- }
-}
-
-func parseDayTime(s string) config.DayTime {
- t, err := time.Parse("2006-01-02", s)
- if err != nil {
- panic(err)
- }
- return config.DayTime{
- Time: model.TimeFromUnix(t.Unix()),
- }
-}
diff --git a/pkg/bloomcompactor/config.go b/pkg/bloomcompactor/config.go
deleted file mode 100644
index 82daac0eac39f..0000000000000
--- a/pkg/bloomcompactor/config.go
+++ /dev/null
@@ -1,98 +0,0 @@
-package bloomcompactor
-
-import (
- "flag"
- "fmt"
- "time"
-
- "github.com/pkg/errors"
-
- "github.com/grafana/loki/v3/pkg/util/ring"
-)
-
-const (
- ringReplicationFactor = 1
-)
-
-// Config configures the bloom-compactor component.
-type Config struct {
- // Ring configures the ring store used to save and retrieve the different Bloom-Compactor instances.
- // In case it isn't explicitly set, it follows the same behavior of the other rings (ex: using the common configuration
- // section and the ingester configuration by default).
- Ring ring.RingConfig `yaml:"ring,omitempty" doc:"description=Defines the ring to be used by the bloom-compactor servers. In case this isn't configured, this block supports inheriting configuration from the common ring section."`
- // Enabled configures whether bloom-compactors should be used to compact index values into bloomfilters
- Enabled bool `yaml:"enabled"`
- CompactionInterval time.Duration `yaml:"compaction_interval"`
- MinTableOffset int `yaml:"min_table_offset"`
- MaxTableOffset int `yaml:"max_table_offset"`
- WorkerParallelism int `yaml:"worker_parallelism"`
- RetryMinBackoff time.Duration `yaml:"compaction_retries_min_backoff"`
- RetryMaxBackoff time.Duration `yaml:"compaction_retries_max_backoff"`
- CompactionRetries int `yaml:"compaction_retries"`
-
- MaxCompactionParallelism int `yaml:"max_compaction_parallelism"`
-
- RetentionConfig RetentionConfig `yaml:"retention"`
-}
-
-// RegisterFlags registers flags for the Bloom-Compactor configuration.
-func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
- f.BoolVar(&cfg.Enabled, "bloom-compactor.enabled", false, "Flag to enable or disable the usage of the bloom-compactor component.")
- f.DurationVar(&cfg.CompactionInterval, "bloom-compactor.compaction-interval", 10*time.Minute, "Interval at which to re-run the compaction operation.")
- f.IntVar(&cfg.WorkerParallelism, "bloom-compactor.worker-parallelism", 1, "Number of workers to run in parallel for compaction.")
- // TODO(owen-d): This is a confusing name. Rename it to `min_table_offset`
- f.IntVar(&cfg.MinTableOffset, "bloom-compactor.min-table-offset", 1, "Newest day-table offset (from today, inclusive) to compact. Increase to lower cost by not re-writing data to object storage too frequently since recent data changes more often at the cost of not having blooms available as quickly.")
- // TODO(owen-d): ideally we'd set this per tenant based on their `reject_old_samples_max_age` setting,
- // but due to how we need to discover tenants, we can't do that yet. Tenant+Period discovery is done by
- // iterating the table periods in object storage and looking for tenants within that period.
- // In order to have this done dynamically, we'd need to account for tenant specific overrides, which are also
- // dynamically reloaded.
- // I'm doing it the simple way for now.
- f.IntVar(&cfg.MaxTableOffset, "bloom-compactor.max-table-offset", 2, "Oldest day-table offset (from today, inclusive) to compact. This can be used to lower cost by not trying to compact older data which doesn't change. This can be optimized by aligning it with the maximum `reject_old_samples_max_age` setting of any tenant.")
- f.DurationVar(&cfg.RetryMinBackoff, "bloom-compactor.compaction-retries-min-backoff", 10*time.Second, "Minimum backoff time between retries.")
- f.DurationVar(&cfg.RetryMaxBackoff, "bloom-compactor.compaction-retries-max-backoff", time.Minute, "Maximum backoff time between retries.")
- f.IntVar(&cfg.CompactionRetries, "bloom-compactor.compaction-retries", 3, "Number of retries to perform when compaction fails.")
- f.IntVar(&cfg.MaxCompactionParallelism, "bloom-compactor.max-compaction-parallelism", 1, "Maximum number of tables to compact in parallel. While increasing this value, please make sure compactor has enough disk space allocated to be able to store and compact as many tables.")
- cfg.RetentionConfig.RegisterFlags(f)
-
- // Ring
- skipFlags := []string{
- "bloom-compactor.ring.num-tokens",
- "bloom-compactor.ring.replication-factor",
- }
- cfg.Ring.RegisterFlagsWithPrefix("bloom-compactor.", "collectors/", f, skipFlags...)
- // Overrides
- f.IntVar(&cfg.Ring.NumTokens, "bloom-compactor.ring.num-tokens", 10, "Number of tokens to use in the ring per compactor. Higher number of tokens will result in more and smaller files (metas and blocks.)")
- // Ignored
- f.IntVar(&cfg.Ring.ReplicationFactor, "bloom-compactor.ring.replication-factor", ringReplicationFactor, fmt.Sprintf("IGNORED: Replication factor is fixed to %d", ringReplicationFactor))
-}
-
-func (cfg *Config) Validate() error {
- if !cfg.Enabled {
- return nil
- }
-
- if err := cfg.RetentionConfig.Validate(); err != nil {
- return err
- }
-
- if cfg.MinTableOffset > cfg.MaxTableOffset {
- return fmt.Errorf("min-table-offset (%d) must be less than or equal to max-table-offset (%d)", cfg.MinTableOffset, cfg.MaxTableOffset)
- }
- if cfg.Ring.ReplicationFactor != ringReplicationFactor {
- return errors.New("Replication factor must not be changed as it will not take effect")
- }
- return nil
-}
-
-type Limits interface {
- RetentionLimits
- BloomCompactorShardSize(tenantID string) int
- BloomCompactorEnabled(tenantID string) bool
- BloomNGramLength(tenantID string) int
- BloomNGramSkip(tenantID string) int
- BloomFalsePositiveRate(tenantID string) float64
- BloomCompactorMaxBlockSize(tenantID string) int
- BloomCompactorMaxBloomSize(tenantID string) int
- BloomBlockEncoding(tenantID string) string
-}
diff --git a/pkg/bloomcompactor/controller.go b/pkg/bloomcompactor/controller.go
deleted file mode 100644
index d53ba80b0123b..0000000000000
--- a/pkg/bloomcompactor/controller.go
+++ /dev/null
@@ -1,779 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "fmt"
- "math"
- "os"
- "sort"
- "sync"
-
- "github.com/go-kit/log"
- "github.com/go-kit/log/level"
- "github.com/pkg/errors"
- "github.com/prometheus/common/model"
-
- "github.com/grafana/loki/v3/pkg/chunkenc"
- iter "github.com/grafana/loki/v3/pkg/iter/v2"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/config"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
-)
-
-type SimpleBloomController struct {
- tsdbStore TSDBStore
- bloomStore bloomshipper.StoreBase
- chunkLoader ChunkLoader
- metrics *Metrics
- limits Limits
-
- logger log.Logger
-}
-
-func NewSimpleBloomController(
- tsdbStore TSDBStore,
- blockStore bloomshipper.StoreBase,
- chunkLoader ChunkLoader,
- limits Limits,
- metrics *Metrics,
- logger log.Logger,
-) *SimpleBloomController {
- return &SimpleBloomController{
- tsdbStore: tsdbStore,
- bloomStore: blockStore,
- chunkLoader: chunkLoader,
- metrics: metrics,
- limits: limits,
- logger: logger,
- }
-}
-
-func (s *SimpleBloomController) writerReaderFunc() (v1.BlockWriter, v1.BlockReader) {
- dir, err := os.MkdirTemp("", "bloom-block-")
- if err != nil {
- panic(err)
- }
- return v1.NewDirectoryBlockWriter(dir), v1.NewDirectoryBlockReader(dir)
-}
-
-/*
-Compaction works as follows, split across many functions for clarity:
- 1. Fetch all meta.jsons for the given tenant and table which overlap the ownership range of this compactor.
- 2. Load current TSDBs for this tenant/table.
- 3. For each live TSDB (there should be only 1, but this works with multiple), find any gaps
- (fingerprint ranges) which are not up date, determined by checking other meta.jsons and comparing
- the tsdbs they were generated from + their ownership ranges.
- 4. Build new bloom blocks for each gap, using the series and chunks from the TSDBs and any existing
- blocks which overlap the gaps to accelerate bloom generation.
- 5. Write the new blocks and metas to the store.
- 6. Determine if any meta.jsons overlap the ownership range but are outdated, and remove them and
- their associated blocks if so.
-*/
-func (s *SimpleBloomController) compactTenant(
- ctx context.Context,
- table config.DayTable,
- tenant string,
- ownershipRange v1.FingerprintBounds,
- tracker *compactionTracker,
-) error {
- logger := log.With(s.logger, "org_id", tenant, "table", table.Addr(), "ownership", ownershipRange.String())
-
- client, err := s.bloomStore.Client(table.ModelTime())
- if err != nil {
- level.Error(logger).Log("msg", "failed to get client", "err", err)
- return errors.Wrap(err, "failed to get client")
- }
-
- // Fetch source metas to be used in both compaction and cleanup of out-of-date metas+blooms
- metas, err := s.bloomStore.FetchMetas(
- ctx,
- bloomshipper.MetaSearchParams{
- TenantID: tenant,
- Interval: bloomshipper.NewInterval(table.Bounds()),
- Keyspace: ownershipRange,
- },
- )
- if err != nil {
- level.Error(logger).Log("msg", "failed to get metas", "err", err)
- return errors.Wrap(err, "failed to get metas")
- }
-
- level.Debug(logger).Log("msg", "found relevant metas", "metas", len(metas))
-
- // fetch all metas overlapping our ownership range so we can safely
- // check which metas can be deleted even if they only partially overlap out ownership range
- superset, err := s.fetchSuperSet(ctx, tenant, table, ownershipRange, metas, logger)
- if err != nil {
- return errors.Wrap(err, "failed to fetch superset")
- }
-
- // build compaction plans
- work, err := s.findOutdatedGaps(ctx, tenant, table, ownershipRange, metas, logger)
- if err != nil {
- return errors.Wrap(err, "failed to find outdated gaps")
- }
-
- // build new blocks
- built, err := s.buildGaps(ctx, tenant, table, ownershipRange, client, work, tracker, logger)
- if err != nil {
- return errors.Wrap(err, "failed to build gaps")
- }
-
- // combine built and superset metas
- // in preparation for removing outdated ones
- combined := append(superset, built...)
-
- outdated, err := outdatedMetas(combined)
- if err != nil {
- return errors.Wrap(err, "failed to find outdated metas")
- }
- level.Debug(logger).Log("msg", "found outdated metas", "outdated", len(outdated))
-
- var (
- deletedMetas int
- deletedBlocks int
- )
- defer func() {
- s.metrics.metasDeleted.Add(float64(deletedMetas))
- s.metrics.blocksDeleted.Add(float64(deletedBlocks))
- }()
-
- for _, meta := range outdated {
- for _, block := range meta.Blocks {
- err := client.DeleteBlocks(ctx, []bloomshipper.BlockRef{block})
- if err != nil {
- if client.IsObjectNotFoundErr(err) {
- level.Debug(logger).Log("msg", "block not found while attempting delete, continuing", "block", block.String())
- } else {
- level.Error(logger).Log("msg", "failed to delete block", "err", err, "block", block.String())
- return errors.Wrap(err, "failed to delete block")
- }
- }
- deletedBlocks++
- level.Debug(logger).Log("msg", "removed outdated block", "block", block.String())
- }
-
- err = client.DeleteMetas(ctx, []bloomshipper.MetaRef{meta.MetaRef})
- if err != nil {
- if client.IsObjectNotFoundErr(err) {
- level.Debug(logger).Log("msg", "meta not found while attempting delete, continuing", "meta", meta.MetaRef.String())
- } else {
- level.Error(logger).Log("msg", "failed to delete meta", "err", err, "meta", meta.MetaRef.String())
- return errors.Wrap(err, "failed to delete meta")
- }
- }
- deletedMetas++
- level.Debug(logger).Log("msg", "removed outdated meta", "meta", meta.MetaRef.String())
- }
-
- level.Debug(logger).Log("msg", "finished compaction")
- return nil
-}
-
-// fetchSuperSet fetches all metas which overlap the ownership range of the first set of metas we've resolved
-func (s *SimpleBloomController) fetchSuperSet(
- ctx context.Context,
- tenant string,
- table config.DayTable,
- ownershipRange v1.FingerprintBounds,
- metas []bloomshipper.Meta,
- logger log.Logger,
-) ([]bloomshipper.Meta, error) {
- // in order to delete outdates metas which only partially fall within the ownership range,
- // we need to fetcha all metas in the entire bound range of the first set of metas we've resolved
- /*
- For instance, we have the following ownership range and we resolve `meta1` in our first Fetch call
- because it overlaps the ownership range, we'll need to fetch newer metas that may overlap it in order
- to check if it safely can be deleted. This falls partially outside our specific ownership range, but
- we can safely run multiple deletes by treating their removal as idempotent.
- |-------------ownership range-----------------|
- |-------meta1-------|
-
- we fetch this before possibly deleting meta1 |------|
- */
- superset := ownershipRange
- for _, meta := range metas {
- union := superset.Union(meta.Bounds)
- if len(union) > 1 {
- level.Error(logger).Log("msg", "meta bounds union is not a single range", "union", union)
- return nil, errors.New("meta bounds union is not a single range")
- }
- superset = union[0]
- }
-
- within := superset.Within(ownershipRange)
- level.Debug(logger).Log(
- "msg", "looking for superset metas",
- "superset", superset.String(),
- "superset_within", within,
- )
-
- if within {
- // we don't need to fetch any more metas
- // NB(owen-d): here we copy metas into the output. This is slightly inefficient, but
- // helps prevent mutability bugs by returning the same slice as the input.
- results := make([]bloomshipper.Meta, len(metas))
- copy(results, metas)
- return results, nil
- }
-
- supersetMetas, err := s.bloomStore.FetchMetas(
- ctx,
- bloomshipper.MetaSearchParams{
- TenantID: tenant,
- Interval: bloomshipper.NewInterval(table.Bounds()),
- Keyspace: superset,
- },
- )
-
- if err != nil {
- level.Error(logger).Log("msg", "failed to get meta superset range", "err", err, "superset", superset)
- return nil, errors.Wrap(err, "failed to get meta supseret range")
- }
-
- level.Debug(logger).Log(
- "msg", "found superset metas",
- "metas", len(metas),
- "fresh_metas", len(supersetMetas),
- "delta", len(supersetMetas)-len(metas),
- )
-
- return supersetMetas, nil
-}
-
-func (s *SimpleBloomController) findOutdatedGaps(
- ctx context.Context,
- tenant string,
- table config.DayTable,
- ownershipRange v1.FingerprintBounds,
- metas []bloomshipper.Meta,
- logger log.Logger,
-) ([]blockPlan, error) {
- // Resolve TSDBs
- tsdbs, err := s.tsdbStore.ResolveTSDBs(ctx, table, tenant)
- if err != nil {
- level.Error(logger).Log("msg", "failed to resolve tsdbs", "err", err)
- return nil, errors.Wrap(err, "failed to resolve tsdbs")
- }
-
- if len(tsdbs) == 0 {
- return nil, nil
- }
-
- // Determine which TSDBs have gaps in the ownership range and need to
- // be processed.
- tsdbsWithGaps, err := gapsBetweenTSDBsAndMetas(ownershipRange, tsdbs, metas)
- if err != nil {
- level.Error(logger).Log("msg", "failed to find gaps", "err", err)
- return nil, errors.Wrap(err, "failed to find gaps")
- }
-
- if len(tsdbsWithGaps) == 0 {
- level.Debug(logger).Log("msg", "blooms exist for all tsdbs")
- return nil, nil
- }
-
- work, err := blockPlansForGaps(tsdbsWithGaps, metas)
- if err != nil {
- level.Error(logger).Log("msg", "failed to create plan", "err", err)
- return nil, errors.Wrap(err, "failed to create plan")
- }
-
- return work, nil
-}
-
-func (s *SimpleBloomController) loadWorkForGap(
- ctx context.Context,
- table config.DayTable,
- tenant string,
- id tsdb.Identifier,
- gap gapWithBlocks,
-) (iter.Iterator[*v1.Series], iter.CloseResetIterator[*v1.SeriesWithBlooms], error) {
- // load a series iterator for the gap
- seriesItr, err := s.tsdbStore.LoadTSDB(ctx, table, tenant, id, gap.bounds)
- if err != nil {
- return nil, nil, errors.Wrap(err, "failed to load tsdb")
- }
-
- // load a blocks iterator for the gap
- fetcher, err := s.bloomStore.Fetcher(table.ModelTime())
- if err != nil {
- return nil, nil, errors.Wrap(err, "failed to get fetcher")
- }
-
- // NB(owen-d): we filter out nil blocks here to avoid panics in the bloom generator since the fetcher
- // input->output length and indexing in its contract
- // NB(chaudum): Do we want to fetch in strict mode and fail instead?
- f := FetchFunc[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier](func(ctx context.Context, refs []bloomshipper.BlockRef) ([]*bloomshipper.CloseableBlockQuerier, error) {
- blks, err := fetcher.FetchBlocks(ctx, refs, bloomshipper.WithFetchAsync(false), bloomshipper.WithIgnoreNotFound(true))
- if err != nil {
- return nil, err
- }
- exists := make([]*bloomshipper.CloseableBlockQuerier, 0, len(blks))
- for _, blk := range blks {
- if blk != nil {
- exists = append(exists, blk)
- }
- }
- return exists, nil
- })
- blocksIter := newBlockLoadingIter(ctx, gap.blocks, f, 10)
-
- return seriesItr, blocksIter, nil
-}
-
-func (s *SimpleBloomController) buildGaps(
- ctx context.Context,
- tenant string,
- table config.DayTable,
- ownershipRange v1.FingerprintBounds,
- client bloomshipper.Client,
- work []blockPlan,
- tracker *compactionTracker,
- logger log.Logger,
-) ([]bloomshipper.Meta, error) {
- // Generate Blooms
- // Now that we have the gaps, we will generate a bloom block for each gap.
- // We can accelerate this by using existing blocks which may already contain
- // needed chunks in their blooms, for instance after a new TSDB version is generated
- // but contains many of the same chunk references from the previous version.
- // To do this, we'll need to take the metas we've already resolved and find blocks
- // overlapping the ownership ranges we've identified as needing updates.
- // With these in hand, we can download the old blocks and use them to
- // accelerate bloom generation for the new blocks.
-
- blockEnc, err := chunkenc.ParseEncoding(s.limits.BloomBlockEncoding(tenant))
- if err != nil {
- return nil, errors.Wrap(err, "failed to parse block encoding")
- }
-
- var (
- blockCt int
- tsdbCt = len(work)
- nGramSize = uint64(s.limits.BloomNGramLength(tenant))
- nGramSkip = uint64(s.limits.BloomNGramSkip(tenant))
- maxBlockSize = uint64(s.limits.BloomCompactorMaxBlockSize(tenant))
- maxBloomSize = uint64(s.limits.BloomCompactorMaxBloomSize(tenant))
- blockOpts = v1.NewBlockOptions(blockEnc, nGramSize, nGramSkip, maxBlockSize, maxBloomSize)
- created []bloomshipper.Meta
- totalSeries int
- bytesAdded int
- )
-
- for i, plan := range work {
-
- reporter := biasedReporter(func(fp model.Fingerprint) {
- tracker.update(tenant, table.DayTime, ownershipRange, fp)
- }, ownershipRange, i, len(work))
-
- for i := range plan.gaps {
- gap := plan.gaps[i]
- logger := log.With(logger, "gap", gap.bounds.String(), "tsdb", plan.tsdb.Name())
-
- meta := bloomshipper.Meta{
- MetaRef: bloomshipper.MetaRef{
- Ref: bloomshipper.Ref{
- TenantID: tenant,
- TableName: table.Addr(),
- Bounds: gap.bounds,
- },
- },
- Sources: []tsdb.SingleTenantTSDBIdentifier{plan.tsdb},
- }
-
- // Fetch blocks that aren't up to date but are in the desired fingerprint range
- // to try and accelerate bloom creation
- level.Debug(logger).Log("msg", "loading series and blocks for gap", "blocks", len(gap.blocks))
- seriesItr, blocksIter, err := s.loadWorkForGap(ctx, table, tenant, plan.tsdb, gap)
- if err != nil {
- level.Error(logger).Log("msg", "failed to get series and blocks", "err", err)
- return nil, errors.Wrap(err, "failed to get series and blocks")
- }
-
- // TODO(owen-d): more elegant error handling than sync.OnceFunc
- closeBlocksIter := sync.OnceFunc(func() {
- if err := blocksIter.Close(); err != nil {
- level.Error(logger).Log("msg", "failed to close blocks iterator", "err", err)
- }
- })
- defer closeBlocksIter()
-
- // Blocks are built consuming the series iterator. For observability, we wrap the series iterator
- // with a counter iterator to count the number of times Next() is called on it.
- // This is used to observe the number of series that are being processed.
- seriesItrWithCounter := iter.NewCounterIter[*v1.Series](seriesItr)
-
- gen := NewSimpleBloomGenerator(
- tenant,
- blockOpts,
- seriesItrWithCounter,
- s.chunkLoader,
- blocksIter,
- s.writerReaderFunc,
- reporter,
- s.metrics,
- logger,
- )
-
- level.Debug(logger).Log("msg", "generating blocks", "overlapping_blocks", len(gap.blocks))
-
- newBlocks := gen.Generate(ctx)
- if err != nil {
- level.Error(logger).Log("msg", "failed to generate bloom", "err", err)
- return nil, errors.Wrap(err, "failed to generate bloom")
- }
-
- for newBlocks.Next() && newBlocks.Err() == nil {
- blockCt++
- blk := newBlocks.At()
-
- built, err := bloomshipper.BlockFrom(tenant, table.Addr(), blk)
- if err != nil {
- level.Error(logger).Log("msg", "failed to build block", "err", err)
- if err = blk.Reader().Cleanup(); err != nil {
- level.Error(logger).Log("msg", "failed to cleanup block directory", "err", err)
- }
- return nil, errors.Wrap(err, "failed to build block")
- }
-
- if err := client.PutBlock(
- ctx,
- built,
- ); err != nil {
- level.Error(logger).Log("msg", "failed to write block", "err", err)
- if err = blk.Reader().Cleanup(); err != nil {
- level.Error(logger).Log("msg", "failed to cleanup block directory", "err", err)
- }
- return nil, errors.Wrap(err, "failed to write block")
- }
- s.metrics.blocksCreated.Inc()
-
- if err := blk.Reader().Cleanup(); err != nil {
- level.Error(logger).Log("msg", "failed to cleanup block directory", "err", err)
- }
-
- totalGapKeyspace := (gap.bounds.Max - gap.bounds.Min)
- progress := (built.Bounds.Max - gap.bounds.Min)
- pct := float64(progress) / float64(totalGapKeyspace) * 100
- level.Debug(logger).Log(
- "msg", "uploaded block",
- "block", built.BlockRef.String(),
- "progress_pct", fmt.Sprintf("%.2f", pct),
- )
-
- meta.Blocks = append(meta.Blocks, built.BlockRef)
- }
-
- if err := newBlocks.Err(); err != nil {
- level.Error(logger).Log("msg", "failed to generate bloom", "err", err)
- return nil, errors.Wrap(err, "failed to generate bloom")
- }
-
- closeBlocksIter()
- bytesAdded += newBlocks.Bytes()
- totalSeries += seriesItrWithCounter.Count()
- s.metrics.blocksReused.Add(float64(len(gap.blocks)))
-
- // Write the new meta
- // TODO(owen-d): put total size in log, total time in metrics+log
- ref, err := bloomshipper.MetaRefFrom(tenant, table.Addr(), gap.bounds, meta.Sources, meta.Blocks)
- if err != nil {
- level.Error(logger).Log("msg", "failed to checksum meta", "err", err)
- return nil, errors.Wrap(err, "failed to checksum meta")
- }
- meta.MetaRef = ref
-
- if err := client.PutMeta(ctx, meta); err != nil {
- level.Error(logger).Log("msg", "failed to write meta", "err", err)
- return nil, errors.Wrap(err, "failed to write meta")
- }
-
- s.metrics.metasCreated.Inc()
- level.Debug(logger).Log("msg", "uploaded meta", "meta", meta.MetaRef.String())
- created = append(created, meta)
- }
- }
-
- s.metrics.seriesPerCompaction.Observe(float64(totalSeries))
- s.metrics.bytesPerCompaction.Observe(float64(bytesAdded))
- level.Debug(logger).Log("msg", "finished bloom generation", "blocks", blockCt, "tsdbs", tsdbCt, "series", totalSeries, "bytes_added", bytesAdded)
- return created, nil
-}
-
-// Simple way to ensure increasing progress reporting
-// and likely unused in practice because we don't expect to see more than 1 tsdb to compact.
-// We assume each TSDB accounts for the same amount of work and only move progress forward
-// depending on the current TSDB's index. For example, if we have 2 TSDBs and a fingerprint
-// range from 0-100 (valid for both TSDBs), we'll limit reported progress for each TSDB to 50%.
-func biasedReporter(
- f func(model.Fingerprint),
- ownershipRange v1.FingerprintBounds,
- i,
- total int,
-) func(model.Fingerprint) {
- return func(fp model.Fingerprint) {
- clipped := min(max(fp, ownershipRange.Min), ownershipRange.Max)
- delta := (clipped - ownershipRange.Min) / model.Fingerprint(total)
- step := model.Fingerprint(ownershipRange.Range() / uint64(total))
- res := ownershipRange.Min + (step * model.Fingerprint(i)) + delta
- f(res)
- }
-}
-
-func coversFullRange(bounds v1.FingerprintBounds, overlaps []v1.FingerprintBounds) bool {
- // if there are no overlaps, the range is not covered
- if len(overlaps) == 0 {
- return false
- }
-
- // keep track of bounds which need to be filled in order
- // for the overlaps to cover the full range
- missing := []v1.FingerprintBounds{bounds}
- ignores := make(map[int]bool)
- for _, overlap := range overlaps {
- var i int
- for {
- if i >= len(missing) {
- break
- }
-
- if ignores[i] {
- i++
- continue
- }
-
- remaining := missing[i].Unless(overlap)
- switch len(remaining) {
- case 0:
- // this range is covered, ignore it
- ignores[i] = true
- case 1:
- // this range is partially covered, updated it
- missing[i] = remaining[0]
- case 2:
- // this range has been partially covered in the middle,
- // split it into two ranges and append
- ignores[i] = true
- missing = append(missing, remaining...)
- }
- i++
- }
-
- }
-
- return len(ignores) == len(missing)
-}
-
-type gapWithBlocks struct {
- bounds v1.FingerprintBounds
- blocks []bloomshipper.BlockRef
-}
-
-// blockPlan is a plan for all the work needed to build a meta.json
-// It includes:
-// - the tsdb (source of truth) which contains all the series+chunks
-// we need to ensure are indexed in bloom blocks
-// - a list of gaps that are out of date and need to be checked+built
-// - within each gap, a list of block refs which overlap the gap are included
-// so we can use them to accelerate bloom generation. They likely contain many
-// of the same chunks we need to ensure are indexed, just from previous tsdb iterations.
-// This is a performance optimization to avoid expensive re-reindexing
-type blockPlan struct {
- tsdb tsdb.SingleTenantTSDBIdentifier
- gaps []gapWithBlocks
-}
-
-// blockPlansForGaps groups tsdb gaps we wish to fill with overlapping but out of date blocks.
-// This allows us to expedite bloom generation by using existing blocks to fill in the gaps
-// since many will contain the same chunks.
-func blockPlansForGaps(tsdbs []tsdbGaps, metas []bloomshipper.Meta) ([]blockPlan, error) {
- plans := make([]blockPlan, 0, len(tsdbs))
-
- for _, idx := range tsdbs {
- plan := blockPlan{
- tsdb: idx.tsdb,
- gaps: make([]gapWithBlocks, 0, len(idx.gaps)),
- }
-
- for _, gap := range idx.gaps {
- planGap := gapWithBlocks{
- bounds: gap,
- }
-
- for _, meta := range metas {
-
- if meta.Bounds.Intersection(gap) == nil {
- // this meta doesn't overlap the gap, skip
- continue
- }
-
- for _, block := range meta.Blocks {
- if block.Bounds.Intersection(gap) == nil {
- // this block doesn't overlap the gap, skip
- continue
- }
- // this block overlaps the gap, add it to the plan
- // for this gap
- planGap.blocks = append(planGap.blocks, block)
- }
- }
-
- // ensure we sort blocks so deduping iterator works as expected
- sort.Slice(planGap.blocks, func(i, j int) bool {
- return planGap.blocks[i].Bounds.Less(planGap.blocks[j].Bounds)
- })
-
- peekingBlocks := iter.NewPeekIter[bloomshipper.BlockRef](
- iter.NewSliceIter[bloomshipper.BlockRef](
- planGap.blocks,
- ),
- )
- // dedupe blocks which could be in multiple metas
- itr := iter.NewDedupingIter[bloomshipper.BlockRef, bloomshipper.BlockRef](
- func(a, b bloomshipper.BlockRef) bool {
- return a == b
- },
- iter.Identity[bloomshipper.BlockRef],
- func(a, _ bloomshipper.BlockRef) bloomshipper.BlockRef {
- return a
- },
- peekingBlocks,
- )
-
- deduped, err := iter.Collect[bloomshipper.BlockRef](itr)
- if err != nil {
- return nil, errors.Wrap(err, "failed to dedupe blocks")
- }
- planGap.blocks = deduped
-
- plan.gaps = append(plan.gaps, planGap)
- }
-
- plans = append(plans, plan)
- }
-
- return plans, nil
-}
-
-// Used to signal the gaps that need to be populated for a tsdb
-type tsdbGaps struct {
- tsdb tsdb.SingleTenantTSDBIdentifier
- gaps []v1.FingerprintBounds
-}
-
-// tsdbsUpToDate returns if the metas are up to date with the tsdbs. This is determined by asserting
-// that for each TSDB, there are metas covering the entire ownership range which were generated from that specific TSDB.
-func gapsBetweenTSDBsAndMetas(
- ownershipRange v1.FingerprintBounds,
- tsdbs []tsdb.SingleTenantTSDBIdentifier,
- metas []bloomshipper.Meta,
-) (res []tsdbGaps, err error) {
- for _, db := range tsdbs {
- id := db.Name()
-
- relevantMetas := make([]v1.FingerprintBounds, 0, len(metas))
- for _, meta := range metas {
- for _, s := range meta.Sources {
- if s.Name() == id {
- relevantMetas = append(relevantMetas, meta.Bounds)
- }
- }
- }
-
- gaps, err := findGaps(ownershipRange, relevantMetas)
- if err != nil {
- return nil, err
- }
-
- if len(gaps) > 0 {
- res = append(res, tsdbGaps{
- tsdb: db,
- gaps: gaps,
- })
- }
- }
-
- return res, err
-}
-
-func findGaps(ownershipRange v1.FingerprintBounds, metas []v1.FingerprintBounds) (gaps []v1.FingerprintBounds, err error) {
- if len(metas) == 0 {
- return []v1.FingerprintBounds{ownershipRange}, nil
- }
-
- // turn the available metas into a list of non-overlapping metas
- // for easier processing
- var nonOverlapping []v1.FingerprintBounds
- // First, we reduce the metas into a smaller set by combining overlaps. They must be sorted.
- var cur *v1.FingerprintBounds
- for i := 0; i < len(metas); i++ {
- j := i + 1
-
- // first iteration (i == 0), set the current meta
- if cur == nil {
- cur = &metas[i]
- }
-
- if j >= len(metas) {
- // We've reached the end of the list. Add the last meta to the non-overlapping set.
- nonOverlapping = append(nonOverlapping, *cur)
- break
- }
-
- combined := cur.Union(metas[j])
- if len(combined) == 1 {
- // There was an overlap between the two tested ranges. Combine them and keep going.
- cur = &combined[0]
- continue
- }
-
- // There was no overlap between the two tested ranges. Add the first to the non-overlapping set.
- // and keep the second for the next iteration.
- nonOverlapping = append(nonOverlapping, combined[0])
- cur = &combined[1]
- }
-
- // Now, detect gaps between the non-overlapping metas and the ownership range.
- // The left bound of the ownership range will be adjusted as we go.
- leftBound := ownershipRange.Min
- for _, meta := range nonOverlapping {
-
- clippedMeta := meta.Intersection(ownershipRange)
- // should never happen as long as we are only combining metas
- // that intersect with the ownership range
- if clippedMeta == nil {
- return nil, fmt.Errorf("meta is not within ownership range: %v", meta)
- }
-
- searchRange := ownershipRange.Slice(leftBound, clippedMeta.Max)
- // update the left bound for the next iteration
- // We do the max to prevent the max bound to overflow from MaxUInt64 to 0
- leftBound = min(
- max(clippedMeta.Max+1, clippedMeta.Max),
- max(ownershipRange.Max+1, ownershipRange.Max),
- )
-
- // since we've already ensured that the meta is within the ownership range,
- // we know the xor will be of length zero (when the meta is equal to the ownership range)
- // or 1 (when the meta is a subset of the ownership range)
- xors := searchRange.Unless(*clippedMeta)
- if len(xors) == 0 {
- // meta is equal to the ownership range. This means the meta
- // covers this entire section of the ownership range.
- continue
- }
-
- gaps = append(gaps, xors[0])
- }
-
- // If the leftBound is less than the ownership range max, and it's smaller than MaxUInt64,
- // There is a gap between the last meta and the end of the ownership range.
- // Note: we check `leftBound < math.MaxUint64` since in the loop above we clamp the
- // leftBound to MaxUint64 to prevent an overflow to 0: `max(clippedMeta.Max+1, clippedMeta.Max)`
- if leftBound < math.MaxUint64 && leftBound <= ownershipRange.Max {
- gaps = append(gaps, v1.NewBounds(leftBound, ownershipRange.Max))
- }
-
- return gaps, nil
-}
diff --git a/pkg/bloomcompactor/controller_test.go b/pkg/bloomcompactor/controller_test.go
deleted file mode 100644
index 5c6a506473476..0000000000000
--- a/pkg/bloomcompactor/controller_test.go
+++ /dev/null
@@ -1,584 +0,0 @@
-package bloomcompactor
-
-import (
- "fmt"
- "math"
- "testing"
- "time"
-
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/require"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
-)
-
-func Test_findGaps(t *testing.T) {
- for _, tc := range []struct {
- desc string
- err bool
- exp []v1.FingerprintBounds
- ownershipRange v1.FingerprintBounds
- metas []v1.FingerprintBounds
- }{
- {
- desc: "error nonoverlapping metas",
- err: true,
- exp: nil,
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{v1.NewBounds(11, 20)},
- },
- {
- desc: "one meta with entire ownership range",
- err: false,
- exp: nil,
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{v1.NewBounds(0, 10)},
- },
- {
- desc: "two non-overlapping metas with entire ownership range",
- err: false,
- exp: nil,
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{
- v1.NewBounds(0, 5),
- v1.NewBounds(6, 10),
- },
- },
- {
- desc: "two overlapping metas with entire ownership range",
- err: false,
- exp: nil,
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{
- v1.NewBounds(0, 6),
- v1.NewBounds(4, 10),
- },
- },
- {
- desc: "one meta with partial ownership range",
- err: false,
- exp: []v1.FingerprintBounds{
- v1.NewBounds(6, 10),
- },
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{
- v1.NewBounds(0, 5),
- },
- },
- {
- desc: "smaller subsequent meta with partial ownership range",
- err: false,
- exp: []v1.FingerprintBounds{
- v1.NewBounds(8, 10),
- },
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{
- v1.NewBounds(0, 7),
- v1.NewBounds(3, 4),
- },
- },
- {
- desc: "hole in the middle",
- err: false,
- exp: []v1.FingerprintBounds{
- v1.NewBounds(4, 5),
- },
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{
- v1.NewBounds(0, 3),
- v1.NewBounds(6, 10),
- },
- },
- {
- desc: "holes on either end",
- err: false,
- exp: []v1.FingerprintBounds{
- v1.NewBounds(0, 2),
- v1.NewBounds(8, 10),
- },
- ownershipRange: v1.NewBounds(0, 10),
- metas: []v1.FingerprintBounds{
- v1.NewBounds(3, 5),
- v1.NewBounds(6, 7),
- },
- },
- {
- desc: "full ownership range with single meta",
- err: false,
- exp: nil,
- ownershipRange: v1.NewBounds(0, math.MaxUint64),
- metas: []v1.FingerprintBounds{
- v1.NewBounds(0, math.MaxUint64),
- },
- },
- {
- desc: "full ownership range with multiple metas",
- err: false,
- exp: nil,
- ownershipRange: v1.NewBounds(0, math.MaxUint64),
- // Three metas covering the whole 0 - MaxUint64
- metas: []v1.FingerprintBounds{
- v1.NewBounds(0, math.MaxUint64/3),
- v1.NewBounds(math.MaxUint64/3+1, math.MaxUint64/2),
- v1.NewBounds(math.MaxUint64/2+1, math.MaxUint64),
- },
- },
- } {
- t.Run(tc.desc, func(t *testing.T) {
- gaps, err := findGaps(tc.ownershipRange, tc.metas)
- if tc.err {
- require.Error(t, err)
- return
- }
- require.Equal(t, tc.exp, gaps)
- })
- }
-}
-
-func tsdbID(n int) tsdb.SingleTenantTSDBIdentifier {
- return tsdb.SingleTenantTSDBIdentifier{
- TS: time.Unix(int64(n), 0),
- }
-}
-
-func genMeta(min, max model.Fingerprint, sources []int, blocks []bloomshipper.BlockRef) bloomshipper.Meta {
- m := bloomshipper.Meta{
- MetaRef: bloomshipper.MetaRef{
- Ref: bloomshipper.Ref{
- Bounds: v1.NewBounds(min, max),
- },
- },
- Blocks: blocks,
- }
- for _, source := range sources {
- m.Sources = append(m.Sources, tsdbID(source))
- }
- return m
-}
-
-func Test_gapsBetweenTSDBsAndMetas(t *testing.T) {
-
- for _, tc := range []struct {
- desc string
- err bool
- exp []tsdbGaps
- ownershipRange v1.FingerprintBounds
- tsdbs []tsdb.SingleTenantTSDBIdentifier
- metas []bloomshipper.Meta
- }{
- {
- desc: "non-overlapping tsdbs and metas",
- err: true,
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0)},
- metas: []bloomshipper.Meta{
- genMeta(11, 20, []int{0}, nil),
- },
- },
- {
- desc: "single tsdb",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0)},
- metas: []bloomshipper.Meta{
- genMeta(4, 8, []int{0}, nil),
- },
- exp: []tsdbGaps{
- {
- tsdb: tsdbID(0),
- gaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 3),
- v1.NewBounds(9, 10),
- },
- },
- },
- },
- {
- desc: "multiple tsdbs with separate blocks",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0), tsdbID(1)},
- metas: []bloomshipper.Meta{
- genMeta(0, 5, []int{0}, nil),
- genMeta(6, 10, []int{1}, nil),
- },
- exp: []tsdbGaps{
- {
- tsdb: tsdbID(0),
- gaps: []v1.FingerprintBounds{
- v1.NewBounds(6, 10),
- },
- },
- {
- tsdb: tsdbID(1),
- gaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 5),
- },
- },
- },
- },
- {
- desc: "multiple tsdbs with the same blocks",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0), tsdbID(1)},
- metas: []bloomshipper.Meta{
- genMeta(0, 5, []int{0, 1}, nil),
- genMeta(6, 8, []int{1}, nil),
- },
- exp: []tsdbGaps{
- {
- tsdb: tsdbID(0),
- gaps: []v1.FingerprintBounds{
- v1.NewBounds(6, 10),
- },
- },
- {
- tsdb: tsdbID(1),
- gaps: []v1.FingerprintBounds{
- v1.NewBounds(9, 10),
- },
- },
- },
- },
- } {
- t.Run(tc.desc, func(t *testing.T) {
- gaps, err := gapsBetweenTSDBsAndMetas(tc.ownershipRange, tc.tsdbs, tc.metas)
- if tc.err {
- require.Error(t, err)
- return
- }
- require.Equal(t, tc.exp, gaps)
- })
- }
-}
-
-func genBlockRef(min, max model.Fingerprint) bloomshipper.BlockRef {
- bounds := v1.NewBounds(min, max)
- return bloomshipper.BlockRef{
- Ref: bloomshipper.Ref{
- Bounds: bounds,
- },
- }
-}
-
-func Test_blockPlansForGaps(t *testing.T) {
- for _, tc := range []struct {
- desc string
- ownershipRange v1.FingerprintBounds
- tsdbs []tsdb.SingleTenantTSDBIdentifier
- metas []bloomshipper.Meta
- err bool
- exp []blockPlan
- }{
- {
- desc: "single overlapping meta+no overlapping block",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0)},
- metas: []bloomshipper.Meta{
- genMeta(5, 20, []int{1}, []bloomshipper.BlockRef{genBlockRef(11, 20)}),
- },
- exp: []blockPlan{
- {
- tsdb: tsdbID(0),
- gaps: []gapWithBlocks{
- {
- bounds: v1.NewBounds(0, 10),
- },
- },
- },
- },
- },
- {
- desc: "single overlapping meta+one overlapping block",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0)},
- metas: []bloomshipper.Meta{
- genMeta(5, 20, []int{1}, []bloomshipper.BlockRef{genBlockRef(9, 20)}),
- },
- exp: []blockPlan{
- {
- tsdb: tsdbID(0),
- gaps: []gapWithBlocks{
- {
- bounds: v1.NewBounds(0, 10),
- blocks: []bloomshipper.BlockRef{genBlockRef(9, 20)},
- },
- },
- },
- },
- },
- {
- // the range which needs to be generated doesn't overlap with existing blocks
- // from other tsdb versions since theres an up to date tsdb version block,
- // but we can trim the range needing generation
- desc: "trims up to date area",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0)},
- metas: []bloomshipper.Meta{
- genMeta(9, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(9, 20)}), // block for same tsdb
- genMeta(9, 20, []int{1}, []bloomshipper.BlockRef{genBlockRef(9, 20)}), // block for different tsdb
- },
- exp: []blockPlan{
- {
- tsdb: tsdbID(0),
- gaps: []gapWithBlocks{
- {
- bounds: v1.NewBounds(0, 8),
- },
- },
- },
- },
- },
- {
- desc: "uses old block for overlapping range",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0)},
- metas: []bloomshipper.Meta{
- genMeta(9, 20, []int{0}, []bloomshipper.BlockRef{genBlockRef(9, 20)}), // block for same tsdb
- genMeta(5, 20, []int{1}, []bloomshipper.BlockRef{genBlockRef(5, 20)}), // block for different tsdb
- },
- exp: []blockPlan{
- {
- tsdb: tsdbID(0),
- gaps: []gapWithBlocks{
- {
- bounds: v1.NewBounds(0, 8),
- blocks: []bloomshipper.BlockRef{genBlockRef(5, 20)},
- },
- },
- },
- },
- },
- {
- desc: "multi case",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0), tsdbID(1)}, // generate for both tsdbs
- metas: []bloomshipper.Meta{
- genMeta(0, 2, []int{0}, []bloomshipper.BlockRef{
- genBlockRef(0, 1),
- genBlockRef(1, 2),
- }), // tsdb_0
- genMeta(6, 8, []int{0}, []bloomshipper.BlockRef{genBlockRef(6, 8)}), // tsdb_0
-
- genMeta(3, 5, []int{1}, []bloomshipper.BlockRef{genBlockRef(3, 5)}), // tsdb_1
- genMeta(8, 10, []int{1}, []bloomshipper.BlockRef{genBlockRef(8, 10)}), // tsdb_1
- },
- exp: []blockPlan{
- {
- tsdb: tsdbID(0),
- gaps: []gapWithBlocks{
- // tsdb (id=0) can source chunks from the blocks built from tsdb (id=1)
- {
- bounds: v1.NewBounds(3, 5),
- blocks: []bloomshipper.BlockRef{genBlockRef(3, 5)},
- },
- {
- bounds: v1.NewBounds(9, 10),
- blocks: []bloomshipper.BlockRef{genBlockRef(8, 10)},
- },
- },
- },
- // tsdb (id=1) can source chunks from the blocks built from tsdb (id=0)
- {
- tsdb: tsdbID(1),
- gaps: []gapWithBlocks{
- {
- bounds: v1.NewBounds(0, 2),
- blocks: []bloomshipper.BlockRef{
- genBlockRef(0, 1),
- genBlockRef(1, 2),
- },
- },
- {
- bounds: v1.NewBounds(6, 7),
- blocks: []bloomshipper.BlockRef{genBlockRef(6, 8)},
- },
- },
- },
- },
- },
- {
- desc: "dedupes block refs",
- ownershipRange: v1.NewBounds(0, 10),
- tsdbs: []tsdb.SingleTenantTSDBIdentifier{tsdbID(0)},
- metas: []bloomshipper.Meta{
- genMeta(9, 20, []int{1}, []bloomshipper.BlockRef{
- genBlockRef(1, 4),
- genBlockRef(9, 20),
- }), // blocks for first diff tsdb
- genMeta(5, 20, []int{2}, []bloomshipper.BlockRef{
- genBlockRef(5, 10),
- genBlockRef(9, 20), // same block references in prior meta (will be deduped)
- }), // block for second diff tsdb
- },
- exp: []blockPlan{
- {
- tsdb: tsdbID(0),
- gaps: []gapWithBlocks{
- {
- bounds: v1.NewBounds(0, 10),
- blocks: []bloomshipper.BlockRef{
- genBlockRef(1, 4),
- genBlockRef(5, 10),
- genBlockRef(9, 20),
- },
- },
- },
- },
- },
- },
- } {
- t.Run(tc.desc, func(t *testing.T) {
- // we reuse the gapsBetweenTSDBsAndMetas function to generate the gaps as this function is tested
- // separately and it's used to generate input in our regular code path (easier to write tests this way).
- gaps, err := gapsBetweenTSDBsAndMetas(tc.ownershipRange, tc.tsdbs, tc.metas)
- require.NoError(t, err)
-
- plans, err := blockPlansForGaps(gaps, tc.metas)
- if tc.err {
- require.Error(t, err)
- return
- }
- require.Equal(t, tc.exp, plans)
-
- })
- }
-}
-
-func Test_coversFullRange(t *testing.T) {
- for _, tc := range []struct {
- desc string
- src v1.FingerprintBounds
- overlaps []v1.FingerprintBounds
- exp bool
- }{
- {
- desc: "empty",
- src: v1.NewBounds(0, 10),
- overlaps: []v1.FingerprintBounds{},
- exp: false,
- },
- {
- desc: "single_full_range",
- src: v1.NewBounds(0, 10),
- overlaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 10),
- },
- exp: true,
- },
- {
- desc: "single_partial_range",
- src: v1.NewBounds(0, 10),
- overlaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 5),
- },
- exp: false,
- },
- {
- desc: "multiple_full_ranges",
- src: v1.NewBounds(0, 10),
- overlaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 5),
- v1.NewBounds(6, 10),
- },
- exp: true,
- },
- {
- desc: "multiple_partial_ranges",
- src: v1.NewBounds(0, 10),
- overlaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 5),
- v1.NewBounds(7, 8),
- },
- exp: false,
- },
- {
- desc: "wraps_partial_range",
- src: v1.NewBounds(10, 20),
- overlaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 12),
- v1.NewBounds(13, 15),
- v1.NewBounds(19, 21),
- },
- exp: false,
- },
- {
- desc: "wraps_full_range",
- src: v1.NewBounds(10, 20),
- overlaps: []v1.FingerprintBounds{
- v1.NewBounds(0, 12),
- v1.NewBounds(13, 15),
- v1.NewBounds(16, 25),
- },
- exp: true,
- },
- } {
- t.Run(tc.desc, func(t *testing.T) {
- require.Equal(t, tc.exp, coversFullRange(tc.src, tc.overlaps))
- })
- }
-}
-
-func TestBiasedReporter(t *testing.T) {
- for i, tc := range []struct {
- bounds v1.FingerprintBounds
- originalFPs [][]model.Fingerprint
- expectedFPs [][]model.Fingerprint
- }{
- {
- bounds: v1.NewBounds(0, 10),
- originalFPs: [][]model.Fingerprint{
- {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- },
- expectedFPs: [][]model.Fingerprint{
- {0, 0, 1, 1, 2, 2, 3, 3, 4, 4},
- {5, 5, 6, 6, 7, 7, 8, 8, 9, 9},
- },
- },
- {
- bounds: v1.NewBounds(0, 9), // small resolution loss when dividing by 2
- originalFPs: [][]model.Fingerprint{
- {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- },
- expectedFPs: [][]model.Fingerprint{
- {0, 0, 1, 1, 2, 2, 3, 3, 4, 4},
- {4, 4, 5, 5, 6, 6, 7, 7, 8, 8},
- },
- },
- {
- bounds: v1.NewBounds(0, 10),
- originalFPs: [][]model.Fingerprint{
- {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- },
- expectedFPs: [][]model.Fingerprint{
- {0, 0, 0, 1, 1, 1, 2, 2, 2, 3},
- {3, 3, 3, 4, 4, 4, 5, 5, 5, 6},
- {6, 6, 6, 7, 7, 7, 8, 8, 8, 9},
- },
- },
- } {
- t.Run(fmt.Sprint(i), func(t *testing.T) {
- for i, inputs := range tc.originalFPs {
-
- validator := func(exp []model.Fingerprint) func(model.Fingerprint) {
- j := 0
- return func(fp model.Fingerprint) {
- require.Equal(t, int(exp[j]), int(fp))
- j++
- }
- }(tc.expectedFPs[i])
-
- biased := biasedReporter(validator, tc.bounds, i, len(tc.originalFPs))
-
- for _, fp := range inputs {
- biased(fp)
- }
-
- }
- })
- }
-}
diff --git a/pkg/bloomcompactor/metrics.go b/pkg/bloomcompactor/metrics.go
deleted file mode 100644
index d569a4dbfd82d..0000000000000
--- a/pkg/bloomcompactor/metrics.go
+++ /dev/null
@@ -1,229 +0,0 @@
-package bloomcompactor
-
-import (
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/client_golang/prometheus/promauto"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
-)
-
-const (
- metricsNamespace = "loki"
- metricsSubsystem = "bloomcompactor"
-
- statusSuccess = "success"
- statusFailure = "failure"
-
- tenantLabel = "tenant"
-)
-
-type Metrics struct {
- bloomMetrics *v1.Metrics
- compactorRunning prometheus.Gauge
- chunkSize prometheus.Histogram // uncompressed size of all chunks summed per series
-
- compactionsStarted prometheus.Counter
- compactionCompleted *prometheus.CounterVec
- compactionTime *prometheus.HistogramVec
-
- tenantsDiscovered prometheus.Counter
- tenantsOwned prometheus.Counter
- tenantsSkipped prometheus.Counter
- tenantsStarted prometheus.Counter
- tenantTableRanges *prometheus.CounterVec
- seriesPerCompaction prometheus.Histogram
- bytesPerCompaction prometheus.Histogram
-
- blocksReused prometheus.Counter
-
- blocksCreated prometheus.Counter
- blocksDeleted prometheus.Counter
- metasCreated prometheus.Counter
- metasDeleted prometheus.Counter
-
- progress prometheus.Gauge
- timePerTenant *prometheus.CounterVec
-
- // Retention metrics
- retentionRunning prometheus.Gauge
- retentionTime *prometheus.HistogramVec
- retentionDaysPerIteration *prometheus.HistogramVec
- retentionTenantsPerIteration *prometheus.HistogramVec
- retentionTenantsExceedingLookback prometheus.Gauge
-}
-
-func NewMetrics(r prometheus.Registerer, bloomMetrics *v1.Metrics) *Metrics {
- m := Metrics{
- bloomMetrics: bloomMetrics,
- compactorRunning: promauto.With(r).NewGauge(prometheus.GaugeOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "running",
- Help: "Value will be 1 if compactor is currently running on this instance",
- }),
- chunkSize: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "chunk_series_size",
- Help: "Uncompressed size of chunks in a series",
- // 256B -> 100GB, 10 buckets
- Buckets: prometheus.ExponentialBucketsRange(256, 100<<30, 10),
- }),
-
- compactionsStarted: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "compactions_started_total",
- Help: "Total number of compactions started",
- }),
- compactionCompleted: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "compactions_completed_total",
- Help: "Total number of compactions completed",
- }, []string{"status"}),
- compactionTime: promauto.With(r).NewHistogramVec(prometheus.HistogramOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "compactions_time_seconds",
- Help: "Time spent during a compaction cycle.",
- Buckets: prometheus.DefBuckets,
- }, []string{"status"}),
-
- tenantsDiscovered: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "tenants_discovered_total",
- Help: "Number of tenants discovered during the current compaction run",
- }),
- tenantsOwned: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "tenants_owned",
- Help: "Number of tenants owned by this instance",
- }),
- tenantsSkipped: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "tenants_skipped_total",
- Help: "Number of tenants skipped since they are not owned by this instance",
- }),
- tenantsStarted: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "tenants_started_total",
- Help: "Number of tenants started to process during the current compaction run",
- }),
- tenantTableRanges: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "tenant_table_ranges_completed_total",
- Help: "Number of tenants table ranges (table, tenant, keyspace) processed during the current compaction run",
- }, []string{"status"}),
- seriesPerCompaction: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "series_per_compaction",
- Help: "Number of series during compaction (tenant, table, fingerprint-range). Includes series which copied from other blocks and don't need to be indexed",
- // Up to 10M series per tenant, way more than what we expect given our max_global_streams_per_user limits
- Buckets: prometheus.ExponentialBucketsRange(1, 10e6, 10),
- }),
- bytesPerCompaction: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "bytes_per_compaction",
- Help: "Number of source bytes from chunks added during a compaction cycle (the tenant, table, keyspace tuple).",
- // 1KB -> 100GB, 10 buckets
- Buckets: prometheus.ExponentialBucketsRange(1<<10, 100<<30, 10),
- }),
- blocksReused: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "blocks_reused_total",
- Help: "Number of overlapping bloom blocks reused when creating new blocks",
- }),
- blocksCreated: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "blocks_created_total",
- Help: "Number of blocks created",
- }),
- blocksDeleted: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "blocks_deleted_total",
- Help: "Number of blocks deleted",
- }),
- metasCreated: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "metas_created_total",
- Help: "Number of metas created",
- }),
- metasDeleted: promauto.With(r).NewCounter(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "metas_deleted_total",
- Help: "Number of metas deleted",
- }),
-
- progress: promauto.With(r).NewGauge(prometheus.GaugeOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "progress",
- Help: "Progress of the compaction process as a percentage. 1 means compaction is complete.",
- }),
-
- // TODO(owen-d): cleanup tenant metrics over time as ring changes
- // TODO(owen-d): histogram for distributions?
- timePerTenant: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "tenant_compaction_seconds_total",
- Help: "Time spent processing a tenant.",
- }, []string{tenantLabel}),
-
- // Retention
- retentionRunning: promauto.With(r).NewGauge(prometheus.GaugeOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "retention_running",
- Help: "1 if retention is running in this compactor.",
- }),
-
- retentionTime: promauto.With(r).NewHistogramVec(prometheus.HistogramOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "retention_time_seconds",
- Help: "Time this retention process took to complete.",
- Buckets: prometheus.DefBuckets,
- }, []string{"status"}),
-
- retentionDaysPerIteration: promauto.With(r).NewHistogramVec(prometheus.HistogramOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "retention_days_processed",
- Help: "Number of days iterated over during the retention process.",
- // 1day -> 5 years, 10 buckets
- Buckets: prometheus.ExponentialBucketsRange(1, 365*5, 10),
- }, []string{"status"}),
-
- retentionTenantsPerIteration: promauto.With(r).NewHistogramVec(prometheus.HistogramOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "retention_tenants_processed",
- Help: "Number of tenants on which retention was applied during the retention process.",
- // 1 tenant -> 10k tenants, 10 buckets
- Buckets: prometheus.ExponentialBucketsRange(1, 10000, 10),
- }, []string{"status"}),
-
- retentionTenantsExceedingLookback: promauto.With(r).NewGauge(prometheus.GaugeOpts{
- Namespace: metricsNamespace,
- Subsystem: metricsSubsystem,
- Name: "retention_tenants_exceeding_lookback",
- Help: "Number of tenants with a retention exceeding the configured retention lookback.",
- }),
- }
-
- return &m
-}
diff --git a/pkg/bloomcompactor/retention.go b/pkg/bloomcompactor/retention.go
deleted file mode 100644
index caaf80ffb9c3f..0000000000000
--- a/pkg/bloomcompactor/retention.go
+++ /dev/null
@@ -1,320 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "flag"
- "math"
- "slices"
- "time"
-
- "github.com/go-kit/log"
- "github.com/go-kit/log/level"
- "github.com/grafana/dskit/ring"
- "github.com/pkg/errors"
- "github.com/prometheus/common/model"
-
- "github.com/grafana/loki/v3/pkg/storage/chunk/client"
- storageconfig "github.com/grafana/loki/v3/pkg/storage/config"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- "github.com/grafana/loki/v3/pkg/validation"
-)
-
-type retentionSharding interface {
- OwnsRetention() (bool, error)
-}
-
-type firstTokenRetentionSharding struct {
- ring ring.ReadRing
- ringLifeCycler *ring.BasicLifecycler
-}
-
-func newFirstTokenRetentionSharding(ring ring.ReadRing, ringLifeCycler *ring.BasicLifecycler) *firstTokenRetentionSharding {
- return &firstTokenRetentionSharding{
- ring: ring,
- ringLifeCycler: ringLifeCycler,
- }
-}
-
-// OwnsRetention returns true if the compactor should apply retention.
-// This is determined by checking if the compactor owns the smaller token in the ring.
-// Note that during a ring topology change, more than one compactor may attempt to apply retention.
-// This is fine since retention consists on deleting old data which should be idempotent.
-func (s *firstTokenRetentionSharding) OwnsRetention() (bool, error) {
- rs, err := s.ring.GetAllHealthy(RingOp)
- if err != nil {
- return false, errors.Wrap(err, "getting ring healthy instances")
- }
- if len(rs.Instances) == 0 {
- return false, errors.New("no healthy instances in ring")
- }
-
- // Lookup the instance with smaller token
- instance := slices.MinFunc(rs.Instances, func(a, b ring.InstanceDesc) int {
- smallerA := slices.Min(a.GetTokens())
- smallerB := slices.Min(b.GetTokens())
- if smallerA < smallerB {
- return -1
- }
- if smallerA > smallerB {
- return 1
- }
- return 0
- })
-
- return instance.GetId() == s.ringLifeCycler.GetInstanceID(), nil
-}
-
-type RetentionConfig struct {
- Enabled bool `yaml:"enabled"`
- MaxLookbackDays int `yaml:"max_lookback_days"`
-}
-
-func (cfg *RetentionConfig) RegisterFlags(f *flag.FlagSet) {
- f.BoolVar(&cfg.Enabled, "bloom-compactor.retention.enabled", false, "Enable bloom retention.")
- f.IntVar(&cfg.MaxLookbackDays, "bloom-compactor.retention.max-lookback-days", 365, "Max lookback days for retention.")
-}
-
-func (cfg *RetentionConfig) Validate() error {
- if !cfg.Enabled {
- return nil
- }
-
- if cfg.MaxLookbackDays < 1 {
- return errors.New("max lookback days must be a positive number")
- }
- return nil
-}
-
-type RetentionLimits interface {
- RetentionPeriod(userID string) time.Duration
- StreamRetention(userID string) []validation.StreamRetention
- AllByUserID() map[string]*validation.Limits
- DefaultLimits() *validation.Limits
-}
-
-type RetentionManager struct {
- cfg RetentionConfig
- limits RetentionLimits
- bloomStore bloomshipper.StoreBase
- sharding retentionSharding
- metrics *Metrics
- logger log.Logger
- lastDayRun storageconfig.DayTime
-
- // For testing
- now func() model.Time
-}
-
-func NewRetentionManager(
- cfg RetentionConfig,
- limits RetentionLimits,
- bloomStore bloomshipper.StoreBase,
- sharding retentionSharding,
- metrics *Metrics,
- logger log.Logger,
-) *RetentionManager {
- return &RetentionManager{
- cfg: cfg,
- limits: limits,
- bloomStore: bloomStore,
- sharding: sharding,
- metrics: metrics,
- logger: log.With(logger, "subcomponent", "retention-manager"),
- now: model.Now,
- lastDayRun: storageconfig.NewDayTime(0),
- }
-}
-
-func (r *RetentionManager) Apply(ctx context.Context) error {
- if !r.cfg.Enabled {
- level.Debug(r.logger).Log("msg", "retention is disabled")
- return nil
- }
-
- start := r.now()
- today := storageconfig.NewDayTime(start)
- if !today.After(r.lastDayRun) {
- // We've already run retention for today
- return nil
- }
-
- ownsRetention, err := r.sharding.OwnsRetention()
- if err != nil {
- return errors.Wrap(err, "checking if compactor owns retention")
- }
- if !ownsRetention {
- level.Debug(r.logger).Log("msg", "this compactor doesn't own retention")
- return nil
- }
-
- level.Info(r.logger).Log("msg", "Applying retention", "today", today.String(), "lastDayRun", r.lastDayRun.String())
- r.metrics.retentionRunning.Set(1)
- defer r.metrics.retentionRunning.Set(0)
-
- tenantsRetention := retentionByTenant(r.limits)
- r.reportTenantsExceedingLookback(tenantsRetention)
-
- defaultLimits := r.limits.DefaultLimits()
- defaultRetention := findLongestRetention(time.Duration(defaultLimits.RetentionPeriod), defaultLimits.StreamRetention)
-
- smallestRetention := smallestEnabledRetention(defaultRetention, tenantsRetention)
- if smallestRetention == 0 {
- level.Debug(r.logger).Log("msg", "no retention period set for any tenant, skipping retention")
- return nil
- }
-
- // Start day is today minus the smallest retention period.
- // Note that the last retention day is exclusive. E.g. 30 days retention means we keep 30 days of data,
- // thus we start deleting data from the 31st day onwards.
- startDay := storageconfig.NewDayTime(today.Add(-smallestRetention)).Dec()
- // End day is today minus the max lookback days
- endDay := storageconfig.NewDayTime(today.Add(-time.Duration(r.cfg.MaxLookbackDays) * 24 * time.Hour))
-
- var daysProcessed int
- tenantsRetentionApplied := make(map[string]struct{}, 100)
- for day := startDay; day.After(endDay); day = day.Dec() {
- dayLogger := log.With(r.logger, "day", day.String())
- bloomClient, err := r.bloomStore.Client(day.ModelTime())
- if err != nil {
- level.Error(dayLogger).Log("msg", "failed to get bloom store client", "err", err)
- break
- }
- objectClient := bloomClient.ObjectClient()
-
- tenants, err := r.bloomStore.TenantFilesForInterval(
- ctx, bloomshipper.NewInterval(day.Bounds()),
- func(tenant string, _ client.StorageObject) bool {
- // Filter out tenants whose retention hasn't expired yet
- globalRetention := r.limits.RetentionPeriod(tenant)
- streamRetention := r.limits.StreamRetention(tenant)
- tenantRetention := findLongestRetention(globalRetention, streamRetention)
- expirationDay := storageconfig.NewDayTime(today.Add(-tenantRetention))
- return day.Before(expirationDay)
- },
- )
- if err != nil {
- r.metrics.retentionTime.WithLabelValues(statusFailure).Observe(time.Since(start.Time()).Seconds())
- r.metrics.retentionDaysPerIteration.WithLabelValues(statusFailure).Observe(float64(daysProcessed))
- r.metrics.retentionTenantsPerIteration.WithLabelValues(statusFailure).Observe(float64(len(tenantsRetentionApplied)))
- return errors.Wrap(err, "getting users for period")
- }
-
- if len(tenants) == 0 {
- // No tenants for this day means we can break here since previous
- // retention iterations have already deleted all tenants
- break
- }
-
- for tenant, objects := range tenants {
- if len(objects) == 0 {
- continue
- }
-
- tenantLogger := log.With(dayLogger, "tenant", tenant)
- level.Info(tenantLogger).Log("msg", "applying retention to tenant", "keys", len(objects))
-
- // Note: we cannot delete the tenant directory directly because it is not an
- // actual key in the object store. Instead, we need to delete all keys one by one.
- for _, object := range objects {
- if err := objectClient.DeleteObject(ctx, object.Key); err != nil {
- r.metrics.retentionTime.WithLabelValues(statusFailure).Observe(time.Since(start.Time()).Seconds())
- r.metrics.retentionDaysPerIteration.WithLabelValues(statusFailure).Observe(float64(daysProcessed))
- r.metrics.retentionTenantsPerIteration.WithLabelValues(statusFailure).Observe(float64(len(tenantsRetentionApplied)))
- return errors.Wrapf(err, "deleting key %s", object.Key)
- }
- }
-
- tenantsRetentionApplied[tenant] = struct{}{}
- }
-
- daysProcessed++
- }
-
- r.lastDayRun = today
- r.metrics.retentionTime.WithLabelValues(statusSuccess).Observe(time.Since(start.Time()).Seconds())
- r.metrics.retentionDaysPerIteration.WithLabelValues(statusSuccess).Observe(float64(daysProcessed))
- r.metrics.retentionTenantsPerIteration.WithLabelValues(statusSuccess).Observe(float64(len(tenantsRetentionApplied)))
- level.Info(r.logger).Log("msg", "finished applying retention", "daysProcessed", daysProcessed, "tenants", len(tenantsRetentionApplied))
-
- return nil
-}
-
-func (r *RetentionManager) reportTenantsExceedingLookback(retentionByTenant map[string]time.Duration) {
- if len(retentionByTenant) == 0 {
- r.metrics.retentionTenantsExceedingLookback.Set(0)
- return
- }
-
- var tenantsExceedingLookback int
- for tenant, retention := range retentionByTenant {
- if retention > time.Duration(r.cfg.MaxLookbackDays)*24*time.Hour {
- level.Warn(r.logger).Log("msg", "tenant retention exceeds max lookback days", "tenant", tenant, "retention", retention.String())
- }
- tenantsExceedingLookback++
- }
-
- r.metrics.retentionTenantsExceedingLookback.Set(float64(tenantsExceedingLookback))
-}
-
-func findLongestRetention(globalRetention time.Duration, streamRetention []validation.StreamRetention) time.Duration {
- if len(streamRetention) == 0 {
- return globalRetention
- }
-
- maxStreamRetention := slices.MaxFunc(streamRetention, func(a, b validation.StreamRetention) int {
- return int(a.Period - b.Period)
- })
-
- if time.Duration(maxStreamRetention.Period) > globalRetention {
- return time.Duration(maxStreamRetention.Period)
- }
- return globalRetention
-}
-
-func retentionByTenant(limits RetentionLimits) map[string]time.Duration {
- all := limits.AllByUserID()
- if len(all) == 0 {
- return nil
- }
-
- retentions := make(map[string]time.Duration, len(all))
- for tenant, lim := range all {
- retention := findLongestRetention(time.Duration(lim.RetentionPeriod), lim.StreamRetention)
- if retention == 0 {
- continue
- }
- retentions[tenant] = retention
- }
-
- return retentions
-}
-
-// smallestEnabledRetention returns the smallest retention period across all tenants and the default.
-func smallestEnabledRetention(defaultRetention time.Duration, perTenantRetention map[string]time.Duration) time.Duration {
- if len(perTenantRetention) == 0 {
- return defaultRetention
- }
-
- smallest := time.Duration(math.MaxInt64)
- if defaultRetention != 0 {
- smallest = defaultRetention
- }
-
- for _, retention := range perTenantRetention {
- // Skip unlimited retention
- if retention == 0 {
- continue
- }
-
- if retention < smallest {
- smallest = retention
- }
- }
-
- if smallest == time.Duration(math.MaxInt64) {
- // No tenant nor defaults configures a retention
- return 0
- }
-
- return smallest
-}
diff --git a/pkg/bloomcompactor/retention_test.go b/pkg/bloomcompactor/retention_test.go
deleted file mode 100644
index e610ab5b02e02..0000000000000
--- a/pkg/bloomcompactor/retention_test.go
+++ /dev/null
@@ -1,882 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "flag"
- "fmt"
- "math"
- "os"
- "testing"
- "time"
-
- "github.com/go-kit/log"
- "github.com/grafana/dskit/services"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/require"
-
- "github.com/grafana/loki/v3/pkg/storage"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/chunk/cache"
- "github.com/grafana/loki/v3/pkg/storage/chunk/client/local"
- storageconfig "github.com/grafana/loki/v3/pkg/storage/config"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper/config"
- "github.com/grafana/loki/v3/pkg/storage/types"
- util_log "github.com/grafana/loki/v3/pkg/util/log"
- "github.com/grafana/loki/v3/pkg/util/mempool"
- lokiring "github.com/grafana/loki/v3/pkg/util/ring"
- "github.com/grafana/loki/v3/pkg/validation"
-)
-
-var testTime = parseDayTime("2024-12-31").ModelTime()
-
-func TestRetention(t *testing.T) {
- for _, tc := range []struct {
- name string
- ownsRetention bool
- cfg RetentionConfig
- lim mockRetentionLimits
- prePopulate func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore)
- expectErr bool
- check func(t *testing.T, bloomStore *bloomshipper.BloomStore)
- }{
- {
- name: "retention disabled",
- ownsRetention: true,
- cfg: RetentionConfig{
- Enabled: false,
- MaxLookbackDays: 2 * 365,
- },
- lim: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- "2": 200 * 24 * time.Hour,
- "3": 500 * 24 * time.Hour,
- },
- },
- prePopulate: func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore) {
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime, 200)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "2", testTime, 50)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "3", testTime, 500)
- },
- check: func(t *testing.T, bloomStore *bloomshipper.BloomStore) {
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 200, len(metas[0]))
- metas = getGroupedMetasForLastNDays(t, bloomStore, "2", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 50, len(metas[0]))
- metas = getGroupedMetasForLastNDays(t, bloomStore, "3", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 500, len(metas[0]))
- },
- },
- {
- name: "compactor does not own retention",
- ownsRetention: false,
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 2 * 365,
- },
- lim: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- "2": 200 * 24 * time.Hour,
- "3": 500 * 24 * time.Hour,
- },
- },
- prePopulate: func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore) {
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime, 200)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "2", testTime, 50)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "3", testTime, 500)
- },
- check: func(t *testing.T, bloomStore *bloomshipper.BloomStore) {
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 200, len(metas[0]))
- metas = getGroupedMetasForLastNDays(t, bloomStore, "2", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 50, len(metas[0]))
- metas = getGroupedMetasForLastNDays(t, bloomStore, "3", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 500, len(metas[0]))
- },
- },
- {
- name: "unlimited retention",
- ownsRetention: true,
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 2 * 365,
- },
- lim: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 0,
- },
- },
- prePopulate: func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore) {
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime, 200)
- },
- check: func(t *testing.T, bloomStore *bloomshipper.BloomStore) {
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 200, len(metas[0]))
- },
- },
- {
- name: "default retention",
- ownsRetention: true,
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 2 * 365,
- },
- lim: mockRetentionLimits{
- defaultRetention: 30 * 24 * time.Hour,
- },
- prePopulate: func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore) {
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime, 200)
- },
- check: func(t *testing.T, bloomStore *bloomshipper.BloomStore) {
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 31, len(metas[0]))
- },
- },
- {
- name: "retention lookback smaller than max retention",
- ownsRetention: true,
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 100,
- },
- lim: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- "2": 20 * 24 * time.Hour,
- "3": 200 * 24 * time.Hour,
- "4": 400 * 24 * time.Hour,
- },
- streamRetention: map[string][]validation.StreamRetention{
- "1": {
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- {
- Period: model.Duration(40 * 24 * time.Hour),
- },
- },
- "2": {
- {
- Period: model.Duration(10 * 24 * time.Hour),
- },
- },
- },
- },
- prePopulate: func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore) {
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime, 200)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "2", testTime, 50)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "3", testTime, 500)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "4", testTime, 500)
- },
- check: func(t *testing.T, bloomStore *bloomshipper.BloomStore) {
- // Tenant 1 has 40 days of retention, and we wrote 200 days of metas
- // We should get two groups: 0th-40th and 101th-200th
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 500)
- require.Equal(t, 2, len(metas))
- require.Equal(t, 41, len(metas[0])) // 0-40th day
- require.Equal(t, 100, len(metas[1])) // 100th-200th day
-
- // Tenant 2 has 20 days of retention, and we wrote 50 days of metas
- // We should get one group: 0th-20th
- metas = getGroupedMetasForLastNDays(t, bloomStore, "2", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 21, len(metas[0])) // 0th-20th
-
- // Tenant 3 has 200 days of retention, and we wrote 500 days of metas
- // Since the manager looks up to 100 days, we shouldn't have deleted any metas
- metas = getGroupedMetasForLastNDays(t, bloomStore, "3", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 500, len(metas[0])) // 0th-500th
-
- // Tenant 4 has 400 days of retention, and we wrote 500 days of metas
- // Since the manager looks up to 100 days, we shouldn't have deleted any metas
- metas = getGroupedMetasForLastNDays(t, bloomStore, "4", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 500, len(metas[0])) // 0th-500th
- },
- },
- {
- name: "retention lookback bigger than max retention",
- ownsRetention: true,
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 2 * 365,
- },
- lim: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- "2": 20 * 24 * time.Hour,
- "3": 200 * 24 * time.Hour,
- "4": 400 * 24 * time.Hour,
- },
- streamRetention: map[string][]validation.StreamRetention{
- "1": {
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- {
- Period: model.Duration(40 * 24 * time.Hour),
- },
- },
- "2": {
- {
- Period: model.Duration(10 * 24 * time.Hour),
- },
- },
- },
- },
- prePopulate: func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore) {
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime, 200)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "2", testTime, 50)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "3", testTime, 500)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "4", testTime, 500)
- },
- check: func(t *testing.T, bloomStore *bloomshipper.BloomStore) {
- // Tenant 1 has 40 days of retention, and we wrote 200 days of metas
- // We should get one groups: 0th-40th
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 41, len(metas[0])) // 0-40th day
-
- // Tenant 2 has 20 days of retention, and we wrote 50 days of metas
- // We should get one group: 0th-20th
- metas = getGroupedMetasForLastNDays(t, bloomStore, "2", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 21, len(metas[0])) // 0th-20th
-
- // Tenant 3 has 200 days of retention, and we wrote 500 days of metas
- // We should get one group: 0th-200th
- metas = getGroupedMetasForLastNDays(t, bloomStore, "3", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 201, len(metas[0])) // 0th-200th
-
- // Tenant 4 has 400 days of retention, and we wrote 500 days of metas
- // Since the manager looks up to 100 days, we shouldn't have deleted any metas
- metas = getGroupedMetasForLastNDays(t, bloomStore, "4", testTime, 500)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 401, len(metas[0])) // 0th-400th
- },
- },
- {
- name: "hit no tenants in table",
- ownsRetention: true,
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 2 * 365,
- },
- lim: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- },
- },
- prePopulate: func(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore) {
- // Place metas with a gap of 50 days. [0th-100th], [151th-200th]
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime, 100)
- putMetasForLastNDays(t, schemaCfg, bloomStore, "1", testTime.Add(-150*24*time.Hour), 50)
- },
- check: func(t *testing.T, bloomStore *bloomshipper.BloomStore) {
- // We should get two groups: 0th-30th and 151th-200th
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 500)
- require.Equal(t, 2, len(metas))
- require.Equal(t, 31, len(metas[0])) // 0th-30th day
- require.Equal(t, 50, len(metas[1])) // 151th-200th day
- },
- },
- } {
- t.Run(tc.name, func(t *testing.T) {
- bloomStore, schema, _, err := NewMockBloomStore(t)
- require.NoError(t, err)
-
- rm := NewRetentionManager(
- tc.cfg,
- tc.lim,
- bloomStore,
- mockSharding{
- ownsRetention: tc.ownsRetention,
- },
- NewMetrics(nil, v1.NewMetrics(nil)),
- util_log.Logger,
- )
- rm.now = func() model.Time {
- return testTime
- }
-
- tc.prePopulate(t, schema, bloomStore)
-
- err = rm.Apply(context.Background())
- if tc.expectErr {
- require.Error(t, err)
- return
- }
- require.NoError(t, err)
-
- tc.check(t, bloomStore)
- })
- }
-}
-
-func TestRetentionRunsOncePerDay(t *testing.T) {
- bloomStore, schema, _, err := NewMockBloomStore(t)
- require.NoError(t, err)
-
- rm := NewRetentionManager(
- RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 365,
- },
- mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- },
- },
- bloomStore,
- mockSharding{
- ownsRetention: true,
- },
- NewMetrics(nil, v1.NewMetrics(nil)),
- util_log.Logger,
- )
- rm.now = func() model.Time {
- return testTime
- }
-
- // Write metas for the last 100 days and run retention
- putMetasForLastNDays(t, schema, bloomStore, "1", testTime, 100)
- err = rm.Apply(context.Background())
- require.NoError(t, err)
-
- // We should get only the first 30 days of metas
- metas := getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 100)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 31, len(metas[0])) // 0th-30th day
-
- // We now change the now() time to be a bit later in the day
- rm.now = func() model.Time {
- return testTime.Add(1 * time.Hour)
- }
-
- // Write metas again and run retention. Since we already ran retention at now()'s day,
- // Apply should be a noop, and therefore we should be able to get all the 100 days of metas
- putMetasForLastNDays(t, schema, bloomStore, "1", testTime, 100)
- err = rm.Apply(context.Background())
- require.NoError(t, err)
-
- metas = getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 100)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 100, len(metas[0]))
-
- // We now change the now() time to be the next day, retention should run again
- rm.now = func() model.Time {
- return testTime.Add(24 * time.Hour)
- }
- err = rm.Apply(context.Background())
- require.NoError(t, err)
-
- // We should only see the first 30 days of metas
- metas = getGroupedMetasForLastNDays(t, bloomStore, "1", testTime, 100)
- require.Equal(t, 1, len(metas))
- require.Equal(t, 30, len(metas[0])) // 0th-30th day
-}
-
-func TestOwnsRetention(t *testing.T) {
- for _, tc := range []struct {
- name string
- numCompactors int
- }{
- {
- name: "single compactor",
- numCompactors: 1,
- },
- {
- name: "multiple compactors",
- numCompactors: 100,
- },
- } {
- t.Run(tc.name, func(t *testing.T) {
- var ringManagers []*lokiring.RingManager
- for i := 0; i < tc.numCompactors; i++ {
- var cfg Config
- cfg.RegisterFlags(flag.NewFlagSet("ring", flag.PanicOnError))
- cfg.Ring.KVStore.Store = "inmemory"
- cfg.Ring.InstanceID = fmt.Sprintf("bloom-compactor-%d", i)
- cfg.Ring.InstanceAddr = fmt.Sprintf("localhost-%d", i)
-
- ringManager, err := lokiring.NewRingManager("bloom-compactor", lokiring.ServerMode, cfg.Ring, 1, cfg.Ring.NumTokens, util_log.Logger, prometheus.NewRegistry())
- require.NoError(t, err)
- require.NoError(t, ringManager.StartAsync(context.Background()))
-
- ringManagers = append(ringManagers, ringManager)
- }
- t.Cleanup(func() {
- // Stop all rings and wait for them to stop.
- for _, ringManager := range ringManagers {
- ringManager.StopAsync()
- require.Eventually(t, func() bool {
- return ringManager.State() == services.Terminated
- }, 1*time.Minute, 100*time.Millisecond)
- }
- })
-
- // Wait for all rings to see each other.
- for _, ringManager := range ringManagers {
- require.Eventually(t, func() bool {
- running := ringManager.State() == services.Running
- discovered := ringManager.Ring.InstancesCount() == tc.numCompactors
- return running && discovered
- }, 1*time.Minute, 100*time.Millisecond)
- }
-
- var shardings []retentionSharding
- for _, ringManager := range ringManagers {
- shardings = append(shardings, newFirstTokenRetentionSharding(ringManager.Ring, ringManager.RingLifecycler))
- }
-
- var ownsRetention int
- for _, sharding := range shardings {
- owns, err := sharding.OwnsRetention()
- require.NoError(t, err)
- if owns {
- ownsRetention++
- }
- }
-
- require.Equal(t, 1, ownsRetention)
- })
- }
-}
-
-func TestFindLongestRetention(t *testing.T) {
- for _, tc := range []struct {
- name string
- globalRetention time.Duration
- streamRetention []validation.StreamRetention
- expectedRetention time.Duration
- }{
- {
- name: "no retention",
- expectedRetention: 0,
- },
- {
- name: "global retention",
- globalRetention: 30 * 24 * time.Hour,
- expectedRetention: 30 * 24 * time.Hour,
- },
- {
- name: "stream retention",
- streamRetention: []validation.StreamRetention{
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- },
- expectedRetention: 30 * 24 * time.Hour,
- },
- {
- name: "two stream retention",
- streamRetention: []validation.StreamRetention{
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- {
- Period: model.Duration(40 * 24 * time.Hour),
- },
- },
- expectedRetention: 40 * 24 * time.Hour,
- },
- {
- name: "stream retention bigger than global",
- globalRetention: 20 * 24 * time.Hour,
- streamRetention: []validation.StreamRetention{
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- {
- Period: model.Duration(40 * 24 * time.Hour),
- },
- },
- expectedRetention: 40 * 24 * time.Hour,
- },
- {
- name: "global retention bigger than stream",
- globalRetention: 40 * 24 * time.Hour,
- streamRetention: []validation.StreamRetention{
- {
- Period: model.Duration(20 * 24 * time.Hour),
- },
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- },
- expectedRetention: 40 * 24 * time.Hour,
- },
- } {
- t.Run(tc.name, func(t *testing.T) {
- retention := findLongestRetention(tc.globalRetention, tc.streamRetention)
- require.Equal(t, tc.expectedRetention, retention)
- })
- }
-}
-
-func TestSmallestRetention(t *testing.T) {
- for _, tc := range []struct {
- name string
- limits RetentionLimits
- expectedRetention time.Duration
- expectedHasRetention bool
- }{
- {
- name: "no retention",
- limits: mockRetentionLimits{},
- expectedRetention: 0,
- },
- {
- name: "default global retention",
- limits: mockRetentionLimits{
- defaultRetention: 30 * 24 * time.Hour,
- },
- expectedRetention: 30 * 24 * time.Hour,
- },
- {
- name: "default stream retention",
- limits: mockRetentionLimits{
- defaultStreamRetention: []validation.StreamRetention{
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- },
- },
- expectedRetention: 30 * 24 * time.Hour,
- },
- {
- name: "tenant configured unlimited",
- limits: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 0,
- },
- defaultRetention: 30 * 24 * time.Hour,
- },
- expectedRetention: 30 * 24 * time.Hour,
- },
- {
- name: "no default one tenant",
- limits: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- },
- streamRetention: map[string][]validation.StreamRetention{
- "1": {
- {
- Period: model.Duration(40 * 24 * time.Hour),
- },
- },
- },
- },
- expectedRetention: 40 * 24 * time.Hour,
- },
- {
- name: "no default two tenants",
- limits: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- "2": 20 * 24 * time.Hour,
- },
- streamRetention: map[string][]validation.StreamRetention{
- "1": {
- {
- Period: model.Duration(40 * 24 * time.Hour),
- },
- },
- "2": {
- {
- Period: model.Duration(10 * 24 * time.Hour),
- },
- },
- },
- },
- expectedRetention: 20 * 24 * time.Hour,
- },
- {
- name: "default bigger than tenant",
- limits: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 10 * 24 * time.Hour,
- },
- streamRetention: map[string][]validation.StreamRetention{
- "1": {
- {
- Period: model.Duration(20 * 24 * time.Hour),
- },
- },
- },
- defaultRetention: 40 * 24 * time.Hour,
- defaultStreamRetention: []validation.StreamRetention{
- {
- Period: model.Duration(30 * 24 * time.Hour),
- },
- },
- },
- expectedRetention: 20 * 24 * time.Hour,
- },
- {
- name: "tenant bigger than default",
- limits: mockRetentionLimits{
- retention: map[string]time.Duration{
- "1": 30 * 24 * time.Hour,
- },
- streamRetention: map[string][]validation.StreamRetention{
- "1": {
- {
- Period: model.Duration(40 * 24 * time.Hour),
- },
- },
- },
- defaultRetention: 10 * 24 * time.Hour,
- defaultStreamRetention: []validation.StreamRetention{
- {
- Period: model.Duration(20 * 24 * time.Hour),
- },
- },
- },
- expectedRetention: 20 * 24 * time.Hour,
- },
- } {
- t.Run(tc.name, func(t *testing.T) {
- defaultLim := tc.limits.DefaultLimits()
- defaultRetention := findLongestRetention(time.Duration(defaultLim.RetentionPeriod), defaultLim.StreamRetention)
- tenantsRetention := retentionByTenant(tc.limits)
-
- retention := smallestEnabledRetention(defaultRetention, tenantsRetention)
- require.Equal(t, tc.expectedRetention, retention)
- })
- }
-}
-
-func TestRetentionConfigValidate(t *testing.T) {
- for _, tc := range []struct {
- name string
- cfg RetentionConfig
- expectErr bool
- }{
- {
- name: "enabled and valid",
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 2 * 365,
- },
- expectErr: false,
- },
- {
- name: "invalid max lookback days",
- cfg: RetentionConfig{
- Enabled: true,
- MaxLookbackDays: 0,
- },
- expectErr: true,
- },
- {
- name: "disabled and invalid",
- cfg: RetentionConfig{
- Enabled: false,
- MaxLookbackDays: 0,
- },
- expectErr: false,
- },
- } {
- t.Run(tc.name, func(t *testing.T) {
- err := tc.cfg.Validate()
- if tc.expectErr {
- require.Error(t, err)
- return
- }
- require.NoError(t, err)
- })
- }
-}
-
-func putMetasForLastNDays(t *testing.T, schemaCfg storageconfig.SchemaConfig, bloomStore *bloomshipper.BloomStore, tenant string, start model.Time, days int) {
- const metasPerDay = 2
-
- startDay := storageconfig.NewDayTime(start)
- endDay := storageconfig.NewDayTime(startDay.Add(-time.Duration(days) * 24 * time.Hour))
- for day := startDay; day.After(endDay); day = day.Dec() {
- period, err := schemaCfg.SchemaForTime(day.ModelTime())
- require.NoError(t, err)
-
- dayTable := storageconfig.NewDayTable(day, period.IndexTables.Prefix)
- bloomClient, err := bloomStore.Client(dayTable.ModelTime())
- require.NoErrorf(t, err, "failed to get bloom client for day %d: %s", day, err)
-
- for i := 0; i < metasPerDay; i++ {
- err = bloomClient.PutMeta(context.Background(), bloomshipper.Meta{
- MetaRef: bloomshipper.MetaRef{
- Ref: bloomshipper.Ref{
- TenantID: tenant,
- TableName: dayTable.String(),
- Bounds: v1.NewBounds(model.Fingerprint(i*100), model.Fingerprint(i*100+100)),
- },
- },
- Blocks: []bloomshipper.BlockRef{},
- })
- require.NoError(t, err)
- }
- }
-}
-
-// getMetasForLastNDays returns groups of continuous metas for the last N days.
-func getGroupedMetasForLastNDays(t *testing.T, bloomStore *bloomshipper.BloomStore, tenant string, start model.Time, days int) [][][]bloomshipper.Meta {
- metasGrouped := make([][][]bloomshipper.Meta, 0)
- currentGroup := make([][]bloomshipper.Meta, 0)
-
- startDay := storageconfig.NewDayTime(start)
- endDay := storageconfig.NewDayTime(startDay.Add(-time.Duration(days) * 24 * time.Hour))
-
- for day := startDay; day.After(endDay); day = day.Dec() {
- metas, err := bloomStore.FetchMetas(context.Background(), bloomshipper.MetaSearchParams{
- TenantID: tenant,
- Interval: bloomshipper.NewInterval(day.Bounds()),
- Keyspace: v1.NewBounds(0, math.MaxUint64),
- })
- require.NoError(t, err)
- if len(metas) == 0 {
- // We have reached the end of the metas group: cut a new group
- if len(currentGroup) > 0 {
- metasGrouped = append(metasGrouped, currentGroup)
- currentGroup = make([][]bloomshipper.Meta, 0)
- }
- continue
- }
- currentGroup = append(currentGroup, metas)
- }
-
- // Append the last group if it's not empty
- if len(currentGroup) > 0 {
- metasGrouped = append(metasGrouped, currentGroup)
- }
-
- return metasGrouped
-}
-
-func NewMockBloomStore(t *testing.T) (*bloomshipper.BloomStore, storageconfig.SchemaConfig, string, error) {
- workDir := t.TempDir()
- return NewMockBloomStoreWithWorkDir(t, workDir)
-}
-
-func NewMockBloomStoreWithWorkDir(t *testing.T, workDir string) (*bloomshipper.BloomStore, storageconfig.SchemaConfig, string, error) {
- schemaCfg := storageconfig.SchemaConfig{
- Configs: []storageconfig.PeriodConfig{
- {
- ObjectType: types.StorageTypeFileSystem,
- From: storageconfig.DayTime{
- Time: testTime.Add(-2 * 365 * 24 * time.Hour), // -2 year
- },
- IndexTables: storageconfig.IndexPeriodicTableConfig{
- PeriodicTableConfig: storageconfig.PeriodicTableConfig{
- Period: 24 * time.Hour,
- Prefix: "schema_a_table_",
- }},
- },
- {
- ObjectType: types.StorageTypeFileSystem,
- From: storageconfig.DayTime{
- Time: testTime.Add(-365 * 24 * time.Hour), // -1 year
- },
- IndexTables: storageconfig.IndexPeriodicTableConfig{
- PeriodicTableConfig: storageconfig.PeriodicTableConfig{
- Period: 24 * time.Hour,
- Prefix: "schema_b_table_",
- }},
- },
- },
- }
-
- storageConfig := storage.Config{
- FSConfig: local.FSConfig{
- Directory: workDir,
- },
- BloomShipperConfig: config.Config{
- WorkingDirectory: []string{workDir},
- DownloadParallelism: 1,
- BlocksCache: config.BlocksCacheConfig{
- SoftLimit: 1 << 20,
- HardLimit: 2 << 20,
- TTL: time.Hour,
- PurgeInterval: time.Hour,
- },
- },
- }
-
- reg := prometheus.NewPedanticRegistry()
- metrics := storage.NewClientMetrics()
- t.Cleanup(metrics.Unregister)
- logger := log.NewLogfmtLogger(os.Stderr)
-
- metasCache := cache.NewMockCache()
- blocksCache := bloomshipper.NewFsBlocksCache(storageConfig.BloomShipperConfig.BlocksCache, prometheus.NewPedanticRegistry(), logger)
-
- store, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageConfig, metrics, metasCache, blocksCache, &mempool.SimpleHeapAllocator{}, reg, logger)
- if err == nil {
- t.Cleanup(store.Stop)
- }
-
- return store, schemaCfg, workDir, err
-}
-
-type mockRetentionLimits struct {
- retention map[string]time.Duration
- streamRetention map[string][]validation.StreamRetention
- defaultRetention time.Duration
- defaultStreamRetention []validation.StreamRetention
-}
-
-func (m mockRetentionLimits) RetentionPeriod(tenant string) time.Duration {
- return m.retention[tenant]
-}
-
-func (m mockRetentionLimits) StreamRetention(tenant string) []validation.StreamRetention {
- return m.streamRetention[tenant]
-}
-
-func (m mockRetentionLimits) AllByUserID() map[string]*validation.Limits {
- tenants := make(map[string]*validation.Limits, len(m.retention))
-
- for tenant, retention := range m.retention {
- if _, ok := tenants[tenant]; !ok {
- tenants[tenant] = &validation.Limits{}
- }
- tenants[tenant].RetentionPeriod = model.Duration(retention)
- }
-
- for tenant, streamRetention := range m.streamRetention {
- if _, ok := tenants[tenant]; !ok {
- tenants[tenant] = &validation.Limits{}
- }
- tenants[tenant].StreamRetention = streamRetention
- }
-
- return tenants
-}
-
-func (m mockRetentionLimits) DefaultLimits() *validation.Limits {
- return &validation.Limits{
- RetentionPeriod: model.Duration(m.defaultRetention),
- StreamRetention: m.defaultStreamRetention,
- }
-}
-
-type mockSharding struct {
- ownsRetention bool
-}
-
-func (m mockSharding) OwnsRetention() (bool, error) {
- return m.ownsRetention, nil
-}
diff --git a/pkg/bloomcompactor/spec.go b/pkg/bloomcompactor/spec.go
deleted file mode 100644
index 696f192970b68..0000000000000
--- a/pkg/bloomcompactor/spec.go
+++ /dev/null
@@ -1,312 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "fmt"
- "io"
-
- "github.com/go-kit/log"
- "github.com/go-kit/log/level"
- "github.com/pkg/errors"
- "github.com/prometheus/common/model"
-
- iter "github.com/grafana/loki/v3/pkg/iter/v2"
- "github.com/grafana/loki/v3/pkg/logproto"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/chunk"
- "github.com/grafana/loki/v3/pkg/storage/chunk/fetcher"
- "github.com/grafana/loki/v3/pkg/storage/stores"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
-)
-
-// inclusive range
-type Keyspace struct {
- min, max model.Fingerprint
-}
-
-func (k Keyspace) Cmp(other Keyspace) v1.BoundsCheck {
- if other.max < k.min {
- return v1.Before
- } else if other.min > k.max {
- return v1.After
- }
- return v1.Overlap
-}
-
-// Store is likely bound within. This allows specifying impls like ShardedStore
-// to only request the shard-range needed from the existing store.
-type BloomGenerator interface {
- Generate(ctx context.Context) (skippedBlocks []v1.BlockMetadata, toClose []io.Closer, results iter.Iterator[*v1.Block], err error)
-}
-
-// Simple implementation of a BloomGenerator.
-type SimpleBloomGenerator struct {
- userID string
- store iter.Iterator[*v1.Series]
- chunkLoader ChunkLoader
- blocksIter iter.ResetIterator[*v1.SeriesWithBlooms]
-
- // options to build blocks with
- opts v1.BlockOptions
-
- metrics *Metrics
- logger log.Logger
-
- writerReaderFunc func() (v1.BlockWriter, v1.BlockReader)
- reporter func(model.Fingerprint)
-
- tokenizer *v1.BloomTokenizer
-}
-
-// SimpleBloomGenerator is a foundational implementation of BloomGenerator.
-// It mainly wires up a few different components to generate bloom filters for a set of blocks
-// and handles schema compatibility:
-// Blocks which are incompatible with the schema are skipped and will have their chunks reindexed
-func NewSimpleBloomGenerator(
- userID string,
- opts v1.BlockOptions,
- store iter.Iterator[*v1.Series],
- chunkLoader ChunkLoader,
- blocksIter iter.ResetIterator[*v1.SeriesWithBlooms],
- writerReaderFunc func() (v1.BlockWriter, v1.BlockReader),
- reporter func(model.Fingerprint),
- metrics *Metrics,
- logger log.Logger,
-) *SimpleBloomGenerator {
- return &SimpleBloomGenerator{
- userID: userID,
- opts: opts,
- store: store,
- chunkLoader: chunkLoader,
- blocksIter: blocksIter,
- logger: log.With(
- logger,
- "component", "bloom_generator",
- "org_id", userID,
- ),
- writerReaderFunc: writerReaderFunc,
- metrics: metrics,
- reporter: reporter,
-
- tokenizer: v1.NewBloomTokenizer(
- opts.Schema.NGramLen(),
- opts.Schema.NGramSkip(),
- int(opts.UnencodedBlockOptions.MaxBloomSizeBytes),
- metrics.bloomMetrics,
- log.With(
- logger,
- "component", "bloom_tokenizer",
- "org_id", userID,
- ),
- ),
- }
-}
-
-func (s *SimpleBloomGenerator) populator(ctx context.Context) v1.BloomPopulatorFunc {
- return func(
- series *v1.Series,
- srcBlooms iter.SizedIterator[*v1.Bloom],
- toAdd v1.ChunkRefs,
- ch chan *v1.BloomCreation,
- ) {
- level.Debug(s.logger).Log(
- "msg", "populating bloom filter",
- "stage", "before",
- "fp", series.Fingerprint,
- "chunks", len(series.Chunks),
- )
- chunkItersWithFP := s.chunkLoader.Load(ctx, s.userID, &v1.Series{
- Fingerprint: series.Fingerprint,
- Chunks: toAdd,
- })
-
- s.tokenizer.Populate(srcBlooms, chunkItersWithFP.itr, ch)
-
- if s.reporter != nil {
- s.reporter(series.Fingerprint)
- }
- }
-}
-
-func (s *SimpleBloomGenerator) Generate(ctx context.Context) *LazyBlockBuilderIterator {
- level.Debug(s.logger).Log("msg", "generating bloom filters for blocks", "schema", fmt.Sprintf("%+v", s.opts.Schema))
-
- series := iter.NewPeekIter(s.store)
-
- // TODO: Use interface
- impl, ok := s.blocksIter.(*blockLoadingIter)
- if ok {
- impl.Filter(
- func(bq *bloomshipper.CloseableBlockQuerier) bool {
-
- logger := log.With(s.logger, "block", bq.BlockRef)
- md, err := bq.Metadata()
- schema := md.Options.Schema
- if err != nil {
- level.Warn(logger).Log("msg", "failed to get schema for block", "err", err)
- bq.Close() // close unused querier
- return false
- }
-
- if !s.opts.Schema.Compatible(schema) {
- level.Warn(logger).Log("msg", "block schema incompatible with options", "generator_schema", fmt.Sprintf("%+v", s.opts.Schema), "block_schema", fmt.Sprintf("%+v", schema))
- bq.Close() // close unused querier
- return false
- }
-
- level.Debug(logger).Log("msg", "adding compatible block to bloom generation inputs")
- return true
- },
- )
- }
-
- return NewLazyBlockBuilderIterator(ctx, s.opts, s.metrics, s.populator(ctx), s.writerReaderFunc, series, s.blocksIter)
-}
-
-// LazyBlockBuilderIterator is a lazy iterator over blocks that builds
-// each block by adding series to them until they are full.
-type LazyBlockBuilderIterator struct {
- ctx context.Context
- opts v1.BlockOptions
- metrics *Metrics
- populate v1.BloomPopulatorFunc
- writerReaderFunc func() (v1.BlockWriter, v1.BlockReader)
- series iter.PeekIterator[*v1.Series]
- blocks iter.ResetIterator[*v1.SeriesWithBlooms]
-
- bytesAdded int
- curr *v1.Block
- err error
-}
-
-func NewLazyBlockBuilderIterator(
- ctx context.Context,
- opts v1.BlockOptions,
- metrics *Metrics,
- populate v1.BloomPopulatorFunc,
- writerReaderFunc func() (v1.BlockWriter, v1.BlockReader),
- series iter.PeekIterator[*v1.Series],
- blocks iter.ResetIterator[*v1.SeriesWithBlooms],
-) *LazyBlockBuilderIterator {
- return &LazyBlockBuilderIterator{
- ctx: ctx,
- opts: opts,
- metrics: metrics,
- populate: populate,
- writerReaderFunc: writerReaderFunc,
- series: series,
- blocks: blocks,
- }
-}
-
-func (b *LazyBlockBuilderIterator) Bytes() (bytes int) {
- return b.bytesAdded
-}
-
-func (b *LazyBlockBuilderIterator) Next() bool {
- // No more series to process
- if _, hasNext := b.series.Peek(); !hasNext {
- return false
- }
-
- if err := b.ctx.Err(); err != nil {
- b.err = errors.Wrap(err, "context canceled")
- return false
- }
-
- if err := b.blocks.Reset(); err != nil {
- b.err = errors.Wrap(err, "reset blocks iterator")
- return false
- }
-
- mergeBuilder := v1.NewMergeBuilder(b.blocks, b.series, b.populate, b.metrics.bloomMetrics)
- writer, reader := b.writerReaderFunc()
- blockBuilder, err := v1.NewBlockBuilder(b.opts, writer)
- if err != nil {
- b.err = errors.Wrap(err, "failed to create bloom block builder")
- return false
- }
- _, sourceBytes, err := mergeBuilder.Build(blockBuilder)
- b.bytesAdded += sourceBytes
-
- if err != nil {
- b.err = errors.Wrap(err, "failed to build bloom block")
- return false
- }
-
- b.curr = v1.NewBlock(reader, b.metrics.bloomMetrics)
- return true
-}
-
-func (b *LazyBlockBuilderIterator) At() *v1.Block {
- return b.curr
-}
-
-func (b *LazyBlockBuilderIterator) Err() error {
- return b.err
-}
-
-// IndexLoader loads an index. This helps us do things like
-// load TSDBs for a specific period excluding multitenant (pre-compacted) indices
-type indexLoader interface {
- Index() (tsdb.Index, error)
-}
-
-// ChunkItersByFingerprint models the chunks belonging to a fingerprint
-type ChunkItersByFingerprint struct {
- fp model.Fingerprint
- itr iter.Iterator[v1.ChunkRefWithIter]
-}
-
-// ChunkLoader loads chunks from a store
-type ChunkLoader interface {
- Load(ctx context.Context, userID string, series *v1.Series) *ChunkItersByFingerprint
-}
-
-// StoreChunkLoader loads chunks from a store
-type StoreChunkLoader struct {
- fetcherProvider stores.ChunkFetcherProvider
- metrics *Metrics
-}
-
-func NewStoreChunkLoader(fetcherProvider stores.ChunkFetcherProvider, metrics *Metrics) *StoreChunkLoader {
- return &StoreChunkLoader{
- fetcherProvider: fetcherProvider,
- metrics: metrics,
- }
-}
-
-func (s *StoreChunkLoader) Load(ctx context.Context, userID string, series *v1.Series) *ChunkItersByFingerprint {
- // NB(owen-d): This is probably unnecessary as we should only have one fetcher
- // because we'll only be working on a single index period at a time, but this should protect
- // us in the case of refactoring/changing this and likely isn't a perf bottleneck.
- chksByFetcher := make(map[*fetcher.Fetcher][]chunk.Chunk)
- for _, chk := range series.Chunks {
- fetcher := s.fetcherProvider.GetChunkFetcher(chk.From)
- chksByFetcher[fetcher] = append(chksByFetcher[fetcher], chunk.Chunk{
- ChunkRef: logproto.ChunkRef{
- Fingerprint: uint64(series.Fingerprint),
- UserID: userID,
- From: chk.From,
- Through: chk.Through,
- Checksum: chk.Checksum,
- },
- })
- }
-
- var (
- fetchers = make([]Fetcher[chunk.Chunk, chunk.Chunk], 0, len(chksByFetcher))
- inputs = make([][]chunk.Chunk, 0, len(chksByFetcher))
- )
- for fetcher, chks := range chksByFetcher {
- fn := FetchFunc[chunk.Chunk, chunk.Chunk](fetcher.FetchChunks)
- fetchers = append(fetchers, fn)
- inputs = append(inputs, chks)
- }
-
- return &ChunkItersByFingerprint{
- fp: series.Fingerprint,
- itr: newBatchedChunkLoader(ctx, fetchers, inputs, s.metrics, batchedLoaderDefaultBatchSize),
- }
-}
diff --git a/pkg/bloomcompactor/spec_test.go b/pkg/bloomcompactor/spec_test.go
deleted file mode 100644
index 8ee914b5c8982..0000000000000
--- a/pkg/bloomcompactor/spec_test.go
+++ /dev/null
@@ -1,170 +0,0 @@
-package bloomcompactor
-
-import (
- "bytes"
- "context"
- "fmt"
- "testing"
-
- "github.com/go-kit/log"
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/require"
-
- "github.com/grafana/loki/v3/pkg/chunkenc"
- v2 "github.com/grafana/loki/v3/pkg/iter/v2"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- "github.com/grafana/loki/v3/pkg/util/mempool"
-)
-
-func blocksFromSchema(t *testing.T, n int, options v1.BlockOptions) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
- return blocksFromSchemaWithRange(t, n, options, 0, 0xffff)
-}
-
-// splits 100 series across `n` non-overlapping blocks.
-// uses options to build blocks with.
-func blocksFromSchemaWithRange(t *testing.T, n int, options v1.BlockOptions, fromFP, throughFp model.Fingerprint) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
- if 100%n != 0 {
- panic("100 series must be evenly divisible by n")
- }
-
- numSeries := 100
- data, _ = v1.MkBasicSeriesWithBlooms(numSeries, fromFP, throughFp, 0, 10000)
-
- seriesPerBlock := numSeries / n
-
- for i := 0; i < n; i++ {
- // references for linking in memory reader+writer
- indexBuf := bytes.NewBuffer(nil)
- bloomsBuf := bytes.NewBuffer(nil)
- writer := v1.NewMemoryBlockWriter(indexBuf, bloomsBuf)
- reader := v1.NewByteReader(indexBuf, bloomsBuf)
-
- builder, err := v1.NewBlockBuilder(
- options,
- writer,
- )
- require.Nil(t, err)
-
- minIdx, maxIdx := i*seriesPerBlock, (i+1)*seriesPerBlock
-
- itr := v2.NewSliceIter[v1.SeriesWithBlooms](data[minIdx:maxIdx])
- _, err = builder.BuildFrom(itr)
- require.Nil(t, err)
-
- res = append(res, v1.NewBlock(reader, v1.NewMetrics(nil)))
- ref := genBlockRef(data[minIdx].Series.Fingerprint, data[maxIdx-1].Series.Fingerprint)
- t.Log("create block", ref)
- refs = append(refs, ref)
- }
-
- return res, data, refs
-}
-
-// doesn't actually load any chunks
-type dummyChunkLoader struct{}
-
-func (dummyChunkLoader) Load(_ context.Context, _ string, series *v1.Series) *ChunkItersByFingerprint {
- return &ChunkItersByFingerprint{
- fp: series.Fingerprint,
- itr: v2.NewEmptyIter[v1.ChunkRefWithIter](),
- }
-}
-
-func dummyBloomGen(t *testing.T, opts v1.BlockOptions, store v2.Iterator[*v1.Series], blocks []*v1.Block, refs []bloomshipper.BlockRef) *SimpleBloomGenerator {
- bqs := make([]*bloomshipper.CloseableBlockQuerier, 0, len(blocks))
- for i, b := range blocks {
- bqs = append(bqs, &bloomshipper.CloseableBlockQuerier{
- BlockRef: refs[i],
- BlockQuerier: v1.NewBlockQuerier(b, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize),
- })
- }
-
- fetcher := func(_ context.Context, refs []bloomshipper.BlockRef) ([]*bloomshipper.CloseableBlockQuerier, error) {
- res := make([]*bloomshipper.CloseableBlockQuerier, 0, len(refs))
- for _, ref := range refs {
- for _, bq := range bqs {
- if ref.Bounds.Equal(bq.Bounds) {
- res = append(res, bq)
- }
- }
- }
- t.Log("req", refs)
- t.Log("res", res)
- return res, nil
- }
-
- blocksIter := newBlockLoadingIter(context.Background(), refs, FetchFunc[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier](fetcher), 1)
-
- return NewSimpleBloomGenerator(
- "fake",
- opts,
- store,
- dummyChunkLoader{},
- blocksIter,
- func() (v1.BlockWriter, v1.BlockReader) {
- indexBuf := bytes.NewBuffer(nil)
- bloomsBuf := bytes.NewBuffer(nil)
- return v1.NewMemoryBlockWriter(indexBuf, bloomsBuf), v1.NewByteReader(indexBuf, bloomsBuf)
- },
- nil,
- NewMetrics(nil, v1.NewMetrics(nil)),
- log.NewNopLogger(),
- )
-}
-
-func TestSimpleBloomGenerator(t *testing.T) {
- const maxBlockSize = 100 << 20 // 100MB
- for _, enc := range []chunkenc.Encoding{chunkenc.EncNone, chunkenc.EncGZIP, chunkenc.EncSnappy} {
- for _, tc := range []struct {
- desc string
- fromSchema, toSchema v1.BlockOptions
- overlapping bool
- }{
- {
- desc: "SkipsIncompatibleSchemas",
- fromSchema: v1.NewBlockOptions(enc, 3, 0, maxBlockSize, 0),
- toSchema: v1.NewBlockOptions(enc, 4, 0, maxBlockSize, 0),
- },
- {
- desc: "CombinesBlocks",
- fromSchema: v1.NewBlockOptions(enc, 4, 0, maxBlockSize, 0),
- toSchema: v1.NewBlockOptions(enc, 4, 0, maxBlockSize, 0),
- },
- } {
- t.Run(fmt.Sprintf("%s/%s", tc.desc, enc), func(t *testing.T) {
- sourceBlocks, data, refs := blocksFromSchemaWithRange(t, 2, tc.fromSchema, 0x00000, 0x6ffff)
- storeItr := v2.NewMapIter[v1.SeriesWithBlooms, *v1.Series](
- v2.NewSliceIter[v1.SeriesWithBlooms](data),
- func(swb v1.SeriesWithBlooms) *v1.Series {
- return swb.Series
- },
- )
-
- gen := dummyBloomGen(t, tc.toSchema, storeItr, sourceBlocks, refs)
- results := gen.Generate(context.Background())
-
- var outputBlocks []*v1.Block
- for results.Next() {
- outputBlocks = append(outputBlocks, results.At())
- }
- // require.Equal(t, tc.outputBlocks, len(outputBlocks))
-
- // Check all the input series are present in the output blocks.
- expectedRefs := v1.PointerSlice(data)
- outputRefs := make([]*v1.SeriesWithBlooms, 0, len(data))
- for _, block := range outputBlocks {
- bq := v1.NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize).Iter()
- for bq.Next() {
- outputRefs = append(outputRefs, bq.At())
- }
- }
- require.Equal(t, len(expectedRefs), len(outputRefs))
- for i := range expectedRefs {
- require.Equal(t, expectedRefs[i].Series, outputRefs[i].Series)
- }
- })
- }
- }
-
-}
diff --git a/pkg/bloomcompactor/tracker.go b/pkg/bloomcompactor/tracker.go
deleted file mode 100644
index 1c9bde0a4ae71..0000000000000
--- a/pkg/bloomcompactor/tracker.go
+++ /dev/null
@@ -1,123 +0,0 @@
-package bloomcompactor
-
-import (
- "fmt"
- "math"
- "sync"
-
- "github.com/pkg/errors"
- "github.com/prometheus/common/model"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/config"
-)
-
-type tableRangeProgress struct {
- tenant string
- table config.DayTime
- bounds v1.FingerprintBounds
-
- lastFP model.Fingerprint
-}
-
-type compactionTracker struct {
- sync.Mutex
-
- nTables int
- // tables -> n_tenants
- metadata map[config.DayTime]int
-
- // table -> tenant -> workload_id -> keyspace
- tables map[config.DayTime]map[string]map[string]*tableRangeProgress
-}
-
-func newCompactionTracker(nTables int) (*compactionTracker, error) {
- if nTables <= 0 {
- return nil, errors.New("nTables must be positive")
- }
-
- return &compactionTracker{
- nTables: nTables,
- tables: make(map[config.DayTime]map[string]map[string]*tableRangeProgress),
- metadata: make(map[config.DayTime]int),
- }, nil
-}
-
-func (c *compactionTracker) registerTable(tbl config.DayTime, nTenants int) {
- c.Lock()
- defer c.Unlock()
- c.metadata[tbl] = nTenants
- c.tables[tbl] = make(map[string]map[string]*tableRangeProgress)
-}
-
-func (c *compactionTracker) update(
- tenant string,
- table config.DayTime,
- bounds v1.FingerprintBounds,
- mostRecentFP model.Fingerprint,
-) {
- c.Lock()
- defer c.Unlock()
- key := fmt.Sprintf("%s_%s_%s", tenant, table.String(), bounds.String())
- tbl, ok := c.tables[table]
- if !ok {
- panic(fmt.Sprintf("table not registered: %s", table.String()))
- }
- workloads, ok := tbl[tenant]
- if !ok {
- workloads = make(map[string]*tableRangeProgress)
- tbl[tenant] = workloads
- }
- workloads[key] = &tableRangeProgress{
- tenant: tenant,
- table: table,
- bounds: bounds,
- // ensure lastFP is at least the minimum fp for each range;
- // this handles the case when the first fingeprint hasn't been processed yet.
- // as a precaution we also clip the lastFP to the bounds.
- lastFP: min(max(mostRecentFP, bounds.Min), bounds.Max),
- }
-}
-
-// Returns progress in (0-1) range, bounded to 3 decimals.
-// compaction progress is measured by the following:
-// 1. The number of days of data that has been compacted
-// as a percentage of the total number of days of data that needs to be compacted.
-// 2. Within each day, the number of tenants that have been compacted
-// as a percentage of the total number of tenants that need to be compacted.
-// 3. Within each tenant, the percent of the keyspaces that have been compacted.
-// NB(owen-d): this treats all tenants equally, when this may not be the case wrt
-// the work they have to do. This is a simplification and can be x-referenced with
-// the tenant_compaction_seconds_total metric to see how much time is being spent on
-// each tenant while the compaction tracker shows total compaction progress across
-// all tables and tenants.
-func (c *compactionTracker) progress() (progress float64) {
- c.Lock()
- defer c.Unlock()
-
- perTablePct := 1. / float64(c.nTables)
-
- // for all registered tables, determine the number of registered tenants
- for tbl, nTenants := range c.metadata {
- perTenantPct := perTablePct / float64(nTenants)
-
- // iterate tenants in each table
- for _, tenant := range c.tables[tbl] {
- var (
- totalKeyspace uint64
- finishedKeyspace uint64
- )
-
- // iterate table ranges for each tenant+table pair
- for _, batch := range tenant {
- totalKeyspace += batch.bounds.Range()
- finishedKeyspace += uint64(batch.lastFP - batch.bounds.Min)
- }
-
- tenantProgress := float64(finishedKeyspace) / float64(totalKeyspace)
- progress += perTenantPct * tenantProgress
- }
- }
-
- return math.Round(progress*1000) / 1000
-}
diff --git a/pkg/bloomcompactor/tracker_test.go b/pkg/bloomcompactor/tracker_test.go
deleted file mode 100644
index e23eb55d6dc64..0000000000000
--- a/pkg/bloomcompactor/tracker_test.go
+++ /dev/null
@@ -1,106 +0,0 @@
-package bloomcompactor
-
-import (
- "testing"
-
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/require"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/config"
-)
-
-func mkTblRange(tenant string, tbl config.DayTime, from, through model.Fingerprint) *tenantTableRange {
- return &tenantTableRange{
- tenant: tenant,
- table: config.NewDayTable(tbl, ""),
- ownershipRange: v1.NewBounds(from, through),
- }
-}
-
-func updateTracker(tr *compactionTracker, tt *tenantTableRange, lastFP model.Fingerprint) {
- tr.update(tt.tenant, tt.table.DayTime, tt.ownershipRange, lastFP)
-}
-
-func TestCompactionTrackerClipsRange(t *testing.T) {
- // test invalid table number
- tracker, err := newCompactionTracker(1)
- require.NoError(t, err)
-
- day1 := parseDayTime("2024-01-01")
- tracker.registerTable(day1, 1)
-
- work := mkTblRange("a", day1, 0, 10)
- updateTracker(tracker, work, 0)
- require.Equal(t, 0., tracker.progress())
- updateTracker(tracker, work, work.ownershipRange.Min)
- require.Equal(t, 0., tracker.progress())
- updateTracker(tracker, work, 5)
- require.Equal(t, 0.5, tracker.progress())
- updateTracker(tracker, work, work.ownershipRange.Max*2)
- require.Equal(t, 1., tracker.progress())
- updateTracker(tracker, work, work.ownershipRange.Max)
- require.Equal(t, 1., tracker.progress())
-}
-
-func TestCompactionTrackerFull(t *testing.T) {
- // test invalid table number
- _, err := newCompactionTracker(0)
- require.Error(t, err)
-
- tracker, err := newCompactionTracker(2)
- require.NoError(t, err)
-
- day1 := parseDayTime("2024-01-01")
- day2 := parseDayTime("2024-01-02")
-
- tracker.registerTable(day1, 2)
- tracker.registerTable(day2, 3)
- require.Equal(t, 0., tracker.progress())
-
- aDayOneOffsetZero := mkTblRange("a", day1, 0, 10)
- aDayOneOffsetOne := mkTblRange("a", day1, 40, 50)
- bDayOneOffsetZero := mkTblRange("b", day1, 10, 20)
-
- // register the workloads for day0_tenantA
- updateTracker(tracker, aDayOneOffsetZero, 0)
- updateTracker(tracker, aDayOneOffsetOne, 0)
-
- require.Equal(t, 0., tracker.progress())
- updateTracker(tracker, aDayOneOffsetZero, aDayOneOffsetZero.ownershipRange.Max) // simulate finish
- require.Equal(t, 0.125, tracker.progress())
- updateTracker(tracker, aDayOneOffsetOne, aDayOneOffsetOne.ownershipRange.Max) // simulate finish
- require.Equal(t, 0.25, tracker.progress())
-
- // register the workloads for day0_tenantB
- updateTracker(tracker, bDayOneOffsetZero, 0)
-
- require.Equal(t, 0.25, tracker.progress())
- // simulate half finish (partial workload progress)
- updateTracker(
- tracker,
- bDayOneOffsetZero,
- bDayOneOffsetZero.ownershipRange.Min+model.Fingerprint(bDayOneOffsetZero.ownershipRange.Range())/2,
- )
- require.Equal(t, 0.375, tracker.progress())
- // simulate finish
- updateTracker(tracker, bDayOneOffsetZero, bDayOneOffsetZero.ownershipRange.Max)
- require.Equal(t, 0.5, tracker.progress())
-
- aDayTwoOffsetZero := mkTblRange("a", day2, 0, 10)
- bDayTwoOffsetZero := mkTblRange("b", day2, 10, 20)
- cDayTwoOffsetZero := mkTblRange("c", day2, 20, 30)
- updateTracker(tracker, aDayTwoOffsetZero, 0)
- updateTracker(tracker, bDayTwoOffsetZero, 0)
- updateTracker(tracker, cDayTwoOffsetZero, 0)
- require.Equal(t, 0.5, tracker.progress())
-
- // simulate finish for the a & b
- updateTracker(tracker, aDayTwoOffsetZero, aDayTwoOffsetZero.ownershipRange.Max)
- updateTracker(tracker, bDayTwoOffsetZero, bDayTwoOffsetZero.ownershipRange.Max)
- require.Equal(t, 0.833, tracker.progress())
-
- // simulate finish for the c
- updateTracker(tracker, cDayTwoOffsetZero, cDayTwoOffsetZero.ownershipRange.Max)
- require.Equal(t, 1., tracker.progress())
-}
diff --git a/pkg/bloomcompactor/tsdb.go b/pkg/bloomcompactor/tsdb.go
deleted file mode 100644
index c522cc6dbcef2..0000000000000
--- a/pkg/bloomcompactor/tsdb.go
+++ /dev/null
@@ -1,262 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "fmt"
- "io"
- "math"
- "path"
- "strings"
-
- "github.com/go-kit/log"
- "github.com/go-kit/log/level"
- "github.com/pkg/errors"
- "github.com/prometheus/common/model"
- "github.com/prometheus/prometheus/model/labels"
-
- "github.com/grafana/loki/v3/pkg/chunkenc"
- iter "github.com/grafana/loki/v3/pkg/iter/v2"
- baseStore "github.com/grafana/loki/v3/pkg/storage"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/config"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/storage"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/index"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/sharding"
- "github.com/grafana/loki/v3/pkg/storage/types"
-)
-
-const (
- gzipExtension = ".gz"
-)
-
-type TSDBStore interface {
- UsersForPeriod(ctx context.Context, table config.DayTable) ([]string, error)
- ResolveTSDBs(ctx context.Context, table config.DayTable, tenant string) ([]tsdb.SingleTenantTSDBIdentifier, error)
- LoadTSDB(
- ctx context.Context,
- table config.DayTable,
- tenant string,
- id tsdb.Identifier,
- bounds v1.FingerprintBounds,
- ) (iter.Iterator[*v1.Series], error)
-}
-
-// BloomTSDBStore is a wrapper around the storage.Client interface which
-// implements the TSDBStore interface for this pkg.
-type BloomTSDBStore struct {
- storage storage.Client
- logger log.Logger
-}
-
-func NewBloomTSDBStore(storage storage.Client, logger log.Logger) *BloomTSDBStore {
- return &BloomTSDBStore{
- storage: storage,
- logger: logger,
- }
-}
-
-func (b *BloomTSDBStore) UsersForPeriod(ctx context.Context, table config.DayTable) ([]string, error) {
- _, users, err := b.storage.ListFiles(ctx, table.Addr(), true) // bypass cache for ease of testing
- return users, err
-}
-
-func (b *BloomTSDBStore) ResolveTSDBs(ctx context.Context, table config.DayTable, tenant string) ([]tsdb.SingleTenantTSDBIdentifier, error) {
- indices, err := b.storage.ListUserFiles(ctx, table.Addr(), tenant, true) // bypass cache for ease of testing
- if err != nil {
- return nil, errors.Wrap(err, "failed to list user files")
- }
-
- ids := make([]tsdb.SingleTenantTSDBIdentifier, 0, len(indices))
- for _, index := range indices {
- key := index.Name
- if decompress := storage.IsCompressedFile(index.Name); decompress {
- key = strings.TrimSuffix(key, gzipExtension)
- }
-
- id, ok := tsdb.ParseSingleTenantTSDBPath(path.Base(key))
- if !ok {
- return nil, errors.Errorf("failed to parse single tenant tsdb path: %s", key)
- }
-
- ids = append(ids, id)
-
- }
- return ids, nil
-}
-
-func (b *BloomTSDBStore) LoadTSDB(
- ctx context.Context,
- table config.DayTable,
- tenant string,
- id tsdb.Identifier,
- bounds v1.FingerprintBounds,
-) (iter.Iterator[*v1.Series], error) {
- withCompression := id.Name() + gzipExtension
-
- data, err := b.storage.GetUserFile(ctx, table.Addr(), tenant, withCompression)
- if err != nil {
- return nil, errors.Wrap(err, "failed to get file")
- }
- defer data.Close()
-
- decompressorPool := chunkenc.GetReaderPool(chunkenc.EncGZIP)
- decompressor, err := decompressorPool.GetReader(data)
- if err != nil {
- return nil, errors.Wrap(err, "failed to get decompressor")
- }
- defer decompressorPool.PutReader(decompressor)
-
- buf, err := io.ReadAll(decompressor)
- if err != nil {
- return nil, errors.Wrap(err, "failed to read file")
- }
-
- reader, err := index.NewReader(index.RealByteSlice(buf))
- if err != nil {
- return nil, errors.Wrap(err, "failed to create index reader")
- }
-
- idx := tsdb.NewTSDBIndex(reader)
- defer func() {
- if err := idx.Close(); err != nil {
- level.Error(b.logger).Log("msg", "failed to close index", "err", err)
- }
- }()
-
- return NewTSDBSeriesIter(ctx, tenant, idx, bounds)
-}
-
-func NewTSDBSeriesIter(ctx context.Context, user string, f sharding.ForSeries, bounds v1.FingerprintBounds) (iter.Iterator[*v1.Series], error) {
- // TODO(salvacorts): Create a pool
- series := make([]*v1.Series, 0, 100)
-
- if err := f.ForSeries(
- ctx,
- user,
- bounds,
- 0, math.MaxInt64,
- func(_ labels.Labels, fp model.Fingerprint, chks []index.ChunkMeta) (stop bool) {
- select {
- case <-ctx.Done():
- return true
- default:
- res := &v1.Series{
- Fingerprint: fp,
- Chunks: make(v1.ChunkRefs, 0, len(chks)),
- }
- for _, chk := range chks {
- res.Chunks = append(res.Chunks, v1.ChunkRef{
- From: model.Time(chk.MinTime),
- Through: model.Time(chk.MaxTime),
- Checksum: chk.Checksum,
- })
- }
-
- series = append(series, res)
- return false
- }
- },
- labels.MustNewMatcher(labels.MatchEqual, "", ""),
- ); err != nil {
- return nil, err
- }
-
- select {
- case <-ctx.Done():
- return iter.NewEmptyIter[*v1.Series](), ctx.Err()
- default:
- return iter.NewCancelableIter[*v1.Series](ctx, iter.NewSliceIter[*v1.Series](series)), nil
- }
-}
-
-type TSDBStores struct {
- schemaCfg config.SchemaConfig
- stores []TSDBStore
-}
-
-func NewTSDBStores(
- schemaCfg config.SchemaConfig,
- storeCfg baseStore.Config,
- clientMetrics baseStore.ClientMetrics,
- logger log.Logger,
-) (*TSDBStores, error) {
- res := &TSDBStores{
- schemaCfg: schemaCfg,
- stores: make([]TSDBStore, len(schemaCfg.Configs)),
- }
-
- for i, cfg := range schemaCfg.Configs {
- if cfg.IndexType == types.TSDBType {
-
- c, err := baseStore.NewObjectClient(cfg.ObjectType, storeCfg, clientMetrics)
- if err != nil {
- return nil, errors.Wrap(err, "failed to create object client")
- }
- res.stores[i] = NewBloomTSDBStore(storage.NewIndexStorageClient(c, cfg.IndexTables.PathPrefix), logger)
- }
- }
-
- return res, nil
-}
-
-func (s *TSDBStores) storeForPeriod(table config.DayTime) (TSDBStore, error) {
- for i := len(s.schemaCfg.Configs) - 1; i >= 0; i-- {
- period := s.schemaCfg.Configs[i]
-
- if !table.Before(period.From) {
- // we have the desired period config
-
- if s.stores[i] != nil {
- // valid: it's of tsdb type
- return s.stores[i], nil
- }
-
- // invalid
- return nil, errors.Errorf(
- "store for period is not of TSDB type (%s) while looking up store for (%v)",
- period.IndexType,
- table,
- )
- }
-
- }
-
- return nil, fmt.Errorf(
- "there is no store matching no matching period found for table (%v) -- too early",
- table,
- )
-}
-
-func (s *TSDBStores) UsersForPeriod(ctx context.Context, table config.DayTable) ([]string, error) {
- store, err := s.storeForPeriod(table.DayTime)
- if err != nil {
- return nil, err
- }
-
- return store.UsersForPeriod(ctx, table)
-}
-
-func (s *TSDBStores) ResolveTSDBs(ctx context.Context, table config.DayTable, tenant string) ([]tsdb.SingleTenantTSDBIdentifier, error) {
- store, err := s.storeForPeriod(table.DayTime)
- if err != nil {
- return nil, err
- }
-
- return store.ResolveTSDBs(ctx, table, tenant)
-}
-
-func (s *TSDBStores) LoadTSDB(
- ctx context.Context,
- table config.DayTable,
- tenant string,
- id tsdb.Identifier,
- bounds v1.FingerprintBounds,
-) (iter.Iterator[*v1.Series], error) {
- store, err := s.storeForPeriod(table.DayTime)
- if err != nil {
- return nil, err
- }
-
- return store.LoadTSDB(ctx, table, tenant, id, bounds)
-}
diff --git a/pkg/bloomcompactor/tsdb_test.go b/pkg/bloomcompactor/tsdb_test.go
deleted file mode 100644
index b81880d83b46a..0000000000000
--- a/pkg/bloomcompactor/tsdb_test.go
+++ /dev/null
@@ -1,106 +0,0 @@
-package bloomcompactor
-
-import (
- "context"
- "math"
- "testing"
-
- "github.com/prometheus/common/model"
- "github.com/prometheus/prometheus/model/labels"
- "github.com/stretchr/testify/require"
-
- v2 "github.com/grafana/loki/v3/pkg/iter/v2"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/index"
-)
-
-type forSeriesTestImpl []*v1.Series
-
-func (f forSeriesTestImpl) ForSeries(
- _ context.Context,
- _ string,
- _ index.FingerprintFilter,
- _ model.Time,
- _ model.Time,
- fn func(labels.Labels, model.Fingerprint, []index.ChunkMeta) bool,
- _ ...*labels.Matcher,
-) error {
- for i := range f {
- unmapped := make([]index.ChunkMeta, 0, len(f[i].Chunks))
- for _, c := range f[i].Chunks {
- unmapped = append(unmapped, index.ChunkMeta{
- MinTime: int64(c.From),
- MaxTime: int64(c.Through),
- Checksum: c.Checksum,
- })
- }
-
- fn(nil, f[i].Fingerprint, unmapped)
- }
- return nil
-}
-
-func (f forSeriesTestImpl) Close() error {
- return nil
-}
-
-func TestTSDBSeriesIter(t *testing.T) {
- input := []*v1.Series{
- {
- Fingerprint: 1,
- Chunks: []v1.ChunkRef{
- {
- From: 0,
- Through: 1,
- Checksum: 2,
- },
- {
- From: 3,
- Through: 4,
- Checksum: 5,
- },
- },
- },
- }
- srcItr := v2.NewSliceIter(input)
- itr, err := NewTSDBSeriesIter(context.Background(), "", forSeriesTestImpl(input), v1.NewBounds(0, math.MaxUint64))
- require.NoError(t, err)
-
- v1.EqualIterators[*v1.Series](
- t,
- func(a, b *v1.Series) {
- require.Equal(t, a, b)
- },
- itr,
- srcItr,
- )
-}
-
-func TestTSDBSeriesIter_Expiry(t *testing.T) {
- t.Run("expires on creation", func(t *testing.T) {
- ctx, cancel := context.WithCancel(context.Background())
- cancel()
- itr, err := NewTSDBSeriesIter(ctx, "", forSeriesTestImpl{
- {}, // a single entry
- }, v1.NewBounds(0, math.MaxUint64))
- require.Error(t, err)
- require.False(t, itr.Next())
- })
-
- t.Run("expires during consumption", func(t *testing.T) {
- ctx, cancel := context.WithCancel(context.Background())
- itr, err := NewTSDBSeriesIter(ctx, "", forSeriesTestImpl{
- {},
- {},
- }, v1.NewBounds(0, math.MaxUint64))
- require.NoError(t, err)
-
- require.True(t, itr.Next())
- require.NoError(t, itr.Err())
-
- cancel()
- require.False(t, itr.Next())
- require.Error(t, itr.Err())
- })
-
-}
diff --git a/pkg/bloomcompactor/versioned_range.go b/pkg/bloomcompactor/versioned_range.go
deleted file mode 100644
index 8af56a0754cc3..0000000000000
--- a/pkg/bloomcompactor/versioned_range.go
+++ /dev/null
@@ -1,260 +0,0 @@
-package bloomcompactor
-
-import (
- "sort"
-
- "github.com/prometheus/common/model"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
-)
-
-type tsdbToken struct {
- through model.Fingerprint // inclusive
- version int // TSDB version
-}
-
-// a ring of token ranges used to identify old metas.
-// each token represents that a TSDB version has covered the entire range
-// up to that point from the previous token.
-type tsdbTokenRange []tsdbToken
-
-func (t tsdbTokenRange) Len() int {
- return len(t)
-}
-
-func (t tsdbTokenRange) Less(i, j int) bool {
- return t[i].through < t[j].through
-}
-
-func (t tsdbTokenRange) Swap(i, j int) {
- t[i], t[j] = t[j], t[i]
-}
-
-// Add ensures a versioned set of bounds is added to the range. If the bounds are already
-// covered by a more up to date version, it returns false.
-func (t tsdbTokenRange) Add(version int, bounds v1.FingerprintBounds) (res tsdbTokenRange, added bool) {
- // allows attempting to join neighboring token ranges with identical versions
- // that aren't known until the end of the function
- var shouldReassemble bool
- var reassembleFrom int
- defer func() {
- if shouldReassemble {
- res = res.reassemble(reassembleFrom)
- }
- }()
-
- // special case: first token
- if len(t) == 0 {
- tok := tsdbToken{through: bounds.Max, version: version}
- // special case: first token is included in bounds, no need to fill negative space
- if bounds.Min == 0 {
- return append(t, tok), true
- }
- // Use a negative version to indicate that the range is not covered by any version.
- return append(t, tsdbToken{through: bounds.Min - 1, version: -1}, tok), true
- }
-
- // For non-nil token ranges, we continually update the range with newer versions.
- for {
- // find first token that covers the start of the range
- i := sort.Search(len(t), func(i int) bool {
- return t[i].through >= bounds.Min
- })
-
- if i == len(t) {
- tok := tsdbToken{through: bounds.Max, version: version}
-
- // edge case: there is no gap between the previous token range
- // and the new one;
- // skip adding a negative token
- if t[len(t)-1].through == bounds.Min-1 {
- return append(t, tok), true
- }
-
- // the range is not covered by any version and we are at the end of the range.
- // Add a negative token and the new token.
- negative := tsdbToken{through: bounds.Min - 1, version: -1}
- return append(t, negative, tok), true
- }
-
- // Otherwise, we've found a token that covers the start of the range.
- newer := t[i].version < version
- preExisting := t.boundsForToken(i)
- if !newer {
- if bounds.Within(preExisting) {
- // The range is already covered by a more up to date version, no need
- // to add anything, but honor if an earlier token was added
- return t, added
- }
-
- // The range is partially covered by a more up to date version;
- // update the range we need to check and continue
- bounds = v1.NewBounds(preExisting.Max+1, bounds.Max)
- continue
- }
-
- // If we need to update the range, there are 5 cases:
- // 1. `equal`: the incoming range equals an existing range ()
- // ------ # addition
- // ------ # src
- // 2. `subset`: the incoming range is a subset of an existing range
- // ------ # addition
- // -------- # src
- // 3. `overflow_both_sides`: the incoming range is a superset of an existing range. This is not possible
- // because the first token in the ring implicitly covers the left bound (zero) of all possible fps.
- // Therefore, we can skip this case.
- // ------ # addition
- // ---- # src
- // 4. `right_overflow`: the incoming range overflows the right side of an existing range
- // ------ # addition
- // ------ # src
- // 5. `left_overflow`: the incoming range overflows the left side of an existing range. This can be skipped
- // for the same reason as `superset`.
- // ------ # addition
- // ------ # src
-
- // 1) (`equal`): we're replacing the same bounds
- if bounds.Equal(preExisting) {
- t[i].version = version
- return t, true
- }
-
- // 2) (`subset`): the incoming range is a subset of an existing range
- if bounds.Within(preExisting) {
- // 2a) the incoming range touches the existing range's minimum bound
- if bounds.Min == preExisting.Min {
- tok := tsdbToken{through: bounds.Max, version: version}
- t = append(t, tsdbToken{})
- copy(t[i+1:], t[i:])
- t[i] = tok
- return t, true
- }
- // 2b) the incoming range touches the existing range's maximum bound
- if bounds.Max == preExisting.Max {
- t[i].through = bounds.Min - 1
- tok := tsdbToken{through: bounds.Max, version: version}
- t = append(t, tsdbToken{})
- copy(t[i+2:], t[i+1:])
- t[i+1] = tok
- return t, true
- }
-
- // 2c) the incoming range is does not touch either edge;
- // add two tokens (the new one and a new left-bound for the old range)
- tok := tsdbToken{through: bounds.Max, version: version}
- t = append(t, tsdbToken{}, tsdbToken{})
- copy(t[i+2:], t[i:])
- t[i+1] = tok
- t[i].through = bounds.Min - 1
- return t, true
- }
-
- // 4) (`right_overflow`): the incoming range overflows the right side of an existing range
-
- // 4a) shortcut: the incoming range is a right-overlapping superset of the existing range.
- // replace the existing token's version, update reassembly targets for merging neighboring ranges
- // w/ the same version, and continue
- if preExisting.Min == bounds.Min {
- t[i].version = version
- bounds.Min = preExisting.Max + 1
- added = true
- if !shouldReassemble {
- reassembleFrom = i
- shouldReassemble = true
- }
- continue
- }
-
- // 4b) the incoming range overlaps the right side of the existing range but
- // does not touch the left side;
- // add a new token for the right side of the existing range then update the reassembly targets
- // and continue
- overlap := tsdbToken{through: t[i].through, version: version}
- t[i].through = bounds.Min - 1
- t = append(t, tsdbToken{})
- copy(t[i+2:], t[i+1:])
- t[i+1] = overlap
- added = true
- bounds.Min = overlap.through + 1
- if !shouldReassemble {
- reassembleFrom = i + 1
- shouldReassemble = true
- }
- continue
- }
-}
-
-func (t tsdbTokenRange) boundsForToken(i int) v1.FingerprintBounds {
- if i == 0 {
- return v1.FingerprintBounds{Min: 0, Max: t[i].through}
- }
- return v1.FingerprintBounds{Min: t[i-1].through + 1, Max: t[i].through}
-}
-
-// reassemble merges neighboring tokens with the same version
-func (t tsdbTokenRange) reassemble(from int) tsdbTokenRange {
- reassembleTo := from
- for i := from; i < len(t)-1; i++ {
- if t[i].version != t[i+1].version {
- break
- }
- reassembleTo = i + 1
- }
-
- if reassembleTo == from {
- return t
- }
- t[from].through = t[reassembleTo].through
- copy(t[from+1:], t[reassembleTo+1:])
- return t[:len(t)-(reassembleTo-from)]
-}
-
-func outdatedMetas(metas []bloomshipper.Meta) (outdated []bloomshipper.Meta, err error) {
- // Sort metas descending by most recent source when checking
- // for outdated metas (older metas are discarded if they don't change the range).
- sort.Slice(metas, func(i, j int) bool {
- a, aExists := metas[i].MostRecentSource()
- b, bExists := metas[j].MostRecentSource()
-
- if !aExists && !bExists {
- // stable sort two sourceless metas by their bounds (easier testing)
- return metas[i].Bounds.Less(metas[j].Bounds)
- }
-
- if !aExists {
- // If a meta has no sources, it's out of date by definition.
- // By convention we sort it to the beginning of the list and will mark it for removal later
- return true
- }
-
- if !bExists {
- // if a exists but b does not, mark b as lesser, sorting b to the
- // front
- return false
- }
- return !a.TS.Before(b.TS)
- })
-
- var (
- tokenRange tsdbTokenRange
- added bool
- )
-
- for _, meta := range metas {
- mostRecent, exists := meta.MostRecentSource()
- if !exists {
- // if the meta exists but does not reference a TSDB, it's out of date
- // TODO(owen-d): this shouldn't happen, figure out why
- outdated = append(outdated, meta)
- }
- version := int(model.TimeFromUnixNano(mostRecent.TS.UnixNano()))
- tokenRange, added = tokenRange.Add(version, meta.Bounds)
- if !added {
- outdated = append(outdated, meta)
- }
- }
-
- return outdated, nil
-
-}
diff --git a/pkg/bloomcompactor/versioned_range_test.go b/pkg/bloomcompactor/versioned_range_test.go
deleted file mode 100644
index 67db348036ffa..0000000000000
--- a/pkg/bloomcompactor/versioned_range_test.go
+++ /dev/null
@@ -1,352 +0,0 @@
-package bloomcompactor
-
-import (
- "testing"
-
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/require"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
-)
-
-func Test_TsdbTokenRange(t *testing.T) {
- type addition struct {
- version int
- bounds v1.FingerprintBounds
- }
- type exp struct {
- added bool
- err bool
- }
- mk := func(version int, min, max model.Fingerprint) addition {
- return addition{version, v1.FingerprintBounds{Min: min, Max: max}}
- }
- tok := func(version int, through model.Fingerprint) tsdbToken {
- return tsdbToken{version: version, through: through}
- }
-
- for _, tc := range []struct {
- desc string
- additions []addition
- exp []bool
- result tsdbTokenRange
- }{
- {
- desc: "ascending versions",
- additions: []addition{
- mk(1, 0, 10),
- mk(2, 11, 20),
- mk(3, 15, 25),
- },
- exp: []bool{true, true, true},
- result: tsdbTokenRange{
- tok(1, 10),
- tok(2, 14),
- tok(3, 25),
- },
- },
- {
- desc: "descending versions",
- additions: []addition{
- mk(3, 15, 25),
- mk(2, 11, 20),
- mk(1, 0, 10),
- },
- exp: []bool{true, true, true},
- result: tsdbTokenRange{
- tok(1, 10),
- tok(2, 14),
- tok(3, 25),
- },
- },
- {
- desc: "simple",
- additions: []addition{
- mk(3, 0, 10),
- mk(2, 11, 20),
- mk(1, 15, 25),
- },
- exp: []bool{true, true, true},
- result: tsdbTokenRange{
- tok(3, 10),
- tok(2, 20),
- tok(1, 25),
- },
- },
- {
- desc: "simple replacement",
- additions: []addition{
- mk(3, 10, 20),
- mk(2, 0, 9),
- },
- exp: []bool{true, true},
- result: tsdbTokenRange{
- tok(2, 9),
- tok(3, 20),
- },
- },
- {
- desc: "complex",
- additions: []addition{
- mk(5, 30, 50),
- mk(4, 20, 45),
- mk(3, 25, 70),
- mk(2, 10, 20),
- mk(1, 1, 5),
- },
- exp: []bool{true, true, true, true, true, true},
- result: tsdbTokenRange{
- tok(-1, 0),
- tok(1, 5),
- tok(-1, 9),
- tok(2, 19),
- tok(4, 29),
- tok(5, 50),
- tok(3, 70),
- },
- },
- {
- desc: "neighboring upper range",
- additions: []addition{
- mk(5, 30, 50),
- mk(4, 51, 60),
- },
- exp: []bool{true, true},
- result: tsdbTokenRange{
- tok(-1, 29),
- tok(5, 50),
- tok(4, 60),
- },
- },
- {
- desc: "non-neighboring upper range",
- additions: []addition{
- mk(5, 30, 50),
- mk(4, 55, 60),
- },
- exp: []bool{true, true},
- result: tsdbTokenRange{
- tok(-1, 29),
- tok(5, 50),
- tok(-1, 54),
- tok(4, 60),
- },
- },
- {
- desc: "earlier version within",
- additions: []addition{
- mk(5, 30, 50),
- mk(4, 40, 45),
- },
- exp: []bool{true, false},
- result: tsdbTokenRange{
- tok(-1, 29),
- tok(5, 50),
- },
- },
- {
- desc: "earlier version right overlapping",
- additions: []addition{
- mk(5, 10, 20),
- mk(4, 15, 25),
- },
- exp: []bool{true, true},
- result: tsdbTokenRange{
- tok(-1, 9),
- tok(5, 20),
- tok(4, 25),
- },
- },
- {
- desc: "older version overlaps two",
- additions: []addition{
- mk(3, 10, 20),
- mk(2, 21, 30),
- mk(1, 15, 25),
- },
- exp: []bool{true, true, false},
- result: tsdbTokenRange{
- tok(-1, 9),
- tok(3, 20),
- tok(2, 30),
- },
- },
- {
- desc: "older version overlaps two w middle",
- additions: []addition{
- mk(3, 10, 20),
- mk(2, 22, 30),
- mk(1, 15, 25),
- },
- exp: []bool{true, true, true},
- result: tsdbTokenRange{
- tok(-1, 9),
- tok(3, 20),
- tok(1, 21),
- tok(2, 30),
- },
- },
- {
- desc: "newer right overflow",
- additions: []addition{
- mk(1, 30, 50),
- mk(2, 40, 60),
- },
- exp: []bool{true, true},
- result: tsdbTokenRange{
- tok(-1, 29),
- tok(1, 39),
- tok(2, 60),
- },
- },
- {
- desc: "newer right overflow superset",
- additions: []addition{
- mk(1, 30, 50),
- mk(2, 30, 60),
- },
- exp: []bool{true, true},
- result: tsdbTokenRange{
- tok(-1, 29),
- tok(2, 60),
- },
- },
- {
- desc: "newer right overflow partial",
- additions: []addition{
- mk(1, 30, 50),
- mk(2, 40, 60),
- },
- exp: []bool{true, true},
- result: tsdbTokenRange{
- tok(-1, 29),
- tok(1, 39),
- tok(2, 60),
- },
- },
- } {
- t.Run(tc.desc, func(t *testing.T) {
- var (
- tr tsdbTokenRange
- added bool
- )
- for i, a := range tc.additions {
- tr, added = tr.Add(a.version, a.bounds)
- exp := tc.exp[i]
- require.Equal(t, exp, added, "on iteration %d", i)
- }
- require.Equal(t, tc.result, tr)
- })
- }
-}
-
-func Test_OutdatedMetas(t *testing.T) {
- gen := func(bounds v1.FingerprintBounds, tsdbTimes ...model.Time) (meta bloomshipper.Meta) {
- for _, tsdbTime := range tsdbTimes {
- meta.Sources = append(meta.Sources, tsdb.SingleTenantTSDBIdentifier{TS: tsdbTime.Time()})
- }
- meta.Bounds = bounds
- return meta
- }
-
- for _, tc := range []struct {
- desc string
- metas []bloomshipper.Meta
- exp []bloomshipper.Meta
- }{
- {
- desc: "no metas",
- metas: nil,
- exp: nil,
- },
- {
- desc: "single meta",
- metas: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 10), 0),
- },
- exp: nil,
- },
- {
- desc: "single outdated meta",
- metas: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 10), 0),
- gen(v1.NewBounds(0, 10), 1),
- },
- exp: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 10), 0),
- },
- },
- {
- desc: "single outdated via partitions",
- metas: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 5), 0),
- gen(v1.NewBounds(6, 10), 0),
- gen(v1.NewBounds(0, 10), 1),
- },
- exp: []bloomshipper.Meta{
- gen(v1.NewBounds(6, 10), 0),
- gen(v1.NewBounds(0, 5), 0),
- },
- },
- {
- desc: "same tsdb versions",
- metas: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 5), 0),
- gen(v1.NewBounds(6, 10), 0),
- gen(v1.NewBounds(0, 10), 1),
- },
- exp: []bloomshipper.Meta{
- gen(v1.NewBounds(6, 10), 0),
- gen(v1.NewBounds(0, 5), 0),
- },
- },
- {
- desc: "multi version ordering",
- metas: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 5), 0),
- gen(v1.NewBounds(0, 10), 1), // only part of the range is outdated, must keep
- gen(v1.NewBounds(8, 10), 2),
- },
- exp: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 5), 0),
- },
- },
- {
- desc: "metas without sources are removed",
- metas: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 5), 0),
- gen(v1.NewBounds(6, 10), 0),
- gen(v1.NewBounds(0, 10), 1),
- gen(v1.NewBounds(11, 15)), // Meta without sources
- },
- exp: []bloomshipper.Meta{
- gen(v1.NewBounds(11, 15)), // Meta without sources
- gen(v1.NewBounds(6, 10), 0),
- gen(v1.NewBounds(0, 5), 0),
- },
- },
- {
- desc: "metas without sources are interleaved",
- metas: []bloomshipper.Meta{
- gen(v1.NewBounds(0, 5), 0),
- gen(v1.NewBounds(6, 10)), // Meta without sources
- gen(v1.NewBounds(0, 10), 1),
- gen(v1.NewBounds(11, 15)), // Meta without sources
- gen(v1.NewBounds(16, 20), 2),
- },
- exp: []bloomshipper.Meta{
- gen(v1.NewBounds(6, 10)), // Meta without sources
- gen(v1.NewBounds(11, 15)), // Meta without sources
- gen(v1.NewBounds(0, 5), 0),
- },
- },
- } {
- t.Run(tc.desc, func(t *testing.T) {
- outdated, err := outdatedMetas(tc.metas)
- require.NoError(t, err)
- require.Equal(t, tc.exp, outdated)
- })
- }
-}
diff --git a/pkg/bloomgateway/bloomgateway.go b/pkg/bloomgateway/bloomgateway.go
index cdc7c96f065b5..3c42f68ef0ddf 100644
--- a/pkg/bloomgateway/bloomgateway.go
+++ b/pkg/bloomgateway/bloomgateway.go
@@ -193,12 +193,12 @@ func (g *Gateway) FilterChunkRefs(ctx context.Context, req *logproto.FilterChunk
return nil, errors.New("from time must not be after through time")
}
- filters := v1.ExtractTestableLineFilters(req.Plan.AST)
- stats.NumFilters = len(filters)
- g.metrics.receivedFilters.Observe(float64(len(filters)))
+ matchers := v1.ExtractTestableLabelMatchers(req.Plan.AST)
+ stats.NumMatchers = len(matchers)
+ g.metrics.receivedMatchers.Observe(float64(len(matchers)))
// Shortcut if request does not contain filters
- if len(filters) == 0 {
+ if len(matchers) == 0 {
stats.Status = labelSuccess
return &logproto.FilterChunkRefResponse{
ChunkRefs: req.Refs,
@@ -227,7 +227,7 @@ func (g *Gateway) FilterChunkRefs(ctx context.Context, req *logproto.FilterChunk
stats.NumTasks = len(seriesByDay)
sp.LogKV(
- "filters", len(filters),
+ "matchers", len(matchers),
"days", len(seriesByDay),
"blocks", len(req.Blocks),
"series_requested", len(req.Refs),
@@ -239,7 +239,7 @@ func (g *Gateway) FilterChunkRefs(ctx context.Context, req *logproto.FilterChunk
}
series := seriesByDay[0]
- task := newTask(ctx, tenantID, series, filters, blocks)
+ task := newTask(ctx, tenantID, series, matchers, blocks)
// TODO(owen-d): include capacity in constructor?
task.responses = responsesPool.Get(len(series.series))
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index 67bb59e460ad9..8fdc3989510a3 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -157,7 +157,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
chunkRefs := createQueryInputFromBlockData(t, tenantID, data, 100)
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="nomatch"`)
require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
@@ -196,7 +196,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
// saturate workers
// then send additional request
for i := 0; i < gw.cfg.WorkerConcurrency+1; i++ {
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="nomatch"`)
require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
@@ -240,7 +240,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
// saturate workers
// then send additional request
for i := 0; i < gw.cfg.WorkerConcurrency+1; i++ {
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="nomatch"`)
require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
@@ -341,7 +341,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
Checksum: uint32(idx),
},
}
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "foo"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="nomatch"`)
require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
From: now.Add(-4 * time.Hour),
@@ -380,7 +380,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
t.Run("no match - return empty response", func(t *testing.T) {
inputChunkRefs := groupRefs(t, chunkRefs)
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "does not match"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="nomatch"`)
require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
From: now.Add(-8 * time.Hour),
@@ -403,16 +403,14 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
inputChunkRefs := groupRefs(t, chunkRefs)
// Hack to get search string for a specific series
// see MkBasicSeriesWithBlooms() in pkg/storage/bloom/v1/test_util.go
- // each series has 1 chunk
- // each chunk has multiple strings, from int(fp) to int(nextFp)-1
- x := rand.Intn(len(inputChunkRefs))
- fp := inputChunkRefs[x].Fingerprint
- chks := inputChunkRefs[x].Refs
- line := fmt.Sprintf("%04x:%04x", int(fp), 0) // first line
+ rnd := rand.Intn(len(inputChunkRefs))
+ fp := inputChunkRefs[rnd].Fingerprint
+ chks := inputChunkRefs[rnd].Refs
+ key := fmt.Sprintf("%s:%04x", model.Fingerprint(fp), 0)
- t.Log("x=", x, "fp=", fp, "line=", line)
+ t.Log("rnd=", rnd, "fp=", fp, "key=", key)
- expr, err := syntax.ParseExpr(fmt.Sprintf(`{foo="bar"} |= "%s"`, line))
+ expr, err := syntax.ParseExpr(fmt.Sprintf(`{foo="bar"} | trace_id="%s"`, key))
require.NoError(t, err)
req := &logproto.FilterChunkRefRequest{
diff --git a/pkg/bloomgateway/metrics.go b/pkg/bloomgateway/metrics.go
index 5c046d3147c34..9fe096eec2ac4 100644
--- a/pkg/bloomgateway/metrics.go
+++ b/pkg/bloomgateway/metrics.go
@@ -56,7 +56,7 @@ type serverMetrics struct {
filteredSeries prometheus.Histogram
requestedChunks prometheus.Histogram
filteredChunks prometheus.Histogram
- receivedFilters prometheus.Histogram
+ receivedMatchers prometheus.Histogram
}
func newMetrics(registerer prometheus.Registerer, namespace, subsystem string) *metrics {
@@ -105,11 +105,11 @@ func newServerMetrics(registerer prometheus.Registerer, namespace, subsystem str
Help: "Total amount of chunk refs filtered by bloom-gateway",
Buckets: prometheus.ExponentialBucketsRange(1, 100e3, 10),
}),
- receivedFilters: promauto.With(registerer).NewHistogram(prometheus.HistogramOpts{
+ receivedMatchers: promauto.With(registerer).NewHistogram(prometheus.HistogramOpts{
Namespace: namespace,
Subsystem: subsystem,
- Name: "request_filters",
- Help: "Number of filters per request.",
+ Name: "request_matchers",
+ Help: "Number of matchers per request.",
Buckets: prometheus.ExponentialBuckets(1, 2, 9), // 1 -> 256
}),
}
diff --git a/pkg/bloomgateway/multiplexing.go b/pkg/bloomgateway/multiplexing.go
index b814ae23a5a59..2aee9dc32c48b 100644
--- a/pkg/bloomgateway/multiplexing.go
+++ b/pkg/bloomgateway/multiplexing.go
@@ -9,7 +9,6 @@ import (
iter "github.com/grafana/loki/v3/pkg/iter/v2"
"github.com/grafana/loki/v3/pkg/logproto"
- "github.com/grafana/loki/v3/pkg/logql/syntax"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
@@ -56,8 +55,8 @@ type Task struct {
// series of the original request
series []*logproto.GroupedChunkRefs
- // filters of the original request
- filters []syntax.LineFilterExpr
+ // matchers to check against
+ matchers []v1.LabelMatcher
// blocks that were resolved on the index gateway and sent with the request
blocks []bloomshipper.BlockRef
// from..through date of the task's chunks
@@ -75,13 +74,13 @@ type Task struct {
recorder *v1.BloomRecorder
}
-func newTask(ctx context.Context, tenantID string, refs seriesWithInterval, filters []syntax.LineFilterExpr, blocks []bloomshipper.BlockRef) Task {
+func newTask(ctx context.Context, tenantID string, refs seriesWithInterval, matchers []v1.LabelMatcher, blocks []bloomshipper.BlockRef) Task {
return Task{
tenant: tenantID,
recorder: v1.NewBloomRecorder(ctx, "task"),
err: new(wrappedError),
resCh: make(chan v1.Output),
- filters: filters,
+ matchers: matchers,
blocks: blocks,
series: refs.series,
interval: refs.interval,
@@ -122,7 +121,7 @@ func (t Task) Copy(series []*logproto.GroupedChunkRefs) Task {
tenant: t.tenant,
err: t.err,
resCh: t.resCh,
- filters: t.filters,
+ matchers: t.matchers,
blocks: t.blocks,
series: series,
interval: t.interval,
@@ -132,13 +131,11 @@ func (t Task) Copy(series []*logproto.GroupedChunkRefs) Task {
}
}
-func (t Task) RequestIter(
- tokenizer *v1.NGramTokenizer,
-) iter.Iterator[v1.Request] {
+func (t Task) RequestIter() iter.Iterator[v1.Request] {
return &requestIterator{
recorder: t.recorder,
series: iter.NewSliceIter(t.series),
- search: v1.FiltersToBloomTest(tokenizer, t.filters...),
+ search: v1.LabelMatchersToBloomTest(t.matchers...),
channel: t.resCh,
curr: v1.Request{},
}
diff --git a/pkg/bloomgateway/multiplexing_test.go b/pkg/bloomgateway/multiplexing_test.go
index d395d2a315cbb..e6b97679e1ef8 100644
--- a/pkg/bloomgateway/multiplexing_test.go
+++ b/pkg/bloomgateway/multiplexing_test.go
@@ -11,7 +11,6 @@ import (
v2 "github.com/grafana/loki/v3/pkg/iter/v2"
"github.com/grafana/loki/v3/pkg/logproto"
- "github.com/grafana/loki/v3/pkg/logql/syntax"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
)
@@ -55,15 +54,14 @@ func createTasksForRequests(t *testing.T, tenant string, requests ...*logproto.F
func TestTask_RequestIterator(t *testing.T) {
ts := mktime("2024-01-24 12:00")
tenant := "fake"
- tokenizer := v1.NewNGramTokenizer(4, 0)
t.Run("empty request yields empty iterator", func(t *testing.T) {
swb := seriesWithInterval{
interval: bloomshipper.Interval{Start: 0, End: math.MaxInt64},
series: []*logproto.GroupedChunkRefs{},
}
- task := newTask(context.Background(), tenant, swb, []syntax.LineFilterExpr{}, nil)
- it := task.RequestIter(tokenizer)
+ task := newTask(context.Background(), tenant, swb, nil, nil)
+ it := task.RequestIter()
// nothing to iterate over
require.False(t, it.Next())
})
@@ -106,7 +104,7 @@ func TestTask_RequestIterator(t *testing.T) {
iters := make([]v2.PeekIterator[v1.Request], 0, len(tasks))
for _, task := range tasks {
- iters = append(iters, v2.NewPeekIter(task.RequestIter(tokenizer)))
+ iters = append(iters, v2.NewPeekIter(task.RequestIter()))
}
// merge the request iterators using the heap sort iterator
diff --git a/pkg/bloomgateway/processor.go b/pkg/bloomgateway/processor.go
index edd882e1e4210..ad804555a3ff3 100644
--- a/pkg/bloomgateway/processor.go
+++ b/pkg/bloomgateway/processor.go
@@ -145,7 +145,11 @@ func (p *processor) processBlock(_ context.Context, bq *bloomshipper.CloseableBl
return err
}
- tokenizer := v1.NewNGramTokenizer(schema.NGramLen(), schema.NGramSkip())
+ // We require V3+ schema
+ if schema.Version() < v1.V3 {
+ return v1.ErrUnsupportedSchemaVersion
+ }
+
iters := make([]iter.PeekIterator[v1.Request], 0, len(tasks))
for _, task := range tasks {
@@ -159,7 +163,7 @@ func (p *processor) processBlock(_ context.Context, bq *bloomshipper.CloseableBl
// sp.LogKV("process block", blockID, "series", len(task.series))
// }
- it := iter.NewPeekIter(task.RequestIter(tokenizer))
+ it := iter.NewPeekIter(task.RequestIter())
iters = append(iters, it)
}
diff --git a/pkg/bloomgateway/processor_test.go b/pkg/bloomgateway/processor_test.go
index 8ce78e7bdb76c..f1120fe530a41 100644
--- a/pkg/bloomgateway/processor_test.go
+++ b/pkg/bloomgateway/processor_test.go
@@ -14,7 +14,6 @@ import (
"github.com/stretchr/testify/require"
"go.uber.org/atomic"
- "github.com/grafana/loki/v3/pkg/logql/syntax"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/chunk/client"
"github.com/grafana/loki/v3/pkg/storage/config"
@@ -140,17 +139,16 @@ func TestProcessor(t *testing.T) {
},
day: config.NewDayTime(truncateDay(now)),
}
- filters := []syntax.LineFilterExpr{
- {
- LineFilter: syntax.LineFilter{
- Ty: 0,
- Match: "no match",
- },
+
+ matchers := []v1.LabelMatcher{
+ v1.PlainLabelMatcher{
+ Key: "trace_id",
+ Value: "nomatch",
},
}
t.Log("series", len(swb.series))
- task := newTask(ctx, "fake", swb, filters, nil)
+ task := newTask(ctx, "fake", swb, matchers, nil)
tasks := []Task{task}
results := atomic.NewInt64(0)
@@ -192,17 +190,15 @@ func TestProcessor(t *testing.T) {
},
day: config.NewDayTime(truncateDay(now)),
}
- filters := []syntax.LineFilterExpr{
- {
- LineFilter: syntax.LineFilter{
- Ty: 0,
- Match: "no match",
- },
+ matchers := []v1.LabelMatcher{
+ v1.PlainLabelMatcher{
+ Key: "trace_id",
+ Value: "nomatch",
},
}
t.Log("series", len(swb.series))
- task := newTask(ctx, "fake", swb, filters, blocks)
+ task := newTask(ctx, "fake", swb, matchers, blocks)
tasks := []Task{task}
results := atomic.NewInt64(0)
@@ -241,17 +237,15 @@ func TestProcessor(t *testing.T) {
},
day: config.NewDayTime(truncateDay(now)),
}
- filters := []syntax.LineFilterExpr{
- {
- LineFilter: syntax.LineFilter{
- Ty: 0,
- Match: "no match",
- },
+ matchers := []v1.LabelMatcher{
+ v1.PlainLabelMatcher{
+ Key: "trace_id",
+ Value: "nomatch",
},
}
t.Log("series", len(swb.series))
- task := newTask(ctx, "fake", swb, filters, nil)
+ task := newTask(ctx, "fake", swb, matchers, nil)
tasks := []Task{task}
results := atomic.NewInt64(0)
diff --git a/pkg/bloomgateway/querier.go b/pkg/bloomgateway/querier.go
index 23de7a15e2be7..dfc3746380ab3 100644
--- a/pkg/bloomgateway/querier.go
+++ b/pkg/bloomgateway/querier.go
@@ -103,7 +103,7 @@ func convertToShortRef(ref *logproto.ChunkRef) *logproto.ShortRef {
func (bq *BloomQuerier) FilterChunkRefs(ctx context.Context, tenant string, from, through model.Time, chunkRefs []*logproto.ChunkRef, queryPlan plan.QueryPlan) ([]*logproto.ChunkRef, error) {
// Shortcut that does not require any filtering
- if !bq.limits.BloomGatewayEnabled(tenant) || len(chunkRefs) == 0 || len(v1.ExtractTestableLineFilters(queryPlan.AST)) == 0 {
+ if !bq.limits.BloomGatewayEnabled(tenant) || len(chunkRefs) == 0 || len(v1.ExtractTestableLabelMatchers(queryPlan.AST)) == 0 {
return chunkRefs, nil
}
diff --git a/pkg/bloomgateway/querier_test.go b/pkg/bloomgateway/querier_test.go
index d4b24447ae124..ca4036d266edb 100644
--- a/pkg/bloomgateway/querier_test.go
+++ b/pkg/bloomgateway/querier_test.go
@@ -93,7 +93,7 @@ func TestBloomQuerier(t *testing.T) {
through := model.Now()
from := through.Add(-12 * time.Hour)
chunkRefs := []*logproto.ChunkRef{}
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "uuid"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="exists"`)
require.NoError(t, err)
res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, plan.QueryPlan{AST: expr})
require.NoError(t, err)
@@ -113,7 +113,7 @@ func TestBloomQuerier(t *testing.T) {
{Fingerprint: 1000, UserID: tenant, From: from, Through: through, Checksum: 2},
{Fingerprint: 2000, UserID: tenant, From: from, Through: through, Checksum: 3},
}
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "uuid"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="exists"`)
require.NoError(t, err)
res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, plan.QueryPlan{AST: expr})
require.Error(t, err)
@@ -132,7 +132,7 @@ func TestBloomQuerier(t *testing.T) {
{Fingerprint: 2000, UserID: tenant, From: mktime("2024-04-16 23:30"), Through: mktime("2024-04-17 00:30"), Checksum: 2}, // day 1
{Fingerprint: 3000, UserID: tenant, From: mktime("2024-04-17 00:30"), Through: mktime("2024-04-17 01:30"), Checksum: 3}, // day 2
}
- expr, err := syntax.ParseExpr(`{foo="bar"} |= "uuid"`)
+ expr, err := syntax.ParseExpr(`{foo="bar"} | trace_id="exists"`)
require.NoError(t, err)
res, err := bq.FilterChunkRefs(ctx, tenant, from, through, chunkRefs, plan.QueryPlan{AST: expr})
require.NoError(t, err)
diff --git a/pkg/bloomgateway/stats.go b/pkg/bloomgateway/stats.go
index 09f78841e544a..59dd9d25287d8 100644
--- a/pkg/bloomgateway/stats.go
+++ b/pkg/bloomgateway/stats.go
@@ -9,7 +9,7 @@ import (
type Stats struct {
Status string
- NumTasks, NumFilters int
+ NumTasks, NumMatchers int
ChunksRequested, ChunksFiltered int
SeriesRequested, SeriesFiltered int
QueueTime *atomic.Duration
@@ -70,7 +70,7 @@ func (s *Stats) KVArgs() []any {
"msg", "stats-report",
"status", s.Status,
"tasks", s.NumTasks,
- "filters", s.NumFilters,
+ "matchers", s.NumMatchers,
"blocks_processed", s.ProcessedBlocks.Load(),
"series_requested", s.SeriesRequested,
"series_filtered", s.SeriesFiltered,
diff --git a/pkg/bloomutils/ring.go b/pkg/bloomutils/ring.go
deleted file mode 100644
index 9743298e89b4d..0000000000000
--- a/pkg/bloomutils/ring.go
+++ /dev/null
@@ -1,178 +0,0 @@
-// This file contains a bunch of utility functions for bloom components.
-
-package bloomutils
-
-import (
- "errors"
- "fmt"
- "math"
- "sort"
-
- "github.com/grafana/dskit/ring"
- "github.com/prometheus/common/model"
- "golang.org/x/exp/constraints"
- "golang.org/x/exp/slices"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
-)
-
-var (
- Uint32Range = Range[uint32]{Min: 0, Max: math.MaxUint32}
- Uint64Range = Range[uint64]{Min: 0, Max: math.MaxUint64}
-)
-
-type Range[T constraints.Unsigned] struct {
- Min, Max T
-}
-
-func (r Range[T]) String() string {
- return fmt.Sprintf("%016x-%016x", r.Min, r.Max)
-}
-
-func (r Range[T]) Less(other Range[T]) bool {
- if r.Min != other.Min {
- return r.Min < other.Min
- }
- return r.Max <= other.Max
-}
-
-func (r Range[T]) Cmp(t T) v1.BoundsCheck {
- if t < r.Min {
- return v1.Before
- } else if t > r.Max {
- return v1.After
- }
- return v1.Overlap
-}
-
-func NewRange[T constraints.Unsigned](min, max T) Range[T] {
- return Range[T]{Min: min, Max: max}
-}
-
-func NewTokenRange(min, max uint32) Range[uint32] {
- return Range[uint32]{Min: min, Max: max}
-}
-
-type InstanceWithTokenRange struct {
- Instance ring.InstanceDesc
- TokenRange Range[uint32]
-}
-
-func (i InstanceWithTokenRange) Cmp(token uint32) v1.BoundsCheck {
- return i.TokenRange.Cmp(token)
-}
-
-type InstancesWithTokenRange []InstanceWithTokenRange
-
-func (i InstancesWithTokenRange) Contains(token uint32) bool {
- for _, instance := range i {
- if instance.Cmp(token) == v1.Overlap {
- return true
- }
- }
- return false
-}
-
-// TODO(owen-d): use https://github.com/grafana/loki/pull/11975 after merge
-func KeyspacesFromTokenRanges(tokenRanges ring.TokenRanges) []v1.FingerprintBounds {
- keyspaces := make([]v1.FingerprintBounds, 0, len(tokenRanges)/2)
- for i := 0; i < len(tokenRanges)-1; i += 2 {
- keyspaces = append(keyspaces, v1.FingerprintBounds{
- Min: model.Fingerprint(tokenRanges[i]) << 32,
- Max: model.Fingerprint(tokenRanges[i+1])<<32 | model.Fingerprint(math.MaxUint32),
- })
- }
- return keyspaces
-}
-
-func TokenRangesForInstance(id string, instances []ring.InstanceDesc) (ranges ring.TokenRanges, err error) {
- var ownedTokens map[uint32]struct{}
-
- // lifted from grafana/dskit/ring/model.go <*Desc>.GetTokens()
- toks := make([][]uint32, 0, len(instances))
- for _, instance := range instances {
- if instance.Id == id {
- ranges = make(ring.TokenRanges, 0, 2*(len(instance.Tokens)+1))
- ownedTokens = make(map[uint32]struct{}, len(instance.Tokens))
- for _, tok := range instance.Tokens {
- ownedTokens[tok] = struct{}{}
- }
- }
-
- // Tokens may not be sorted for an older version which, so we enforce sorting here.
- tokens := instance.Tokens
- if !sort.IsSorted(ring.Tokens(tokens)) {
- sort.Sort(ring.Tokens(tokens))
- }
-
- toks = append(toks, tokens)
- }
-
- if cap(ranges) == 0 {
- return nil, fmt.Errorf("instance %s not found", id)
- }
-
- allTokens := ring.MergeTokens(toks)
- if len(allTokens) == 0 {
- return nil, errors.New("no tokens in the ring")
- }
-
- // mostly lifted from grafana/dskit/ring/token_range.go <*Ring>.GetTokenRangesForInstance()
-
- // non-zero value means we're now looking for start of the range. Zero value means we're looking for next end of range (ie. token owned by this instance).
- rangeEnd := uint32(0)
-
- // if this instance claimed the first token, it owns the wrap-around range, which we'll break into two separate ranges
- firstToken := allTokens[0]
- _, ownsFirstToken := ownedTokens[firstToken]
-
- if ownsFirstToken {
- // we'll start by looking for the beginning of the range that ends with math.MaxUint32
- rangeEnd = math.MaxUint32
- }
-
- // walk the ring backwards, alternating looking for ends and starts of ranges
- for i := len(allTokens) - 1; i > 0; i-- {
- token := allTokens[i]
- _, owned := ownedTokens[token]
-
- if rangeEnd == 0 {
- // we're looking for the end of the next range
- if owned {
- rangeEnd = token - 1
- }
- } else {
- // we have a range end, and are looking for the start of the range
- if !owned {
- ranges = append(ranges, rangeEnd, token)
- rangeEnd = 0
- }
- }
- }
-
- // finally look at the first token again
- // - if we have a range end, check if we claimed token 0
- // - if we don't, we have our start
- // - if we do, the start is 0
- // - if we don't have a range end, check if we claimed token 0
- // - if we don't, do nothing
- // - if we do, add the range of [0, token-1]
- // - BUT, if the token itself is 0, do nothing, because we don't own the tokens themselves (we should be covered by the already added range that ends with MaxUint32)
-
- if rangeEnd == 0 {
- if ownsFirstToken && firstToken != 0 {
- ranges = append(ranges, firstToken-1, 0)
- }
- } else {
- if ownsFirstToken {
- ranges = append(ranges, rangeEnd, 0)
- } else {
- ranges = append(ranges, rangeEnd, firstToken)
- }
- }
-
- // Ensure returned ranges are sorted.
- slices.Sort(ranges)
-
- return ranges, nil
-}
diff --git a/pkg/bloomutils/ring_test.go b/pkg/bloomutils/ring_test.go
deleted file mode 100644
index 8a373696c7c92..0000000000000
--- a/pkg/bloomutils/ring_test.go
+++ /dev/null
@@ -1,48 +0,0 @@
-package bloomutils
-
-import (
- "fmt"
- "math"
- "testing"
-
- "github.com/grafana/dskit/ring"
- "github.com/stretchr/testify/require"
-
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
-)
-
-func uint64Range(min, max uint64) Range[uint64] {
- return Range[uint64]{min, max}
-}
-
-func TestKeyspacesFromTokenRanges(t *testing.T) {
- for i, tc := range []struct {
- tokenRanges ring.TokenRanges
- exp []v1.FingerprintBounds
- }{
- {
- tokenRanges: ring.TokenRanges{
- 0, math.MaxUint32 / 2,
- math.MaxUint32/2 + 1, math.MaxUint32,
- },
- exp: []v1.FingerprintBounds{
- v1.NewBounds(0, math.MaxUint64/2),
- v1.NewBounds(math.MaxUint64/2+1, math.MaxUint64),
- },
- },
- {
- tokenRanges: ring.TokenRanges{
- 0, math.MaxUint8,
- math.MaxUint16, math.MaxUint16 << 1,
- },
- exp: []v1.FingerprintBounds{
- v1.NewBounds(0, 0xff00000000|math.MaxUint32),
- v1.NewBounds(math.MaxUint16<<32, math.MaxUint16<<33|math.MaxUint32),
- },
- },
- } {
- t.Run(fmt.Sprint(i), func(t *testing.T) {
- require.Equal(t, tc.exp, KeyspacesFromTokenRanges(tc.tokenRanges))
- })
- }
-}
diff --git a/pkg/canary/comparator/comparator.go b/pkg/canary/comparator/comparator.go
index a575c74eccd81..8d72fac6260f9 100644
--- a/pkg/canary/comparator/comparator.go
+++ b/pkg/canary/comparator/comparator.go
@@ -427,7 +427,7 @@ func (c *Comparator) spotCheckEntries(currTime time.Time) {
func(_ int, t *time.Time) bool {
return t.Before(currTime.Add(-c.spotCheckMax))
},
- func(_ int, t *time.Time) {
+ func(_ int, _ *time.Time) {
})
@@ -513,7 +513,7 @@ func (c *Comparator) pruneEntries(currentTime time.Time) {
func(_ int, t *time.Time) bool {
return t.Before(currentTime.Add(-c.wait))
},
- func(_ int, t *time.Time) {
+ func(_ int, _ *time.Time) {
})
}
diff --git a/pkg/chunkenc/memchunk_test.go b/pkg/chunkenc/memchunk_test.go
index daa97a2616917..85cccd743cfbb 100644
--- a/pkg/chunkenc/memchunk_test.go
+++ b/pkg/chunkenc/memchunk_test.go
@@ -1539,7 +1539,7 @@ func TestMemChunk_ReboundAndFilter_with_filter(t *testing.T) {
{
name: "no matches - chunk without structured metadata",
testMemChunk: buildFilterableTestMemChunk(t, chkFrom, chkThrough, &chkFrom, &chkThroughPlus1, false),
- filterFunc: func(_ time.Time, in string, structuredMetadata ...labels.Label) bool {
+ filterFunc: func(_ time.Time, _ string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get(lblPing) == lblPong
},
nrMatching: 0,
@@ -1548,7 +1548,7 @@ func TestMemChunk_ReboundAndFilter_with_filter(t *testing.T) {
{
name: "structured metadata not matching",
testMemChunk: buildFilterableTestMemChunk(t, chkFrom, chkThrough, &chkFrom, &chkThroughPlus1, true),
- filterFunc: func(_ time.Time, in string, structuredMetadata ...labels.Label) bool {
+ filterFunc: func(_ time.Time, _ string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get("ding") == "dong"
},
nrMatching: 0,
@@ -1557,7 +1557,7 @@ func TestMemChunk_ReboundAndFilter_with_filter(t *testing.T) {
{
name: "some lines removed - with structured metadata",
testMemChunk: buildFilterableTestMemChunk(t, chkFrom, chkThrough, &chkFrom, &chkFromPlus5, true),
- filterFunc: func(_ time.Time, in string, structuredMetadata ...labels.Label) bool {
+ filterFunc: func(_ time.Time, _ string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get(lblPing) == lblPong
},
nrMatching: 5,
diff --git a/pkg/compactor/deletion/delete_request_test.go b/pkg/compactor/deletion/delete_request_test.go
index f67a06dc483fb..899e83f802e37 100644
--- a/pkg/compactor/deletion/delete_request_test.go
+++ b/pkg/compactor/deletion/delete_request_test.go
@@ -93,7 +93,7 @@ func TestDeleteRequest_IsDeleted(t *testing.T) {
},
expectedResp: resp{
isDeleted: true,
- expectedFilter: func(ts time.Time, s string, structuredMetadata ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, structuredMetadata ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if labels.Labels(structuredMetadata).Get(lblPing) == lblPong && now.Add(-3*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-time.Hour).UnixNano() {
return true
@@ -131,7 +131,7 @@ func TestDeleteRequest_IsDeleted(t *testing.T) {
},
expectedResp: resp{
isDeleted: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, _ ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if now.Add(-3*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-2*time.Hour).UnixNano() {
return true
@@ -150,7 +150,7 @@ func TestDeleteRequest_IsDeleted(t *testing.T) {
},
expectedResp: resp{
isDeleted: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, _ ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if now.Add(-2*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.UnixNano() {
return true
@@ -188,7 +188,7 @@ func TestDeleteRequest_IsDeleted(t *testing.T) {
},
expectedResp: resp{
isDeleted: true,
- expectedFilter: func(ts time.Time, s string, structuredMetadata ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, structuredMetadata ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if labels.Labels(structuredMetadata).Get(lblPing) == lblPong && now.Add(-2*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.UnixNano() {
return true
@@ -226,7 +226,7 @@ func TestDeleteRequest_IsDeleted(t *testing.T) {
},
expectedResp: resp{
isDeleted: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, _ ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if now.Add(-(2*time.Hour+30*time.Minute)).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-(time.Hour+30*time.Minute)).UnixNano() {
return true
diff --git a/pkg/compactor/deletion/delete_requests_manager_test.go b/pkg/compactor/deletion/delete_requests_manager_test.go
index 04aa986ac492d..6eabf2de38799 100644
--- a/pkg/compactor/deletion/delete_requests_manager_test.go
+++ b/pkg/compactor/deletion/delete_requests_manager_test.go
@@ -168,7 +168,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, s string, _ ...labels.Label) bool {
return strings.Contains(s, "fizz")
},
},
@@ -195,7 +195,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, structuredMetadata ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, _ string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get(lblPing) == lblPong
},
},
@@ -222,7 +222,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, structuredMetadata ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, s string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get(lblPing) == lblPong && strings.Contains(s, "fizz")
},
},
@@ -346,7 +346,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, s string, _ ...labels.Label) bool {
return strings.Contains(s, "fizz")
},
},
@@ -380,7 +380,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, structuredMetadata ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, _ string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get(lblPing) == lblPong
},
},
@@ -428,7 +428,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, _ ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if (now.Add(-13*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-11*time.Hour).UnixNano()) ||
(now.Add(-10*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-8*time.Hour).UnixNano()) ||
@@ -469,7 +469,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, _ string, _ ...labels.Label) bool {
return true
},
},
@@ -503,7 +503,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, s string, _ ...labels.Label) bool {
return strings.Contains(s, "fizz")
},
},
@@ -537,7 +537,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, structuredMetadata ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, _ string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get(lblPing) == lblPong
},
},
@@ -578,7 +578,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, _ string, _ ...labels.Label) bool {
return true
},
},
@@ -619,7 +619,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, s string, _ ...labels.Label) bool {
return strings.Contains(s, "fizz")
},
},
@@ -660,7 +660,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, structuredMetadata ...labels.Label) bool {
+ expectedFilter: func(_ time.Time, _ string, structuredMetadata ...labels.Label) bool {
return labels.Labels(structuredMetadata).Get(lblPing) == lblPong
},
},
@@ -784,7 +784,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, _ ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if (now.Add(-13*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-11*time.Hour).UnixNano()) ||
(now.Add(-10*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-8*time.Hour).UnixNano()) {
@@ -852,7 +852,7 @@ func TestDeleteRequestsManager_Expired(t *testing.T) {
},
expectedResp: resp{
isExpired: true,
- expectedFilter: func(ts time.Time, s string, _ ...labels.Label) bool {
+ expectedFilter: func(ts time.Time, _ string, _ ...labels.Label) bool {
tsUnixNano := ts.UnixNano()
if (now.Add(-13*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-11*time.Hour).UnixNano()) ||
(now.Add(-10*time.Hour).UnixNano() <= tsUnixNano && tsUnixNano <= now.Add(-8*time.Hour).UnixNano()) {
diff --git a/pkg/compactor/deletion/delete_requests_store.go b/pkg/compactor/deletion/delete_requests_store.go
index ee8f324d6b0be..b7ddfe13a6182 100644
--- a/pkg/compactor/deletion/delete_requests_store.go
+++ b/pkg/compactor/deletion/delete_requests_store.go
@@ -225,7 +225,7 @@ func (ds *deleteRequestsStore) GetCacheGenerationNumber(ctx context.Context, use
ctx = user.InjectOrgID(ctx, userID)
genNumber := ""
- err := ds.indexClient.QueryPages(ctx, []index.Query{query}, func(query index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
+ err := ds.indexClient.QueryPages(ctx, []index.Query{query}, func(_ index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
itr := batch.Iterator()
for itr.Next() {
genNumber = string(itr.Value())
@@ -244,7 +244,7 @@ func (ds *deleteRequestsStore) GetCacheGenerationNumber(ctx context.Context, use
func (ds *deleteRequestsStore) queryDeleteRequests(ctx context.Context, deleteQuery index.Query) ([]DeleteRequest, error) {
var deleteRequests []DeleteRequest
var err error
- err = ds.indexClient.QueryPages(ctx, []index.Query{deleteQuery}, func(query index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
+ err = ds.indexClient.QueryPages(ctx, []index.Query{deleteQuery}, func(_ index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
// No need to lock inside the callback since we run a single index query.
itr := batch.Iterator()
for itr.Next() {
@@ -297,7 +297,7 @@ func (ds *deleteRequestsStore) queryDeleteRequestDetails(ctx context.Context, de
var marshalError error
var requestWithDetails DeleteRequest
- err := ds.indexClient.QueryPages(ctx, deleteRequestQuery, func(query index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
+ err := ds.indexClient.QueryPages(ctx, deleteRequestQuery, func(_ index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
if requestWithDetails, marshalError = unmarshalDeleteRequestDetails(batch.Iterator(), deleteRequest); marshalError != nil {
return false
}
diff --git a/pkg/compactor/deletion/tenant_request_handler_test.go b/pkg/compactor/deletion/tenant_request_handler_test.go
index c57dc84ba4caf..cca06f4c18cfe 100644
--- a/pkg/compactor/deletion/tenant_request_handler_test.go
+++ b/pkg/compactor/deletion/tenant_request_handler_test.go
@@ -21,7 +21,7 @@ func TestDeleteRequestHandlerDeletionMiddleware(t *testing.T) {
}
// Setup handler
- middle := TenantMiddleware(fl, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))
+ middle := TenantMiddleware(fl, http.HandlerFunc(func(_ http.ResponseWriter, _ *http.Request) {}))
// User that has deletion enabled
req := httptest.NewRequest(http.MethodGet, "http://www.your-domain.com", nil)
diff --git a/pkg/compactor/retention/marker_test.go b/pkg/compactor/retention/marker_test.go
index 48aab32b73f4a..e90ac7dc4aaf9 100644
--- a/pkg/compactor/retention/marker_test.go
+++ b/pkg/compactor/retention/marker_test.go
@@ -50,7 +50,7 @@ func Test_marlkerProcessor_Deadlock(t *testing.T) {
paths, _, err := p.availablePath()
require.NoError(t, err)
for _, path := range paths {
- require.NoError(t, p.processPath(path, func(ctx context.Context, chunkId []byte) error { return nil }))
+ require.NoError(t, p.processPath(path, func(_ context.Context, _ []byte) error { return nil }))
require.NoError(t, p.deleteEmptyMarks(path))
}
paths, _, err = p.availablePath()
diff --git a/pkg/compactor/retention/retention_test.go b/pkg/compactor/retention/retention_test.go
index b140b3661f4d4..4885c835003c2 100644
--- a/pkg/compactor/retention/retention_test.go
+++ b/pkg/compactor/retention/retention_test.go
@@ -47,7 +47,7 @@ func (m *mockChunkClient) IsChunkNotFoundErr(_ error) bool {
return false
}
-func (m *mockChunkClient) getDeletedChunkIds() []string {
+func (m *mockChunkClient) getDeletedChunkIDs() []string {
m.mtx.Lock()
defer m.mtx.Unlock()
@@ -166,7 +166,7 @@ func Test_Retention(t *testing.T) {
store.Stop()
if len(expectDeleted) != 0 {
require.Eventually(t, func() bool {
- actual := chunkClient.getDeletedChunkIds()
+ actual := chunkClient.getDeletedChunkIDs()
sort.Strings(actual)
return assert.ObjectsAreEqual(expectDeleted, actual)
}, 10*time.Second, 1*time.Second)
@@ -301,7 +301,7 @@ func TestChunkRewriter(t *testing.T) {
{
name: "no rewrites",
chunk: createChunk(t, "1", labels.Labels{labels.Label{Name: "foo", Value: "bar"}}, todaysTableInterval.Start, todaysTableInterval.Start.Add(time.Hour)),
- filterFunc: func(ts time.Time, s string, _ ...labels.Label) bool {
+ filterFunc: func(_ time.Time, _ string, _ ...labels.Label) bool {
return false
},
expectedRespByTables: map[string]tableResp{
@@ -311,7 +311,7 @@ func TestChunkRewriter(t *testing.T) {
{
name: "no rewrites with chunk spanning multiple tables",
chunk: createChunk(t, "1", labels.Labels{labels.Label{Name: "foo", Value: "bar"}}, todaysTableInterval.End.Add(-48*time.Hour), todaysTableInterval.End),
- filterFunc: func(ts time.Time, s string, _ ...labels.Label) bool {
+ filterFunc: func(_ time.Time, _ string, _ ...labels.Label) bool {
return false
},
expectedRespByTables: map[string]tableResp{
@@ -672,7 +672,7 @@ func TestMarkForDelete_SeriesCleanup(t *testing.T) {
expiry: []chunkExpiry{
{
isExpired: true,
- filterFunc: func(ts time.Time, s string, _ ...labels.Label) bool {
+ filterFunc: func(_ time.Time, _ string, _ ...labels.Label) bool {
return false
},
},
@@ -814,7 +814,7 @@ func TestMarkForDelete_SeriesCleanup(t *testing.T) {
expiry: []chunkExpiry{
{
isExpired: true,
- filterFunc: func(ts time.Time, s string, _ ...labels.Label) bool {
+ filterFunc: func(ts time.Time, _ string, _ ...labels.Label) bool {
return ts.UnixNano() < todaysTableInterval.Start.UnixNano()
},
},
@@ -840,7 +840,7 @@ func TestMarkForDelete_SeriesCleanup(t *testing.T) {
expiry: []chunkExpiry{
{
isExpired: true,
- filterFunc: func(ts time.Time, s string, _ ...labels.Label) bool {
+ filterFunc: func(ts time.Time, _ string, _ ...labels.Label) bool {
return ts.UnixNano() < todaysTableInterval.Start.Add(-30*time.Minute).UnixNano()
},
},
diff --git a/pkg/compactor/table.go b/pkg/compactor/table.go
index c371a5db88f59..8be8190c0ac06 100644
--- a/pkg/compactor/table.go
+++ b/pkg/compactor/table.go
@@ -198,7 +198,7 @@ func (t *table) done() error {
userIDs = append(userIDs, userID)
}
- err := concurrency.ForEachJob(t.ctx, len(userIDs), t.uploadConcurrency, func(ctx context.Context, idx int) error {
+ err := concurrency.ForEachJob(t.ctx, len(userIDs), t.uploadConcurrency, func(_ context.Context, idx int) error {
return t.indexSets[userIDs[idx]].done()
})
if err != nil {
diff --git a/pkg/compactor/table_test.go b/pkg/compactor/table_test.go
index 462511eca4782..b4f71bbc93956 100644
--- a/pkg/compactor/table_test.go
+++ b/pkg/compactor/table_test.go
@@ -305,7 +305,7 @@ func TestTable_CompactionRetention(t *testing.T) {
_, err := os.ReadDir(filepath.Join(storagePath, tableName))
require.True(t, os.IsNotExist(err))
},
- tableMarker: TableMarkerFunc(func(ctx context.Context, tableName, userID string, indexFile retention.IndexProcessor, logger log.Logger) (bool, bool, error) {
+ tableMarker: TableMarkerFunc(func(_ context.Context, _, _ string, _ retention.IndexProcessor, _ log.Logger) (bool, bool, error) {
return true, true, nil
}),
},
@@ -325,7 +325,7 @@ func TestTable_CompactionRetention(t *testing.T) {
require.True(t, strings.HasSuffix(filename, ".gz"))
})
},
- tableMarker: TableMarkerFunc(func(ctx context.Context, tableName, userID string, indexFile retention.IndexProcessor, logger log.Logger) (bool, bool, error) {
+ tableMarker: TableMarkerFunc(func(_ context.Context, _, _ string, _ retention.IndexProcessor, _ log.Logger) (bool, bool, error) {
return false, true, nil
}),
},
@@ -345,7 +345,7 @@ func TestTable_CompactionRetention(t *testing.T) {
require.True(t, strings.HasSuffix(filename, ".gz"))
})
},
- tableMarker: TableMarkerFunc(func(ctx context.Context, tableName, userID string, indexFile retention.IndexProcessor, logger log.Logger) (bool, bool, error) {
+ tableMarker: TableMarkerFunc(func(_ context.Context, _, _ string, _ retention.IndexProcessor, _ log.Logger) (bool, bool, error) {
return false, false, nil
}),
},
@@ -377,7 +377,7 @@ func TestTable_CompactionRetention(t *testing.T) {
table, err := newTable(context.Background(), tableWorkingDirectory, storage.NewIndexStorageClient(objectClient, ""),
newTestIndexCompactor(), config.PeriodConfig{},
- tt.tableMarker, IntervalMayHaveExpiredChunksFunc(func(interval model.Interval, userID string) bool {
+ tt.tableMarker, IntervalMayHaveExpiredChunksFunc(func(_ model.Interval, _ string) bool {
return true
}), 10)
require.NoError(t, err)
diff --git a/pkg/configs/client/client.go b/pkg/configs/client/client.go
index 5592fbe1b83dc..44af1bda4f504 100644
--- a/pkg/configs/client/client.go
+++ b/pkg/configs/client/client.go
@@ -96,7 +96,7 @@ func (c ConfigDBClient) GetRules(ctx context.Context, since userconfig.ID) (map[
}
endpoint := fmt.Sprintf("%s/private/api/prom/configs/rules%s", c.URL.String(), suffix)
var response *ConfigsResponse
- err := instrument.CollectedRequest(ctx, "GetRules", configsRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "GetRules", configsRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var err error
response, err = doRequest(endpoint, c.Timeout, c.TLSConfig, since)
return err
@@ -122,7 +122,7 @@ func (c ConfigDBClient) GetAlerts(ctx context.Context, since userconfig.ID) (*Co
}
endpoint := fmt.Sprintf("%s/private/api/prom/configs/alertmanager%s", c.URL.String(), suffix)
var response *ConfigsResponse
- err := instrument.CollectedRequest(ctx, "GetAlerts", configsRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "GetAlerts", configsRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var err error
response, err = doRequest(endpoint, c.Timeout, c.TLSConfig, since)
return err
diff --git a/pkg/configs/client/configs_test.go b/pkg/configs/client/configs_test.go
index 311c33ca91ad9..64f4b98d202e0 100644
--- a/pkg/configs/client/configs_test.go
+++ b/pkg/configs/client/configs_test.go
@@ -28,7 +28,7 @@ var response = `{
`
func TestDoRequest(t *testing.T) {
- server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
_, err := w.Write([]byte(response))
require.NoError(t, err)
}))
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index 0a4dfe6d146a2..f6ae454e1482a 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -445,7 +445,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
var validationErr error
if validationErrors.Err() != nil {
- validationErr = httpgrpc.Errorf(http.StatusBadRequest, validationErrors.Error())
+ validationErr = httpgrpc.Errorf(http.StatusBadRequest, "%s", validationErrors.Error())
}
// Return early if none of the streams contained entries
@@ -456,8 +456,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
now := time.Now()
if block, until, retStatusCode := d.validator.ShouldBlockIngestion(validationContext, now); block {
- validation.DiscardedSamples.WithLabelValues(validation.BlockedIngestion, tenantID).Add(float64(validatedLineCount))
- validation.DiscardedBytes.WithLabelValues(validation.BlockedIngestion, tenantID).Add(float64(validatedLineSize))
+ d.trackDiscardedData(ctx, req, validationContext, tenantID, validatedLineCount, validatedLineSize, validation.BlockedIngestion)
err = fmt.Errorf(validation.BlockedIngestionErrorMsg, tenantID, until.Format(time.RFC3339), retStatusCode)
d.writeFailuresManager.Log(tenantID, err)
@@ -468,35 +467,16 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
return &logproto.PushResponse{}, nil
}
- return nil, httpgrpc.Errorf(retStatusCode, err.Error())
+ return nil, httpgrpc.Errorf(retStatusCode, "%s", err.Error())
}
if !d.ingestionRateLimiter.AllowN(now, tenantID, validatedLineSize) {
- // Return a 429 to indicate to the client they are being rate limited
- validation.DiscardedSamples.WithLabelValues(validation.RateLimited, tenantID).Add(float64(validatedLineCount))
- validation.DiscardedBytes.WithLabelValues(validation.RateLimited, tenantID).Add(float64(validatedLineSize))
-
- if d.usageTracker != nil {
- for _, stream := range req.Streams {
- lbs, _, _, err := d.parseStreamLabels(validationContext, stream.Labels, stream)
- if err != nil {
- continue
- }
-
- discardedStreamBytes := 0
- for _, e := range stream.Entries {
- discardedStreamBytes += len(e.Line)
- }
-
- if d.usageTracker != nil {
- d.usageTracker.DiscardedBytesAdd(ctx, tenantID, validation.RateLimited, lbs, float64(discardedStreamBytes))
- }
- }
- }
+ d.trackDiscardedData(ctx, req, validationContext, tenantID, validatedLineCount, validatedLineSize, validation.RateLimited)
err = fmt.Errorf(validation.RateLimitedErrorMsg, tenantID, int(d.ingestionRateLimiter.Limit(now, tenantID)), validatedLineCount, validatedLineSize)
d.writeFailuresManager.Log(tenantID, err)
- return nil, httpgrpc.Errorf(http.StatusTooManyRequests, err.Error())
+ // Return a 429 to indicate to the client they are being rate limited
+ return nil, httpgrpc.Errorf(http.StatusTooManyRequests, "%s", err.Error())
}
// Nil check for performance reasons, to avoid dynamic lookup and/or no-op
@@ -569,6 +549,37 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
}
}
+func (d *Distributor) trackDiscardedData(
+ ctx context.Context,
+ req *logproto.PushRequest,
+ validationContext validationContext,
+ tenantID string,
+ validatedLineCount int,
+ validatedLineSize int,
+ reason string,
+) {
+ validation.DiscardedSamples.WithLabelValues(reason, tenantID).Add(float64(validatedLineCount))
+ validation.DiscardedBytes.WithLabelValues(reason, tenantID).Add(float64(validatedLineSize))
+
+ if d.usageTracker != nil {
+ for _, stream := range req.Streams {
+ lbs, _, _, err := d.parseStreamLabels(validationContext, stream.Labels, stream)
+ if err != nil {
+ continue
+ }
+
+ discardedStreamBytes := 0
+ for _, e := range stream.Entries {
+ discardedStreamBytes += len(e.Line)
+ }
+
+ if d.usageTracker != nil {
+ d.usageTracker.DiscardedBytesAdd(ctx, tenantID, reason, lbs, float64(discardedStreamBytes))
+ }
+ }
+ }
+}
+
func hasAnyLevelLabels(l labels.Labels) (string, bool) {
for lbl := range allowedLabelsForLevel {
if l.Has(lbl) {
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index 3335f64b523b7..2e8f7b895e0f9 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -124,7 +124,7 @@ func TestDistributor(t *testing.T) {
if len(tc.expectedErrors) > 0 {
for _, expectedError := range tc.expectedErrors {
if len(tc.expectedErrors) == 1 {
- assert.Equal(t, err, expectedError)
+ assert.Equal(t, expectedError, err)
} else {
assert.Contains(t, err.Error(), expectedError.Error())
}
@@ -404,7 +404,7 @@ func Test_IncrementTimestamp(t *testing.T) {
t.Run(testName, func(t *testing.T) {
ing := &mockIngester{}
- distributors, _ := prepare(t, 1, 3, testData.limits, func(addr string) (ring_client.PoolClient, error) { return ing, nil })
+ distributors, _ := prepare(t, 1, 3, testData.limits, func(_ string) (ring_client.PoolClient, error) { return ing, nil })
_, err := distributors[0].Push(ctx, testData.push)
assert.NoError(t, err)
topVal := ing.Peek()
@@ -510,7 +510,7 @@ func Test_SortLabelsOnPush(t *testing.T) {
limits := &validation.Limits{}
flagext.DefaultValues(limits)
ingester := &mockIngester{}
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
request := makeWriteRequest(10, 10)
request.Streams[0].Labels = `{buzz="f", service_name="foo", a="b"}`
@@ -533,7 +533,7 @@ func Test_TruncateLogLines(t *testing.T) {
t.Run("it truncates lines to MaxLineSize when MaxLineSizeTruncate is true", func(t *testing.T) {
limits, ingester := setup()
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
_, err := distributors[0].Push(ctx, makeWriteRequest(1, 10))
require.NoError(t, err)
@@ -553,10 +553,10 @@ func Test_DiscardEmptyStreamsAfterValidation(t *testing.T) {
t.Run("it discards invalid entries and discards resulting empty streams completely", func(t *testing.T) {
limits, ingester := setup()
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
_, err := distributors[0].Push(ctx, makeWriteRequest(1, 10))
- require.Equal(t, err, httpgrpc.Errorf(http.StatusBadRequest, fmt.Sprintf(validation.LineTooLongErrorMsg, 5, "{foo=\"bar\"}", 10)))
+ require.Equal(t, err, httpgrpc.Errorf(http.StatusBadRequest, "%s", fmt.Sprintf(validation.LineTooLongErrorMsg, 5, "{foo=\"bar\"}", 10)))
topVal := ingester.Peek()
require.Nil(t, topVal)
})
@@ -1506,7 +1506,7 @@ func Test_DetectLogLevels(t *testing.T) {
t.Run("log level detection disabled", func(t *testing.T) {
limits, ingester := setup(false)
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
writeReq := makeWriteRequestWithLabels(1, 10, []string{`{foo="bar"}`})
_, err := distributors[0].Push(ctx, writeReq)
@@ -1518,7 +1518,7 @@ func Test_DetectLogLevels(t *testing.T) {
t.Run("log level detection enabled but level cannot be detected", func(t *testing.T) {
limits, ingester := setup(true)
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
writeReq := makeWriteRequestWithLabels(1, 10, []string{`{foo="bar"}`})
_, err := distributors[0].Push(ctx, writeReq)
@@ -1530,7 +1530,7 @@ func Test_DetectLogLevels(t *testing.T) {
t.Run("log level detection enabled and warn logs", func(t *testing.T) {
limits, ingester := setup(true)
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
writeReq := makeWriteRequestWithLabelsWithLevel(1, 10, []string{`{foo="bar"}`}, "warn")
_, err := distributors[0].Push(ctx, writeReq)
@@ -1547,7 +1547,7 @@ func Test_DetectLogLevels(t *testing.T) {
t.Run("log level detection enabled but log level already present in stream", func(t *testing.T) {
limits, ingester := setup(true)
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
writeReq := makeWriteRequestWithLabels(1, 10, []string{`{foo="bar", level="debug"}`})
_, err := distributors[0].Push(ctx, writeReq)
@@ -1562,7 +1562,7 @@ func Test_DetectLogLevels(t *testing.T) {
t.Run("log level detection enabled but log level already present as structured metadata", func(t *testing.T) {
limits, ingester := setup(true)
- distributors, _ := prepare(t, 1, 5, limits, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ distributors, _ := prepare(t, 1, 5, limits, func(_ string) (ring_client.PoolClient, error) { return ingester, nil })
writeReq := makeWriteRequestWithLabels(1, 10, []string{`{foo="bar"}`})
writeReq.Streams[0].Entries[0].StructuredMetadata = push.LabelsAdapter{
diff --git a/pkg/distributor/writefailures/manager_test.go b/pkg/distributor/writefailures/manager_test.go
index fb3d7577953a7..1618e3f048e9c 100644
--- a/pkg/distributor/writefailures/manager_test.go
+++ b/pkg/distributor/writefailures/manager_test.go
@@ -58,7 +58,7 @@ func TestWriteFailuresRateLimiting(t *testing.T) {
logger := log.NewLogfmtLogger(buf)
provider := &providerMock{
- tenantConfig: func(tenantID string) *runtime.Config {
+ tenantConfig: func(_ string) *runtime.Config {
return &runtime.Config{
LimitedLogPushErrors: true,
}
@@ -84,7 +84,7 @@ func TestWriteFailuresRateLimiting(t *testing.T) {
errorStr.WriteRune('z')
}
- manager.Log("known-tenant", fmt.Errorf(errorStr.String()))
+ manager.Log("known-tenant", fmt.Errorf("%s", errorStr.String()))
content := buf.String()
require.Empty(t, content)
@@ -98,7 +98,7 @@ func TestWriteFailuresRateLimiting(t *testing.T) {
errorStr.WriteRune('z')
}
- manager.Log("known-tenant", fmt.Errorf(errorStr.String()))
+ manager.Log("known-tenant", fmt.Errorf("%s", errorStr.String()))
content := buf.String()
require.NotEmpty(t, content)
@@ -117,10 +117,10 @@ func TestWriteFailuresRateLimiting(t *testing.T) {
errorStr2.WriteRune('y')
}
- manager.Log("known-tenant", fmt.Errorf(errorStr1.String()))
- manager.Log("known-tenant", fmt.Errorf(errorStr2.String())) // more than 1KB/s
+ manager.Log("known-tenant", fmt.Errorf("%s", errorStr1.String()))
+ manager.Log("known-tenant", fmt.Errorf("%s", errorStr2.String())) // more than 1KB/s
time.Sleep(time.Second)
- manager.Log("known-tenant", fmt.Errorf(errorStr3.String()))
+ manager.Log("known-tenant", fmt.Errorf("%s", errorStr3.String()))
content := buf.String()
require.NotEmpty(t, content)
diff --git a/pkg/indexgateway/client.go b/pkg/indexgateway/client.go
index b5e05b7e26ecd..e8c4c23c243c2 100644
--- a/pkg/indexgateway/client.go
+++ b/pkg/indexgateway/client.go
@@ -349,7 +349,7 @@ func (s *GatewayClient) GetShards(
return nil
},
- func(err error) bool {
+ func(_ error) bool {
errCt++
return errCt <= maxErrs
},
diff --git a/pkg/indexgateway/client_test.go b/pkg/indexgateway/client_test.go
index 03fdfbcbc1a3c..91005a591eb15 100644
--- a/pkg/indexgateway/client_test.go
+++ b/pkg/indexgateway/client_test.go
@@ -259,7 +259,7 @@ func TestGatewayClient(t *testing.T) {
}
numCallbacks := 0
- err = gatewayClient.QueryPages(ctx, queries, func(query index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
+ err = gatewayClient.QueryPages(ctx, queries, func(_ index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
itr := batch.Iterator()
for j := 0; j <= numCallbacks; j++ {
diff --git a/pkg/indexgateway/gateway.go b/pkg/indexgateway/gateway.go
index 052575647951e..745b114c08ac0 100644
--- a/pkg/indexgateway/gateway.go
+++ b/pkg/indexgateway/gateway.go
@@ -246,9 +246,10 @@ func (g *Gateway) GetChunkRef(ctx context.Context, req *logproto.GetChunkRefRequ
return result, nil
}
- // Extract LineFiltersExpr from the plan. If there is none, we can short-circuit and return before making a req
- // to the bloom-gateway (through the g.bloomQuerier)
- if len(v1.ExtractTestableLineFilters(req.Plan.AST)) == 0 {
+ // Extract testable LabelFilters from the plan. If there is none, we can
+ // short-circuit and return before making a req to the bloom-gateway (through
+ // the g.bloomQuerier)
+ if len(v1.ExtractTestableLabelMatchers(req.Plan.AST)) == 0 {
return result, nil
}
@@ -464,7 +465,7 @@ func (g *Gateway) boundedShards(
filtered := refs
// 2) filter via blooms if enabled
- filters := syntax.ExtractLineFilters(p.Plan().AST)
+ filters := v1.ExtractTestableLabelMatchers(p.Plan().AST)
if g.bloomQuerier != nil && len(filters) > 0 {
xs, err := g.bloomQuerier.FilterChunkRefs(ctx, instanceID, req.From, req.Through, refs, p.Plan())
if err != nil {
diff --git a/pkg/indexgateway/gateway_test.go b/pkg/indexgateway/gateway_test.go
index cf5cd7256486e..9396e865da71b 100644
--- a/pkg/indexgateway/gateway_test.go
+++ b/pkg/indexgateway/gateway_test.go
@@ -446,7 +446,7 @@ func TestAccumulateChunksToShards(t *testing.T) {
fsImpl := func(series [][]refWithSizingInfo) sharding.ForSeriesFunc {
return sharding.ForSeriesFunc(
func(
- ctx context.Context,
+ _ context.Context,
_ string,
_ tsdb_index.FingerprintFilter,
_, _ model.Time,
@@ -454,7 +454,7 @@ func TestAccumulateChunksToShards(t *testing.T) {
_ labels.Labels,
fp model.Fingerprint,
chks []tsdb_index.ChunkMeta,
- ) (stop bool), matchers ...*labels.Matcher) error {
+ ) (stop bool), _ ...*labels.Matcher) error {
for _, s := range series {
chks := []tsdb_index.ChunkMeta{}
diff --git a/pkg/ingester-kafka/kafka/kafka_tee.go b/pkg/ingester-kafka/kafka/kafka_tee.go
new file mode 100644
index 0000000000000..6aeaad9724e68
--- /dev/null
+++ b/pkg/ingester-kafka/kafka/kafka_tee.go
@@ -0,0 +1,209 @@
+package kafka
+
+import (
+ "context"
+ "errors"
+ "flag"
+ "fmt"
+ "math"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/user"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/twmb/franz-go/pkg/kgo"
+
+ "github.com/twmb/franz-go/plugin/kprom"
+
+ "github.com/grafana/loki/v3/pkg/distributor"
+ "github.com/grafana/loki/v3/pkg/logproto"
+)
+
+const writeTimeout = time.Minute
+
+type Config struct {
+ Address string `yaml:"address" docs:"the kafka endpoint to connect to"`
+ Topic string `yaml:"topic" docs:"the kafka topic to write to"`
+}
+
+func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
+ cfg.RegisterFlagsWithPrefix("", f)
+}
+
+func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
+ f.StringVar(&cfg.Address, prefix+"address", "localhost:9092", "the kafka endpoint to connect to")
+ f.StringVar(&cfg.Topic, prefix+".topic", "loki.push", "The Kafka topic name.")
+}
+
+type Tee struct {
+ logger log.Logger
+ kafkaClient *kgo.Client
+ partitionRing *ring.PartitionInstanceRing
+
+ ingesterAppends *prometheus.CounterVec
+}
+
+func NewTee(
+ cfg Config,
+ metricsNamespace string,
+ registerer prometheus.Registerer,
+ logger log.Logger,
+ partitionRing *ring.PartitionInstanceRing,
+) (*Tee, error) {
+ registerer = prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer)
+
+ metrics := kprom.NewMetrics(
+ "", // No prefix. We expect the input prometheus.Registered to be wrapped with a prefix.
+ kprom.Registerer(registerer),
+ kprom.FetchAndProduceDetail(kprom.Batches, kprom.Records, kprom.CompressedBytes, kprom.UncompressedBytes))
+
+ opts := append([]kgo.Opt{},
+ kgo.SeedBrokers(cfg.Address),
+
+ kgo.WithHooks(metrics),
+ // commonKafkaClientOptions(kafkaCfg, metrics, logger),
+ kgo.RequiredAcks(kgo.AllISRAcks()),
+ kgo.DefaultProduceTopic(cfg.Topic),
+
+ kgo.AllowAutoTopicCreation(),
+ // We set the partition field in each record.
+ kgo.RecordPartitioner(kgo.ManualPartitioner()),
+
+ // Set the upper bounds the size of a record batch.
+ kgo.ProducerBatchMaxBytes(1024*1024*1),
+
+ // By default, the Kafka client allows 1 Produce in-flight request per broker. Disabling write idempotency
+ // (which we don't need), we can increase the max number of in-flight Produce requests per broker. A higher
+ // number of in-flight requests, in addition to short buffering ("linger") in client side before firing the
+ // next Produce request allows us to reduce the end-to-end latency.
+ //
+ // The result of the multiplication of producer linger and max in-flight requests should match the maximum
+ // Produce latency expected by the Kafka backend in a steady state. For example, 50ms * 20 requests = 1s,
+ // which means the Kafka client will keep issuing a Produce request every 50ms as far as the Kafka backend
+ // doesn't take longer than 1s to process them (if it takes longer, the client will buffer data and stop
+ // issuing new Produce requests until some previous ones complete).
+ kgo.DisableIdempotentWrite(),
+ kgo.ProducerLinger(50*time.Millisecond),
+ kgo.MaxProduceRequestsInflightPerBroker(20),
+
+ // Unlimited number of Produce retries but a deadline on the max time a record can take to be delivered.
+ // With the default config it would retry infinitely.
+ //
+ // Details of the involved timeouts:
+ // - RecordDeliveryTimeout: how long a Kafka client Produce() call can take for a given record. The overhead
+ // timeout is NOT applied.
+ // - ProduceRequestTimeout: how long to wait for the response to the Produce request (the Kafka protocol message)
+ // after being sent on the network. The actual timeout is increased by the configured overhead.
+ //
+ // When a Produce request to Kafka fail, the client will retry up until the RecordDeliveryTimeout is reached.
+ // Once the timeout is reached, the Produce request will fail and all other buffered requests in the client
+ // (for the same partition) will fail too. See kgo.RecordDeliveryTimeout() documentation for more info.
+ kgo.RecordRetries(math.MaxInt),
+ kgo.RecordDeliveryTimeout(time.Minute),
+ kgo.ProduceRequestTimeout(time.Minute),
+ kgo.RequestTimeoutOverhead(time.Minute),
+
+ // Unlimited number of buffered records because we limit on bytes in Writer. The reason why we don't use
+ // kgo.MaxBufferedBytes() is because it suffers a deadlock issue:
+ // https://github.com/twmb/franz-go/issues/777
+ kgo.MaxBufferedRecords(math.MaxInt), // Use a high value to set it as unlimited, because the client doesn't support "0 as unlimited".
+ kgo.MaxBufferedBytes(0),
+ )
+
+ kafkaClient, err := kgo.NewClient(opts...)
+ if err != nil {
+ panic("failed to start kafka client")
+ }
+
+ t := &Tee{
+ logger: log.With(logger, "component", "kafka-tee"),
+ ingesterAppends: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
+ Name: "kafka_ingester_appends_total",
+ Help: "The total number of appends sent to kafka ingest path.",
+ }, []string{"partition", "status"}),
+ kafkaClient: kafkaClient,
+ partitionRing: partitionRing,
+ }
+
+ return t, nil
+}
+
+// Duplicate Implements distributor.Tee which is used to tee distributor requests to pattern ingesters.
+func (t *Tee) Duplicate(tenant string, streams []distributor.KeyedStream) {
+ for idx := range streams {
+ go func(stream distributor.KeyedStream) {
+ if err := t.sendStream(tenant, stream); err != nil {
+ level.Error(t.logger).Log("msg", "failed to send stream to kafka", "err", err)
+ }
+ }(streams[idx])
+ }
+}
+
+func (t *Tee) sendStream(tenant string, stream distributor.KeyedStream) error {
+ partitionID, err := t.partitionRing.PartitionRing().ActivePartitionForKey(stream.HashKey)
+ if err != nil {
+ t.ingesterAppends.WithLabelValues("partition_unknown", "fail").Inc()
+ return fmt.Errorf("failed to find active partition for stream: %w", err)
+ }
+ records, err := marshalWriteRequestToRecords(partitionID, tenant, stream.Stream, 1024*1024)
+
+ ctx, cancel := context.WithTimeout(user.InjectOrgID(context.Background(), tenant), writeTimeout)
+ defer cancel()
+ produceResults := t.kafkaClient.ProduceSync(ctx, records...)
+
+ var finalErr error
+ for _, result := range produceResults {
+ if result.Err != nil {
+ t.ingesterAppends.WithLabelValues(fmt.Sprintf("partition_%d", partitionID), "fail").Inc()
+ finalErr = err
+ } else {
+ t.ingesterAppends.WithLabelValues(fmt.Sprintf("partition_%d", partitionID), "success").Inc()
+ }
+ }
+
+ return finalErr
+}
+
+// marshalWriteRequestToRecords marshals a mimirpb.WriteRequest to one or more Kafka records.
+// The request may be split to multiple records to get that each single Kafka record
+// data size is not bigger than maxSize.
+//
+// This function is a best-effort. The returned Kafka records are not strictly guaranteed to
+// have their data size limited to maxSize. The reason is that the WriteRequest is split
+// by each individual Timeseries and Metadata: if a single Timeseries or Metadata is bigger than
+// maxSize, than the resulting record will be bigger than the limit as well.
+func marshalWriteRequestToRecords(partitionID int32, tenantID string, stream logproto.Stream, maxSize int) ([]*kgo.Record, error) {
+ reqSize := stream.Size()
+
+ if reqSize <= maxSize {
+ // No need to split the request. We can take a fast path.
+ rec, err := marshalWriteRequestToRecord(partitionID, tenantID, stream, reqSize)
+ if err != nil {
+ return nil, err
+ }
+
+ return []*kgo.Record{rec}, nil
+ }
+ return nil, errors.New("large write requests are not supported yet")
+
+ // return marshalWriteRequestsToRecords(partitionID, tenantID, mimirpb.SplitWriteRequestByMaxMarshalSize(req, reqSize, maxSize))
+}
+
+func marshalWriteRequestToRecord(partitionID int32, tenantID string, stream logproto.Stream, reqSize int) (*kgo.Record, error) {
+ // Marshal the request.
+ data := make([]byte, reqSize)
+ n, err := stream.MarshalToSizedBuffer(data)
+ if err != nil {
+ return nil, fmt.Errorf("failed to serialise write request: %w", err)
+ }
+ data = data[:n]
+
+ return &kgo.Record{
+ Key: []byte(tenantID), // We don't partition based on the key, so the value here doesn't make any difference.
+ Value: data,
+ Partition: partitionID,
+ }, nil
+}
diff --git a/pkg/ingester-rf1/flush.go b/pkg/ingester-rf1/flush.go
index 2d194a12f5574..f2710695d7045 100644
--- a/pkg/ingester-rf1/flush.go
+++ b/pkg/ingester-rf1/flush.go
@@ -114,7 +114,7 @@ func (i *Ingester) flushSegment(ctx context.Context, j int, w *wal.SegmentWriter
wal.ReportSegmentStats(stats, i.metrics.segmentMetrics)
id := ulid.MustNew(ulid.Timestamp(time.Now()), rand.Reader).String()
- if err := i.store.PutObject(ctx, fmt.Sprintf(wal.Dir+id), buf); err != nil {
+ if err := i.store.PutObject(ctx, wal.Dir+id, buf); err != nil {
i.metrics.flushFailuresTotal.Inc()
return fmt.Errorf("failed to put object: %w", err)
}
diff --git a/pkg/ingester-rf1/ingester.go b/pkg/ingester-rf1/ingester.go
index 8ee0d0e8928b3..583aa6494e77c 100644
--- a/pkg/ingester-rf1/ingester.go
+++ b/pkg/ingester-rf1/ingester.go
@@ -213,9 +213,7 @@ type Ingester struct {
customStreamsTracker push.UsageTracker
- // recalculateOwnedStreams periodically checks the ring for changes and recalculates owned streams for each instance.
readRing ring.ReadRing
- // recalculateOwnedStreams *recalculateOwnedStreams
}
// New makes a new Ingester.
@@ -399,11 +397,6 @@ func (i *Ingester) starting(ctx context.Context) error {
// return fmt.Errorf("can not start recalculate owned streams service: %w", err)
//}
- err = i.lifecycler.AwaitRunning(ctx)
- if err != nil {
- return fmt.Errorf("can not ensure recalculate owned streams service is running: %w", err)
- }
-
go i.periodicStreamMaintenance()
return nil
}
diff --git a/pkg/ingester-rf1/instance.go b/pkg/ingester-rf1/instance.go
index e05e99ba8b2f6..0444475f7a6bf 100644
--- a/pkg/ingester-rf1/instance.go
+++ b/pkg/ingester-rf1/instance.go
@@ -98,7 +98,7 @@ func (i *instance) Push(ctx context.Context, w *wal.Manager, req *logproto.PushR
s, err := i.createStream(ctx, reqStream)
return s, err
},
- func(s *stream) error {
+ func(_ *stream) error {
return nil
},
)
@@ -185,7 +185,7 @@ func (i *instance) createStream(ctx context.Context, pushReqStream logproto.Stre
"stream", pushReqStream.Labels,
)
}
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
if err != nil {
diff --git a/pkg/ingester-rf1/metastore/metastore.go b/pkg/ingester-rf1/metastore/metastore.go
index 6d999b8edd518..282114c683589 100644
--- a/pkg/ingester-rf1/metastore/metastore.go
+++ b/pkg/ingester-rf1/metastore/metastore.go
@@ -12,6 +12,7 @@ import (
"sync"
"time"
+ "github.com/coder/quartz"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/dskit/flagext"
@@ -88,6 +89,9 @@ type Metastore struct {
done chan struct{}
wg sync.WaitGroup
+
+ // Used in tests.
+ clock quartz.Clock
}
func New(config Config, logger log.Logger, reg prometheus.Registerer, hs health.Service) (*Metastore, error) {
@@ -97,6 +101,7 @@ func New(config Config, logger log.Logger, reg prometheus.Registerer, hs health.
reg: reg,
db: newDB(config, logger),
done: make(chan struct{}),
+ clock: quartz.NewReal(),
}
m.leaderhealth = raftleader.NewRaftLeaderHealthObserver(hs, logger)
m.state = newMetastoreState(logger, m.db)
diff --git a/pkg/ingester-rf1/metastore/metastore_hack.go b/pkg/ingester-rf1/metastore/metastore_hack.go
index faef242d0217f..ffcbefa6127ab 100644
--- a/pkg/ingester-rf1/metastore/metastore_hack.go
+++ b/pkg/ingester-rf1/metastore/metastore_hack.go
@@ -29,7 +29,7 @@ func (m *Metastore) cleanupLoop() {
if m.raft.State() != raft.Leader {
continue
}
- timestamp := uint64(time.Now().Add(-1 * time.Hour).UnixMilli())
+ timestamp := uint64(m.clock.Now().Add(-1 * time.Hour).UnixMilli())
req := &raftlogpb.TruncateCommand{Timestamp: timestamp}
_, _, err := applyCommand[*raftlogpb.TruncateCommand, *anypb.Any](m.raft, req, m.config.Raft.ApplyTimeout)
if err != nil {
diff --git a/pkg/ingester-rf1/objstore/storage.go b/pkg/ingester-rf1/objstore/storage.go
index beb544e1980aa..ec0d734b316b7 100644
--- a/pkg/ingester-rf1/objstore/storage.go
+++ b/pkg/ingester-rf1/objstore/storage.go
@@ -6,6 +6,7 @@ import (
"io"
"sort"
+ "github.com/opentracing/opentracing-go"
"github.com/prometheus/common/model"
"github.com/grafana/loki/v3/pkg/storage"
@@ -36,7 +37,7 @@ func New(
}
// sort by From time
sort.Slice(periodicConfigs, func(i, j int) bool {
- return periodicConfigs[i].From.Time.Before(periodicConfigs[i].From.Time)
+ return periodicConfigs[i].From.Time.Before(periodicConfigs[j].From.Time)
})
for _, periodicConfig := range periodicConfigs {
objectClient, err := storage.NewObjectClient(periodicConfig.ObjectType, storageConfig, clientMetrics)
@@ -94,6 +95,11 @@ func (m *Multi) GetObject(ctx context.Context, objectKey string) (io.ReadCloser,
}
func (m *Multi) GetObjectRange(ctx context.Context, objectKey string, off, length int64) (io.ReadCloser, error) {
+ sp, _ := opentracing.StartSpanFromContext(ctx, "GetObjectRange")
+ if sp != nil {
+ sp.LogKV("objectKey", objectKey, "off", off, "length", length)
+ }
+ defer sp.Finish()
s, err := m.GetStoreFor(model.Now())
if err != nil {
return nil, err
diff --git a/pkg/ingester-rf1/objstore/test_storage.go b/pkg/ingester-rf1/objstore/test_storage.go
new file mode 100644
index 0000000000000..db25c8487cca9
--- /dev/null
+++ b/pkg/ingester-rf1/objstore/test_storage.go
@@ -0,0 +1,37 @@
+package objstore
+
+import (
+ "os"
+ "testing"
+
+ "github.com/prometheus/common/model"
+
+ "github.com/grafana/loki/v3/pkg/storage"
+ "github.com/grafana/loki/v3/pkg/storage/chunk/client/local"
+ "github.com/grafana/loki/v3/pkg/storage/config"
+)
+
+var metrics *storage.ClientMetrics
+
+func NewTestStorage(t testing.TB) (*Multi, error) {
+ if metrics == nil {
+ m := storage.NewClientMetrics()
+ metrics = &m
+ }
+ dir := t.TempDir()
+ t.Cleanup(func() {
+ os.RemoveAll(dir)
+ metrics.Unregister()
+ })
+ cfg := storage.Config{
+ FSConfig: local.FSConfig{
+ Directory: dir,
+ },
+ }
+ return New([]config.PeriodConfig{
+ {
+ From: config.DayTime{Time: model.Now()},
+ ObjectType: "filesystem",
+ },
+ }, cfg, *metrics)
+}
diff --git a/pkg/ingester-rf1/stream.go b/pkg/ingester-rf1/stream.go
index 32ccf454c41c4..8913e206a7c2a 100644
--- a/pkg/ingester-rf1/stream.go
+++ b/pkg/ingester-rf1/stream.go
@@ -176,7 +176,7 @@ func errorForFailedEntries(s *stream, failedEntriesWithError []entryWithError, t
fmt.Fprintf(&buf, "user '%s', total ignored: %d out of %d for stream: %s", s.tenant, len(failedEntriesWithError), totalEntries, streamName)
- return httpgrpc.Errorf(statusCode, buf.String())
+ return httpgrpc.Errorf(statusCode, "%s", buf.String())
}
func hasRateLimitErr(errs []entryWithError) bool {
diff --git a/pkg/ingester/ingester_test.go b/pkg/ingester/ingester_test.go
index f201da437e4ea..17d34b57dc549 100644
--- a/pkg/ingester/ingester_test.go
+++ b/pkg/ingester/ingester_test.go
@@ -1434,7 +1434,7 @@ func createIngesterServer(t *testing.T, ingesterConfig Config) (ingesterClient,
}()
// nolint:staticcheck // grpc.DialContext() has been deprecated; we'll address it before upgrading to gRPC 2.
- conn, err := grpc.DialContext(context.Background(), "", grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithContextDialer(func(ctx context.Context, s string) (net.Conn, error) {
+ conn, err := grpc.DialContext(context.Background(), "", grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithContextDialer(func(_ context.Context, _ string) (net.Conn, error) {
return listener.Dial()
}))
require.NoError(t, err)
diff --git a/pkg/ingester/instance.go b/pkg/ingester/instance.go
index aeddd5e9f0f13..e2fd472656a9f 100644
--- a/pkg/ingester/instance.go
+++ b/pkg/ingester/instance.go
@@ -289,7 +289,7 @@ func (i *instance) createStream(ctx context.Context, pushReqStream logproto.Stre
"stream", pushReqStream.Labels,
)
}
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
if record != nil {
diff --git a/pkg/ingester/stream.go b/pkg/ingester/stream.go
index 0754de9caf0fa..fe4a644c71109 100644
--- a/pkg/ingester/stream.go
+++ b/pkg/ingester/stream.go
@@ -268,7 +268,7 @@ func errorForFailedEntries(s *stream, failedEntriesWithError []entryWithError, t
fmt.Fprintf(&buf, "user '%s', total ignored: %d out of %d for stream: %s", s.tenant, len(failedEntriesWithError), totalEntries, streamName)
- return httpgrpc.Errorf(statusCode, buf.String())
+ return httpgrpc.Errorf(statusCode, "%s", buf.String())
}
func hasRateLimitErr(errs []entryWithError) bool {
diff --git a/pkg/ingester/stream_test.go b/pkg/ingester/stream_test.go
index 3bbd091b25c5c..6dbd521f1abc7 100644
--- a/pkg/ingester/stream_test.go
+++ b/pkg/ingester/stream_test.go
@@ -101,7 +101,7 @@ func TestMaxReturnedStreamsErrors(t *testing.T) {
}
fmt.Fprintf(&expected, "user 'fake', total ignored: %d out of %d for stream: {foo=\"bar\"}", numLogs, numLogs)
- expectErr := httpgrpc.Errorf(http.StatusBadRequest, expected.String())
+ expectErr := httpgrpc.Errorf(http.StatusBadRequest, "%s", expected.String())
_, err = s.Push(context.Background(), newLines, recordPool.GetRecord(), 0, true, false, nil)
require.Error(t, err)
diff --git a/pkg/iter/entry_iterator.go b/pkg/iter/entry_iterator.go
index 58e0ab929e7f4..60bf032cab91e 100644
--- a/pkg/iter/entry_iterator.go
+++ b/pkg/iter/entry_iterator.go
@@ -53,16 +53,17 @@ func (i *streamIterator) Close() error {
return nil
}
-// HeapIterator iterates over a heap of iterators with ability to push new iterators and get some properties like time of entry at peek and len
-// Not safe for concurrent use
-type HeapIterator interface {
+// MergeEntryIterator exposes additional fields that are used by the Tailer only.
+// Not safe for concurrent use!
+type MergeEntryIterator interface {
EntryIterator
+
Peek() time.Time
IsEmpty() bool
Push(EntryIterator)
}
-// mergeEntryIterator iterates over a heap of iterators and merge duplicate entries.
+// mergeEntryIterator implements the MergeEntryIterator interface functions.
type mergeEntryIterator struct {
tree *loser.Tree[sortFields, EntryIterator]
stats *stats.Context
@@ -74,11 +75,11 @@ type mergeEntryIterator struct {
errs []error
}
-// NewMergeEntryIterator returns a new iterator which uses a heap to merge together entries for multiple iterators and deduplicate entries if any.
+// NewMergeEntryIterator returns a new iterator which uses a looser tree to merge together entries for multiple iterators and deduplicate entries if any.
// The iterator only order and merge entries across given `is` iterators, it does not merge entries within individual iterator.
// This means using this iterator with a single iterator will result in the same result as the input iterator.
// If you don't need to deduplicate entries, use `NewSortEntryIterator` instead.
-func NewMergeEntryIterator(ctx context.Context, is []EntryIterator, direction logproto.Direction) HeapIterator {
+func NewMergeEntryIterator(ctx context.Context, is []EntryIterator, direction logproto.Direction) MergeEntryIterator {
maxVal, less := treeLess(direction)
result := &mergeEntryIterator{stats: stats.FromContext(ctx)}
result.tree = loser.New(is, maxVal, sortFieldsAt, less, result.closeEntry)
diff --git a/pkg/iter/entry_iterator_test.go b/pkg/iter/entry_iterator_test.go
index e49ecf3ee528a..fb1548ddc35d0 100644
--- a/pkg/iter/entry_iterator_test.go
+++ b/pkg/iter/entry_iterator_test.go
@@ -162,16 +162,16 @@ func TestIteratorMultipleLabels(t *testing.T) {
func TestMergeIteratorPrefetch(t *testing.T) {
t.Parallel()
- type tester func(t *testing.T, i HeapIterator)
+ type tester func(t *testing.T, i MergeEntryIterator)
tests := map[string]tester{
- "prefetch on IsEmpty() when called as first method": func(t *testing.T, i HeapIterator) {
+ "prefetch on IsEmpty() when called as first method": func(t *testing.T, i MergeEntryIterator) {
assert.Equal(t, false, i.IsEmpty())
},
- "prefetch on Peek() when called as first method": func(t *testing.T, i HeapIterator) {
+ "prefetch on Peek() when called as first method": func(t *testing.T, i MergeEntryIterator) {
assert.Equal(t, time.Unix(0, 0), i.Peek())
},
- "prefetch on Next() when called as first method": func(t *testing.T, i HeapIterator) {
+ "prefetch on Next() when called as first method": func(t *testing.T, i MergeEntryIterator) {
assert.True(t, i.Next())
assert.Equal(t, logproto.Entry{Timestamp: time.Unix(0, 0), Line: "0"}, i.At())
},
diff --git a/pkg/iter/v2/ordering_test.go b/pkg/iter/v2/ordering_test.go
index 6a2e81abae014..fb29cf888a383 100644
--- a/pkg/iter/v2/ordering_test.go
+++ b/pkg/iter/v2/ordering_test.go
@@ -84,7 +84,7 @@ func TestOrdering(t *testing.T) {
return o.Unwrap()
})
- EqualIterators[int](t, func(a, b int) {}, NewSliceIter(tc.expected), unmap)
+ EqualIterators[int](t, func(_, _ int) {}, NewSliceIter(tc.expected), unmap)
})
}
}
diff --git a/pkg/kafka/config.go b/pkg/kafka/config.go
new file mode 100644
index 0000000000000..f916b145f0084
--- /dev/null
+++ b/pkg/kafka/config.go
@@ -0,0 +1,109 @@
+package kafka
+
+import (
+ "errors"
+ "flag"
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+)
+
+const (
+ consumeFromLastOffset = "last-offset"
+ consumeFromStart = "start"
+ consumeFromEnd = "end"
+ consumeFromTimestamp = "timestamp"
+
+ // writerRequestTimeoutOverhead is the overhead applied by the Writer to every Kafka timeout.
+ // You can think about this overhead as an extra time for requests sitting in the client's buffer
+ // before being sent on the wire and the actual time it takes to send it over the network and
+ // start being processed by Kafka.
+ writerRequestTimeoutOverhead = 2 * time.Second
+
+ // producerBatchMaxBytes is the max allowed size of a batch of Kafka records.
+ producerBatchMaxBytes = 16_000_000
+
+ // maxProducerRecordDataBytesLimit is the max allowed size of a single record data. Given we have a limit
+ // on the max batch size (producerBatchMaxBytes), a Kafka record data can't be bigger than the batch size
+ // minus some overhead required to serialise the batch and the record itself. We use 16KB as such overhead
+ // in the worst case scenario, which is expected to be way above the actual one.
+ maxProducerRecordDataBytesLimit = producerBatchMaxBytes - 16384
+ minProducerRecordDataBytesLimit = 1024 * 1024
+
+ kafkaConfigFlagPrefix = "ingest-storage.kafka"
+ targetConsumerLagAtStartupFlag = kafkaConfigFlagPrefix + ".target-consumer-lag-at-startup"
+ maxConsumerLagAtStartupFlag = kafkaConfigFlagPrefix + ".max-consumer-lag-at-startup"
+)
+
+var (
+ ErrMissingKafkaAddress = errors.New("the Kafka address has not been configured")
+ ErrMissingKafkaTopic = errors.New("the Kafka topic has not been configured")
+ ErrInvalidProducerMaxRecordSizeBytes = fmt.Errorf("the configured producer max record size bytes must be a value between %d and %d", minProducerRecordDataBytesLimit, maxProducerRecordDataBytesLimit)
+
+ consumeFromPositionOptions = []string{consumeFromLastOffset, consumeFromStart, consumeFromEnd, consumeFromTimestamp}
+)
+
+// Config holds the generic config for the Kafka backend.
+type Config struct {
+ Address string `yaml:"address"`
+ Topic string `yaml:"topic"`
+ ClientID string `yaml:"client_id"`
+ DialTimeout time.Duration `yaml:"dial_timeout"`
+ WriteTimeout time.Duration `yaml:"write_timeout"`
+
+ ConsumerGroup string `yaml:"consumer_group"`
+
+ LastProducedOffsetRetryTimeout time.Duration `yaml:"last_produced_offset_retry_timeout"`
+
+ AutoCreateTopicEnabled bool `yaml:"auto_create_topic_enabled"`
+ // AutoCreateTopicDefaultPartitions int `yaml:"auto_create_topic_default_partitions"`
+
+ ProducerMaxRecordSizeBytes int `yaml:"producer_max_record_size_bytes"`
+ ProducerMaxBufferedBytes int64 `yaml:"producer_max_buffered_bytes"`
+}
+
+func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
+ cfg.RegisterFlagsWithPrefix("kafka", f)
+}
+
+func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
+ f.StringVar(&cfg.Address, prefix+".address", "localhost:9092", "The Kafka backend address.")
+ f.StringVar(&cfg.Topic, prefix+".topic", "", "The Kafka topic name.")
+ f.StringVar(&cfg.ClientID, prefix+".client-id", "", "The Kafka client ID.")
+ f.DurationVar(&cfg.DialTimeout, prefix+".dial-timeout", 2*time.Second, "The maximum time allowed to open a connection to a Kafka broker.")
+ f.DurationVar(&cfg.WriteTimeout, prefix+".write-timeout", 10*time.Second, "How long to wait for an incoming write request to be successfully committed to the Kafka backend.")
+
+ f.StringVar(&cfg.ConsumerGroup, prefix+".consumer-group", "", "The consumer group used by the consumer to track the last consumed offset. The consumer group must be different for each ingester. If the configured consumer group contains the '' placeholder, it is replaced with the actual partition ID owned by the ingester. When empty (recommended), Mimir uses the ingester instance ID to guarantee uniqueness.")
+
+ f.DurationVar(&cfg.LastProducedOffsetRetryTimeout, prefix+".last-produced-offset-retry-timeout", 10*time.Second, "How long to retry a failed request to get the last produced offset.")
+
+ f.BoolVar(&cfg.AutoCreateTopicEnabled, prefix+".auto-create-topic-enabled", true, "Enable auto-creation of Kafka topic if it doesn't exist.")
+ // f.IntVar(&cfg.AutoCreateTopicDefaultPartitions, prefix+".auto-create-topic-default-partitions", 0, "When auto-creation of Kafka topic is enabled and this value is positive, Kafka's num.partitions configuration option is set on Kafka brokers with this value when Mimir component that uses Kafka starts. This configuration option specifies the default number of partitions that the Kafka broker uses for auto-created topics. Note that this is a Kafka-cluster wide setting, and applies to any auto-created topic. If the setting of num.partitions fails, Mimir proceeds anyways, but auto-created topics could have an incorrect number of partitions.")
+
+ f.IntVar(&cfg.ProducerMaxRecordSizeBytes, prefix+".producer-max-record-size-bytes", maxProducerRecordDataBytesLimit, "The maximum size of a Kafka record data that should be generated by the producer. An incoming write request larger than this size is split into multiple Kafka records. We strongly recommend to not change this setting unless for testing purposes.")
+ f.Int64Var(&cfg.ProducerMaxBufferedBytes, prefix+".producer-max-buffered-bytes", 1024*1024*1024, "The maximum size of (uncompressed) buffered and unacknowledged produced records sent to Kafka. The produce request fails once this limit is reached. This limit is per Kafka client. 0 to disable the limit.")
+}
+
+func (cfg *Config) Validate() error {
+ if cfg.Address == "" {
+ return ErrMissingKafkaAddress
+ }
+ if cfg.Topic == "" {
+ return ErrMissingKafkaTopic
+ }
+ if cfg.ProducerMaxRecordSizeBytes < minProducerRecordDataBytesLimit || cfg.ProducerMaxRecordSizeBytes > maxProducerRecordDataBytesLimit {
+ return ErrInvalidProducerMaxRecordSizeBytes
+ }
+
+ return nil
+}
+
+// GetConsumerGroup returns the consumer group to use for the given instanceID and partitionID.
+func (cfg *Config) GetConsumerGroup(instanceID string, partitionID int32) string {
+ if cfg.ConsumerGroup == "" {
+ return instanceID
+ }
+
+ return strings.ReplaceAll(cfg.ConsumerGroup, "", strconv.Itoa(int(partitionID)))
+}
diff --git a/pkg/kafka/encoding.go b/pkg/kafka/encoding.go
new file mode 100644
index 0000000000000..c4977054f32f6
--- /dev/null
+++ b/pkg/kafka/encoding.go
@@ -0,0 +1,175 @@
+// Package kafka provides encoding and decoding functionality for Loki's Kafka integration.
+package kafka
+
+import (
+ "errors"
+ "fmt"
+ math_bits "math/bits"
+ "sync"
+
+ "github.com/twmb/franz-go/pkg/kgo"
+
+ lru "github.com/hashicorp/golang-lru"
+ "github.com/prometheus/prometheus/model/labels"
+
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
+)
+
+var encoderPool = sync.Pool{
+ New: func() any {
+ return &logproto.Stream{}
+ },
+}
+
+// Encode converts a logproto.Stream into one or more Kafka records.
+// It handles splitting large streams into multiple records if necessary.
+//
+// The encoding process works as follows:
+// 1. If the stream size is smaller than maxSize, it's encoded into a single record.
+// 2. For larger streams, it splits the entries into multiple batches, each under maxSize.
+// 3. The data is wrapped in a Kafka record with the tenant ID as the key.
+//
+// The format of each record is:
+// - Key: Tenant ID (used for routing, not for partitioning)
+// - Value: Protobuf serialized logproto.Stream
+// - Partition: As specified in the partitionID parameter
+//
+// Parameters:
+// - partitionID: The Kafka partition ID for the record
+// - tenantID: The tenant ID for the stream
+// - stream: The logproto.Stream to be encoded
+// - maxSize: The maximum size of each Kafka record
+func Encode(partitionID int32, tenantID string, stream logproto.Stream, maxSize int) ([]*kgo.Record, error) {
+ reqSize := stream.Size()
+
+ // Fast path for small requests
+ if reqSize <= maxSize {
+ rec, err := marshalWriteRequestToRecord(partitionID, tenantID, stream)
+ if err != nil {
+ return nil, err
+ }
+ return []*kgo.Record{rec}, nil
+ }
+
+ var records []*kgo.Record
+ batch := encoderPool.Get().(*logproto.Stream)
+ defer encoderPool.Put(batch)
+
+ batch.Labels = stream.Labels
+ batch.Hash = stream.Hash
+
+ if batch.Entries == nil {
+ batch.Entries = make([]logproto.Entry, 0, 1024)
+ }
+ batch.Entries = batch.Entries[:0]
+ labelsSize := batch.Size()
+ currentSize := labelsSize
+
+ for i, entry := range stream.Entries {
+ l := entry.Size()
+ // Size of the entry in the stream
+ entrySize := 1 + l + sovPush(uint64(l))
+
+ // Check if a single entry is too big
+ if entrySize > maxSize || (i == 0 && currentSize+entrySize > maxSize) {
+ return nil, fmt.Errorf("single entry size (%d) exceeds maximum allowed size (%d)", entrySize, maxSize)
+ }
+
+ if currentSize+entrySize > maxSize {
+ // Current stream is full, create a record and start a new stream
+ if len(batch.Entries) > 0 {
+ rec, err := marshalWriteRequestToRecord(partitionID, tenantID, *batch)
+ if err != nil {
+ return nil, err
+ }
+ records = append(records, rec)
+ }
+ // Reset currentStream
+ batch.Entries = batch.Entries[:0]
+ currentSize = labelsSize
+ }
+ batch.Entries = append(batch.Entries, entry)
+ currentSize += entrySize
+ }
+
+ // Handle any remaining entries
+ if len(batch.Entries) > 0 {
+ rec, err := marshalWriteRequestToRecord(partitionID, tenantID, *batch)
+ if err != nil {
+ return nil, err
+ }
+ records = append(records, rec)
+ }
+
+ if len(records) == 0 {
+ return nil, errors.New("no valid records created")
+ }
+
+ return records, nil
+}
+
+func marshalWriteRequestToRecord(partitionID int32, tenantID string, stream logproto.Stream) (*kgo.Record, error) {
+ data, err := stream.Marshal()
+ if err != nil {
+ return nil, fmt.Errorf("failed to marshal stream: %w", err)
+ }
+
+ return &kgo.Record{
+ Key: []byte(tenantID),
+ Value: data,
+ Partition: partitionID,
+ }, nil
+}
+
+// Decoder is responsible for decoding Kafka record data back into logproto.Stream format.
+// It caches parsed labels for efficiency.
+type Decoder struct {
+ stream *logproto.Stream
+ cache *lru.Cache
+}
+
+func NewDecoder() (*Decoder, error) {
+ cache, err := lru.New(5000) // Set LRU size to 5000, adjust as needed
+ if err != nil {
+ return nil, fmt.Errorf("failed to create LRU cache: %w", err)
+ }
+ return &Decoder{
+ stream: &logproto.Stream{},
+ cache: cache,
+ }, nil
+}
+
+// Decode converts a Kafka record's byte data back into a logproto.Stream and labels.Labels.
+// The decoding process works as follows:
+// 1. Unmarshal the data into a logproto.Stream.
+// 2. Parse and cache the labels for efficiency in future decodes.
+//
+// Returns the decoded logproto.Stream, parsed labels, and any error encountered.
+func (d *Decoder) Decode(data []byte) (logproto.Stream, labels.Labels, error) {
+ d.stream.Entries = d.stream.Entries[:0]
+ if err := d.stream.Unmarshal(data); err != nil {
+ return logproto.Stream{}, nil, fmt.Errorf("failed to unmarshal stream: %w", err)
+ }
+
+ var ls labels.Labels
+ if cachedLabels, ok := d.cache.Get(d.stream.Labels); ok {
+ ls = cachedLabels.(labels.Labels)
+ } else {
+ var err error
+ ls, err = syntax.ParseLabels(d.stream.Labels)
+ if err != nil {
+ return logproto.Stream{}, nil, fmt.Errorf("failed to parse labels: %w", err)
+ }
+ d.cache.Add(d.stream.Labels, ls)
+ }
+
+ return *d.stream, ls, nil
+}
+
+// sovPush calculates the size of varint-encoded uint64.
+// It is used to determine the number of bytes needed to encode a uint64 value
+// in Protocol Buffers' variable-length integer format.
+func sovPush(x uint64) (n int) {
+ return (math_bits.Len64(x|1) + 6) / 7
+}
diff --git a/pkg/kafka/encoding_test.go b/pkg/kafka/encoding_test.go
new file mode 100644
index 0000000000000..3b058b782fdaf
--- /dev/null
+++ b/pkg/kafka/encoding_test.go
@@ -0,0 +1,151 @@
+package kafka
+
+import (
+ "math/rand"
+ "testing"
+ "time"
+
+ "github.com/prometheus/prometheus/model/labels"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/logproto"
+)
+
+func TestEncoderDecoder(t *testing.T) {
+ tests := []struct {
+ name string
+ stream logproto.Stream
+ maxSize int
+ expectSplit bool
+ }{
+ {
+ name: "Small stream, no split",
+ stream: generateStream(10, 100),
+ maxSize: 1024 * 1024,
+ expectSplit: false,
+ },
+ {
+ name: "Large stream, expect split",
+ stream: generateStream(1000, 1000),
+ maxSize: 1024 * 10,
+ expectSplit: true,
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ decoder, err := NewDecoder()
+ require.NoError(t, err)
+
+ records, err := Encode(0, "test-tenant", tt.stream, tt.maxSize)
+ require.NoError(t, err)
+
+ if tt.expectSplit {
+ require.Greater(t, len(records), 1)
+ } else {
+ require.Equal(t, 1, len(records))
+ }
+
+ var decodedEntries []logproto.Entry
+ var decodedLabels labels.Labels
+
+ for _, record := range records {
+ stream, ls, err := decoder.Decode(record.Value)
+ require.NoError(t, err)
+ decodedEntries = append(decodedEntries, stream.Entries...)
+ if decodedLabels == nil {
+ decodedLabels = ls
+ } else {
+ require.Equal(t, decodedLabels, ls)
+ }
+ }
+
+ require.Equal(t, tt.stream.Labels, decodedLabels.String())
+ require.Equal(t, len(tt.stream.Entries), len(decodedEntries))
+ for i, entry := range tt.stream.Entries {
+ require.Equal(t, entry.Timestamp.UTC(), decodedEntries[i].Timestamp.UTC())
+ require.Equal(t, entry.Line, decodedEntries[i].Line)
+ }
+ })
+ }
+}
+
+func TestEncoderSingleEntryTooLarge(t *testing.T) {
+ stream := generateStream(1, 1000)
+
+ _, err := Encode(0, "test-tenant", stream, 100)
+ require.Error(t, err)
+ require.Contains(t, err.Error(), "single entry size")
+}
+
+func TestDecoderInvalidData(t *testing.T) {
+ decoder, err := NewDecoder()
+ require.NoError(t, err)
+
+ _, _, err = decoder.Decode([]byte("invalid data"))
+ require.Error(t, err)
+}
+
+func TestEncoderDecoderEmptyStream(t *testing.T) {
+ decoder, err := NewDecoder()
+ require.NoError(t, err)
+
+ stream := logproto.Stream{
+ Labels: `{app="test"}`,
+ }
+
+ records, err := Encode(0, "test-tenant", stream, 10<<20)
+ require.NoError(t, err)
+ require.Len(t, records, 1)
+
+ decodedStream, decodedLabels, err := decoder.Decode(records[0].Value)
+ require.NoError(t, err)
+ require.Equal(t, stream.Labels, decodedLabels.String())
+ require.Empty(t, decodedStream.Entries)
+}
+
+func BenchmarkEncodeDecode(b *testing.B) {
+ decoder, _ := NewDecoder()
+ stream := generateStream(1000, 200)
+
+ b.ResetTimer()
+ for i := 0; i < b.N; i++ {
+ records, err := Encode(0, "test-tenant", stream, 10<<20)
+ if err != nil {
+ b.Fatal(err)
+ }
+ for _, record := range records {
+ _, _, err := decoder.Decode(record.Value)
+ if err != nil {
+ b.Fatal(err)
+ }
+ }
+ }
+}
+
+// Helper function to generate a test stream
+func generateStream(entries, lineLength int) logproto.Stream {
+ stream := logproto.Stream{
+ Labels: `{app="test", env="prod"}`,
+ Entries: make([]logproto.Entry, entries),
+ }
+
+ for i := 0; i < entries; i++ {
+ stream.Entries[i] = logproto.Entry{
+ Timestamp: time.Now(),
+ Line: generateRandomString(lineLength),
+ }
+ }
+
+ return stream
+}
+
+// Helper function to generate a random string
+func generateRandomString(length int) string {
+ const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
+ b := make([]byte, length)
+ for i := range b {
+ b[i] = charset[rand.Intn(len(charset))]
+ }
+ return string(b)
+}
diff --git a/pkg/kafka/ingester/consumer.go b/pkg/kafka/ingester/consumer.go
new file mode 100644
index 0000000000000..d011ae16517d2
--- /dev/null
+++ b/pkg/kafka/ingester/consumer.go
@@ -0,0 +1,307 @@
+package ingester
+
+import (
+ "bytes"
+ "context"
+ "crypto/rand"
+ "fmt"
+ "io"
+ "math"
+ "sync"
+ "time"
+
+ "github.com/dustin/go-humanize"
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/oklog/ulid"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "google.golang.org/grpc"
+
+ "github.com/grafana/dskit/backoff"
+
+ "github.com/grafana/loki/v3/pkg/ingester-rf1/metastore/metastorepb"
+ "github.com/grafana/loki/v3/pkg/kafka"
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
+)
+
+// ObjectStorage defines an interface for object storage operations
+type ObjectStorage interface {
+ PutObject(ctx context.Context, objectKey string, object io.Reader) error
+}
+
+// MetadataStore defines an interface for metadata storage operations
+type MetadataStore interface {
+ AddBlock(ctx context.Context, in *metastorepb.AddBlockRequest, opts ...grpc.CallOption) (*metastorepb.AddBlockResponse, error)
+}
+
+// Committer defines an interface for committing offsets
+type Committer interface {
+ Commit(ctx context.Context, offset int64) error
+}
+
+// consumer represents a Kafka consumer that processes and stores log entries
+type consumer struct {
+ metastoreClient MetadataStore
+ storage ObjectStorage
+ writer *wal.SegmentWriter
+ committer Committer
+ flushInterval time.Duration
+ maxFlushSize int64
+ lastOffset int64
+
+ flushBuf *bytes.Buffer
+ decoder *kafka.Decoder
+ toStore []*logproto.Entry
+
+ metrics *consumerMetrics
+ logger log.Logger
+}
+
+// NewConsumerFactory creates and initializes a new consumer instance
+func NewConsumerFactory(
+ metastoreClient MetadataStore,
+ storage ObjectStorage,
+ flushInterval time.Duration,
+ maxFlushSize int64,
+ logger log.Logger,
+ reg prometheus.Registerer,
+) ConsumerFactory {
+ return func(committer Committer) (Consumer, error) {
+ writer, err := wal.NewWalSegmentWriter()
+ if err != nil {
+ return nil, err
+ }
+ decoder, err := kafka.NewDecoder()
+ if err != nil {
+ return nil, err
+ }
+ return &consumer{
+ logger: logger,
+ metastoreClient: metastoreClient,
+ storage: storage,
+ writer: writer,
+ metrics: newConsumerMetrics(reg),
+ flushBuf: bytes.NewBuffer(make([]byte, 0, 10<<20)), // 10 MB
+ decoder: decoder,
+ committer: committer,
+ flushInterval: flushInterval,
+ maxFlushSize: maxFlushSize,
+ lastOffset: -1,
+ }, nil
+ }
+}
+
+// Start starts the consumer and returns a function to wait for it to finish
+// It consumes records from the recordsChan, and flushes them to storage periodically.
+func (c *consumer) Start(ctx context.Context, recordsChan <-chan []record) func() {
+ var wg sync.WaitGroup
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ flushTicker := time.NewTicker(c.flushInterval)
+ defer flushTicker.Stop()
+ for {
+ select {
+ case <-flushTicker.C:
+ level.Info(c.logger).Log("msg", "flushing block")
+ c.Flush()
+ case <-ctx.Done():
+ level.Info(c.logger).Log("msg", "shutting down consumer")
+ c.Flush()
+ return
+ case records := <-recordsChan:
+ if err := c.consume(records); err != nil {
+ level.Error(c.logger).Log("msg", "failed to consume records", "error", err)
+ return
+ }
+ if c.writer.InputSize() > c.maxFlushSize {
+ level.Info(c.logger).Log("msg", "flushing block due to size limit", "size", humanize.Bytes(uint64(c.writer.InputSize())))
+ c.Flush()
+ }
+ }
+ }
+ }()
+ return wg.Wait
+}
+
+// consume processes a batch of Kafka records, decoding and storing them
+func (c *consumer) consume(records []record) error {
+ if len(records) == 0 {
+ return nil
+ }
+ var (
+ minOffset = int64(math.MaxInt64)
+ maxOffset = int64(0)
+ )
+ for _, record := range records {
+ minOffset = min(minOffset, record.offset)
+ maxOffset = max(maxOffset, record.offset)
+ }
+ level.Debug(c.logger).Log("msg", "consuming records", "min_offset", minOffset, "max_offset", maxOffset)
+ return c.retryWithBackoff(context.Background(), backoff.Config{
+ MinBackoff: 250 * time.Millisecond,
+ MaxBackoff: 2 * time.Second,
+ MaxRetries: 0, // retry forever
+ }, func(boff *backoff.Backoff) error {
+ consumeStart := time.Now()
+ if err := c.appendRecords(records); err != nil {
+ level.Error(c.logger).Log(
+ "msg", "encountered error while ingesting data from Kafka; should retry",
+ "err", err,
+ "record_min_offset", minOffset,
+ "record_max_offset", maxOffset,
+ "num_retries", boff.NumRetries(),
+ )
+ return err
+ }
+ c.lastOffset = maxOffset
+ c.metrics.currentOffset.Set(float64(c.lastOffset))
+ c.metrics.consumeLatency.Observe(time.Since(consumeStart).Seconds())
+ return nil
+ })
+}
+
+func (c *consumer) appendRecords(records []record) error {
+ for _, record := range records {
+ stream, labels, err := c.decoder.Decode(record.content)
+ if err != nil {
+ return fmt.Errorf("failed to decode record: %w", err)
+ }
+ if len(stream.Entries) == 0 {
+ continue
+ }
+ if len(c.toStore) == 0 {
+ c.toStore = make([]*logproto.Entry, 0, len(stream.Entries))
+ }
+ c.toStore = c.toStore[:0]
+ for _, entry := range stream.Entries {
+ c.toStore = append(c.toStore, &logproto.Entry{
+ Timestamp: entry.Timestamp,
+ Line: entry.Line,
+ StructuredMetadata: entry.StructuredMetadata,
+ Parsed: entry.Parsed,
+ })
+ }
+ c.writer.Append(record.tenantID, stream.Labels, labels, c.toStore, time.Now())
+ }
+ return nil
+}
+
+// Flush writes the accumulated data to storage and updates the metadata store
+func (c *consumer) Flush() {
+ if c.writer.InputSize() == 0 {
+ return
+ }
+ if c.lastOffset == -1 {
+ return
+ }
+ if err := c.retryWithBackoff(context.Background(), backoff.Config{
+ MinBackoff: 250 * time.Millisecond,
+ MaxBackoff: 10 * time.Second,
+ MaxRetries: 0, // retry forever
+ }, func(boff *backoff.Backoff) error {
+ start := time.Now()
+ c.metrics.flushesTotal.Add(1)
+ defer func() { c.metrics.flushDuration.Observe(time.Since(start).Seconds()) }()
+ ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+ defer cancel()
+ if err := c.flush(ctx); err != nil {
+ c.metrics.flushFailuresTotal.Inc()
+ level.Error(c.logger).Log(
+ "msg", "failed to flush block",
+ "error", err,
+ "num_retries", boff.NumRetries(),
+ )
+ return err
+ }
+ c.lastOffset = -1
+ return nil
+ }); err != nil {
+ level.Error(c.logger).Log("msg", "failed to flush block", "error", err)
+ }
+}
+
+func (c *consumer) retryWithBackoff(ctx context.Context, cfg backoff.Config, fn func(boff *backoff.Backoff) error) error {
+ boff := backoff.New(ctx, cfg)
+ var err error
+ for boff.Ongoing() {
+ err = fn(boff)
+ if err == nil {
+ return nil
+ }
+ boff.Wait()
+ }
+ if err != nil {
+ return err
+ }
+ return boff.ErrCause()
+}
+
+func (c *consumer) flush(ctx context.Context) error {
+ defer c.flushBuf.Reset()
+ if _, err := c.writer.WriteTo(c.flushBuf); err != nil {
+ return err
+ }
+
+ stats := wal.GetSegmentStats(c.writer, time.Now())
+ wal.ReportSegmentStats(stats, c.metrics.segmentMetrics)
+
+ id := ulid.MustNew(ulid.Timestamp(time.Now()), rand.Reader).String()
+ if err := c.storage.PutObject(ctx, wal.Dir+id, c.flushBuf); err != nil {
+ return fmt.Errorf("failed to put object to object storage: %w", err)
+ }
+
+ if _, err := c.metastoreClient.AddBlock(ctx, &metastorepb.AddBlockRequest{
+ Block: c.writer.Meta(id),
+ }); err != nil {
+ return fmt.Errorf("failed to add block to metastore: %w", err)
+ }
+ c.writer.Reset()
+ if err := c.committer.Commit(ctx, c.lastOffset); err != nil {
+ return fmt.Errorf("failed to commit offset: %w", err)
+ }
+
+ return nil
+}
+
+// consumerMetrics holds various Prometheus metrics for monitoring consumer operations
+type consumerMetrics struct {
+ flushesTotal prometheus.Counter
+ flushFailuresTotal prometheus.Counter
+ flushDuration prometheus.Histogram
+ segmentMetrics *wal.SegmentMetrics
+ consumeLatency prometheus.Histogram
+ currentOffset prometheus.Gauge
+}
+
+// newConsumerMetrics initializes and returns a new consumerMetrics instance
+func newConsumerMetrics(reg prometheus.Registerer) *consumerMetrics {
+ return &consumerMetrics{
+ flushesTotal: promauto.With(reg).NewCounter(prometheus.CounterOpts{
+ Name: "loki_kafka_ingester_flushes_total",
+ Help: "The total number of flushes.",
+ }),
+ flushFailuresTotal: promauto.With(reg).NewCounter(prometheus.CounterOpts{
+ Name: "loki_kafka_ingester_flush_failures_total",
+ Help: "The total number of failed flushes.",
+ }),
+ flushDuration: promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
+ Name: "loki_kafka_ingester_flush_duration_seconds",
+ Help: "The flush duration (in seconds).",
+ Buckets: prometheus.ExponentialBuckets(0.001, 4, 8),
+ NativeHistogramBucketFactor: 1.1,
+ }),
+ consumeLatency: promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
+ Name: "loki_ingest_storage_reader_records_batch_process_duration_seconds",
+ Help: "How long a consumer spent processing a batch of records from Kafka.",
+ NativeHistogramBucketFactor: 1.1,
+ }),
+ segmentMetrics: wal.NewSegmentMetrics(reg),
+ currentOffset: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_kafka_ingester_current_offset",
+ Help: "The current offset of the Kafka consumer.",
+ }),
+ }
+}
diff --git a/pkg/kafka/ingester/consumer_test.go b/pkg/kafka/ingester/consumer_test.go
new file mode 100644
index 0000000000000..3f0adcce6247d
--- /dev/null
+++ b/pkg/kafka/ingester/consumer_test.go
@@ -0,0 +1,192 @@
+package ingester
+
+import (
+ "context"
+ "os"
+ "strings"
+ "testing"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/ingester-rf1/metastore/metastorepb"
+ "github.com/grafana/loki/v3/pkg/ingester-rf1/objstore"
+ "github.com/grafana/loki/v3/pkg/kafka"
+ "github.com/grafana/loki/v3/pkg/logproto"
+)
+
+type mockCommitter struct {
+ committed int64
+}
+
+func newMockCommitter() *mockCommitter {
+ return &mockCommitter{
+ committed: -1,
+ }
+}
+
+func (m *mockCommitter) Commit(_ context.Context, offset int64) error {
+ m.committed = offset
+ return nil
+}
+
+func TestConsumer_PeriodicFlush(t *testing.T) {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ storage, err := objstore.NewTestStorage(t)
+ require.NoError(t, err)
+
+ metastore := NewTestMetastore()
+ reg := prometheus.NewRegistry()
+
+ flushInterval := 100 * time.Millisecond
+ maxFlushSize := int64(1000)
+
+ committer := &mockCommitter{}
+ consumerFactory := NewConsumerFactory(metastore, storage, flushInterval, maxFlushSize, log.NewLogfmtLogger(os.Stdout), reg)
+ consumer, err := consumerFactory(committer)
+ require.NoError(t, err)
+
+ recordsChan := make(chan []record)
+ _ = consumer.Start(ctx, recordsChan)
+
+ stream := logproto.Stream{
+ Labels: `{__name__="test_metric", label="value1"}`,
+ Entries: []logproto.Entry{
+ {Timestamp: time.Unix(0, 1000), Line: "10.5"},
+ },
+ }
+
+ encodedRecords, err := kafka.Encode(0, "tenant1", stream, 10<<20)
+ require.NoError(t, err)
+
+ records := []record{{
+ tenantID: "tenant1",
+ content: encodedRecords[0].Value,
+ offset: 0,
+ }}
+
+ recordsChan <- records
+
+ require.Eventually(t, func() bool {
+ blocks, err := metastore.ListBlocksForQuery(ctx, &metastorepb.ListBlocksForQueryRequest{
+ TenantId: "tenant1",
+ StartTime: 0,
+ EndTime: 100000,
+ })
+ require.NoError(t, err)
+ return len(blocks.Blocks) == 1
+ }, 5*time.Second, 100*time.Millisecond)
+
+ // Verify committed offset
+ require.Equal(t, int64(0), committer.committed)
+}
+
+func TestConsumer_ShutdownFlush(t *testing.T) {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ storage, err := objstore.NewTestStorage(t)
+ require.NoError(t, err)
+
+ metastore := NewTestMetastore()
+ reg := prometheus.NewRegistry()
+
+ flushInterval := 1 * time.Hour
+ maxFlushSize := int64(1000)
+
+ committer := &mockCommitter{}
+ consumerFactory := NewConsumerFactory(metastore, storage, flushInterval, maxFlushSize, log.NewLogfmtLogger(os.Stdout), reg)
+ consumer, err := consumerFactory(committer)
+ require.NoError(t, err)
+
+ recordsChan := make(chan []record)
+ wait := consumer.Start(ctx, recordsChan)
+
+ stream := logproto.Stream{
+ Labels: `{__name__="test_metric", label="value1"}`,
+ Entries: []logproto.Entry{
+ {Timestamp: time.Unix(0, 1000), Line: "10.5"},
+ },
+ }
+
+ encodedRecords, err := kafka.Encode(0, "tenant1", stream, 10<<20)
+ require.NoError(t, err)
+
+ records := []record{{
+ tenantID: "tenant1",
+ content: encodedRecords[0].Value,
+ offset: 0,
+ }}
+
+ recordsChan <- records
+
+ cancel()
+ wait()
+
+ blocks, err := metastore.ListBlocksForQuery(ctx, &metastorepb.ListBlocksForQueryRequest{
+ TenantId: "tenant1",
+ StartTime: 0,
+ EndTime: 100000,
+ })
+ require.NoError(t, err)
+ require.Equal(t, 1, len(blocks.Blocks))
+
+ // Verify committed offset
+ require.Equal(t, int64(0), committer.committed)
+}
+
+func TestConsumer_MaxFlushSize(t *testing.T) {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+
+ storage, err := objstore.NewTestStorage(t)
+ require.NoError(t, err)
+
+ metastore := NewTestMetastore()
+ reg := prometheus.NewRegistry()
+
+ flushInterval := 1 * time.Hour
+ maxFlushSize := int64(10)
+
+ committer := &mockCommitter{}
+ consumerFactory := NewConsumerFactory(metastore, storage, flushInterval, maxFlushSize, log.NewLogfmtLogger(os.Stdout), reg)
+ consumer, err := consumerFactory(committer)
+ require.NoError(t, err)
+
+ recordsChan := make(chan []record)
+ _ = consumer.Start(ctx, recordsChan)
+
+ stream := logproto.Stream{
+ Labels: `{__name__="test_metric", label="value1"}`,
+ Entries: []logproto.Entry{
+ {Timestamp: time.Unix(0, 1000), Line: strings.Repeat("a", 100)},
+ },
+ }
+
+ encodedRecords, err := kafka.Encode(0, "tenant1", stream, 10<<20)
+ require.NoError(t, err)
+
+ records := []record{{
+ tenantID: "tenant1",
+ content: encodedRecords[0].Value,
+ offset: 0,
+ }}
+
+ recordsChan <- records
+
+ require.Eventually(t, func() bool {
+ blocks, err := metastore.ListBlocksForQuery(ctx, &metastorepb.ListBlocksForQueryRequest{
+ TenantId: "tenant1",
+ StartTime: 0,
+ EndTime: 100000,
+ })
+ require.NoError(t, err)
+ return len(blocks.Blocks) == 1
+ }, 5*time.Second, 100*time.Millisecond)
+
+ require.Equal(t, int64(0), committer.committed)
+}
diff --git a/pkg/kafka/ingester/ingester.go b/pkg/kafka/ingester/ingester.go
new file mode 100644
index 0000000000000..56421b1b712d7
--- /dev/null
+++ b/pkg/kafka/ingester/ingester.go
@@ -0,0 +1,405 @@
+package ingester
+
+import (
+ "context"
+ "errors"
+ "flag"
+ "fmt"
+ "net/http"
+ "regexp"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/kv"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/services"
+ "github.com/prometheus/client_golang/prometheus"
+ "google.golang.org/grpc/health/grpc_health_v1"
+
+ "github.com/grafana/loki/v3/pkg/kafka"
+ "github.com/grafana/loki/v3/pkg/kafka/ingester/shutdownmarker"
+ "github.com/grafana/loki/v3/pkg/kafka/partitionring"
+ util_log "github.com/grafana/loki/v3/pkg/util/log"
+
+ "github.com/grafana/loki/v3/pkg/util"
+)
+
+const (
+ RingName = "kafka-ingester"
+ PartitionRingName = "kafka-partition"
+)
+
+var (
+ ingesterIDRegexp = regexp.MustCompile("-([0-9]+)$")
+ defaultFlushInterval = 15 * time.Second
+ defaultFlushSize int64 = 300 << 20 // 300 MB
+)
+
+// Config for an ingester.
+type Config struct {
+ Enabled bool `yaml:"enabled" doc:"description=Whether the kafka ingester is enabled."`
+ LifecyclerConfig ring.LifecyclerConfig `yaml:"lifecycler,omitempty" doc:"description=Configures how the lifecycle of the ingester will operate and where it will register for discovery."`
+ ShutdownMarkerPath string `yaml:"shutdown_marker_path"`
+ FlushInterval time.Duration `yaml:"flush_interval" doc:"description=The interval at which the ingester will flush and commit offsets to Kafka. If not set, the default flush interval will be used."`
+ FlushSize int64 `yaml:"flush_size" doc:"description=The size at which the ingester will flush and commit offsets to Kafka. If not set, the default flush size will be used."`
+ PartitionRingConfig partitionring.Config `yaml:"partition_ring" category:"experimental"`
+ KafkaConfig kafka.Config `yaml:"-"`
+}
+
+// RegisterFlags registers the flags.
+func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
+ cfg.LifecyclerConfig.RegisterFlagsWithPrefix("kafka-ingester", f, util_log.Logger)
+ cfg.PartitionRingConfig.RegisterFlags(f)
+ f.StringVar(&cfg.ShutdownMarkerPath, "kafka-ingester.shutdown-marker-path", "", "Path where the shutdown marker file is stored. If not set and common.path_prefix is set then common.path_prefix will be used.")
+ f.BoolVar(&cfg.Enabled, "kafka-ingester.enabled", false, "Whether the Kafka-based ingester path is enabled")
+ f.DurationVar(&cfg.FlushInterval, "kafka-ingester.flush-interval", defaultFlushInterval, "The interval at which the ingester will flush and commit offsets to Kafka. If not set, the default flush interval will be used.")
+ f.Int64Var(&cfg.FlushSize, "kafka-ingester.flush-size", defaultFlushSize, "The size at which the ingester will flush and commit offsets to Kafka. If not set, the default flush size will be used.")
+}
+
+func (cfg *Config) Validate() error {
+ if !cfg.Enabled {
+ return nil
+ }
+ if cfg.FlushInterval <= 0 {
+ return errors.New("kafka-ingester.flush-interval must be greater than 0")
+ }
+ if cfg.LifecyclerConfig.RingConfig.ReplicationFactor != 1 {
+ cfg.LifecyclerConfig.RingConfig.ReplicationFactor = 1
+ level.Warn(util_log.Logger).Log("msg", "kafka-ingester.lifecycler.replication-factor has been set to 1. This is the only supported replication factor for the kafka-ingester.")
+ }
+ return nil
+}
+
+type Wrapper interface {
+ Wrap(wrapped Interface) Interface
+}
+
+// Interface is an interface for the Ingester
+type Interface interface {
+ services.Service
+ http.Handler
+ CheckReady(ctx context.Context) error
+ FlushHandler(w http.ResponseWriter, _ *http.Request)
+}
+
+// Ingester builds chunks for incoming log streams.
+type Ingester struct {
+ services.Service
+
+ cfg Config
+ logger log.Logger
+
+ metrics *ingesterMetrics
+
+ lifecycler *ring.Lifecycler
+ lifecyclerWatcher *services.FailureWatcher
+ ingesterPartitionID int32
+ partitionRingLifecycler *ring.PartitionInstanceLifecycler
+ partitionReader *PartitionReader
+}
+
+// New makes a new Ingester.
+func New(cfg Config,
+ consumerFactory ConsumerFactory,
+ logger log.Logger,
+ metricsNamespace string,
+ registerer prometheus.Registerer,
+) (*Ingester, error) {
+ metrics := newIngesterMetrics(registerer)
+
+ ingesterPartitionID, err := extractIngesterPartitionID(cfg.LifecyclerConfig.ID)
+ if err != nil {
+ return nil, fmt.Errorf("calculating ingester partition ID: %w", err)
+ }
+
+ partitionRingKV := cfg.PartitionRingConfig.KVStore.Mock
+ if partitionRingKV == nil {
+ partitionRingKV, err = kv.NewClient(cfg.PartitionRingConfig.KVStore, ring.GetPartitionRingCodec(), kv.RegistererWithKVName(registerer, PartitionRingName+"-lifecycler"), logger)
+ if err != nil {
+ return nil, fmt.Errorf("creating KV store for ingester partition ring: %w", err)
+ }
+ }
+
+ partitionRingLifecycler := ring.NewPartitionInstanceLifecycler(
+ cfg.PartitionRingConfig.ToLifecyclerConfig(ingesterPartitionID, cfg.LifecyclerConfig.ID),
+ PartitionRingName,
+ PartitionRingName+"-key",
+ partitionRingKV,
+ logger,
+ prometheus.WrapRegistererWithPrefix("loki_", registerer))
+ i := &Ingester{
+ cfg: cfg,
+ logger: logger,
+ ingesterPartitionID: ingesterPartitionID,
+ partitionRingLifecycler: partitionRingLifecycler,
+ metrics: metrics,
+ }
+
+ i.lifecycler, err = ring.NewLifecycler(cfg.LifecyclerConfig, i, RingName, RingName+"-ring", true, logger, prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer))
+ if err != nil {
+ return nil, err
+ }
+ i.partitionReader, err = NewPartitionReader(cfg.KafkaConfig, ingesterPartitionID, cfg.LifecyclerConfig.ID, consumerFactory, logger, registerer)
+ if err != nil {
+ return nil, err
+ }
+
+ i.lifecyclerWatcher = services.NewFailureWatcher()
+ i.lifecyclerWatcher.WatchService(i.lifecycler)
+ i.lifecyclerWatcher.WatchService(i.partitionRingLifecycler)
+ i.lifecyclerWatcher.WatchService(i.partitionReader)
+
+ i.Service = services.NewBasicService(i.starting, i.running, i.stopping)
+
+ return i, nil
+}
+
+// ingesterPartitionID returns the partition ID owner the the given ingester.
+func extractIngesterPartitionID(ingesterID string) (int32, error) {
+ if strings.Contains(ingesterID, "local") {
+ return 0, nil
+ }
+
+ match := ingesterIDRegexp.FindStringSubmatch(ingesterID)
+ if len(match) == 0 {
+ return 0, fmt.Errorf("ingester ID %s doesn't match regular expression %q", ingesterID, ingesterIDRegexp.String())
+ }
+ // Parse the ingester sequence number.
+ ingesterSeq, err := strconv.Atoi(match[1])
+ if err != nil {
+ return 0, fmt.Errorf("no ingester sequence number in ingester ID %s", ingesterID)
+ }
+
+ return int32(ingesterSeq), nil
+}
+
+// ServeHTTP implements the pattern ring status page.
+func (i *Ingester) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ i.lifecycler.ServeHTTP(w, r)
+}
+
+func (i *Ingester) starting(ctx context.Context) (err error) {
+ defer func() {
+ if err != nil {
+ // if starting() fails for any reason (e.g., context canceled),
+ // the lifecycler must be stopped.
+ _ = services.StopAndAwaitTerminated(context.Background(), i.lifecycler)
+ }
+ }()
+
+ // First of all we have to check if the shutdown marker is set. This needs to be done
+ // as first thing because, if found, it may change the behaviour of the ingester startup.
+ if exists, err := shutdownmarker.Exists(shutdownmarker.GetPath(i.cfg.ShutdownMarkerPath)); err != nil {
+ return fmt.Errorf("failed to check ingester shutdown marker: %w", err)
+ } else if exists {
+ level.Info(i.logger).Log("msg", "detected existing shutdown marker, setting unregister and flush on shutdown", "path", shutdownmarker.GetPath(i.cfg.ShutdownMarkerPath))
+ i.setPrepareShutdown()
+ }
+
+ // pass new context to lifecycler, so that it doesn't stop automatically when Ingester's service context is done
+ err = i.lifecycler.StartAsync(context.Background())
+ if err != nil {
+ return err
+ }
+
+ err = i.lifecycler.AwaitRunning(ctx)
+ if err != nil {
+ return err
+ }
+
+ err = i.partitionRingLifecycler.StartAsync(context.Background())
+ if err != nil {
+ return err
+ }
+ err = i.partitionRingLifecycler.AwaitRunning(ctx)
+ if err != nil {
+ return err
+ }
+ err = i.partitionReader.StartAsync(context.Background())
+ if err != nil {
+ return err
+ }
+ err = i.partitionReader.AwaitRunning(ctx)
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (i *Ingester) running(ctx context.Context) error {
+ var serviceError error
+ select {
+ // wait until service is asked to stop
+ case <-ctx.Done():
+ // stop
+ case err := <-i.lifecyclerWatcher.Chan():
+ serviceError = fmt.Errorf("lifecycler failed: %w", err)
+ }
+
+ return serviceError
+}
+
+// stopping is called when Ingester transitions to Stopping state.
+//
+// At this point, loop no longer runs, but flushers are still running.
+func (i *Ingester) stopping(_ error) error {
+ var errs util.MultiError
+
+ errs.Add(services.StopAndAwaitTerminated(context.Background(), i.partitionReader))
+ errs.Add(services.StopAndAwaitTerminated(context.Background(), i.lifecycler))
+ errs.Add(services.StopAndAwaitTerminated(context.Background(), i.partitionRingLifecycler))
+ // Remove the shutdown marker if it exists since we are shutting down
+ shutdownMarkerPath := shutdownmarker.GetPath(i.cfg.ShutdownMarkerPath)
+ exist, err := shutdownmarker.Exists(shutdownMarkerPath)
+ if err != nil {
+ level.Warn(i.logger).Log("msg", "failed to check for prepare-shutdown marker file", "path", shutdownMarkerPath, "err", err)
+ } else if exist {
+ if err := shutdownmarker.Remove(shutdownMarkerPath); err != nil {
+ level.Warn(i.logger).Log("msg", "failed to remove shutdown marker", "path", shutdownMarkerPath, "err", err)
+ }
+ }
+ return errs.Err()
+}
+
+// Watch implements grpc_health_v1.HealthCheck.
+func (*Ingester) Watch(*grpc_health_v1.HealthCheckRequest, grpc_health_v1.Health_WatchServer) error {
+ return nil
+}
+
+func (i *Ingester) PreparePartitionDownscaleHandler(w http.ResponseWriter, r *http.Request) {
+ logger := log.With(i.logger, "partition", i.ingesterPartitionID)
+ // Don't allow callers to change the shutdown configuration while we're in the middle
+ // of starting or shutting down.
+ if i.State() != services.Running {
+ w.WriteHeader(http.StatusServiceUnavailable)
+ return
+ }
+
+ shutdownMarkerPath := shutdownmarker.GetPath(i.cfg.ShutdownMarkerPath)
+ exists, err := shutdownmarker.Exists(shutdownMarkerPath)
+ if err != nil {
+ level.Error(i.logger).Log("msg", "unable to check for prepare-shutdown marker file", "path", shutdownMarkerPath, "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+ switch r.Method {
+ case http.MethodPost:
+ // It's not allowed to prepare the downscale while in PENDING state. Why? Because if the downscale
+ // will be later cancelled, we don't know if it was requested in PENDING or ACTIVE state, so we
+ // don't know to which state reverting back. Given a partition is expected to stay in PENDING state
+ // for a short period, we simply don't allow this case.
+ state, _, err := i.partitionRingLifecycler.GetPartitionState(r.Context())
+ if err != nil {
+ level.Error(logger).Log("msg", "failed to check partition state in the ring", "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+
+ if state == ring.PartitionPending {
+ level.Warn(logger).Log("msg", "received a request to prepare partition for shutdown, but the request can't be satisfied because the partition is in PENDING state")
+ w.WriteHeader(http.StatusConflict)
+ return
+ }
+
+ if err := i.partitionRingLifecycler.ChangePartitionState(r.Context(), ring.PartitionInactive); err != nil {
+ level.Error(logger).Log("msg", "failed to change partition state to inactive", "err", err)
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ if !exists {
+ if err := shutdownmarker.Create(shutdownMarkerPath); err != nil {
+ level.Error(i.logger).Log("msg", "unable to create prepare-shutdown marker file", "path", shutdownMarkerPath, "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+ }
+ i.setPrepareShutdown()
+
+ case http.MethodDelete:
+ state, _, err := i.partitionRingLifecycler.GetPartitionState(r.Context())
+ if err != nil {
+ level.Error(logger).Log("msg", "failed to check partition state in the ring", "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+
+ // If partition is inactive, make it active. We ignore other states Active and especially Pending.
+ if state == ring.PartitionInactive {
+
+ // We don't switch it back to PENDING state if there are not enough owners because we want to guarantee consistency
+ // in the read path. If the partition is within the lookback period we need to guarantee that partition will be queried.
+ // Moving back to PENDING will cause us loosing consistency, because PENDING partitions are not queried by design.
+ // We could move back to PENDING if there are not enough owners and the partition moved to INACTIVE more than
+ // "lookback period" ago, but since we delete inactive partitions with no owners that moved to inactive since longer
+ // than "lookback period" ago, it looks to be an edge case not worth to address.
+ if err := i.partitionRingLifecycler.ChangePartitionState(r.Context(), ring.PartitionActive); err != nil {
+ level.Error(logger).Log("msg", "failed to change partition state to active", "err", err)
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
+ if exists {
+ if err := shutdownmarker.Remove(shutdownMarkerPath); err != nil {
+ level.Error(i.logger).Log("msg", "unable to remove prepare-shutdown marker file", "path", shutdownMarkerPath, "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+ }
+ i.unsetPrepareShutdown()
+ }
+ }
+
+ state, stateTimestamp, err := i.partitionRingLifecycler.GetPartitionState(r.Context())
+ if err != nil {
+ level.Error(logger).Log("msg", "failed to check partition state in the ring", "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+
+ if state == ring.PartitionInactive {
+ util.WriteJSONResponse(w, map[string]any{"timestamp": stateTimestamp.Unix()})
+ } else {
+ util.WriteJSONResponse(w, map[string]any{"timestamp": 0})
+ }
+}
+
+// setPrepareShutdown toggles ingester lifecycler config to prepare for shutdown
+func (i *Ingester) setPrepareShutdown() {
+ i.lifecycler.SetUnregisterOnShutdown(true)
+ i.lifecycler.SetFlushOnShutdown(true)
+ i.partitionRingLifecycler.SetCreatePartitionOnStartup(false)
+ i.partitionRingLifecycler.SetRemoveOwnerOnShutdown(true)
+ i.metrics.shutdownMarker.Set(1)
+}
+
+func (i *Ingester) unsetPrepareShutdown() {
+ i.lifecycler.SetUnregisterOnShutdown(i.cfg.LifecyclerConfig.UnregisterOnShutdown)
+ i.lifecycler.SetFlushOnShutdown(true)
+ i.partitionRingLifecycler.SetCreatePartitionOnStartup(true)
+ i.partitionRingLifecycler.SetRemoveOwnerOnShutdown(false)
+ i.metrics.shutdownMarker.Set(0)
+}
+
+// ReadinessHandler is used to indicate to k8s when the ingesters are ready for
+// the addition removal of another ingester. Returns 204 when the ingester is
+// ready, 500 otherwise.
+func (i *Ingester) CheckReady(ctx context.Context) error {
+ // todo.
+ if s := i.State(); s != services.Running && s != services.Stopping {
+ return fmt.Errorf("ingester not ready: %v", s)
+ }
+ return i.lifecycler.CheckReady(ctx)
+}
+
+// Flush implements ring.FlushTransferer
+// Flush triggers a flush of all the chunks and closes the flush queues.
+// Called from the Lifecycler as part of the ingester shutdown.
+func (i *Ingester) Flush() {
+}
+
+func (i *Ingester) TransferOut(_ context.Context) error {
+ return nil
+}
diff --git a/pkg/kafka/ingester/ingester_test.go b/pkg/kafka/ingester/ingester_test.go
new file mode 100644
index 0000000000000..a3bcca72ca3d8
--- /dev/null
+++ b/pkg/kafka/ingester/ingester_test.go
@@ -0,0 +1,174 @@
+package ingester
+
+import (
+ "context"
+ "net/http"
+ "net/http/httptest"
+ "testing"
+ "time"
+
+ "github.com/go-kit/log"
+ gokitlog "github.com/go-kit/log"
+ "github.com/grafana/dskit/flagext"
+ "github.com/grafana/dskit/kv/consul"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/services"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/stretchr/testify/require"
+ "google.golang.org/grpc"
+
+ "github.com/grafana/loki/v3/pkg/ingester-rf1/metastore/metastorepb"
+ "github.com/grafana/loki/v3/pkg/ingester-rf1/objstore"
+ "github.com/grafana/loki/v3/pkg/util/test"
+)
+
+func TestPreparePartitionDownscaleHandler(t *testing.T) {
+ cfg := defaultIngesterTestConfig(t)
+ // start ingester.
+ storage, err := objstore.NewTestStorage(t)
+ require.NoError(t, err)
+ ing, err := New(cfg,
+ NewConsumerFactory(NewTestMetastore(), storage, cfg.FlushInterval, cfg.FlushSize, gokitlog.NewNopLogger(), prometheus.NewRegistry()),
+ gokitlog.NewNopLogger(), "test", prometheus.NewRegistry())
+ require.NoError(t, err)
+ err = services.StartAndAwaitRunning(context.Background(), ing)
+ require.NoError(t, err)
+
+ t.Run("get state", func(t *testing.T) {
+ w := httptest.NewRecorder()
+ ing.PreparePartitionDownscaleHandler(w, httptest.NewRequest("GET", "/", nil))
+ require.Equal(t, http.StatusOK, w.Code)
+ require.Equal(t, "{\"timestamp\":0}", w.Body.String())
+ })
+ t.Run("prepare shutdown pending", func(t *testing.T) {
+ w := httptest.NewRecorder()
+ ing.PreparePartitionDownscaleHandler(w, httptest.NewRequest("POST", "/", nil))
+ require.Equal(t, http.StatusConflict, w.Code)
+ })
+ t.Run("prepare shutdown and cancel", func(t *testing.T) {
+ w := httptest.NewRecorder()
+ test.Poll(t, 5*time.Second, ring.PartitionActive, func() interface{} {
+ return getState(t, cfg)
+ })
+ ing.PreparePartitionDownscaleHandler(w, httptest.NewRequest("POST", "/", nil))
+ require.Equal(t, http.StatusOK, w.Code)
+ test.Poll(t, 5*time.Second, ring.PartitionInactive, func() interface{} {
+ return getState(t, cfg)
+ })
+ w2 := httptest.NewRecorder()
+ ing.PreparePartitionDownscaleHandler(w2, httptest.NewRequest("DELETE", "/", nil))
+ require.Equal(t, http.StatusOK, w.Code)
+ test.Poll(t, 5*time.Second, ring.PartitionActive, func() interface{} {
+ return getState(t, cfg)
+ })
+ })
+ require.NoError(t, services.StopAndAwaitTerminated(context.Background(), ing))
+}
+
+func getState(t *testing.T, cfg Config) ring.PartitionState {
+ get, err := cfg.PartitionRingConfig.KVStore.Mock.Get(context.Background(), PartitionRingName+"-key")
+ require.NoError(t, err)
+
+ ringDesc := ring.GetOrCreatePartitionRingDesc(get)
+ return ringDesc.Partitions[0].State
+}
+
+// nolint
+func defaultIngesterTestConfig(t testing.TB) Config {
+ kvRing, closer := consul.NewInMemoryClient(ring.GetCodec(), log.NewNopLogger(), nil)
+ t.Cleanup(func() { require.NoError(t, closer.Close()) })
+
+ kvPartitionRing, closerPartitionRing := consul.NewInMemoryClient(ring.GetPartitionRingCodec(), log.NewNopLogger(), nil)
+ t.Cleanup(func() { require.NoError(t, closerPartitionRing.Close()) })
+
+ cfg := Config{}
+ flagext.DefaultValues(&cfg)
+
+ cfg.LifecyclerConfig.RingConfig.KVStore.Mock = kvRing
+ cfg.PartitionRingConfig.KVStore.Mock = kvPartitionRing
+ cfg.PartitionRingConfig.MinOwnersCount = 1
+ cfg.PartitionRingConfig.MinOwnersDuration = 0
+ cfg.LifecyclerConfig.RingConfig.ReplicationFactor = 1
+ cfg.LifecyclerConfig.NumTokens = 1
+ cfg.LifecyclerConfig.ListenPort = 0
+ cfg.LifecyclerConfig.Addr = "localhost"
+ cfg.LifecyclerConfig.ID = "localhost"
+ cfg.LifecyclerConfig.FinalSleep = 0
+ cfg.LifecyclerConfig.MinReadyDuration = 0
+
+ return cfg
+}
+
+func TestExtractIngesterPartitionID(t *testing.T) {
+ tests := []struct {
+ name string
+ ingesterID string
+ want int32
+ wantErr bool
+ }{
+ {
+ name: "Valid ingester ID",
+ ingesterID: "ingester-5",
+ want: 5,
+ wantErr: false,
+ },
+ {
+ name: "Local ingester ID",
+ ingesterID: "ingester-local",
+ want: 0,
+ wantErr: false,
+ },
+ {
+ name: "Invalid ingester ID format",
+ ingesterID: "invalid-format",
+ want: 0,
+ wantErr: true,
+ },
+ {
+ name: "Invalid sequence number",
+ ingesterID: "ingester-abc",
+ want: 0,
+ wantErr: true,
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ got, err := extractIngesterPartitionID(tt.ingesterID)
+ if (err != nil) != tt.wantErr {
+ t.Errorf("extractIngesterPartitionID() error = %v, wantErr %v", err, tt.wantErr)
+ return
+ }
+ if got != tt.want {
+ t.Errorf("extractIngesterPartitionID() = %v, want %v", got, tt.want)
+ }
+ })
+ }
+}
+
+// TestMetastore is a simple in-memory metastore for testing
+type TestMetastore struct {
+ blocks map[string][]*metastorepb.BlockMeta
+}
+
+func NewTestMetastore() *TestMetastore {
+ return &TestMetastore{blocks: make(map[string][]*metastorepb.BlockMeta)}
+}
+
+func (m *TestMetastore) ListBlocksForQuery(_ context.Context, req *metastorepb.ListBlocksForQueryRequest, _ ...grpc.CallOption) (*metastorepb.ListBlocksForQueryResponse, error) {
+ blocks := m.blocks[req.TenantId]
+ var result []*metastorepb.BlockMeta
+ for _, block := range blocks {
+ if block.MinTime <= req.EndTime && block.MaxTime >= req.StartTime {
+ result = append(result, block)
+ }
+ }
+ return &metastorepb.ListBlocksForQueryResponse{Blocks: result}, nil
+}
+
+func (m *TestMetastore) AddBlock(_ context.Context, in *metastorepb.AddBlockRequest, _ ...grpc.CallOption) (*metastorepb.AddBlockResponse, error) {
+ for _, stream := range in.Block.TenantStreams {
+ m.blocks[stream.TenantId] = append(m.blocks[stream.TenantId], in.Block)
+ }
+ return &metastorepb.AddBlockResponse{}, nil
+}
diff --git a/pkg/kafka/ingester/metrics.go b/pkg/kafka/ingester/metrics.go
new file mode 100644
index 0000000000000..e73ee08095c8e
--- /dev/null
+++ b/pkg/kafka/ingester/metrics.go
@@ -0,0 +1,20 @@
+package ingester
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+)
+
+type ingesterMetrics struct {
+ // Shutdown marker for ingester scale down
+ shutdownMarker prometheus.Gauge
+}
+
+func newIngesterMetrics(r prometheus.Registerer) *ingesterMetrics {
+ return &ingesterMetrics{
+ shutdownMarker: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_ingester_prepare_shutdown_requested",
+ Help: "1 if the ingester has been requested to prepare for shutdown via endpoint or marker file.",
+ }),
+ }
+}
diff --git a/pkg/kafka/ingester/partition_committer.go b/pkg/kafka/ingester/partition_committer.go
new file mode 100644
index 0000000000000..a76e363a64e4b
--- /dev/null
+++ b/pkg/kafka/ingester/partition_committer.go
@@ -0,0 +1,103 @@
+package ingester
+
+import (
+ "context"
+ "strconv"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/twmb/franz-go/pkg/kadm"
+
+ "github.com/grafana/loki/v3/pkg/kafka"
+)
+
+// partitionCommitter is responsible for committing offsets for a specific Kafka partition
+// to the Kafka broker. It also tracks metrics related to the commit process.
+type partitionCommitter struct {
+ commitRequestsTotal prometheus.Counter
+ commitRequestsLatency prometheus.Histogram
+ commitFailuresTotal prometheus.Counter
+ lastCommittedOffset prometheus.Gauge
+
+ logger log.Logger
+ admClient *kadm.Client
+
+ kafkaCfg kafka.Config
+ partitionID int32
+ consumerGroup string
+}
+
+// newPartitionCommitter creates and initializes a new partitionCommitter.
+// It sets up the necessary metrics and initializes the committer with the provided configuration.
+func newPartitionCommitter(kafkaCfg kafka.Config, admClient *kadm.Client, partitionID int32, consumerGroup string, logger log.Logger, reg prometheus.Registerer) *partitionCommitter {
+ c := &partitionCommitter{
+ logger: logger,
+ kafkaCfg: kafkaCfg,
+ partitionID: partitionID,
+ consumerGroup: consumerGroup,
+ admClient: admClient,
+ commitRequestsTotal: promauto.With(reg).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingest_storage_reader_offset_commit_requests_total",
+ Help: "Total number of requests issued to commit the last consumed offset (includes both successful and failed requests).",
+ ConstLabels: prometheus.Labels{"partition": strconv.Itoa(int(partitionID))},
+ }),
+ commitFailuresTotal: promauto.With(reg).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingest_storage_reader_offset_commit_failures_total",
+ Help: "Total number of failed requests to commit the last consumed offset.",
+ ConstLabels: prometheus.Labels{"partition": strconv.Itoa(int(partitionID))},
+ }),
+ commitRequestsLatency: promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
+ Name: "loki_ingest_storage_reader_offset_commit_request_duration_seconds",
+ Help: "The duration of requests to commit the last consumed offset.",
+ ConstLabels: prometheus.Labels{"partition": strconv.Itoa(int(partitionID))},
+ NativeHistogramBucketFactor: 1.1,
+ NativeHistogramMaxBucketNumber: 100,
+ NativeHistogramMinResetDuration: time.Hour,
+ Buckets: prometheus.DefBuckets,
+ }),
+ lastCommittedOffset: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_ingest_storage_reader_last_committed_offset",
+ Help: "The last consumed offset successfully committed by the partition reader. Set to -1 if not offset has been committed yet.",
+ ConstLabels: prometheus.Labels{"partition": strconv.Itoa(int(partitionID))},
+ }),
+ }
+
+ // Initialise the last committed offset metric to -1 to signal no offset has been committed yet (0 is a valid offset).
+ c.lastCommittedOffset.Set(-1)
+
+ return c
+}
+
+// commit attempts to commit the given offset to Kafka for the partition this committer is responsible for.
+// It updates relevant metrics and logs the result of the commit operation.
+func (r *partitionCommitter) Commit(ctx context.Context, offset int64) (returnErr error) {
+ startTime := time.Now()
+ r.commitRequestsTotal.Inc()
+
+ defer func() {
+ r.commitRequestsLatency.Observe(time.Since(startTime).Seconds())
+
+ if returnErr != nil {
+ level.Error(r.logger).Log("msg", "failed to commit last consumed offset to Kafka", "err", returnErr, "offset", offset)
+ r.commitFailuresTotal.Inc()
+ }
+ }()
+
+ // Commit the last consumed offset.
+ toCommit := kadm.Offsets{}
+ toCommit.AddOffset(r.kafkaCfg.Topic, r.partitionID, offset, -1)
+ committed, err := r.admClient.CommitOffsets(ctx, r.consumerGroup, toCommit)
+ if err != nil {
+ return err
+ } else if !committed.Ok() {
+ return committed.Error()
+ }
+
+ committedOffset, _ := committed.Lookup(r.kafkaCfg.Topic, r.partitionID)
+ level.Debug(r.logger).Log("msg", "last commit offset successfully committed to Kafka", "offset", committedOffset.At)
+ r.lastCommittedOffset.Set(float64(committedOffset.At))
+ return nil
+}
diff --git a/pkg/kafka/ingester/partition_committer_test.go b/pkg/kafka/ingester/partition_committer_test.go
new file mode 100644
index 0000000000000..8fb823e3f2ed5
--- /dev/null
+++ b/pkg/kafka/ingester/partition_committer_test.go
@@ -0,0 +1,77 @@
+package ingester
+
+import (
+ "context"
+ "testing"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+ "github.com/twmb/franz-go/pkg/kadm"
+ "github.com/twmb/franz-go/plugin/kprom"
+
+ "github.com/prometheus/client_golang/prometheus/testutil"
+
+ "github.com/grafana/loki/v3/pkg/kafka"
+ "github.com/grafana/loki/v3/pkg/kafka/testkafka"
+)
+
+func TestPartitionCommitter(t *testing.T) {
+ // Create a test Kafka cluster
+ numPartitions := int32(3)
+ topicName := "test-topic"
+ _, kafkaCfg := testkafka.CreateCluster(t, numPartitions, topicName)
+
+ client, err := kafka.NewReaderClient(kafkaCfg, kprom.NewMetrics("foo"), log.NewNopLogger())
+ require.NoError(t, err)
+
+ // Create a Kafka admin client
+ admClient := kadm.NewClient(client)
+ defer admClient.Close()
+
+ // Create a partition committer
+ logger := log.NewNopLogger()
+ reg := prometheus.NewRegistry()
+ partitionID := int32(1)
+ consumerGroup := "test-consumer-group"
+ committer := newPartitionCommitter(kafkaCfg, admClient, partitionID, consumerGroup, logger, reg)
+
+ // Test committing an offset
+ ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+ defer cancel()
+
+ testOffset := int64(100)
+ err = committer.Commit(ctx, testOffset)
+ require.NoError(t, err)
+
+ // Verify metrics
+ assert.Equal(t, float64(1), testutil.ToFloat64(committer.commitRequestsTotal))
+ assert.Equal(t, float64(0), testutil.ToFloat64(committer.commitFailuresTotal))
+ assert.Equal(t, float64(testOffset), testutil.ToFloat64(committer.lastCommittedOffset))
+
+ // Verify committed offset
+ offsets, err := admClient.FetchOffsets(context.Background(), consumerGroup)
+ require.NoError(t, err)
+ committedOffset, ok := offsets.Lookup(topicName, partitionID)
+ require.True(t, ok)
+ assert.Equal(t, testOffset, committedOffset.At)
+
+ // Test committing a new offset
+ newTestOffset := int64(200)
+ err = committer.Commit(ctx, newTestOffset)
+ require.NoError(t, err)
+
+ // Verify updated metrics
+ assert.Equal(t, float64(2), testutil.ToFloat64(committer.commitRequestsTotal))
+ assert.Equal(t, float64(0), testutil.ToFloat64(committer.commitFailuresTotal))
+ assert.Equal(t, float64(newTestOffset), testutil.ToFloat64(committer.lastCommittedOffset))
+
+ // Verify updated committed offset
+ offsets, err = admClient.FetchOffsets(context.Background(), consumerGroup)
+ require.NoError(t, err)
+ committedOffset, ok = offsets.Lookup(topicName, partitionID)
+ require.True(t, ok)
+ assert.Equal(t, newTestOffset, committedOffset.At)
+}
diff --git a/pkg/kafka/ingester/partition_reader.go b/pkg/kafka/ingester/partition_reader.go
new file mode 100644
index 0000000000000..5ed70412d9e0c
--- /dev/null
+++ b/pkg/kafka/ingester/partition_reader.go
@@ -0,0 +1,269 @@
+package ingester
+
+import (
+ "context"
+ "fmt"
+ "math"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/multierror"
+ "github.com/grafana/dskit/services"
+ "github.com/pkg/errors"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/twmb/franz-go/pkg/kadm"
+ "github.com/twmb/franz-go/pkg/kgo"
+ "github.com/twmb/franz-go/plugin/kprom"
+
+ "github.com/grafana/loki/v3/pkg/kafka"
+)
+
+// PartitionReader is responsible for reading data from a specific Kafka partition
+// and passing it to the consumer for processing. It is a core component of the
+// Loki ingester's Kafka-based ingestion pipeline.
+type PartitionReader struct {
+ services.Service
+
+ kafkaCfg kafka.Config
+ partitionID int32
+ consumerGroup string
+ consumerFactory ConsumerFactory
+ committer *partitionCommitter
+ lastProcessedOffset int64
+
+ client *kgo.Client
+ logger log.Logger
+ metrics readerMetrics
+ reg prometheus.Registerer
+}
+
+type record struct {
+ // Context holds the tracing (and potentially other) info, that the record was enriched with on fetch from Kafka.
+ ctx context.Context
+ tenantID string
+ content []byte
+ offset int64
+}
+
+type ConsumerFactory func(committer Committer) (Consumer, error)
+
+type Consumer interface {
+ Start(ctx context.Context, recordsChan <-chan []record) func()
+}
+
+// NewPartitionReader creates and initializes a new PartitionReader.
+// It sets up the basic service and initializes the reader with the provided configuration.
+func NewPartitionReader(
+ kafkaCfg kafka.Config,
+ partitionID int32,
+ instanceID string,
+ consumerFactory ConsumerFactory,
+ logger log.Logger,
+ reg prometheus.Registerer,
+) (*PartitionReader, error) {
+ r := &PartitionReader{
+ kafkaCfg: kafkaCfg,
+ partitionID: partitionID,
+ consumerGroup: kafkaCfg.GetConsumerGroup(instanceID, partitionID),
+ logger: logger,
+ metrics: newReaderMetrics(reg),
+ reg: reg,
+ lastProcessedOffset: -1,
+ consumerFactory: consumerFactory,
+ }
+ r.Service = services.NewBasicService(r.start, r.run, nil)
+ return r, nil
+}
+
+// start initializes the Kafka client and committer for the PartitionReader.
+// This method is called when the PartitionReader service starts.
+func (p *PartitionReader) start(_ context.Context) error {
+ var err error
+ p.client, err = kafka.NewReaderClient(p.kafkaCfg, p.metrics.kprom, p.logger,
+ kgo.ConsumePartitions(map[string]map[int32]kgo.Offset{
+ p.kafkaCfg.Topic: {p.partitionID: kgo.NewOffset().AtStart()},
+ }),
+ )
+ if err != nil {
+ return errors.Wrap(err, "creating kafka reader client")
+ }
+ p.committer = newPartitionCommitter(p.kafkaCfg, kadm.NewClient(p.client), p.partitionID, p.consumerGroup, p.logger, p.reg)
+
+ return nil
+}
+
+// run is the main loop of the PartitionReader. It continuously fetches and processes
+// data from Kafka, and send it to the consumer.
+func (p *PartitionReader) run(ctx context.Context) error {
+ level.Info(p.logger).Log("msg", "starting partition reader", "partition", p.partitionID, "consumer_group", p.consumerGroup)
+ ctx, cancel := context.WithCancel(ctx)
+ defer cancel()
+
+ consumer, err := p.consumerFactory(p.committer)
+ if err != nil {
+ return errors.Wrap(err, "creating consumer")
+ }
+
+ recordsChan := p.startFetchLoop(ctx)
+ wait := consumer.Start(ctx, recordsChan)
+
+ wait()
+ return nil
+}
+
+func (p *PartitionReader) startFetchLoop(ctx context.Context) <-chan []record {
+ records := make(chan []record)
+ go func() {
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ default:
+ records <- p.poll(ctx)
+ }
+ }
+ }()
+ return records
+}
+
+// logFetchErrors logs any errors encountered during the fetch operation.
+func (p *PartitionReader) logFetchErrors(fetches kgo.Fetches) {
+ mErr := multierror.New()
+ fetches.EachError(func(topic string, partition int32, err error) {
+ if errors.Is(err, context.Canceled) {
+ return
+ }
+
+ // kgo advises to "restart" the kafka client if the returned error is a kerr.Error.
+ // Recreating the client would cause duplicate metrics registration, so we don't do it for now.
+ mErr.Add(fmt.Errorf("topic %q, partition %d: %w", topic, partition, err))
+ })
+ if len(mErr) == 0 {
+ return
+ }
+ p.metrics.fetchesErrors.Add(float64(len(mErr)))
+ level.Error(p.logger).Log("msg", "encountered error while fetching", "err", mErr.Err())
+}
+
+// pollFetches retrieves the next batch of records from Kafka and measures the fetch duration.
+func (p *PartitionReader) poll(ctx context.Context) []record {
+ defer func(start time.Time) {
+ p.metrics.fetchWaitDuration.Observe(time.Since(start).Seconds())
+ }(time.Now())
+ fetches := p.client.PollFetches(ctx)
+ p.recordFetchesMetrics(fetches)
+ p.logFetchErrors(fetches)
+ fetches = filterOutErrFetches(fetches)
+ if fetches.NumRecords() == 0 {
+ return nil
+ }
+ records := make([]record, 0, fetches.NumRecords())
+ fetches.EachRecord(func(rec *kgo.Record) {
+ records = append(records, record{
+ // This context carries the tracing data for this individual record;
+ // kotel populates this data when it fetches the messages.
+ ctx: rec.Context,
+ tenantID: string(rec.Key),
+ content: rec.Value,
+ offset: rec.Offset,
+ })
+ })
+ p.lastProcessedOffset = records[len(records)-1].offset
+ return records
+}
+
+// recordFetchesMetrics updates various metrics related to the fetch operation.
+func (p *PartitionReader) recordFetchesMetrics(fetches kgo.Fetches) {
+ var (
+ now = time.Now()
+ numRecords = 0
+ )
+ fetches.EachRecord(func(record *kgo.Record) {
+ numRecords++
+ delay := now.Sub(record.Timestamp).Seconds()
+ if p.lastProcessedOffset == -1 {
+ p.metrics.receiveDelayWhenStarting.Observe(delay)
+ } else {
+ p.metrics.receiveDelayWhenRunning.Observe(delay)
+ }
+ })
+
+ p.metrics.fetchesTotal.Add(float64(len(fetches)))
+ p.metrics.recordsPerFetch.Observe(float64(numRecords))
+}
+
+// filterOutErrFetches removes any fetches that resulted in errors from the provided slice.
+func filterOutErrFetches(fetches kgo.Fetches) kgo.Fetches {
+ filtered := make(kgo.Fetches, 0, len(fetches))
+ for i, fetch := range fetches {
+ if !isErrFetch(fetch) {
+ filtered = append(filtered, fetches[i])
+ }
+ }
+
+ return filtered
+}
+
+// isErrFetch checks if a given fetch resulted in any errors.
+func isErrFetch(fetch kgo.Fetch) bool {
+ for _, t := range fetch.Topics {
+ for _, p := range t.Partitions {
+ if p.Err != nil {
+ return true
+ }
+ }
+ }
+ return false
+}
+
+type readerMetrics struct {
+ receiveDelayWhenStarting prometheus.Observer
+ receiveDelayWhenRunning prometheus.Observer
+ recordsPerFetch prometheus.Histogram
+ fetchesErrors prometheus.Counter
+ fetchesTotal prometheus.Counter
+ fetchWaitDuration prometheus.Histogram
+ // strongConsistencyInstrumentation *StrongReadConsistencyInstrumentation[struct{}]
+ // lastConsumedOffset prometheus.Gauge
+ consumeLatency prometheus.Histogram
+ kprom *kprom.Metrics
+}
+
+// newReaderMetrics initializes and returns a new set of metrics for the PartitionReader.
+func newReaderMetrics(reg prometheus.Registerer) readerMetrics {
+ receiveDelay := promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
+ Name: "loki_ingest_storage_reader_receive_delay_seconds",
+ Help: "Delay between producing a record and receiving it in the consumer.",
+ NativeHistogramZeroThreshold: math.Pow(2, -10), // Values below this will be considered to be 0. Equals to 0.0009765625, or about 1ms.
+ NativeHistogramBucketFactor: 1.2, // We use higher factor (scheme=2) to have wider spread of buckets.
+ NativeHistogramMaxBucketNumber: 100,
+ NativeHistogramMinResetDuration: 1 * time.Hour,
+ Buckets: prometheus.ExponentialBuckets(0.125, 2, 18), // Buckets between 125ms and 9h.
+ }, []string{"phase"})
+
+ return readerMetrics{
+ receiveDelayWhenStarting: receiveDelay.WithLabelValues("starting"),
+ receiveDelayWhenRunning: receiveDelay.WithLabelValues("running"),
+ kprom: kafka.NewReaderClientMetrics("partition-reader", reg),
+ fetchWaitDuration: promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
+ Name: "loki_ingest_storage_reader_records_batch_wait_duration_seconds",
+ Help: "How long a consumer spent waiting for a batch of records from the Kafka client. If fetching is faster than processing, then this will be close to 0.",
+ NativeHistogramBucketFactor: 1.1,
+ }),
+ recordsPerFetch: promauto.With(reg).NewHistogram(prometheus.HistogramOpts{
+ Name: "loki_ingest_storage_reader_records_per_fetch",
+ Help: "The number of records received by the consumer in a single fetch operation.",
+ Buckets: prometheus.ExponentialBuckets(1, 2, 15),
+ }),
+ fetchesErrors: promauto.With(reg).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingest_storage_reader_fetch_errors_total",
+ Help: "The number of fetch errors encountered by the consumer.",
+ }),
+ fetchesTotal: promauto.With(reg).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingest_storage_reader_fetches_total",
+ Help: "Total number of Kafka fetches received by the consumer.",
+ }),
+ }
+}
diff --git a/pkg/kafka/ingester/partition_reader_test.go b/pkg/kafka/ingester/partition_reader_test.go
new file mode 100644
index 0000000000000..de71dc53b7691
--- /dev/null
+++ b/pkg/kafka/ingester/partition_reader_test.go
@@ -0,0 +1,102 @@
+package ingester
+
+import (
+ "context"
+ "sync"
+ "testing"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/grafana/dskit/services"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/prometheus/model/labels"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/mock"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/kafka"
+ "github.com/grafana/loki/v3/pkg/kafka/testkafka"
+ "github.com/grafana/loki/v3/pkg/logproto"
+)
+
+type mockConsumer struct {
+ mock.Mock
+ recordsChan chan []record
+ wg sync.WaitGroup
+}
+
+func newMockConsumer() *mockConsumer {
+ return &mockConsumer{
+ recordsChan: make(chan []record, 100),
+ }
+}
+
+func (m *mockConsumer) Start(ctx context.Context, recordsChan <-chan []record) func() {
+ m.wg.Add(1)
+ go func() {
+ defer m.wg.Done()
+ for {
+ select {
+ case <-ctx.Done():
+ return
+ case records := <-recordsChan:
+ m.recordsChan <- records
+ }
+ }
+ }()
+ return m.wg.Wait
+}
+
+func (m *mockConsumer) Flush(ctx context.Context) error {
+ args := m.Called(ctx)
+ return args.Error(0)
+}
+
+func TestPartitionReader_BasicFunctionality(t *testing.T) {
+ _, kafkaCfg := testkafka.CreateCluster(t, 1, "test-topic")
+ consumer := newMockConsumer()
+
+ consumerFactory := func(_ Committer) (Consumer, error) {
+ return consumer, nil
+ }
+
+ partitionReader, err := NewPartitionReader(kafkaCfg, 0, "test-consumer-group", consumerFactory, log.NewNopLogger(), prometheus.NewRegistry())
+ require.NoError(t, err)
+ producer, err := kafka.NewWriterClient(kafkaCfg, 100, log.NewNopLogger(), prometheus.NewRegistry())
+ require.NoError(t, err)
+
+ err = services.StartAndAwaitRunning(context.Background(), partitionReader)
+ require.NoError(t, err)
+
+ stream := logproto.Stream{
+ Labels: labels.FromStrings("foo", "bar").String(),
+ Entries: []logproto.Entry{{Timestamp: time.Now(), Line: "test"}},
+ }
+
+ records, err := kafka.Encode(0, "test-tenant", stream, 10<<20)
+ require.NoError(t, err)
+ require.Len(t, records, 1)
+
+ producer.ProduceSync(context.Background(), records...)
+ producer.ProduceSync(context.Background(), records...)
+
+ // Wait for records to be processed
+ assert.Eventually(t, func() bool {
+ return len(consumer.recordsChan) == 2
+ }, 10*time.Second, 100*time.Millisecond)
+
+ // Verify the records
+ for i := 0; i < 2; i++ {
+ select {
+ case receivedRecords := <-consumer.recordsChan:
+ require.Len(t, receivedRecords, 1)
+ assert.Equal(t, "test-tenant", receivedRecords[0].tenantID)
+ assert.Equal(t, records[0].Value, receivedRecords[0].content)
+ case <-time.After(1 * time.Second):
+ t.Fatal("Timeout waiting for records")
+ }
+ }
+
+ err = services.StopAndAwaitTerminated(context.Background(), partitionReader)
+ require.NoError(t, err)
+}
diff --git a/pkg/kafka/ingester/shutdownmarker/shutdown_marker.go b/pkg/kafka/ingester/shutdownmarker/shutdown_marker.go
new file mode 100644
index 0000000000000..7d1a4ec2f353f
--- /dev/null
+++ b/pkg/kafka/ingester/shutdownmarker/shutdown_marker.go
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package shutdownmarker
+
+import (
+ "os"
+ "path"
+ "strings"
+ "time"
+
+ "github.com/grafana/dskit/multierror"
+
+ "github.com/grafana/loki/v3/pkg/util/atomicfs"
+)
+
+const shutdownMarkerFilename = "shutdown-requested.txt"
+
+// Create writes a marker file on the given path to indicate that a component is
+// going to be scaled down in the future. The presence of this file means that a component
+// should perform some operations specified by the component itself before being shutdown.
+func Create(p string) error {
+ return atomicfs.CreateFile(p, strings.NewReader(time.Now().UTC().Format(time.RFC3339)))
+}
+
+// Remove removes the shutdown marker file on the given path if it exists.
+func Remove(p string) error {
+ err := os.Remove(p)
+ if err != nil && !os.IsNotExist(err) {
+ return err
+ }
+
+ dir, err := os.OpenFile(path.Dir(p), os.O_RDONLY, 0o777)
+ if err != nil {
+ return err
+ }
+
+ merr := multierror.New()
+ merr.Add(dir.Sync())
+ merr.Add(dir.Close())
+ return merr.Err()
+}
+
+// Exists returns true if the shutdown marker file exists on the given path, false otherwise
+func Exists(p string) (bool, error) {
+ s, err := os.Stat(p)
+ if err != nil && os.IsNotExist(err) {
+ return false, nil
+ }
+
+ if err != nil {
+ return false, err
+ }
+
+ return s.Mode().IsRegular(), nil
+}
+
+// GetPath returns the absolute path of the shutdown marker file
+func GetPath(dirPath string) string {
+ return path.Join(dirPath, shutdownMarkerFilename)
+}
diff --git a/pkg/kafka/ingester/shutdownmarker/shutdown_marker_test.go b/pkg/kafka/ingester/shutdownmarker/shutdown_marker_test.go
new file mode 100644
index 0000000000000..c8e0b851be4e1
--- /dev/null
+++ b/pkg/kafka/ingester/shutdownmarker/shutdown_marker_test.go
@@ -0,0 +1,49 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package shutdownmarker
+
+import (
+ "path/filepath"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+func TestShutdownMarker_GetPath(t *testing.T) {
+ dir := "/a/b/c"
+ expectedPath := filepath.Join(dir, shutdownMarkerFilename)
+ require.Equal(t, expectedPath, GetPath(dir))
+}
+
+func TestShutdownMarker_Create(t *testing.T) {
+ dir := t.TempDir()
+ shutdownMarkerPath := GetPath(dir)
+ exists, err := Exists(shutdownMarkerPath)
+ require.NoError(t, err)
+ require.False(t, exists)
+
+ err = Create(shutdownMarkerPath)
+ require.NoError(t, err)
+
+ exists, err = Exists(shutdownMarkerPath)
+ require.NoError(t, err)
+ require.True(t, exists)
+}
+
+func TestShutdownMarker_Remove(t *testing.T) {
+ dir := t.TempDir()
+ shutdownMarkerPath := GetPath(dir)
+ exists, err := Exists(shutdownMarkerPath)
+ require.NoError(t, err)
+ require.False(t, exists)
+
+ require.Nil(t, Create(shutdownMarkerPath))
+ exists, err = Exists(shutdownMarkerPath)
+ require.NoError(t, err)
+ require.True(t, exists)
+
+ require.Nil(t, Remove(shutdownMarkerPath))
+ exists, err = Exists(shutdownMarkerPath)
+ require.NoError(t, err)
+ require.False(t, exists)
+}
diff --git a/pkg/kafka/logger.go b/pkg/kafka/logger.go
new file mode 100644
index 0000000000000..e055094a4163b
--- /dev/null
+++ b/pkg/kafka/logger.go
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package kafka
+
+import (
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/twmb/franz-go/pkg/kgo"
+)
+
+type logger struct {
+ logger log.Logger
+}
+
+func newLogger(l log.Logger) *logger {
+ return &logger{
+ logger: log.With(l, "component", "kafka_client"),
+ }
+}
+
+func (l *logger) Level() kgo.LogLevel {
+ // The Kafka client calls Level() to check whether debug level is enabled or not.
+ // To keep it simple, we always return Info, so the Kafka client will never try
+ // to log expensive debug messages.
+ return kgo.LogLevelInfo
+}
+
+func (l *logger) Log(lev kgo.LogLevel, msg string, keyvals ...any) {
+ keyvals = append([]any{"msg", msg}, keyvals...)
+ switch lev {
+ case kgo.LogLevelDebug:
+ level.Debug(l.logger).Log(keyvals...)
+ case kgo.LogLevelInfo:
+ level.Info(l.logger).Log(keyvals...)
+ case kgo.LogLevelWarn:
+ level.Warn(l.logger).Log(keyvals...)
+ case kgo.LogLevelError:
+ level.Error(l.logger).Log(keyvals...)
+ }
+}
diff --git a/pkg/kafka/partitionring/partition_ring.go b/pkg/kafka/partitionring/partition_ring.go
new file mode 100644
index 0000000000000..dedfb8ac33bb3
--- /dev/null
+++ b/pkg/kafka/partitionring/partition_ring.go
@@ -0,0 +1,47 @@
+package partitionring
+
+import (
+ "flag"
+ "time"
+
+ "github.com/grafana/dskit/kv"
+ "github.com/grafana/dskit/ring"
+)
+
+type Config struct {
+ KVStore kv.Config `yaml:"kvstore" doc:"description=The key-value store used to share the hash ring across multiple instances. This option needs be set on ingesters, distributors, queriers, and rulers when running in microservices mode."`
+
+ // MinOwnersCount maps to ring.PartitionInstanceLifecyclerConfig's WaitOwnersCountOnPending.
+ MinOwnersCount int `yaml:"min_partition_owners_count"`
+
+ // MinOwnersDuration maps to ring.PartitionInstanceLifecyclerConfig's WaitOwnersDurationOnPending.
+ MinOwnersDuration time.Duration `yaml:"min_partition_owners_duration"`
+
+ // DeleteInactivePartitionAfter maps to ring.PartitionInstanceLifecyclerConfig's DeleteInactivePartitionAfterDuration.
+ DeleteInactivePartitionAfter time.Duration `yaml:"delete_inactive_partition_after"`
+
+ // lifecyclerPollingInterval is the lifecycler polling interval. This setting is used to lower it in tests.
+ lifecyclerPollingInterval time.Duration
+}
+
+// RegisterFlags adds the flags required to config this to the given FlagSet
+func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
+ // Ring flags
+ cfg.KVStore.Store = "memberlist" // Override default value.
+ cfg.KVStore.RegisterFlagsWithPrefix("ingester.partition-ring.", "collectors/", f)
+
+ f.IntVar(&cfg.MinOwnersCount, "ingester.partition-ring.min-partition-owners-count", 1, "Minimum number of owners to wait before a PENDING partition gets switched to ACTIVE.")
+ f.DurationVar(&cfg.MinOwnersDuration, "ingester.partition-ring.min-partition-owners-duration", 10*time.Second, "How long the minimum number of owners are enforced before a PENDING partition gets switched to ACTIVE.")
+ f.DurationVar(&cfg.DeleteInactivePartitionAfter, "ingester.partition-ring.delete-inactive-partition-after", 13*time.Hour, "How long to wait before an INACTIVE partition is eligible for deletion. The partition is deleted only if it has been in INACTIVE state for at least the configured duration and it has no owners registered. A value of 0 disables partitions deletion.")
+}
+
+func (cfg *Config) ToLifecyclerConfig(partitionID int32, instanceID string) ring.PartitionInstanceLifecyclerConfig {
+ return ring.PartitionInstanceLifecyclerConfig{
+ PartitionID: partitionID,
+ InstanceID: instanceID,
+ WaitOwnersCountOnPending: cfg.MinOwnersCount,
+ WaitOwnersDurationOnPending: cfg.MinOwnersDuration,
+ DeleteInactivePartitionAfterDuration: cfg.DeleteInactivePartitionAfter,
+ PollingInterval: cfg.lifecyclerPollingInterval,
+ }
+}
diff --git a/pkg/kafka/reader_client.go b/pkg/kafka/reader_client.go
new file mode 100644
index 0000000000000..1b8c6b3bc1dc5
--- /dev/null
+++ b/pkg/kafka/reader_client.go
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package kafka
+
+import (
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/pkg/errors"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/twmb/franz-go/pkg/kgo"
+ "github.com/twmb/franz-go/plugin/kprom"
+)
+
+// NewReaderClient returns the kgo.Client that should be used by the Reader.
+func NewReaderClient(cfg Config, metrics *kprom.Metrics, logger log.Logger, opts ...kgo.Opt) (*kgo.Client, error) {
+ const fetchMaxBytes = 100_000_000
+
+ opts = append(opts, commonKafkaClientOptions(cfg, metrics, logger)...)
+ opts = append(opts,
+ kgo.FetchMinBytes(1),
+ kgo.FetchMaxBytes(fetchMaxBytes),
+ kgo.FetchMaxWait(5*time.Second),
+ kgo.FetchMaxPartitionBytes(50_000_000),
+
+ // BrokerMaxReadBytes sets the maximum response size that can be read from
+ // Kafka. This is a safety measure to avoid OOMing on invalid responses.
+ // franz-go recommendation is to set it 2x FetchMaxBytes.
+ kgo.BrokerMaxReadBytes(2*fetchMaxBytes),
+ )
+ client, err := kgo.NewClient(opts...)
+ if err != nil {
+ return nil, errors.Wrap(err, "creating kafka client")
+ }
+
+ return client, nil
+}
+
+func NewReaderClientMetrics(component string, reg prometheus.Registerer) *kprom.Metrics {
+ return kprom.NewMetrics("loki_ingest_storage_reader",
+ kprom.Registerer(prometheus.WrapRegistererWith(prometheus.Labels{"component": component}, reg)),
+ // Do not export the client ID, because we use it to specify options to the backend.
+ kprom.FetchAndProduceDetail(kprom.Batches, kprom.Records, kprom.CompressedBytes, kprom.UncompressedBytes))
+}
diff --git a/pkg/kafka/tee/tee.go b/pkg/kafka/tee/tee.go
new file mode 100644
index 0000000000000..2228883efb32f
--- /dev/null
+++ b/pkg/kafka/tee/tee.go
@@ -0,0 +1,174 @@
+package tee
+
+import (
+ "context"
+ "fmt"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/user"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/twmb/franz-go/pkg/kgo"
+
+ "github.com/grafana/loki/v3/pkg/distributor"
+ "github.com/grafana/loki/v3/pkg/kafka"
+)
+
+const writeTimeout = time.Minute
+
+// Tee represents a component that duplicates log streams to Kafka.
+type Tee struct {
+ logger log.Logger
+ producer *kafka.Producer
+ partitionRing ring.PartitionRingReader
+ cfg kafka.Config
+
+ ingesterAppends *prometheus.CounterVec
+ writeLatency prometheus.Histogram
+ writeBytesTotal prometheus.Counter
+ recordsPerRequest prometheus.Histogram
+}
+
+// NewTee creates and initializes a new Tee instance.
+//
+// Parameters:
+// - cfg: Kafka configuration
+// - metricsNamespace: Namespace for Prometheus metrics
+// - registerer: Prometheus registerer for metrics
+// - logger: Logger instance
+// - partitionRing: Ring for managing partitions
+//
+// Returns:
+// - A new Tee instance and any error encountered during initialization
+func NewTee(
+ cfg kafka.Config,
+ metricsNamespace string,
+ registerer prometheus.Registerer,
+ logger log.Logger,
+ partitionRing ring.PartitionRingReader,
+) (*Tee, error) {
+ registerer = prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer)
+
+ kafkaClient, err := kafka.NewWriterClient(cfg, 20, logger, registerer)
+ if err != nil {
+ return nil, fmt.Errorf("failed to start kafka client: %w", err)
+ }
+ producer := kafka.NewProducer(kafkaClient, cfg.ProducerMaxBufferedBytes,
+ prometheus.WrapRegistererWithPrefix("_kafka_ingester_", registerer))
+
+ t := &Tee{
+ logger: log.With(logger, "component", "kafka-tee"),
+ ingesterAppends: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
+ Name: "kafka_ingester_appends_total",
+ Help: "The total number of appends sent to kafka ingest path.",
+ }, []string{"partition", "status"}),
+ producer: producer,
+ partitionRing: partitionRing,
+ cfg: cfg,
+ // Metrics.
+ writeLatency: promauto.With(registerer).NewHistogram(prometheus.HistogramOpts{
+ Name: "kafka_ingester_tee_latency_seconds",
+ Help: "Latency to write an incoming request to the ingest storage.",
+ NativeHistogramBucketFactor: 1.1,
+ NativeHistogramMinResetDuration: 1 * time.Hour,
+ NativeHistogramMaxBucketNumber: 100,
+ Buckets: prometheus.DefBuckets,
+ }),
+ writeBytesTotal: promauto.With(registerer).NewCounter(prometheus.CounterOpts{
+ Name: "kafka_ingester_tee_sent_bytes_total",
+ Help: "Total number of bytes sent to the ingest storage.",
+ }),
+ recordsPerRequest: promauto.With(registerer).NewHistogram(prometheus.HistogramOpts{
+ Name: "kafka_ingester_tee_records_per_write_request",
+ Help: "The number of records a single per-partition write request has been split into.",
+ Buckets: prometheus.ExponentialBuckets(1, 2, 8),
+ }),
+ }
+
+ return t, nil
+}
+
+// Duplicate implements the distributor.Tee interface, which is used to duplicate
+// distributor requests to pattern ingesters. It asynchronously sends each stream
+// to Kafka.
+//
+// Parameters:
+// - tenant: The tenant identifier
+// - streams: A slice of KeyedStream to be duplicated
+func (t *Tee) Duplicate(tenant string, streams []distributor.KeyedStream) {
+ for idx := range streams {
+ go func(stream distributor.KeyedStream) {
+ if err := t.sendStream(tenant, stream); err != nil {
+ level.Error(t.logger).Log("msg", "failed to send stream to kafka", "err", err)
+ }
+ }(streams[idx])
+ }
+}
+
+func (t *Tee) Close() {
+ t.producer.Close()
+}
+
+// sendStream sends a single stream to Kafka.
+//
+// Parameters:
+// - tenant: The tenant identifier
+// - stream: The KeyedStream to be sent
+//
+// Returns:
+// - An error if the stream couldn't be sent successfully
+func (t *Tee) sendStream(tenant string, stream distributor.KeyedStream) error {
+ if len(stream.Stream.Entries) == 0 {
+ return nil
+ }
+ partitionID, err := t.partitionRing.PartitionRing().ActivePartitionForKey(stream.HashKey)
+ if err != nil {
+ t.ingesterAppends.WithLabelValues("partition_unknown", "fail").Inc()
+ return fmt.Errorf("failed to find active partition for stream: %w", err)
+ }
+
+ startTime := time.Now()
+
+ records, err := kafka.Encode(partitionID, tenant, stream.Stream, t.cfg.ProducerMaxRecordSizeBytes)
+ if err != nil {
+ t.ingesterAppends.WithLabelValues(fmt.Sprintf("partition_%d", partitionID), "fail").Inc()
+ return fmt.Errorf("failed to marshal write request to records: %w", err)
+ }
+
+ t.recordsPerRequest.Observe(float64(len(records)))
+
+ ctx, cancel := context.WithTimeout(user.InjectOrgID(context.Background(), tenant), writeTimeout)
+ defer cancel()
+ produceResults := t.producer.ProduceSync(ctx, records)
+
+ if count, sizeBytes := successfulProduceRecordsStats(produceResults); count > 0 {
+ t.writeLatency.Observe(time.Since(startTime).Seconds())
+ t.writeBytesTotal.Add(float64(sizeBytes))
+ }
+
+ var finalErr error
+ for _, result := range produceResults {
+ if result.Err != nil {
+ t.ingesterAppends.WithLabelValues(fmt.Sprintf("partition_%d", partitionID), "fail").Inc()
+ finalErr = err
+ } else {
+ t.ingesterAppends.WithLabelValues(fmt.Sprintf("partition_%d", partitionID), "success").Inc()
+ }
+ }
+
+ return finalErr
+}
+
+func successfulProduceRecordsStats(results kgo.ProduceResults) (count, sizeBytes int) {
+ for _, res := range results {
+ if res.Err == nil && res.Record != nil {
+ count++
+ sizeBytes += len(res.Record.Value)
+ }
+ }
+
+ return
+}
diff --git a/pkg/kafka/tee/tee_test.go b/pkg/kafka/tee/tee_test.go
new file mode 100644
index 0000000000000..2431f42033fc7
--- /dev/null
+++ b/pkg/kafka/tee/tee_test.go
@@ -0,0 +1,50 @@
+package tee
+
+import (
+ "os"
+ "testing"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/grafana/dskit/ring"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/distributor"
+ "github.com/grafana/loki/v3/pkg/kafka/testkafka"
+
+ "github.com/grafana/loki/pkg/push"
+)
+
+func TestPushKafkaRecords(t *testing.T) {
+ _, cfg := testkafka.CreateCluster(t, 1, "topic")
+ tee, err := NewTee(cfg, "test", prometheus.NewRegistry(), log.NewLogfmtLogger(os.Stdout), newTestPartitionRing())
+ require.NoError(t, err)
+
+ err = tee.sendStream("test", distributor.KeyedStream{
+ HashKey: 1,
+ Stream: push.Stream{
+ Labels: `{foo="bar"}`,
+ Entries: []push.Entry{
+ {Timestamp: time.Now(), Line: "test"},
+ },
+ },
+ })
+ require.NoError(t, err)
+}
+
+type testPartitionRing struct {
+ partitionRing *ring.PartitionRing
+}
+
+func (t *testPartitionRing) PartitionRing() *ring.PartitionRing {
+ return t.partitionRing
+}
+
+func newTestPartitionRing() ring.PartitionRingReader {
+ desc := ring.NewPartitionRingDesc()
+ desc.AddPartition(0, ring.PartitionActive, time.Now())
+ return &testPartitionRing{
+ partitionRing: ring.NewPartitionRing(*desc),
+ }
+}
diff --git a/pkg/kafka/testkafka/cluster.go b/pkg/kafka/testkafka/cluster.go
new file mode 100644
index 0000000000000..fc00e7272e7aa
--- /dev/null
+++ b/pkg/kafka/testkafka/cluster.go
@@ -0,0 +1,152 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package testkafka
+
+import (
+ "testing"
+ "time"
+
+ "github.com/grafana/dskit/flagext"
+ "github.com/stretchr/testify/assert"
+ "github.com/stretchr/testify/require"
+ "github.com/twmb/franz-go/pkg/kfake"
+ "github.com/twmb/franz-go/pkg/kmsg"
+
+ "github.com/grafana/loki/v3/pkg/kafka"
+)
+
+// CreateCluster returns a fake Kafka cluster for unit testing.
+func CreateCluster(t testing.TB, numPartitions int32, topicName string) (*kfake.Cluster, kafka.Config) {
+ cluster, addr := CreateClusterWithoutCustomConsumerGroupsSupport(t, numPartitions, topicName)
+ addSupportForConsumerGroups(t, cluster, topicName, numPartitions)
+
+ return cluster, createTestKafkaConfig(addr, topicName)
+}
+
+func createTestKafkaConfig(clusterAddr, topicName string) kafka.Config {
+ cfg := kafka.Config{}
+ flagext.DefaultValues(&cfg)
+
+ cfg.Address = clusterAddr
+ cfg.Topic = topicName
+ cfg.WriteTimeout = 2 * time.Second
+
+ return cfg
+}
+
+func CreateClusterWithoutCustomConsumerGroupsSupport(t testing.TB, numPartitions int32, topicName string) (*kfake.Cluster, string) {
+ cluster, err := kfake.NewCluster(kfake.NumBrokers(1), kfake.SeedTopics(numPartitions, topicName))
+ require.NoError(t, err)
+ t.Cleanup(cluster.Close)
+
+ addrs := cluster.ListenAddrs()
+ require.Len(t, addrs, 1)
+
+ return cluster, addrs[0]
+}
+
+// addSupportForConsumerGroups adds very bare-bones support for one consumer group.
+// It expects that only one partition is consumed at a time.
+func addSupportForConsumerGroups(t testing.TB, cluster *kfake.Cluster, topicName string, numPartitions int32) {
+ committedOffsets := map[string][]int64{}
+
+ ensureConsumerGroupExists := func(consumerGroup string) {
+ if _, ok := committedOffsets[consumerGroup]; ok {
+ return
+ }
+ committedOffsets[consumerGroup] = make([]int64, numPartitions+1)
+
+ // Initialise the partition offsets with the special value -1 which means "no offset committed".
+ for i := 0; i < len(committedOffsets[consumerGroup]); i++ {
+ committedOffsets[consumerGroup][i] = -1
+ }
+ }
+
+ cluster.ControlKey(kmsg.OffsetCommit.Int16(), func(request kmsg.Request) (kmsg.Response, error, bool) {
+ cluster.KeepControl()
+ commitR := request.(*kmsg.OffsetCommitRequest)
+ consumerGroup := commitR.Group
+ ensureConsumerGroupExists(consumerGroup)
+ assert.Len(t, commitR.Topics, 1, "test only has support for one topic per request")
+ topic := commitR.Topics[0]
+ assert.Equal(t, topicName, topic.Topic)
+ assert.Len(t, topic.Partitions, 1, "test only has support for one partition per request")
+
+ partitionID := topic.Partitions[0].Partition
+ committedOffsets[consumerGroup][partitionID] = topic.Partitions[0].Offset
+
+ resp := request.ResponseKind().(*kmsg.OffsetCommitResponse)
+ resp.Default()
+ resp.Topics = []kmsg.OffsetCommitResponseTopic{
+ {
+ Topic: topicName,
+ Partitions: []kmsg.OffsetCommitResponseTopicPartition{{Partition: partitionID}},
+ },
+ }
+
+ return resp, nil, true
+ })
+
+ cluster.ControlKey(kmsg.OffsetFetch.Int16(), func(kreq kmsg.Request) (kmsg.Response, error, bool) {
+ cluster.KeepControl()
+ req := kreq.(*kmsg.OffsetFetchRequest)
+ assert.Len(t, req.Groups, 1, "test only has support for one consumer group per request")
+ consumerGroup := req.Groups[0].Group
+ ensureConsumerGroupExists(consumerGroup)
+
+ const allPartitions = -1
+ var partitionID int32
+
+ if len(req.Groups[0].Topics) == 0 {
+ // An empty request means fetch all topic-partitions for this group.
+ partitionID = allPartitions
+ } else {
+ partitionID = req.Groups[0].Topics[0].Partitions[0]
+ assert.Len(t, req.Groups[0], 1, "test only has support for one partition per request")
+ assert.Len(t, req.Groups[0].Topics[0].Partitions, 1, "test only has support for one partition per request")
+ }
+
+ // Prepare the list of partitions for which the offset has been committed.
+ // This mimics the real Kafka behaviour.
+ var partitionsResp []kmsg.OffsetFetchResponseGroupTopicPartition
+ if partitionID == allPartitions {
+ for i := int32(1); i < numPartitions+1; i++ {
+ if committedOffsets[consumerGroup][i] >= 0 {
+ partitionsResp = append(partitionsResp, kmsg.OffsetFetchResponseGroupTopicPartition{
+ Partition: i,
+ Offset: committedOffsets[consumerGroup][i],
+ })
+ }
+ }
+ } else {
+ if committedOffsets[consumerGroup][partitionID] >= 0 {
+ partitionsResp = append(partitionsResp, kmsg.OffsetFetchResponseGroupTopicPartition{
+ Partition: partitionID,
+ Offset: committedOffsets[consumerGroup][partitionID],
+ })
+ }
+ }
+
+ // Prepare the list topics for which there are some committed offsets.
+ // This mimics the real Kafka behaviour.
+ var topicsResp []kmsg.OffsetFetchResponseGroupTopic
+ if len(partitionsResp) > 0 {
+ topicsResp = []kmsg.OffsetFetchResponseGroupTopic{
+ {
+ Topic: topicName,
+ Partitions: partitionsResp,
+ },
+ }
+ }
+
+ resp := kreq.ResponseKind().(*kmsg.OffsetFetchResponse)
+ resp.Default()
+ resp.Groups = []kmsg.OffsetFetchResponseGroup{
+ {
+ Group: consumerGroup,
+ Topics: topicsResp,
+ },
+ }
+ return resp, nil, true
+ })
+}
diff --git a/pkg/kafka/testkafka/message.go b/pkg/kafka/testkafka/message.go
new file mode 100644
index 0000000000000..cb7fd50a3f50c
--- /dev/null
+++ b/pkg/kafka/testkafka/message.go
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package testkafka
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// CreateProduceResponseError returns a kmsg.ProduceResponse containing err for the input topic and partition.
+func CreateProduceResponseError(version int16, topic string, partition int32, err *kerr.Error) *kmsg.ProduceResponse {
+ return &kmsg.ProduceResponse{
+ Version: version,
+ Topics: []kmsg.ProduceResponseTopic{
+ {
+ Topic: topic,
+ Partitions: []kmsg.ProduceResponseTopicPartition{
+ {
+ Partition: partition,
+ ErrorCode: err.Code,
+ },
+ },
+ },
+ },
+ }
+}
diff --git a/pkg/kafka/writer_client.go b/pkg/kafka/writer_client.go
new file mode 100644
index 0000000000000..8f65679c01c17
--- /dev/null
+++ b/pkg/kafka/writer_client.go
@@ -0,0 +1,322 @@
+package kafka
+
+import (
+ "context"
+ "errors"
+ "math"
+ "sync"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kgo"
+ "github.com/twmb/franz-go/pkg/kmsg"
+ "github.com/twmb/franz-go/plugin/kotel"
+ "github.com/twmb/franz-go/plugin/kprom"
+ "go.opentelemetry.io/otel/propagation"
+ "go.opentelemetry.io/otel/trace"
+ "go.uber.org/atomic"
+)
+
+// NewWriterClient returns the kgo.Client that should be used by the Writer.
+//
+// The input prometheus.Registerer must be wrapped with a prefix (the names of metrics
+// registered don't have a prefix).
+func NewWriterClient(kafkaCfg Config, maxInflightProduceRequests int, logger log.Logger, reg prometheus.Registerer) (*kgo.Client, error) {
+ // Do not export the client ID, because we use it to specify options to the backend.
+ metrics := kprom.NewMetrics(
+ "", // No prefix. We expect the input prometheus.Registered to be wrapped with a prefix.
+ kprom.Registerer(reg),
+ kprom.FetchAndProduceDetail(kprom.Batches, kprom.Records, kprom.CompressedBytes, kprom.UncompressedBytes))
+
+ opts := append(
+ commonKafkaClientOptions(kafkaCfg, metrics, logger),
+ kgo.RequiredAcks(kgo.AllISRAcks()),
+ kgo.DefaultProduceTopic(kafkaCfg.Topic),
+
+ // We set the partition field in each record.
+ kgo.RecordPartitioner(kgo.ManualPartitioner()),
+
+ // Set the upper bounds the size of a record batch.
+ kgo.ProducerBatchMaxBytes(producerBatchMaxBytes),
+
+ // By default, the Kafka client allows 1 Produce in-flight request per broker. Disabling write idempotency
+ // (which we don't need), we can increase the max number of in-flight Produce requests per broker. A higher
+ // number of in-flight requests, in addition to short buffering ("linger") in client side before firing the
+ // next Produce request allows us to reduce the end-to-end latency.
+ //
+ // The result of the multiplication of producer linger and max in-flight requests should match the maximum
+ // Produce latency expected by the Kafka backend in a steady state. For example, 50ms * 20 requests = 1s,
+ // which means the Kafka client will keep issuing a Produce request every 50ms as far as the Kafka backend
+ // doesn't take longer than 1s to process them (if it takes longer, the client will buffer data and stop
+ // issuing new Produce requests until some previous ones complete).
+ kgo.DisableIdempotentWrite(),
+ kgo.ProducerLinger(50*time.Millisecond),
+ kgo.MaxProduceRequestsInflightPerBroker(maxInflightProduceRequests),
+
+ // Unlimited number of Produce retries but a deadline on the max time a record can take to be delivered.
+ // With the default config it would retry infinitely.
+ //
+ // Details of the involved timeouts:
+ // - RecordDeliveryTimeout: how long a Kafka client Produce() call can take for a given record. The overhead
+ // timeout is NOT applied.
+ // - ProduceRequestTimeout: how long to wait for the response to the Produce request (the Kafka protocol message)
+ // after being sent on the network. The actual timeout is increased by the configured overhead.
+ //
+ // When a Produce request to Kafka fail, the client will retry up until the RecordDeliveryTimeout is reached.
+ // Once the timeout is reached, the Produce request will fail and all other buffered requests in the client
+ // (for the same partition) will fail too. See kgo.RecordDeliveryTimeout() documentation for more info.
+ kgo.RecordRetries(math.MaxInt),
+ kgo.RecordDeliveryTimeout(kafkaCfg.WriteTimeout),
+ kgo.ProduceRequestTimeout(kafkaCfg.WriteTimeout),
+ kgo.RequestTimeoutOverhead(writerRequestTimeoutOverhead),
+
+ // Unlimited number of buffered records because we limit on bytes in Writer. The reason why we don't use
+ // kgo.MaxBufferedBytes() is because it suffers a deadlock issue:
+ // https://github.com/twmb/franz-go/issues/777
+ kgo.MaxBufferedRecords(math.MaxInt), // Use a high value to set it as unlimited, because the client doesn't support "0 as unlimited".
+ kgo.MaxBufferedBytes(0),
+ )
+
+ return kgo.NewClient(opts...)
+}
+
+type onlySampledTraces struct {
+ propagation.TextMapPropagator
+}
+
+func (o onlySampledTraces) Inject(ctx context.Context, carrier propagation.TextMapCarrier) {
+ sc := trace.SpanContextFromContext(ctx)
+ if !sc.IsSampled() {
+ return
+ }
+ o.TextMapPropagator.Inject(ctx, carrier)
+}
+
+func commonKafkaClientOptions(cfg Config, metrics *kprom.Metrics, logger log.Logger) []kgo.Opt {
+ opts := []kgo.Opt{
+ kgo.ClientID(cfg.ClientID),
+ kgo.SeedBrokers(cfg.Address),
+ kgo.DialTimeout(cfg.DialTimeout),
+
+ // A cluster metadata update is a request sent to a broker and getting back the map of partitions and
+ // the leader broker for each partition. The cluster metadata can be updated (a) periodically or
+ // (b) when some events occur (e.g. backoff due to errors).
+ //
+ // MetadataMinAge() sets the minimum time between two cluster metadata updates due to events.
+ // MetadataMaxAge() sets how frequently the periodic update should occur.
+ //
+ // It's important to note that the periodic update is also used to discover new brokers (e.g. during a
+ // rolling update or after a scale up). For this reason, it's important to run the update frequently.
+ //
+ // The other two side effects of frequently updating the cluster metadata:
+ // 1. The "metadata" request may be expensive to run on the Kafka backend.
+ // 2. If the backend returns each time a different authoritative owner for a partition, then each time
+ // the cluster metadata is updated the Kafka client will create a new connection for each partition,
+ // leading to a high connections churn rate.
+ //
+ // We currently set min and max age to the same value to have constant load on the Kafka backend: regardless
+ // there are errors or not, the metadata requests frequency doesn't change.
+ kgo.MetadataMinAge(10 * time.Second),
+ kgo.MetadataMaxAge(10 * time.Second),
+
+ kgo.WithLogger(newLogger(logger)),
+
+ kgo.RetryTimeoutFn(func(key int16) time.Duration {
+ switch key {
+ case ((*kmsg.ListOffsetsRequest)(nil)).Key():
+ return cfg.LastProducedOffsetRetryTimeout
+ }
+
+ // 30s is the default timeout in the Kafka client.
+ return 30 * time.Second
+ }),
+ }
+
+ if cfg.AutoCreateTopicEnabled {
+ opts = append(opts, kgo.AllowAutoTopicCreation())
+ }
+
+ tracer := kotel.NewTracer(
+ kotel.TracerPropagator(propagation.NewCompositeTextMapPropagator(onlySampledTraces{propagation.TraceContext{}})),
+ )
+ opts = append(opts, kgo.WithHooks(kotel.NewKotel(kotel.WithTracer(tracer)).Hooks()...))
+
+ if metrics != nil {
+ opts = append(opts, kgo.WithHooks(metrics))
+ }
+
+ return opts
+}
+
+// Producer is a kgo.Client wrapper exposing some higher level features and metrics useful for producers.
+type Producer struct {
+ *kgo.Client
+
+ closeOnce *sync.Once
+ closed chan struct{}
+
+ // Keep track of Kafka records size (bytes) currently in-flight in the Kafka client.
+ // This counter is used to implement a limit on the max buffered bytes.
+ bufferedBytes *atomic.Int64
+
+ // The max buffered bytes allowed. Once this limit is reached, produce requests fail.
+ maxBufferedBytes int64
+
+ // Custom metrics.
+ bufferedProduceBytes prometheus.Summary
+ bufferedProduceBytesLimit prometheus.Gauge
+ produceRequestsTotal prometheus.Counter
+ produceFailuresTotal *prometheus.CounterVec
+}
+
+// NewProducer returns a new KafkaProducer.
+//
+// The input prometheus.Registerer must be wrapped with a prefix (the names of metrics
+// registered don't have a prefix).
+func NewProducer(client *kgo.Client, maxBufferedBytes int64, reg prometheus.Registerer) *Producer {
+ producer := &Producer{
+ Client: client,
+ closeOnce: &sync.Once{},
+ closed: make(chan struct{}),
+ bufferedBytes: atomic.NewInt64(0),
+ maxBufferedBytes: maxBufferedBytes,
+
+ // Metrics.
+ bufferedProduceBytes: promauto.With(reg).NewSummary(
+ prometheus.SummaryOpts{
+ Name: "buffered_produce_bytes",
+ Help: "The buffered produce records in bytes. Quantile buckets keep track of buffered records size over the last 60s.",
+ Objectives: map[float64]float64{0.5: 0.05, 0.99: 0.001, 1: 0.001},
+ MaxAge: time.Minute,
+ AgeBuckets: 6,
+ }),
+ bufferedProduceBytesLimit: promauto.With(reg).NewGauge(
+ prometheus.GaugeOpts{
+ Name: "buffered_produce_bytes_limit",
+ Help: "The bytes limit on buffered produce records. Produce requests fail once this limit is reached.",
+ }),
+ produceRequestsTotal: promauto.With(reg).NewCounter(prometheus.CounterOpts{
+ Name: "produce_requests_total",
+ Help: "Total number of produce requests issued to Kafka.",
+ }),
+ produceFailuresTotal: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
+ Name: "produce_failures_total",
+ Help: "Total number of failed produce requests issued to Kafka.",
+ }, []string{"reason"}),
+ }
+
+ producer.bufferedProduceBytesLimit.Set(float64(maxBufferedBytes))
+
+ go producer.updateMetricsLoop()
+
+ return producer
+}
+
+func (c *Producer) Close() {
+ c.closeOnce.Do(func() {
+ close(c.closed)
+ })
+
+ c.Client.Close()
+}
+
+func (c *Producer) updateMetricsLoop() {
+ // We observe buffered produce bytes and at regular intervals, to have a good
+ // approximation of the peak value reached over the observation period.
+ ticker := time.NewTicker(250 * time.Millisecond)
+
+ for {
+ select {
+ case <-ticker.C:
+ c.bufferedProduceBytes.Observe(float64(c.Client.BufferedProduceBytes()))
+
+ case <-c.closed:
+ return
+ }
+ }
+}
+
+// ProduceSync produces records to Kafka and returns once all records have been successfully committed,
+// or an error occurred.
+//
+// This function honors the configure max buffered bytes and refuse to produce a record, returnin kgo.ErrMaxBuffered,
+// if the configured limit is reached.
+func (c *Producer) ProduceSync(ctx context.Context, records []*kgo.Record) kgo.ProduceResults {
+ var (
+ remaining = atomic.NewInt64(int64(len(records)))
+ done = make(chan struct{})
+ resMx sync.Mutex
+ res = make(kgo.ProduceResults, 0, len(records))
+ )
+
+ c.produceRequestsTotal.Add(float64(len(records)))
+
+ onProduceDone := func(r *kgo.Record, err error) {
+ if c.maxBufferedBytes > 0 {
+ c.bufferedBytes.Add(-int64(len(r.Value)))
+ }
+
+ resMx.Lock()
+ res = append(res, kgo.ProduceResult{Record: r, Err: err})
+ resMx.Unlock()
+
+ if err != nil {
+ c.produceFailuresTotal.WithLabelValues(produceErrReason(err)).Inc()
+ }
+
+ // In case of error we'll wait for all responses anyway before returning from produceSync().
+ // It allows us to keep code easier, given we don't expect this function to be frequently
+ // called with multiple records.
+ if remaining.Dec() == 0 {
+ close(done)
+ }
+ }
+
+ for _, record := range records {
+ // Fast fail if the Kafka client buffer is full. Buffered bytes counter is decreased onProducerDone().
+ if c.maxBufferedBytes > 0 && c.bufferedBytes.Add(int64(len(record.Value))) > c.maxBufferedBytes {
+ onProduceDone(record, kgo.ErrMaxBuffered)
+ continue
+ }
+
+ // We use a new context to avoid that other Produce() may be cancelled when this call's context is
+ // canceled. It's important to note that cancelling the context passed to Produce() doesn't actually
+ // prevent the data to be sent over the wire (because it's never removed from the buffer) but in some
+ // cases may cause all requests to fail with context cancelled.
+ //
+ // Produce() may theoretically block if the buffer is full, but we configure the Kafka client with
+ // unlimited buffer because we implement the buffer limit ourselves (see maxBufferedBytes). This means
+ // Produce() should never block for us in practice.
+ c.Client.Produce(context.WithoutCancel(ctx), record, onProduceDone)
+ }
+
+ // Wait for a response or until the context has done.
+ select {
+ case <-ctx.Done():
+ return kgo.ProduceResults{{Err: context.Cause(ctx)}}
+ case <-done:
+ // Once we're done, it's guaranteed that no more results will be appended, so we can safely return it.
+ return res
+ }
+}
+
+func produceErrReason(err error) string {
+ if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, kgo.ErrRecordTimeout) {
+ return "timeout"
+ }
+ if errors.Is(err, kgo.ErrMaxBuffered) {
+ return "buffer-full"
+ }
+ if errors.Is(err, kerr.MessageTooLarge) {
+ return "record-too-large"
+ }
+ if errors.Is(err, context.Canceled) {
+ // This should never happen because we don't cancel produce requests, however we
+ // check this error anyway to detect if something unexpected happened.
+ return "canceled"
+ }
+ return "other"
+}
diff --git a/pkg/logcli/client/client.go b/pkg/logcli/client/client.go
index 0c7880d62257c..531ef15f61ad5 100644
--- a/pkg/logcli/client/client.go
+++ b/pkg/logcli/client/client.go
@@ -449,7 +449,7 @@ func (c *DefaultClient) wsConnect(path, query string, quiet bool) (*websocket.Co
}
if c.ProxyURL != "" {
- ws.Proxy = func(req *http.Request) (*url.URL, error) {
+ ws.Proxy = func(_ *http.Request) (*url.URL, error) {
return url.Parse(c.ProxyURL)
}
}
diff --git a/pkg/loghttp/push/otlp.go b/pkg/loghttp/push/otlp.go
index a361bbbf196de..13aea9ee59caa 100644
--- a/pkg/loghttp/push/otlp.go
+++ b/pkg/loghttp/push/otlp.go
@@ -47,7 +47,7 @@ func ParseOTLPRequest(userID string, r *http.Request, tenantsRetention TenantsRe
return nil, nil, err
}
- req := otlpToLokiPushRequest(r.Context(), otlpLogs, userID, tenantsRetention, limits.OTLPConfig(userID), tracker, stats)
+ req := otlpToLokiPushRequest(r.Context(), otlpLogs, userID, tenantsRetention, limits.OTLPConfig(userID), limits.DiscoverServiceName(userID), tracker, stats)
return req, stats, nil
}
@@ -98,7 +98,7 @@ func extractLogs(r *http.Request, pushStats *Stats) (plog.Logs, error) {
return req.Logs(), nil
}
-func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, tenantsRetention TenantsRetention, otlpConfig OTLPConfig, tracker UsageTracker, stats *Stats) *logproto.PushRequest {
+func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, tenantsRetention TenantsRetention, otlpConfig OTLPConfig, discoverServiceName []string, tracker UsageTracker, stats *Stats) *logproto.PushRequest {
if ld.LogRecordCount() == 0 {
return &logproto.PushRequest{}
}
@@ -111,12 +111,14 @@ func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, ten
res := rls.At(i).Resource()
resAttrs := res.Attributes()
- if v, ok := resAttrs.Get(attrServiceName); !ok || v.AsString() == "" {
- resAttrs.PutStr(attrServiceName, "unknown_service")
- }
resourceAttributesAsStructuredMetadata := make(push.LabelsAdapter, 0, resAttrs.Len())
streamLabels := make(model.LabelSet, 30) // we have a default labels limit of 30 so just initialize the map of same size
+ shouldDiscoverServiceName := len(discoverServiceName) > 0 && !stats.IsAggregatedMetric
+ hasServiceName := false
+ if v, ok := resAttrs.Get(attrServiceName); ok && v.AsString() != "" {
+ hasServiceName = true
+ }
resAttrs.Range(func(k string, v pcommon.Value) bool {
action := otlpConfig.ActionForResourceAttribute(k)
if action == Drop {
@@ -127,6 +129,16 @@ func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, ten
if action == IndexLabel {
for _, lbl := range attributeAsLabels {
streamLabels[model.LabelName(lbl.Name)] = model.LabelValue(lbl.Value)
+
+ if !hasServiceName && shouldDiscoverServiceName {
+ for _, labelName := range discoverServiceName {
+ if lbl.Name == labelName {
+ streamLabels[model.LabelName(LabelServiceName)] = model.LabelValue(lbl.Value)
+ hasServiceName = true
+ break
+ }
+ }
+ }
}
} else if action == StructuredMetadata {
resourceAttributesAsStructuredMetadata = append(resourceAttributesAsStructuredMetadata, attributeAsLabels...)
@@ -135,6 +147,10 @@ func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, ten
return true
})
+ if !hasServiceName && shouldDiscoverServiceName {
+ streamLabels[model.LabelName(LabelServiceName)] = model.LabelValue(ServiceUnknown)
+ }
+
if err := streamLabels.Validate(); err != nil {
stats.Errs = append(stats.Errs, fmt.Errorf("invalid labels: %w", err))
continue
diff --git a/pkg/loghttp/push/otlp_test.go b/pkg/loghttp/push/otlp_test.go
index bcdeb18d17069..e2ca137f274c0 100644
--- a/pkg/loghttp/push/otlp_test.go
+++ b/pkg/loghttp/push/otlp_test.go
@@ -20,6 +20,20 @@ import (
func TestOTLPToLokiPushRequest(t *testing.T) {
now := time.Unix(0, time.Now().UnixNano())
+ defaultServiceDetection := []string{
+ "service",
+ "app",
+ "application",
+ "name",
+ "app_kubernetes_io_name",
+ "container",
+ "container_name",
+ "k8s_container_name",
+ "component",
+ "workload",
+ "job",
+ "k8s_job_name",
+ }
for _, tc := range []struct {
name string
@@ -346,7 +360,8 @@ func TestOTLPToLokiPushRequest(t *testing.T) {
{
Action: IndexLabel,
Attributes: []string{"pod.name"},
- }, {
+ },
+ {
Action: IndexLabel,
Regex: relabel.MustNewRegexp("service.*"),
},
@@ -493,7 +508,7 @@ func TestOTLPToLokiPushRequest(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
stats := newPushStats()
tracker := NewMockTracker()
- pushReq := otlpToLokiPushRequest(context.Background(), tc.generateLogs(), "foo", fakeRetention{}, tc.otlpConfig, tracker, stats)
+ pushReq := otlpToLokiPushRequest(context.Background(), tc.generateLogs(), "foo", fakeRetention{}, tc.otlpConfig, defaultServiceDetection, tracker, stats)
require.Equal(t, tc.expectedPushRequest, *pushReq)
require.Equal(t, tc.expectedStats, *stats)
@@ -592,7 +607,6 @@ func TestOTLPLogToPushEntry(t *testing.T) {
require.Equal(t, tc.expectedResp, otlpLogToPushEntry(tc.buildLogRecord(), DefaultOTLPConfig(defaultGlobalOTLPConfig)))
})
}
-
}
func TestAttributesToLabels(t *testing.T) {
diff --git a/pkg/loghttp/push/push_test.go b/pkg/loghttp/push/push_test.go
index 80e7c5e7eead1..e63b2c873c8de 100644
--- a/pkg/loghttp/push/push_test.go
+++ b/pkg/loghttp/push/push_test.go
@@ -6,7 +6,9 @@ import (
"compress/gzip"
"context"
"fmt"
+ "io"
"log"
+ "net/http"
"net/http/httptest"
"strings"
"testing"
@@ -16,6 +18,10 @@ import (
"github.com/prometheus/prometheus/model/labels"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
+ "go.opentelemetry.io/collector/pdata/pcommon"
+ "go.opentelemetry.io/collector/pdata/plog"
+
+ "github.com/grafana/dskit/flagext"
util_log "github.com/grafana/loki/v3/pkg/util/log"
)
@@ -256,7 +262,7 @@ func TestParseRequest(t *testing.T) {
}
tracker := NewMockTracker()
- data, err := ParseRequest(util_log.Logger, "fake", request, nil, &fakeLimits{test.enableServiceDiscovery}, ParseLokiRequest, tracker)
+ data, err := ParseRequest(util_log.Logger, "fake", request, nil, &fakeLimits{enabled: test.enableServiceDiscovery}, ParseLokiRequest, tracker)
structuredMetadataBytesReceived := int(structuredMetadataBytesReceivedStats.Value()["total"].(int64)) - previousStructuredMetadataBytesReceived
previousStructuredMetadataBytesReceived += structuredMetadataBytesReceived
@@ -314,19 +320,124 @@ func TestParseRequest(t *testing.T) {
}
}
+func Test_ServiceDetection(t *testing.T) {
+ tracker := NewMockTracker()
+
+ createOtlpLogs := func(labels ...string) []byte {
+ now := time.Unix(0, time.Now().UnixNano())
+ ld := plog.NewLogs()
+ for i := 0; i < len(labels); i += 2 {
+ ld.ResourceLogs().AppendEmpty().Resource().Attributes().PutStr(labels[i], labels[i+1])
+ }
+ ld.ResourceLogs().At(0).ScopeLogs().AppendEmpty().LogRecords().AppendEmpty().Body().SetStr("test body")
+ ld.ResourceLogs().At(0).ScopeLogs().At(0).LogRecords().At(0).SetTimestamp(pcommon.Timestamp(now.UnixNano()))
+
+ jsonMarshaller := plog.JSONMarshaler{}
+ body, err := jsonMarshaller.MarshalLogs(ld)
+
+ require.NoError(t, err)
+ return body
+ }
+
+ createRequest := func(path string, body io.Reader) *http.Request {
+ request := httptest.NewRequest(
+ "POST",
+ path,
+ body,
+ )
+ request.Header.Add("Content-Type", "application/json")
+
+ return request
+ }
+
+ t.Run("detects servce from loki push requests", func(t *testing.T) {
+ body := `{"streams": [{ "stream": { "foo": "bar" }, "values": [ [ "1570818238000000000", "fizzbuzz" ] ] }]}`
+ request := createRequest("/loki/api/v1/push", strings.NewReader(body))
+
+ limits := &fakeLimits{enabled: true, labels: []string{"foo"}}
+ data, err := ParseRequest(util_log.Logger, "fake", request, nil, limits, ParseLokiRequest, tracker)
+
+ require.NoError(t, err)
+ require.Equal(t, labels.FromStrings("foo", "bar", LabelServiceName, "bar").String(), data.Streams[0].Labels)
+ })
+
+ t.Run("detects servce from OTLP push requests using default indexing", func(t *testing.T) {
+ body := createOtlpLogs("k8s.job.name", "bar")
+ request := createRequest("/otlp/v1/push", bytes.NewReader(body))
+
+ limits := &fakeLimits{enabled: true}
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker)
+ require.NoError(t, err)
+ require.Equal(t, labels.FromStrings("k8s_job_name", "bar", LabelServiceName, "bar").String(), data.Streams[0].Labels)
+ })
+
+ t.Run("detects servce from OTLP push requests using custom indexing", func(t *testing.T) {
+ body := createOtlpLogs("special", "sauce")
+ request := createRequest("/otlp/v1/push", bytes.NewReader(body))
+
+ limits := &fakeLimits{
+ enabled: true,
+ labels: []string{"special"},
+ indexAttributes: []string{"special"},
+ }
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker)
+ require.NoError(t, err)
+ require.Equal(t, labels.FromStrings("special", "sauce", LabelServiceName, "sauce").String(), data.Streams[0].Labels)
+ })
+
+ t.Run("only detects custom service label from indexed labels", func(t *testing.T) {
+ body := createOtlpLogs("special", "sauce")
+ request := createRequest("/otlp/v1/push", bytes.NewReader(body))
+
+ limits := &fakeLimits{
+ enabled: true,
+ labels: []string{"special"},
+ indexAttributes: []string{},
+ }
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker)
+ require.NoError(t, err)
+ require.Equal(t, labels.FromStrings(LabelServiceName, ServiceUnknown).String(), data.Streams[0].Labels)
+ })
+}
+
type fakeLimits struct {
- enabled bool
+ enabled bool
+ labels []string
+ indexAttributes []string
}
-func (l *fakeLimits) OTLPConfig(_ string) OTLPConfig {
- return OTLPConfig{}
+func (f *fakeLimits) RetentionPeriodFor(_ string, _ labels.Labels) time.Duration {
+ return time.Hour
}
-func (l *fakeLimits) DiscoverServiceName(_ string) []string {
- if !l.enabled {
+func (f *fakeLimits) OTLPConfig(_ string) OTLPConfig {
+ if len(f.indexAttributes) > 0 {
+ return OTLPConfig{
+ ResourceAttributes: ResourceAttributesConfig{
+ AttributesConfig: []AttributesConfig{
+ {
+ Action: IndexLabel,
+ Attributes: f.indexAttributes,
+ },
+ },
+ },
+ }
+ }
+
+ defaultGlobalOTLPConfig := GlobalOTLPConfig{}
+ flagext.DefaultValues(&defaultGlobalOTLPConfig)
+ return DefaultOTLPConfig(defaultGlobalOTLPConfig)
+}
+
+func (f *fakeLimits) DiscoverServiceName(_ string) []string {
+ if !f.enabled {
return nil
}
+ if len(f.labels) > 0 {
+ return f.labels
+ }
+
return []string{
"service",
"app",
@@ -335,9 +446,11 @@ func (l *fakeLimits) DiscoverServiceName(_ string) []string {
"app_kubernetes_io_name",
"container",
"container_name",
+ "k8s_container_name",
"component",
"workload",
"job",
+ "k8s_job_name",
}
}
diff --git a/pkg/loghttp/query.go b/pkg/loghttp/query.go
index 89ad4e00a79c0..af67b9df2d0a3 100644
--- a/pkg/loghttp/query.go
+++ b/pkg/loghttp/query.go
@@ -506,7 +506,7 @@ func ParseRangeQuery(r *http.Request) (*RangeQuery, error) {
if GetVersion(r.URL.Path) == VersionLegacy {
result.Query, err = parseRegexQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
expr, err := syntax.ParseExpr(result.Query)
diff --git a/pkg/loghttp/tail.go b/pkg/loghttp/tail.go
index 658ae112cce07..109a81b27e24e 100644
--- a/pkg/loghttp/tail.go
+++ b/pkg/loghttp/tail.go
@@ -83,7 +83,7 @@ func ParseTailQuery(r *http.Request) (*logproto.TailRequest, error) {
req.Query, err = parseRegexQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
req.Limit, err = limit(r)
diff --git a/pkg/logproto/logproto.pb.go b/pkg/logproto/logproto.pb.go
index 50509f87ea135..e5c36bb22b8d7 100644
--- a/pkg/logproto/logproto.pb.go
+++ b/pkg/logproto/logproto.pb.go
@@ -2835,7 +2835,7 @@ type DetectedField struct {
Label string `protobuf:"bytes,1,opt,name=label,proto3" json:"label,omitempty"`
Type DetectedFieldType `protobuf:"bytes,2,opt,name=type,proto3,casttype=DetectedFieldType" json:"type,omitempty"`
Cardinality uint64 `protobuf:"varint,3,opt,name=cardinality,proto3" json:"cardinality,omitempty"`
- Parsers []string `protobuf:"bytes,4,rep,name=parsers,proto3" json:"parsers,omitempty"`
+ Parsers []string `protobuf:"bytes,4,rep,name=parsers,proto3" json:"parsers"`
Sketch []byte `protobuf:"bytes,5,opt,name=sketch,proto3" json:"sketch,omitempty"`
}
@@ -3130,178 +3130,179 @@ func init() {
func init() { proto.RegisterFile("pkg/logproto/logproto.proto", fileDescriptor_c28a5f14f1f4c79a) }
var fileDescriptor_c28a5f14f1f4c79a = []byte{
- // 2734 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xd4, 0x1a, 0x4d, 0x8c, 0x5b, 0x47,
- 0xd9, 0xcf, 0xff, 0xfe, 0xec, 0xdd, 0x6c, 0x66, 0x9d, 0xc4, 0xda, 0xa4, 0x7e, 0xdb, 0x11, 0xb4,
- 0xa1, 0x49, 0xd7, 0x49, 0x4a, 0x4b, 0x9a, 0x52, 0x4a, 0xbc, 0xdb, 0xa4, 0x49, 0xb7, 0x49, 0x3a,
- 0x9b, 0xa6, 0x05, 0x51, 0x55, 0x2f, 0xf6, 0xac, 0xfd, 0x14, 0xfb, 0x3d, 0xe7, 0xbd, 0x71, 0x53,
- 0xdf, 0x90, 0x38, 0x23, 0x2a, 0x71, 0x00, 0x2e, 0x48, 0x48, 0x48, 0x20, 0xa4, 0x5e, 0x10, 0x27,
- 0x84, 0xe0, 0xc2, 0xa1, 0xdc, 0x7a, 0xac, 0x7a, 0x30, 0x74, 0x7b, 0x41, 0x2b, 0x21, 0x55, 0x42,
- 0x02, 0xa9, 0x27, 0x34, 0x7f, 0xef, 0xcd, 0x7b, 0xeb, 0x65, 0xeb, 0x10, 0xd4, 0xe6, 0x62, 0xcf,
- 0xf7, 0xcd, 0x37, 0xdf, 0xcc, 0xf7, 0x33, 0xdf, 0xf7, 0xcd, 0x67, 0xc3, 0xf1, 0xd1, 0x9d, 0x5e,
- 0x6b, 0xe0, 0xf7, 0x46, 0x81, 0xcf, 0xfc, 0x68, 0xb0, 0x26, 0x3e, 0x51, 0x59, 0xc3, 0x2b, 0xf5,
- 0x9e, 0xdf, 0xf3, 0x25, 0x0d, 0x1f, 0xc9, 0xf9, 0x15, 0xbb, 0xe7, 0xfb, 0xbd, 0x01, 0x6d, 0x09,
- 0xe8, 0xf6, 0x78, 0xbb, 0xc5, 0xdc, 0x21, 0x0d, 0x99, 0x33, 0x1c, 0x29, 0x82, 0x55, 0xc5, 0xfd,
- 0xee, 0x60, 0xe8, 0x77, 0xe9, 0xa0, 0x15, 0x32, 0x87, 0x85, 0xf2, 0x53, 0x51, 0x2c, 0x73, 0x8a,
- 0xd1, 0x38, 0xec, 0x8b, 0x0f, 0x85, 0x3c, 0xc3, 0x91, 0x21, 0xf3, 0x03, 0xa7, 0x47, 0x5b, 0x9d,
- 0xfe, 0xd8, 0xbb, 0xd3, 0xea, 0x38, 0x9d, 0x3e, 0x6d, 0x05, 0x34, 0x1c, 0x0f, 0x58, 0x28, 0x01,
- 0x36, 0x19, 0x51, 0xc5, 0x06, 0xff, 0xce, 0x82, 0x23, 0x9b, 0xce, 0x6d, 0x3a, 0xb8, 0xe9, 0xdf,
- 0x72, 0x06, 0x63, 0x1a, 0x12, 0x1a, 0x8e, 0x7c, 0x2f, 0xa4, 0x68, 0x1d, 0x8a, 0x03, 0x3e, 0x11,
- 0x36, 0xac, 0xd5, 0xdc, 0xc9, 0xea, 0xb9, 0x53, 0x6b, 0x91, 0x90, 0x33, 0x17, 0x48, 0x6c, 0xf8,
- 0xa2, 0xc7, 0x82, 0x09, 0x51, 0x4b, 0x57, 0x6e, 0x41, 0xd5, 0x40, 0xa3, 0x25, 0xc8, 0xdd, 0xa1,
- 0x93, 0x86, 0xb5, 0x6a, 0x9d, 0xac, 0x10, 0x3e, 0x44, 0x67, 0xa1, 0xf0, 0x36, 0x67, 0xd3, 0xc8,
- 0xae, 0x5a, 0x27, 0xab, 0xe7, 0x8e, 0xc7, 0x9b, 0xbc, 0xe6, 0xb9, 0x77, 0xc7, 0x54, 0xac, 0x56,
- 0x1b, 0x49, 0xca, 0x0b, 0xd9, 0xf3, 0x16, 0x3e, 0x05, 0x87, 0xf7, 0xcc, 0xa3, 0xa3, 0x50, 0x14,
- 0x14, 0xf2, 0xc4, 0x15, 0xa2, 0x20, 0x5c, 0x07, 0xb4, 0xc5, 0x02, 0xea, 0x0c, 0x89, 0xc3, 0xf8,
- 0x79, 0xef, 0x8e, 0x69, 0xc8, 0xf0, 0x2b, 0xb0, 0x9c, 0xc0, 0x2a, 0xb1, 0x9f, 0x81, 0x6a, 0x18,
- 0xa3, 0x95, 0xec, 0xf5, 0xf8, 0x58, 0xf1, 0x1a, 0x62, 0x12, 0xe2, 0x9f, 0x5b, 0x00, 0xf1, 0x1c,
- 0x6a, 0x02, 0xc8, 0xd9, 0x97, 0x9c, 0xb0, 0x2f, 0x04, 0xce, 0x13, 0x03, 0x83, 0x4e, 0xc3, 0xe1,
- 0x18, 0xba, 0xe6, 0x6f, 0xf5, 0x9d, 0xa0, 0x2b, 0x74, 0x90, 0x27, 0x7b, 0x27, 0x10, 0x82, 0x7c,
- 0xe0, 0x30, 0xda, 0xc8, 0xad, 0x5a, 0x27, 0x73, 0x44, 0x8c, 0xb9, 0xb4, 0x8c, 0x7a, 0x8e, 0xc7,
- 0x1a, 0x79, 0xa1, 0x4e, 0x05, 0x71, 0x3c, 0xf7, 0x08, 0x1a, 0x36, 0x0a, 0xab, 0xd6, 0xc9, 0x05,
- 0xa2, 0x20, 0xfc, 0xaf, 0x1c, 0xd4, 0x5e, 0x1d, 0xd3, 0x60, 0xa2, 0x14, 0x80, 0x9a, 0x50, 0x0e,
- 0xe9, 0x80, 0x76, 0x98, 0x1f, 0x48, 0x8b, 0xb4, 0xb3, 0x0d, 0x8b, 0x44, 0x38, 0x54, 0x87, 0xc2,
- 0xc0, 0x1d, 0xba, 0x4c, 0x1c, 0x6b, 0x81, 0x48, 0x00, 0x5d, 0x80, 0x42, 0xc8, 0x9c, 0x80, 0x89,
- 0xb3, 0x54, 0xcf, 0xad, 0xac, 0x49, 0x57, 0x5e, 0xd3, 0xae, 0xbc, 0x76, 0x53, 0xbb, 0x72, 0xbb,
- 0xfc, 0xfe, 0xd4, 0xce, 0xbc, 0xfb, 0x57, 0xdb, 0x22, 0x72, 0x09, 0x7a, 0x06, 0x72, 0xd4, 0xeb,
- 0x8a, 0xf3, 0x7e, 0xde, 0x95, 0x7c, 0x01, 0x3a, 0x0b, 0x95, 0xae, 0x1b, 0xd0, 0x0e, 0x73, 0x7d,
- 0x4f, 0x48, 0xb5, 0x78, 0x6e, 0x39, 0xb6, 0xc8, 0x86, 0x9e, 0x22, 0x31, 0x15, 0x3a, 0x0d, 0xc5,
- 0x90, 0xab, 0x2e, 0x6c, 0x94, 0xb8, 0x2f, 0xb4, 0xeb, 0xbb, 0x53, 0x7b, 0x49, 0x62, 0x4e, 0xfb,
- 0x43, 0x97, 0xd1, 0xe1, 0x88, 0x4d, 0x88, 0xa2, 0x41, 0x4f, 0x40, 0xa9, 0x4b, 0x07, 0x94, 0x1b,
- 0xbc, 0x2c, 0x0c, 0xbe, 0x64, 0xb0, 0x17, 0x13, 0x44, 0x13, 0xa0, 0x37, 0x21, 0x3f, 0x1a, 0x38,
- 0x5e, 0xa3, 0x22, 0xa4, 0x58, 0x8c, 0x09, 0x6f, 0x0c, 0x1c, 0xaf, 0xfd, 0xec, 0x47, 0x53, 0xfb,
- 0xe9, 0x9e, 0xcb, 0xfa, 0xe3, 0xdb, 0x6b, 0x1d, 0x7f, 0xd8, 0xea, 0x05, 0xce, 0xb6, 0xe3, 0x39,
- 0xad, 0x81, 0x7f, 0xc7, 0x6d, 0xbd, 0xfd, 0x54, 0x8b, 0x5f, 0xd0, 0xbb, 0x63, 0x1a, 0xb8, 0x34,
- 0x68, 0x71, 0x36, 0x6b, 0xc2, 0x24, 0x7c, 0x29, 0x11, 0x6c, 0xd1, 0x55, 0xee, 0x7f, 0x7e, 0x40,
- 0xd7, 0xf9, 0xed, 0x0d, 0x1b, 0x20, 0x76, 0x39, 0x16, 0xef, 0x22, 0xf0, 0x84, 0x6e, 0x5f, 0x0e,
- 0xfc, 0xf1, 0xa8, 0x7d, 0x68, 0x77, 0x6a, 0x9b, 0xf4, 0xc4, 0x04, 0xae, 0xe6, 0xcb, 0xc5, 0xa5,
- 0x12, 0x7e, 0x2f, 0x07, 0x68, 0xcb, 0x19, 0x8e, 0x06, 0x74, 0x2e, 0xf3, 0x47, 0x86, 0xce, 0xde,
- 0xb7, 0xa1, 0x73, 0xf3, 0x1a, 0x3a, 0xb6, 0x5a, 0x7e, 0x3e, 0xab, 0x15, 0x3e, 0xaf, 0xd5, 0x8a,
- 0x5f, 0x7a, 0xab, 0xe1, 0x06, 0xe4, 0x39, 0x67, 0x1e, 0x2c, 0x03, 0xe7, 0x9e, 0xb0, 0x4d, 0x8d,
- 0xf0, 0x21, 0xde, 0x84, 0xa2, 0x94, 0x0b, 0xad, 0xa4, 0x8d, 0x97, 0xbc, 0xb7, 0xb1, 0xe1, 0x72,
- 0xda, 0x24, 0x4b, 0xb1, 0x49, 0x72, 0x42, 0xd9, 0xf8, 0x0f, 0x16, 0x2c, 0x28, 0x8f, 0x50, 0xb1,
- 0xef, 0x36, 0x94, 0x64, 0xec, 0xd1, 0x71, 0xef, 0x58, 0x3a, 0xee, 0x5d, 0xec, 0x3a, 0x23, 0x46,
- 0x83, 0x76, 0xeb, 0xfd, 0xa9, 0x6d, 0x7d, 0x34, 0xb5, 0x1f, 0xdf, 0x4f, 0x69, 0x3a, 0x3b, 0xe9,
- 0x78, 0xa9, 0x19, 0xa3, 0x53, 0xe2, 0x74, 0x2c, 0x54, 0x6e, 0x75, 0x68, 0x4d, 0x26, 0xb5, 0x2b,
- 0x5e, 0x8f, 0x86, 0x9c, 0x73, 0x9e, 0x7b, 0x04, 0x91, 0x34, 0x5c, 0xcc, 0x7b, 0x4e, 0xe0, 0xb9,
- 0x5e, 0x2f, 0x6c, 0xe4, 0x44, 0x4c, 0x8f, 0x60, 0xfc, 0x53, 0x0b, 0x96, 0x13, 0x6e, 0xad, 0x84,
- 0x38, 0x0f, 0xc5, 0x90, 0x5b, 0x4a, 0xcb, 0x60, 0x38, 0xc5, 0x96, 0xc0, 0xb7, 0x17, 0xd5, 0xe1,
- 0x8b, 0x12, 0x26, 0x8a, 0xfe, 0xc1, 0x1d, 0xed, 0xcf, 0x16, 0xd4, 0x44, 0x62, 0xd2, 0x77, 0x0d,
- 0x41, 0xde, 0x73, 0x86, 0x54, 0x99, 0x4a, 0x8c, 0x8d, 0x6c, 0xc5, 0xb7, 0x2b, 0xeb, 0x6c, 0x35,
- 0x6f, 0x80, 0xb5, 0xee, 0x3b, 0xc0, 0x5a, 0xf1, 0xbd, 0xab, 0x43, 0x81, 0xbb, 0xf7, 0x44, 0x04,
- 0xd7, 0x0a, 0x91, 0x00, 0x7e, 0x1c, 0x16, 0x94, 0x14, 0x4a, 0xb5, 0xfb, 0x25, 0xd8, 0x21, 0x14,
- 0xa5, 0x25, 0xd0, 0x57, 0xa0, 0x12, 0x95, 0x32, 0x42, 0xda, 0x5c, 0xbb, 0xb8, 0x3b, 0xb5, 0xb3,
- 0x2c, 0x24, 0xf1, 0x04, 0xb2, 0xcd, 0xa4, 0x6f, 0xb5, 0x2b, 0xbb, 0x53, 0x5b, 0x22, 0x54, 0x8a,
- 0x47, 0x27, 0x20, 0xdf, 0xe7, 0x79, 0x93, 0xab, 0x20, 0xdf, 0x2e, 0xef, 0x4e, 0x6d, 0x01, 0x13,
- 0xf1, 0x89, 0x2f, 0x43, 0x6d, 0x93, 0xf6, 0x9c, 0xce, 0x44, 0x6d, 0x5a, 0xd7, 0xec, 0xf8, 0x86,
- 0x96, 0xe6, 0xf1, 0x28, 0xd4, 0xa2, 0x1d, 0xdf, 0x1a, 0x86, 0xea, 0x36, 0x54, 0x23, 0xdc, 0x2b,
- 0x21, 0xfe, 0x99, 0x05, 0xca, 0x07, 0x10, 0x36, 0xaa, 0x1d, 0x1e, 0x0b, 0x61, 0x77, 0x6a, 0x2b,
- 0x8c, 0x2e, 0x66, 0xd0, 0x73, 0x50, 0x0a, 0xc5, 0x8e, 0x9c, 0x59, 0xda, 0xb5, 0xc4, 0x44, 0xfb,
- 0x10, 0x77, 0x91, 0xdd, 0xa9, 0xad, 0x09, 0x89, 0x1e, 0xa0, 0xb5, 0x44, 0x41, 0x20, 0x05, 0x5b,
- 0xdc, 0x9d, 0xda, 0x06, 0xd6, 0x2c, 0x10, 0xf0, 0x67, 0x16, 0x54, 0x6f, 0x3a, 0x6e, 0xe4, 0x42,
- 0x0d, 0x6d, 0xa2, 0x38, 0x56, 0x4b, 0x04, 0xf7, 0xc4, 0x2e, 0x1d, 0x38, 0x93, 0x4b, 0x7e, 0x20,
- 0xf8, 0x2e, 0x90, 0x08, 0x8e, 0x73, 0x78, 0x7e, 0x66, 0x0e, 0x2f, 0xcc, 0x1f, 0xda, 0xff, 0xbf,
- 0x81, 0xf4, 0x6a, 0xbe, 0x9c, 0x5d, 0xca, 0xe1, 0xf7, 0x2c, 0xa8, 0x49, 0xe1, 0x95, 0xe7, 0x7d,
- 0x0f, 0x8a, 0x52, 0x37, 0x42, 0xfc, 0xff, 0x12, 0x98, 0x4e, 0xcd, 0x13, 0x94, 0x14, 0x4f, 0xf4,
- 0x02, 0x2c, 0x76, 0x03, 0x7f, 0x34, 0xa2, 0xdd, 0x2d, 0x15, 0xfe, 0xb2, 0xe9, 0xf0, 0xb7, 0x61,
- 0xce, 0x93, 0x14, 0x39, 0xfe, 0x8b, 0x05, 0x0b, 0x2a, 0x98, 0x28, 0x73, 0x45, 0x2a, 0xb6, 0xee,
- 0x3b, 0x7b, 0x66, 0xe7, 0xcd, 0x9e, 0x47, 0xa1, 0xd8, 0xe3, 0xf9, 0x45, 0x07, 0x24, 0x05, 0xcd,
- 0x97, 0x55, 0xf1, 0x55, 0x58, 0xd4, 0xa2, 0xec, 0x13, 0x51, 0x57, 0xd2, 0x11, 0xf5, 0x4a, 0x97,
- 0x7a, 0xcc, 0xdd, 0x76, 0xa3, 0x18, 0xa9, 0xe8, 0xf1, 0x8f, 0x2c, 0x58, 0x4a, 0x93, 0xa0, 0x8d,
- 0xd4, 0xc3, 0xe2, 0xb1, 0xfd, 0xd9, 0x99, 0x6f, 0x0a, 0xcd, 0x5a, 0xbd, 0x2c, 0x9e, 0x3e, 0xe8,
- 0x65, 0x51, 0x37, 0x83, 0x4c, 0x45, 0x45, 0x05, 0xfc, 0x13, 0x0b, 0x16, 0x12, 0xb6, 0x44, 0xe7,
- 0x21, 0xbf, 0x1d, 0xf8, 0xc3, 0xb9, 0x0c, 0x25, 0x56, 0xa0, 0xaf, 0x43, 0x96, 0xf9, 0x73, 0x99,
- 0x29, 0xcb, 0x7c, 0x6e, 0x25, 0x25, 0x7e, 0x4e, 0xd6, 0xed, 0x12, 0xc2, 0x4f, 0x43, 0x45, 0x08,
- 0x74, 0xc3, 0x71, 0x83, 0x99, 0x09, 0x63, 0xb6, 0x40, 0xcf, 0xc1, 0x21, 0x19, 0x0c, 0x67, 0x2f,
- 0xae, 0xcd, 0x5a, 0x5c, 0xd3, 0x8b, 0x8f, 0x43, 0x41, 0x14, 0x1d, 0x7c, 0x49, 0xd7, 0x61, 0x8e,
- 0x5e, 0xc2, 0xc7, 0xf8, 0x08, 0x2c, 0xf3, 0x3b, 0x48, 0x83, 0x70, 0xdd, 0x1f, 0x7b, 0x4c, 0xbf,
- 0x9b, 0x4e, 0x43, 0x3d, 0x89, 0x56, 0x5e, 0x52, 0x87, 0x42, 0x87, 0x23, 0x04, 0x8f, 0x05, 0x22,
- 0x01, 0xfc, 0x4b, 0x0b, 0xd0, 0x65, 0xca, 0xc4, 0x2e, 0x57, 0x36, 0xa2, 0xeb, 0xb1, 0x02, 0xe5,
- 0xa1, 0xc3, 0x3a, 0x7d, 0x1a, 0x84, 0xba, 0x7e, 0xd1, 0xf0, 0x17, 0x51, 0x78, 0xe2, 0xb3, 0xb0,
- 0x9c, 0x38, 0xa5, 0x92, 0x69, 0x05, 0xca, 0x1d, 0x85, 0x53, 0x29, 0x2f, 0x82, 0xf1, 0x6f, 0xb3,
- 0x50, 0xd6, 0x65, 0x1d, 0x3a, 0x0b, 0xd5, 0x6d, 0xd7, 0xeb, 0xd1, 0x60, 0x14, 0xb8, 0x4a, 0x05,
- 0x79, 0x59, 0xe6, 0x19, 0x68, 0x62, 0x02, 0xe8, 0x49, 0x28, 0x8d, 0x43, 0x1a, 0xbc, 0xe5, 0xca,
- 0x9b, 0x5e, 0x69, 0xd7, 0x77, 0xa6, 0x76, 0xf1, 0xb5, 0x90, 0x06, 0x57, 0x36, 0x78, 0xf2, 0x19,
- 0x8b, 0x11, 0x91, 0xdf, 0x5d, 0xf4, 0xb2, 0x72, 0x53, 0x51, 0xc0, 0xb5, 0xbf, 0xc1, 0x8f, 0x9f,
- 0x0a, 0x75, 0xa3, 0xc0, 0x1f, 0x52, 0xd6, 0xa7, 0xe3, 0xb0, 0xd5, 0xf1, 0x87, 0x43, 0xdf, 0x6b,
- 0x89, 0xde, 0x81, 0x10, 0x9a, 0x67, 0x50, 0xbe, 0x5c, 0x79, 0xee, 0x4d, 0x28, 0xb1, 0x7e, 0xe0,
- 0x8f, 0x7b, 0x7d, 0x91, 0x18, 0x72, 0xed, 0x0b, 0xf3, 0xf3, 0xd3, 0x1c, 0x88, 0x1e, 0xa0, 0x47,
- 0xb9, 0xb6, 0x68, 0xe7, 0x4e, 0x38, 0x1e, 0xca, 0xb7, 0x67, 0xbb, 0xb0, 0x3b, 0xb5, 0xad, 0x27,
- 0x49, 0x84, 0xc6, 0x17, 0x61, 0x21, 0x51, 0x0a, 0xa3, 0x33, 0x90, 0x0f, 0xe8, 0xb6, 0x0e, 0x05,
- 0x68, 0x6f, 0xc5, 0x2c, 0xb3, 0x3f, 0xa7, 0x21, 0xe2, 0x13, 0xff, 0x30, 0x0b, 0xb6, 0xf1, 0xea,
- 0xbf, 0xe4, 0x07, 0xaf, 0x50, 0x16, 0xb8, 0x9d, 0x6b, 0xce, 0x90, 0x6a, 0xf7, 0xb2, 0xa1, 0x3a,
- 0x14, 0xc8, 0xb7, 0x8c, 0x5b, 0x04, 0xc3, 0x88, 0x0e, 0x3d, 0x02, 0x20, 0xae, 0x9d, 0x9c, 0x97,
- 0x17, 0xaa, 0x22, 0x30, 0x62, 0x7a, 0x3d, 0xa1, 0xec, 0xd6, 0x9c, 0xca, 0x51, 0x4a, 0xbe, 0x92,
- 0x56, 0xf2, 0xdc, 0x7c, 0x22, 0xcd, 0x9a, 0xd7, 0xa5, 0x90, 0xbc, 0x2e, 0xf8, 0x1f, 0x16, 0x34,
- 0x37, 0xf5, 0xc9, 0xef, 0x53, 0x1d, 0x5a, 0xde, 0xec, 0x03, 0x92, 0x37, 0xf7, 0x00, 0xe5, 0xcd,
- 0xa7, 0xe4, 0x6d, 0x02, 0x6c, 0xba, 0x1e, 0xbd, 0xe4, 0x0e, 0x18, 0x0d, 0x66, 0x3c, 0x92, 0x7e,
- 0x9c, 0x8b, 0x23, 0x0e, 0xa1, 0xdb, 0x5a, 0x07, 0xeb, 0x46, 0x98, 0x7f, 0x10, 0x22, 0x66, 0x1f,
- 0xa0, 0x88, 0xb9, 0x54, 0x04, 0xf4, 0xa0, 0xb4, 0x2d, 0xc4, 0x93, 0x19, 0x3b, 0xd1, 0x7f, 0x8a,
- 0x65, 0x6f, 0x7f, 0x4b, 0x6d, 0xfe, 0xcc, 0x01, 0x05, 0x97, 0xe8, 0x23, 0xb6, 0xc2, 0x89, 0xc7,
- 0x9c, 0x77, 0x8c, 0xf5, 0x44, 0x6f, 0x82, 0x1c, 0x55, 0xd3, 0x15, 0x66, 0xd6, 0x74, 0xcf, 0xab,
- 0x6d, 0xfe, 0x97, 0xba, 0x0e, 0x3f, 0x1f, 0x07, 0x58, 0x61, 0x14, 0x15, 0x60, 0x1f, 0x3b, 0xe8,
- 0xfa, 0xab, 0x4b, 0xff, 0x47, 0x0b, 0x96, 0x2e, 0x53, 0x96, 0xac, 0xb1, 0x1e, 0x22, 0x93, 0xe2,
- 0x97, 0xe0, 0xb0, 0x71, 0x7e, 0x25, 0xfd, 0x53, 0xa9, 0xc2, 0xea, 0x48, 0x2c, 0xff, 0x15, 0xaf,
- 0x4b, 0xdf, 0x51, 0xef, 0xd5, 0x64, 0x4d, 0x75, 0x03, 0xaa, 0xc6, 0x24, 0xba, 0x98, 0xaa, 0xa6,
- 0x96, 0x53, 0x6d, 0x5a, 0x5e, 0x11, 0xb4, 0xeb, 0x4a, 0x26, 0xf9, 0x2a, 0x55, 0xb5, 0x72, 0x54,
- 0x79, 0x6c, 0x01, 0x12, 0xe6, 0x12, 0x6c, 0xcd, 0xdc, 0x27, 0xb0, 0x2f, 0x47, 0x65, 0x55, 0x04,
- 0xa3, 0x47, 0x21, 0x1f, 0xf8, 0xf7, 0x74, 0x99, 0xbc, 0x10, 0x6f, 0x49, 0xfc, 0x7b, 0x44, 0x4c,
- 0xe1, 0xe7, 0x20, 0x47, 0xfc, 0x7b, 0xa8, 0x09, 0x10, 0x38, 0x5e, 0x8f, 0xde, 0x8a, 0x1e, 0x68,
- 0x35, 0x62, 0x60, 0xf6, 0xa9, 0x4b, 0xd6, 0xe1, 0xb0, 0x79, 0x22, 0x69, 0xee, 0x35, 0x28, 0xbd,
- 0x3a, 0x36, 0xd5, 0x55, 0x4f, 0xa9, 0x4b, 0xf6, 0x01, 0x34, 0x11, 0xf7, 0x19, 0x88, 0xf1, 0xe8,
- 0x04, 0x54, 0x98, 0x73, 0x7b, 0x40, 0xaf, 0xc5, 0x21, 0x30, 0x46, 0xf0, 0x59, 0xfe, 0xb6, 0xbc,
- 0x65, 0x14, 0x58, 0x31, 0x02, 0x3d, 0x01, 0x4b, 0xf1, 0x99, 0x6f, 0x04, 0x74, 0xdb, 0x7d, 0x47,
- 0x58, 0xb8, 0x46, 0xf6, 0xe0, 0xd1, 0x49, 0x38, 0x14, 0xe3, 0xb6, 0x44, 0x21, 0x93, 0x17, 0xa4,
- 0x69, 0x34, 0xd7, 0x8d, 0x10, 0xf7, 0xc5, 0xbb, 0x63, 0x67, 0x20, 0x2e, 0x5f, 0x8d, 0x18, 0x18,
- 0xfc, 0x27, 0x0b, 0x0e, 0x4b, 0x53, 0x33, 0x87, 0x3d, 0x94, 0x5e, 0xff, 0x2b, 0x0b, 0x90, 0x29,
- 0x81, 0x72, 0xad, 0xaf, 0x9a, 0x7d, 0x26, 0x5e, 0x29, 0x55, 0xc5, 0x93, 0x59, 0xa2, 0xe2, 0x56,
- 0x11, 0x86, 0x62, 0x47, 0xf6, 0xd3, 0x44, 0x63, 0x5c, 0xbe, 0xc9, 0x25, 0x86, 0xa8, 0x6f, 0x64,
- 0x43, 0xe1, 0xf6, 0x84, 0xd1, 0x50, 0xbd, 0xa8, 0x45, 0x2b, 0x41, 0x20, 0x88, 0xfc, 0xe2, 0x7b,
- 0x51, 0x8f, 0x09, 0xaf, 0xc9, 0xc7, 0x7b, 0x29, 0x14, 0xd1, 0x03, 0xfc, 0xef, 0x2c, 0x2c, 0xdc,
- 0xf2, 0x07, 0xe3, 0x38, 0x69, 0x3e, 0x4c, 0x09, 0x23, 0xf1, 0xcc, 0x2f, 0xe8, 0x67, 0x3e, 0x82,
- 0x7c, 0xc8, 0xe8, 0x48, 0x78, 0x56, 0x8e, 0x88, 0x31, 0xc2, 0x50, 0x63, 0x4e, 0xd0, 0xa3, 0x4c,
- 0x3e, 0x9e, 0x1a, 0x45, 0x51, 0xd5, 0x26, 0x70, 0x68, 0x15, 0xaa, 0x4e, 0xaf, 0x17, 0xd0, 0x9e,
- 0xc3, 0x68, 0x7b, 0xd2, 0x28, 0x89, 0xcd, 0x4c, 0x14, 0xba, 0x0a, 0x8b, 0x1d, 0xa7, 0xd3, 0x77,
- 0xbd, 0xde, 0xf5, 0x11, 0x73, 0x7d, 0x2f, 0x6c, 0x94, 0x45, 0xea, 0x38, 0xb1, 0x66, 0xfe, 0xd0,
- 0xb4, 0xb6, 0x9e, 0xa0, 0x51, 0x71, 0x2c, 0xb5, 0x12, 0xbf, 0x01, 0x8b, 0x5a, 0xf1, 0xca, 0x3d,
- 0xce, 0x40, 0xe9, 0x6d, 0x81, 0x99, 0xd1, 0xc2, 0x93, 0xa4, 0x8a, 0x95, 0x26, 0x4b, 0xfe, 0x54,
- 0xa1, 0xe5, 0xc7, 0x57, 0xa1, 0x28, 0xc9, 0xd1, 0x09, 0xf3, 0x39, 0x25, 0x2b, 0x4a, 0x0e, 0xab,
- 0xb7, 0x11, 0x86, 0xa2, 0x64, 0xa4, 0x9c, 0x48, 0xf8, 0x99, 0xc4, 0x10, 0xf5, 0x8d, 0xff, 0x69,
- 0xc1, 0x91, 0x0d, 0xca, 0x68, 0x87, 0xd1, 0xee, 0x25, 0x97, 0x0e, 0xba, 0x5f, 0xe8, 0x4b, 0x3f,
- 0xea, 0xd7, 0xe5, 0x8c, 0x7e, 0x1d, 0x8f, 0x61, 0x03, 0xd7, 0xa3, 0x9b, 0x46, 0xc3, 0x27, 0x46,
- 0xf0, 0x68, 0xb3, 0xcd, 0x0f, 0x2e, 0xa7, 0xe5, 0x6f, 0x43, 0x06, 0x26, 0xf2, 0x96, 0x62, 0xec,
- 0x2d, 0xf8, 0x07, 0x16, 0x1c, 0x4d, 0x4b, 0xad, 0x8c, 0xd4, 0x82, 0xa2, 0x58, 0x3c, 0xa3, 0x55,
- 0x9c, 0x58, 0x41, 0x14, 0x19, 0x3a, 0x9f, 0xd8, 0x5f, 0xfc, 0xa6, 0xd4, 0x6e, 0xec, 0x4e, 0xed,
- 0x7a, 0x8c, 0x35, 0xba, 0x11, 0x06, 0x2d, 0xfe, 0x3d, 0x7f, 0xb3, 0x9b, 0x3c, 0x85, 0xbd, 0xb9,
- 0xaf, 0xaa, 0x38, 0x2e, 0x01, 0xf4, 0x35, 0xc8, 0xb3, 0xc9, 0x48, 0x85, 0xef, 0xf6, 0x91, 0xcf,
- 0xa6, 0xf6, 0xe1, 0xc4, 0xb2, 0x9b, 0x93, 0x11, 0x25, 0x82, 0x84, 0xbb, 0x78, 0xc7, 0x09, 0xba,
- 0xae, 0xe7, 0x0c, 0x5c, 0x26, 0xd5, 0x98, 0x27, 0x26, 0x0a, 0x35, 0xa0, 0x34, 0x72, 0x82, 0x50,
- 0xd7, 0x60, 0x15, 0xa2, 0x41, 0xd1, 0x4e, 0xb9, 0x43, 0x59, 0xa7, 0x2f, 0x43, 0xb6, 0x6a, 0xa7,
- 0x08, 0x4c, 0xa2, 0x9d, 0x22, 0x30, 0xf8, 0x17, 0x86, 0xe3, 0xc8, 0xfb, 0xf5, 0xa5, 0x73, 0x1c,
- 0xfc, 0x9d, 0xd8, 0xca, 0xfa, 0x88, 0xca, 0xca, 0x2f, 0xc0, 0x62, 0x37, 0x31, 0xb3, 0xbf, 0xb5,
- 0x65, 0xab, 0x38, 0x45, 0x8e, 0xc7, 0xb1, 0xe9, 0x04, 0x66, 0x1f, 0xd3, 0xa5, 0xec, 0x91, 0xdd,
- 0x6b, 0x8f, 0x58, 0xeb, 0xb9, 0x83, 0xb5, 0xfe, 0xc4, 0x63, 0x50, 0x89, 0x7e, 0x16, 0x44, 0x55,
- 0x28, 0x5d, 0xba, 0x4e, 0x5e, 0xbf, 0x48, 0x36, 0x96, 0x32, 0xa8, 0x06, 0xe5, 0xf6, 0xc5, 0xf5,
- 0x97, 0x05, 0x64, 0x9d, 0xfb, 0x4d, 0x51, 0x17, 0x15, 0x01, 0xfa, 0x26, 0x14, 0x64, 0xa5, 0x70,
- 0x34, 0x16, 0xce, 0xfc, 0xc5, 0x6c, 0xe5, 0xd8, 0x1e, 0xbc, 0xd4, 0x12, 0xce, 0x9c, 0xb1, 0xd0,
- 0x35, 0xa8, 0x0a, 0xa4, 0xea, 0x49, 0x9f, 0x48, 0xb7, 0x86, 0x13, 0x9c, 0x1e, 0xd9, 0x67, 0xd6,
- 0xe0, 0x77, 0x01, 0x0a, 0x52, 0x61, 0x47, 0x53, 0x05, 0xdd, 0x8c, 0xd3, 0x24, 0xba, 0xf4, 0x38,
- 0x83, 0x9e, 0x85, 0xfc, 0x4d, 0xc7, 0x1d, 0x20, 0xa3, 0x9e, 0x34, 0x5a, 0xc9, 0x2b, 0x47, 0xd3,
- 0x68, 0x63, 0xdb, 0xe7, 0xa3, 0x8e, 0xf8, 0xb1, 0x74, 0x5b, 0x4e, 0x2f, 0x6f, 0xec, 0x9d, 0x88,
- 0x76, 0xbe, 0x2e, 0xfb, 0xb6, 0xba, 0x39, 0x84, 0x1e, 0x49, 0x6e, 0x95, 0xea, 0x25, 0xad, 0x34,
- 0xf7, 0x9b, 0x8e, 0x18, 0x6e, 0x42, 0xd5, 0x68, 0xcc, 0x98, 0x6a, 0xdd, 0xdb, 0x55, 0x32, 0xd5,
- 0x3a, 0xa3, 0x9b, 0x83, 0x33, 0xe8, 0x32, 0x94, 0x79, 0x15, 0x2e, 0x7e, 0xc0, 0x39, 0x9e, 0x2e,
- 0xb6, 0x8d, 0x22, 0x6b, 0xe5, 0xc4, 0xec, 0xc9, 0x88, 0xd1, 0xb7, 0xa1, 0x72, 0x99, 0x32, 0x95,
- 0x5d, 0x8e, 0xa5, 0xd3, 0xd3, 0x0c, 0x4d, 0x25, 0x53, 0x1c, 0xce, 0xa0, 0x37, 0xc4, 0x83, 0x20,
- 0x19, 0x5c, 0x91, 0xbd, 0x4f, 0x10, 0x8d, 0xce, 0xb5, 0xba, 0x3f, 0x41, 0xc4, 0xf9, 0xf5, 0x04,
- 0x67, 0x95, 0xd3, 0xed, 0x7d, 0x2e, 0x6c, 0xc4, 0xd9, 0x3e, 0xe0, 0xef, 0x1d, 0x38, 0x73, 0xee,
- 0x4d, 0xfd, 0x0f, 0x87, 0x0d, 0x87, 0x39, 0xe8, 0x3a, 0x2c, 0x0a, 0x5d, 0x46, 0x7f, 0x81, 0x48,
- 0xf8, 0xfc, 0x9e, 0xff, 0x5b, 0x24, 0x7c, 0x7e, 0xef, 0xff, 0x2e, 0x70, 0xa6, 0xfd, 0xe6, 0x07,
- 0x1f, 0x37, 0x33, 0x1f, 0x7e, 0xdc, 0xcc, 0x7c, 0xfa, 0x71, 0xd3, 0xfa, 0xfe, 0x4e, 0xd3, 0xfa,
- 0xf5, 0x4e, 0xd3, 0x7a, 0x7f, 0xa7, 0x69, 0x7d, 0xb0, 0xd3, 0xb4, 0xfe, 0xb6, 0xd3, 0xb4, 0xfe,
- 0xbe, 0xd3, 0xcc, 0x7c, 0xba, 0xd3, 0xb4, 0xde, 0xfd, 0xa4, 0x99, 0xf9, 0xe0, 0x93, 0x66, 0xe6,
- 0xc3, 0x4f, 0x9a, 0x99, 0xef, 0x3e, 0x7e, 0xf0, 0xe3, 0x57, 0x86, 0xc5, 0xa2, 0xf8, 0x7a, 0xea,
- 0x3f, 0x01, 0x00, 0x00, 0xff, 0xff, 0xc8, 0xfe, 0xc4, 0xb1, 0xb9, 0x23, 0x00, 0x00,
+ // 2739 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xd4, 0x3a, 0x4d, 0x6c, 0x1b, 0xc7,
+ 0xd5, 0x5c, 0x72, 0x49, 0x91, 0x8f, 0x94, 0x2c, 0x8f, 0x68, 0x9b, 0x90, 0x1d, 0xae, 0x32, 0xf8,
+ 0xbe, 0xc4, 0x5f, 0xec, 0x88, 0xb6, 0xf3, 0x25, 0x75, 0x9c, 0xa6, 0xa9, 0x29, 0xc5, 0x8e, 0x1d,
+ 0xc5, 0x76, 0x46, 0x8e, 0x93, 0x16, 0x0d, 0x82, 0x35, 0x39, 0x22, 0x17, 0x26, 0x77, 0xe9, 0xdd,
+ 0x61, 0x1c, 0xde, 0x0a, 0xf4, 0x5c, 0x34, 0x40, 0x0f, 0x6d, 0x2f, 0x05, 0x0a, 0x14, 0x68, 0x51,
+ 0x20, 0x97, 0xa2, 0xc7, 0xa2, 0xbd, 0x14, 0x68, 0x7a, 0xcb, 0x31, 0xc8, 0x81, 0x6d, 0x94, 0x4b,
+ 0x21, 0xa0, 0x40, 0x80, 0x02, 0x2d, 0x90, 0x53, 0x31, 0x7f, 0xbb, 0xb3, 0x2b, 0xaa, 0x0e, 0x5d,
+ 0x17, 0x49, 0x2e, 0xe4, 0xcc, 0x9b, 0x37, 0x6f, 0xe6, 0xfd, 0xcc, 0xfb, 0x23, 0xe1, 0xf8, 0xe8,
+ 0x4e, 0xaf, 0x35, 0x08, 0x7a, 0xa3, 0x30, 0x60, 0x41, 0x3c, 0x58, 0x17, 0x9f, 0xa8, 0xac, 0xe7,
+ 0xab, 0xf5, 0x5e, 0xd0, 0x0b, 0x24, 0x0e, 0x1f, 0xc9, 0xf5, 0x55, 0xa7, 0x17, 0x04, 0xbd, 0x01,
+ 0x6d, 0x89, 0xd9, 0xed, 0xf1, 0x4e, 0x8b, 0x79, 0x43, 0x1a, 0x31, 0x77, 0x38, 0x52, 0x08, 0x6b,
+ 0x8a, 0xfa, 0xdd, 0xc1, 0x30, 0xe8, 0xd2, 0x41, 0x2b, 0x62, 0x2e, 0x8b, 0xe4, 0xa7, 0xc2, 0x58,
+ 0xe1, 0x18, 0xa3, 0x71, 0xd4, 0x17, 0x1f, 0x0a, 0x78, 0x86, 0x03, 0x23, 0x16, 0x84, 0x6e, 0x8f,
+ 0xb6, 0x3a, 0xfd, 0xb1, 0x7f, 0xa7, 0xd5, 0x71, 0x3b, 0x7d, 0xda, 0x0a, 0x69, 0x34, 0x1e, 0xb0,
+ 0x48, 0x4e, 0xd8, 0x64, 0x44, 0x15, 0x19, 0xfc, 0x1b, 0x0b, 0x8e, 0x6c, 0xb9, 0xb7, 0xe9, 0xe0,
+ 0x66, 0x70, 0xcb, 0x1d, 0x8c, 0x69, 0x44, 0x68, 0x34, 0x0a, 0xfc, 0x88, 0xa2, 0x0d, 0x28, 0x0d,
+ 0xf8, 0x42, 0xd4, 0xb0, 0xd6, 0x0a, 0x27, 0xab, 0xe7, 0x4e, 0xad, 0xc7, 0x4c, 0xce, 0xdc, 0x20,
+ 0xa1, 0xd1, 0x8b, 0x3e, 0x0b, 0x27, 0x44, 0x6d, 0x5d, 0xbd, 0x05, 0x55, 0x03, 0x8c, 0x96, 0xa1,
+ 0x70, 0x87, 0x4e, 0x1a, 0xd6, 0x9a, 0x75, 0xb2, 0x42, 0xf8, 0x10, 0x9d, 0x85, 0xe2, 0xdb, 0x9c,
+ 0x4c, 0x23, 0xbf, 0x66, 0x9d, 0xac, 0x9e, 0x3b, 0x9e, 0x1c, 0xf2, 0x9a, 0xef, 0xdd, 0x1d, 0x53,
+ 0xb1, 0x5b, 0x1d, 0x24, 0x31, 0x2f, 0xe4, 0xcf, 0x5b, 0xf8, 0x14, 0x1c, 0xde, 0xb7, 0x8e, 0x8e,
+ 0x42, 0x49, 0x60, 0xc8, 0x1b, 0x57, 0x88, 0x9a, 0xe1, 0x3a, 0xa0, 0x6d, 0x16, 0x52, 0x77, 0x48,
+ 0x5c, 0xc6, 0xef, 0x7b, 0x77, 0x4c, 0x23, 0x86, 0x5f, 0x81, 0x95, 0x14, 0x54, 0xb1, 0xfd, 0x0c,
+ 0x54, 0xa3, 0x04, 0xac, 0x78, 0xaf, 0x27, 0xd7, 0x4a, 0xf6, 0x10, 0x13, 0x11, 0xff, 0xd4, 0x02,
+ 0x48, 0xd6, 0x50, 0x13, 0x40, 0xae, 0xbe, 0xe4, 0x46, 0x7d, 0xc1, 0xb0, 0x4d, 0x0c, 0x08, 0x3a,
+ 0x0d, 0x87, 0x93, 0xd9, 0xb5, 0x60, 0xbb, 0xef, 0x86, 0x5d, 0x21, 0x03, 0x9b, 0xec, 0x5f, 0x40,
+ 0x08, 0xec, 0xd0, 0x65, 0xb4, 0x51, 0x58, 0xb3, 0x4e, 0x16, 0x88, 0x18, 0x73, 0x6e, 0x19, 0xf5,
+ 0x5d, 0x9f, 0x35, 0x6c, 0x21, 0x4e, 0x35, 0xe3, 0x70, 0x6e, 0x11, 0x34, 0x6a, 0x14, 0xd7, 0xac,
+ 0x93, 0x8b, 0x44, 0xcd, 0xf0, 0x3f, 0x0a, 0x50, 0x7b, 0x75, 0x4c, 0xc3, 0x89, 0x12, 0x00, 0x6a,
+ 0x42, 0x39, 0xa2, 0x03, 0xda, 0x61, 0x41, 0x28, 0x35, 0xd2, 0xce, 0x37, 0x2c, 0x12, 0xc3, 0x50,
+ 0x1d, 0x8a, 0x03, 0x6f, 0xe8, 0x31, 0x71, 0xad, 0x45, 0x22, 0x27, 0xe8, 0x02, 0x14, 0x23, 0xe6,
+ 0x86, 0x4c, 0xdc, 0xa5, 0x7a, 0x6e, 0x75, 0x5d, 0x9a, 0xf2, 0xba, 0x36, 0xe5, 0xf5, 0x9b, 0xda,
+ 0x94, 0xdb, 0xe5, 0xf7, 0xa7, 0x4e, 0xee, 0xdd, 0x3f, 0x3b, 0x16, 0x91, 0x5b, 0xd0, 0x33, 0x50,
+ 0xa0, 0x7e, 0x57, 0xdc, 0xf7, 0xf3, 0xee, 0xe4, 0x1b, 0xd0, 0x59, 0xa8, 0x74, 0xbd, 0x90, 0x76,
+ 0x98, 0x17, 0xf8, 0x82, 0xab, 0xa5, 0x73, 0x2b, 0x89, 0x46, 0x36, 0xf5, 0x12, 0x49, 0xb0, 0xd0,
+ 0x69, 0x28, 0x45, 0x5c, 0x74, 0x51, 0x63, 0x81, 0xdb, 0x42, 0xbb, 0xbe, 0x37, 0x75, 0x96, 0x25,
+ 0xe4, 0x74, 0x30, 0xf4, 0x18, 0x1d, 0x8e, 0xd8, 0x84, 0x28, 0x1c, 0xf4, 0x04, 0x2c, 0x74, 0xe9,
+ 0x80, 0x72, 0x85, 0x97, 0x85, 0xc2, 0x97, 0x0d, 0xf2, 0x62, 0x81, 0x68, 0x04, 0xf4, 0x26, 0xd8,
+ 0xa3, 0x81, 0xeb, 0x37, 0x2a, 0x82, 0x8b, 0xa5, 0x04, 0xf1, 0xc6, 0xc0, 0xf5, 0xdb, 0xcf, 0x7e,
+ 0x34, 0x75, 0x9e, 0xee, 0x79, 0xac, 0x3f, 0xbe, 0xbd, 0xde, 0x09, 0x86, 0xad, 0x5e, 0xe8, 0xee,
+ 0xb8, 0xbe, 0xdb, 0x1a, 0x04, 0x77, 0xbc, 0xd6, 0xdb, 0x4f, 0xb5, 0xf8, 0x03, 0xbd, 0x3b, 0xa6,
+ 0xa1, 0x47, 0xc3, 0x16, 0x27, 0xb3, 0x2e, 0x54, 0xc2, 0xb7, 0x12, 0x41, 0x16, 0x5d, 0xe5, 0xf6,
+ 0x17, 0x84, 0x74, 0x83, 0xbf, 0xde, 0xa8, 0x01, 0xe2, 0x94, 0x63, 0xc9, 0x29, 0x02, 0x4e, 0xe8,
+ 0xce, 0xe5, 0x30, 0x18, 0x8f, 0xda, 0x87, 0xf6, 0xa6, 0x8e, 0x89, 0x4f, 0xcc, 0xc9, 0x55, 0xbb,
+ 0x5c, 0x5a, 0x5e, 0xc0, 0xef, 0x15, 0x00, 0x6d, 0xbb, 0xc3, 0xd1, 0x80, 0xce, 0xa5, 0xfe, 0x58,
+ 0xd1, 0xf9, 0x07, 0x56, 0x74, 0x61, 0x5e, 0x45, 0x27, 0x5a, 0xb3, 0xe7, 0xd3, 0x5a, 0xf1, 0xf3,
+ 0x6a, 0xad, 0xf4, 0xa5, 0xd7, 0x1a, 0x6e, 0x80, 0xcd, 0x29, 0x73, 0x67, 0x19, 0xba, 0xf7, 0x84,
+ 0x6e, 0x6a, 0x84, 0x0f, 0xf1, 0x16, 0x94, 0x24, 0x5f, 0x68, 0x35, 0xab, 0xbc, 0xf4, 0xbb, 0x4d,
+ 0x14, 0x57, 0xd0, 0x2a, 0x59, 0x4e, 0x54, 0x52, 0x10, 0xc2, 0xc6, 0xbf, 0xb5, 0x60, 0x51, 0x59,
+ 0x84, 0xf2, 0x7d, 0xb7, 0x61, 0x41, 0xfa, 0x1e, 0xed, 0xf7, 0x8e, 0x65, 0xfd, 0xde, 0xc5, 0xae,
+ 0x3b, 0x62, 0x34, 0x6c, 0xb7, 0xde, 0x9f, 0x3a, 0xd6, 0x47, 0x53, 0xe7, 0xf1, 0x83, 0x84, 0xa6,
+ 0xa3, 0x93, 0xf6, 0x97, 0x9a, 0x30, 0x3a, 0x25, 0x6e, 0xc7, 0x22, 0x65, 0x56, 0x87, 0xd6, 0x65,
+ 0x50, 0xbb, 0xe2, 0xf7, 0x68, 0xc4, 0x29, 0xdb, 0xdc, 0x22, 0x88, 0xc4, 0xe1, 0x6c, 0xde, 0x73,
+ 0x43, 0xdf, 0xf3, 0x7b, 0x51, 0xa3, 0x20, 0x7c, 0x7a, 0x3c, 0xc7, 0x3f, 0xb6, 0x60, 0x25, 0x65,
+ 0xd6, 0x8a, 0x89, 0xf3, 0x50, 0x8a, 0xb8, 0xa6, 0x34, 0x0f, 0x86, 0x51, 0x6c, 0x0b, 0x78, 0x7b,
+ 0x49, 0x5d, 0xbe, 0x24, 0xe7, 0x44, 0xe1, 0x3f, 0xbc, 0xab, 0xfd, 0xc1, 0x82, 0x9a, 0x08, 0x4c,
+ 0xfa, 0xad, 0x21, 0xb0, 0x7d, 0x77, 0x48, 0x95, 0xaa, 0xc4, 0xd8, 0x88, 0x56, 0xfc, 0xb8, 0xb2,
+ 0x8e, 0x56, 0xf3, 0x3a, 0x58, 0xeb, 0x81, 0x1d, 0xac, 0x95, 0xbc, 0xbb, 0x3a, 0x14, 0xb9, 0x79,
+ 0x4f, 0x84, 0x73, 0xad, 0x10, 0x39, 0xc1, 0x8f, 0xc3, 0xa2, 0xe2, 0x42, 0x89, 0xf6, 0xa0, 0x00,
+ 0x3b, 0x84, 0x92, 0xd4, 0x04, 0xfa, 0x1f, 0xa8, 0xc4, 0xa9, 0x8c, 0xe0, 0xb6, 0xd0, 0x2e, 0xed,
+ 0x4d, 0x9d, 0x3c, 0x8b, 0x48, 0xb2, 0x80, 0x1c, 0x33, 0xe8, 0x5b, 0xed, 0xca, 0xde, 0xd4, 0x91,
+ 0x00, 0x15, 0xe2, 0xd1, 0x09, 0xb0, 0xfb, 0x3c, 0x6e, 0x72, 0x11, 0xd8, 0xed, 0xf2, 0xde, 0xd4,
+ 0x11, 0x73, 0x22, 0x3e, 0xf1, 0x65, 0xa8, 0x6d, 0xd1, 0x9e, 0xdb, 0x99, 0xa8, 0x43, 0xeb, 0x9a,
+ 0x1c, 0x3f, 0xd0, 0xd2, 0x34, 0x1e, 0x85, 0x5a, 0x7c, 0xe2, 0x5b, 0xc3, 0x48, 0xbd, 0x86, 0x6a,
+ 0x0c, 0x7b, 0x25, 0xc2, 0x3f, 0xb1, 0x40, 0xd9, 0x00, 0xc2, 0x46, 0xb6, 0xc3, 0x7d, 0x21, 0xec,
+ 0x4d, 0x1d, 0x05, 0xd1, 0xc9, 0x0c, 0x7a, 0x0e, 0x16, 0x22, 0x71, 0x22, 0x27, 0x96, 0x35, 0x2d,
+ 0xb1, 0xd0, 0x3e, 0xc4, 0x4d, 0x64, 0x6f, 0xea, 0x68, 0x44, 0xa2, 0x07, 0x68, 0x3d, 0x95, 0x10,
+ 0x48, 0xc6, 0x96, 0xf6, 0xa6, 0x8e, 0x01, 0x35, 0x13, 0x04, 0xfc, 0x99, 0x05, 0xd5, 0x9b, 0xae,
+ 0x17, 0x9b, 0x50, 0x43, 0xab, 0x28, 0xf1, 0xd5, 0x12, 0xc0, 0x2d, 0xb1, 0x4b, 0x07, 0xee, 0xe4,
+ 0x52, 0x10, 0x0a, 0xba, 0x8b, 0x24, 0x9e, 0x27, 0x31, 0xdc, 0x9e, 0x19, 0xc3, 0x8b, 0xf3, 0xbb,
+ 0xf6, 0xff, 0xae, 0x23, 0xbd, 0x6a, 0x97, 0xf3, 0xcb, 0x05, 0xfc, 0x9e, 0x05, 0x35, 0xc9, 0xbc,
+ 0xb2, 0xbc, 0xef, 0x40, 0x49, 0xca, 0x46, 0xb0, 0xff, 0x6f, 0x1c, 0xd3, 0xa9, 0x79, 0x9c, 0x92,
+ 0xa2, 0x89, 0x5e, 0x80, 0xa5, 0x6e, 0x18, 0x8c, 0x46, 0xb4, 0xbb, 0xad, 0xdc, 0x5f, 0x3e, 0xeb,
+ 0xfe, 0x36, 0xcd, 0x75, 0x92, 0x41, 0xc7, 0x7f, 0xb2, 0x60, 0x51, 0x39, 0x13, 0xa5, 0xae, 0x58,
+ 0xc4, 0xd6, 0x03, 0x47, 0xcf, 0xfc, 0xbc, 0xd1, 0xf3, 0x28, 0x94, 0x7a, 0x3c, 0xbe, 0x68, 0x87,
+ 0xa4, 0x66, 0xf3, 0x45, 0x55, 0x7c, 0x15, 0x96, 0x34, 0x2b, 0x07, 0x78, 0xd4, 0xd5, 0xac, 0x47,
+ 0xbd, 0xd2, 0xa5, 0x3e, 0xf3, 0x76, 0xbc, 0xd8, 0x47, 0x2a, 0x7c, 0xfc, 0x03, 0x0b, 0x96, 0xb3,
+ 0x28, 0x68, 0x33, 0x53, 0x58, 0x3c, 0x76, 0x30, 0x39, 0xb3, 0xa6, 0xd0, 0xa4, 0x55, 0x65, 0xf1,
+ 0xf4, 0xfd, 0x2a, 0x8b, 0xba, 0xe9, 0x64, 0x2a, 0xca, 0x2b, 0xe0, 0x1f, 0x59, 0xb0, 0x98, 0xd2,
+ 0x25, 0x3a, 0x0f, 0xf6, 0x4e, 0x18, 0x0c, 0xe7, 0x52, 0x94, 0xd8, 0x81, 0xfe, 0x1f, 0xf2, 0x2c,
+ 0x98, 0x4b, 0x4d, 0x79, 0x16, 0x70, 0x2d, 0x29, 0xf6, 0x0b, 0x32, 0x6f, 0x97, 0x33, 0xfc, 0x34,
+ 0x54, 0x04, 0x43, 0x37, 0x5c, 0x2f, 0x9c, 0x19, 0x30, 0x66, 0x33, 0xf4, 0x1c, 0x1c, 0x92, 0xce,
+ 0x70, 0xf6, 0xe6, 0xda, 0xac, 0xcd, 0x35, 0xbd, 0xf9, 0x38, 0x14, 0x45, 0xd2, 0xc1, 0xb7, 0x74,
+ 0x5d, 0xe6, 0xea, 0x2d, 0x7c, 0x8c, 0x8f, 0xc0, 0x0a, 0x7f, 0x83, 0x34, 0x8c, 0x36, 0x82, 0xb1,
+ 0xcf, 0x74, 0xdd, 0x74, 0x1a, 0xea, 0x69, 0xb0, 0xb2, 0x92, 0x3a, 0x14, 0x3b, 0x1c, 0x20, 0x68,
+ 0x2c, 0x12, 0x39, 0xc1, 0x3f, 0xb7, 0x00, 0x5d, 0xa6, 0x4c, 0x9c, 0x72, 0x65, 0x33, 0x7e, 0x1e,
+ 0xab, 0x50, 0x1e, 0xba, 0xac, 0xd3, 0xa7, 0x61, 0xa4, 0xf3, 0x17, 0x3d, 0xff, 0x22, 0x12, 0x4f,
+ 0x7c, 0x16, 0x56, 0x52, 0xb7, 0x54, 0x3c, 0xad, 0x42, 0xb9, 0xa3, 0x60, 0x2a, 0xe4, 0xc5, 0x73,
+ 0xfc, 0xeb, 0x3c, 0x94, 0x75, 0x5a, 0x87, 0xce, 0x42, 0x75, 0xc7, 0xf3, 0x7b, 0x34, 0x1c, 0x85,
+ 0x9e, 0x12, 0x81, 0x2d, 0xd3, 0x3c, 0x03, 0x4c, 0xcc, 0x09, 0x7a, 0x12, 0x16, 0xc6, 0x11, 0x0d,
+ 0xdf, 0xf2, 0xe4, 0x4b, 0xaf, 0xb4, 0xeb, 0xbb, 0x53, 0xa7, 0xf4, 0x5a, 0x44, 0xc3, 0x2b, 0x9b,
+ 0x3c, 0xf8, 0x8c, 0xc5, 0x88, 0xc8, 0xef, 0x2e, 0x7a, 0x59, 0x99, 0xa9, 0x48, 0xe0, 0xda, 0x5f,
+ 0xe3, 0xd7, 0xcf, 0xb8, 0xba, 0x51, 0x18, 0x0c, 0x29, 0xeb, 0xd3, 0x71, 0xd4, 0xea, 0x04, 0xc3,
+ 0x61, 0xe0, 0xb7, 0x44, 0xef, 0x40, 0x30, 0xcd, 0x23, 0x28, 0xdf, 0xae, 0x2c, 0xf7, 0x26, 0x2c,
+ 0xb0, 0x7e, 0x18, 0x8c, 0x7b, 0x7d, 0x11, 0x18, 0x0a, 0xed, 0x0b, 0xf3, 0xd3, 0xd3, 0x14, 0x88,
+ 0x1e, 0xa0, 0x47, 0xb9, 0xb4, 0x68, 0xe7, 0x4e, 0x34, 0x1e, 0xca, 0xda, 0xb3, 0x5d, 0xdc, 0x9b,
+ 0x3a, 0xd6, 0x93, 0x24, 0x06, 0xe3, 0x8b, 0xb0, 0x98, 0x4a, 0x85, 0xd1, 0x19, 0xb0, 0x43, 0xba,
+ 0xa3, 0x5d, 0x01, 0xda, 0x9f, 0x31, 0xcb, 0xe8, 0xcf, 0x71, 0x88, 0xf8, 0xc4, 0xdf, 0xcf, 0x83,
+ 0x63, 0x54, 0xfd, 0x97, 0x82, 0xf0, 0x15, 0xca, 0x42, 0xaf, 0x73, 0xcd, 0x1d, 0x52, 0x6d, 0x5e,
+ 0x0e, 0x54, 0x87, 0x02, 0xf8, 0x96, 0xf1, 0x8a, 0x60, 0x18, 0xe3, 0xa1, 0x47, 0x00, 0xc4, 0xb3,
+ 0x93, 0xeb, 0xf2, 0x41, 0x55, 0x04, 0x44, 0x2c, 0x6f, 0xa4, 0x84, 0xdd, 0x9a, 0x53, 0x38, 0x4a,
+ 0xc8, 0x57, 0xb2, 0x42, 0x9e, 0x9b, 0x4e, 0x2c, 0x59, 0xf3, 0xb9, 0x14, 0xd3, 0xcf, 0x05, 0xff,
+ 0xcd, 0x82, 0xe6, 0x96, 0xbe, 0xf9, 0x03, 0x8a, 0x43, 0xf3, 0x9b, 0x7f, 0x48, 0xfc, 0x16, 0x1e,
+ 0x22, 0xbf, 0x76, 0x86, 0xdf, 0x26, 0xc0, 0x96, 0xe7, 0xd3, 0x4b, 0xde, 0x80, 0xd1, 0x70, 0x46,
+ 0x91, 0xf4, 0xc3, 0x42, 0xe2, 0x71, 0x08, 0xdd, 0xd1, 0x32, 0xd8, 0x30, 0xdc, 0xfc, 0xc3, 0x60,
+ 0x31, 0xff, 0x10, 0x59, 0x2c, 0x64, 0x3c, 0xa0, 0x0f, 0x0b, 0x3b, 0x82, 0x3d, 0x19, 0xb1, 0x53,
+ 0xfd, 0xa7, 0x84, 0xf7, 0xf6, 0x37, 0xd4, 0xe1, 0xcf, 0xdc, 0x27, 0xe1, 0x12, 0x7d, 0xc4, 0x56,
+ 0x34, 0xf1, 0x99, 0xfb, 0x8e, 0xb1, 0x9f, 0xe8, 0x43, 0x90, 0xab, 0x72, 0xba, 0xe2, 0xcc, 0x9c,
+ 0xee, 0x79, 0x75, 0xcc, 0x7f, 0x92, 0xd7, 0xe1, 0xe7, 0x13, 0x07, 0x2b, 0x94, 0xa2, 0x1c, 0xec,
+ 0x63, 0xf7, 0x7b, 0xfe, 0xea, 0xd1, 0xff, 0xce, 0x82, 0xe5, 0xcb, 0x94, 0xa5, 0x73, 0xac, 0xaf,
+ 0x90, 0x4a, 0xf1, 0x4b, 0x70, 0xd8, 0xb8, 0xbf, 0xe2, 0xfe, 0xa9, 0x4c, 0x62, 0x75, 0x24, 0xe1,
+ 0xff, 0x8a, 0xdf, 0xa5, 0xef, 0xa8, 0x7a, 0x35, 0x9d, 0x53, 0xdd, 0x80, 0xaa, 0xb1, 0x88, 0x2e,
+ 0x66, 0xb2, 0xa9, 0x95, 0x4c, 0x9b, 0x96, 0x67, 0x04, 0xed, 0xba, 0xe2, 0x49, 0x56, 0xa5, 0x2a,
+ 0x57, 0x8e, 0x33, 0x8f, 0x6d, 0x40, 0x42, 0x5d, 0x82, 0xac, 0x19, 0xfb, 0x04, 0xf4, 0xe5, 0x38,
+ 0xad, 0x8a, 0xe7, 0xe8, 0x51, 0xb0, 0xc3, 0xe0, 0x9e, 0x4e, 0x93, 0x17, 0x93, 0x23, 0x49, 0x70,
+ 0x8f, 0x88, 0x25, 0xfc, 0x1c, 0x14, 0x48, 0x70, 0x0f, 0x35, 0x01, 0x42, 0xd7, 0xef, 0xd1, 0x5b,
+ 0x71, 0x81, 0x56, 0x23, 0x06, 0xe4, 0x80, 0xbc, 0x64, 0x03, 0x0e, 0x9b, 0x37, 0x92, 0xea, 0x5e,
+ 0x87, 0x85, 0x57, 0xc7, 0xa6, 0xb8, 0xea, 0x19, 0x71, 0xc9, 0x3e, 0x80, 0x46, 0xe2, 0x36, 0x03,
+ 0x09, 0x1c, 0x9d, 0x80, 0x0a, 0x73, 0x6f, 0x0f, 0xe8, 0xb5, 0xc4, 0x05, 0x26, 0x00, 0xbe, 0xca,
+ 0x6b, 0xcb, 0x5b, 0x46, 0x82, 0x95, 0x00, 0xd0, 0x13, 0xb0, 0x9c, 0xdc, 0xf9, 0x46, 0x48, 0x77,
+ 0xbc, 0x77, 0x84, 0x86, 0x6b, 0x64, 0x1f, 0x1c, 0x9d, 0x84, 0x43, 0x09, 0x6c, 0x5b, 0x24, 0x32,
+ 0xb6, 0x40, 0xcd, 0x82, 0xb9, 0x6c, 0x04, 0xbb, 0x2f, 0xde, 0x1d, 0xbb, 0x03, 0xf1, 0xf8, 0x6a,
+ 0xc4, 0x80, 0xe0, 0xdf, 0x5b, 0x70, 0x58, 0xaa, 0x9a, 0xb9, 0xec, 0x2b, 0x69, 0xf5, 0xbf, 0xb0,
+ 0x00, 0x99, 0x1c, 0x28, 0xd3, 0xfa, 0x5f, 0xb3, 0xcf, 0xc4, 0x33, 0xa5, 0xaa, 0x28, 0x99, 0x25,
+ 0x28, 0x69, 0x15, 0x61, 0x28, 0x75, 0x64, 0x3f, 0x4d, 0x34, 0xc6, 0x65, 0x4d, 0x2e, 0x21, 0x44,
+ 0x7d, 0x23, 0x07, 0x8a, 0xb7, 0x27, 0x8c, 0x46, 0xaa, 0xa2, 0x16, 0xad, 0x04, 0x01, 0x20, 0xf2,
+ 0x8b, 0x9f, 0x45, 0x7d, 0x26, 0xac, 0xc6, 0x4e, 0xce, 0x52, 0x20, 0xa2, 0x07, 0xf8, 0x9f, 0x79,
+ 0x58, 0xbc, 0x15, 0x0c, 0xc6, 0x49, 0xd0, 0xfc, 0x2a, 0x05, 0x8c, 0x54, 0x99, 0x5f, 0xd4, 0x65,
+ 0x3e, 0x02, 0x3b, 0x62, 0x74, 0x24, 0x2c, 0xab, 0x40, 0xc4, 0x18, 0x61, 0xa8, 0x31, 0x37, 0xec,
+ 0x51, 0x26, 0x8b, 0xa7, 0x46, 0x49, 0x64, 0xb5, 0x29, 0x18, 0x5a, 0x83, 0xaa, 0xdb, 0xeb, 0x85,
+ 0xb4, 0xe7, 0x32, 0xda, 0x9e, 0x34, 0x16, 0xc4, 0x61, 0x26, 0x08, 0x5d, 0x85, 0xa5, 0x8e, 0xdb,
+ 0xe9, 0x7b, 0x7e, 0xef, 0xfa, 0x88, 0x79, 0x81, 0x1f, 0x35, 0xca, 0x22, 0x74, 0x9c, 0x58, 0x37,
+ 0x7f, 0x68, 0x5a, 0xdf, 0x48, 0xe1, 0x28, 0x3f, 0x96, 0xd9, 0x89, 0xdf, 0x80, 0x25, 0x2d, 0x78,
+ 0x65, 0x1e, 0x67, 0x60, 0xe1, 0x6d, 0x01, 0x99, 0xd1, 0xc2, 0x93, 0xa8, 0x8a, 0x94, 0x46, 0x4b,
+ 0xff, 0x54, 0xa1, 0xf9, 0xc7, 0x57, 0xa1, 0x24, 0xd1, 0xd1, 0x09, 0xb3, 0x9c, 0x92, 0x19, 0x25,
+ 0x9f, 0xab, 0xda, 0x08, 0x43, 0x49, 0x12, 0x52, 0x46, 0x24, 0xec, 0x4c, 0x42, 0x88, 0xfa, 0xc6,
+ 0x7f, 0xb7, 0xe0, 0xc8, 0x26, 0x65, 0xb4, 0xc3, 0x68, 0xf7, 0x92, 0x47, 0x07, 0xdd, 0x2f, 0xb4,
+ 0xd2, 0x8f, 0xfb, 0x75, 0x05, 0xa3, 0x5f, 0xc7, 0x7d, 0xd8, 0xc0, 0xf3, 0xe9, 0x96, 0xd1, 0xf0,
+ 0x49, 0x00, 0xdc, 0xdb, 0xec, 0xf0, 0x8b, 0xcb, 0x65, 0xf9, 0xdb, 0x90, 0x01, 0x89, 0xad, 0xa5,
+ 0x94, 0x58, 0x0b, 0xfe, 0x9e, 0x05, 0x47, 0xb3, 0x5c, 0x2b, 0x25, 0xb5, 0xa0, 0x24, 0x36, 0xcf,
+ 0x68, 0x15, 0xa7, 0x76, 0x10, 0x85, 0x86, 0xce, 0xa7, 0xce, 0x17, 0xbf, 0x29, 0xb5, 0x1b, 0x7b,
+ 0x53, 0xa7, 0x9e, 0x40, 0x8d, 0x6e, 0x84, 0x81, 0x8b, 0xff, 0xc8, 0x6b, 0x76, 0x93, 0xa6, 0xd0,
+ 0x37, 0xb7, 0x55, 0xe5, 0xc7, 0xe5, 0x04, 0xfd, 0x1f, 0xd8, 0x6c, 0x32, 0x52, 0xee, 0xbb, 0x7d,
+ 0xe4, 0xb3, 0xa9, 0x73, 0x38, 0xb5, 0xed, 0xe6, 0x64, 0x44, 0x89, 0x40, 0xe1, 0x26, 0xde, 0x71,
+ 0xc3, 0xae, 0xe7, 0xbb, 0x03, 0x8f, 0x49, 0x31, 0xda, 0xc4, 0x04, 0x71, 0xbf, 0x31, 0x72, 0xc3,
+ 0x48, 0xe7, 0x60, 0x15, 0xe9, 0x37, 0x14, 0x88, 0xe8, 0x81, 0xe8, 0xad, 0xdc, 0xa1, 0xac, 0xd3,
+ 0x97, 0xfe, 0x5b, 0xf5, 0x56, 0x04, 0x24, 0xd5, 0x5b, 0x11, 0x10, 0xfc, 0x33, 0xc3, 0x8a, 0xe4,
+ 0x63, 0xfb, 0xd2, 0x59, 0x11, 0xfe, 0x56, 0xa2, 0x72, 0x7d, 0x45, 0xa5, 0xf2, 0x17, 0x60, 0xa9,
+ 0x9b, 0x5a, 0x39, 0x58, 0xf5, 0xb2, 0x6f, 0x9c, 0x41, 0xc7, 0xe3, 0x44, 0x8f, 0x02, 0x72, 0x80,
+ 0x1e, 0x33, 0xca, 0xc9, 0xef, 0x57, 0x4e, 0x22, 0xf5, 0xc2, 0xfd, 0xa5, 0xfe, 0xc4, 0x63, 0x50,
+ 0x89, 0x7f, 0x23, 0x44, 0x55, 0x58, 0xb8, 0x74, 0x9d, 0xbc, 0x7e, 0x91, 0x6c, 0x2e, 0xe7, 0x50,
+ 0x0d, 0xca, 0xed, 0x8b, 0x1b, 0x2f, 0x8b, 0x99, 0x75, 0xee, 0x57, 0x25, 0x9d, 0x61, 0x84, 0xe8,
+ 0xeb, 0x50, 0x94, 0x69, 0xc3, 0xd1, 0x84, 0x39, 0xf3, 0xe7, 0xb3, 0xd5, 0x63, 0xfb, 0xe0, 0x52,
+ 0x4a, 0x38, 0x77, 0xc6, 0x42, 0xd7, 0xa0, 0x2a, 0x80, 0xaa, 0x41, 0x7d, 0x22, 0xdb, 0x27, 0x4e,
+ 0x51, 0x7a, 0xe4, 0x80, 0x55, 0x83, 0xde, 0x05, 0x28, 0x4a, 0x81, 0x1d, 0xcd, 0x64, 0x77, 0x33,
+ 0x6e, 0x93, 0x6a, 0xd9, 0xe3, 0x1c, 0x7a, 0x16, 0xec, 0x9b, 0xae, 0x37, 0x40, 0x46, 0x72, 0x69,
+ 0xf4, 0x95, 0x57, 0x8f, 0x66, 0xc1, 0xc6, 0xb1, 0xcf, 0xc7, 0xed, 0xf1, 0x63, 0xd9, 0x1e, 0x9d,
+ 0xde, 0xde, 0xd8, 0xbf, 0x10, 0x9f, 0x7c, 0x5d, 0x36, 0x71, 0x75, 0xa7, 0x08, 0x3d, 0x92, 0x3e,
+ 0x2a, 0xd3, 0x58, 0x5a, 0x6d, 0x1e, 0xb4, 0x1c, 0x13, 0xdc, 0x82, 0xaa, 0xd1, 0xa5, 0x31, 0xc5,
+ 0xba, 0xbf, 0xc5, 0x64, 0x8a, 0x75, 0x46, 0x6b, 0x07, 0xe7, 0xd0, 0x65, 0x28, 0xf3, 0x94, 0x5c,
+ 0xfc, 0x9a, 0x73, 0x3c, 0x9b, 0x79, 0x1b, 0x19, 0xd7, 0xea, 0x89, 0xd9, 0x8b, 0x31, 0xa1, 0x6f,
+ 0x42, 0xe5, 0x32, 0x65, 0x2a, 0xd4, 0x1c, 0xcb, 0xc6, 0xaa, 0x19, 0x92, 0x4a, 0xc7, 0x3b, 0x9c,
+ 0x43, 0x6f, 0x88, 0xea, 0x20, 0xed, 0x69, 0x91, 0x73, 0x80, 0x47, 0x8d, 0xef, 0xb5, 0x76, 0x30,
+ 0x42, 0x4c, 0xf9, 0xf5, 0x14, 0x65, 0x15, 0xe0, 0x9d, 0x03, 0x1e, 0x6c, 0x4c, 0xd9, 0xb9, 0xcf,
+ 0x7f, 0x3d, 0x70, 0xee, 0xdc, 0x9b, 0xfa, 0xef, 0x0e, 0x9b, 0x2e, 0x73, 0xd1, 0x75, 0x58, 0x12,
+ 0xb2, 0x8c, 0xff, 0x0f, 0x91, 0xb2, 0xf9, 0x7d, 0x7f, 0xbe, 0x48, 0xd9, 0xfc, 0xfe, 0x3f, 0x61,
+ 0xe0, 0x5c, 0xfb, 0xcd, 0x0f, 0x3e, 0x6e, 0xe6, 0x3e, 0xfc, 0xb8, 0x99, 0xfb, 0xf4, 0xe3, 0xa6,
+ 0xf5, 0xdd, 0xdd, 0xa6, 0xf5, 0xcb, 0xdd, 0xa6, 0xf5, 0xfe, 0x6e, 0xd3, 0xfa, 0x60, 0xb7, 0x69,
+ 0xfd, 0x65, 0xb7, 0x69, 0xfd, 0x75, 0xb7, 0x99, 0xfb, 0x74, 0xb7, 0x69, 0xbd, 0xfb, 0x49, 0x33,
+ 0xf7, 0xc1, 0x27, 0xcd, 0xdc, 0x87, 0x9f, 0x34, 0x73, 0xdf, 0x7e, 0xfc, 0xfe, 0x95, 0xb0, 0x74,
+ 0x8b, 0x25, 0xf1, 0xf5, 0xd4, 0xbf, 0x02, 0x00, 0x00, 0xff, 0xff, 0xf9, 0xfc, 0x48, 0x69, 0xc6,
+ 0x23, 0x00, 0x00,
}
func (x Direction) String() string {
diff --git a/pkg/logproto/logproto.proto b/pkg/logproto/logproto.proto
index 483b8a1aeec64..18f9d6c01adb6 100644
--- a/pkg/logproto/logproto.proto
+++ b/pkg/logproto/logproto.proto
@@ -474,7 +474,7 @@ message DetectedField {
string label = 1;
string type = 2 [(gogoproto.casttype) = "DetectedFieldType"];
uint64 cardinality = 3;
- repeated string parsers = 4;
+ repeated string parsers = 4 [(gogoproto.jsontag) = "parsers"];
bytes sketch = 5 [(gogoproto.jsontag) = "sketch,omitempty"];
}
diff --git a/pkg/logql/accumulator.go b/pkg/logql/accumulator.go
index 446433f9a9144..434af93cb3c28 100644
--- a/pkg/logql/accumulator.go
+++ b/pkg/logql/accumulator.go
@@ -41,12 +41,19 @@ func (a *BufferedAccumulator) Result() []logqlmodel.Result {
type QuantileSketchAccumulator struct {
matrix ProbabilisticQuantileMatrix
+
+ stats stats.Result // for accumulating statistics from downstream requests
+ headers map[string][]string // for accumulating headers from downstream requests
+ warnings map[string]struct{} // for accumulating warnings from downstream requests}
}
// newQuantileSketchAccumulator returns an accumulator for sharded
// probabilistic quantile queries that merges results as they come in.
func newQuantileSketchAccumulator() *QuantileSketchAccumulator {
- return &QuantileSketchAccumulator{}
+ return &QuantileSketchAccumulator{
+ headers: make(map[string][]string),
+ warnings: make(map[string]struct{}),
+ }
}
func (a *QuantileSketchAccumulator) Accumulate(_ context.Context, res logqlmodel.Result, _ int) error {
@@ -57,6 +64,21 @@ func (a *QuantileSketchAccumulator) Accumulate(_ context.Context, res logqlmodel
if !ok {
return fmt.Errorf("unexpected matrix type: got (%T), want (ProbabilisticQuantileMatrix)", res.Data)
}
+
+ // TODO(owen-d/ewelch): Shard counts should be set by the querier
+ // so we don't have to do it in tricky ways in multiple places.
+ // See pkg/logql/downstream.go:DownstreamEvaluator.Downstream
+ // for another example.
+ if res.Statistics.Summary.Shards == 0 {
+ res.Statistics.Summary.Shards = 1
+ }
+ a.stats.Merge(res.Statistics)
+ metadata.ExtendHeaders(a.headers, res.Headers)
+
+ for _, w := range res.Warnings {
+ a.warnings[w] = struct{}{}
+ }
+
if a.matrix == nil {
a.matrix = data
return nil
@@ -68,7 +90,28 @@ func (a *QuantileSketchAccumulator) Accumulate(_ context.Context, res logqlmodel
}
func (a *QuantileSketchAccumulator) Result() []logqlmodel.Result {
- return []logqlmodel.Result{{Data: a.matrix}}
+ headers := make([]*definitions.PrometheusResponseHeader, 0, len(a.headers))
+ for name, vals := range a.headers {
+ headers = append(
+ headers,
+ &definitions.PrometheusResponseHeader{
+ Name: name,
+ Values: vals,
+ },
+ )
+ }
+
+ warnings := maps.Keys(a.warnings)
+ sort.Strings(warnings)
+
+ return []logqlmodel.Result{
+ {
+ Data: a.matrix,
+ Headers: headers,
+ Warnings: warnings,
+ Statistics: a.stats,
+ },
+ }
}
// heap impl for keeping only the top n results across m streams
diff --git a/pkg/logql/accumulator_test.go b/pkg/logql/accumulator_test.go
index 0975ea4789d2e..f9652402a5641 100644
--- a/pkg/logql/accumulator_test.go
+++ b/pkg/logql/accumulator_test.go
@@ -13,6 +13,8 @@ import (
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql/sketch"
"github.com/grafana/loki/v3/pkg/logqlmodel"
+ "github.com/grafana/loki/v3/pkg/logqlmodel/stats"
+ "github.com/grafana/loki/v3/pkg/querier/queryrange/queryrangebase/definitions"
)
func TestAccumulatedStreams(t *testing.T) {
@@ -149,6 +151,22 @@ func TestDownstreamAccumulatorMultiMerge(t *testing.T) {
}
}
+func TestQuantileSketchDownstreamAccumulatorSimple(t *testing.T) {
+ acc := newQuantileSketchAccumulator()
+ downstreamResult := newQuantileSketchResults()[0]
+
+ require.Nil(t, acc.Accumulate(context.Background(), downstreamResult, 0))
+
+ res := acc.Result()[0]
+ got, ok := res.Data.(ProbabilisticQuantileMatrix)
+ require.Equal(t, true, ok)
+ require.Equal(t, 10, len(got), "correct number of vectors")
+
+ require.Equal(t, res.Headers[0].Name, "HeaderA")
+ require.Equal(t, res.Warnings, []string{"warning"})
+ require.Equal(t, int64(33), res.Statistics.Summary.Shards)
+}
+
func BenchmarkAccumulator(b *testing.B) {
// dummy params. Only need to populate direction & limit
@@ -172,7 +190,7 @@ func BenchmarkAccumulator(b *testing.B) {
},
"quantile sketches": {
newQuantileSketchResults(),
- func(p Params, _ []logqlmodel.Result) Accumulator {
+ func(_ Params, _ []logqlmodel.Result) Accumulator {
return newQuantileSketchAccumulator()
},
params,
@@ -218,6 +236,9 @@ func newStreamResults() []logqlmodel.Result {
func newQuantileSketchResults() []logqlmodel.Result {
results := make([]logqlmodel.Result, 100)
+ statistics := stats.Result{
+ Summary: stats.Summary{Shards: 33},
+ }
for r := range results {
vectors := make([]ProbabilisticQuantileVector, 10)
@@ -231,7 +252,7 @@ func newQuantileSketchResults() []logqlmodel.Result {
}
}
}
- results[r] = logqlmodel.Result{Data: ProbabilisticQuantileMatrix(vectors)}
+ results[r] = logqlmodel.Result{Data: ProbabilisticQuantileMatrix(vectors), Headers: []*definitions.PrometheusResponseHeader{{Name: "HeaderA", Values: []string{"ValueA"}}}, Warnings: []string{"warning"}, Statistics: statistics}
}
return results
diff --git a/pkg/logql/log/ip.go b/pkg/logql/log/ip.go
index 851cc1a9fa6c7..c02c0a0ab3063 100644
--- a/pkg/logql/log/ip.go
+++ b/pkg/logql/log/ip.go
@@ -282,14 +282,14 @@ func isHexDigit(r byte) bool {
// It returns the number of chars in the initial segment of `s`
// which consist only of chars from `accept`.
func bytesSpan(s, accept []byte) int {
- m := make(map[byte]bool)
+ var charset [256]bool
for _, r := range accept {
- m[r] = true
+ charset[r] = true
}
for i, r := range s {
- if !m[r] {
+ if !charset[r] {
return i
}
}
diff --git a/pkg/logql/log/jsonexpr/lexer.go b/pkg/logql/log/jsonexpr/lexer.go
index f3ba6dcd9536b..2e0241cc18b37 100644
--- a/pkg/logql/log/jsonexpr/lexer.go
+++ b/pkg/logql/log/jsonexpr/lexer.go
@@ -23,7 +23,7 @@ func NewScanner(r io.Reader, debug bool) *Scanner {
}
func (sc *Scanner) Error(s string) {
- sc.err = fmt.Errorf(s)
+ sc.err = fmt.Errorf("%s", s)
fmt.Printf("syntax error: %s\n", s)
}
@@ -53,7 +53,7 @@ func (sc *Scanner) lex(lval *JSONExprSymType) int {
sc.unread()
val, err := sc.scanInt()
if err != nil {
- sc.err = fmt.Errorf(err.Error())
+ sc.err = fmt.Errorf("%s", err.Error())
return 0
}
diff --git a/pkg/logql/log/logfmt/lexer.go b/pkg/logql/log/logfmt/lexer.go
index 06756c0bfea6c..a14bbb55ae434 100644
--- a/pkg/logql/log/logfmt/lexer.go
+++ b/pkg/logql/log/logfmt/lexer.go
@@ -22,7 +22,7 @@ func NewScanner(r io.Reader, debug bool) *Scanner {
}
func (sc *Scanner) Error(s string) {
- sc.err = fmt.Errorf(s)
+ sc.err = fmt.Errorf("%s", s)
fmt.Printf("syntax error: %s\n", s)
}
diff --git a/pkg/logql/log/metrics_extraction.go b/pkg/logql/log/metrics_extraction.go
index e8605f6b293a7..a7e37cfcc042f 100644
--- a/pkg/logql/log/metrics_extraction.go
+++ b/pkg/logql/log/metrics_extraction.go
@@ -22,7 +22,7 @@ const (
type LineExtractor func([]byte) float64
var (
- CountExtractor LineExtractor = func(line []byte) float64 { return 1. }
+ CountExtractor LineExtractor = func(_ []byte) float64 { return 1. }
BytesExtractor LineExtractor = func(line []byte) float64 { return float64(len(line)) }
)
diff --git a/pkg/logql/log/parser.go b/pkg/logql/log/parser.go
index 9a5ae1395069c..c8e65061ba41d 100644
--- a/pkg/logql/log/parser.go
+++ b/pkg/logql/log/parser.go
@@ -4,6 +4,7 @@ import (
"bytes"
"errors"
"fmt"
+ "strings"
"unicode/utf8"
"github.com/grafana/jsonparser"
@@ -39,6 +40,14 @@ var (
errMissingCapture = errors.New("at least one named capture must be supplied")
errFoundAllLabels = errors.New("found all required labels")
errLabelDoesNotMatch = errors.New("found a label with a matcher that didn't match")
+
+ // the rune error replacement is rejected by Prometheus hence replacing them with space.
+ removeInvalidUtf = func(r rune) rune {
+ if r == utf8.RuneError {
+ return 32 // rune value for space
+ }
+ return r
+ }
)
type JSONParser struct {
@@ -200,12 +209,11 @@ func unescapeJSONString(b []byte) string {
return ""
}
res := string(bU)
- // rune error is rejected by Prometheus
- for _, r := range res {
- if r == utf8.RuneError {
- return ""
- }
+
+ if strings.ContainsRune(res, utf8.RuneError) {
+ res = strings.Map(removeInvalidUtf, res)
}
+
return res
}
@@ -339,9 +347,9 @@ func (l *LogfmtParser) Process(_ int64, line []byte, lbs *LabelsBuilder) ([]byte
}
val := l.dec.Value()
- // the rune error replacement is rejected by Prometheus, so we skip it.
+
if bytes.ContainsRune(val, utf8.RuneError) {
- val = nil
+ val = bytes.Map(removeInvalidUtf, val)
}
if !l.keepEmpty && len(val) == 0 {
diff --git a/pkg/logql/log/parser_test.go b/pkg/logql/log/parser_test.go
index 654e35bb13e73..28989b7cb5fe8 100644
--- a/pkg/logql/log/parser_test.go
+++ b/pkg/logql/log/parser_test.go
@@ -83,7 +83,7 @@ func Test_jsonParser_Parse(t *testing.T) {
labels.EmptyLabels(),
labels.FromStrings("counter", "1",
"price__net_", "5.56909",
- "foo", "",
+ "foo", " ",
),
NoParserHints(),
},
@@ -802,7 +802,7 @@ func TestLogfmtParser_parse(t *testing.T) {
"utf8 error rune",
[]byte(`buzz=foo bar=�f`),
labels.EmptyLabels(),
- labels.FromStrings("buzz", "foo"),
+ labels.FromStrings("bar", " f", "buzz", "foo"),
nil,
NoParserHints(),
},
@@ -1037,7 +1037,7 @@ func TestLogfmtParser_keepEmpty(t *testing.T) {
false,
labels.FromStrings("foo", "bar"),
labels.FromStrings("foo", "bar",
- "bar", "buzz"),
+ "bar", "buzz", "foo_extracted", "b r"),
},
{
"utf8 error rune with keep empty",
@@ -1045,7 +1045,7 @@ func TestLogfmtParser_keepEmpty(t *testing.T) {
true,
labels.FromStrings("foo", "bar"),
labels.FromStrings("foo", "bar",
- "foo_extracted", "",
+ "foo_extracted", "b r",
"bar", "buzz"),
},
}
diff --git a/pkg/logql/log/pipeline.go b/pkg/logql/log/pipeline.go
index 1396e552c655e..a205039dd7715 100644
--- a/pkg/logql/log/pipeline.go
+++ b/pkg/logql/log/pipeline.go
@@ -2,10 +2,11 @@ package log
import (
"context"
- "reflect"
"sync"
"unsafe"
+ "github.com/prometheus/prometheus/storage/remote/otlptranslator/prometheus"
+
"github.com/prometheus/prometheus/model/labels"
)
@@ -67,7 +68,7 @@ func (n *noopPipeline) ForStream(labels labels.Labels) StreamPipeline {
}
n.mu.RUnlock()
- sp := &noopStreamPipeline{n.baseBuilder.ForLabels(labels, h)}
+ sp := &noopStreamPipeline{n.baseBuilder.ForLabels(labels, h), make([]int, 0, 10)}
n.mu.Lock()
defer n.mu.Unlock()
@@ -92,7 +93,8 @@ func IsNoopPipeline(p Pipeline) bool {
}
type noopStreamPipeline struct {
- builder *LabelsBuilder
+ builder *LabelsBuilder
+ offsetsBuf []int
}
func (n noopStreamPipeline) ReferencedStructuredMetadata() bool {
@@ -101,6 +103,9 @@ func (n noopStreamPipeline) ReferencedStructuredMetadata() bool {
func (n noopStreamPipeline) Process(_ int64, line []byte, structuredMetadata ...labels.Label) ([]byte, LabelsResult, bool) {
n.builder.Reset()
+ for i, lb := range structuredMetadata {
+ structuredMetadata[i].Name = prometheus.NormalizeLabel(lb.Name)
+ }
n.builder.Add(StructuredMetadataLabel, structuredMetadata...)
return line, n.builder.LabelsResult(), true
}
@@ -176,12 +181,13 @@ func NewPipeline(stages []Stage) Pipeline {
}
type streamPipeline struct {
- stages []Stage
- builder *LabelsBuilder
+ stages []Stage
+ builder *LabelsBuilder
+ offsetsBuf []int
}
func NewStreamPipeline(stages []Stage, labelsBuilder *LabelsBuilder) StreamPipeline {
- return &streamPipeline{stages, labelsBuilder}
+ return &streamPipeline{stages, labelsBuilder, make([]int, 0, 10)}
}
func (p *pipeline) ForStream(labels labels.Labels) StreamPipeline {
@@ -220,6 +226,11 @@ func (p *streamPipeline) ReferencedStructuredMetadata() bool {
func (p *streamPipeline) Process(ts int64, line []byte, structuredMetadata ...labels.Label) ([]byte, LabelsResult, bool) {
var ok bool
p.builder.Reset()
+
+ for i, lb := range structuredMetadata {
+ structuredMetadata[i].Name = prometheus.NormalizeLabel(lb.Name)
+ }
+
p.builder.Add(StructuredMetadataLabel, structuredMetadata...)
for _, s := range p.stages {
@@ -371,11 +382,7 @@ func ReduceStages(stages []Stage) Stage {
}
func unsafeGetBytes(s string) []byte {
- var buf []byte
- p := unsafe.Pointer(&buf)
- *(*string)(p) = s
- (*reflect.SliceHeader)(p).Cap = len(s)
- return buf
+ return unsafe.Slice(unsafe.StringData(s), len(s))
}
func unsafeGetString(buf []byte) string {
diff --git a/pkg/logql/log/pipeline_test.go b/pkg/logql/log/pipeline_test.go
index ffa5df0d50b98..8c11d0c198a10 100644
--- a/pkg/logql/log/pipeline_test.go
+++ b/pkg/logql/log/pipeline_test.go
@@ -56,6 +56,18 @@ func TestNoopPipeline(t *testing.T) {
require.Equal(t, expectedLabelsResults.String(), lbr.String())
require.Equal(t, true, matches)
+ // test structured metadata with disallowed label names
+ structuredMetadata = append(labels.FromStrings("y", "1", "z", "2"), labels.Label{Name: "zsomething-bad", Value: "foo"})
+ expectedStructuredMetadata := append(labels.FromStrings("y", "1", "z", "2"), labels.Label{Name: "zsomething_bad", Value: "foo"})
+ expectedLabelsResults = append(lbs, expectedStructuredMetadata...)
+
+ l, lbr, matches = pipeline.ForStream(lbs).Process(0, []byte(""), structuredMetadata...)
+ require.Equal(t, []byte(""), l)
+ require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, expectedStructuredMetadata, labels.EmptyLabels()), lbr)
+ require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
+ require.Equal(t, expectedLabelsResults.String(), lbr.String())
+ require.Equal(t, true, matches)
+
pipeline.Reset()
require.Len(t, pipeline.cache, 0)
}
@@ -171,6 +183,17 @@ func TestPipelineWithStructuredMetadata(t *testing.T) {
require.Equal(t, nil, lbr)
require.Equal(t, false, matches)
+ // test structured metadata with disallowed label names
+ withBadLabel := append(structuredMetadata, labels.Label{Name: "zsomething-bad", Value: "foo"})
+ expectedStructuredMetadata := append(structuredMetadata, labels.Label{Name: "zsomething_bad", Value: "foo"})
+ expectedLabelsResults = append(lbs, expectedStructuredMetadata...)
+
+ _, lbr, matches = p.ForStream(lbs).Process(0, []byte(""), withBadLabel...)
+ require.Equal(t, NewLabelsResult(expectedLabelsResults.String(), expectedLabelsResults.Hash(), lbs, expectedStructuredMetadata, labels.EmptyLabels()), lbr)
+ require.Equal(t, expectedLabelsResults.Hash(), lbr.Hash())
+ require.Equal(t, expectedLabelsResults.String(), lbr.String())
+ require.Equal(t, true, matches)
+
// Reset caches
p.baseBuilder.del = []string{"foo", "bar"}
p.baseBuilder.add = [numValidCategories]labels.Labels{
@@ -523,6 +546,42 @@ func TestKeepLabelsPipeline(t *testing.T) {
}
+func TestUnsafeGetBytes(t *testing.T) {
+ tests := []struct {
+ name string
+ input string
+ want []byte
+ }{
+ {
+ name: "empty string",
+ input: "",
+ want: nil,
+ },
+ {
+ name: "simple string",
+ input: "hello",
+ want: []byte{'h', 'e', 'l', 'l', 'o'},
+ },
+ {
+ name: "string with spaces",
+ input: "hello world",
+ want: []byte{'h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd'},
+ },
+ {
+ name: "string with special characters",
+ input: "hello\nworld\t!",
+ want: []byte{'h', 'e', 'l', 'l', 'o', '\n', 'w', 'o', 'r', 'l', 'd', '\t', '!'},
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ got := unsafeGetBytes(tt.input)
+ require.Equal(t, tt.want, got)
+ })
+ }
+}
+
func Benchmark_Pipeline(b *testing.B) {
b.ReportAllocs()
@@ -566,6 +625,19 @@ func Benchmark_Pipeline(b *testing.B) {
}
})
+ b.Run("pipeline bytes no invalid structured metadata", func(b *testing.B) {
+ b.ResetTimer()
+ for n := 0; n < b.N; n++ {
+ resLine, resLbs, resMatches = sp.Process(0, line, labels.Label{Name: "valid_name", Value: "foo"})
+ }
+ })
+ b.Run("pipeline string with invalid structured metadata", func(b *testing.B) {
+ b.ResetTimer()
+ for n := 0; n < b.N; n++ {
+ resLine, resLbs, resMatches = sp.Process(0, line, labels.Label{Name: "invalid-name", Value: "foo"}, labels.Label{Name: "other-invalid-name", Value: "foo"})
+ }
+ })
+
extractor, err := NewLineSampleExtractor(CountExtractor, stages, []string{"cluster", "level"}, false, false)
require.NoError(b, err)
ex := extractor.ForStream(lbs)
diff --git a/pkg/logql/metrics.go b/pkg/logql/metrics.go
index d06a8fbac7d2b..5b6422b815357 100644
--- a/pkg/logql/metrics.go
+++ b/pkg/logql/metrics.go
@@ -8,7 +8,6 @@ import (
"time"
"github.com/c2h5oh/datasize"
- "github.com/dustin/go-humanize"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
@@ -156,9 +155,9 @@ func RecordRangeAndInstantQueryMetrics(
"status", status,
"limit", p.Limit(),
"returned_lines", returnedLines,
- "throughput", humanizeBytes(uint64(stats.Summary.BytesProcessedPerSecond)),
- "total_bytes", humanizeBytes(uint64(stats.Summary.TotalBytesProcessed)),
- "total_bytes_structured_metadata", humanizeBytes(uint64(stats.Summary.TotalStructuredMetadataBytesProcessed)),
+ "throughput", util.HumanizeBytes(uint64(stats.Summary.BytesProcessedPerSecond)),
+ "total_bytes", util.HumanizeBytes(uint64(stats.Summary.TotalBytesProcessed)),
+ "total_bytes_structured_metadata", util.HumanizeBytes(uint64(stats.Summary.TotalStructuredMetadataBytesProcessed)),
"lines_per_second", stats.Summary.LinesProcessedPerSecond,
"total_lines", stats.Summary.TotalLinesProcessed,
"post_filter_lines", stats.Summary.TotalPostFilterLines,
@@ -197,11 +196,11 @@ func RecordRangeAndInstantQueryMetrics(
// Total ingester reached for this query.
"ingester_requests", stats.Ingester.GetTotalReached(),
// Total bytes processed but was already in memory (found in the headchunk). Includes structured metadata bytes.
- "ingester_chunk_head_bytes", humanizeBytes(uint64(stats.Ingester.Store.Chunk.GetHeadChunkBytes())),
+ "ingester_chunk_head_bytes", util.HumanizeBytes(uint64(stats.Ingester.Store.Chunk.GetHeadChunkBytes())),
// Total bytes of compressed chunks (blocks) processed.
- "ingester_chunk_compressed_bytes", humanizeBytes(uint64(stats.Ingester.Store.Chunk.GetCompressedBytes())),
+ "ingester_chunk_compressed_bytes", util.HumanizeBytes(uint64(stats.Ingester.Store.Chunk.GetCompressedBytes())),
// Total bytes decompressed and processed from chunks. Includes structured metadata bytes.
- "ingester_chunk_decompressed_bytes", humanizeBytes(uint64(stats.Ingester.Store.Chunk.GetDecompressedBytes())),
+ "ingester_chunk_decompressed_bytes", util.HumanizeBytes(uint64(stats.Ingester.Store.Chunk.GetDecompressedBytes())),
// Total lines post filtering.
"ingester_post_filter_lines", stats.Ingester.Store.Chunk.GetPostFilterLines(),
// Time spent being blocked on congestion control.
@@ -243,10 +242,6 @@ func RecordRangeAndInstantQueryMetrics(
recordUsageStats(queryType, stats)
}
-func humanizeBytes(val uint64) string {
- return strings.Replace(humanize.Bytes(val), " ", "", 1)
-}
-
func RecordLabelQueryMetrics(
ctx context.Context,
log log.Logger,
diff --git a/pkg/logql/sketch/topk.go b/pkg/logql/sketch/topk.go
index e5efad409727b..e6f2c036f3675 100644
--- a/pkg/logql/sketch/topk.go
+++ b/pkg/logql/sketch/topk.go
@@ -2,7 +2,6 @@ package sketch
import (
"container/heap"
- "reflect"
"sort"
"unsafe"
@@ -210,14 +209,8 @@ func (t *Topk) updateBF(removed, added string) {
}
}
-// todo: is there a way to save more bytes/allocs via a pool?
func unsafeGetBytes(s string) []byte {
- if s == "" {
- return nil // or []byte{}
- }
- return (*[0x7fff0000]byte)(unsafe.Pointer(
- (*reflect.StringHeader)(unsafe.Pointer(&s)).Data),
- )[:len(s):len(s)]
+ return unsafe.Slice(unsafe.StringData(s), len(s))
}
// Observe is our sketch event observation function, which is a bit more complex than the original count min sketch + heap TopK
diff --git a/pkg/logql/sketch/topk_slow_test.go b/pkg/logql/sketch/topk_slow_test.go
index 8545a0cde65c2..8f2095d987de6 100644
--- a/pkg/logql/sketch/topk_slow_test.go
+++ b/pkg/logql/sketch/topk_slow_test.go
@@ -171,7 +171,6 @@ func TestCMSTopk(t *testing.T) {
oTotal += int(n)
}
- rand.Seed(time.Now().UnixNano())
rand.Shuffle(len(events), func(i, j int) { events[i], events[j] = events[j], events[i] })
for _, e := range events {
@@ -254,7 +253,6 @@ func BenchmarkCMSTopk(b *testing.B) {
oTotal += int(n)
}
- rand.Seed(time.Now().UnixNano())
rand.Shuffle(len(events), func(i, j int) { events[i], events[j] = events[j], events[i] })
b.StartTimer()
@@ -304,7 +302,6 @@ func TestBFTopK(t *testing.T) {
}
}
- rand.Seed(time.Now().UnixNano())
rand.Shuffle(len(events), func(i, j int) { events[i], events[j] = events[j], events[i] })
topk, _ := NewSketchBF(100, 27189, 7)
diff --git a/pkg/logql/sketch/topk_test.go b/pkg/logql/sketch/topk_test.go
index fc59a2c52aeeb..d923cc60fc32f 100644
--- a/pkg/logql/sketch/topk_test.go
+++ b/pkg/logql/sketch/topk_test.go
@@ -9,7 +9,6 @@ import (
"sort"
"strconv"
"testing"
- "time"
"github.com/alicebob/miniredis/v2/hyperloglog"
"github.com/stretchr/testify/assert"
@@ -69,7 +68,6 @@ func TestTopK_Merge(t *testing.T) {
}
}
- rand.Seed(time.Now().UnixNano()) //nolint:all
rand.Shuffle(len(events), func(i, j int) { events[i], events[j] = events[j], events[i] })
topk1, err := NewCMSTopkForCardinality(nil, k, nStreams)
diff --git a/pkg/logql/syntax/ast.go b/pkg/logql/syntax/ast.go
index 38231a936a02b..0ecab6313a40f 100644
--- a/pkg/logql/syntax/ast.go
+++ b/pkg/logql/syntax/ast.go
@@ -60,7 +60,7 @@ func ExtractLineFilters(e Expr) []LineFilterExpr {
}
var filters []LineFilterExpr
visitor := &DepthFirstTraversal{
- VisitLineFilterFn: func(v RootVisitor, e *LineFilterExpr) {
+ VisitLineFilterFn: func(_ RootVisitor, e *LineFilterExpr) {
if e != nil {
filters = append(filters, *e)
}
diff --git a/pkg/logql/syntax/visit.go b/pkg/logql/syntax/visit.go
index d4478346859fc..968c5b53b01b5 100644
--- a/pkg/logql/syntax/visit.go
+++ b/pkg/logql/syntax/visit.go
@@ -95,7 +95,7 @@ func (v *DepthFirstTraversal) VisitDropLabels(e *DropLabelsExpr) {
if e == nil {
return
}
- if v.VisitDecolorizeFn != nil {
+ if v.VisitDropLabelsFn != nil {
v.VisitDropLabelsFn(v, e)
}
}
diff --git a/pkg/logql/syntax/visit_test.go b/pkg/logql/syntax/visit_test.go
index eeb040ce83a1a..445f165d9057a 100644
--- a/pkg/logql/syntax/visit_test.go
+++ b/pkg/logql/syntax/visit_test.go
@@ -12,16 +12,16 @@ func TestDepthFirstTraversalVisitor(t *testing.T) {
visited := [][2]string{}
visitor := &DepthFirstTraversal{
- VisitLabelParserFn: func(v RootVisitor, e *LabelParserExpr) {
+ VisitLabelParserFn: func(_ RootVisitor, e *LabelParserExpr) {
visited = append(visited, [2]string{fmt.Sprintf("%T", e), e.String()})
},
- VisitLineFilterFn: func(v RootVisitor, e *LineFilterExpr) {
+ VisitLineFilterFn: func(_ RootVisitor, e *LineFilterExpr) {
visited = append(visited, [2]string{fmt.Sprintf("%T", e), e.String()})
},
- VisitLogfmtParserFn: func(v RootVisitor, e *LogfmtParserExpr) {
+ VisitLogfmtParserFn: func(_ RootVisitor, e *LogfmtParserExpr) {
visited = append(visited, [2]string{fmt.Sprintf("%T", e), e.String()})
},
- VisitMatchersFn: func(v RootVisitor, e *MatchersExpr) {
+ VisitMatchersFn: func(_ RootVisitor, e *MatchersExpr) {
visited = append(visited, [2]string{fmt.Sprintf("%T", e), e.String()})
},
}
diff --git a/pkg/loki/config_wrapper.go b/pkg/loki/config_wrapper.go
index 48deb5151bb57..3885dffe6263b 100644
--- a/pkg/loki/config_wrapper.go
+++ b/pkg/loki/config_wrapper.go
@@ -276,6 +276,21 @@ func applyConfigToRings(r, defaults *ConfigWrapper, rc lokiring.RingConfig, merg
r.Pattern.LifecyclerConfig.ObservePeriod = rc.ObservePeriod
}
+ if mergeWithExisting {
+ r.KafkaIngester.LifecyclerConfig.RingConfig.KVStore = rc.KVStore
+ r.KafkaIngester.LifecyclerConfig.HeartbeatPeriod = rc.HeartbeatPeriod
+ r.KafkaIngester.LifecyclerConfig.RingConfig.HeartbeatTimeout = rc.HeartbeatTimeout
+ r.KafkaIngester.LifecyclerConfig.TokensFilePath = rc.TokensFilePath
+ r.KafkaIngester.LifecyclerConfig.RingConfig.ZoneAwarenessEnabled = rc.ZoneAwarenessEnabled
+ r.KafkaIngester.LifecyclerConfig.ID = rc.InstanceID
+ r.KafkaIngester.LifecyclerConfig.InfNames = rc.InstanceInterfaceNames
+ r.KafkaIngester.LifecyclerConfig.Port = rc.InstancePort
+ r.KafkaIngester.LifecyclerConfig.Addr = rc.InstanceAddr
+ r.KafkaIngester.LifecyclerConfig.Zone = rc.InstanceZone
+ r.KafkaIngester.LifecyclerConfig.ListenPort = rc.ListenPort
+ r.KafkaIngester.LifecyclerConfig.ObservePeriod = rc.ObservePeriod
+ }
+
// Distributor
if mergeWithExisting || reflect.DeepEqual(r.Distributor.DistributorRing, defaults.Distributor.DistributorRing) {
r.Distributor.DistributorRing.HeartbeatTimeout = rc.HeartbeatTimeout
@@ -336,20 +351,6 @@ func applyConfigToRings(r, defaults *ConfigWrapper, rc lokiring.RingConfig, merg
r.IndexGateway.Ring.ZoneAwarenessEnabled = rc.ZoneAwarenessEnabled
r.IndexGateway.Ring.KVStore = rc.KVStore
}
-
- // BloomCompactor
- if mergeWithExisting || reflect.DeepEqual(r.BloomCompactor.Ring, defaults.BloomCompactor.Ring) {
- r.BloomCompactor.Ring.HeartbeatTimeout = rc.HeartbeatTimeout
- r.BloomCompactor.Ring.HeartbeatPeriod = rc.HeartbeatPeriod
- r.BloomCompactor.Ring.InstancePort = rc.InstancePort
- r.BloomCompactor.Ring.InstanceAddr = rc.InstanceAddr
- r.BloomCompactor.Ring.InstanceID = rc.InstanceID
- r.BloomCompactor.Ring.InstanceInterfaceNames = rc.InstanceInterfaceNames
- r.BloomCompactor.Ring.InstanceZone = rc.InstanceZone
- r.BloomCompactor.Ring.ZoneAwarenessEnabled = rc.ZoneAwarenessEnabled
- r.BloomCompactor.Ring.KVStore = rc.KVStore
- r.BloomCompactor.Ring.NumTokens = rc.NumTokens
- }
}
func applyTokensFilePath(cfg *ConfigWrapper) error {
@@ -381,13 +382,6 @@ func applyTokensFilePath(cfg *ConfigWrapper) error {
}
cfg.IndexGateway.Ring.TokensFilePath = f
- // Bloom-Compactor
- f, err = tokensFile(cfg, "bloom-compactor.tokens")
- if err != nil {
- return err
- }
- cfg.BloomCompactor.Ring.TokensFilePath = f
-
// Pattern
f, err = tokensFile(cfg, "pattern.tokens")
if err != nil {
@@ -480,10 +474,6 @@ func appendLoopbackInterface(cfg, defaults *ConfigWrapper) {
if reflect.DeepEqual(cfg.IndexGateway.Ring.InstanceInterfaceNames, defaults.IndexGateway.Ring.InstanceInterfaceNames) {
cfg.IndexGateway.Ring.InstanceInterfaceNames = append(cfg.IndexGateway.Ring.InstanceInterfaceNames, loopbackIface)
}
-
- if reflect.DeepEqual(cfg.BloomCompactor.Ring.InstanceInterfaceNames, defaults.BloomCompactor.Ring.InstanceInterfaceNames) {
- cfg.BloomCompactor.Ring.InstanceInterfaceNames = append(cfg.BloomCompactor.Ring.InstanceInterfaceNames, loopbackIface)
- }
}
// applyMemberlistConfig will change the default ingester, distributor, ruler, and query scheduler ring configurations to use memberlist.
@@ -498,7 +488,6 @@ func applyMemberlistConfig(r *ConfigWrapper) {
r.QueryScheduler.SchedulerRing.KVStore.Store = memberlistStr
r.CompactorConfig.CompactorRing.KVStore.Store = memberlistStr
r.IndexGateway.Ring.KVStore.Store = memberlistStr
- r.BloomCompactor.Ring.KVStore.Store = memberlistStr
}
var ErrTooManyStorageConfigs = errors.New("too many storage configs provided in the common config, please only define one storage backend")
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index 01074ddf80416..5107bf9ee7651 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -32,7 +32,6 @@ import (
"github.com/grafana/loki/v3/pkg/analytics"
"github.com/grafana/loki/v3/pkg/bloombuild"
- "github.com/grafana/loki/v3/pkg/bloomcompactor"
"github.com/grafana/loki/v3/pkg/bloomgateway"
"github.com/grafana/loki/v3/pkg/compactor"
compactorclient "github.com/grafana/loki/v3/pkg/compactor/client"
@@ -45,6 +44,8 @@ import (
metastoreclient "github.com/grafana/loki/v3/pkg/ingester-rf1/metastore/client"
"github.com/grafana/loki/v3/pkg/ingester-rf1/metastore/health"
ingester_client "github.com/grafana/loki/v3/pkg/ingester/client"
+ "github.com/grafana/loki/v3/pkg/kafka"
+ ingester_kafka "github.com/grafana/loki/v3/pkg/kafka/ingester"
"github.com/grafana/loki/v3/pkg/loghttp/push"
"github.com/grafana/loki/v3/pkg/loki/common"
"github.com/grafana/loki/v3/pkg/lokifrontend"
@@ -95,10 +96,9 @@ type Config struct {
IngesterClient ingester_client.Config `yaml:"ingester_client,omitempty"`
IngesterRF1Client ingester_client.Config `yaml:"ingester_rf1_client,omitempty"`
Ingester ingester.Config `yaml:"ingester,omitempty"`
- IngesterRF1 ingester_rf1.Config `yaml:"ingester_rf1,omitempty"`
+ IngesterRF1 ingester_rf1.Config `yaml:"ingester_rf1,omitempty" category:"experimental"`
Pattern pattern.Config `yaml:"pattern_ingester,omitempty"`
IndexGateway indexgateway.Config `yaml:"index_gateway"`
- BloomCompactor bloomcompactor.Config `yaml:"bloom_compactor,omitempty" category:"experimental"`
BloomBuild bloombuild.Config `yaml:"bloom_build,omitempty" category:"experimental"`
BloomGateway bloomgateway.Config `yaml:"bloom_gateway,omitempty" category:"experimental"`
StorageConfig storage.Config `yaml:"storage_config,omitempty"`
@@ -113,6 +113,8 @@ type Config struct {
MemberlistKV memberlist.KVConfig `yaml:"memberlist"`
Metastore metastore.Config `yaml:"metastore,omitempty"`
MetastoreClient metastoreclient.Config `yaml:"metastore_client"`
+ KafkaConfig kafka.Config `yaml:"kafka_config,omitempty" category:"experimental"`
+ KafkaIngester ingester_kafka.Config `yaml:"kafka_ingester,omitempty" category:"experimental"`
RuntimeConfig runtimeconfig.Config `yaml:"runtime_config,omitempty"`
OperationalConfig runtime.Config `yaml:"operational_config,omitempty"`
@@ -188,7 +190,6 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
c.MemberlistKV.RegisterFlags(f)
c.Tracing.RegisterFlags(f)
c.CompactorConfig.RegisterFlags(f)
- c.BloomCompactor.RegisterFlags(f)
c.BloomBuild.RegisterFlags(f)
c.QueryScheduler.RegisterFlags(f)
c.Analytics.RegisterFlags(f)
@@ -196,6 +197,8 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
c.Profiling.RegisterFlags(f)
c.Metastore.RegisterFlags(f)
c.MetastoreClient.RegisterFlags(f)
+ c.KafkaConfig.RegisterFlags(f)
+ c.KafkaIngester.RegisterFlags(f)
}
func (c *Config) registerServerFlagsWithChangedDefaultValues(fs *flag.FlagSet) {
@@ -221,6 +224,8 @@ func (c *Config) registerServerFlagsWithChangedDefaultValues(fs *flag.FlagSet) {
case "pattern-ingester.distributor.replication-factor":
_ = f.Value.Set("1")
+ case "kafka-ingester.distributor.replication-factor":
+ _ = f.Value.Set("1")
}
fs.Var(f.Value, f.Name, f.Usage)
@@ -290,8 +295,8 @@ func (c *Config) Validate() error {
if err := c.QueryRange.Validate(); err != nil {
errs = append(errs, errors.Wrap(err, "CONFIG ERROR: invalid query_range config"))
}
- if err := c.BloomCompactor.Validate(); err != nil {
- errs = append(errs, errors.Wrap(err, "CONFIG ERROR: invalid bloom_compactor config"))
+ if err := c.BloomBuild.Validate(); err != nil {
+ errs = append(errs, errors.Wrap(err, "CONFIG ERROR: invalid bloom_build config"))
}
if err := c.BloomGateway.Validate(); err != nil {
errs = append(errs, errors.Wrap(err, "CONFIG ERROR: invalid bloom_gateway config"))
@@ -299,6 +304,14 @@ func (c *Config) Validate() error {
if err := c.Pattern.Validate(); err != nil {
errs = append(errs, errors.Wrap(err, "CONFIG ERROR: invalid pattern_ingester config"))
}
+ if c.KafkaIngester.Enabled {
+ if err := c.KafkaConfig.Validate(); err != nil {
+ errs = append(errs, errors.Wrap(err, "CONFIG ERROR: invalid kafka_config config"))
+ }
+ if err := c.KafkaIngester.Validate(); err != nil {
+ errs = append(errs, errors.Wrap(err, "CONFIG ERROR: invalid kafka_ingester config"))
+ }
+ }
errs = append(errs, validateSchemaValues(c)...)
errs = append(errs, ValidateConfigCompatibility(*c)...)
@@ -375,9 +388,10 @@ type Loki struct {
querySchedulerRingManager *lokiring.RingManager
usageReport *analytics.Reporter
indexGatewayRingManager *lokiring.RingManager
- bloomCompactorRingManager *lokiring.RingManager
- bloomGatewayRingManager *lokiring.RingManager
MetastoreClient *metastoreclient.Client
+ partitionRingWatcher *ring.PartitionRingWatcher
+ partitionRing *ring.PartitionInstanceRing
+ kafkaIngester *ingester_kafka.Ingester
ClientMetrics storage.ClientMetrics
deleteClientMetrics *deletion.DeleteRequestClientMetrics
@@ -681,6 +695,7 @@ func (t *Loki) setupModuleManager() error {
mm.RegisterModule(Querier, t.initQuerier)
mm.RegisterModule(Ingester, t.initIngester)
mm.RegisterModule(IngesterRF1, t.initIngesterRF1)
+ mm.RegisterModule(IngesterKafka, t.initKafkaIngester)
mm.RegisterModule(IngesterRF1RingClient, t.initIngesterRF1RingClient, modules.UserInvisibleModule)
mm.RegisterModule(IngesterQuerier, t.initIngesterQuerier)
mm.RegisterModule(IngesterGRPCInterceptors, t.initIngesterGRPCInterceptors, modules.UserInvisibleModule)
@@ -692,8 +707,6 @@ func (t *Loki) setupModuleManager() error {
mm.RegisterModule(TableManager, t.initTableManager)
mm.RegisterModule(Compactor, t.initCompactor)
mm.RegisterModule(BloomStore, t.initBloomStore, modules.UserInvisibleModule)
- mm.RegisterModule(BloomCompactor, t.initBloomCompactor)
- mm.RegisterModule(BloomCompactorRing, t.initBloomCompactorRing, modules.UserInvisibleModule)
mm.RegisterModule(BloomPlanner, t.initBloomPlanner)
mm.RegisterModule(BloomBuilder, t.initBloomBuilder)
mm.RegisterModule(IndexGateway, t.initIndexGateway)
@@ -709,6 +722,7 @@ func (t *Loki) setupModuleManager() error {
mm.RegisterModule(PatternIngester, t.initPatternIngester)
mm.RegisterModule(Metastore, t.initMetastore)
mm.RegisterModule(MetastoreClient, t.initMetastoreClient, modules.UserInvisibleModule)
+ mm.RegisterModule(PartitionRing, t.initPartitionRing, modules.UserInvisibleModule)
mm.RegisterModule(All, nil)
mm.RegisterModule(Read, nil)
@@ -722,9 +736,10 @@ func (t *Loki) setupModuleManager() error {
Overrides: {RuntimeConfig},
OverridesExporter: {Overrides, Server},
TenantConfigs: {RuntimeConfig},
- Distributor: {Ring, Server, Overrides, TenantConfigs, PatternRingClient, PatternIngesterTee, IngesterRF1RingClient, Analytics},
+ Distributor: {Ring, Server, Overrides, TenantConfigs, PatternRingClient, PatternIngesterTee, IngesterRF1RingClient, Analytics, PartitionRing},
Store: {Overrides, IndexGatewayRing},
- IngesterRF1: {Store, Server, MemberlistKV, TenantConfigs, MetastoreClient, Analytics},
+ IngesterRF1: {Store, Server, MemberlistKV, TenantConfigs, MetastoreClient, Analytics, PartitionRing},
+ IngesterKafka: {Store, Server, MemberlistKV, TenantConfigs, MetastoreClient, Analytics, PartitionRing},
Ingester: {Store, Server, MemberlistKV, TenantConfigs, Analytics},
Querier: {Store, Ring, Server, IngesterQuerier, PatternRingClient, MetastoreClient, Overrides, Analytics, CacheGenerationLoader, QuerySchedulerRing},
QueryFrontendTripperware: {Server, Overrides, TenantConfigs},
@@ -736,7 +751,6 @@ func (t *Loki) setupModuleManager() error {
Compactor: {Server, Overrides, MemberlistKV, Analytics},
IndexGateway: {Server, Store, BloomStore, IndexGatewayRing, IndexGatewayInterceptors, Analytics},
BloomGateway: {Server, BloomStore, Analytics},
- BloomCompactor: {Server, BloomStore, BloomCompactorRing, Analytics, Store},
BloomPlanner: {Server, BloomStore, Analytics, Store},
BloomBuilder: {Server, BloomStore, Analytics, Store},
BloomStore: {IndexGatewayRing},
@@ -748,14 +762,14 @@ func (t *Loki) setupModuleManager() error {
IngesterQuerier: {Ring},
QuerySchedulerRing: {Overrides, MemberlistKV},
IndexGatewayRing: {Overrides, MemberlistKV},
- BloomCompactorRing: {Overrides, MemberlistKV},
+ PartitionRing: {MemberlistKV, Server, Ring},
MemberlistKV: {Server},
Read: {QueryFrontend, Querier},
- Write: {Ingester, IngesterRF1, Distributor, PatternIngester},
- Backend: {QueryScheduler, Ruler, Compactor, IndexGateway, BloomGateway, BloomCompactor},
+ Write: {Ingester, IngesterRF1, Distributor, PatternIngester, IngesterKafka},
+ Backend: {QueryScheduler, Ruler, Compactor, IndexGateway, BloomPlanner, BloomBuilder, BloomGateway},
- All: {QueryScheduler, QueryFrontend, Querier, Ingester, IngesterRF1, PatternIngester, Distributor, Ruler, Compactor, Metastore},
+ All: {QueryScheduler, QueryFrontend, Querier, Ingester, IngesterRF1, PatternIngester, Distributor, Ruler, Compactor, Metastore, IngesterKafka},
}
if t.Cfg.Querier.PerRequestLimitsEnabled {
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 60e2683b599ff..b22d7315485e5 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -19,6 +19,7 @@ import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/dskit/dns"
+ "github.com/grafana/dskit/kv"
"github.com/grafana/dskit/kv/codec"
"github.com/grafana/dskit/kv/memberlist"
"github.com/grafana/dskit/middleware"
@@ -37,7 +38,6 @@ import (
"github.com/grafana/loki/v3/pkg/bloombuild/builder"
"github.com/grafana/loki/v3/pkg/bloombuild/planner"
bloomprotos "github.com/grafana/loki/v3/pkg/bloombuild/protos"
- "github.com/grafana/loki/v3/pkg/bloomcompactor"
"github.com/grafana/loki/v3/pkg/bloomgateway"
"github.com/grafana/loki/v3/pkg/compactor"
compactorclient "github.com/grafana/loki/v3/pkg/compactor/client"
@@ -53,6 +53,8 @@ import (
"github.com/grafana/loki/v3/pkg/ingester-rf1/metastore/health"
"github.com/grafana/loki/v3/pkg/ingester-rf1/metastore/metastorepb"
"github.com/grafana/loki/v3/pkg/ingester-rf1/objstore"
+ ingesterkafka "github.com/grafana/loki/v3/pkg/kafka/ingester"
+ kafka_tee "github.com/grafana/loki/v3/pkg/kafka/tee"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql"
"github.com/grafana/loki/v3/pkg/logqlmodel/stats"
@@ -109,6 +111,7 @@ const (
CacheGenerationLoader string = "cache-generation-loader"
Ingester string = "ingester"
IngesterRF1 string = "ingester-rf1"
+ IngesterKafka string = "ingester-kafka"
IngesterRF1RingClient string = "ingester-rf1-ring-client"
PatternIngester string = "pattern-ingester"
PatternIngesterTee string = "pattern-ingester-tee"
@@ -133,8 +136,6 @@ const (
IndexGatewayInterceptors string = "index-gateway-interceptors"
QueryScheduler string = "query-scheduler"
QuerySchedulerRing string = "query-scheduler-ring"
- BloomCompactor string = "bloom-compactor"
- BloomCompactorRing string = "bloom-compactor-ring"
BloomPlanner string = "bloom-planner"
BloomBuilder string = "bloom-builder"
BloomStore string = "bloom-store"
@@ -146,13 +147,13 @@ const (
InitCodec string = "init-codec"
Metastore string = "metastore"
MetastoreClient string = "metastore-client"
+ PartitionRing string = "partition-ring"
)
const (
- schedulerRingKey = "scheduler"
- indexGatewayRingKey = "index-gateway"
- bloomGatewayRingKey = "bloom-gateway"
- bloomCompactorRingKey = "bloom-compactor"
+ schedulerRingKey = "scheduler"
+ indexGatewayRingKey = "index-gateway"
+ bloomGatewayRingKey = "bloom-gateway"
)
func (t *Loki) initServer() (services.Service, error) {
@@ -294,7 +295,6 @@ func (t *Loki) initRuntimeConfig() (services.Service, error) {
// By doing the initialization here instead of per-module init function, we avoid the problem
// of projects based on Loki forgetting the wiring if they override module's init method (they also don't have access to private symbols).
t.Cfg.CompactorConfig.CompactorRing.KVStore.Multi.ConfigProvider = multiClientRuntimeConfigChannel(t.runtimeConfig)
- t.Cfg.BloomCompactor.Ring.KVStore.Multi.ConfigProvider = multiClientRuntimeConfigChannel(t.runtimeConfig)
t.Cfg.Distributor.DistributorRing.KVStore.Multi.ConfigProvider = multiClientRuntimeConfigChannel(t.runtimeConfig)
t.Cfg.IndexGateway.Ring.KVStore.Multi.ConfigProvider = multiClientRuntimeConfigChannel(t.runtimeConfig)
t.Cfg.Ingester.LifecyclerConfig.RingConfig.KVStore.Multi.ConfigProvider = multiClientRuntimeConfigChannel(t.runtimeConfig)
@@ -341,6 +341,13 @@ func (t *Loki) initDistributor() (services.Service, error) {
}
t.Tee = distributor.WrapTee(t.Tee, rf1Tee)
}
+ if t.Cfg.KafkaIngester.Enabled {
+ kafkaTee, err := kafka_tee.NewTee(t.Cfg.KafkaConfig, t.Cfg.MetricsNamespace, prometheus.DefaultRegisterer, util_log.Logger, t.partitionRing)
+ if err != nil {
+ return nil, err
+ }
+ t.Tee = distributor.WrapTee(t.Tee, kafkaTee)
+ }
var err error
logger := log.With(util_log.Logger, "component", "distributor")
@@ -642,6 +649,41 @@ func (t *Loki) initIngester() (_ services.Service, err error) {
return t.Ingester, nil
}
+func (t *Loki) initKafkaIngester() (_ services.Service, err error) {
+ if !t.Cfg.KafkaIngester.Enabled {
+ return nil, nil
+ }
+ t.Cfg.KafkaIngester.KafkaConfig = t.Cfg.KafkaConfig
+ logger := log.With(util_log.Logger, "component", "ingester-kafka")
+ t.Cfg.KafkaIngester.LifecyclerConfig.ListenPort = t.Cfg.Server.GRPCListenPort
+
+ if t.Cfg.KafkaIngester.ShutdownMarkerPath == "" && t.Cfg.Common.PathPrefix != "" {
+ t.Cfg.KafkaIngester.ShutdownMarkerPath = t.Cfg.Common.PathPrefix
+ }
+ if t.Cfg.KafkaIngester.ShutdownMarkerPath == "" {
+ return nil, errors.New("the config setting shutdown marker path is not set. The /ingester/prepare-partition-downscale endpoint won't work")
+ }
+ storage, err := objstore.New(t.Cfg.SchemaConfig.Configs, t.Cfg.StorageConfig, t.ClientMetrics)
+ if err != nil {
+ return nil, err
+ }
+
+ consumerFactory := ingesterkafka.NewConsumerFactory(t.MetastoreClient, storage, t.Cfg.KafkaIngester.FlushInterval, t.Cfg.KafkaIngester.FlushSize, logger, prometheus.DefaultRegisterer)
+ t.kafkaIngester, err = ingesterkafka.New(t.Cfg.KafkaIngester, consumerFactory, logger, t.Cfg.MetricsNamespace, prometheus.DefaultRegisterer)
+ if err != nil {
+ return nil, err
+ }
+
+ httpMiddleware := middleware.Merge(
+ serverutil.RecoveryHTTPMiddleware,
+ )
+ t.Server.HTTP.Methods("POST", "GET", "DELETE").Path("/ingester/prepare-partition-downscale").Handler(
+ httpMiddleware.Wrap(http.HandlerFunc(t.kafkaIngester.PreparePartitionDownscaleHandler)),
+ )
+
+ return t.kafkaIngester, nil
+}
+
func (t *Loki) initIngesterRF1() (_ services.Service, err error) {
if !t.Cfg.IngesterRF1.Enabled {
return nil, nil
@@ -953,7 +995,7 @@ func (t *Loki) updateConfigForShipperStore() {
t.Cfg.StorageConfig.TSDBShipperConfig.Mode = indexshipper.ModeWriteOnly
t.Cfg.StorageConfig.TSDBShipperConfig.IngesterDBRetainPeriod = shipperQuerierIndexUpdateDelay(t.Cfg.StorageConfig.IndexCacheValidity, t.Cfg.StorageConfig.TSDBShipperConfig.ResyncInterval)
- case t.Cfg.isTarget(IngesterRF1), t.Cfg.isTarget(Querier), t.Cfg.isTarget(Ruler), t.Cfg.isTarget(Read), t.Cfg.isTarget(Backend), t.isModuleActive(IndexGateway), t.Cfg.isTarget(BloomCompactor), t.Cfg.isTarget(BloomPlanner), t.Cfg.isTarget(BloomBuilder):
+ case t.Cfg.isTarget(IngesterRF1), t.Cfg.isTarget(Querier), t.Cfg.isTarget(Ruler), t.Cfg.isTarget(Read), t.Cfg.isTarget(Backend), t.isModuleActive(IndexGateway), t.Cfg.isTarget(BloomPlanner), t.Cfg.isTarget(BloomBuilder):
// We do not want query to do any updates to index
t.Cfg.StorageConfig.BoltDBShipperConfig.Mode = indexshipper.ModeReadOnly
t.Cfg.StorageConfig.TSDBShipperConfig.Mode = indexshipper.ModeReadOnly
@@ -1122,7 +1164,7 @@ func (t *Loki) initCacheGenerationLoader() (_ services.Service, err error) {
}
t.cacheGenerationLoader = generationnumber.NewGenNumberLoader(client, prometheus.DefaultRegisterer)
- return services.NewIdleService(nil, func(failureCase error) error {
+ return services.NewIdleService(nil, func(_ error) error {
t.cacheGenerationLoader.Stop()
return nil
}), nil
@@ -1438,6 +1480,7 @@ func (t *Loki) initMemberlistKV() (services.Service, error) {
t.Cfg.MemberlistKV.Codecs = []codec.Codec{
ring.GetCodec(),
analytics.JSONCodec,
+ ring.GetPartitionRingCodec(),
}
dnsProviderReg := prometheus.WrapRegistererWithPrefix(
@@ -1457,9 +1500,10 @@ func (t *Loki) initMemberlistKV() (services.Service, error) {
t.Cfg.Ingester.LifecyclerConfig.RingConfig.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
t.Cfg.QueryScheduler.SchedulerRing.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
t.Cfg.Ruler.Ring.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
- t.Cfg.BloomCompactor.Ring.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
t.Cfg.Pattern.LifecyclerConfig.RingConfig.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
t.Cfg.IngesterRF1.LifecyclerConfig.RingConfig.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
+ t.Cfg.KafkaIngester.PartitionRingConfig.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
+ t.Cfg.KafkaIngester.LifecyclerConfig.RingConfig.KVStore.MemberlistKV = t.MemberlistKV.GetMemberlistKV
t.Server.HTTP.Handle("/memberlist", t.MemberlistKV)
if t.Cfg.InternalServer.Enable {
@@ -1597,7 +1641,7 @@ func (t *Loki) initIndexGateway() (services.Service, error) {
}
resolver := bloomgateway.NewBlockResolver(t.BloomStore, logger)
querierCfg := bloomgateway.QuerierConfig{
- MinTableOffset: t.Cfg.BloomCompactor.MinTableOffset,
+ MinTableOffset: t.Cfg.BloomBuild.Planner.MinTableOffset,
}
bloomQuerier = bloomgateway.NewQuerier(bloomGatewayClient, querierCfg, t.Overrides, resolver, prometheus.DefaultRegisterer, logger)
}
@@ -1658,52 +1702,6 @@ func (t *Loki) initIndexGatewayInterceptors() (services.Service, error) {
return nil, nil
}
-func (t *Loki) initBloomCompactor() (services.Service, error) {
- if !t.Cfg.BloomCompactor.Enabled {
- return nil, nil
- }
- logger := log.With(util_log.Logger, "component", "bloom-compactor")
-
- return bloomcompactor.New(
- t.Cfg.BloomCompactor,
- t.Cfg.SchemaConfig,
- t.Cfg.StorageConfig,
- t.ClientMetrics,
- t.Store,
- t.bloomCompactorRingManager.Ring,
- t.bloomCompactorRingManager.RingLifecycler,
- t.Overrides,
- t.BloomStore,
- logger,
- prometheus.DefaultRegisterer,
- )
-}
-
-func (t *Loki) initBloomCompactorRing() (services.Service, error) {
- if !t.Cfg.BloomCompactor.Enabled {
- return nil, nil
- }
- t.Cfg.BloomCompactor.Ring.ListenPort = t.Cfg.Server.GRPCListenPort
-
- // is LegacyMode needed?
- // legacyReadMode := t.Cfg.LegacyReadTarget && t.isModuleActive(Read)
-
- rm, err := lokiring.NewRingManager(bloomCompactorRingKey, lokiring.ServerMode, t.Cfg.BloomCompactor.Ring, 1, t.Cfg.BloomCompactor.Ring.NumTokens, util_log.Logger, prometheus.DefaultRegisterer)
- if err != nil {
- return nil, gerrors.Wrap(err, "error initializing bloom-compactor ring manager")
- }
-
- t.bloomCompactorRingManager = rm
-
- t.Server.HTTP.Path("/bloomcompactor/ring").Methods("GET", "POST").Handler(t.bloomCompactorRingManager)
-
- if t.Cfg.InternalServer.Enable {
- t.InternalServer.HTTP.Path("/bloomcompactor/ring").Methods("GET", "POST").Handler(t.bloomCompactorRingManager)
- }
-
- return t.bloomCompactorRingManager, nil
-}
-
func (t *Loki) initBloomPlanner() (services.Service, error) {
if !t.Cfg.BloomBuild.Enabled {
return nil, nil
@@ -1711,6 +1709,16 @@ func (t *Loki) initBloomPlanner() (services.Service, error) {
logger := log.With(util_log.Logger, "component", "bloom-planner")
+ var ringManager *lokiring.RingManager
+ if t.Cfg.isTarget(Backend) && t.indexGatewayRingManager != nil {
+ // Bloom planner and builder are part of the backend target in Simple Scalable Deployment mode.
+ // To avoid creating a new ring just for this special case, we can use the index gateway ring, which is already
+ // part of the backend target. The planner creates a watcher service that regularly checks which replica is
+ // the leader. Only the leader plans the tasks. Builders connect to the leader instance to pull tasks.
+ level.Info(logger).Log("msg", "initializing bloom planner in ring mode as part of backend target")
+ ringManager = t.indexGatewayRingManager
+ }
+
p, err := planner.New(
t.Cfg.BloomBuild.Planner,
t.Overrides,
@@ -1720,6 +1728,7 @@ func (t *Loki) initBloomPlanner() (services.Service, error) {
t.BloomStore,
logger,
prometheus.DefaultRegisterer,
+ ringManager,
)
if err != nil {
return nil, err
@@ -1736,6 +1745,16 @@ func (t *Loki) initBloomBuilder() (services.Service, error) {
logger := log.With(util_log.Logger, "component", "bloom-builder")
+ var ringManager *lokiring.RingManager
+ if t.Cfg.isTarget(Backend) && t.indexGatewayRingManager != nil {
+ // Bloom planner and builder are part of the backend target in Simple Scalable Deployment mode.
+ // To avoid creating a new ring just for this special case, we can use the index gateway ring, which is already
+ // part of the backend target. The planner creates a watcher service that regularly checks which replica is
+ // the leader. Only the leader plans the tasks. Builders connect to the leader instance to pull tasks.
+ level.Info(logger).Log("msg", "initializing bloom builder in ring mode as part of backend target")
+ ringManager = t.indexGatewayRingManager
+ }
+
return builder.New(
t.Cfg.BloomBuild.Builder,
t.Overrides,
@@ -1746,6 +1765,7 @@ func (t *Loki) initBloomBuilder() (services.Service, error) {
t.BloomStore,
logger,
prometheus.DefaultRegisterer,
+ ringManager,
)
}
@@ -1852,7 +1872,7 @@ func (t *Loki) initAnalytics() (services.Service, error) {
}
func (t *Loki) initMetastore() (services.Service, error) {
- if !t.Cfg.IngesterRF1.Enabled {
+ if !t.Cfg.IngesterRF1.Enabled && !t.Cfg.KafkaIngester.Enabled {
return nil, nil
}
if t.Cfg.isTarget(All) {
@@ -1869,7 +1889,7 @@ func (t *Loki) initMetastore() (services.Service, error) {
}
func (t *Loki) initMetastoreClient() (services.Service, error) {
- if !t.Cfg.IngesterRF1.Enabled && !t.Cfg.QuerierRF1.Enabled {
+ if !t.Cfg.IngesterRF1.Enabled && !t.Cfg.QuerierRF1.Enabled && !t.Cfg.KafkaIngester.Enabled {
return nil, nil
}
mc, err := metastoreclient.New(t.Cfg.MetastoreClient, prometheus.DefaultRegisterer)
@@ -1880,6 +1900,26 @@ func (t *Loki) initMetastoreClient() (services.Service, error) {
return mc.Service(), nil
}
+// The Ingest Partition Ring is responsible for watching the available ingesters and assigning partitions to incoming requests.
+func (t *Loki) initPartitionRing() (services.Service, error) {
+ if !t.Cfg.KafkaIngester.Enabled { // TODO: New config flag
+ return nil, nil
+ }
+
+ kvClient, err := kv.NewClient(t.Cfg.KafkaIngester.PartitionRingConfig.KVStore, ring.GetPartitionRingCodec(), kv.RegistererWithKVName(prometheus.DefaultRegisterer, ingesterkafka.PartitionRingName+"-watcher"), util_log.Logger)
+ if err != nil {
+ return nil, fmt.Errorf("creating KV store for partitions ring watcher: %w", err)
+ }
+
+ t.partitionRingWatcher = ring.NewPartitionRingWatcher(ingesterkafka.PartitionRingName, ingesterkafka.PartitionRingName+"-key", kvClient, util_log.Logger, prometheus.WrapRegistererWithPrefix("loki_", prometheus.DefaultRegisterer))
+ t.partitionRing = ring.NewPartitionInstanceRing(t.partitionRingWatcher, t.ring, t.Cfg.Ingester.LifecyclerConfig.RingConfig.HeartbeatTimeout)
+
+ // Expose a web page to view the partitions ring state.
+ t.Server.HTTP.Path("/partition-ring").Methods("GET", "POST").Handler(ring.NewPartitionRingPageHandler(t.partitionRingWatcher, ring.NewPartitionRingEditor(ingesterkafka.PartitionRingName+"-key", kvClient)))
+
+ return t.partitionRingWatcher, nil
+}
+
func (t *Loki) deleteRequestsClient(clientType string, limits limiter.CombinedLimits) (deletion.DeleteRequestsClient, error) {
if !t.supportIndexDeleteRequest() || !t.Cfg.CompactorConfig.RetentionEnabled {
return deletion.NewNoOpDeleteRequestsStore(), nil
diff --git a/pkg/loki/modules_test.go b/pkg/loki/modules_test.go
index 8c27c851d33e0..64241443d3439 100644
--- a/pkg/loki/modules_test.go
+++ b/pkg/loki/modules_test.go
@@ -410,7 +410,6 @@ func minimalWorkingConfig(t *testing.T, dir, target string, cfgTransformers ...f
cfg.Distributor.DistributorRing.InstanceAddr = localhost
cfg.IndexGateway.Mode = indexgateway.SimpleMode
cfg.IndexGateway.Ring.InstanceAddr = localhost
- cfg.BloomCompactor.Ring.InstanceAddr = localhost
cfg.CompactorConfig.CompactorRing.InstanceAddr = localhost
cfg.CompactorConfig.WorkingDirectory = filepath.Join(dir, "compactor")
diff --git a/pkg/loki/version_handler.go b/pkg/loki/version_handler.go
index ef49d1b0f7de7..bf4e28027508d 100644
--- a/pkg/loki/version_handler.go
+++ b/pkg/loki/version_handler.go
@@ -10,7 +10,7 @@ import (
)
func versionHandler() http.HandlerFunc {
- return func(w http.ResponseWriter, r *http.Request) {
+ return func(w http.ResponseWriter, _ *http.Request) {
info := prom.PrometheusVersion{
Version: build.Version,
Revision: build.Revision,
diff --git a/pkg/lokifrontend/frontend/transport/handler.go b/pkg/lokifrontend/frontend/transport/handler.go
index 7c9e50daf8b59..4163800f4bca2 100644
--- a/pkg/lokifrontend/frontend/transport/handler.go
+++ b/pkg/lokifrontend/frontend/transport/handler.go
@@ -37,8 +37,8 @@ const (
)
var (
- errCanceled = httpgrpc.Errorf(StatusClientClosedRequest, context.Canceled.Error())
- errDeadlineExceeded = httpgrpc.Errorf(http.StatusGatewayTimeout, context.DeadlineExceeded.Error())
+ errCanceled = httpgrpc.Errorf(StatusClientClosedRequest, "%s", context.Canceled.Error())
+ errDeadlineExceeded = httpgrpc.Errorf(http.StatusGatewayTimeout, "%s", context.DeadlineExceeded.Error())
errRequestEntityTooLarge = httpgrpc.Errorf(http.StatusRequestEntityTooLarge, "http: request body too large")
)
diff --git a/pkg/lokifrontend/frontend/v1/frontend_test.go b/pkg/lokifrontend/frontend/v1/frontend_test.go
index 2d26e9f188a3b..b7e4061d804a0 100644
--- a/pkg/lokifrontend/frontend/v1/frontend_test.go
+++ b/pkg/lokifrontend/frontend/v1/frontend_test.go
@@ -46,7 +46,7 @@ const (
)
func TestFrontend(t *testing.T) {
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &queryrange.LokiLabelNamesResponse{Data: []string{"Hello", "world"}, Version: uint32(loghttp.VersionV1)}, nil
})
test := func(addr string, _ *Frontend) {
@@ -77,7 +77,7 @@ func TestFrontendPropagateTrace(t *testing.T) {
observedTraceID := make(chan string, 2)
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(ctx context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
sp := opentracing.SpanFromContext(ctx)
defer sp.Finish()
@@ -157,7 +157,7 @@ func TestFrontendCheckReady(t *testing.T) {
// the underlying query is correctly cancelled _and not retried_.
func TestFrontendCancel(t *testing.T) {
var tries atomic.Int32
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(ctx context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
<-ctx.Done()
tries.Inc()
return nil, ctx.Err()
@@ -188,7 +188,7 @@ func TestFrontendCancel(t *testing.T) {
}
func TestFrontendMetricsCleanup(t *testing.T) {
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &queryrange.LokiLabelNamesResponse{Data: []string{"Hello", "world"}, Version: uint32(loghttp.VersionV1)}, nil
})
diff --git a/pkg/lokifrontend/frontend/v2/frontend_scheduler_worker.go b/pkg/lokifrontend/frontend/v2/frontend_scheduler_worker.go
index 1fe304f490ff4..d818d90c23454 100644
--- a/pkg/lokifrontend/frontend/v2/frontend_scheduler_worker.go
+++ b/pkg/lokifrontend/frontend/v2/frontend_scheduler_worker.go
@@ -325,7 +325,7 @@ func (w *frontendSchedulerWorker) schedulerLoop(loop schedulerpb.SchedulerForFro
case schedulerpb.ERROR:
req.enqueue <- enqueueResult{status: waitForResponse}
- req.response <- ResponseTuple{nil, httpgrpc.Errorf(http.StatusInternalServerError, resp.Error)}
+ req.response <- ResponseTuple{nil, httpgrpc.Errorf(http.StatusInternalServerError, "%s", resp.Error)}
case schedulerpb.TOO_MANY_REQUESTS_PER_TENANT:
req.enqueue <- enqueueResult{status: waitForResponse}
req.response <- ResponseTuple{nil, httpgrpc.Errorf(http.StatusTooManyRequests, "too many outstanding requests")}
diff --git a/pkg/lokifrontend/frontend/v2/frontend_test.go b/pkg/lokifrontend/frontend/v2/frontend_test.go
index 41fa9653f6949..baf62348216f5 100644
--- a/pkg/lokifrontend/frontend/v2/frontend_test.go
+++ b/pkg/lokifrontend/frontend/v2/frontend_test.go
@@ -186,7 +186,7 @@ func TestFrontendRetryEnqueue(t *testing.T) {
func TestFrontendEnqueueFailure(t *testing.T) {
cfg := Config{}
flagext.DefaultValues(&cfg)
- f, _ := setupFrontend(t, cfg, func(f *Frontend, msg *schedulerpb.FrontendToScheduler) *schedulerpb.SchedulerToFrontend {
+ f, _ := setupFrontend(t, cfg, func(_ *Frontend, _ *schedulerpb.FrontendToScheduler) *schedulerpb.SchedulerToFrontend {
return &schedulerpb.SchedulerToFrontend{Status: schedulerpb.SHUTTING_DOWN}
})
diff --git a/pkg/pattern/aggregation/push.go b/pkg/pattern/aggregation/push.go
index 9aac2e3a5050d..649d71f92029c 100644
--- a/pkg/pattern/aggregation/push.go
+++ b/pkg/pattern/aggregation/push.go
@@ -11,7 +11,6 @@ import (
"sync"
"time"
- "github.com/dustin/go-humanize"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/golang/snappy"
@@ -22,6 +21,7 @@ import (
"github.com/grafana/loki/v3/pkg/loghttp/push"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql/syntax"
+ "github.com/grafana/loki/v3/pkg/util"
"github.com/grafana/loki/v3/pkg/util/build"
"github.com/grafana/dskit/backoff"
@@ -312,9 +312,9 @@ func AggregatedMetricEntry(
service string,
lbls labels.Labels,
) string {
- byteString := humanize.Bytes(totalBytes)
+ byteString := util.HumanizeBytes(totalBytes)
base := fmt.Sprintf(
- "ts=%d bytes=%s count=%d %s=%s",
+ "ts=%d bytes=%s count=%d %s=\"%s\"",
ts.UnixNano(),
byteString,
totalCount,
@@ -322,7 +322,7 @@ func AggregatedMetricEntry(
)
for _, l := range lbls {
- base += fmt.Sprintf(" %s=%s", l.Name, l.Value)
+ base += fmt.Sprintf(" %s=\"%s\"", l.Name, l.Value)
}
return base
diff --git a/pkg/pattern/aggregation/push_test.go b/pkg/pattern/aggregation/push_test.go
index 15f0336b5f7e8..149b54a977151 100644
--- a/pkg/pattern/aggregation/push_test.go
+++ b/pkg/pattern/aggregation/push_test.go
@@ -229,6 +229,9 @@ func Test_Push(t *testing.T) {
stream2.Entries[2].Line,
)
+ // sanity check that bytes are logged in humanized form without whitespaces
+ assert.Contains(t, stream1.Entries[0].Line, "bytes=1B")
+
case <-time.After(5 * time.Second):
t.Fatal("timeout")
}
diff --git a/pkg/pattern/drain/drain.go b/pkg/pattern/drain/drain.go
index 5c48d2980e022..e6706e0517bb0 100644
--- a/pkg/pattern/drain/drain.go
+++ b/pkg/pattern/drain/drain.go
@@ -36,14 +36,15 @@ import (
)
type Config struct {
- maxNodeDepth int
- LogClusterDepth int
- SimTh float64
- MaxChildren int
- ExtraDelimiters []string
- MaxClusters int
- ParamString string
- MaxEvictionRatio float64
+ maxNodeDepth int
+ LogClusterDepth int
+ SimTh float64
+ MaxChildren int
+ ExtraDelimiters []string
+ MaxClusters int
+ ParamString string
+ MaxEvictionRatio float64
+ MaxAllowedLineLength int
}
func createLogClusterCache(maxSize int, onEvict func(int, *LogCluster)) *LogClusterCache {
@@ -125,11 +126,12 @@ func DefaultConfig() *Config {
// Both SimTh and MaxClusterDepth impact branching factor: the greater
// MaxClusterDepth and SimTh, the less the chance that there will be
// "similar" clusters, but the greater the footprint.
- SimTh: 0.3,
- MaxChildren: 15,
- ParamString: `<_>`,
- MaxClusters: 300,
- MaxEvictionRatio: 0.25,
+ SimTh: 0.3,
+ MaxChildren: 15,
+ ParamString: `<_>`,
+ MaxClusters: 300,
+ MaxEvictionRatio: 0.25,
+ MaxAllowedLineLength: 3000,
}
}
@@ -140,11 +142,10 @@ func New(config *Config, format string, metrics *Metrics) *Drain {
config.maxNodeDepth = config.LogClusterDepth - 2
d := &Drain{
- config: config,
- rootNode: createNode(),
- metrics: metrics,
- maxAllowedLineLength: 3000,
- format: format,
+ config: config,
+ rootNode: createNode(),
+ metrics: metrics,
+ format: format,
}
limiter := newLimiter(config.MaxEvictionRatio)
@@ -180,18 +181,17 @@ func New(config *Config, format string, metrics *Metrics) *Drain {
}
type Drain struct {
- config *Config
- rootNode *Node
- idToCluster *LogClusterCache
- clustersCounter int
- metrics *Metrics
- tokenizer LineTokenizer
- maxAllowedLineLength int
- format string
- tokens []string
- state interface{}
- limiter *limiter
- pruning bool
+ config *Config
+ rootNode *Node
+ idToCluster *LogClusterCache
+ clustersCounter int
+ metrics *Metrics
+ tokenizer LineTokenizer
+ format string
+ tokens []string
+ state interface{}
+ limiter *limiter
+ pruning bool
}
func (d *Drain) Clusters() []*LogCluster {
@@ -206,7 +206,7 @@ func (d *Drain) Train(content string, ts int64) *LogCluster {
if !d.limiter.Allow() {
return nil
}
- if len(content) > d.maxAllowedLineLength {
+ if len(content) > d.config.MaxAllowedLineLength {
return nil
}
d.tokens, d.state = d.tokenizer.Tokenize(content, d.tokens, d.state)
@@ -312,14 +312,14 @@ func (d *Drain) pruneTree(node *Node) int {
}
}
- validClusterIds := 0
+ validClusterIDs := 0
for _, clusterID := range node.clusterIDs {
cluster := d.idToCluster.Get(clusterID)
if cluster != nil {
- validClusterIds++
+ validClusterIDs++
}
}
- return len(node.keyToChildNode) + validClusterIds
+ return len(node.keyToChildNode) + validClusterIDs
}
func (d *Drain) Delete(cluster *LogCluster) {
diff --git a/pkg/pattern/ingester.go b/pkg/pattern/ingester.go
index bd43908f289d5..3c1bb55b76804 100644
--- a/pkg/pattern/ingester.go
+++ b/pkg/pattern/ingester.go
@@ -33,16 +33,17 @@ import (
const readBatchSize = 1024
type Config struct {
- Enabled bool `yaml:"enabled,omitempty" doc:"description=Whether the pattern ingester is enabled."`
- LifecyclerConfig ring.LifecyclerConfig `yaml:"lifecycler,omitempty" doc:"description=Configures how the lifecycle of the pattern ingester will operate and where it will register for discovery."`
- ClientConfig clientpool.Config `yaml:"client_config,omitempty" doc:"description=Configures how the pattern ingester will connect to the ingesters."`
- ConcurrentFlushes int `yaml:"concurrent_flushes"`
- FlushCheckPeriod time.Duration `yaml:"flush_check_period"`
- MaxClusters int `yaml:"max_clusters,omitempty" doc:"description=The maximum number of detected pattern clusters that can be created by streams."`
- MaxEvictionRatio float64 `yaml:"max_eviction_ratio,omitempty" doc:"description=The maximum eviction ratio of patterns per stream. Once that ratio is reached, the stream will throttled pattern detection."`
- MetricAggregation aggregation.Config `yaml:"metric_aggregation,omitempty" doc:"description=Configures the metric aggregation and storage behavior of the pattern ingester."`
- TeeConfig TeeConfig `yaml:"tee_config,omitempty" doc:"description=Configures the pattern tee which forwards requests to the pattern ingester."`
- ConnectionTimeout time.Duration `yaml:"connection_timeout"`
+ Enabled bool `yaml:"enabled,omitempty" doc:"description=Whether the pattern ingester is enabled."`
+ LifecyclerConfig ring.LifecyclerConfig `yaml:"lifecycler,omitempty" doc:"description=Configures how the lifecycle of the pattern ingester will operate and where it will register for discovery."`
+ ClientConfig clientpool.Config `yaml:"client_config,omitempty" doc:"description=Configures how the pattern ingester will connect to the ingesters."`
+ ConcurrentFlushes int `yaml:"concurrent_flushes"`
+ FlushCheckPeriod time.Duration `yaml:"flush_check_period"`
+ MaxClusters int `yaml:"max_clusters,omitempty" doc:"description=The maximum number of detected pattern clusters that can be created by streams."`
+ MaxEvictionRatio float64 `yaml:"max_eviction_ratio,omitempty" doc:"description=The maximum eviction ratio of patterns per stream. Once that ratio is reached, the stream will throttled pattern detection."`
+ MetricAggregation aggregation.Config `yaml:"metric_aggregation,omitempty" doc:"description=Configures the metric aggregation and storage behavior of the pattern ingester."`
+ TeeConfig TeeConfig `yaml:"tee_config,omitempty" doc:"description=Configures the pattern tee which forwards requests to the pattern ingester."`
+ ConnectionTimeout time.Duration `yaml:"connection_timeout"`
+ MaxAllowedLineLength int `yaml:"max_allowed_line_length,omitempty" doc:"description=The maximum length of log lines that can be used for pattern detection."`
// For testing.
factory ring_client.PoolFactory `yaml:"-"`
@@ -91,6 +92,12 @@ func (cfg *Config) RegisterFlags(fs *flag.FlagSet) {
2*time.Second,
"Timeout for connections between the Loki and the pattern ingester.",
)
+ fs.IntVar(
+ &cfg.MaxAllowedLineLength,
+ "pattern-ingester.max-allowed-line-length",
+ drain.DefaultConfig().MaxAllowedLineLength,
+ "The maximum length of log lines that can be used for pattern detection.",
+ )
}
type TeeConfig struct {
diff --git a/pkg/pattern/ingester_querier.go b/pkg/pattern/ingester_querier.go
index a77dd47b31137..3a275ffd46445 100644
--- a/pkg/pattern/ingester_querier.go
+++ b/pkg/pattern/ingester_querier.go
@@ -52,7 +52,7 @@ func NewIngesterQuerier(
func (q *IngesterQuerier) Patterns(ctx context.Context, req *logproto.QueryPatternsRequest) (*logproto.QueryPatternsResponse, error) {
_, err := syntax.ParseMatchers(req.Query, true)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
resps, err := q.forAllIngesters(ctx, func(_ context.Context, client logproto.PatternClient) (interface{}, error) {
return client.Query(ctx, req)
diff --git a/pkg/pattern/instance.go b/pkg/pattern/instance.go
index 719f90d69075c..24f2814e467f5 100644
--- a/pkg/pattern/instance.go
+++ b/pkg/pattern/instance.go
@@ -102,7 +102,7 @@ func (i *instance) Push(ctx context.Context, req *logproto.PushRequest) error {
}
if ownedStream {
- if reqStream.Entries == nil || len(reqStream.Entries) == 0 {
+ if len(reqStream.Entries) == 0 {
continue
}
s, _, err := i.streams.LoadOrStoreNew(reqStream.Labels,
@@ -158,7 +158,7 @@ func (i *instance) isOwnedStream(ingesterID string, stream string) (bool, error)
func (i *instance) Iterator(ctx context.Context, req *logproto.QueryPatternsRequest) (iter.Iterator, error) {
matchers, err := syntax.ParseMatchers(req.Query, true)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
step := model.Time(req.Step)
@@ -216,7 +216,7 @@ outer:
func (i *instance) createStream(_ context.Context, pushReqStream logproto.Stream) (*stream, error) {
labels, err := syntax.ParseLabels(pushReqStream.Labels)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
fp := i.getHashForLabels(labels)
sortedLabels := i.index.Add(logproto.FromLabelsToLabelAdapters(labels), fp)
diff --git a/pkg/querier-rf1/http.go b/pkg/querier-rf1/http.go
index 279d52bf9ccc9..baa820a99460f 100644
--- a/pkg/querier-rf1/http.go
+++ b/pkg/querier-rf1/http.go
@@ -300,7 +300,7 @@ func (q *QuerierAPI) PatternsHandler(ctx context.Context, req *logproto.QueryPat
func (q *QuerierAPI) validateMaxEntriesLimits(ctx context.Context, expr syntax.Expr, limit uint32) error {
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
// entry limit does not apply to metric queries.
@@ -341,7 +341,7 @@ func WrapQuerySpanAndTimeout(call string, limits Limits) middleware.Interface {
tenants, err := tenant.TenantIDs(ctx)
if err != nil {
level.Error(log).Log("msg", "couldn't fetch tenantID", "err", err)
- serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, err.Error()), w)
+ serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error()), w)
return
}
diff --git a/pkg/querier-rf1/querier.go b/pkg/querier-rf1/querier.go
index ead5f744dae88..a2ff7100376ad 100644
--- a/pkg/querier-rf1/querier.go
+++ b/pkg/querier-rf1/querier.go
@@ -924,7 +924,7 @@ func (q *Rf1Querier) Patterns(ctx context.Context, req *logproto.QueryPatternsRe
}
res, err := q.patternQuerier.Patterns(ctx, req)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return res, err
diff --git a/pkg/querier/http.go b/pkg/querier/http.go
index 9a12b9b96271c..5f0e928b6a1c0 100644
--- a/pkg/querier/http.go
+++ b/pkg/querier/http.go
@@ -135,20 +135,20 @@ func (q *QuerierAPI) LabelHandler(ctx context.Context, req *logproto.LabelReques
// TailHandler is a http.HandlerFunc for handling tail queries.
func (q *QuerierAPI) TailHandler(w http.ResponseWriter, r *http.Request) {
upgrader := websocket.Upgrader{
- CheckOrigin: func(r *http.Request) bool { return true },
+ CheckOrigin: func(_ *http.Request) bool { return true },
}
logger := util_log.WithContext(r.Context(), util_log.Logger)
req, err := loghttp.ParseTailQuery(r)
if err != nil {
- serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, err.Error()), w)
+ serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error()), w)
return
}
tenantID, err := tenant.TenantID(r.Context())
if err != nil {
level.Warn(logger).Log("msg", "error getting tenant id", "err", err)
- serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, err.Error()), w)
+ serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error()), w)
return
}
@@ -420,7 +420,7 @@ func (q *QuerierAPI) PatternsHandler(ctx context.Context, req *logproto.QueryPat
func (q *QuerierAPI) validateMaxEntriesLimits(ctx context.Context, expr syntax.Expr, limit uint32) error {
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
// entry limit does not apply to metric queries.
@@ -461,7 +461,7 @@ func WrapQuerySpanAndTimeout(call string, limits Limits) middleware.Interface {
tenants, err := tenant.TenantIDs(ctx)
if err != nil {
level.Error(log).Log("msg", "couldn't fetch tenantID", "err", err)
- serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, err.Error()), w)
+ serverutil.WriteError(httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error()), w)
return
}
diff --git a/pkg/querier/querier.go b/pkg/querier/querier.go
index b320e5c5fd6ad..997a4bf7731ce 100644
--- a/pkg/querier/querier.go
+++ b/pkg/querier/querier.go
@@ -40,6 +40,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/index/seriesvolume"
"github.com/grafana/loki/v3/pkg/storage/stores/index/stats"
listutil "github.com/grafana/loki/v3/pkg/util"
+ "github.com/grafana/loki/v3/pkg/util/httpreq"
"github.com/grafana/loki/v3/pkg/util/spanlogger"
util_validation "github.com/grafana/loki/v3/pkg/util/validation"
@@ -963,7 +964,7 @@ func (q *SingleTenantQuerier) DetectedLabels(ctx context.Context, req *logproto.
var err error
start := model.TimeFromUnixNano(storeQueryInterval.start.UnixNano())
end := model.TimeFromUnixNano(storeQueryInterval.end.UnixNano())
- storeLabels, err := q.store.LabelNamesForMetricName(ctx, userID, start, end, "logs")
+ storeLabels, err := q.store.LabelNamesForMetricName(ctx, userID, start, end, "logs", matchers...)
for _, label := range storeLabels {
values, err := q.store.LabelValuesForMetricName(ctx, userID, start, end, "logs", label, matchers...)
if err != nil {
@@ -1075,6 +1076,8 @@ func (q *SingleTenantQuerier) DetectedFields(ctx context.Context, req *logproto.
if err != nil {
return nil, err
}
+ // just incject the header to categorize labels
+ ctx = httpreq.InjectHeader(ctx, httpreq.LokiEncodingFlagsHeader, (string)(httpreq.FlagCategorizeLabels))
params := logql.SelectLogParams{
QueryRequest: &logproto.QueryRequest{
Start: req.Start,
@@ -1110,13 +1113,16 @@ func (q *SingleTenantQuerier) DetectedFields(ctx context.Context, req *logproto.
level.Warn(q.logger).Log("msg", "failed to marshal hyperloglog sketch", "err", err)
continue
}
-
+ p := v.parsers
+ if len(p) == 0 {
+ p = nil
+ }
fields[fieldCount] = &logproto.DetectedField{
Label: k,
Type: v.fieldType,
Cardinality: v.Estimate(),
Sketch: sketch,
- Parsers: v.parsers,
+ Parsers: p,
}
fieldCount++
@@ -1128,21 +1134,44 @@ func (q *SingleTenantQuerier) DetectedFields(ctx context.Context, req *logproto.
}, nil
}
+func getParsersFromExpr(expr syntax.LogSelectorExpr) []string {
+ parsers := make([]string, 0)
+ expr.Walk(func(e syntax.Expr) {
+ switch concrete := e.(type) {
+ case *syntax.LogfmtParserExpr, *syntax.LogfmtExpressionParser:
+ if !slices.Contains(parsers, "logfmt") {
+ parsers = append(parsers, "logfmt")
+ }
+ case *syntax.JSONExpressionParser:
+ if !slices.Contains(parsers, "json") {
+ parsers = append(parsers, "json")
+ }
+ case *syntax.LabelParserExpr:
+ if concrete.Op == syntax.OpParserTypeJSON {
+ if !slices.Contains(parsers, "json") {
+ parsers = append(parsers, "json")
+ }
+ }
+ }
+ // bail if we found both parsers
+ if len(parsers) == 2 {
+ return
+ }
+ })
+ return parsers
+}
+
type parsedFields struct {
sketch *hyperloglog.Sketch
fieldType logproto.DetectedFieldType
parsers []string
}
-func newParsedFields(parser *string) *parsedFields {
- p := ""
- if parser != nil {
- p = *parser
- }
+func newParsedFields(parsers []string) *parsedFields {
return &parsedFields{
sketch: hyperloglog.New(),
fieldType: logproto.DetectedFieldString,
- parsers: []string{p},
+ parsers: parsers,
}
}
@@ -1196,15 +1225,20 @@ func determineType(value string) logproto.DetectedFieldType {
func parseDetectedFields(limit uint32, streams logqlmodel.Streams) map[string]*parsedFields {
detectedFields := make(map[string]*parsedFields, limit)
fieldCount := uint32(0)
- emtpyparser := ""
+ emtpyparsers := []string{}
for _, stream := range streams {
+ streamLbls, err := syntax.ParseLabels(stream.Labels)
+ if err != nil {
+ streamLbls = labels.EmptyLabels()
+ }
+
for _, entry := range stream.Entries {
structuredMetadata := getStructuredMetadata(entry)
for k, vals := range structuredMetadata {
df, ok := detectedFields[k]
if !ok && fieldCount < limit {
- df = newParsedFields(&emtpyparser)
+ df = newParsedFields(emtpyparsers)
detectedFields[k] = df
fieldCount++
}
@@ -1226,11 +1260,12 @@ func parseDetectedFields(limit uint32, streams logqlmodel.Streams) map[string]*p
}
}
- detected, parser := parseLine(entry.Line)
- for k, vals := range detected {
+ streamLbls := logql_log.NewBaseLabelsBuilder().ForLabels(streamLbls, streamLbls.Hash())
+ parsedLabels, parsers := parseEntry(entry, streamLbls)
+ for k, vals := range parsedLabels {
df, ok := detectedFields[k]
if !ok && fieldCount < limit {
- df = newParsedFields(parser)
+ df = newParsedFields(parsers)
detectedFields[k] = df
fieldCount++
}
@@ -1239,8 +1274,10 @@ func parseDetectedFields(limit uint32, streams logqlmodel.Streams) map[string]*p
continue
}
- if !slices.Contains(df.parsers, *parser) {
- df.parsers = append(df.parsers, *parser)
+ for _, parser := range parsers {
+ if !slices.Contains(df.parsers, parser) {
+ df.parsers = append(df.parsers, parser)
+ }
}
detectType := true
@@ -1283,24 +1320,50 @@ func getStructuredMetadata(entry push.Entry) map[string][]string {
return result
}
-func parseLine(line string) (map[string][]string, *string) {
- parser := "logfmt"
- logFmtParser := logql_log.NewLogfmtParser(true, false)
+func parseEntry(entry push.Entry, lbls *logql_log.LabelsBuilder) (map[string][]string, []string) {
+ origParsed := getParsedLabels(entry)
+ parsed := make(map[string][]string, len(origParsed))
+
+ for lbl, values := range origParsed {
+ if lbl == logqlmodel.ErrorLabel || lbl == logqlmodel.ErrorDetailsLabel ||
+ lbl == logqlmodel.PreserveErrorLabel {
+ continue
+ }
- lbls := logql_log.NewBaseLabelsBuilder().ForLabels(labels.EmptyLabels(), 0)
- _, logfmtSuccess := logFmtParser.Process(0, []byte(line), lbls)
- if !logfmtSuccess || lbls.HasErr() {
- parser = "json"
- jsonParser := logql_log.NewJSONParser()
+ parsed[lbl] = values
+ }
+
+ line := entry.Line
+ parser := "json"
+ jsonParser := logql_log.NewJSONParser()
+ _, jsonSuccess := jsonParser.Process(0, []byte(line), lbls)
+ if !jsonSuccess || lbls.HasErr() {
lbls.Reset()
- _, jsonSuccess := jsonParser.Process(0, []byte(line), lbls)
- if !jsonSuccess || lbls.HasErr() {
- return map[string][]string{}, nil
+
+ logFmtParser := logql_log.NewLogfmtParser(false, false)
+ parser = "logfmt"
+ _, logfmtSuccess := logFmtParser.Process(0, []byte(line), lbls)
+ if !logfmtSuccess || lbls.HasErr() {
+ return parsed, nil
}
}
parsedLabels := map[string]map[string]struct{}{}
- for _, lbl := range lbls.LabelsResult().Labels() {
+ for lbl, values := range parsed {
+ if vals, ok := parsedLabels[lbl]; ok {
+ for _, value := range values {
+ vals[value] = struct{}{}
+ }
+ } else {
+ parsedLabels[lbl] = map[string]struct{}{}
+ for _, value := range values {
+ parsedLabels[lbl][value] = struct{}{}
+ }
+ }
+ }
+
+ lblsResult := lbls.LabelsResult().Parsed()
+ for _, lbl := range lblsResult {
if values, ok := parsedLabels[lbl.Name]; ok {
values[lbl.Value] = struct{}{}
} else {
@@ -1310,6 +1373,32 @@ func parseLine(line string) (map[string][]string, *string) {
result := make(map[string][]string, len(parsedLabels))
for lbl, values := range parsedLabels {
+ if lbl == logqlmodel.ErrorLabel || lbl == logqlmodel.ErrorDetailsLabel ||
+ lbl == logqlmodel.PreserveErrorLabel {
+ continue
+ }
+ vals := make([]string, 0, len(values))
+ for v := range values {
+ vals = append(vals, v)
+ }
+ result[lbl] = vals
+ }
+
+ return result, []string{parser}
+}
+
+func getParsedLabels(entry push.Entry) map[string][]string {
+ labels := map[string]map[string]struct{}{}
+ for _, lbl := range entry.Parsed {
+ if values, ok := labels[lbl.Name]; ok {
+ values[lbl.Value] = struct{}{}
+ } else {
+ labels[lbl.Name] = map[string]struct{}{lbl.Value: {}}
+ }
+ }
+
+ result := make(map[string][]string, len(labels))
+ for lbl, values := range labels {
vals := make([]string, 0, len(values))
for v := range values {
vals = append(vals, v)
@@ -1317,13 +1406,10 @@ func parseLine(line string) (map[string][]string, *string) {
result[lbl] = vals
}
- return result, &parser
+ return result
}
// streamsForFieldDetection reads the streams from the iterator and returns them sorted.
-// If categorizeLabels is true, the stream labels contains just the stream labels and entries inside each stream have their
-// structuredMetadata and parsed fields populated with structured metadata labels plus the parsed labels respectively.
-// Otherwise, the stream labels are the whole series labels including the stream labels, structured metadata labels and parsed labels.
func streamsForFieldDetection(i iter.EntryIterator, size uint32) (logqlmodel.Streams, error) {
streams := map[string]*logproto.Stream{}
respSize := uint32(0)
@@ -1339,12 +1425,28 @@ func streamsForFieldDetection(i iter.EntryIterator, size uint32) (logqlmodel.Str
// If lastEntry.Unix < 0 this is the first pass through the loop and we should output the line.
// Then check to see if the entry is equal to, or past a forward step
if lastEntry.Unix() < 0 || shouldOutput {
- stream, ok := streams[streamLabels]
+ allLbls, err := syntax.ParseLabels(streamLabels)
+ if err != nil {
+ continue
+ }
+
+ parsedLbls := logproto.FromLabelAdaptersToLabels(entry.Parsed)
+ structuredMetadata := logproto.FromLabelAdaptersToLabels(entry.StructuredMetadata)
+
+ onlyStreamLbls := logql_log.NewBaseLabelsBuilder().ForLabels(allLbls, 0)
+ allLbls.Range(func(l labels.Label) {
+ if parsedLbls.Has(l.Name) || structuredMetadata.Has(l.Name) {
+ onlyStreamLbls.Del(l.Name)
+ }
+ })
+
+ lblStr := onlyStreamLbls.LabelsResult().String()
+ stream, ok := streams[lblStr]
if !ok {
stream = &logproto.Stream{
- Labels: streamLabels,
+ Labels: lblStr,
}
- streams[streamLabels] = stream
+ streams[lblStr] = stream
}
stream.Entries = append(stream.Entries, entry)
lastEntry = i.At().Timestamp
diff --git a/pkg/querier/querier_mock_test.go b/pkg/querier/querier_mock_test.go
index 4ddbab7ed2e59..df89d6b695611 100644
--- a/pkg/querier/querier_mock_test.go
+++ b/pkg/querier/querier_mock_test.go
@@ -8,6 +8,7 @@ import (
"time"
"github.com/grafana/loki/v3/pkg/logql/log"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
"github.com/grafana/loki/pkg/push"
@@ -16,6 +17,7 @@ import (
"github.com/grafana/dskit/grpcclient"
"github.com/grafana/dskit/ring"
ring_client "github.com/grafana/dskit/ring/client"
+ logql_log "github.com/grafana/loki/v3/pkg/logql/log"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/labels"
@@ -142,7 +144,7 @@ func (c *querierClientMock) Close() error {
// newIngesterClientMockFactory creates a factory function always returning
// the input querierClientMock
func newIngesterClientMockFactory(c *querierClientMock) ring_client.PoolFactory {
- return ring_client.PoolAddrFunc(func(addr string) (ring_client.PoolClient, error) {
+ return ring_client.PoolAddrFunc(func(_ string) (ring_client.PoolClient, error) {
return c, nil
})
}
@@ -356,8 +358,8 @@ func (s *storeMock) LabelValuesForMetricName(ctx context.Context, userID string,
return args.Get(0).([]string), args.Error(1)
}
-func (s *storeMock) LabelNamesForMetricName(ctx context.Context, userID string, from, through model.Time, metricName string, _ ...*labels.Matcher) ([]string, error) {
- args := s.Called(ctx, userID, from, through, metricName)
+func (s *storeMock) LabelNamesForMetricName(ctx context.Context, userID string, from, through model.Time, metricName string, m ...*labels.Matcher) ([]string, error) {
+ args := s.Called(ctx, userID, from, through, metricName, m)
return args.Get(0).([]string), args.Error(1)
}
@@ -575,30 +577,44 @@ func mockStreamWithLabels(from int, quantity int, labels string) logproto.Stream
}
func mockLogfmtStream(from int, quantity int) logproto.Stream {
- return mockLogfmtStreamWithLabels(from, quantity, `{type="test"}`)
+ return mockLogfmtStreamWithLabels(from, quantity, `{type="test", name="foo"}`)
}
-func mockLogfmtStreamWithLabels(_ int, quantity int, labels string) logproto.Stream {
+func mockLogfmtStreamWithLabels(_ int, quantity int, lbls string) logproto.Stream {
entries := make([]logproto.Entry, 0, quantity)
+ streamLabels, err := syntax.ParseLabels(lbls)
+ if err != nil {
+ streamLabels = labels.EmptyLabels()
+ }
+
+ lblBuilder := logql_log.NewBaseLabelsBuilder().ForLabels(streamLabels, streamLabels.Hash())
+ logFmtParser := logql_log.NewLogfmtParser(false, false)
// used for detected fields queries which are always BACKWARD
for i := quantity; i > 0; i-- {
- entries = append(entries, logproto.Entry{
+ line := fmt.Sprintf(
+ `message="line %d" count=%d fake=true bytes=%dMB duration=%dms percent=%f even=%t name=bar`,
+ i,
+ i,
+ (i * 10),
+ (i * 256),
+ float32(i*10.0),
+ (i%2 == 0))
+
+ entry := logproto.Entry{
Timestamp: time.Unix(int64(i), 0),
- Line: fmt.Sprintf(
- `message="line %d" count=%d fake=true bytes=%dMB duration=%dms percent=%f even=%t`,
- i,
- i,
- (i * 10),
- (i * 256),
- float32(i*10.0),
- (i%2 == 0)),
- })
+ Line: line,
+ }
+ _, logfmtSuccess := logFmtParser.Process(0, []byte(line), lblBuilder)
+ if logfmtSuccess {
+ entry.Parsed = logproto.FromLabelsToLabelAdapters(lblBuilder.LabelsResult().Parsed())
+ }
+ entries = append(entries, entry)
}
return logproto.Stream{
Entries: entries,
- Labels: labels,
+ Labels: lblBuilder.LabelsResult().String(),
}
}
@@ -609,7 +625,7 @@ func mockLogfmtStreamWithStructuredMetadata(from int, quantity int) logproto.Str
func mockLogfmtStreamWithLabelsAndStructuredMetadata(
from int,
quantity int,
- labels string,
+ lbls string,
) logproto.Stream {
var entries []logproto.Entry
metadata := push.LabelsAdapter{
@@ -626,15 +642,29 @@ func mockLogfmtStreamWithLabelsAndStructuredMetadata(
})
}
+ streamLabels, err := syntax.ParseLabels(lbls)
+ if err != nil {
+ streamLabels = labels.EmptyLabels()
+ }
+
+ lblBuilder := logql_log.NewBaseLabelsBuilder().ForLabels(streamLabels, streamLabels.Hash())
+ logFmtParser := logql_log.NewLogfmtParser(false, false)
+
for i := quantity; i > 0; i-- {
- entries = append(entries, logproto.Entry{
+ line := fmt.Sprintf(`message="line %d" count=%d fake=true`, i, i)
+ entry := logproto.Entry{
Timestamp: time.Unix(int64(i), 0),
- Line: fmt.Sprintf(`message="line %d" count=%d fake=true`, i, i),
+ Line: line,
StructuredMetadata: metadata,
- })
+ }
+ _, logfmtSuccess := logFmtParser.Process(0, []byte(line), lblBuilder)
+ if logfmtSuccess {
+ entry.Parsed = logproto.FromLabelsToLabelAdapters(lblBuilder.LabelsResult().Parsed())
+ }
+ entries = append(entries, entry)
}
return logproto.Stream{
- Labels: labels,
+ Labels: lbls,
Entries: entries,
}
}
diff --git a/pkg/querier/querier_test.go b/pkg/querier/querier_test.go
index 7336c3b11bfaf..eb8b8c3a97544 100644
--- a/pkg/querier/querier_test.go
+++ b/pkg/querier/querier_test.go
@@ -16,17 +16,21 @@ import (
ring_client "github.com/grafana/dskit/ring/client"
"github.com/grafana/dskit/user"
"github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/promql/parser"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
util_log "github.com/grafana/loki/v3/pkg/util/log"
+ "github.com/grafana/loki/pkg/push"
+
"github.com/grafana/loki/v3/pkg/compactor/deletion"
"github.com/grafana/loki/v3/pkg/ingester/client"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql"
"github.com/grafana/loki/v3/pkg/logql/syntax"
+ "github.com/grafana/loki/v3/pkg/logqlmodel"
"github.com/grafana/loki/v3/pkg/querier/plan"
"github.com/grafana/loki/v3/pkg/storage"
"github.com/grafana/loki/v3/pkg/util/constants"
@@ -254,7 +258,7 @@ func TestQuerier_SeriesAPI(t *testing.T) {
{
"ingester error",
mkReq([]string{`{a="1"}`}),
- func(store *storeMock, querier *queryClientMock, ingester *querierClientMock, limits validation.Limits, req *logproto.SeriesRequest) {
+ func(store *storeMock, _ *queryClientMock, ingester *querierClientMock, _ validation.Limits, req *logproto.SeriesRequest) {
ingester.On("Series", mock.Anything, req, mock.Anything).Return(nil, errors.New("tst-err"))
store.On("SelectSeries", mock.Anything, mock.Anything).Return(nil, nil)
@@ -268,7 +272,7 @@ func TestQuerier_SeriesAPI(t *testing.T) {
{
"store error",
mkReq([]string{`{a="1"}`}),
- func(store *storeMock, querier *queryClientMock, ingester *querierClientMock, limits validation.Limits, req *logproto.SeriesRequest) {
+ func(store *storeMock, _ *queryClientMock, ingester *querierClientMock, _ validation.Limits, req *logproto.SeriesRequest) {
ingester.On("Series", mock.Anything, req, mock.Anything).Return(mockSeriesResponse([]map[string]string{
{"a": "1"},
}), nil)
@@ -284,7 +288,7 @@ func TestQuerier_SeriesAPI(t *testing.T) {
{
"no matches",
mkReq([]string{`{a="1"}`}),
- func(store *storeMock, querier *queryClientMock, ingester *querierClientMock, limits validation.Limits, req *logproto.SeriesRequest) {
+ func(store *storeMock, _ *queryClientMock, ingester *querierClientMock, _ validation.Limits, req *logproto.SeriesRequest) {
ingester.On("Series", mock.Anything, req, mock.Anything).Return(mockSeriesResponse(nil), nil)
store.On("SelectSeries", mock.Anything, mock.Anything).Return(nil, nil)
},
@@ -298,7 +302,7 @@ func TestQuerier_SeriesAPI(t *testing.T) {
{
"returns series",
mkReq([]string{`{a="1"}`}),
- func(store *storeMock, querier *queryClientMock, ingester *querierClientMock, limits validation.Limits, req *logproto.SeriesRequest) {
+ func(store *storeMock, _ *queryClientMock, ingester *querierClientMock, _ validation.Limits, req *logproto.SeriesRequest) {
ingester.On("Series", mock.Anything, req, mock.Anything).Return(mockSeriesResponse([]map[string]string{
{"a": "1", "b": "2"},
{"a": "1", "b": "3"},
@@ -344,7 +348,7 @@ func TestQuerier_SeriesAPI(t *testing.T) {
{
"dedupes",
mkReq([]string{`{a="1"}`}),
- func(store *storeMock, querier *queryClientMock, ingester *querierClientMock, limits validation.Limits, req *logproto.SeriesRequest) {
+ func(store *storeMock, _ *queryClientMock, ingester *querierClientMock, _ validation.Limits, req *logproto.SeriesRequest) {
ingester.On("Series", mock.Anything, req, mock.Anything).Return(mockSeriesResponse([]map[string]string{
{"a": "1", "b": "2"},
}), nil)
@@ -1164,7 +1168,7 @@ func setupIngesterQuerierMocks(conf Config, limits *validation.Overrides) (*quer
store.On("SelectLogs", mock.Anything, mock.Anything).Return(mockStreamIterator(0, 1), nil)
store.On("SelectSamples", mock.Anything, mock.Anything).Return(mockSampleIterator(querySampleClient), nil)
store.On("LabelValuesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return([]string{"1", "2", "3"}, nil)
- store.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return([]string{"foo"}, nil)
+ store.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return([]string{"foo"}, nil)
store.On("SelectSeries", mock.Anything, mock.Anything).Return([]logproto.SeriesIdentifier{
{Labels: []logproto.SeriesIdentifier_LabelsEntry{{Key: "foo", Value: "1"}}},
}, nil)
@@ -1411,7 +1415,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(&ingesterResponse, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{"storeLabel"}, nil).
On("LabelValuesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, "storeLabel", mock.Anything).
Return([]string{"val1", "val2"}, nil)
@@ -1452,7 +1456,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(&ingesterResponse, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{"storeLabel", "commonLabel"}, nil).
On("LabelValuesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, "storeLabel", mock.Anything).
Return([]string{"val1", "val2"}, nil).
@@ -1490,7 +1494,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(&logproto.LabelToValuesResponse{}, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{"storeLabel1", "storeLabel2"}, nil).
On("LabelValuesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, "storeLabel1", mock.Anything).
Return([]string{"val1", "val2"}, nil).
@@ -1524,7 +1528,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(&logproto.LabelToValuesResponse{}, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{"storeLabel1", "pod"}, nil).
On("LabelValuesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, "storeLabel1", mock.Anything).
Return([]string{"val1", "val2"}, nil).
@@ -1563,7 +1567,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(&ingesterResponse, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{}, nil)
querier, err := newQuerier(
@@ -1599,7 +1603,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(&ingesterResponse, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{}, nil)
querier, err := newQuerier(
@@ -1630,7 +1634,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(&ingesterResponse, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{}, nil)
request := logproto.DetectedLabelsRequest{
Start: now,
@@ -1665,7 +1669,7 @@ func TestQuerier_DetectedLabels(t *testing.T) {
ingesterClient.On("GetDetectedLabels", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return(nil, nil)
- storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
+ storeClient.On("LabelNamesForMetricName", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).
Return([]string{}, nil)
request := logproto.DetectedLabelsRequest{
Start: now,
@@ -1777,15 +1781,16 @@ func TestQuerier_DetectedFields(t *testing.T) {
detectedFields := resp.Fields
// log lines come from querier_mock_test.go
// message="line %d" count=%d fake=true bytes=%dMB duration=%dms percent=%f even=%t
- assert.Len(t, detectedFields, 7)
+ assert.Len(t, detectedFields, 8)
expectedCardinality := map[string]uint64{
- "message": 5,
- "count": 5,
- "fake": 1,
- "bytes": 5,
- "duration": 5,
- "percent": 5,
- "even": 2,
+ "message": 5,
+ "count": 5,
+ "fake": 1,
+ "bytes": 5,
+ "duration": 5,
+ "percent": 5,
+ "even": 2,
+ "name_extracted": 1,
}
for _, d := range detectedFields {
card := expectedCardinality[d.Label]
@@ -1821,17 +1826,18 @@ func TestQuerier_DetectedFields(t *testing.T) {
detectedFields := resp.Fields
// log lines come from querier_mock_test.go
// message="line %d" count=%d fake=true bytes=%dMB duration=%dms percent=%f even=%t
- assert.Len(t, detectedFields, 9)
+ assert.Len(t, detectedFields, 10)
expectedCardinality := map[string]uint64{
- "variable": 5,
- "constant": 1,
- "message": 5,
- "count": 5,
- "fake": 1,
- "bytes": 5,
- "duration": 5,
- "percent": 5,
- "even": 2,
+ "variable": 5,
+ "constant": 1,
+ "message": 5,
+ "count": 5,
+ "fake": 1,
+ "bytes": 5,
+ "duration": 5,
+ "percent": 5,
+ "even": 2,
+ "name_extracted": 1,
}
for _, d := range detectedFields {
card := expectedCardinality[d.Label]
@@ -1867,7 +1873,7 @@ func TestQuerier_DetectedFields(t *testing.T) {
detectedFields := resp.Fields
// log lines come from querier_mock_test.go
// message="line %d" count=%d fake=true bytes=%dMB duration=%dms percent=%f even=%t
- assert.Len(t, detectedFields, 7)
+ assert.Len(t, detectedFields, 8)
var messageField, countField, bytesField, durationField, floatField, evenField *logproto.DetectedField
for _, field := range detectedFields {
@@ -1923,7 +1929,7 @@ func TestQuerier_DetectedFields(t *testing.T) {
detectedFields := resp.Fields
// log lines come from querier_mock_test.go
// message="line %d" count=%d fake=true bytes=%dMB duration=%dms percent=%f even=%t
- assert.Len(t, detectedFields, 9)
+ assert.Len(t, detectedFields, 10)
var messageField, countField, bytesField, durationField, floatField, evenField, constantField, variableField *logproto.DetectedField
for _, field := range detectedFields {
@@ -1953,9 +1959,58 @@ func TestQuerier_DetectedFields(t *testing.T) {
assert.Equal(t, []string{"logfmt"}, durationField.Parsers)
assert.Equal(t, []string{"logfmt"}, floatField.Parsers)
assert.Equal(t, []string{"logfmt"}, evenField.Parsers)
- assert.Equal(t, []string{""}, constantField.Parsers)
- assert.Equal(t, []string{""}, variableField.Parsers)
- })
+ assert.Equal(t, []string(nil), constantField.Parsers)
+ assert.Equal(t, []string(nil), variableField.Parsers)
+ },
+ )
+
+ t.Run(
+ "adds _extracted suffix to detected fields that conflict with indexed labels",
+ func(t *testing.T) {
+ store := newStoreMock()
+ store.On("SelectLogs", mock.Anything, mock.Anything).
+ Return(mockLogfmtStreamIterator(1, 2), nil)
+
+ queryClient := newQueryClientMock()
+ queryClient.On("Recv").
+ Return(mockQueryResponse([]logproto.Stream{mockLogfmtStreamWithStructuredMetadata(1, 2)}), nil)
+
+ ingesterClient := newQuerierClientMock()
+ ingesterClient.On("Query", mock.Anything, mock.Anything, mock.Anything).
+ Return(queryClient, nil)
+
+ querier, err := newQuerier(
+ conf,
+ mockIngesterClientConfig(),
+ newIngesterClientMockFactory(ingesterClient),
+ mockReadRingWithOneActiveIngester(),
+ &mockDeleteGettter{},
+ store, limits)
+ require.NoError(t, err)
+
+ resp, err := querier.DetectedFields(ctx, &request)
+ require.NoError(t, err)
+
+ detectedFields := resp.Fields
+ // log lines come from querier_mock_test.go
+ // message="line %d" count=%d fake=true bytes=%dMB duration=%dms percent=%f even=%t
+ assert.Len(t, detectedFields, 10)
+
+ var nameField *logproto.DetectedField
+ for _, field := range detectedFields {
+ switch field.Label {
+ case "name_extracted":
+ nameField = field
+ }
+ }
+
+ assert.NotNil(t, nameField)
+ assert.Equal(t, "name_extracted", nameField.Label)
+ assert.Equal(t, logproto.DetectedFieldString, nameField.Type)
+ assert.Equal(t, []string{"logfmt"}, nameField.Parsers)
+ assert.Equal(t, uint64(1), nameField.Cardinality)
+ },
+ )
}
func BenchmarkQuerierDetectedFields(b *testing.B) {
@@ -2001,3 +2056,615 @@ func BenchmarkQuerierDetectedFields(b *testing.B) {
assert.NoError(b, err)
}
}
+
+func Test_getParsersFromExpr(t *testing.T) {
+ t.Run("detects logfmt parser", func(t *testing.T) {
+ exprStr := `{foo="bar"} | logfmt`
+ expr, err := syntax.ParseLogSelector(exprStr, true)
+ require.NoError(t, err)
+ assert.Equal(t, []string{"logfmt"}, getParsersFromExpr(expr))
+ })
+
+ t.Run("detects json parser", func(t *testing.T) {
+ exprStr := `{foo="bar"} | json`
+ expr, err := syntax.ParseLogSelector(exprStr, true)
+ require.NoError(t, err)
+ assert.Equal(t, []string{"json"}, getParsersFromExpr(expr))
+ })
+
+ t.Run("detects multiple parsers", func(t *testing.T) {
+ exprStr := `{foo="bar"} | logfmt | json`
+ expr, err := syntax.ParseLogSelector(exprStr, true)
+ require.NoError(t, err)
+ assert.Equal(t, []string{"logfmt", "json"}, getParsersFromExpr(expr))
+ })
+
+ t.Run("detects logfmt expression parser", func(t *testing.T) {
+ exprStr := `{foo="bar"} | logfmt msg="message"`
+ expr, err := syntax.ParseLogSelector(exprStr, true)
+ require.NoError(t, err)
+ assert.Equal(t, []string{"logfmt"}, getParsersFromExpr(expr))
+ })
+
+ t.Run("detects json expression parser", func(t *testing.T) {
+ exprStr := `{foo="bar"} | json first_server="servers[0]"`
+ expr, err := syntax.ParseLogSelector(exprStr, true)
+ require.NoError(t, err)
+ assert.Equal(t, []string{"json"}, getParsersFromExpr(expr))
+ })
+
+ t.Run("detects multiple expression parsers", func(t *testing.T) {
+ exprStr := `{foo="bar"} | logfmt msg="message" | json first_server="servers[0]"`
+ expr, err := syntax.ParseLogSelector(exprStr, true)
+ require.NoError(t, err)
+ assert.Equal(t, []string{"logfmt", "json"}, getParsersFromExpr(expr))
+ })
+}
+
+func Test_parseDetectedFeilds(t *testing.T) {
+ now := time.Now()
+
+ t.Run("when no parsers are supplied", func(t *testing.T) {
+ infoDetectdFiledMetadata := []push.LabelAdapter{
+ {
+ Name: "detected_level",
+ Value: "info",
+ },
+ }
+
+ rulerLines := []push.Entry{
+ {Timestamp: now, Line: "ts=2024-09-05T15:36:38.757788067Z caller=grpc_logging.go:66 tenant=2419 level=info method=/cortex.Ingester/Push duration=19.098s msg=gRPC", StructuredMetadata: infoDetectdFiledMetadata},
+ {Timestamp: now, Line: "ts=2024-09-05T15:36:38.698375619Z caller=grpc_logging.go:66 tenant=29 level=info method=/cortex.Ingester/Push duration=5.471s msg=gRPC", StructuredMetadata: infoDetectdFiledMetadata},
+ {Timestamp: now, Line: "ts=2024-09-05T15:36:38.629424175Z caller=grpc_logging.go:66 tenant=2919 level=info method=/cortex.Ingester/Push duration=29.234s msg=gRPC", StructuredMetadata: infoDetectdFiledMetadata},
+ }
+
+ rulerLbls := `{cluster="us-east-1", namespace="mimir-dev", pod="mimir-ruler-nfb37", service_name="mimir-ruler"}`
+ rulerMetric, err := parser.ParseMetric(rulerLbls)
+ require.NoError(t, err)
+
+ rulerStream := push.Stream{
+ Labels: rulerLbls,
+ Entries: rulerLines,
+ Hash: rulerMetric.Hash(),
+ }
+
+ debugDetectedFieldMetadata := []push.LabelAdapter{
+ {
+ Name: "detected_level",
+ Value: "debug",
+ },
+ }
+
+ nginxJSONLines := []push.Entry{
+ {Timestamp: now, Line: `{"host":"100.117.38.203", "user-identifier":"nader3722", "datetime":"05/Sep/2024:16:13:56 +0000", "method": "PATCH", "request": "/api/loki/v1/push", "protocol":"HTTP/2.0", "status":200, "bytes":9664, "referer": "https://www.seniorbleeding-edge.net/exploit/robust/whiteboard"}`, StructuredMetadata: debugDetectedFieldMetadata},
+ {Timestamp: now, Line: `{"host":"66.134.9.30", "user-identifier":"-", "datetime":"05/Sep/2024:16:13:55 +0000", "method": "DELETE", "request": "/api/mimir/v1/push", "protocol":"HTTP/1.1", "status":200, "bytes":18688, "referer": "https://www.districtiterate.biz/synergistic/next-generation/extend"}`, StructuredMetadata: debugDetectedFieldMetadata},
+ {Timestamp: now, Line: `{"host":"66.134.9.30", "user-identifier":"-", "datetime":"05/Sep/2024:16:13:55 +0000", "method": "GET", "request": "/api/loki/v1/label/names", "protocol":"HTTP/1.1", "status":200, "bytes":9314, "referer": "https://www.dynamicimplement.info/enterprise/distributed/incentivize/strategic"}`, StructuredMetadata: debugDetectedFieldMetadata},
+ }
+
+ nginxLbls := `{ cluster="eu-west-1", level="debug", namespace="gateway", pod="nginx-json-oghco", service_name="nginx-json" }`
+ nginxMetric, err := parser.ParseMetric(nginxLbls)
+ require.NoError(t, err)
+
+ nginxStream := push.Stream{
+ Labels: nginxLbls,
+ Entries: nginxJSONLines,
+ Hash: nginxMetric.Hash(),
+ }
+
+ t.Run("detect logfmt fields when with no supplied parsers", func(t *testing.T) {
+ df := parseDetectedFields(uint32(15), logqlmodel.Streams([]push.Stream{rulerStream}))
+ for _, expected := range []string{"ts", "caller", "tenant", "level", "method", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "logfmt", parsers[0])
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("detect json fields when with no supplied parsers", func(t *testing.T) {
+ df := parseDetectedFields(uint32(15), logqlmodel.Streams([]push.Stream{nginxStream}))
+ for _, expected := range []string{"host", "user_identifier", "datetime", "method", "request", "protocol", "status", "bytes", "referer"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "json", parsers[0])
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("detect mixed fields when with no supplied parsers", func(t *testing.T) {
+ df := parseDetectedFields(uint32(20), logqlmodel.Streams([]push.Stream{rulerStream, nginxStream}))
+
+ for _, expected := range []string{"ts", "caller", "tenant", "level", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1, "expected only logfmt parser for %s", expected)
+ require.Equal(t, "logfmt", parsers[0], "expected only logfmt parser for %s", expected)
+ }
+
+ for _, expected := range []string{"host", "user_identifier", "datetime", "request", "protocol", "status", "bytes", "referer"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1, "expected only json parser for %s", expected)
+ require.Equal(t, "json", parsers[0], "expected only json parser for %s", expected)
+ }
+
+ // multiple parsers for fields that exist in both streams
+ for _, expected := range []string{"method"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 2, "expected logfmt and json parser for %s", expected)
+ require.Contains(t, parsers, "logfmt", "expected logfmt parser for %s", expected)
+ require.Contains(t, parsers, "json", "expected json parser for %s", expected)
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("correctly applies _extracted for a single stream", func(t *testing.T) {
+ rulerLbls := `{cluster="us-east-1", namespace="mimir-dev", pod="mimir-ruler-nfb37", service_name="mimir-ruler", tenant="42", caller="inside-the-house"}`
+ rulerMetric, err := parser.ParseMetric(rulerLbls)
+ require.NoError(t, err)
+
+ rulerStream := push.Stream{
+ Labels: rulerLbls,
+ Entries: rulerLines,
+ Hash: rulerMetric.Hash(),
+ }
+
+ df := parseDetectedFields(uint32(15), logqlmodel.Streams([]push.Stream{rulerStream}))
+ for _, expected := range []string{"ts", "caller_extracted", "tenant_extracted", "level", "method", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "logfmt", parsers[0])
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("correctly applies _extracted for multiple streams", func(t *testing.T) {
+ rulerLbls := `{cluster="us-east-1", namespace="mimir-dev", pod="mimir-ruler-nfb37", service_name="mimir-ruler", tenant="42", caller="inside-the-house"}`
+ rulerMetric, err := parser.ParseMetric(rulerLbls)
+ require.NoError(t, err)
+
+ rulerStream := push.Stream{
+ Labels: rulerLbls,
+ Entries: rulerLines,
+ Hash: rulerMetric.Hash(),
+ }
+
+ nginxLbls := `{ cluster="eu-west-1", level="debug", namespace="gateway", pod="nginx-json-oghco", service_name="nginx-json", host="localhost"}`
+ nginxMetric, err := parser.ParseMetric(nginxLbls)
+ require.NoError(t, err)
+
+ nginxStream := push.Stream{
+ Labels: nginxLbls,
+ Entries: nginxJSONLines,
+ Hash: nginxMetric.Hash(),
+ }
+
+ df := parseDetectedFields(uint32(20), logqlmodel.Streams([]push.Stream{rulerStream, nginxStream}))
+ for _, expected := range []string{"ts", "caller_extracted", "tenant_extracted", "level", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "logfmt", parsers[0])
+ }
+
+ for _, expected := range []string{"host_extracted", "user_identifier", "datetime", "request", "protocol", "status", "bytes", "referer"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1, "expected only json parser for %s", expected)
+ require.Equal(t, "json", parsers[0], "expected only json parser for %s", expected)
+ }
+
+ // multiple parsers for fields that exist in both streams
+ for _, expected := range []string{"method"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 2, "expected logfmt and json parser for %s", expected)
+ require.Contains(t, parsers, "logfmt", "expected logfmt parser for %s", expected)
+ require.Contains(t, parsers, "json", "expected json parser for %s", expected)
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+ })
+
+ t.Run("when parsers are supplied", func(t *testing.T) {
+ infoDetectdFiledMetadata := []push.LabelAdapter{
+ {
+ Name: "detected_level",
+ Value: "info",
+ },
+ }
+
+ parsedRulerFields := func(ts, tenant, duration string) []push.LabelAdapter {
+ return []push.LabelAdapter{
+ {
+ Name: "ts",
+ Value: ts,
+ },
+ {
+ Name: "caller",
+ Value: "grpc_logging.go:66",
+ },
+ {
+ Name: "tenant",
+ Value: tenant,
+ },
+ {
+ Name: "level",
+ Value: "info",
+ },
+ {
+ Name: "method",
+ Value: "/cortex.Ingester/Push",
+ },
+ {
+ Name: "duration",
+ Value: duration,
+ },
+ {
+ Name: "msg",
+ Value: "gRPC",
+ },
+ }
+ }
+
+ rulerLines := []push.Entry{
+ {
+ Timestamp: now,
+ Line: "ts=2024-09-05T15:36:38.757788067Z caller=grpc_logging.go:66 tenant=2419 level=info method=/cortex.Ingester/Push duration=19.098s msg=gRPC",
+ StructuredMetadata: infoDetectdFiledMetadata,
+ Parsed: parsedRulerFields("2024-09-05T15:36:38.757788067Z", "2419", "19.098s"),
+ },
+ {
+ Timestamp: now,
+ Line: "ts=2024-09-05T15:36:38.698375619Z caller=grpc_logging.go:66 tenant=29 level=info method=/cortex.Ingester/Push duration=5.471s msg=gRPC",
+ StructuredMetadata: infoDetectdFiledMetadata,
+ Parsed: parsedRulerFields("2024-09-05T15:36:38.698375619Z", "29", "5.471s"),
+ },
+ {
+ Timestamp: now,
+ Line: "ts=2024-09-05T15:36:38.629424175Z caller=grpc_logging.go:66 tenant=2919 level=info method=/cortex.Ingester/Push duration=29.234s msg=gRPC",
+ StructuredMetadata: infoDetectdFiledMetadata,
+ Parsed: parsedRulerFields("2024-09-05T15:36:38.629424175Z", "2919", "29.234s"),
+ },
+ }
+
+ rulerLbls := `{cluster="us-east-1", namespace="mimir-dev", pod="mimir-ruler-nfb37", service_name="mimir-ruler"}`
+ rulerMetric, err := parser.ParseMetric(rulerLbls)
+ require.NoError(t, err)
+
+ rulerStream := push.Stream{
+ Labels: rulerLbls,
+ Entries: rulerLines,
+ Hash: rulerMetric.Hash(),
+ }
+
+ debugDetectedFieldMetadata := []push.LabelAdapter{
+ {
+ Name: "detected_level",
+ Value: "debug",
+ },
+ }
+
+ parsedNginxFields := func(host, userIdentifier, datetime, method, request, protocol, status, bytes, referer string) []push.LabelAdapter {
+ return []push.LabelAdapter{
+ {
+ Name: "host",
+ Value: host,
+ },
+ {
+ Name: "user_identifier",
+ Value: userIdentifier,
+ },
+ {
+ Name: "datetime",
+ Value: datetime,
+ },
+ {
+ Name: "method",
+ Value: method,
+ },
+ {
+ Name: "request",
+ Value: request,
+ },
+ {
+ Name: "protocol",
+ Value: protocol,
+ },
+ {
+ Name: "status",
+ Value: status,
+ },
+ {
+ Name: "bytes",
+ Value: bytes,
+ },
+ {
+ Name: "referer",
+ Value: referer,
+ },
+ }
+ }
+
+ nginxJSONLines := []push.Entry{
+ {
+ Timestamp: now,
+ Line: `{"host":"100.117.38.203", "user-identifier":"nader3722", "datetime":"05/Sep/2024:16:13:56 +0000", "method": "PATCH", "request": "/api/loki/v1/push", "protocol":"HTTP/2.0", "status":200, "bytes":9664, "referer": "https://www.seniorbleeding-edge.net/exploit/robust/whiteboard"}`,
+ StructuredMetadata: debugDetectedFieldMetadata,
+ Parsed: parsedNginxFields("100.117.38.203", "nadre3722", "05/Sep/2024:16:13:56 +0000", "PATCH", "/api/loki/v1/push", "HTTP/2.0", "200", "9664", "https://www.seniorbleeding-edge.net/exploit/robust/whiteboard"),
+ },
+ {
+ Timestamp: now,
+ Line: `{"host":"66.134.9.30", "user-identifier":"-", "datetime":"05/Sep/2024:16:13:55 +0000", "method": "DELETE", "request": "/api/mimir/v1/push", "protocol":"HTTP/1.1", "status":200, "bytes":18688, "referer": "https://www.districtiterate.biz/synergistic/next-generation/extend"}`,
+ StructuredMetadata: debugDetectedFieldMetadata,
+ Parsed: parsedNginxFields("66.134.9.30", "-", "05/Sep/2024:16:13:55 +0000", "DELETE", "/api/mimir/v1/push", "HTTP/1.1", "200", "18688", "https://www.districtiterate.biz/synergistic/next-generation/extend"),
+ },
+ {
+ Timestamp: now,
+ Line: `{"host":"66.134.9.30", "user-identifier":"-", "datetime":"05/Sep/2024:16:13:55 +0000", "method": "GET", "request": "/api/loki/v1/label/names", "protocol":"HTTP/1.1", "status":200, "bytes":9314, "referer": "https://www.dynamicimplement.info/enterprise/distributed/incentivize/strategic"}`,
+ StructuredMetadata: debugDetectedFieldMetadata,
+ Parsed: parsedNginxFields("66.134.9.30", "-", "05/Sep/2024:16:13:55 +0000", "GET", "/api/loki/v1/label/names", "HTTP/1.1", "200", "9314", "https://www.dynamicimplement.info/enterprise/distributed/incentivize/strategic"),
+ },
+ }
+
+ nginxLbls := `{ cluster="eu-west-1", level="debug", namespace="gateway", pod="nginx-json-oghco", service_name="nginx-json" }`
+ nginxMetric, err := parser.ParseMetric(nginxLbls)
+ require.NoError(t, err)
+
+ nginxStream := push.Stream{
+ Labels: nginxLbls,
+ Entries: nginxJSONLines,
+ Hash: nginxMetric.Hash(),
+ }
+
+ t.Run("detect logfmt fields", func(t *testing.T) {
+ df := parseDetectedFields(uint32(15), logqlmodel.Streams([]push.Stream{rulerStream}))
+ for _, expected := range []string{"ts", "caller", "tenant", "level", "method", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "logfmt", parsers[0])
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("detect json fields", func(t *testing.T) {
+ df := parseDetectedFields(uint32(15), logqlmodel.Streams([]push.Stream{nginxStream}))
+ for _, expected := range []string{"host", "user_identifier", "datetime", "method", "request", "protocol", "status", "bytes", "referer"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "json", parsers[0])
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("detect mixed fields", func(t *testing.T) {
+ df := parseDetectedFields(uint32(20), logqlmodel.Streams([]push.Stream{rulerStream, nginxStream}))
+
+ for _, expected := range []string{"ts", "caller", "tenant", "level", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1, "expected only logfmt parser for %s", expected)
+ require.Equal(t, "logfmt", parsers[0], "expected only logfmt parser for %s", expected)
+ }
+
+ for _, expected := range []string{"host", "user_identifier", "datetime", "request", "protocol", "status", "bytes", "referer"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1, "expected only json parser for %s", expected)
+ require.Equal(t, "json", parsers[0], "expected only json parser for %s", expected)
+ }
+
+ // multiple parsers for fields that exist in both streams
+ for _, expected := range []string{"method"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 2, "expected logfmt and json parser for %s", expected)
+ require.Contains(t, parsers, "logfmt", "expected logfmt parser for %s", expected)
+ require.Contains(t, parsers, "json", "expected json parser for %s", expected)
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("correctly applies _extracted for a single stream", func(t *testing.T) {
+ rulerLbls := `{cluster="us-east-1", namespace="mimir-dev", pod="mimir-ruler-nfb37", service_name="mimir-ruler", tenant="42", caller="inside-the-house"}`
+ rulerMetric, err := parser.ParseMetric(rulerLbls)
+ require.NoError(t, err)
+
+ rulerStream := push.Stream{
+ Labels: rulerLbls,
+ Entries: rulerLines,
+ Hash: rulerMetric.Hash(),
+ }
+
+ df := parseDetectedFields(uint32(15), logqlmodel.Streams([]push.Stream{rulerStream}))
+ for _, expected := range []string{"ts", "caller_extracted", "tenant_extracted", "level", "method", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "logfmt", parsers[0])
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+
+ t.Run("correctly applies _extracted for multiple streams", func(t *testing.T) {
+ rulerLbls := `{cluster="us-east-1", namespace="mimir-dev", pod="mimir-ruler-nfb37", service_name="mimir-ruler", tenant="42", caller="inside-the-house"}`
+ rulerMetric, err := parser.ParseMetric(rulerLbls)
+ require.NoError(t, err)
+
+ rulerStream := push.Stream{
+ Labels: rulerLbls,
+ Entries: rulerLines,
+ Hash: rulerMetric.Hash(),
+ }
+
+ nginxLbls := `{ cluster="eu-west-1", level="debug", namespace="gateway", pod="nginx-json-oghco", service_name="nginx-json", host="localhost"}`
+ nginxMetric, err := parser.ParseMetric(nginxLbls)
+ require.NoError(t, err)
+
+ nginxStream := push.Stream{
+ Labels: nginxLbls,
+ Entries: nginxJSONLines,
+ Hash: nginxMetric.Hash(),
+ }
+
+ df := parseDetectedFields(uint32(20), logqlmodel.Streams([]push.Stream{rulerStream, nginxStream}))
+ for _, expected := range []string{"ts", "caller_extracted", "tenant_extracted", "level", "duration", "msg"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1)
+ require.Equal(t, "logfmt", parsers[0])
+ }
+
+ for _, expected := range []string{"host_extracted", "user_identifier", "datetime", "request", "protocol", "status", "bytes", "referer"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 1, "expected only json parser for %s", expected)
+ require.Equal(t, "json", parsers[0], "expected only json parser for %s", expected)
+ }
+
+ // multiple parsers for fields that exist in both streams
+ for _, expected := range []string{"method"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 2, "expected logfmt and json parser for %s", expected)
+ require.Contains(t, parsers, "logfmt", "expected logfmt parser for %s", expected)
+ require.Contains(t, parsers, "json", "expected json parser for %s", expected)
+ }
+
+ // no parsers for structed metadata
+ for _, expected := range []string{"detected_level"} {
+ require.Contains(t, df, expected)
+ parsers := df[expected].parsers
+
+ require.Len(t, parsers, 0)
+ }
+ })
+ })
+
+ t.Run("handles level in all the places", func(t *testing.T) {
+ rulerLbls := `{cluster="us-east-1", namespace="mimir-dev", pod="mimir-ruler-nfb37", service_name="mimir-ruler", tenant="42", caller="inside-the-house", level="debug"}`
+ rulerMetric, err := parser.ParseMetric(rulerLbls)
+ require.NoError(t, err)
+
+ rulerStream := push.Stream{
+ Labels: rulerLbls,
+ Entries: []push.Entry{
+ {
+ Timestamp: now,
+ Line: "ts=2024-09-05T15:36:38.757788067Z caller=grpc_logging.go:66 tenant=2419 level=info method=/cortex.Ingester/Push duration=19.098s msg=gRPC",
+ StructuredMetadata: []push.LabelAdapter{
+ {
+ Name: "detected_level",
+ Value: "debug",
+ },
+ },
+ Parsed: []push.LabelAdapter{
+ {
+ Name: "level",
+ Value: "info",
+ },
+ },
+ },
+ },
+ Hash: rulerMetric.Hash(),
+ }
+
+ df := parseDetectedFields(uint32(20), logqlmodel.Streams([]push.Stream{rulerStream, rulerStream}))
+
+ detectedLevelField := df["detected_level"]
+ require.Len(t, detectedLevelField.parsers, 0)
+ require.Equal(t, uint64(1), detectedLevelField.sketch.Estimate())
+
+ levelField := df["level_extracted"]
+ require.Len(t, levelField.parsers, 1)
+ require.Contains(t, levelField.parsers, "logfmt")
+ require.Equal(t, uint64(1), levelField.sketch.Estimate())
+ })
+}
diff --git a/pkg/querier/queryrange/codec.go b/pkg/querier/queryrange/codec.go
index 97cc813637a7d..2c4ff98c92c89 100644
--- a/pkg/querier/queryrange/codec.go
+++ b/pkg/querier/queryrange/codec.go
@@ -327,7 +327,7 @@ func (*DetectedLabelsRequest) GetCachingOptions() (res queryrangebase.CachingOpt
func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (queryrangebase.Request, error) {
if err := r.ParseForm(); err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
disableCacheReq := false
@@ -340,13 +340,13 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case QueryRangeOp:
req, err := parseRangeQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return req, nil
case InstantQueryOp:
req, err := parseInstantQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
req.CachingOptions = queryrangebase.CachingOptions{
@@ -357,7 +357,7 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case SeriesOp:
req, err := loghttp.ParseAndValidateSeriesQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return &LokiSeriesRequest{
Match: req.Groups,
@@ -369,7 +369,7 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case LabelNamesOp:
req, err := loghttp.ParseLabelQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return &LabelRequest{
@@ -379,7 +379,7 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case IndexStatsOp:
req, err := loghttp.ParseIndexStatsQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.IndexStatsRequest{
@@ -390,7 +390,7 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case IndexShardsOp:
req, targetBytes, err := loghttp.ParseIndexShardsQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.ShardsRequest{
@@ -402,7 +402,7 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case VolumeOp:
req, err := loghttp.ParseVolumeInstantQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.VolumeRequest{
@@ -420,7 +420,7 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case VolumeRangeOp:
req, err := loghttp.ParseVolumeRangeQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.VolumeRequest{
@@ -438,12 +438,12 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case DetectedFieldsOp:
req, err := loghttp.ParseDetectedFieldsQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
_, err = syntax.ParseExpr(req.Query)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return &DetectedFieldsRequest{
@@ -453,20 +453,20 @@ func (Codec) DecodeRequest(_ context.Context, r *http.Request, _ []string) (quer
case PatternsQueryOp:
req, err := loghttp.ParsePatternsQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return req, nil
case DetectedLabelsOp:
req, err := loghttp.ParseDetectedLabelsQuery(r)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return &DetectedLabelsRequest{
DetectedLabelsRequest: *req,
path: r.URL.Path,
}, nil
default:
- return nil, httpgrpc.Errorf(http.StatusNotFound, fmt.Sprintf("unknown request path: %s", r.URL.Path))
+ return nil, httpgrpc.Errorf(http.StatusNotFound, "%s", fmt.Sprintf("unknown request path: %s", r.URL.Path))
}
}
@@ -477,7 +477,7 @@ var labelNamesRoutes = regexp.MustCompile(`/loki/api/v1/label/(?P[^/]+)/va
func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest) (queryrangebase.Request, context.Context, error) {
httpReq, err := http.NewRequest(r.Method, r.Url, io.NopCloser(bytes.NewBuffer(r.Body)))
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusInternalServerError, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusInternalServerError, "%s", err.Error())
}
httpReq = httpReq.WithContext(ctx)
httpReq.RequestURI = r.Url
@@ -524,28 +524,28 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
}
if err := httpReq.ParseForm(); err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
switch op := getOperation(httpReq.URL.Path); op {
case QueryRangeOp:
req, err := parseRangeQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return req, ctx, nil
case InstantQueryOp:
req, err := parseInstantQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return req, ctx, nil
case SeriesOp:
req, err := loghttp.ParseAndValidateSeriesQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return &LokiSeriesRequest{
Match: req.Groups,
@@ -557,7 +557,7 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
case LabelNamesOp:
req, err := loghttp.ParseLabelQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
if req.Name == "" {
@@ -574,7 +574,7 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
case IndexStatsOp:
req, err := loghttp.ParseIndexStatsQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.IndexStatsRequest{
@@ -585,7 +585,7 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
case IndexShardsOp:
req, targetBytes, err := loghttp.ParseIndexShardsQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.ShardsRequest{
@@ -598,7 +598,7 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
case VolumeOp:
req, err := loghttp.ParseVolumeInstantQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.VolumeRequest{
@@ -613,7 +613,7 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
case VolumeRangeOp:
req, err := loghttp.ParseVolumeRangeQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
from, through := util.RoundToMilliseconds(req.Start, req.End)
return &logproto.VolumeRequest{
@@ -628,7 +628,7 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
case DetectedFieldsOp:
req, err := loghttp.ParseDetectedFieldsQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return &DetectedFieldsRequest{
@@ -638,27 +638,27 @@ func (Codec) DecodeHTTPGrpcRequest(ctx context.Context, r *httpgrpc.HTTPRequest)
case PatternsQueryOp:
req, err := loghttp.ParsePatternsQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return req, ctx, nil
case DetectedLabelsOp:
req, err := loghttp.ParseDetectedLabelsQuery(httpReq)
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
return &DetectedLabelsRequest{
DetectedLabelsRequest: *req,
path: httpReq.URL.Path,
}, ctx, err
default:
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, fmt.Sprintf("unknown request path in HTTP gRPC decode: %s", r.Url))
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", fmt.Sprintf("unknown request path in HTTP gRPC decode: %s", r.Url))
}
}
// DecodeHTTPGrpcResponse decodes an httpgrp.HTTPResponse to queryrangebase.Response.
func (Codec) DecodeHTTPGrpcResponse(r *httpgrpc.HTTPResponse, req queryrangebase.Request) (queryrangebase.Response, error) {
if r.Code/100 != 2 {
- return nil, httpgrpc.Errorf(int(r.Code), string(r.Body))
+ return nil, httpgrpc.Errorf(int(r.Code), "%s", string(r.Body))
}
headers := make(http.Header)
@@ -989,7 +989,7 @@ func (c Codec) EncodeRequest(ctx context.Context, r queryrangebase.Request) (*ht
return req.WithContext(ctx), nil
default:
- return nil, httpgrpc.Errorf(http.StatusInternalServerError, fmt.Sprintf("invalid request format, got (%T)", r))
+ return nil, httpgrpc.Errorf(http.StatusInternalServerError, "%s", fmt.Sprintf("invalid request format, got (%T)", r))
}
}
@@ -1041,7 +1041,7 @@ type Buffer interface {
func (Codec) DecodeResponse(_ context.Context, r *http.Response, req queryrangebase.Request) (queryrangebase.Response, error) {
if r.StatusCode/100 != 2 {
body, _ := io.ReadAll(r.Body)
- return nil, httpgrpc.Errorf(r.StatusCode, string(body))
+ return nil, httpgrpc.Errorf(r.StatusCode, "%s", string(body))
}
if r.Header.Get("Content-Type") == ProtobufType {
@@ -1377,7 +1377,7 @@ func encodeResponseJSONTo(version loghttp.Version, res queryrangebase.Response,
return err
}
default:
- return httpgrpc.Errorf(http.StatusInternalServerError, fmt.Sprintf("invalid response format, got (%T)", res))
+ return httpgrpc.Errorf(http.StatusInternalServerError, "%s", fmt.Sprintf("invalid response format, got (%T)", res))
}
return nil
@@ -1389,7 +1389,7 @@ func encodeResponseProtobuf(ctx context.Context, res queryrangebase.Response) (*
p, err := QueryResponseWrap(res)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusInternalServerError, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusInternalServerError, "%s", err.Error())
}
buf, err := p.Marshal()
@@ -2130,7 +2130,7 @@ func NewEmptyResponse(r queryrangebase.Request) (queryrangebase.Response, error)
// range query can either be metrics or logs
expr, err := syntax.ParseExpr(req.Query)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
if _, ok := expr.(syntax.SampleExpr); ok {
return &LokiPromResponse{
diff --git a/pkg/querier/queryrange/index_stats_cache_test.go b/pkg/querier/queryrange/index_stats_cache_test.go
index 4d0f4124788a4..cd4a3cc1139c7 100644
--- a/pkg/querier/queryrange/index_stats_cache_test.go
+++ b/pkg/querier/queryrange/index_stats_cache_test.go
@@ -212,7 +212,7 @@ func TestIndexStatsCache_RecentData(t *testing.T) {
func indexStatsResultHandler(v *IndexStatsResponse) (*int, queryrangebase.Handler) {
calls := 0
- return &calls, queryrangebase.HandlerFunc(func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ return &calls, queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
calls++
return v, nil
})
diff --git a/pkg/querier/queryrange/limits.go b/pkg/querier/queryrange/limits.go
index 695c0d5346fa4..0e47e6b762bff 100644
--- a/pkg/querier/queryrange/limits.go
+++ b/pkg/querier/queryrange/limits.go
@@ -156,7 +156,7 @@ func (l limitsMiddleware) Do(ctx context.Context, r queryrangebase.Request) (que
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
// Clamp the time range based on the max query lookback.
@@ -352,7 +352,7 @@ func (q *querySizeLimiter) Do(ctx context.Context, r queryrangebase.Request) (qu
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
limitFuncCapture := func(id string) int { return q.limitFunc(ctx, id) }
@@ -495,7 +495,7 @@ func (rt limitedRoundTripper) Do(c context.Context, request queryrangebase.Reque
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
parallelism := MinWeightedParallelism(
@@ -508,7 +508,7 @@ func (rt limitedRoundTripper) Do(c context.Context, request queryrangebase.Reque
)
if parallelism < 1 {
- return nil, httpgrpc.Errorf(http.StatusTooManyRequests, ErrMaxQueryParalellism.Error())
+ return nil, httpgrpc.Errorf(http.StatusTooManyRequests, "%s", ErrMaxQueryParalellism.Error())
}
semWithTiming := NewSemaphoreWithTiming(int64(parallelism))
@@ -678,7 +678,7 @@ func MinWeightedParallelism(ctx context.Context, tenantIDs []string, configs []c
func validateMaxEntriesLimits(ctx context.Context, reqLimit uint32, limits Limits) error {
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
maxEntriesCapture := func(id string) int { return limits.MaxEntriesLimitPerQuery(ctx, id) }
diff --git a/pkg/querier/queryrange/limits_test.go b/pkg/querier/queryrange/limits_test.go
index 7cc2ad951b262..d2ad3385b26ac 100644
--- a/pkg/querier/queryrange/limits_test.go
+++ b/pkg/querier/queryrange/limits_test.go
@@ -248,7 +248,7 @@ func Test_MaxQueryParallelism(t *testing.T) {
_, _ = NewLimitedRoundTripper(h, fakeLimits{maxQueryParallelism: maxQueryParallelism},
testSchemas,
base.MiddlewareFunc(func(next base.Handler) base.Handler {
- return base.HandlerFunc(func(c context.Context, r base.Request) (base.Response, error) {
+ return base.HandlerFunc(func(c context.Context, _ base.Request) (base.Response, error) {
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
@@ -306,7 +306,7 @@ func Test_MaxQueryParallelismDisable(t *testing.T) {
_, err := NewLimitedRoundTripper(h, fakeLimits{maxQueryParallelism: maxQueryParallelism},
testSchemas,
base.MiddlewareFunc(func(next base.Handler) base.Handler {
- return base.HandlerFunc(func(c context.Context, r base.Request) (base.Response, error) {
+ return base.HandlerFunc(func(c context.Context, _ base.Request) (base.Response, error) {
for i := 0; i < 10; i++ {
go func() {
_, _ = next.Do(c, &LokiRequest{})
@@ -759,7 +759,7 @@ func Test_MaxQuerySize_MaxLookBackPeriod(t *testing.T) {
}
handler := tc.middleware.Wrap(
- base.HandlerFunc(func(_ context.Context, req base.Request) (base.Response, error) {
+ base.HandlerFunc(func(_ context.Context, _ base.Request) (base.Response, error) {
return &LokiResponse{}, nil
}),
)
diff --git a/pkg/querier/queryrange/log_result_cache.go b/pkg/querier/queryrange/log_result_cache.go
index da3dc58896a4f..842004a34b7b1 100644
--- a/pkg/querier/queryrange/log_result_cache.go
+++ b/pkg/querier/queryrange/log_result_cache.go
@@ -86,7 +86,7 @@ func (l *logResultCache) Do(ctx context.Context, req queryrangebase.Request) (qu
defer sp.Finish()
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
if l.shouldCache != nil && !l.shouldCache(ctx, req) {
diff --git a/pkg/querier/queryrange/marshal.go b/pkg/querier/queryrange/marshal.go
index b3920e00a6668..ab7a483890e03 100644
--- a/pkg/querier/queryrange/marshal.go
+++ b/pkg/querier/queryrange/marshal.go
@@ -332,7 +332,7 @@ func (Codec) QueryRequestUnwrap(ctx context.Context, req *QueryRequest) (queryra
if concrete.Instant.Plan == nil {
parsed, err := syntax.ParseExpr(concrete.Instant.GetQuery())
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
concrete.Instant.Plan = &plan.QueryPlan{
AST: parsed,
@@ -350,7 +350,7 @@ func (Codec) QueryRequestUnwrap(ctx context.Context, req *QueryRequest) (queryra
if concrete.Streams.Plan == nil {
parsed, err := syntax.ParseExpr(concrete.Streams.GetQuery())
if err != nil {
- return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, ctx, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
concrete.Streams.Plan = &plan.QueryPlan{
AST: parsed,
diff --git a/pkg/querier/queryrange/metrics.go b/pkg/querier/queryrange/metrics.go
index bd9ce6fa79bac..d3c949b9bb3af 100644
--- a/pkg/querier/queryrange/metrics.go
+++ b/pkg/querier/queryrange/metrics.go
@@ -9,6 +9,7 @@ import (
"github.com/grafana/loki/v3/pkg/logql"
"github.com/grafana/loki/v3/pkg/logql/syntax"
"github.com/grafana/loki/v3/pkg/querier/queryrange/queryrangebase"
+ v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
)
type Metrics struct {
@@ -46,15 +47,15 @@ func NewMetrics(registerer prometheus.Registerer, metricsNamespace string) *Metr
}
type QueryMetrics struct {
- receivedFilters prometheus.Histogram
+ receivedLabelFilters prometheus.Histogram
}
func NewMiddlewareQueryMetrics(registerer prometheus.Registerer, metricsNamespace string) *QueryMetrics {
return &QueryMetrics{
- receivedFilters: promauto.With(registerer).NewHistogram(prometheus.HistogramOpts{
+ receivedLabelFilters: promauto.With(registerer).NewHistogram(prometheus.HistogramOpts{
Namespace: metricsNamespace,
- Name: "query_frontend_query_filters",
- Help: "Number of filters per query.",
+ Name: "query_frontend_query_label_filters",
+ Help: "Number of label matcher expressions per query.",
Buckets: prometheus.ExponentialBuckets(1, 2, 9), // 1 -> 256
}),
}
@@ -87,8 +88,8 @@ func QueryMetricsMiddleware(metrics *QueryMetrics) queryrangebase.Middleware {
}
}
- filters := syntax.ExtractLineFilters(expr)
- metrics.receivedFilters.Observe(float64(len(filters)))
+ filters := v1.ExtractTestableLabelMatchers(expr)
+ metrics.receivedLabelFilters.Observe(float64(len(filters)))
return next.Do(ctx, req)
})
diff --git a/pkg/querier/queryrange/queryrangebase/middleware_test.go b/pkg/querier/queryrange/queryrangebase/middleware_test.go
index b5517046308a8..90d0b401aa0ff 100644
--- a/pkg/querier/queryrange/queryrangebase/middleware_test.go
+++ b/pkg/querier/queryrange/queryrangebase/middleware_test.go
@@ -18,7 +18,7 @@ func TestCacheGenNumberHeaderSetterMiddleware(t *testing.T) {
loader := &fakeGenNumberLoader{genNumber: "test-header-value"}
mware := CacheGenNumberHeaderSetterMiddleware(loader).
- Wrap(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {}))
+ Wrap(http.HandlerFunc(func(_ http.ResponseWriter, _ *http.Request) {}))
mware.ServeHTTP(w, req)
assert.Equal(t, w.Header().Get(ResultsCacheGenNumberHeaderName), "test-header-value")
diff --git a/pkg/querier/queryrange/queryrangebase/promql_test.go b/pkg/querier/queryrange/queryrangebase/promql_test.go
index e5c9e119d68c6..edc7a0e5829a0 100644
--- a/pkg/querier/queryrange/queryrangebase/promql_test.go
+++ b/pkg/querier/queryrange/queryrangebase/promql_test.go
@@ -570,7 +570,7 @@ func Test_FunctionParallelism(t *testing.T) {
}
-var shardAwareQueryable = storage.QueryableFunc(func(mint, maxt int64) (storage.Querier, error) {
+var shardAwareQueryable = storage.QueryableFunc(func(_, _ int64) (storage.Querier, error) {
return &testMatrix{
series: []*promql.StorageSeries{
newSeries(labels.Labels{{Name: "__name__", Value: "bar1"}, {Name: "baz", Value: "blip"}, {Name: "bar", Value: "blop"}, {Name: "foo", Value: "barr"}}, factor(5)),
diff --git a/pkg/querier/queryrange/queryrangebase/query_range.go b/pkg/querier/queryrange/queryrangebase/query_range.go
index bb85f1a191247..24338d60f585a 100644
--- a/pkg/querier/queryrange/queryrangebase/query_range.go
+++ b/pkg/querier/queryrange/queryrangebase/query_range.go
@@ -204,7 +204,7 @@ func (p prometheusCodec) MergeResponse(responses ...Response) (Response, error)
func (prometheusCodec) DecodeResponse(ctx context.Context, r *http.Response, _ Request) (Response, error) {
if r.StatusCode/100 != 2 {
body, _ := io.ReadAll(r.Body)
- return nil, httpgrpc.Errorf(r.StatusCode, string(body))
+ return nil, httpgrpc.Errorf(r.StatusCode, "%s", string(body))
}
sp, ctx := opentracing.StartSpanFromContext(ctx, "ParseQueryRangeResponse") //nolint:ineffassign,staticcheck
defer sp.Finish()
diff --git a/pkg/querier/queryrange/queryrangebase/results_cache_test.go b/pkg/querier/queryrange/queryrangebase/results_cache_test.go
index 1453808d14646..456bdf7704de5 100644
--- a/pkg/querier/queryrange/queryrangebase/results_cache_test.go
+++ b/pkg/querier/queryrange/queryrangebase/results_cache_test.go
@@ -419,7 +419,7 @@ func TestResultsCache(t *testing.T) {
PrometheusResponseExtractor{},
nil,
nil,
- func(_ context.Context, tenantIDs []string, r Request) int {
+ func(_ context.Context, _ []string, _ Request) int {
return mockLimits{}.MaxQueryParallelism(context.Background(), "fake")
},
false,
@@ -466,7 +466,7 @@ func TestResultsCacheRecent(t *testing.T) {
PrometheusResponseExtractor{},
nil,
nil,
- func(_ context.Context, tenantIDs []string, r Request) int {
+ func(_ context.Context, _ []string, _ Request) int {
return mockLimits{}.MaxQueryParallelism(context.Background(), "fake")
},
false,
@@ -577,7 +577,7 @@ func TestResultsCacheShouldCacheFunc(t *testing.T) {
PrometheusResponseExtractor{},
nil,
tc.shouldCache,
- func(_ context.Context, tenantIDs []string, r Request) int {
+ func(_ context.Context, _ []string, _ Request) int {
return mockLimits{}.MaxQueryParallelism(context.Background(), "fake")
},
false,
diff --git a/pkg/querier/queryrange/queryrangebase/retry_test.go b/pkg/querier/queryrange/queryrangebase/retry_test.go
index f3a33b45c9d1e..dec1d82b5e9f6 100644
--- a/pkg/querier/queryrange/queryrangebase/retry_test.go
+++ b/pkg/querier/queryrange/queryrangebase/retry_test.go
@@ -29,7 +29,7 @@ func TestRetry(t *testing.T) {
}{
{
name: "retry failures",
- handler: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
+ handler: HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
if try.Inc() == 5 {
return &PrometheusResponse{Status: "Hello World"}, nil
}
@@ -40,7 +40,7 @@ func TestRetry(t *testing.T) {
},
{
name: "don't retry 400s",
- handler: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
+ handler: HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
try.Inc()
return nil, httpgrpc.Errorf(http.StatusBadRequest, "Bad Request")
}),
@@ -49,7 +49,7 @@ func TestRetry(t *testing.T) {
},
{
name: "retry 500s",
- handler: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
+ handler: HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
try.Inc()
return nil, httpgrpc.Errorf(http.StatusInternalServerError, "Internal Server Error")
}),
@@ -58,7 +58,7 @@ func TestRetry(t *testing.T) {
},
{
name: "last error",
- handler: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
+ handler: HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
if try.Inc() == 5 {
return nil, httpgrpc.Errorf(http.StatusBadRequest, "Bad Request")
}
@@ -71,7 +71,7 @@ func TestRetry(t *testing.T) {
// Next set of tests validate the retry behavior when using protobuf encoding where the status does not include the details.
{
name: "protobuf enc don't retry 400s",
- handler: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
+ handler: HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
try.Inc()
return nil, status.New(codes.Code(http.StatusBadRequest), "Bad Request").Err()
}),
@@ -80,7 +80,7 @@ func TestRetry(t *testing.T) {
},
{
name: "protobuf enc retry 500s",
- handler: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
+ handler: HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
try.Inc()
return nil, status.New(codes.Code(http.StatusInternalServerError), "Internal Server Error").Err()
}),
@@ -111,7 +111,7 @@ func Test_RetryMiddlewareCancel(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
cancel()
_, err := NewRetryMiddleware(log.NewNopLogger(), 5, nil, constants.Loki).Wrap(
- HandlerFunc(func(c context.Context, r Request) (Response, error) {
+ HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
try.Inc()
return nil, ctx.Err()
}),
@@ -121,7 +121,7 @@ func Test_RetryMiddlewareCancel(t *testing.T) {
ctx, cancel = context.WithCancel(context.Background())
_, err = NewRetryMiddleware(log.NewNopLogger(), 5, nil, constants.Loki).Wrap(
- HandlerFunc(func(c context.Context, r Request) (Response, error) {
+ HandlerFunc(func(_ context.Context, _ Request) (Response, error) {
try.Inc()
cancel()
return nil, errors.New("failed")
diff --git a/pkg/querier/queryrange/querysharding.go b/pkg/querier/queryrange/querysharding.go
index bd5c26079636b..9fe578fad665a 100644
--- a/pkg/querier/queryrange/querysharding.go
+++ b/pkg/querier/queryrange/querysharding.go
@@ -124,7 +124,7 @@ type astMapperware struct {
func (ast *astMapperware) checkQuerySizeLimit(ctx context.Context, bytesPerShard uint64, notShardable bool) error {
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
maxQuerierBytesReadCapture := func(id string) int { return ast.limits.MaxQuerierBytesRead(ctx, id) }
@@ -323,7 +323,7 @@ type shardSplitter struct {
func (splitter *shardSplitter) Do(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
minShardingLookback := validation.SmallestPositiveNonZeroDurationPerTenant(tenantIDs, splitter.limits.MinShardingLookback)
if minShardingLookback == 0 {
@@ -456,7 +456,7 @@ func (ss *seriesShardingHandler) Do(ctx context.Context, r queryrangebase.Reques
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
requestResponses, err := queryrangebase.DoRequests(
ctx,
diff --git a/pkg/querier/queryrange/querysharding_test.go b/pkg/querier/queryrange/querysharding_test.go
index 809013ffb0021..49c915566a5b2 100644
--- a/pkg/querier/queryrange/querysharding_test.go
+++ b/pkg/querier/queryrange/querysharding_test.go
@@ -152,7 +152,7 @@ func Test_astMapper(t *testing.T) {
var lock sync.Mutex
called := 0
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
lock.Lock()
defer lock.Unlock()
resp := lokiResps[called]
@@ -264,7 +264,7 @@ func Test_astMapper_QuerySizeLimits(t *testing.T) {
} {
t.Run(tc.desc, func(t *testing.T) {
statsCalled := 0
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
if casted, ok := req.(*logproto.IndexStatsRequest); ok {
statsCalled++
@@ -341,7 +341,7 @@ func Test_astMapper_QuerySizeLimits(t *testing.T) {
func Test_ShardingByPass(t *testing.T) {
called := 0
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
called++
return nil, nil
})
@@ -413,7 +413,7 @@ func Test_hasShards(t *testing.T) {
// astmapper successful stream & prom conversion
func mockHandler(resp queryrangebase.Response, err error) queryrangebase.Handler {
- return queryrangebase.HandlerFunc(func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ return queryrangebase.HandlerFunc(func(ctx context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
if expired := ctx.Err(); expired != nil {
return nil, expired
}
@@ -445,7 +445,7 @@ func Test_InstantSharding(t *testing.T) {
nil,
[]string{},
)
- response, err := sharding.Wrap(queryrangebase.HandlerFunc(func(c context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ response, err := sharding.Wrap(queryrangebase.HandlerFunc(func(_ context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
lock.Lock()
defer lock.Unlock()
called++
@@ -508,7 +508,7 @@ func Test_SeriesShardingHandler(t *testing.T) {
)
ctx := user.InjectOrgID(context.Background(), "1")
- response, err := sharding.Wrap(queryrangebase.HandlerFunc(func(c context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ response, err := sharding.Wrap(queryrangebase.HandlerFunc(func(_ context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
req, ok := r.(*LokiSeriesRequest)
if !ok {
return nil, errors.New("not a series call")
@@ -711,7 +711,7 @@ func TestShardingAcrossConfigs_ASTMapper(t *testing.T) {
var lock sync.Mutex
called := 0
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
lock.Lock()
defer lock.Unlock()
called++
@@ -814,7 +814,7 @@ func TestShardingAcrossConfigs_SeriesSharding(t *testing.T) {
DefaultCodec,
)
- _, err := mware.Wrap(queryrangebase.HandlerFunc(func(c context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ _, err := mware.Wrap(queryrangebase.HandlerFunc(func(_ context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
_, ok := r.(*LokiSeriesRequest)
if !ok {
return nil, errors.New("not a series call")
@@ -839,7 +839,7 @@ func Test_ASTMapper_MaxLookBackPeriod(t *testing.T) {
engineOpts := testEngineOpts
engineOpts.MaxLookBackPeriod = 1 * time.Hour
- queryHandler := queryrangebase.HandlerFunc(func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ queryHandler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &LokiResponse{}, nil
})
diff --git a/pkg/querier/queryrange/roundtrip.go b/pkg/querier/queryrange/roundtrip.go
index f553c61dafb67..3b8031cb5e1ef 100644
--- a/pkg/querier/queryrange/roundtrip.go
+++ b/pkg/querier/queryrange/roundtrip.go
@@ -388,17 +388,17 @@ func (r roundTripper) Do(ctx context.Context, req base.Request) (base.Response,
for _, g := range groups {
if err := validateMatchers(ctx, r.limits, g.Matchers); err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
}
return r.metric.Do(ctx, req)
case syntax.LogSelectorExpr:
if err := validateMaxEntriesLimits(ctx, op.Limit, r.limits); err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
if err := validateMatchers(ctx, r.limits, e.Matchers()); err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
// Some queries we don't want to parallelize as aggressively, like limited queries and `datasample` queries
diff --git a/pkg/querier/queryrange/roundtrip_test.go b/pkg/querier/queryrange/roundtrip_test.go
index 27d3ff781b0b5..2f3b5fcd92ee3 100644
--- a/pkg/querier/queryrange/roundtrip_test.go
+++ b/pkg/querier/queryrange/roundtrip_test.go
@@ -1086,7 +1086,7 @@ func TestTripperware_RequiredLabels(t *testing.T) {
_, err = tpw.Wrap(h).Do(ctx, lreq)
if test.expectedError != "" {
- require.Equal(t, httpgrpc.Errorf(http.StatusBadRequest, test.expectedError), err)
+ require.Equal(t, httpgrpc.Errorf(http.StatusBadRequest, "%s", test.expectedError), err)
} else {
require.NoError(t, err)
}
@@ -1194,7 +1194,7 @@ func TestTripperware_RequiredNumberLabels(t *testing.T) {
_, err = tpw.Wrap(h).Do(ctx, lreq)
if tc.expectedError != noErr {
- require.Equal(t, httpgrpc.Errorf(http.StatusBadRequest, tc.expectedError), err)
+ require.Equal(t, httpgrpc.Errorf(http.StatusBadRequest, "%s", tc.expectedError), err)
} else {
require.NoError(t, err)
}
@@ -1543,7 +1543,7 @@ func (i ingesterQueryOpts) QueryIngestersWithin() time.Duration {
func counter() (*int, base.Handler) {
count := 0
var lock sync.Mutex
- return &count, base.HandlerFunc(func(ctx context.Context, r base.Request) (base.Response, error) {
+ return &count, base.HandlerFunc(func(_ context.Context, _ base.Request) (base.Response, error) {
lock.Lock()
defer lock.Unlock()
count++
@@ -1554,7 +1554,7 @@ func counter() (*int, base.Handler) {
func counterWithError(err error) (*int, base.Handler) {
count := 0
var lock sync.Mutex
- return &count, base.HandlerFunc(func(ctx context.Context, r base.Request) (base.Response, error) {
+ return &count, base.HandlerFunc(func(_ context.Context, _ base.Request) (base.Response, error) {
lock.Lock()
defer lock.Unlock()
count++
@@ -1565,7 +1565,7 @@ func counterWithError(err error) (*int, base.Handler) {
func promqlResult(v parser.Value) (*int, base.Handler) {
count := 0
var lock sync.Mutex
- return &count, base.HandlerFunc(func(ctx context.Context, r base.Request) (base.Response, error) {
+ return &count, base.HandlerFunc(func(_ context.Context, r base.Request) (base.Response, error) {
lock.Lock()
defer lock.Unlock()
count++
@@ -1581,7 +1581,7 @@ func promqlResult(v parser.Value) (*int, base.Handler) {
func seriesResult(v logproto.SeriesResponse) (*int, base.Handler) {
count := 0
var lock sync.Mutex
- return &count, base.HandlerFunc(func(ctx context.Context, r base.Request) (base.Response, error) {
+ return &count, base.HandlerFunc(func(_ context.Context, _ base.Request) (base.Response, error) {
lock.Lock()
defer lock.Unlock()
count++
diff --git a/pkg/querier/queryrange/serialize_test.go b/pkg/querier/queryrange/serialize_test.go
index 0bd6c36aa4bd6..f37face6e9351 100644
--- a/pkg/querier/queryrange/serialize_test.go
+++ b/pkg/querier/queryrange/serialize_test.go
@@ -108,7 +108,7 @@ func TestResponseFormat(t *testing.T) {
},
} {
t.Run(fmt.Sprintf("%s returns the expected format", tc.url), func(t *testing.T) {
- handler := queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ handler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return tc.response, nil
})
httpHandler := NewSerializeHTTPHandler(handler, DefaultCodec)
diff --git a/pkg/querier/queryrange/split_by_interval.go b/pkg/querier/queryrange/split_by_interval.go
index 040befd26de93..701a045270d0b 100644
--- a/pkg/querier/queryrange/split_by_interval.go
+++ b/pkg/querier/queryrange/split_by_interval.go
@@ -179,7 +179,7 @@ func (h *splitByInterval) loop(ctx context.Context, ch <-chan *lokiResult, next
func (h *splitByInterval) Do(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
var interval time.Duration
diff --git a/pkg/querier/queryrange/split_by_interval_test.go b/pkg/querier/queryrange/split_by_interval_test.go
index c74ec05c252c7..de1b19be10450 100644
--- a/pkg/querier/queryrange/split_by_interval_test.go
+++ b/pkg/querier/queryrange/split_by_interval_test.go
@@ -1550,7 +1550,7 @@ func Test_splitByInterval_Do(t *testing.T) {
func Test_series_splitByInterval_Do(t *testing.T) {
ctx := user.InjectOrgID(context.Background(), "1")
- next := queryrangebase.HandlerFunc(func(_ context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ next := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &LokiSeriesResponse{
Status: "success",
Version: uint32(loghttp.VersionV1),
@@ -1653,7 +1653,7 @@ func Test_seriesvolume_splitByInterval_Do(t *testing.T) {
from := model.TimeFromUnixNano(start.UnixNano())
through := model.TimeFromUnixNano(end.UnixNano())
- next := queryrangebase.HandlerFunc(func(_ context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ next := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &VolumeResponse{
Response: &logproto.VolumeResponse{
Volumes: []logproto.Volume{
@@ -1691,7 +1691,7 @@ func Test_seriesvolume_splitByInterval_Do(t *testing.T) {
t.Run("volumes with limits", func(t *testing.T) {
from := model.TimeFromUnixNano(start.UnixNano())
through := model.TimeFromUnixNano(end.UnixNano())
- next := queryrangebase.HandlerFunc(func(_ context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ next := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &VolumeResponse{
Response: &logproto.VolumeResponse{
Volumes: []logproto.Volume{
@@ -1733,7 +1733,7 @@ func Test_seriesvolume_splitByInterval_Do(t *testing.T) {
t.Run("volumes with a query split by of 0", func(t *testing.T) {
from := model.TimeFromUnixNano(start.UnixNano())
through := model.TimeFromUnixNano(end.UnixNano())
- next := queryrangebase.HandlerFunc(func(_ context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ next := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &VolumeResponse{
Response: &logproto.VolumeResponse{
Volumes: []logproto.Volume{
diff --git a/pkg/querier/queryrange/split_by_range.go b/pkg/querier/queryrange/split_by_range.go
index 380466d04408b..98ac6f6b34d13 100644
--- a/pkg/querier/queryrange/split_by_range.go
+++ b/pkg/querier/queryrange/split_by_range.go
@@ -59,7 +59,7 @@ func (s *splitByRange) Do(ctx context.Context, request queryrangebase.Request) (
tenants, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
interval := validation.SmallestPositiveNonZeroDurationPerTenant(tenants, s.limits.InstantMetricQuerySplitDuration)
diff --git a/pkg/querier/queryrange/split_by_range_test.go b/pkg/querier/queryrange/split_by_range_test.go
index 0f61c3c276b1f..e3c30c66cc54c 100644
--- a/pkg/querier/queryrange/split_by_range_test.go
+++ b/pkg/querier/queryrange/split_by_range_test.go
@@ -275,7 +275,7 @@ func Test_RangeVectorSplitAlign(t *testing.T) {
}
resp, err := srm.Wrap(queryrangebase.HandlerFunc(
- func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
// req should match with one of the subqueries.
ts := req.(*LokiInstantRequest).TimeTs
subq, ok := byTimeTs[ts.UnixNano()]
@@ -411,7 +411,7 @@ func Test_RangeVectorSplit(t *testing.T) {
tc := tc
t.Run(tc.in.GetQuery(), func(t *testing.T) {
resp, err := srm.Wrap(queryrangebase.HandlerFunc(
- func(ctx context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
+ func(_ context.Context, req queryrangebase.Request) (queryrangebase.Response, error) {
// Assert subquery request
for _, reqResp := range tc.subQueries {
if req.GetQuery() == reqResp.Request.GetQuery() {
@@ -421,7 +421,7 @@ func Test_RangeVectorSplit(t *testing.T) {
}
}
- return nil, fmt.Errorf("subquery request '" + req.GetQuery() + "' not found")
+ return nil, fmt.Errorf("%s", "subquery request '"+req.GetQuery()+"' not found")
})).Do(ctx, tc.in)
require.NoError(t, err)
require.Equal(t, tc.expected, resp.(*LokiPromResponse).Response)
diff --git a/pkg/querier/queryrange/stats_test.go b/pkg/querier/queryrange/stats_test.go
index 8c48a9ece8538..c2b6b3755bda4 100644
--- a/pkg/querier/queryrange/stats_test.go
+++ b/pkg/querier/queryrange/stats_test.go
@@ -24,7 +24,7 @@ func TestStatsCollectorMiddleware(t *testing.T) {
now = time.Now()
)
ctx := context.WithValue(context.Background(), ctxKey, data)
- _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return nil, nil
})).Do(ctx, &LokiRequest{
Query: "foo",
@@ -37,7 +37,7 @@ func TestStatsCollectorMiddleware(t *testing.T) {
// no context.
data = &queryData{}
- _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return nil, nil
})).Do(context.Background(), &LokiRequest{
Query: "foo",
@@ -48,7 +48,7 @@ func TestStatsCollectorMiddleware(t *testing.T) {
// stats
data = &queryData{}
ctx = context.WithValue(context.Background(), ctxKey, data)
- _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &LokiPromResponse{
Statistics: stats.Result{
Ingester: stats.Ingester{
@@ -69,7 +69,7 @@ func TestStatsCollectorMiddleware(t *testing.T) {
// Rationale being, in that case returned `response` will be nil and there won't be any `response.statistics` to collect.
data = &queryData{}
ctx = context.WithValue(context.Background(), ctxKey, data)
- _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ _, _ = StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return nil, errors.New("request timedout")
})).Do(ctx, &LokiRequest{
Query: "foo",
@@ -86,17 +86,17 @@ func Test_StatsHTTP(t *testing.T) {
}{
{
"should not record metric if nothing is recorded",
- http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
data := r.Context().Value(ctxKey).(*queryData)
data.recorded = false
}),
- func(t *testing.T, data *queryData) {
+ func(t *testing.T, _ *queryData) {
t.Fail()
},
},
{
"empty statistics success",
- http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
data := r.Context().Value(ctxKey).(*queryData)
data.recorded = true
data.params, _ = ParamsFromRequest(&LokiRequest{
@@ -189,7 +189,7 @@ func Test_StatsHTTP(t *testing.T) {
}
func Test_StatsUpdateResult(t *testing.T) {
- resp, err := StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(c context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ resp, err := StatsCollectorMiddleware().Wrap(queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
time.Sleep(20 * time.Millisecond)
return &LokiResponse{}, nil
})).Do(context.Background(), &LokiRequest{
diff --git a/pkg/querier/queryrange/views.go b/pkg/querier/queryrange/views.go
index b34020934c1c5..2757c76b7708d 100644
--- a/pkg/querier/queryrange/views.go
+++ b/pkg/querier/queryrange/views.go
@@ -123,7 +123,7 @@ func (v *SeriesIdentifierView) ForEachLabel(fn func(string, string) error) error
return false, err
}
- err = molecule.MessageEach(codec.NewBuffer(entry), func(fieldNum int32, labelOrKey molecule.Value) (bool, error) {
+ err = molecule.MessageEach(codec.NewBuffer(entry), func(_ int32, labelOrKey molecule.Value) (bool, error) {
s, err := labelOrKey.AsStringUnsafe()
if err != nil {
return false, err
diff --git a/pkg/querier/queryrange/views_test.go b/pkg/querier/queryrange/views_test.go
index 7d1938dacb775..ead7981a7aee3 100644
--- a/pkg/querier/queryrange/views_test.go
+++ b/pkg/querier/queryrange/views_test.go
@@ -185,7 +185,7 @@ func TestMergedViewDeduplication(t *testing.T) {
}
count := 0
- err := view.ForEachUniqueSeries(func(s *SeriesIdentifierView) error {
+ err := view.ForEachUniqueSeries(func(_ *SeriesIdentifierView) error {
count++
return nil
})
diff --git a/pkg/querier/queryrange/volume_test.go b/pkg/querier/queryrange/volume_test.go
index 7327a58e15d9e..d4d2a9febe33d 100644
--- a/pkg/querier/queryrange/volume_test.go
+++ b/pkg/querier/queryrange/volume_test.go
@@ -258,7 +258,7 @@ func Test_toPrometheusResponse(t *testing.T) {
func Test_VolumeMiddleware(t *testing.T) {
makeVolumeRequest := func(req *logproto.VolumeRequest) *queryrangebase.PrometheusResponse {
- nextHandler := queryrangebase.HandlerFunc(func(ctx context.Context, r queryrangebase.Request) (queryrangebase.Response, error) {
+ nextHandler := queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (queryrangebase.Response, error) {
return &VolumeResponse{
Response: &logproto.VolumeResponse{
Volumes: []logproto.Volume{
diff --git a/pkg/querier/tail.go b/pkg/querier/tail.go
index 87dba6bae09ae..0d9495daf6e46 100644
--- a/pkg/querier/tail.go
+++ b/pkg/querier/tail.go
@@ -38,7 +38,7 @@ const (
// Tailer manages complete lifecycle of a tail request
type Tailer struct {
// openStreamIterator is for streams already open
- openStreamIterator iter.HeapIterator
+ openStreamIterator iter.MergeEntryIterator
streamMtx sync.Mutex // for synchronizing access to openStreamIterator
currEntry logproto.Entry
diff --git a/pkg/querier/tail_test.go b/pkg/querier/tail_test.go
index 4867574e5792c..3be5e5f053dc9 100644
--- a/pkg/querier/tail_test.go
+++ b/pkg/querier/tail_test.go
@@ -33,7 +33,7 @@ func TestTailer(t *testing.T) {
"tail logs from historic entries only (no tail clients provided)": {
historicEntries: mockStreamIterator(1, 2),
tailClient: nil,
- tester: func(t *testing.T, tailer *Tailer, tailClient *tailClientMock) {
+ tester: func(t *testing.T, tailer *Tailer, _ *tailClientMock) {
responses, err := readFromTailer(tailer, 2)
require.NoError(t, err)
@@ -82,7 +82,7 @@ func TestTailer(t *testing.T) {
"honor max entries per tail response": {
historicEntries: mockStreamIterator(1, maxEntriesPerTailResponse+1),
tailClient: nil,
- tester: func(t *testing.T, tailer *Tailer, tailClient *tailClientMock) {
+ tester: func(t *testing.T, tailer *Tailer, _ *tailClientMock) {
responses, err := readFromTailer(tailer, maxEntriesPerTailResponse+1)
require.NoError(t, err)
diff --git a/pkg/querier/worker/scheduler_processor_test.go b/pkg/querier/worker/scheduler_processor_test.go
index 264d5a1769fd1..1634da77c4afc 100644
--- a/pkg/querier/worker/scheduler_processor_test.go
+++ b/pkg/querier/worker/scheduler_processor_test.go
@@ -79,7 +79,7 @@ func TestSchedulerProcessor_processQueriesOnSingleStream(t *testing.T) {
workerCtx, workerCancel := context.WithCancel(context.Background())
- requestHandler.On("Do", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
+ requestHandler.On("Do", mock.Anything, mock.Anything).Run(func(_ mock.Arguments) {
// Cancel the worker context while the query execution is in progress.
workerCancel()
diff --git a/pkg/queue/queue_test.go b/pkg/queue/queue_test.go
index b51ccf7cc2a06..9b4aca8481c72 100644
--- a/pkg/queue/queue_test.go
+++ b/pkg/queue/queue_test.go
@@ -31,7 +31,7 @@ func BenchmarkGetNextRequest(b *testing.B) {
}{
{
"without sub-queues",
- func(i int) []string { return nil },
+ func(_ int) []string { return nil },
},
{
"with 1 level of sub-queues",
@@ -554,7 +554,7 @@ func assertChanReceived(t *testing.T, c chan struct{}, timeout time.Duration, ms
select {
case <-c:
case <-time.After(timeout):
- t.Fatalf(msg)
+ t.Fatal(msg)
}
}
diff --git a/pkg/ruler/base/compat_test.go b/pkg/ruler/base/compat_test.go
index e37ef6646811a..1dd65282ddf75 100644
--- a/pkg/ruler/base/compat_test.go
+++ b/pkg/ruler/base/compat_test.go
@@ -197,7 +197,7 @@ func TestMetricsQueryFuncErrors(t *testing.T) {
queries := prometheus.NewCounter(prometheus.CounterOpts{})
failures := prometheus.NewCounter(prometheus.CounterOpts{})
- mockFunc := func(ctx context.Context, q string, t time.Time) (promql.Vector, error) {
+ mockFunc := func(_ context.Context, _ string, _ time.Time) (promql.Vector, error) {
return promql.Vector{}, WrapQueryableErrors(tc.returnedError)
}
qf := MetricsQueryFunc(mockFunc, queries, failures)
@@ -214,7 +214,7 @@ func TestMetricsQueryFuncErrors(t *testing.T) {
func TestRecordAndReportRuleQueryMetrics(t *testing.T) {
queryTime := prometheus.NewCounterVec(prometheus.CounterOpts{}, []string{"user"})
- mockFunc := func(ctx context.Context, q string, t time.Time) (promql.Vector, error) {
+ mockFunc := func(_ context.Context, _ string, _ time.Time) (promql.Vector, error) {
time.Sleep(1 * time.Second)
return promql.Vector{}, nil
}
diff --git a/pkg/ruler/base/ruler.go b/pkg/ruler/base/ruler.go
index 7255142829c4d..2e6c74c759dfb 100644
--- a/pkg/ruler/base/ruler.go
+++ b/pkg/ruler/base/ruler.go
@@ -417,7 +417,7 @@ func grafanaLinkForExpression(expr, datasourceUID string) string {
//
// Copied from Prometheus's main.go.
func SendAlerts(n sender, externalURL, datasourceUID string) promRules.NotifyFunc {
- return func(ctx context.Context, expr string, alerts ...*promRules.Alert) {
+ return func(_ context.Context, expr string, alerts ...*promRules.Alert) {
var res []*notifier.Alert
for _, alert := range alerts {
diff --git a/pkg/ruler/base/ruler_test.go b/pkg/ruler/base/ruler_test.go
index c80ad29cb1ad0..b180c559d8d3b 100644
--- a/pkg/ruler/base/ruler_test.go
+++ b/pkg/ruler/base/ruler_test.go
@@ -108,12 +108,12 @@ func (r ruleLimits) RulerAlertManagerConfig(tenantID string) *config.AlertManage
func testQueryableFunc(q storage.Querier) storage.QueryableFunc {
if q != nil {
- return func(mint, maxt int64) (storage.Querier, error) {
+ return func(_, _ int64) (storage.Querier, error) {
return q, nil
}
}
- return func(mint, maxt int64) (storage.Querier, error) {
+ return func(_, _ int64) (storage.Querier, error) {
return storage.NoopQuerier(), nil
}
}
@@ -245,7 +245,7 @@ func TestNotifierSendsUserIDHeader(t *testing.T) {
// We do expect 1 API call for the user create with the getOrCreateNotifier()
wg.Add(1)
- ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ ts := httptest.NewServer(http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
userID, _, err := tenant.ExtractTenantIDFromHTTPRequest(r)
assert.NoError(t, err)
assert.Equal(t, userID, "1")
@@ -290,7 +290,7 @@ func TestMultiTenantsNotifierSendsUserIDHeader(t *testing.T) {
// We do expect 2 API calls for the users create with the getOrCreateNotifier()
wg.Add(2)
- ts1 := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ ts1 := httptest.NewServer(http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
userID, _, err := tenant.ExtractTenantIDFromHTTPRequest(r)
assert.NoError(t, err)
assert.Equal(t, userID, tenant1)
@@ -298,7 +298,7 @@ func TestMultiTenantsNotifierSendsUserIDHeader(t *testing.T) {
}))
defer ts1.Close()
- ts2 := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ ts2 := httptest.NewServer(http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
userID, _, err := tenant.ExtractTenantIDFromHTTPRequest(r)
assert.NoError(t, err)
assert.Equal(t, userID, tenant2)
@@ -1836,7 +1836,7 @@ func TestRecoverAlertsPostOutage(t *testing.T) {
defer m.Unregister()
// create a ruler but don't start it. instead, we'll evaluate the rule groups manually.
r := buildRuler(t, rulerCfg, &fakeQuerier{
- fn: func(sortSeries bool, hints *storage.SelectHints, matchers ...*labels.Matcher) storage.SeriesSet {
+ fn: func(_ bool, _ *storage.SelectHints, _ ...*labels.Matcher) storage.SeriesSet {
return series.NewConcreteSeriesSet([]storage.Series{
series.NewConcreteSeries(
labels.Labels{
@@ -1978,7 +1978,7 @@ func TestRuleGroupAlertsAndSeriesLimit(t *testing.T) {
defer m.Unregister()
r := buildRuler(tt, rulerCfg, &fakeQuerier{
- fn: func(sortSeries bool, hints *storage.SelectHints, matchers ...*labels.Matcher) storage.SeriesSet {
+ fn: func(_ bool, _ *storage.SelectHints, _ ...*labels.Matcher) storage.SeriesSet {
return series.NewConcreteSeriesSet([]storage.Series{
series.NewConcreteSeries(
labels.Labels{
diff --git a/pkg/ruler/compat.go b/pkg/ruler/compat.go
index 3f413a13b8c5f..838f08cb7f227 100644
--- a/pkg/ruler/compat.go
+++ b/pkg/ruler/compat.go
@@ -251,9 +251,9 @@ func validateRuleNode(r *rulefmt.RuleNode, groupName string) error {
return errors.Errorf("field 'expr' must be set in rule")
} else if _, err := syntax.ParseExpr(r.Expr.Value); err != nil {
if r.Record.Value != "" {
- return errors.Wrapf(err, fmt.Sprintf("could not parse expression for record '%s' in group '%s'", r.Record.Value, groupName))
+ return errors.Wrapf(err, "could not parse expression for record '%s' in group '%s'", r.Record.Value, groupName)
}
- return errors.Wrapf(err, fmt.Sprintf("could not parse expression for alert '%s' in group '%s'", r.Alert.Value, groupName))
+ return errors.Wrapf(err, "could not parse expression for alert '%s' in group '%s'", r.Alert.Value, groupName)
}
if r.Record.Value != "" {
diff --git a/pkg/ruler/evaluator_remote_test.go b/pkg/ruler/evaluator_remote_test.go
index 0b11978a7f7ed..3d76d57640dab 100644
--- a/pkg/ruler/evaluator_remote_test.go
+++ b/pkg/ruler/evaluator_remote_test.go
@@ -45,7 +45,7 @@ func TestRemoteEvalQueryTimeout(t *testing.T) {
require.NoError(t, err)
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
// sleep for slightly longer than the timeout
time.Sleep(timeout + (100 * time.Millisecond))
return &httpgrpc.HTTPResponse{
@@ -79,7 +79,7 @@ func TestRemoteEvalMaxResponseSize(t *testing.T) {
require.NoError(t, err)
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
// generate a response of random bytes that's just too big for the max response size
var resp = make([]byte, exceededSize)
_, err = rand.Read(resp)
@@ -116,7 +116,7 @@ func TestRemoteEvalScalar(t *testing.T) {
)
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
// this is somewhat bleeding the abstraction, but it's more idiomatic/readable than constructing
// the expected JSON response by hand
resp := loghttp.QueryResponse{
@@ -162,7 +162,7 @@ func TestRemoteEvalEmptyScalarResponse(t *testing.T) {
require.NoError(t, err)
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
// this is somewhat bleeding the abstraction, but it's more idiomatic/readable than constructing
// the expected JSON response by hand
resp := loghttp.QueryResponse{
@@ -205,7 +205,7 @@ func TestRemoteEvalVectorResponse(t *testing.T) {
value := 35891
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
// this is somewhat bleeding the abstraction, but it's more idiomatic/readable than constructing
// the expected JSON response by hand
resp := loghttp.QueryResponse{
@@ -267,7 +267,7 @@ func TestRemoteEvalEmptyVectorResponse(t *testing.T) {
require.NoError(t, err)
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
// this is somewhat bleeding the abstraction, but it's more idiomatic/readable than constructing
// the expected JSON response by hand
resp := loghttp.QueryResponse{
@@ -307,7 +307,7 @@ func TestRemoteEvalErrorResponse(t *testing.T) {
var respErr = fmt.Errorf("some error occurred")
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
return nil, respErr
},
}
@@ -331,7 +331,7 @@ func TestRemoteEvalNon2xxResponse(t *testing.T) {
const httpErr = http.StatusInternalServerError
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
return &httpgrpc.HTTPResponse{
Code: httpErr,
}, nil
@@ -354,7 +354,7 @@ func TestRemoteEvalNonJSONResponse(t *testing.T) {
require.NoError(t, err)
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
return &httpgrpc.HTTPResponse{
Code: http.StatusOK,
Body: []byte("this is not json"),
@@ -378,7 +378,7 @@ func TestRemoteEvalUnsupportedResultResponse(t *testing.T) {
require.NoError(t, err)
cli := mockClient{
- handleFn: func(ctx context.Context, in *httpgrpc.HTTPRequest, opts ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
+ handleFn: func(_ context.Context, _ *httpgrpc.HTTPRequest, _ ...grpc.CallOption) (*httpgrpc.HTTPResponse, error) {
// this is somewhat bleeding the abstraction, but it's more idiomatic/readable than constructing
// the expected JSON response by hand
resp := loghttp.QueryResponse{
diff --git a/pkg/ruler/memstore_test.go b/pkg/ruler/memstore_test.go
index 3c26a0f71506a..94b6adfd598d3 100644
--- a/pkg/ruler/memstore_test.go
+++ b/pkg/ruler/memstore_test.go
@@ -48,7 +48,7 @@ func TestSelectRestores(t *testing.T) {
}
callCount := 0
- fn := rules.QueryFunc(func(ctx context.Context, qs string, t time.Time) (promql.Vector, error) {
+ fn := rules.QueryFunc(func(_ context.Context, _ string, t time.Time) (promql.Vector, error) {
callCount++
return promql.Vector{
promql.Sample{
@@ -138,7 +138,7 @@ func TestMemstoreStart(_ *testing.T) {
},
}
- fn := rules.QueryFunc(func(ctx context.Context, qs string, t time.Time) (promql.Vector, error) {
+ fn := rules.QueryFunc(func(_ context.Context, _ string, _ time.Time) (promql.Vector, error) {
return nil, nil
})
@@ -171,7 +171,7 @@ func TestMemstoreBlocks(t *testing.T) {
},
}
- fn := rules.QueryFunc(func(ctx context.Context, qs string, t time.Time) (promql.Vector, error) {
+ fn := rules.QueryFunc(func(_ context.Context, _ string, _ time.Time) (promql.Vector, error) {
return nil, nil
})
diff --git a/pkg/ruler/rulestore/config_test.go b/pkg/ruler/rulestore/config_test.go
index bde42e48c0703..2929c25e94ebf 100644
--- a/pkg/ruler/rulestore/config_test.go
+++ b/pkg/ruler/rulestore/config_test.go
@@ -19,7 +19,7 @@ func TestIsDefaults(t *testing.T) {
expected: true,
},
"should return false if the config contains zero values": {
- setup: func(cfg *Config) {},
+ setup: func(_ *Config) {},
expected: false,
},
"should return false if the config contains default values and some overrides": {
diff --git a/pkg/ruler/storage/cleaner/cleaner_test.go b/pkg/ruler/storage/cleaner/cleaner_test.go
index 5d5147eb0ada7..67b084da2bba6 100644
--- a/pkg/ruler/storage/cleaner/cleaner_test.go
+++ b/pkg/ruler/storage/cleaner/cleaner_test.go
@@ -50,7 +50,7 @@ func TestWALCleaner_getAbandonedStorageBeforeCutoff(t *testing.T) {
now := time.Now()
cleaner := newCleaner(walRoot, Config{})
- cleaner.walLastModified = func(path string) (time.Time, error) {
+ cleaner.walLastModified = func(_ string) (time.Time, error) {
return now, nil
}
@@ -76,7 +76,7 @@ func TestWALCleaner_getAbandonedStorageAfterCutoff(t *testing.T) {
MinAge: 5 * time.Minute,
})
- cleaner.walLastModified = func(path string) (time.Time, error) {
+ cleaner.walLastModified = func(_ string) (time.Time, error) {
return now.Add(-30 * time.Minute), nil
}
@@ -105,7 +105,7 @@ func TestWALCleaner_cleanup(t *testing.T) {
})
cleaner.instanceManager = manager
- cleaner.walLastModified = func(path string) (time.Time, error) {
+ cleaner.walLastModified = func(_ string) (time.Time, error) {
return now.Add(-30 * time.Minute), nil
}
diff --git a/pkg/ruler/storage/instance/instance.go b/pkg/ruler/storage/instance/instance.go
index 15eb356375f52..ddd017664c976 100644
--- a/pkg/ruler/storage/instance/instance.go
+++ b/pkg/ruler/storage/instance/instance.go
@@ -262,7 +262,7 @@ func (i *Instance) Run(ctx context.Context) error {
level.Info(i.logger).Log("msg", "truncation loop stopped")
return nil
},
- func(err error) {
+ func(_ error) {
level.Info(i.logger).Log("msg", "stopping truncation loop...")
contextCancel()
},
diff --git a/pkg/ruler/storage/instance/manager_test.go b/pkg/ruler/storage/instance/manager_test.go
index c2321bb81e1de..2cf4e0a977984 100644
--- a/pkg/ruler/storage/instance/manager_test.go
+++ b/pkg/ruler/storage/instance/manager_test.go
@@ -24,7 +24,7 @@ func TestBasicManager_ApplyConfig(t *testing.T) {
<-ctx.Done()
return nil
},
- UpdateFunc: func(c Config) error {
+ UpdateFunc: func(_ Config) error {
return nil
},
TargetsActiveFunc: func() map[string][]*scrape.Target {
@@ -34,7 +34,7 @@ func TestBasicManager_ApplyConfig(t *testing.T) {
t.Run("dynamic update successful", func(t *testing.T) {
spawnedCount := 0
- spawner := func(c Config) (ManagedInstance, error) {
+ spawner := func(_ Config) (ManagedInstance, error) {
spawnedCount++
newMock := baseMock
@@ -53,11 +53,11 @@ func TestBasicManager_ApplyConfig(t *testing.T) {
t.Run("dynamic update unsuccessful", func(t *testing.T) {
spawnedCount := 0
- spawner := func(c Config) (ManagedInstance, error) {
+ spawner := func(_ Config) (ManagedInstance, error) {
spawnedCount++
newMock := baseMock
- newMock.UpdateFunc = func(c Config) error {
+ newMock.UpdateFunc = func(_ Config) error {
return ErrInvalidUpdate{
Inner: fmt.Errorf("cannot dynamically update for testing reasons"),
}
@@ -77,11 +77,11 @@ func TestBasicManager_ApplyConfig(t *testing.T) {
t.Run("dynamic update errored", func(t *testing.T) {
spawnedCount := 0
- spawner := func(c Config) (ManagedInstance, error) {
+ spawner := func(_ Config) (ManagedInstance, error) {
spawnedCount++
newMock := baseMock
- newMock.UpdateFunc = func(c Config) error {
+ newMock.UpdateFunc = func(_ Config) error {
return fmt.Errorf("something really bad happened")
}
return &newMock, nil
diff --git a/pkg/storage/async_store.go b/pkg/storage/async_store.go
index 49fe26612ec69..ffc8779328ab3 100644
--- a/pkg/storage/async_store.go
+++ b/pkg/storage/async_store.go
@@ -156,7 +156,7 @@ func (a *AsyncStore) Stats(ctx context.Context, userID string, from, through mod
ctx,
len(jobs),
len(jobs),
- func(ctx context.Context, i int) error {
+ func(_ context.Context, i int) error {
resp, err := jobs[i]()
resps[i] = resp
return err
@@ -208,7 +208,7 @@ func (a *AsyncStore) Volume(ctx context.Context, userID string, from, through mo
ctx,
len(jobs),
len(jobs),
- func(ctx context.Context, i int) error {
+ func(_ context.Context, i int) error {
resp, err := jobs[i]()
resps[i] = resp
return err
@@ -324,7 +324,7 @@ func (a *AsyncStore) GetShards(
ctx,
len(jobs),
len(jobs),
- func(ctx context.Context, i int) error {
+ func(_ context.Context, i int) error {
return jobs[i]()
},
); err != nil {
diff --git a/pkg/storage/batch.go b/pkg/storage/batch.go
index 46f708d09155a..739a27f9b2334 100644
--- a/pkg/storage/batch.go
+++ b/pkg/storage/batch.go
@@ -421,7 +421,7 @@ func (it *logBatchIterator) buildIterators(chks map[model.Fingerprint][][]*LazyC
for _, chunks := range chks {
if len(chunks) != 0 && len(chunks[0]) != 0 {
streamPipeline := it.pipeline.ForStream(labels.NewBuilder(chunks[0][0].Chunk.Metric).Del(labels.MetricName).Labels())
- iterator, err := it.buildHeapIterator(chunks, from, through, streamPipeline, nextChunk)
+ iterator, err := it.buildMergeIterator(chunks, from, through, streamPipeline, nextChunk)
if err != nil {
return nil, err
}
@@ -433,7 +433,7 @@ func (it *logBatchIterator) buildIterators(chks map[model.Fingerprint][][]*LazyC
return result, nil
}
-func (it *logBatchIterator) buildHeapIterator(chks [][]*LazyChunk, from, through time.Time, streamPipeline log.StreamPipeline, nextChunk *LazyChunk) (iter.EntryIterator, error) {
+func (it *logBatchIterator) buildMergeIterator(chks [][]*LazyChunk, from, through time.Time, streamPipeline log.StreamPipeline, nextChunk *LazyChunk) (iter.EntryIterator, error) {
result := make([]iter.EntryIterator, 0, len(chks))
for i := range chks {
diff --git a/pkg/storage/batch_test.go b/pkg/storage/batch_test.go
index 0159c20a19f65..34d8e350045a5 100644
--- a/pkg/storage/batch_test.go
+++ b/pkg/storage/batch_test.go
@@ -1649,9 +1649,9 @@ func TestBuildHeapIterator(t *testing.T) {
ctx: ctx,
pipeline: log.NewNoopPipeline(),
}
- it, err := b.buildHeapIterator(tc.input, from, from.Add(6*time.Millisecond), b.pipeline.ForStream(labels.Labels{labels.Label{Name: "foo", Value: "bar"}}), nil)
+ it, err := b.buildMergeIterator(tc.input, from, from.Add(6*time.Millisecond), b.pipeline.ForStream(labels.Labels{labels.Label{Name: "foo", Value: "bar"}}), nil)
if err != nil {
- t.Errorf("buildHeapIterator error = %v", err)
+ t.Errorf("buildMergeIterator error = %v", err)
return
}
req := newQuery("{foo=\"bar\"}", from, from.Add(6*time.Millisecond), nil, nil)
diff --git a/pkg/storage/bloom/v1/archive_test.go b/pkg/storage/bloom/v1/archive_test.go
index 8ebcdb9aebccf..401cc56a218cd 100644
--- a/pkg/storage/bloom/v1/archive_test.go
+++ b/pkg/storage/bloom/v1/archive_test.go
@@ -23,7 +23,7 @@ func TestArchive(t *testing.T) {
builder, err := NewBlockBuilder(
BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
},
SeriesPageSize: 100,
diff --git a/pkg/storage/bloom/v1/ast_extractor.go b/pkg/storage/bloom/v1/ast_extractor.go
new file mode 100644
index 0000000000000..6cabd907f7676
--- /dev/null
+++ b/pkg/storage/bloom/v1/ast_extractor.go
@@ -0,0 +1,129 @@
+package v1
+
+import (
+ "github.com/prometheus/prometheus/model/labels"
+
+ "github.com/grafana/loki/v3/pkg/logql/log"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
+)
+
+// LabelMatcher represents bloom tests for key-value pairs, mapped from
+// LabelFilterExprs from the AST.
+type LabelMatcher interface{ isLabelMatcher() }
+
+// UnsupportedLabelMatcher represents a label matcher which could not be
+// mapped. Bloom tests for UnsupportedLabelMatchers must always pass.
+type UnsupportedLabelMatcher struct{}
+
+// PlainLabelMatcher represents a direct key-value matcher. Bloom tests
+// must only pass if the key-value pair exists in the bloom.
+type PlainLabelMatcher struct{ Key, Value string }
+
+// OrLabelMatcher represents a logical OR test. Bloom tests must only pass if
+// one of the Left or Right label matcher bloom tests pass.
+type OrLabelMatcher struct{ Left, Right LabelMatcher }
+
+// AndLabelMatcher represents a logical AND test. Bloom tests must only pass
+// if both of the Left and Right label matcher bloom tests pass.
+type AndLabelMatcher struct{ Left, Right LabelMatcher }
+
+// ExtractTestableLabelMatchers extracts label matchers from the label filters
+// in an expression. The resulting label matchers can then be used for testing
+// against bloom filters. Only label matchers before the first parse stage are
+// included.
+//
+// Unsupported LabelFilterExprs map to an UnsupportedLabelMatcher, for which
+// bloom tests should always pass.
+func ExtractTestableLabelMatchers(expr syntax.Expr) []LabelMatcher {
+ if expr == nil {
+ return nil
+ }
+
+ var (
+ exprs []*syntax.LabelFilterExpr
+ foundParseStage bool
+ )
+
+ visitor := &syntax.DepthFirstTraversal{
+ VisitLabelFilterFn: func(v syntax.RootVisitor, e *syntax.LabelFilterExpr) {
+ if !foundParseStage {
+ exprs = append(exprs, e)
+ }
+ },
+
+ // TODO(rfratto): Find a way to generically represent or test for an
+ // expression that modifies extracted labels (parsers, keep, drop, etc.).
+ //
+ // As the AST is now, we can't prove at compile time that the list of
+ // visitors below is complete. For example, if a new parser stage
+ // expression is added without updating this list, blooms can silently
+ // misbehave.
+
+ VisitLogfmtParserFn: func(v syntax.RootVisitor, e *syntax.LogfmtParserExpr) { foundParseStage = true },
+ VisitLabelParserFn: func(v syntax.RootVisitor, e *syntax.LabelParserExpr) { foundParseStage = true },
+ VisitJSONExpressionParserFn: func(v syntax.RootVisitor, e *syntax.JSONExpressionParser) { foundParseStage = true },
+ VisitLogfmtExpressionParserFn: func(v syntax.RootVisitor, e *syntax.LogfmtExpressionParser) { foundParseStage = true },
+ VisitLabelFmtFn: func(v syntax.RootVisitor, e *syntax.LabelFmtExpr) { foundParseStage = true },
+ VisitKeepLabelFn: func(v syntax.RootVisitor, e *syntax.KeepLabelsExpr) { foundParseStage = true },
+ VisitDropLabelsFn: func(v syntax.RootVisitor, e *syntax.DropLabelsExpr) { foundParseStage = true },
+ }
+ expr.Accept(visitor)
+
+ return buildLabelMatchers(exprs)
+}
+
+func buildLabelMatchers(exprs []*syntax.LabelFilterExpr) []LabelMatcher {
+ matchers := make([]LabelMatcher, 0, len(exprs))
+ for _, expr := range exprs {
+ matchers = append(matchers, buildLabelMatcher(expr.LabelFilterer))
+ }
+ return matchers
+}
+
+func buildLabelMatcher(filter log.LabelFilterer) LabelMatcher {
+ switch filter := filter.(type) {
+
+ case *log.LineFilterLabelFilter:
+ if filter.Type != labels.MatchEqual {
+ return UnsupportedLabelMatcher{}
+ }
+
+ return PlainLabelMatcher{
+ Key: filter.Name,
+ Value: filter.Value,
+ }
+
+ case *log.StringLabelFilter:
+ if filter.Type != labels.MatchEqual {
+ return UnsupportedLabelMatcher{}
+ }
+
+ return PlainLabelMatcher{
+ Key: filter.Name,
+ Value: filter.Value,
+ }
+
+ case *log.BinaryLabelFilter:
+ var (
+ left = buildLabelMatcher(filter.Left)
+ right = buildLabelMatcher(filter.Right)
+ )
+
+ if filter.And {
+ return AndLabelMatcher{Left: left, Right: right}
+ }
+ return OrLabelMatcher{Left: left, Right: right}
+
+ default:
+ return UnsupportedLabelMatcher{}
+ }
+}
+
+//
+// Implement marker types:
+//
+
+func (UnsupportedLabelMatcher) isLabelMatcher() {}
+func (PlainLabelMatcher) isLabelMatcher() {}
+func (OrLabelMatcher) isLabelMatcher() {}
+func (AndLabelMatcher) isLabelMatcher() {}
diff --git a/pkg/storage/bloom/v1/ast_extractor_test.go b/pkg/storage/bloom/v1/ast_extractor_test.go
new file mode 100644
index 0000000000000..856f0412c8a99
--- /dev/null
+++ b/pkg/storage/bloom/v1/ast_extractor_test.go
@@ -0,0 +1,105 @@
+package v1_test
+
+import (
+ "fmt"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
+ v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
+)
+
+func TestExtractLabelMatchers(t *testing.T) {
+ tt := []struct {
+ name string
+ input string
+ expect []v1.LabelMatcher
+ }{
+ {
+ name: "basic label matcher",
+ input: `{app="foo"} | key="value"`,
+ expect: []v1.LabelMatcher{
+ v1.PlainLabelMatcher{Key: "key", Value: "value"},
+ },
+ },
+
+ {
+ name: "or label matcher",
+ input: `{app="foo"} | key1="value1" or key2="value2"`,
+ expect: []v1.LabelMatcher{
+ v1.OrLabelMatcher{
+ Left: v1.PlainLabelMatcher{Key: "key1", Value: "value1"},
+ Right: v1.PlainLabelMatcher{Key: "key2", Value: "value2"},
+ },
+ },
+ },
+
+ {
+ name: "and label matcher",
+ input: `{app="foo"} | key1="value1" and key2="value2"`,
+ expect: []v1.LabelMatcher{
+ v1.AndLabelMatcher{
+ Left: v1.PlainLabelMatcher{Key: "key1", Value: "value1"},
+ Right: v1.PlainLabelMatcher{Key: "key2", Value: "value2"},
+ },
+ },
+ },
+
+ {
+ name: "multiple label matchers",
+ input: `{app="foo"} | key1="value1" | key2="value2"`,
+ expect: []v1.LabelMatcher{
+ v1.PlainLabelMatcher{Key: "key1", Value: "value1"},
+ v1.PlainLabelMatcher{Key: "key2", Value: "value2"},
+ },
+ },
+
+ {
+ name: "unsupported label matchers",
+ input: `{app="foo"} | key1=~"value1"`,
+ expect: []v1.LabelMatcher{
+ v1.UnsupportedLabelMatcher{},
+ },
+ },
+ }
+
+ for _, tc := range tt {
+ t.Run(tc.name, func(t *testing.T) {
+ expr, err := syntax.ParseExpr(tc.input)
+ require.NoError(t, err)
+ require.Equal(t, tc.expect, v1.ExtractTestableLabelMatchers(expr))
+ })
+ }
+}
+
+func TestExtractLabelMatchers_IgnoreAfterParse(t *testing.T) {
+ tt := []struct {
+ name string
+ expr string
+ }{
+ {"after json parser", `json`},
+ {"after logfmt parser", `logfmt`},
+ {"after pattern parser", `pattern ""`},
+ {"after regexp parser", `regexp "(?P.*)"`},
+ {"after unpack parser", `unpack`},
+ {"after label_format", `label_format foo="bar"`},
+ {"after drop labels stage", `drop foo`},
+ {"after keep labels stage", `keep foo`},
+ }
+
+ for _, tc := range tt {
+ t.Run(tc.name, func(t *testing.T) {
+ fullInput := fmt.Sprintf(`{app="foo"} | key1="value1" | %s | key2="value2"`, tc.expr)
+ expect := []v1.LabelMatcher{
+ v1.PlainLabelMatcher{Key: "key1", Value: "value1"},
+ // key2="value2" should be ignored following tc.expr
+ }
+
+ expr, err := syntax.ParseExpr(fullInput)
+ require.NoError(t, err)
+
+ require.Equal(t, expect, v1.ExtractTestableLabelMatchers(expr), "key2=value2 should be ignored with query %s", fullInput)
+ })
+ }
+}
diff --git a/pkg/storage/bloom/v1/block.go b/pkg/storage/bloom/v1/block.go
index 5ad4adbe7a0d6..c309cb7fec29c 100644
--- a/pkg/storage/bloom/v1/block.go
+++ b/pkg/storage/bloom/v1/block.go
@@ -116,7 +116,7 @@ type BlockQuerier struct {
// whether the underlying byte slice of the bloom page will be returned to the
// pool for efficiency or not. Returning to the pool can only safely be used
// when the underlying bloom bytes don't escape the decoder, i.e. when loading
-// blooms for querying (bloom-gateway), but not for writing (bloom-compactor).
+// blooms for querying (bloom-gateway), but not for writing (bloom-builder).
// Therefore, when calling NewBlockQuerier on the write path, you should always
// pass the SimpleHeapAllocator implementation of the Allocator interface.
func NewBlockQuerier(b *Block, alloc mempool.Allocator, maxPageSize int) *BlockQuerier {
@@ -170,7 +170,7 @@ func (b *BlockQuerierIter) Next() bool {
func (b *BlockQuerierIter) At() *SeriesWithBlooms {
s := b.LazySeriesIter.At()
res := &SeriesWithBlooms{
- Series: &s.Series,
+ Series: s,
Blooms: newOffsetsIter(b.blooms, s.Offsets),
}
return res
diff --git a/pkg/storage/bloom/v1/bloom.go b/pkg/storage/bloom/v1/bloom.go
index cf7053b07308a..878f254abc178 100644
--- a/pkg/storage/bloom/v1/bloom.go
+++ b/pkg/storage/bloom/v1/bloom.go
@@ -23,6 +23,13 @@ type Bloom struct {
filter.ScalableBloomFilter
}
+func NewBloom() *Bloom {
+ return &Bloom{
+ // TODO parameterise SBF options. fp_rate
+ ScalableBloomFilter: *filter.NewScalableBloomFilter(1024, 0.01, 0.8),
+ }
+}
+
func (b *Bloom) Encode(enc *encoding.Encbuf) error {
// divide by 8 b/c bloom capacity is measured in bits, but we want bytes
buf := bytes.NewBuffer(make([]byte, 0, int(b.Capacity()/8)))
@@ -167,7 +174,7 @@ type BloomPageDecoder struct {
// perf optimization.
// This can only safely be used when the underlying bloom
// bytes don't escape the decoder:
-// on reads in the bloom-gw but not in the bloom-compactor
+// on reads in the bloom-gw but not in the bloom-builder
func (d *BloomPageDecoder) Relinquish(alloc mempool.Allocator) {
if d == nil {
return
diff --git a/pkg/storage/bloom/v1/bloom_tester.go b/pkg/storage/bloom/v1/bloom_tester.go
index 349f3691f6ea0..f3741be6e267b 100644
--- a/pkg/storage/bloom/v1/bloom_tester.go
+++ b/pkg/storage/bloom/v1/bloom_tester.go
@@ -1,7 +1,9 @@
package v1
import (
+ "fmt"
"unicode/utf8"
+ "unsafe"
"github.com/grafana/regexp"
@@ -52,12 +54,12 @@ func ExtractTestableLineFilters(expr syntax.Expr) []syntax.LineFilterExpr {
var filters []syntax.LineFilterExpr
var lineFmtFound bool
visitor := &syntax.DepthFirstTraversal{
- VisitLineFilterFn: func(v syntax.RootVisitor, e *syntax.LineFilterExpr) {
+ VisitLineFilterFn: func(_ syntax.RootVisitor, e *syntax.LineFilterExpr) {
if e != nil && !lineFmtFound {
filters = append(filters, *e)
}
},
- VisitLineFmtFn: func(v syntax.RootVisitor, e *syntax.LineFmtExpr) {
+ VisitLineFmtFn: func(_ syntax.RootVisitor, e *syntax.LineFmtExpr) {
if e != nil {
lineFmtFound = true
}
@@ -252,7 +254,7 @@ func (b stringMatcherFilter) Matches(test log.Checker) bool {
}
func newStringFilterFunc(b NGramBuilder) log.NewMatcherFiltererFunc {
- return func(match []byte, caseInsensitive bool) log.MatcherFilterer {
+ return func(match []byte, _ bool) log.MatcherFilterer {
return log.WrapMatcher(stringMatcherFilter{
test: newStringTest(b, string(match)),
})
@@ -292,6 +294,25 @@ func (o orTest) MatchesWithPrefixBuf(bloom filter.Checker, buf []byte, prefixLen
return o.left.MatchesWithPrefixBuf(bloom, buf, prefixLen) || o.right.MatchesWithPrefixBuf(bloom, buf, prefixLen)
}
+type andTest struct {
+ left, right BloomTest
+}
+
+func newAndTest(left, right BloomTest) andTest {
+ return andTest{
+ left: left,
+ right: right,
+ }
+}
+
+func (a andTest) Matches(bloom filter.Checker) bool {
+ return a.left.Matches(bloom) && a.right.Matches(bloom)
+}
+
+func (a andTest) MatchesWithPrefixBuf(bloom filter.Checker, buf []byte, prefixLen int) bool {
+ return a.left.MatchesWithPrefixBuf(bloom, buf, prefixLen) && a.right.MatchesWithPrefixBuf(bloom, buf, prefixLen)
+}
+
func newPatternTest(b NGramBuilder, match string) BloomTest {
lit, err := pattern.ParseLiterals(match)
if err != nil {
@@ -305,3 +326,103 @@ func newPatternTest(b NGramBuilder, match string) BloomTest {
}
return res
}
+
+func LabelMatchersToBloomTest(matchers ...LabelMatcher) BloomTest {
+ tests := make(BloomTests, 0, len(matchers))
+ for _, matcher := range matchers {
+ tests = append(tests, matcherToBloomTest(matcher))
+ }
+ return tests
+}
+
+func matcherToBloomTest(matcher LabelMatcher) BloomTest {
+ switch matcher := matcher.(type) {
+ case UnsupportedLabelMatcher:
+ return matchAllTest{}
+
+ case PlainLabelMatcher:
+ return newStringMatcherTest(matcher)
+
+ case OrLabelMatcher:
+ return newOrTest(
+ matcherToBloomTest(matcher.Left),
+ matcherToBloomTest(matcher.Right),
+ )
+
+ case AndLabelMatcher:
+ return newAndTest(
+ matcherToBloomTest(matcher.Left),
+ matcherToBloomTest(matcher.Right),
+ )
+
+ default:
+ // Unhandled cases pass bloom tests by default.
+ return matchAllTest{}
+ }
+}
+
+type stringMatcherTest struct {
+ matcher PlainLabelMatcher
+}
+
+func newStringMatcherTest(matcher PlainLabelMatcher) stringMatcherTest {
+ return stringMatcherTest{matcher: matcher}
+}
+
+func (sm stringMatcherTest) Matches(bloom filter.Checker) bool {
+ // TODO(rfratto): reintroduce the use of a shared tokenizer here to avoid
+ // desyncing between how tokens are passed during building vs passed during
+ // querying.
+ //
+ // For a shared tokenizer to be ergonomic:
+ //
+ // 1. A prefix shouldn't be required until MatchesWithPrefixBuf is called
+ // 2. It should be possible to test for just the key
+
+ var (
+ combined = fmt.Sprintf("%s=%s", sm.matcher.Key, sm.matcher.Value)
+
+ rawKey = unsafe.Slice(unsafe.StringData(sm.matcher.Key), len(sm.matcher.Key))
+ rawCombined = unsafe.Slice(unsafe.StringData(combined), len(combined))
+ )
+
+ if !bloom.Test(rawKey) {
+ // The structured metadata key wasn't indexed. We pass the bloom test
+ // since we can only filter data out if the key was indexed but the value
+ // wasn't.
+ //
+ // TODO(rfratto): The negative test here is a bit confusing, and the key
+ // presence test should likely be done higher up.
+ return true
+ }
+
+ return bloom.Test(rawCombined)
+}
+
+func (sm stringMatcherTest) MatchesWithPrefixBuf(bloom filter.Checker, buf []byte, prefixLen int) bool {
+ var (
+ combined = fmt.Sprintf("%s=%s", sm.matcher.Key, sm.matcher.Value)
+
+ prefixedKey = appendToBuf(buf, prefixLen, sm.matcher.Key)
+ prefixedCombined = appendToBuf(buf, prefixLen, combined)
+ )
+
+ if !bloom.Test(prefixedKey) {
+ // The structured metadata key wasn't indexed for a prefix. We pass the
+ // bloom test since we can only filter data out if the key was indexed but
+ // the value wasn't.
+ //
+ // TODO(rfratto): The negative test here is a bit confusing, and the key
+ // presence test should likely be done higher up.
+ return true
+ }
+
+ return bloom.Test(prefixedCombined)
+}
+
+// appendToBuf is the equivalent of append(buf[:prefixLen], str). len(buf) must
+// be greater than or equal to prefixLen+len(str) to avoid allocations.
+func appendToBuf(buf []byte, prefixLen int, str string) []byte {
+ rawString := unsafe.Slice(unsafe.StringData(str), len(str))
+ return append(buf[:prefixLen], rawString...)
+}
diff --git a/pkg/storage/bloom/v1/bloom_tester_test.go b/pkg/storage/bloom/v1/bloom_tester_test.go
index 81adbd8a86b54..fa4e0f6e82870 100644
--- a/pkg/storage/bloom/v1/bloom_tester_test.go
+++ b/pkg/storage/bloom/v1/bloom_tester_test.go
@@ -6,13 +6,15 @@ import (
"github.com/stretchr/testify/require"
"github.com/grafana/loki/v3/pkg/logql/syntax"
+
+ "github.com/grafana/loki/pkg/push"
)
-type fakeBloom []string
+type fakeLineBloom []string
// fakeBloom is a fake bloom filter that matches tokens exactly.
// It uses a tokenizer to build the tokens for a line
-func newFakeBloom(tokenizer *NGramTokenizer, line string) (res fakeBloom) {
+func newFakeBloom(tokenizer *NGramTokenizer, line string) (res fakeLineBloom) {
toks := tokenizer.Tokens(line)
for toks.Next() {
res = append(res, string(toks.At()))
@@ -20,7 +22,7 @@ func newFakeBloom(tokenizer *NGramTokenizer, line string) (res fakeBloom) {
return
}
-func (f fakeBloom) Test(data []byte) bool {
+func (f fakeLineBloom) Test(data []byte) bool {
str := string(data)
for _, match := range f {
if str == match {
@@ -117,3 +119,117 @@ func TestBloomQueryingLogic(t *testing.T) {
})
}
}
+
+func TestLabelMatchersToBloomTest(t *testing.T) {
+ // All test cases below have access to a fake bloom filter with
+ // trace_id=exists_1 and trace_id=exists_2
+ var (
+ prefix = "fakeprefix"
+ tokenizer = NewStructuredMetadataTokenizer(prefix)
+ bloom = newFakeMetadataBloom(
+ tokenizer,
+ push.LabelAdapter{Name: "trace_id", Value: "exists_1"},
+ push.LabelAdapter{Name: "trace_id", Value: "exists_2"},
+ )
+ )
+
+ tt := []struct {
+ name string
+ query string
+ match bool
+ }{
+ {
+ name: "no matchers",
+ query: `{app="fake"}`,
+ match: true,
+ },
+ {
+ name: "basic matcher pass",
+ query: `{app="fake"} | trace_id="exists_1"`,
+ match: true,
+ },
+ {
+ name: "basic matcher fail",
+ query: `{app="fake"} | trace_id="noexist"`,
+ match: false,
+ },
+ {
+ name: "multiple matcher pass",
+ query: `{app="fake"} | trace_id="exists_1" | trace_id="exists_2"`,
+ match: true,
+ },
+ {
+ name: "multiple matcher fail",
+ query: `{app="fake"} | trace_id="exists_1" | trace_id="noexist"`,
+ match: false,
+ },
+ {
+ name: "ignore non-indexed key",
+ query: `{app="fake"} | noexist="noexist"`,
+ match: true,
+ },
+ {
+ name: "ignore unsupported operator",
+ query: `{app="fake"} | trace_id=~".*noexist.*"`,
+ match: true,
+ },
+ {
+ name: "or test pass",
+ query: `{app="fake"} | trace_id="noexist" or trace_id="exists_1"`,
+ match: true,
+ },
+ {
+ name: "or test fail",
+ query: `{app="fake"} | trace_id="noexist" or trace_id="noexist"`,
+ match: false,
+ },
+ {
+ name: "and test pass",
+ query: `{app="fake"} | trace_id="exists_1" or trace_id="exists_2"`,
+ match: true,
+ },
+ {
+ name: "and test fail",
+ query: `{app="fake"} | trace_id="exists_1" and trace_id="noexist"`,
+ match: false,
+ },
+ }
+
+ for _, tc := range tt {
+ t.Run(tc.name, func(t *testing.T) {
+ expr, err := syntax.ParseExpr(tc.query)
+ require.NoError(t, err)
+
+ matchers := ExtractTestableLabelMatchers(expr)
+ bloomTest := LabelMatchersToBloomTest(matchers...)
+
+ // .Matches and .MatchesWithPrefixBuf should both have the same result.
+ require.Equal(t, tc.match, bloomTest.Matches(bloom))
+ require.Equal(t, tc.match, bloomTest.MatchesWithPrefixBuf(bloom, []byte(prefix), len(prefix)))
+ })
+ }
+}
+
+type fakeMetadataBloom []string
+
+// fakeBloom is a fake bloom filter that matches tokens exactly.
+// It uses a tokenizer to build the tokens for a line
+func newFakeMetadataBloom(tokenizer *StructuredMetadataTokenizer, kvs ...push.LabelAdapter) (res fakeLineBloom) {
+ for _, kv := range kvs {
+ it := tokenizer.Tokens(kv)
+ for it.Next() {
+ res = append(res, it.At())
+ }
+ }
+ return res
+}
+
+func (f fakeMetadataBloom) Test(data []byte) bool {
+ str := string(data)
+ for _, match := range f {
+ if str == match {
+ return true
+ }
+ }
+ return false
+}
diff --git a/pkg/storage/bloom/v1/bloom_tokenizer.go b/pkg/storage/bloom/v1/bloom_tokenizer.go
index e5a71d0aedd4d..333e2f22a37cc 100644
--- a/pkg/storage/bloom/v1/bloom_tokenizer.go
+++ b/pkg/storage/bloom/v1/bloom_tokenizer.go
@@ -2,15 +2,12 @@ package v1
import (
"math"
- "unsafe"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/loki/v3/pkg/iter"
v2iter "github.com/grafana/loki/v3/pkg/iter/v2"
- "github.com/grafana/loki/v3/pkg/logproto"
- "github.com/grafana/loki/v3/pkg/storage/bloom/v1/filter"
"github.com/grafana/loki/v3/pkg/util/encoding"
"github.com/grafana/loki/pkg/push"
@@ -92,28 +89,17 @@ func estimatedCount(m uint, p float64) uint {
return uint(-float64(m) * math.Log(1-p))
}
-func (bt *BloomTokenizer) newBloom() *Bloom {
- return &Bloom{
- // TODO parameterise SBF options. fp_rate
- ScalableBloomFilter: *filter.NewScalableBloomFilter(1024, 0.01, 0.8),
- }
-}
-
// Populates a bloom filter(s) with the tokens from the given chunks.
// Called once per series
-func (bt *BloomTokenizer) Populate(
- blooms v2iter.SizedIterator[*Bloom],
- chks v2iter.Iterator[ChunkRefWithIter],
- ch chan *BloomCreation,
-) {
+func (bt *BloomTokenizer) Populate(blooms v2iter.SizedIterator[*Bloom], chks v2iter.Iterator[ChunkRefWithIter], ch chan *BloomCreation) {
clear(bt.cache) // MUST always clear the cache before starting a new series
var next bool
// All but the last bloom are considered full -- send back unaltered
for next = blooms.Next(); next && blooms.Remaining() > 0; next = blooms.Next() {
ch <- &BloomCreation{
- Bloom: blooms.At(),
- SourceBytesAdded: 0,
+ Bloom: blooms.At(),
+ Info: newIndexingInfo(),
}
}
@@ -127,36 +113,30 @@ func (bt *BloomTokenizer) Populate(
// We have the feeling that the empty blooms may be reused from old blocks.
// Here we log an error if we find an empty bloom.
if bloom.Count() == 0 {
- level.Warn(bt.logger).Log(
- "msg", "found existing empty bloom",
- )
+ level.Warn(bt.logger).Log("msg", "found existing empty bloom")
}
} else {
- bloom = bt.newBloom()
+ bloom = NewBloom()
}
- var bytesAdded int
+ info := newIndexingInfo()
for chks.Next() {
chk := chks.At()
- itr := newPeekingEntryIterAdapter(chk.Itr)
+ itr := v2iter.NewPeekIter(chk.Itr)
for {
- full, newBytes := bt.addChunkToBloom(
- bloom,
- chk.Ref,
- itr,
- )
- bytesAdded += newBytes
+ full, chunkStats := bt.addChunkToBloom(bloom, chk.Ref, itr)
+ info = info.merge(chunkStats)
// If a bloom is full, the chunk wasn't completely added
// so we'll submit this bloom, start a new one, and continue indexing
if full {
- bt.sendBloom(ch, bloom, bytesAdded)
+ bt.sendBloom(ch, bloom, info)
- // start a new bloom + reset bytesAdded counter
- bytesAdded = 0
- bloom = bt.newBloom()
+ // start a new bloom + reset stats
+ info = newIndexingInfo()
+ bloom = NewBloom()
// cache _MUST_ be cleared when a new bloom is created to ensure that all tokens from
// each line are indexed into at least one bloom
@@ -170,21 +150,15 @@ func (bt *BloomTokenizer) Populate(
// TODO(salvacorts): Delete this once we solve the correctness bug
if bloom.Count() == 0 {
- level.Warn(bt.logger).Log(
- "msg", "resulting bloom is empty",
- )
+ level.Warn(bt.logger).Log("msg", "resulting bloom is empty")
}
// Send the last bloom
- bt.sendBloom(ch, bloom, bytesAdded)
+ bt.sendBloom(ch, bloom, info)
close(ch)
}
-func (bt *BloomTokenizer) sendBloom(
- ch chan<- *BloomCreation,
- bloom *Bloom,
- bytesAdded int,
-) {
+func (bt *BloomTokenizer) sendBloom(ch chan<- *BloomCreation, bloom *Bloom, info indexingInfo) {
fillRatio := bloom.ScalableBloomFilter.FillRatio()
bt.metrics.hammingWeightRatio.Observe(fillRatio)
bt.metrics.estimatedCount.Observe(
@@ -193,70 +167,57 @@ func (bt *BloomTokenizer) sendBloom(
bt.metrics.bloomSize.Observe(float64(bloom.ScalableBloomFilter.Capacity() / eightBits))
bt.metrics.bloomsTotal.Inc()
ch <- &BloomCreation{
- Bloom: bloom,
- SourceBytesAdded: bytesAdded,
+ Bloom: bloom,
+ Info: info,
}
}
-// addChunkToBloom adds the tokens from the given chunk to the given bloom.
-// It continues until the chunk is exhausted or the bloom is full.
-// NB(owen-d): We ensure the invariant that each line is indexed entirely into at least one bloom.
-// This includes both raw ngrams and chunk-prefixed ngrams and is why we use a peeking iterator --
-// so we can advance the iterator only after we're sure the bloom has accepted the line.
-// This is because the _line_ is the atom in Loki's data model and a query must either match (or not) an individual line.
-// Therefore, we index entire lines into a bloom to ensure a lookups are accurate.
-func (bt *BloomTokenizer) addChunkToBloom(bloom *Bloom, ref ChunkRef, entryIter v2iter.PeekIterator[push.Entry]) (full bool, bytesAdded int) {
+func prefixForChunkRef(chk ChunkRef) []byte {
+ enc := encoding.EncWith(make([]byte, 0, 20))
+ enc.PutBE64(uint64(chk.From)) // 8 bytes
+ enc.PutBE64(uint64(chk.Through)) // 8 bytes
+ enc.PutBE32(chk.Checksum) // 4 bytes
+ return enc.Get()
+}
+
+// addChunkToBloom adds the values from structured metadata from the entries of the given chunk to the given bloom.
+// addChunkToBloom returns true if the bloom has been completely filled, and may not have consumed the entire iterator.
+// addChunkToBloom must be called multiple times until returning false with new blooms until the iterator has been fully consumed.
+func (bt *BloomTokenizer) addChunkToBloom(bloom *Bloom, ref ChunkRef, entryIter v2iter.PeekIterator[push.Entry]) (bool, indexingInfo) {
var (
- tokenBuf, prefixLn = prefixedToken(bt.lineTokenizer.N(), ref, nil)
- tokens int
- successfulInserts int
- cachedInserts int
- collisionInserts int
- chunkBytes int
- linesAdded int
+ tokens int
+ successfulInserts int
+ cachedInserts int
+ collisionInserts int
+ linesAdded int
+
+ collision bool
)
+ // return values
+ full, info := false, newIndexingInfo()
+
+ tokenizer := NewStructuredMetadataTokenizer(string(prefixForChunkRef(ref)))
+
// We use a peeking iterator to avoid advancing the iterator until we're sure the bloom has accepted the line.
-outer:
for entry, ok := entryIter.Peek(); ok; entry, ok = entryIter.Peek() {
- line := entry.Line
- chunkBytes += len(line)
-
- tokenItrs := []v2iter.Iterator[[]byte]{
- // two iterators, one for the raw tokens and one for the chunk prefixed tokens.
- // Warning: the underlying line tokenizer (used in both iterators) uses the same buffer for tokens.
- // They are NOT SAFE for concurrent use.
- NewPrefixedTokenIter(tokenBuf, prefixLn, bt.lineTokenizer.Tokens(line)),
- bt.lineTokenizer.Tokens(line),
- }
+ for _, kv := range entry.StructuredMetadata {
+ info.sourceBytes += len(kv.Name) + len(kv.Value)
+ info.indexedFields.Add(Field(kv.Name))
- for _, itr := range tokenItrs {
- for itr.Next() {
- tok := itr.At()
+ tokenItr := tokenizer.Tokens(kv)
+ for tokenItr.Next() {
+ tok := tokenItr.At()
tokens++
- // TODO[owen-d]: [n]byte this
- // To avoid allocations, an unsafe string can be used to check ownership in cache.
- str := unsafe.String(unsafe.SliceData(tok), len(tok))
// A cache is used ahead of the SBF, as it cuts out the costly operations of scaling bloom filters
- if _, found := bt.cache[str]; found {
+ if _, found := bt.cache[tok]; found {
cachedInserts++
continue
}
// maxBloomSize is in bytes, but blooms operate at the bit level; adjust
- var collision bool
- collision, full = bloom.ScalableBloomFilter.TestAndAddWithMaxSize(tok, bt.maxBloomSize*eightBits)
-
- if full {
- // edge case: one line maxed out the bloom size -- retrying is futile
- // (and will loop endlessly), so we'll just skip indexing it
- if linesAdded == 0 {
- _ = entryIter.Next()
- }
-
- break outer
- }
+ collision, full = bloom.ScalableBloomFilter.TestAndAddWithMaxSize([]byte(tok), bt.maxBloomSize*eightBits)
if collision {
collisionInserts++
@@ -266,8 +227,7 @@ outer:
// only register the key in the cache if it was successfully added to the bloom
// as can prevent us from trying subsequent copies
- str = string(tok)
- bt.cache[str] = nil
+ bt.cache[tok] = nil
if len(bt.cache) >= cacheSize { // While crude, this has proven efficient in performance testing. This speaks to the similarity in log lines near each other
clear(bt.cache)
}
@@ -277,6 +237,11 @@ outer:
// Only advance the iterator once we're sure the bloom has accepted the line
linesAdded++
_ = entryIter.Next()
+
+ // Only break out of the loop if the bloom filter is full after indexing all structured metadata of an entry.
+ if full {
+ break
+ }
}
// update metrics after each chunk added for more consistent reporting
@@ -284,23 +249,7 @@ outer:
bt.metrics.insertsTotal.WithLabelValues(collisionTypeFalse).Add(float64(successfulInserts))
bt.metrics.insertsTotal.WithLabelValues(collisionTypeCache).Add(float64(cachedInserts))
bt.metrics.insertsTotal.WithLabelValues(collisionTypeTrue).Add(float64(collisionInserts))
- bt.metrics.sourceBytesAdded.Add(float64(chunkBytes))
-
- return full, chunkBytes
-}
-
-type entryIterAdapter struct {
- iter.EntryIterator
-}
-
-func (a entryIterAdapter) At() logproto.Entry {
- return a.EntryIterator.At()
-}
-
-func (a entryIterAdapter) Err() error {
- return a.EntryIterator.Err()
-}
+ bt.metrics.sourceBytesAdded.Add(float64(info.sourceBytes))
-func newPeekingEntryIterAdapter(itr iter.EntryIterator) *v2iter.PeekIter[logproto.Entry] {
- return v2iter.NewPeekIter[logproto.Entry](entryIterAdapter{itr})
+ return full, info
}
diff --git a/pkg/storage/bloom/v1/bloom_tokenizer_test.go b/pkg/storage/bloom/v1/bloom_tokenizer_test.go
index 29deada6ab82c..7e8f5c4c99939 100644
--- a/pkg/storage/bloom/v1/bloom_tokenizer_test.go
+++ b/pkg/storage/bloom/v1/bloom_tokenizer_test.go
@@ -100,12 +100,15 @@ func TestTokenizerPopulate(t *testing.T) {
var testLine = "this is a log line"
bt := NewBloomTokenizer(DefaultNGramLength, DefaultNGramSkip, 0, metrics, logger.NewNopLogger())
- sbf := filter.NewScalableBloomFilter(1024, 0.01, 0.8)
-
+ metadata := push.LabelsAdapter{
+ {Name: "pod", Value: "loki-1"},
+ {Name: "trace_id", Value: "3bef3c91643bde73"},
+ }
memChunk := chunkenc.NewMemChunk(chunkenc.ChunkFormatV4, chunkenc.EncSnappy, chunkenc.ChunkHeadFormatFor(chunkenc.ChunkFormatV4), 256000, 1500000)
_, _ = memChunk.Append(&push.Entry{
- Timestamp: time.Unix(0, 1),
- Line: testLine,
+ Timestamp: time.Unix(0, 1),
+ Line: testLine,
+ StructuredMetadata: metadata,
})
itr, err := memChunk.Iterator(
context.Background(),
@@ -116,24 +119,25 @@ func TestTokenizerPopulate(t *testing.T) {
)
require.Nil(t, err)
- bloom := Bloom{
- ScalableBloomFilter: *sbf,
- }
+ ref := ChunkRef{}
+ bloom := NewBloom()
blooms, err := populateAndConsumeBloom(
bt,
- v2.NewSliceIter([]*Bloom{&bloom}),
- v2.NewSliceIter([]ChunkRefWithIter{{Ref: ChunkRef{},
- Itr: itr}}),
+ v2.NewSliceIter([]*Bloom{bloom}),
+ v2.NewSliceIter([]ChunkRefWithIter{{Ref: ref, Itr: itr}}),
)
require.NoError(t, err)
require.Equal(t, 1, len(blooms))
- tokenizer := NewNGramTokenizer(DefaultNGramLength, DefaultNGramSkip)
- toks := tokenizer.Tokens(testLine)
- for toks.Next() {
- token := toks.At()
- require.True(t, blooms[0].Test(token))
+ tokenizer := NewStructuredMetadataTokenizer(string(prefixForChunkRef(ref)))
+
+ for _, kv := range metadata {
+ tokens := tokenizer.Tokens(kv)
+ for tokens.Next() {
+ token := tokens.At()
+ require.True(t, blooms[0].Test([]byte(token)))
+ }
}
}
@@ -141,10 +145,15 @@ func TestBloomTokenizerPopulateWithoutPreexistingBloom(t *testing.T) {
var testLine = "this is a log line"
bt := NewBloomTokenizer(DefaultNGramLength, DefaultNGramSkip, 0, metrics, logger.NewNopLogger())
+ metadata := push.LabelsAdapter{
+ {Name: "pod", Value: "loki-1"},
+ {Name: "trace_id", Value: "3bef3c91643bde73"},
+ }
memChunk := chunkenc.NewMemChunk(chunkenc.ChunkFormatV4, chunkenc.EncSnappy, chunkenc.ChunkHeadFormatFor(chunkenc.ChunkFormatV4), 256000, 1500000)
_, _ = memChunk.Append(&push.Entry{
- Timestamp: time.Unix(0, 1),
- Line: testLine,
+ Timestamp: time.Unix(0, 1),
+ Line: testLine,
+ StructuredMetadata: metadata,
})
itr, err := memChunk.Iterator(
context.Background(),
@@ -155,30 +164,34 @@ func TestBloomTokenizerPopulateWithoutPreexistingBloom(t *testing.T) {
)
require.Nil(t, err)
+ ref := ChunkRef{}
+
blooms, err := populateAndConsumeBloom(
bt,
v2.NewEmptyIter[*Bloom](),
- v2.NewSliceIter([]ChunkRefWithIter{{Ref: ChunkRef{},
- Itr: itr}}),
+ v2.NewSliceIter([]ChunkRefWithIter{{Ref: ref, Itr: itr}}),
)
require.NoError(t, err)
require.Equal(t, 1, len(blooms))
- tokenizer := NewNGramTokenizer(DefaultNGramLength, DefaultNGramSkip)
- toks := tokenizer.Tokens(testLine)
- for toks.Next() {
- token := toks.At()
- require.True(t, blooms[0].Test(token))
- }
+ tokenizer := NewStructuredMetadataTokenizer(string(prefixForChunkRef(ref)))
+ for _, kv := range metadata {
+ tokens := tokenizer.Tokens(kv)
+ for tokens.Next() {
+ token := tokens.At()
+ require.True(t, blooms[0].Test([]byte(token)))
+ }
+ }
}
-func chunkRefItrFromLines(lines ...string) (iter.EntryIterator, error) {
+func chunkRefItrFromMetadata(metadata ...push.LabelsAdapter) (iter.EntryIterator, error) {
memChunk := chunkenc.NewMemChunk(chunkenc.ChunkFormatV4, chunkenc.EncSnappy, chunkenc.ChunkHeadFormatFor(chunkenc.ChunkFormatV4), 256000, 1500000)
- for i, line := range lines {
+ for i, md := range metadata {
if _, err := memChunk.Append(&push.Entry{
- Timestamp: time.Unix(0, int64(i)),
- Line: line,
+ Timestamp: time.Unix(0, int64(i)),
+ Line: "line content",
+ StructuredMetadata: md,
}); err != nil {
return nil, err
}
@@ -195,7 +208,7 @@ func chunkRefItrFromLines(lines ...string) (iter.EntryIterator, error) {
}
func randomStr(ln int) string {
- rng := rand.New(rand.NewSource(0))
+ rng := rand.New(rand.NewSource(time.Now().UnixNano()))
charset := []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_!@#$%^&*() ")
res := make([]rune, ln)
@@ -206,24 +219,20 @@ func randomStr(ln int) string {
}
func TestTokenizerPopulateWontExceedMaxSize(t *testing.T) {
- maxSize := 2048
+ maxSize := 4 << 10
bt := NewBloomTokenizer(DefaultNGramLength, DefaultNGramSkip, maxSize, NewMetrics(nil), logger.NewNopLogger())
ch := make(chan *BloomCreation)
- line := randomStr(10e3)
- itr, err := chunkRefItrFromLines(line)
+
+ metadata := make([]push.LabelsAdapter, 0, 4<<10)
+ for i := 0; i < cap(metadata); i++ {
+ metadata = append(metadata, push.LabelsAdapter{{Name: "trace_id", Value: randomStr(12)}})
+ }
+
+ itr, err := chunkRefItrFromMetadata(metadata...)
require.NoError(t, err)
go bt.Populate(
- v2.NewSliceIter([]*Bloom{
- {
- *filter.NewScalableBloomFilter(1024, 0.01, 0.8),
- },
- }),
- v2.NewSliceIter([]ChunkRefWithIter{
- {
- Ref: ChunkRef{},
- Itr: itr,
- },
- }),
+ v2.NewEmptyIter[*Bloom](),
+ v2.NewSliceIter([]ChunkRefWithIter{{Ref: ChunkRef{}, Itr: itr}}),
ch,
)
@@ -231,10 +240,11 @@ func TestTokenizerPopulateWontExceedMaxSize(t *testing.T) {
for created := range ch {
ct++
capacity := created.Bloom.ScalableBloomFilter.Capacity() / 8
+ t.Log(ct, int(capacity), maxSize)
require.Less(t, int(capacity), maxSize)
}
// ensure we created two bloom filters from this dataset
- require.Equal(t, 2, ct)
+ require.Greater(t, ct, 2)
}
func populateAndConsumeBloom(
@@ -292,21 +302,19 @@ func BenchmarkPopulateSeriesWithBloom(b *testing.B) {
func TestTokenizerClearsCacheBetweenPopulateCalls(t *testing.T) {
bt := NewBloomTokenizer(DefaultNGramLength, DefaultNGramSkip, 0, NewMetrics(nil), logger.NewNopLogger())
- line := "foobarbazz"
+ md := push.LabelsAdapter{
+ {Name: "trace_id", Value: "3bef3c91643bde73"},
+ }
var blooms []*Bloom
+ ref := ChunkRef{}
for i := 0; i < 2; i++ {
ch := make(chan *BloomCreation)
- itr, err := chunkRefItrFromLines(line)
+ itr, err := chunkRefItrFromMetadata(md)
require.NoError(t, err)
go bt.Populate(
v2.NewEmptyIter[*Bloom](),
- v2.NewSliceIter([]ChunkRefWithIter{
- {
- Ref: ChunkRef{},
- Itr: itr,
- },
- }),
+ v2.NewSliceIter([]ChunkRefWithIter{{Ref: ref, Itr: itr}}),
ch,
)
var ct int
@@ -319,11 +327,12 @@ func TestTokenizerClearsCacheBetweenPopulateCalls(t *testing.T) {
}
+ tokenizer := NewStructuredMetadataTokenizer(string(prefixForChunkRef(ref)))
for _, bloom := range blooms {
- toks := bt.lineTokenizer.Tokens(line)
+ toks := tokenizer.Tokens(md[0])
for toks.Next() {
token := toks.At()
- require.True(t, bloom.Test(token))
+ require.True(t, bloom.Test([]byte(token)))
}
require.NoError(t, toks.Err())
}
diff --git a/pkg/storage/bloom/v1/builder.go b/pkg/storage/bloom/v1/builder.go
index 56882c4cb140a..08f92631b1643 100644
--- a/pkg/storage/bloom/v1/builder.go
+++ b/pkg/storage/bloom/v1/builder.go
@@ -68,7 +68,7 @@ func (b BlockOptions) Encode(enc *encoding.Encbuf) {
func NewBlockOptions(enc chunkenc.Encoding, nGramLength, nGramSkip, maxBlockSizeBytes, maxBloomSizeBytes uint64) BlockOptions {
opts := NewBlockOptionsFromSchema(Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: enc,
nGramLength: nGramLength,
nGramSkip: nGramSkip,
@@ -151,10 +151,29 @@ func (w *PageWriter) writePage(writer io.Writer, pool chunkenc.WriterPool, crc32
return decompressedLen, w.enc.Len(), nil
}
+// indexingInfo is a datastructure that holds information about the indexing operation.
+type indexingInfo struct {
+ sourceBytes int
+ indexedFields Set[Field]
+}
+
+func newIndexingInfo() indexingInfo {
+ return indexingInfo{
+ sourceBytes: 0,
+ indexedFields: NewSet[Field](16),
+ }
+}
+
+func (s indexingInfo) merge(other indexingInfo) indexingInfo {
+ s.sourceBytes += other.sourceBytes
+ s.indexedFields.Union(other.indexedFields)
+ return s
+}
+
type BloomCreation struct {
- Bloom *Bloom
- SourceBytesAdded int
- Err error
+ Bloom *Bloom
+ Info indexingInfo
+ Err error
}
// Simplistic implementation of a merge builder that builds a single block
@@ -164,12 +183,12 @@ type MergeBuilder struct {
blocks iter.Iterator[*SeriesWithBlooms]
// store
store iter.Iterator[*Series]
- // Add chunks to a bloom
- populate func(s *Series, srcBlooms iter.SizedIterator[*Bloom], toAdd ChunkRefs, ch chan *BloomCreation)
+ // Add chunks of a single series to a bloom
+ populate BloomPopulatorFunc
metrics *Metrics
}
-type BloomPopulatorFunc = func(s *Series, srcBlooms iter.SizedIterator[*Bloom], toAdd ChunkRefs, ch chan *BloomCreation)
+type BloomPopulatorFunc func(series *Series, preExistingBlooms iter.SizedIterator[*Bloom], chunksToAdd ChunkRefs, ch chan *BloomCreation)
// NewMergeBuilder is a specific builder which does the following:
// 1. merges multiple blocks into a single ordered querier,
@@ -222,7 +241,8 @@ func (mb *MergeBuilder) processNextSeries(
bool, // done building block
error, // error
) {
- var blockSeriesIterated, chunksIndexed, chunksCopied, bytesAdded int
+ var blockSeriesIterated, chunksIndexed, chunksCopied int
+
defer func() {
mb.metrics.blockSeriesIterated.Add(float64(blockSeriesIterated))
mb.metrics.chunksIndexed.WithLabelValues(chunkIndexedTypeIterated).Add(float64(chunksIndexed))
@@ -257,9 +277,11 @@ func (mb *MergeBuilder) processNextSeries(
}
var (
- offsets []BloomOffset
+ offsets []BloomOffset
+
chunksToAdd = nextInStore.Chunks
preExistingBlooms iter.SizedIterator[*Bloom] = iter.NewEmptyIter[*Bloom]()
+ info = newIndexingInfo()
)
if nextInBlocks != nil && nextInBlocks.Series.Fingerprint == nextInStore.Fingerprint {
@@ -267,6 +289,8 @@ func (mb *MergeBuilder) processNextSeries(
chunksToAdd = nextInStore.Chunks.Unless(nextInBlocks.Series.Chunks)
chunksCopied += len(nextInStore.Chunks) - len(chunksToAdd)
preExistingBlooms = nextInBlocks.Blooms
+ // we also need to carry over existing indexed fields from the series metadata
+ info.indexedFields.Union(nextInBlocks.Series.Meta.Fields)
}
chunksIndexed += len(chunksToAdd)
@@ -275,26 +299,26 @@ func (mb *MergeBuilder) processNextSeries(
ch := make(chan *BloomCreation)
go mb.populate(nextInStore, preExistingBlooms, chunksToAdd, ch)
- for bloom := range ch {
- if bloom.Err != nil {
- return nil, bytesAdded, 0, false, false, errors.Wrap(bloom.Err, "populating bloom")
+ for creation := range ch {
+ if creation.Err != nil {
+ return nil, info.sourceBytes, 0, false, false, errors.Wrap(creation.Err, "populating bloom")
}
- offset, err := builder.AddBloom(bloom.Bloom)
+ offset, err := builder.AddBloom(creation.Bloom)
if err != nil {
- return nil, bytesAdded, 0, false, false, errors.Wrapf(
+ return nil, info.sourceBytes, 0, false, false, errors.Wrapf(
err, "adding bloom to block for fp (%s)", nextInStore.Fingerprint,
)
}
offsets = append(offsets, offset)
- bytesAdded += bloom.SourceBytesAdded
+ info.merge(creation.Info)
}
- done, err := builder.AddSeries(*nextInStore, offsets)
+ done, err := builder.AddSeries(*nextInStore, offsets, info.indexedFields)
if err != nil {
- return nil, bytesAdded, 0, false, false, errors.Wrap(err, "committing series")
+ return nil, info.sourceBytes, 0, false, false, errors.Wrap(err, "committing series")
}
- return nextInBlocks, bytesAdded, chunksIndexed + chunksCopied, blocksFinished, done, nil
+ return nextInBlocks, info.sourceBytes, chunksIndexed + chunksCopied, blocksFinished, done, nil
}
func (mb *MergeBuilder) Build(builder *BlockBuilder) (checksum uint32, totalBytes int, err error) {
diff --git a/pkg/storage/bloom/v1/builder_test.go b/pkg/storage/bloom/v1/builder_test.go
index 6abed637d7c79..8db825e799657 100644
--- a/pkg/storage/bloom/v1/builder_test.go
+++ b/pkg/storage/bloom/v1/builder_test.go
@@ -11,7 +11,6 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
iter "github.com/grafana/loki/v3/pkg/iter/v2"
- "github.com/grafana/loki/v3/pkg/storage/bloom/v1/filter"
"github.com/grafana/loki/v3/pkg/util/encoding"
"github.com/grafana/loki/v3/pkg/util/mempool"
)
@@ -24,11 +23,11 @@ var blockEncodings = []chunkenc.Encoding{
chunkenc.EncZstd,
}
-func TestBlockOptionsRoundTrip(t *testing.T) {
+func TestBlockOptions_RoundTrip(t *testing.T) {
t.Parallel()
opts := BlockOptions{
Schema: Schema{
- version: V1,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
nGramLength: 10,
nGramSkip: 2,
@@ -50,7 +49,6 @@ func TestBlockOptionsRoundTrip(t *testing.T) {
func TestBlockBuilder_RoundTrip(t *testing.T) {
numSeries := 100
- data, keys := MkBasicSeriesWithLiteralBlooms(numSeries, 0, 0xffff, 0, 10000)
for _, enc := range blockEncodings {
// references for linking in memory reader+writer
@@ -89,7 +87,7 @@ func TestBlockBuilder_RoundTrip(t *testing.T) {
t.Run(desc, func(t *testing.T) {
blockOpts := BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: enc,
nGramLength: 10,
nGramSkip: 2,
@@ -98,16 +96,12 @@ func TestBlockBuilder_RoundTrip(t *testing.T) {
BloomPageSize: 10 << 10,
BlockSize: tc.maxBlockSize,
}
+ data, keys := MkBasicSeriesWithBlooms(numSeries, 0, 0xffff, 0, 10000)
builder, err := NewBlockBuilder(blockOpts, tc.writer)
require.Nil(t, err)
- itr := iter.NewPeekIter[SeriesWithBlooms](
- iter.NewMapIter(
- iter.NewSliceIter[SeriesWithLiteralBlooms](data),
- func(x SeriesWithLiteralBlooms) SeriesWithBlooms { return x.SeriesWithBlooms() },
- ),
- )
+ itr := iter.NewPeekIter(iter.NewSliceIter(data))
_, err = builder.BuildFrom(itr)
require.Nil(t, err)
@@ -135,7 +129,7 @@ func TestBlockBuilder_RoundTrip(t *testing.T) {
got := querier.At()
blooms, err := iter.Collect(got.Blooms)
require.Nil(t, err)
- require.Equal(t, processedData[i].Series, got.Series)
+ require.Equal(t, processedData[i].Series.Series, got.Series.Series)
for _, key := range keys[i] {
found := false
for _, b := range blooms {
@@ -162,7 +156,7 @@ func TestBlockBuilder_RoundTrip(t *testing.T) {
got := querier.At()
blooms, err := iter.Collect(got.Blooms)
require.Nil(t, err)
- require.Equal(t, halfData[j].Series, got.Series)
+ require.Equal(t, halfData[j].Series.Series, got.Series.Series)
for _, key := range halfKeys[j] {
found := false
for _, b := range blooms {
@@ -210,7 +204,7 @@ func TestMergeBuilder(t *testing.T) {
data, _ := MkBasicSeriesWithBlooms(numSeries, 0, 0xffff, 0, 10000)
blockOpts := BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
},
SeriesPageSize: 100,
@@ -245,12 +239,12 @@ func TestMergeBuilder(t *testing.T) {
}
// We're not testing the ability to extend a bloom in this test
- pop := func(s *Series, srcBlooms iter.SizedIterator[*Bloom], toAdd ChunkRefs, ch chan *BloomCreation) {
- for srcBlooms.Next() {
- bloom := srcBlooms.At()
+ populate := func(_ *Series, preExistingBlooms iter.SizedIterator[*Bloom], _ ChunkRefs, ch chan *BloomCreation) {
+ for preExistingBlooms.Next() {
+ bloom := preExistingBlooms.At()
ch <- &BloomCreation{
- Bloom: bloom,
- SourceBytesAdded: int(bloom.Capacity()) / 8,
+ Bloom: bloom,
+ Info: newIndexingInfo(),
}
}
close(ch)
@@ -261,21 +255,18 @@ func TestMergeBuilder(t *testing.T) {
storeItr := iter.NewMapIter[SeriesWithBlooms, *Series](
iter.NewSliceIter[SeriesWithBlooms](data),
func(swb SeriesWithBlooms) *Series {
- return swb.Series
+ return &swb.Series.Series
},
)
// Ensure that the merge builder combines all the blocks correctly
- mergeBuilder := NewMergeBuilder(dedupedBlocks(blocks), storeItr, pop, NewMetrics(nil))
+ mergeBuilder := NewMergeBuilder(dedupedBlocks(blocks), storeItr, populate, NewMetrics(nil))
indexBuf := bytes.NewBuffer(nil)
bloomsBuf := bytes.NewBuffer(nil)
writer := NewMemoryBlockWriter(indexBuf, bloomsBuf)
reader := NewByteReader(indexBuf, bloomsBuf)
- builder, err := NewBlockBuilder(
- blockOpts,
- writer,
- )
+ builder, err := NewBlockBuilder(blockOpts, writer)
require.Nil(t, err)
_, _, err = mergeBuilder.Build(builder)
@@ -287,7 +278,11 @@ func TestMergeBuilder(t *testing.T) {
EqualIterators[*SeriesWithBlooms](
t,
func(a, b *SeriesWithBlooms) {
- require.Equal(t, a.Series, b.Series, "expected %+v, got %+v", a, b)
+ require.Equal(t, a.Series.Series, b.Series.Series, "expected series %+v, got %+v", a.Series.Series, b.Series.Series)
+ require.Equal(t, a.Series.Meta.Fields, b.Series.Meta.Fields, "expected fields %+v, got %+v", a.Series.Meta.Fields, b.Series.Meta.Fields)
+ // TODO(chaudum): Investigate why offsets not match
+ // This has not been tested before, so I'm not too worried about something being broken.
+ // require.Equal(t, a.Series.Meta.Offsets, b.Series.Meta.Offsets, "expected offsets %+v, got %+v", a.Series.Meta.Offsets, b.Series.Meta.Offsets)
},
iter.NewSliceIter[*SeriesWithBlooms](PointerSlice(data)),
querier.Iter(),
@@ -306,7 +301,7 @@ func TestMergeBuilderFingerprintCollision(t *testing.T) {
blockOpts := BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
},
SeriesPageSize: 100,
@@ -353,11 +348,15 @@ func TestMergeBuilderFingerprintCollision(t *testing.T) {
}
// We're not testing the ability to extend a bloom in this test
- pop := func(s *Series, srcBlooms iter.SizedIterator[*Bloom], toAdd ChunkRefs, ch chan *BloomCreation) {
+ pop := func(_ *Series, _ iter.SizedIterator[*Bloom], _ ChunkRefs, ch chan *BloomCreation) {
+ bloom := NewBloom()
+ stats := indexingInfo{
+ sourceBytes: int(bloom.Capacity()) / 8,
+ indexedFields: NewSetFromLiteral[Field]("__all__"),
+ }
ch <- &BloomCreation{
- Bloom: &Bloom{
- ScalableBloomFilter: *filter.NewScalableBloomFilter(1024, 0.01, 0.8),
- },
+ Bloom: bloom,
+ Info: stats,
}
close(ch)
}
@@ -399,7 +398,7 @@ func TestBlockReset(t *testing.T) {
reader := NewByteReader(indexBuf, bloomsBuf)
schema := Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
nGramLength: 10,
nGramSkip: 2,
@@ -457,7 +456,7 @@ func TestMergeBuilder_Roundtrip(t *testing.T) {
blockOpts := BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy, // test with different encodings?
nGramLength: 4, // needs to match values from MkBasicSeriesWithBlooms
nGramSkip: 0, // needs to match values from MkBasicSeriesWithBlooms
@@ -512,11 +511,11 @@ func TestMergeBuilder_Roundtrip(t *testing.T) {
return a.Series.Fingerprint == b.Fingerprint
},
func(swb *SeriesWithBlooms) *Series {
- return swb.Series
+ return &swb.Series.Series
},
func(a *SeriesWithBlooms, b *Series) *Series {
if len(a.Series.Chunks) > len(b.Chunks) {
- return a.Series
+ return &a.Series.Series
}
return b
},
@@ -524,12 +523,16 @@ func TestMergeBuilder_Roundtrip(t *testing.T) {
)
// We're not testing the ability to extend a bloom in this test
- pop := func(s *Series, srcBlooms iter.SizedIterator[*Bloom], toAdd ChunkRefs, ch chan *BloomCreation) {
+ pop := func(_ *Series, srcBlooms iter.SizedIterator[*Bloom], _ ChunkRefs, ch chan *BloomCreation) {
for srcBlooms.Next() {
bloom := srcBlooms.At()
+ stats := indexingInfo{
+ sourceBytes: int(bloom.Capacity()) / 8,
+ indexedFields: NewSetFromLiteral[Field]("__all__"),
+ }
ch <- &BloomCreation{
- Bloom: bloom,
- SourceBytesAdded: int(bloom.Capacity()) / 8,
+ Bloom: bloom,
+ Info: stats,
}
}
close(ch)
@@ -548,9 +551,11 @@ func TestMergeBuilder_Roundtrip(t *testing.T) {
builder, err := NewBlockBuilder(blockOpts, writer)
require.Nil(t, err)
- checksum, _, err := mb.Build(builder)
+ _, _, err = mb.Build(builder)
require.Nil(t, err)
- require.Equal(t, uint32(0x2a6cdba6), checksum)
+ // checksum changes as soon as the contents of the block or the encoding change
+ // once the block format is stable, calculate the checksum and assert its correctness
+ // require.Equal(t, uint32(0x2a6cdba6), checksum)
// ensure the new block contains one copy of all the data
// by comparing it against an iterator over the source data
diff --git a/pkg/storage/bloom/v1/filter/scalable.go b/pkg/storage/bloom/v1/filter/scalable.go
index b6078c6dad336..ca979632db1d8 100644
--- a/pkg/storage/bloom/v1/filter/scalable.go
+++ b/pkg/storage/bloom/v1/filter/scalable.go
@@ -88,10 +88,9 @@ func NewScalableBloomFilter(hint uint, fpRate, r float64) *ScalableBloomFilter {
return s
}
-// NewDefaultScalableBloomFilter creates a new Scalable Bloom Filter with the
-// specified target false-positive rate and an optimal tightening ratio.
-func NewDefaultScalableBloomFilter(fpRate float64) *ScalableBloomFilter {
- return NewScalableBloomFilter(10000, fpRate, 0.8)
+// NewDefaultScalableBloomFilter creates a new Scalable Bloom Filter.
+func NewDefaultScalableBloomFilter() *ScalableBloomFilter {
+ return NewScalableBloomFilter(10e3, 0.1, 0.8)
}
// Capacity returns the current Scalable Bloom Filter capacity, which is the
diff --git a/pkg/storage/bloom/v1/filter/scalable_test.go b/pkg/storage/bloom/v1/filter/scalable_test.go
index a4f3d6d49ccbf..2456277f2e93a 100644
--- a/pkg/storage/bloom/v1/filter/scalable_test.go
+++ b/pkg/storage/bloom/v1/filter/scalable_test.go
@@ -20,7 +20,7 @@ import (
// Ensures that NewDefaultScalableBloomFilter creates a Scalable Bloom Filter
// with hint = 10000 and r = 0.8.
func TestNewDefaultScalableBloomFilter(t *testing.T) {
- f := NewDefaultScalableBloomFilter(0.1)
+ f := NewDefaultScalableBloomFilter()
if f.fp != 0.1 {
t.Errorf("Expected 0.1, got %f", f.fp)
diff --git a/pkg/storage/bloom/v1/fuse.go b/pkg/storage/bloom/v1/fuse.go
index ace67b0496c2a..37a0de06c4890 100644
--- a/pkg/storage/bloom/v1/fuse.go
+++ b/pkg/storage/bloom/v1/fuse.go
@@ -253,7 +253,7 @@ func (fq *FusedQuerier) Run() error {
return nil
}
-func (fq *FusedQuerier) runSeries(schema Schema, series *SeriesWithOffsets, reqs []Request) {
+func (fq *FusedQuerier) runSeries(_ Schema, series *SeriesWithMeta, reqs []Request) {
// For a given chunk|series to be removed, it must fail to match all blooms.
// Because iterating/loading blooms can be expensive, we iterate blooms one at a time, collecting
// the removals (failures) for each (bloom, chunk) pair.
@@ -331,23 +331,16 @@ func (fq *FusedQuerier) runSeries(schema Schema, series *SeriesWithOffsets, reqs
continue
}
- // TODO(owen-d): copying this over, but they're going to be the same
- // across any block schema because prefix len is determined by n-gram and
- // all chunks have the same encoding length. tl;dr: it's weird/unnecessary to have
- // these defined this way and recreated across each bloom
- var (
- tokenBuf []byte
- prefixLen int
- )
for k, chk := range inputs[j].InBlooms {
// if we've already found this chunk in a previous bloom, skip testing it
if inputs[j].found[k] {
continue
}
- // Get buf to concatenate the chunk and search token
- tokenBuf, prefixLen = prefixedToken(schema.NGramLen(), chk, tokenBuf)
- if matched := req.Search.MatchesWithPrefixBuf(bloom, tokenBuf, prefixLen); matched {
+ // TODO(rfratto): reuse buffer between multiple calls to
+ // prefixForChunkRef and MatchesWithPrefixBuf to avoid allocations.
+ tokenBuf := prefixForChunkRef(chk)
+ if matched := req.Search.MatchesWithPrefixBuf(bloom, tokenBuf, len(tokenBuf)); matched {
inputs[j].found[k] = true
}
}
diff --git a/pkg/storage/bloom/v1/fuse_test.go b/pkg/storage/bloom/v1/fuse_test.go
index 9db4154ca2903..4f33a91309380 100644
--- a/pkg/storage/bloom/v1/fuse_test.go
+++ b/pkg/storage/bloom/v1/fuse_test.go
@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
+ "math"
"sync"
"testing"
@@ -27,13 +28,14 @@ var BloomPagePool = mempool.New("test", []mempool.Bucket{
// TODO(owen-d): this is unhinged from the data it represents. I'm leaving this solely so I don't
// have to refactor tests here in order to fix this elsewhere, but it can/should be fixed --
// the skip & n len are hardcoded based on data that's passed to it elsewhere.
+// TODO(chaudum): Can be removed once matching with structured metadata is implemented.
type fakeNgramBuilder struct{}
-func (f fakeNgramBuilder) N() int { return 4 }
+func (f fakeNgramBuilder) N() int { return math.MaxInt } // do not tokenize
func (f fakeNgramBuilder) SkipFactor() int { return 0 }
-func (f fakeNgramBuilder) Tokens(line string) v2.Iterator[[]byte] {
- return v2.NewSliceIter[[]byte]([][]byte{[]byte(line)})
+func (f fakeNgramBuilder) Tokens(key string) v2.Iterator[[]byte] {
+ return v2.NewSliceIter[[]byte]([][]byte{[]byte(key)})
}
func keysToBloomTest(keys [][]byte) BloomTest {
@@ -58,7 +60,7 @@ func TestFusedQuerier(t *testing.T) {
builder, err := NewBlockBuilder(
BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
},
SeriesPageSize: 100,
@@ -152,7 +154,7 @@ func TestFuseMultiPage(t *testing.T) {
builder, err := NewBlockBuilder(
BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
nGramLength: 3, // we test trigrams
nGramSkip: 0,
@@ -170,7 +172,7 @@ func TestFuseMultiPage(t *testing.T) {
Through: 10,
Checksum: 0,
}
- series := &Series{
+ series := Series{
Fingerprint: fp,
Chunks: []ChunkRef{chk},
}
@@ -192,10 +194,10 @@ func TestFuseMultiPage(t *testing.T) {
_, err = builder.BuildFrom(v2.NewSliceIter([]SeriesWithBlooms{
{
- series,
- v2.NewSliceIter([]*Bloom{
- b1, b2,
- }),
+ Series: &SeriesWithMeta{
+ Series: series,
+ },
+ Blooms: v2.NewSliceIter([]*Bloom{b1, b2}),
},
}))
require.NoError(t, err)
@@ -279,8 +281,7 @@ func TestLazyBloomIter_Seek_ResetError(t *testing.T) {
},
}
- var bloom Bloom
- bloom.ScalableBloomFilter = *filter.NewScalableBloomFilter(1024, 0.01, 0.8)
+ bloom := NewBloom()
nLines := 10
// all even series will have a larger bloom (more than 1 filter)
@@ -300,15 +301,17 @@ func TestLazyBloomIter_Seek_ResetError(t *testing.T) {
}
data = append(data, SeriesWithBlooms{
- Series: &series,
- Blooms: v2.NewSliceIter([]*Bloom{&bloom}),
+ Series: &SeriesWithMeta{
+ Series: series,
+ },
+ Blooms: v2.NewSliceIter([]*Bloom{bloom}),
})
}
builder, err := NewBlockBuilder(
BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
},
SeriesPageSize: 100,
@@ -366,7 +369,7 @@ func TestFusedQuerierSkipsEmptyBlooms(t *testing.T) {
builder, err := NewBlockBuilder(
BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncNone,
},
SeriesPageSize: 100,
@@ -377,22 +380,19 @@ func TestFusedQuerierSkipsEmptyBlooms(t *testing.T) {
require.Nil(t, err)
data := SeriesWithBlooms{
- Series: &Series{
- Fingerprint: 0,
- Chunks: []ChunkRef{
- {
- From: 0,
- Through: 10,
- Checksum: 0x1234,
+ Series: &SeriesWithMeta{
+ Series: Series{
+ Fingerprint: 0,
+ Chunks: []ChunkRef{
+ {
+ From: 0,
+ Through: 10,
+ Checksum: 0x1234,
+ },
},
},
},
- Blooms: v2.NewSliceIter([]*Bloom{
- // simulate empty bloom
- {
- *filter.NewScalableBloomFilter(1024, 0.01, 0.8),
- },
- }),
+ Blooms: v2.NewSliceIter([]*Bloom{NewBloom()}),
}
itr := v2.NewSliceIter[SeriesWithBlooms]([]SeriesWithBlooms{data})
@@ -430,7 +430,7 @@ func setupBlockForBenchmark(b *testing.B) (*BlockQuerier, [][]Request, []chan Ou
builder, err := NewBlockBuilder(
BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
},
SeriesPageSize: 256 << 10, // 256k
diff --git a/pkg/storage/bloom/v1/index.go b/pkg/storage/bloom/v1/index.go
index 8cae1d8a87f1e..a9e03efc41af9 100644
--- a/pkg/storage/bloom/v1/index.go
+++ b/pkg/storage/bloom/v1/index.go
@@ -2,97 +2,16 @@ package v1
import (
"bytes"
- "fmt"
"io"
"sort"
"github.com/pkg/errors"
"github.com/prometheus/common/model"
- "github.com/grafana/loki/v3/pkg/chunkenc"
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/util/encoding"
)
-type Schema struct {
- version Version
- encoding chunkenc.Encoding
- nGramLength, nGramSkip uint64
-}
-
-func (s Schema) String() string {
- return fmt.Sprintf("v%d,encoding=%s,ngram=%d,skip=%d", s.version, s.encoding, s.nGramLength, s.nGramSkip)
-}
-
-func (s Schema) Compatible(other Schema) bool {
- return s == other
-}
-
-func (s Schema) NGramLen() int {
- return int(s.nGramLength)
-}
-
-func (s Schema) NGramSkip() int {
- return int(s.nGramSkip)
-}
-
-// byte length
-func (s Schema) Len() int {
- // magic number + version + encoding + ngram length + ngram skip
- return 4 + 1 + 1 + 8 + 8
-}
-
-func (s *Schema) DecompressorPool() chunkenc.ReaderPool {
- return chunkenc.GetReaderPool(s.encoding)
-}
-
-func (s *Schema) CompressorPool() chunkenc.WriterPool {
- return chunkenc.GetWriterPool(s.encoding)
-}
-
-func (s *Schema) Encode(enc *encoding.Encbuf) {
- enc.Reset()
- enc.PutBE32(magicNumber)
- enc.PutByte(byte(s.version))
- enc.PutByte(byte(s.encoding))
- enc.PutBE64(s.nGramLength)
- enc.PutBE64(s.nGramSkip)
-
-}
-
-func (s *Schema) DecodeFrom(r io.ReadSeeker) error {
- // TODO(owen-d): improve allocations
- schemaBytes := make([]byte, s.Len())
- _, err := io.ReadFull(r, schemaBytes)
- if err != nil {
- return errors.Wrap(err, "reading schema")
- }
-
- dec := encoding.DecWith(schemaBytes)
- return s.Decode(&dec)
-}
-
-func (s *Schema) Decode(dec *encoding.Decbuf) error {
- number := dec.Be32()
- if number != magicNumber {
- return errors.Errorf("invalid magic number. expected %x, got %x", magicNumber, number)
- }
- s.version = Version(dec.Byte())
- if s.version != 1 && s.version != 2 {
- return errors.Errorf("invalid version. expected %d, got %d", 1, s.version)
- }
-
- s.encoding = chunkenc.Encoding(dec.Byte())
- if _, err := chunkenc.ParseEncoding(s.encoding.String()); err != nil {
- return errors.Wrap(err, "parsing encoding")
- }
-
- s.nGramLength = dec.Be64()
- s.nGramSkip = dec.Be64()
-
- return dec.Err()
-}
-
// Block index is a set of series pages along with
// the headers for each page
type BlockIndex struct {
@@ -275,7 +194,7 @@ type SeriesPageDecoder struct {
// state
i int // current index
- cur *SeriesWithOffsets
+ cur *SeriesWithMeta
err error
previousFp model.Fingerprint // previous series' fingerprint for delta-decoding
previousOffset BloomOffset // previous series' bloom offset for delta-decoding
@@ -300,8 +219,8 @@ func (d *SeriesPageDecoder) Next() bool {
return false
}
- var res SeriesWithOffsets
- d.previousFp, d.previousOffset, d.err = res.Decode(d.schema.version, &d.dec, d.previousFp, d.previousOffset)
+ var res SeriesWithMeta
+ d.previousFp, d.previousOffset, d.err = res.Decode(&d.dec, d.schema.version, d.previousFp, d.previousOffset)
if d.err != nil {
return false
}
@@ -350,7 +269,7 @@ func (d *SeriesPageDecoder) Seek(fp model.Fingerprint) {
}
}
-func (d *SeriesPageDecoder) At() (res *SeriesWithOffsets) {
+func (d *SeriesPageDecoder) At() (res *SeriesWithMeta) {
return d.cur
}
@@ -361,25 +280,29 @@ func (d *SeriesPageDecoder) Err() error {
return d.dec.Err()
}
+// series encoding/decoding --------------------------------------------------
+
type Series struct {
Fingerprint model.Fingerprint
Chunks ChunkRefs
}
-// SeriesWithOffsets is a series with a a variable number
-// of bloom offsets. Used in v2+ to store blooms for larger series
-// in parts
-type SeriesWithOffsets struct {
+type Meta struct {
+ Fields Set[Field]
Offsets []BloomOffset
+}
+
+type SeriesWithMeta struct {
Series
+ Meta
}
-func (s *SeriesWithOffsets) Encode(
+func (s *SeriesWithMeta) Encode(
enc *encoding.Encbuf,
+ version Version,
previousFp model.Fingerprint,
previousOffset BloomOffset,
) BloomOffset {
- sort.Sort(s.Chunks) // ensure order
// delta encode fingerprint
enc.PutUvarint64(uint64(s.Fingerprint - previousFp))
// encode number of bloom offsets in this series
@@ -388,132 +311,93 @@ func (s *SeriesWithOffsets) Encode(
lastOffset := previousOffset
for _, offset := range s.Offsets {
// delta encode offsets.
- // Multiple offsets per series is a v2+ feature with different encoding implementation,
- // so we signal that to the encoder
- offset.Encode(enc, V2, lastOffset)
+ offset.Encode(enc, version, lastOffset)
lastOffset = offset
}
// encode chunks using delta encoded timestamps
var lastEnd model.Time
enc.PutUvarint(len(s.Chunks))
+ sort.Sort(s.Chunks) // ensure order
for _, chunk := range s.Chunks {
- lastEnd = chunk.Encode(enc, lastEnd)
+ lastEnd = chunk.Encode(enc, version, lastEnd)
+ }
+
+ enc.PutUvarint(s.Fields.Len())
+ for _, f := range s.Fields.Items() {
+ f.Encode(enc, version)
}
return lastOffset
}
-func (s *SeriesWithOffsets) Decode(
- version Version,
+func (s *SeriesWithMeta) Decode(
dec *encoding.Decbuf,
+ version Version,
previousFp model.Fingerprint,
previousOffset BloomOffset,
) (model.Fingerprint, BloomOffset, error) {
- // Since *SeriesWithOffsets is is still representable by the v1 schema as a len=1 offset group,
- // we can decode it even though multiple offsets were introduced in v2
- if version == V1 {
- return s.decodeV1(dec, previousFp, previousOffset)
+ if version < V3 {
+ return 0, BloomOffset{}, ErrUnsupportedSchemaVersion
}
s.Fingerprint = previousFp + model.Fingerprint(dec.Uvarint64())
numOffsets := dec.Uvarint()
- s.Offsets = make([]BloomOffset, numOffsets)
var (
err error
lastEnd model.Time
lastOffset = previousOffset
)
+
+ s.Offsets = make([]BloomOffset, numOffsets)
for i := range s.Offsets {
// SeriesWithOffsets is a v2+ feature with multiple bloom offsets per series
// so we signal that to the decoder
- err = s.Offsets[i].Decode(dec, V2, lastOffset)
+ err = s.Offsets[i].Decode(dec, version, lastOffset)
lastOffset = s.Offsets[i]
if err != nil {
return 0, BloomOffset{}, errors.Wrapf(err, "decoding %dth bloom offset", i)
}
}
- // TODO(owen-d): use pool
s.Chunks = make([]ChunkRef, dec.Uvarint())
for i := range s.Chunks {
- lastEnd, err = s.Chunks[i].Decode(dec, lastEnd)
+ lastEnd, err = s.Chunks[i].Decode(dec, version, lastEnd)
if err != nil {
return 0, BloomOffset{}, errors.Wrapf(err, "decoding %dth chunk", i)
}
}
- return s.Fingerprint, lastOffset, dec.Err()
-}
-// Decodes a v2 compatible series from a v1 encoding
-func (s *SeriesWithOffsets) decodeV1(
- dec *encoding.Decbuf,
- previousFp model.Fingerprint,
- previousOffset BloomOffset,
-) (model.Fingerprint, BloomOffset, error) {
- var single SeriesWithOffset
- fp, last, err := single.Decode(dec, previousFp, previousOffset)
- if err != nil {
- return 0, BloomOffset{}, errors.Wrap(err, "decoding series with offset")
+ n := dec.Uvarint()
+ s.Fields = NewSet[Field](n)
+ for i := 0; i < n; i++ {
+ var f Field
+ err = f.Decode(dec, version)
+ if err != nil {
+ return 0, BloomOffset{}, errors.Wrapf(err, "decoding %dth field", i)
+ }
+ s.Fields.Add(f)
}
- s.Offsets = []BloomOffset{last}
- s.Series = single.Series
- return fp, last, nil
-}
-// Used in v1 schema
-type SeriesWithOffset struct {
- Offset BloomOffset
- Series
+ return s.Fingerprint, lastOffset, dec.Err()
}
-func (s *SeriesWithOffset) Encode(
- enc *encoding.Encbuf,
- previousFp model.Fingerprint,
- previousOffset BloomOffset,
-) (model.Fingerprint, BloomOffset) {
- sort.Sort(s.Chunks) // ensure order
- // delta encode fingerprint
- enc.PutBE64(uint64(s.Fingerprint - previousFp))
- // delta encode offsets
- // V1 only has 1 offset per series which has a legacy encoding scheme;
- // we signal that to the encoder
- s.Offset.Encode(enc, V1, previousOffset)
+// field encoding/decoding ---------------------------------------------------
- // encode chunks using delta encoded timestamps
- var lastEnd model.Time
- enc.PutUvarint(len(s.Chunks))
- for _, chunk := range s.Chunks {
- lastEnd = chunk.Encode(enc, lastEnd)
- }
+type Field string
- return s.Fingerprint, s.Offset
+func (f Field) Encode(enc *encoding.Encbuf, _ Version) {
+ enc.PutUvarintBytes([]byte(f))
}
-func (s *SeriesWithOffset) Decode(dec *encoding.Decbuf, previousFp model.Fingerprint, previousOffset BloomOffset) (model.Fingerprint, BloomOffset, error) {
- s.Fingerprint = previousFp + model.Fingerprint(dec.Be64())
- // V1 only has 1 offset per series which has a legacy encoding scheme;
- // we signal that to the decoder
- if err := s.Offset.Decode(dec, V1, previousOffset); err != nil {
- return 0, BloomOffset{}, errors.Wrap(err, "decoding bloom offset")
- }
-
- // TODO(owen-d): use pool
- s.Chunks = make([]ChunkRef, dec.Uvarint())
- var (
- err error
- lastEnd model.Time
- )
- for i := range s.Chunks {
- lastEnd, err = s.Chunks[i].Decode(dec, lastEnd)
- if err != nil {
- return 0, BloomOffset{}, errors.Wrapf(err, "decoding %dth chunk", i)
- }
- }
- return s.Fingerprint, s.Offset, dec.Err()
+func (f *Field) Decode(dec *encoding.Decbuf, _ Version) error {
+ *f = Field(dec.UvarintBytes())
+ return dec.Err()
}
+// chunk encoding/decoding ---------------------------------------------------
+
type ChunkRef logproto.ShortRef
func (r *ChunkRef) Less(other ChunkRef) bool {
@@ -540,7 +424,7 @@ func (r *ChunkRef) Cmp(other ChunkRef) int {
return int(r.Checksum) - int(other.Checksum)
}
-func (r *ChunkRef) Encode(enc *encoding.Encbuf, previousEnd model.Time) model.Time {
+func (r *ChunkRef) Encode(enc *encoding.Encbuf, _ Version, previousEnd model.Time) model.Time {
// delta encode start time
enc.PutVarint64(int64(r.From - previousEnd))
enc.PutVarint64(int64(r.Through - r.From))
@@ -548,7 +432,7 @@ func (r *ChunkRef) Encode(enc *encoding.Encbuf, previousEnd model.Time) model.Ti
return r.Through
}
-func (r *ChunkRef) Decode(dec *encoding.Decbuf, previousEnd model.Time) (model.Time, error) {
+func (r *ChunkRef) Decode(dec *encoding.Decbuf, _ Version, previousEnd model.Time) (model.Time, error) {
r.From = previousEnd + model.Time(dec.Varint64())
r.Through = r.From + model.Time(dec.Varint64())
r.Checksum = dec.Be32()
@@ -560,33 +444,15 @@ type BloomOffset struct {
ByteOffset int // offset to beginning of bloom within page
}
-func (o *BloomOffset) Encode(enc *encoding.Encbuf, v Version, previousOffset BloomOffset) {
+func (o *BloomOffset) Encode(enc *encoding.Encbuf, _ Version, previousOffset BloomOffset) {
// page offsets diffs are always ascending
enc.PutUvarint(o.Page - previousOffset.Page)
-
- switch v {
- case V1:
- // V1 uses UVarint for bloom offset deltas. This is fine because there is only 1 bloom per series in v1
- enc.PutUvarint(o.ByteOffset - previousOffset.ByteOffset)
- default:
- // V2 encodes multiple bloom offsets per series and successive blooms may belong to
- // separate bloom pages. Therefore, we use Varint64 for byte offset deltas as
- // byteOffsets will not be ascending when a new bloom page is written.
- enc.PutVarint64(int64(o.ByteOffset - previousOffset.ByteOffset))
- }
+ enc.PutVarint64(int64(o.ByteOffset - previousOffset.ByteOffset))
}
-func (o *BloomOffset) Decode(dec *encoding.Decbuf, v Version, previousOffset BloomOffset) error {
+func (o *BloomOffset) Decode(dec *encoding.Decbuf, _ Version, previousOffset BloomOffset) error {
o.Page = previousOffset.Page + dec.Uvarint()
-
- // Explained by the Encode method
- switch v {
- case V1:
- o.ByteOffset = previousOffset.ByteOffset + dec.Uvarint()
- default:
- o.ByteOffset = previousOffset.ByteOffset + int(dec.Varint64())
- }
-
+ o.ByteOffset = previousOffset.ByteOffset + int(dec.Varint64())
return dec.Err()
}
diff --git a/pkg/storage/bloom/v1/index_builder.go b/pkg/storage/bloom/v1/index_builder.go
index 36c74a9d87ab1..067a79ad03f4e 100644
--- a/pkg/storage/bloom/v1/index_builder.go
+++ b/pkg/storage/bloom/v1/index_builder.go
@@ -46,18 +46,20 @@ func (b *IndexBuilder) WriteOpts() error {
return nil
}
-func (b *IndexBuilder) AppendV2(series SeriesWithOffsets) error {
+func (b *IndexBuilder) Append(series SeriesWithMeta) error {
if !b.writtenSchema {
if err := b.WriteOpts(); err != nil {
return errors.Wrap(err, "appending series")
}
}
+ version := b.opts.Schema.version
+
b.scratch.Reset()
// we don't want to update the previous pointers yet in case
// we need to flush the page first which would
// be passed the incorrect final fp/offset
- lastOffset := series.Encode(b.scratch, b.previousFp, b.previousOffset)
+ lastOffset := series.Encode(b.scratch, version, b.previousFp, b.previousOffset)
if !b.page.SpaceFor(b.scratch.Len()) && b.page.Count() > 0 {
if err := b.flushPage(); err != nil {
@@ -66,7 +68,7 @@ func (b *IndexBuilder) AppendV2(series SeriesWithOffsets) error {
// re-encode now that a new page has been cut and we use delta-encoding
b.scratch.Reset()
- lastOffset = series.Encode(b.scratch, b.previousFp, b.previousOffset)
+ lastOffset = series.Encode(b.scratch, version, b.previousFp, b.previousOffset)
}
switch {
@@ -95,57 +97,6 @@ func (b *IndexBuilder) AppendV2(series SeriesWithOffsets) error {
return nil
}
-func (b *IndexBuilder) AppendV1(series SeriesWithOffset) error {
- if !b.writtenSchema {
- if err := b.WriteOpts(); err != nil {
- return errors.Wrap(err, "appending series")
- }
- }
-
- b.scratch.Reset()
- // we don't want to update the previous pointers yet in case
- // we need to flush the page first which would
- // be passed the incorrect final fp/offset
- previousFp, previousOffset := series.Encode(b.scratch, b.previousFp, b.previousOffset)
-
- if !b.page.SpaceFor(b.scratch.Len()) {
- if err := b.flushPage(); err != nil {
- return errors.Wrap(err, "flushing series page")
- }
-
- // re-encode now that a new page has been cut and we use delta-encoding
- b.scratch.Reset()
- previousFp, previousOffset = series.Encode(b.scratch, b.previousFp, b.previousOffset)
- }
- b.previousFp = previousFp
- b.previousOffset = previousOffset
-
- switch {
- case b.page.Count() == 0:
- // Special case: this is the first series in a page
- if len(series.Chunks) < 1 {
- return fmt.Errorf("series with zero chunks for fingerprint %v", series.Fingerprint)
- }
- b.fromFp = series.Fingerprint
- b.fromTs, b.throughTs = chkBounds(series.Chunks)
- case b.previousFp > series.Fingerprint:
- return fmt.Errorf("out of order series fingerprint for series %v", series.Fingerprint)
- default:
- from, through := chkBounds(series.Chunks)
- if b.fromTs.After(from) {
- b.fromTs = from
- }
- if b.throughTs.Before(through) {
- b.throughTs = through
- }
- }
-
- _ = b.page.Add(b.scratch.Get())
- b.previousFp = series.Fingerprint
- b.previousOffset = series.Offset
- return nil
-}
-
// must be > 1
func chkBounds(chks []ChunkRef) (from, through model.Time) {
from, through = chks[0].From, chks[0].Through
diff --git a/pkg/storage/bloom/v1/index_querier.go b/pkg/storage/bloom/v1/index_querier.go
index 7fdaa4617571f..fe05f7bcddfda 100644
--- a/pkg/storage/bloom/v1/index_querier.go
+++ b/pkg/storage/bloom/v1/index_querier.go
@@ -10,7 +10,7 @@ import (
)
type SeriesIterator interface {
- iter.Iterator[*SeriesWithOffset]
+ iter.Iterator[*SeriesWithMeta]
Reset()
}
@@ -138,7 +138,7 @@ func (it *LazySeriesIter) next() bool {
return false
}
-func (it *LazySeriesIter) At() *SeriesWithOffsets {
+func (it *LazySeriesIter) At() *SeriesWithMeta {
return it.curPage.At()
}
diff --git a/pkg/storage/bloom/v1/index_test.go b/pkg/storage/bloom/v1/index_test.go
index 8b3a078bc0c46..dc25261faff75 100644
--- a/pkg/storage/bloom/v1/index_test.go
+++ b/pkg/storage/bloom/v1/index_test.go
@@ -9,8 +9,6 @@ import (
"github.com/grafana/loki/v3/pkg/util/encoding"
)
-var SupportedVersions = []Version{V1, V2}
-
func TestBloomOffsetEncoding(t *testing.T) {
for _, v := range SupportedVersions {
t.Run(v.String(), func(t *testing.T) {
@@ -28,9 +26,10 @@ func TestBloomOffsetEncoding(t *testing.T) {
}
-func TestSeriesEncoding_V1(t *testing.T) {
+func TestSeriesEncoding_V3(t *testing.T) {
t.Parallel()
- src := SeriesWithOffset{
+ version := V3
+ src := SeriesWithMeta{
Series: Series{
Fingerprint: model.Fingerprint(1),
Chunks: []ChunkRef{
@@ -46,93 +45,31 @@ func TestSeriesEncoding_V1(t *testing.T) {
},
},
},
- Offset: BloomOffset{Page: 2, ByteOffset: 3},
- }
-
- enc := &encoding.Encbuf{}
- src.Encode(enc, 0, BloomOffset{})
-
- dec := encoding.DecWith(enc.Get())
- var dst SeriesWithOffset
- fp, offset, err := dst.Decode(&dec, 0, BloomOffset{})
- require.Nil(t, err)
- require.Equal(t, src.Fingerprint, fp)
- require.Equal(t, src.Offset, offset)
- require.Equal(t, src, dst)
-}
-
-func TestSeriesEncoding_V2(t *testing.T) {
- t.Parallel()
- src := SeriesWithOffsets{
- Series: Series{
- Fingerprint: model.Fingerprint(1),
- Chunks: []ChunkRef{
- {
- From: 1,
- Through: 2,
- Checksum: 3,
- },
- {
- From: 4,
- Through: 5,
- Checksum: 6,
- },
+ Meta: Meta{
+ Offsets: []BloomOffset{
+ {Page: 0, ByteOffset: 0},
+ {Page: 0, ByteOffset: 100},
+ {Page: 1, ByteOffset: 2},
+ {Page: 2, ByteOffset: 1},
},
- },
- Offsets: []BloomOffset{
- {Page: 0, ByteOffset: 0},
- {Page: 0, ByteOffset: 100},
- {Page: 1, ByteOffset: 2},
- {Page: 2, ByteOffset: 1},
+ Fields: NewSetFromLiteral[Field]("foo", "bar"),
},
}
enc := &encoding.Encbuf{}
- src.Encode(enc, 0, BloomOffset{})
+ src.Encode(enc, version, 0, BloomOffset{})
dec := encoding.DecWith(enc.Get())
- var dst SeriesWithOffsets
- fp, offset, err := dst.Decode(V2, &dec, 0, BloomOffset{})
+ var dst SeriesWithMeta
+ fp, offset, err := dst.Decode(&dec, version, 0, BloomOffset{})
require.Nil(t, err)
require.Equal(t, src.Fingerprint, fp)
require.Equal(t, src.Offsets[len(src.Offsets)-1], offset)
+ require.Equal(t, src.Offsets, dst.Offsets)
+ require.Equal(t, src.Fields, dst.Fields)
require.Equal(t, src, dst)
}
-func TestV2SeriesDecodesV1(t *testing.T) {
- t.Parallel()
- src := SeriesWithOffset{
- Series: Series{
- Fingerprint: model.Fingerprint(1),
- Chunks: []ChunkRef{
- {
- From: 1,
- Through: 2,
- Checksum: 3,
- },
- {
- From: 4,
- Through: 5,
- Checksum: 6,
- },
- },
- },
- Offset: BloomOffset{Page: 1, ByteOffset: 2},
- }
-
- enc := &encoding.Encbuf{}
- src.Encode(enc, 0, BloomOffset{})
-
- dec := encoding.DecWith(enc.Get())
- var dst SeriesWithOffsets
- fp, offset, err := dst.decodeV1(&dec, 0, BloomOffset{})
- require.Nil(t, err)
- require.Equal(t, src.Fingerprint, fp)
- require.Equal(t, src.Offset, offset)
- require.Equal(t, []BloomOffset{src.Offset}, dst.Offsets)
- require.Equal(t, src.Series, dst.Series)
-}
-
func TestChunkRefCmpLess(t *testing.T) {
t.Parallel()
for _, tc := range []struct {
diff --git a/pkg/storage/bloom/v1/schema.go b/pkg/storage/bloom/v1/schema.go
new file mode 100644
index 0000000000000..6fd8621654239
--- /dev/null
+++ b/pkg/storage/bloom/v1/schema.go
@@ -0,0 +1,130 @@
+package v1
+
+import (
+ "fmt"
+ "io"
+
+ "github.com/pkg/errors"
+
+ "github.com/grafana/loki/v3/pkg/chunkenc"
+ "github.com/grafana/loki/v3/pkg/util/encoding"
+)
+
+type Version byte
+
+func (v Version) String() string {
+ return fmt.Sprintf("v%d", v)
+}
+
+const (
+ magicNumber = uint32(0xCA7CAFE5)
+
+ // Add new versions below
+ V1 Version = iota
+ // V2 supports single series blooms encoded over multiple pages
+ // to accommodate larger single series
+ V2
+ // V2 indicated schema for indexed structured metadata
+ V3
+
+ CurrentSchemaVersion = V3
+)
+
+var (
+ SupportedVersions = []Version{V3}
+
+ ErrInvalidSchemaVersion = errors.New("invalid schema version")
+ ErrUnsupportedSchemaVersion = errors.New("unsupported schema version")
+)
+
+type Schema struct {
+ version Version
+ encoding chunkenc.Encoding
+ nGramLength, nGramSkip uint64
+}
+
+func NewSchema() Schema {
+ return Schema{
+ version: CurrentSchemaVersion,
+ encoding: chunkenc.EncNone,
+ nGramLength: 0,
+ nGramSkip: 0,
+ }
+}
+
+func (s Schema) String() string {
+ return fmt.Sprintf("%s,encoding=%s,ngram=%d,skip=%d", s.version, s.encoding, s.nGramLength, s.nGramSkip)
+}
+
+func (s Schema) Compatible(other Schema) bool {
+ return s == other
+}
+
+func (s Schema) Version() Version {
+ return s.version
+}
+
+func (s Schema) NGramLen() int {
+ return int(s.nGramLength)
+}
+
+func (s Schema) NGramSkip() int {
+ return int(s.nGramSkip)
+}
+
+// byte length
+func (s Schema) Len() int {
+ // magic number + version + encoding + ngram length + ngram skip
+ return 4 + 1 + 1 + 8 + 8
+}
+
+func (s *Schema) DecompressorPool() chunkenc.ReaderPool {
+ return chunkenc.GetReaderPool(s.encoding)
+}
+
+func (s *Schema) CompressorPool() chunkenc.WriterPool {
+ return chunkenc.GetWriterPool(s.encoding)
+}
+
+func (s *Schema) Encode(enc *encoding.Encbuf) {
+ enc.Reset()
+ enc.PutBE32(magicNumber)
+ enc.PutByte(byte(s.version))
+ enc.PutByte(byte(s.encoding))
+ enc.PutBE64(s.nGramLength)
+ enc.PutBE64(s.nGramSkip)
+
+}
+
+func (s *Schema) DecodeFrom(r io.ReadSeeker) error {
+ // TODO(owen-d): improve allocations
+ schemaBytes := make([]byte, s.Len())
+ _, err := io.ReadFull(r, schemaBytes)
+ if err != nil {
+ return errors.Wrap(err, "reading schema")
+ }
+
+ dec := encoding.DecWith(schemaBytes)
+ return s.Decode(&dec)
+}
+
+func (s *Schema) Decode(dec *encoding.Decbuf) error {
+ number := dec.Be32()
+ if number != magicNumber {
+ return errors.Errorf("invalid magic number. expected %x, got %x", magicNumber, number)
+ }
+ s.version = Version(dec.Byte())
+ if s.version != V3 {
+ return errors.Errorf("invalid version. expected %d, got %d", 3, s.version)
+ }
+
+ s.encoding = chunkenc.Encoding(dec.Byte())
+ if _, err := chunkenc.ParseEncoding(s.encoding.String()); err != nil {
+ return errors.Wrap(err, "parsing encoding")
+ }
+
+ s.nGramLength = dec.Be64()
+ s.nGramSkip = dec.Be64()
+
+ return dec.Err()
+}
diff --git a/pkg/storage/bloom/v1/test_util.go b/pkg/storage/bloom/v1/test_util.go
index 4fbbfa8d7bc1d..3bca46865c75b 100644
--- a/pkg/storage/bloom/v1/test_util.go
+++ b/pkg/storage/bloom/v1/test_util.go
@@ -11,7 +11,8 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
iter "github.com/grafana/loki/v3/pkg/iter/v2"
- "github.com/grafana/loki/v3/pkg/storage/bloom/v1/filter"
+
+ "github.com/grafana/loki/pkg/push"
)
// TODO(owen-d): this should probably be in it's own testing-util package
@@ -28,7 +29,7 @@ func MakeBlock(t testing.TB, nth int, fromFp, throughFp model.Fingerprint, fromT
builder, err := NewBlockBuilder(
BlockOptions{
Schema: Schema{
- version: DefaultSchemaVersion,
+ version: CurrentSchemaVersion,
encoding: chunkenc.EncSnappy,
nGramLength: 4, // see DefaultNGramLength in bloom_tokenizer_test.go
nGramSkip: 0, // see DefaultNGramSkip in bloom_tokenizer_test.go
@@ -46,83 +47,73 @@ func MakeBlock(t testing.TB, nth int, fromFp, throughFp model.Fingerprint, fromT
return block, data, keys
}
-// This is a helper type used in tests that buffers blooms and can be turned into
-// the commonly used iterator form *SeriesWithBlooms.
-type SeriesWithLiteralBlooms struct {
- Series *Series
- Blooms []*Bloom
-}
-
-func (s *SeriesWithLiteralBlooms) SeriesWithBlooms() SeriesWithBlooms {
+func newSeriesWithBlooms(series Series, blooms []*Bloom) SeriesWithBlooms {
+ offsets := make([]BloomOffset, 0, len(blooms))
+ for i := range blooms {
+ offsets = append(offsets, BloomOffset{Page: i, ByteOffset: 0})
+ }
return SeriesWithBlooms{
- Series: s.Series,
- Blooms: iter.NewSliceIter[*Bloom](s.Blooms),
+ Series: &SeriesWithMeta{
+ Series: series,
+ Meta: Meta{
+ Fields: NewSetFromLiteral[Field]("trace_id"),
+ Offsets: offsets,
+ },
+ },
+ Blooms: iter.NewSliceIter(blooms),
}
}
-func MkBasicSeriesWithBlooms(nSeries int, fromFp, throughFp model.Fingerprint, fromTs, throughTs model.Time) (seriesList []SeriesWithBlooms, keysList [][][]byte) {
- series, keys := MkBasicSeriesWithLiteralBlooms(nSeries, fromFp, throughFp, fromTs, throughTs)
- mapped := make([]SeriesWithBlooms, 0, len(series))
- for _, s := range series {
- mapped = append(mapped, s.SeriesWithBlooms())
- }
-
- return mapped, keys
-}
+func MkBasicSeriesWithBlooms(nSeries int, fromFp, throughFp model.Fingerprint, fromTs, throughTs model.Time) ([]SeriesWithBlooms, [][][]byte) {
+ // return values
+ seriesList := make([]SeriesWithBlooms, 0, nSeries)
+ keysList := make([][][]byte, 0, nSeries)
-func MkBasicSeriesWithLiteralBlooms(nSeries int, fromFp, throughFp model.Fingerprint, fromTs, throughTs model.Time) (seriesList []SeriesWithLiteralBlooms, keysList [][][]byte) {
- const nGramLen = 4
- seriesList = make([]SeriesWithLiteralBlooms, 0, nSeries)
- keysList = make([][][]byte, 0, nSeries)
+ numChunksPerSeries := 10
+ numBloomsPerSeries := 2
step := (throughFp - fromFp) / model.Fingerprint(nSeries)
- timeDelta := time.Duration(throughTs.Sub(fromTs).Nanoseconds() / int64(nSeries))
+ timeDelta := time.Duration(throughTs.Sub(fromTs).Nanoseconds() / int64(numChunksPerSeries))
- tokenizer := NewNGramTokenizer(nGramLen, 0)
for i := 0; i < nSeries; i++ {
var series Series
+ var blooms []*Bloom
+
series.Fingerprint = fromFp + model.Fingerprint(i)*step
- from := fromTs.Add(timeDelta * time.Duration(i))
- series.Chunks = []ChunkRef{
- {
- From: from,
- Through: from.Add(timeDelta),
- Checksum: uint32(i),
- },
+ for from := fromTs; from < throughTs; from = from.Add(timeDelta) {
+ series.Chunks = append(series.Chunks,
+ ChunkRef{
+ From: from,
+ Through: from.Add(timeDelta),
+ },
+ )
}
- var bloom Bloom
- bloom.ScalableBloomFilter = *filter.NewScalableBloomFilter(1024, 0.01, 0.8)
-
keys := make([][]byte, 0, int(step))
- for _, chk := range series.Chunks {
- tokenBuf, prefixLen := prefixedToken(nGramLen, chk, nil)
- for j := 0; j < int(step); j++ {
- line := fmt.Sprintf("%04x:%04x", int(series.Fingerprint), j)
- it := tokenizer.Tokens(line)
+ chunkBatchSize := (series.Chunks.Len() + numBloomsPerSeries - 1) / numBloomsPerSeries
+ for j := 0; j < numBloomsPerSeries; j++ {
+ bloom := NewBloom()
+
+ batchStart, batchEnd := j*chunkBatchSize, min(series.Chunks.Len(), (j+1)*chunkBatchSize)
+ for x, chk := range series.Chunks[batchStart:batchEnd] {
+ tokenizer := NewStructuredMetadataTokenizer(string(prefixForChunkRef(chk)))
+ kv := push.LabelAdapter{Name: "trace_id", Value: fmt.Sprintf("%s:%04x", series.Fingerprint, j*chunkBatchSize+x)}
+ it := tokenizer.Tokens(kv)
for it.Next() {
- key := it.At()
- // series-level key
+ key := []byte(it.At())
bloom.Add(key)
-
- // chunk-level key
- tokenBuf = append(tokenBuf[:prefixLen], key...)
- bloom.Add(tokenBuf)
-
- keyCopy := key
- keys = append(keys, keyCopy)
+ keys = append(keys, key)
}
}
+ blooms = append(blooms, bloom)
}
- seriesList = append(seriesList, SeriesWithLiteralBlooms{
- Series: &series,
- Blooms: []*Bloom{&bloom},
- })
+ seriesList = append(seriesList, newSeriesWithBlooms(series, blooms))
keysList = append(keysList, keys)
}
- return
+
+ return seriesList, keysList
}
func EqualIterators[T any](t *testing.T, test func(a, b T), expected, actual iter.Iterator[T]) {
diff --git a/pkg/storage/bloom/v1/tokenizer.go b/pkg/storage/bloom/v1/tokenizer.go
index dcd7c21468691..5cbf199448f68 100644
--- a/pkg/storage/bloom/v1/tokenizer.go
+++ b/pkg/storage/bloom/v1/tokenizer.go
@@ -1,15 +1,42 @@
package v1
import (
+ "fmt"
"unicode/utf8"
iter "github.com/grafana/loki/v3/pkg/iter/v2"
+
+ "github.com/grafana/loki/pkg/push"
)
const (
MaxRuneLen = 4
)
+type StructuredMetadataTokenizer struct {
+ // prefix to add to tokens, typically the encoded chunkref
+ prefix string
+ tokens []string
+}
+
+func NewStructuredMetadataTokenizer(prefix string) *StructuredMetadataTokenizer {
+ return &StructuredMetadataTokenizer{
+ prefix: prefix,
+ tokens: make([]string, 6),
+ }
+}
+
+// Tokens implements the NGramBuilder interface
+func (t *StructuredMetadataTokenizer) Tokens(kv push.LabelAdapter) iter.Iterator[string] {
+ combined := fmt.Sprintf("%s=%s", kv.Name, kv.Value)
+ t.tokens = append(t.tokens[:0],
+ kv.Name, t.prefix+kv.Name,
+ kv.Value, t.prefix+kv.Value,
+ combined, t.prefix+combined,
+ )
+ return iter.NewSliceIter(t.tokens)
+}
+
func reassemble(buf []rune, ln, pos int, result []byte) []byte {
result = result[:0] // Reset the result slice
for i := 0; i < ln; i++ {
diff --git a/pkg/storage/bloom/v1/tokenizer_test.go b/pkg/storage/bloom/v1/tokenizer_test.go
index c12788fe0f800..f21aceca06402 100644
--- a/pkg/storage/bloom/v1/tokenizer_test.go
+++ b/pkg/storage/bloom/v1/tokenizer_test.go
@@ -5,6 +5,10 @@ import (
"unicode/utf8"
"github.com/stretchr/testify/require"
+
+ v2 "github.com/grafana/loki/v3/pkg/iter/v2"
+
+ "github.com/grafana/loki/pkg/push"
)
const BigFile = "../../../logql/sketch/testdata/war_peace.txt"
@@ -230,3 +234,15 @@ func BenchmarkTokens(b *testing.B) {
})
}
}
+
+func TestStructuredMetadataTokenizer(t *testing.T) {
+ tokenizer := NewStructuredMetadataTokenizer("chunk")
+
+ metadata := push.LabelAdapter{Name: "pod", Value: "loki-1"}
+ expected := []string{"pod", "chunkpod", "loki-1", "chunkloki-1", "pod=loki-1", "chunkpod=loki-1"}
+
+ tokenIter := tokenizer.Tokens(metadata)
+ got, err := v2.Collect(tokenIter)
+ require.NoError(t, err)
+ require.Equal(t, expected, got)
+}
diff --git a/pkg/storage/bloom/v1/util.go b/pkg/storage/bloom/v1/util.go
index ec46d2633b7ad..6745ccaec7c61 100644
--- a/pkg/storage/bloom/v1/util.go
+++ b/pkg/storage/bloom/v1/util.go
@@ -1,7 +1,6 @@
package v1
import (
- "fmt"
"hash"
"hash/crc32"
"io"
@@ -10,25 +9,6 @@ import (
"github.com/grafana/loki/v3/pkg/util/mempool"
)
-type Version byte
-
-func (v Version) String() string {
- return fmt.Sprintf("v%d", v)
-}
-
-const (
- magicNumber = uint32(0xCA7CAFE5)
- // Add new versions below
- V1 Version = iota
- // V2 supports single series blooms encoded over multiple pages
- // to accommodate larger single series
- V2
-)
-
-const (
- DefaultSchemaVersion = V2
-)
-
var (
castagnoliTable = crc32.MakeTable(crc32.Castagnoli)
@@ -83,3 +63,45 @@ func PointerSlice[T any](xs []T) []*T {
}
return out
}
+
+type Set[V comparable] struct {
+ internal map[V]struct{}
+}
+
+func NewSet[V comparable](size int) Set[V] {
+ return Set[V]{make(map[V]struct{}, size)}
+}
+
+func NewSetFromLiteral[V comparable](v ...V) Set[V] {
+ set := NewSet[V](len(v))
+ for _, elem := range v {
+ set.Add(elem)
+ }
+ return set
+}
+
+func (s Set[V]) Add(v V) bool {
+ _, ok := s.internal[v]
+ if !ok {
+ s.internal[v] = struct{}{}
+ }
+ return !ok
+}
+
+func (s Set[V]) Len() int {
+ return len(s.internal)
+}
+
+func (s Set[V]) Items() []V {
+ set := make([]V, 0, s.Len())
+ for k := range s.internal {
+ set = append(set, k)
+ }
+ return set
+}
+
+func (s Set[V]) Union(other Set[V]) {
+ for _, v := range other.Items() {
+ s.Add(v)
+ }
+}
diff --git a/pkg/storage/bloom/v1/versioned_builder.go b/pkg/storage/bloom/v1/versioned_builder.go
index 175d651dc460c..4f1881c441e70 100644
--- a/pkg/storage/bloom/v1/versioned_builder.go
+++ b/pkg/storage/bloom/v1/versioned_builder.go
@@ -8,8 +8,8 @@ import (
/*
Each binary format (version) has it's own builder. This provides type-safe way to build the binary format
-while allowing reuse of underlying logic. As an example, the V2Builder will prevent encoding v1 series (only 1 bloom per series)
-as it only provides methods that are v2 compatible. The opposite is also true.
+while allowing reuse of underlying logic. As an example, the V3Builder will prevent encoding v1 and v2 series
+as it only provides methods that are v3 compatible. The opposite is also true.
Builders provide the following methods:
- [Convenience method] BuildFrom: builds the binary format from an iterator of the relevant type.
@@ -21,14 +21,14 @@ Builders provide the following methods:
*/
// Convenience constructor targeting the most current version.
-func NewBlockBuilder(opts BlockOptions, writer BlockWriter) (*V2Builder, error) {
- return NewBlockBuilderV2(opts, writer)
+func NewBlockBuilder(opts BlockOptions, writer BlockWriter) (*BlockBuilder, error) {
+ return NewBlockBuilderV3(opts, writer)
}
// Convenience alias for the most current version.
-type BlockBuilder = V2Builder
+type BlockBuilder = V3Builder
-type V2Builder struct {
+type V3Builder struct {
opts BlockOptions
writer BlockWriter
@@ -37,13 +37,13 @@ type V2Builder struct {
}
type SeriesWithBlooms struct {
- Series *Series
+ Series *SeriesWithMeta
Blooms iter.SizedIterator[*Bloom]
}
-func NewBlockBuilderV2(opts BlockOptions, writer BlockWriter) (*V2Builder, error) {
- if opts.Schema.version != V2 {
- return nil, errors.Errorf("schema mismatch creating v2 builder, expected %v, got %v", V2, opts.Schema.version)
+func NewBlockBuilderV3(opts BlockOptions, writer BlockWriter) (*V3Builder, error) {
+ if opts.Schema.version != V3 {
+ return nil, errors.Errorf("schema mismatch creating builder, expected v3, got %v", opts.Schema.version)
}
index, err := writer.Index()
@@ -55,7 +55,7 @@ func NewBlockBuilderV2(opts BlockOptions, writer BlockWriter) (*V2Builder, error
return nil, errors.Wrap(err, "initializing blooms writer")
}
- return &V2Builder{
+ return &V3Builder{
opts: opts,
writer: writer,
index: NewIndexBuilder(opts, index),
@@ -63,7 +63,9 @@ func NewBlockBuilderV2(opts BlockOptions, writer BlockWriter) (*V2Builder, error
}, nil
}
-func (b *V2Builder) BuildFrom(itr iter.Iterator[SeriesWithBlooms]) (uint32, error) {
+// BuildFrom is only used in tests as helper function to create blocks
+// It does not take indexed fields into account.
+func (b *V3Builder) BuildFrom(itr iter.Iterator[SeriesWithBlooms]) (uint32, error) {
for itr.Next() {
at := itr.At()
var offsets []BloomOffset
@@ -78,7 +80,8 @@ func (b *V2Builder) BuildFrom(itr iter.Iterator[SeriesWithBlooms]) (uint32, erro
if err := at.Blooms.Err(); err != nil {
return 0, errors.Wrap(err, "iterating blooms")
}
- blockFull, err := b.AddSeries(*at.Series, offsets)
+
+ blockFull, err := b.AddSeries(at.Series.Series, offsets, at.Series.Meta.Fields)
if err != nil {
return 0, errors.Wrapf(err, "writing series")
}
@@ -94,7 +97,7 @@ func (b *V2Builder) BuildFrom(itr iter.Iterator[SeriesWithBlooms]) (uint32, erro
return b.Close()
}
-func (b *V2Builder) Close() (uint32, error) {
+func (b *V3Builder) Close() (uint32, error) {
bloomChecksum, err := b.blooms.Close()
if err != nil {
return 0, errors.Wrap(err, "closing bloom file")
@@ -106,109 +109,18 @@ func (b *V2Builder) Close() (uint32, error) {
return combineChecksums(indexCheckSum, bloomChecksum), nil
}
-func (b *V2Builder) AddBloom(bloom *Bloom) (BloomOffset, error) {
+func (b *V3Builder) AddBloom(bloom *Bloom) (BloomOffset, error) {
return b.blooms.Append(bloom)
}
// AddSeries adds a series to the block. It returns true after adding the series, the block is full.
-func (b *V2Builder) AddSeries(series Series, offsets []BloomOffset) (bool, error) {
- if err := b.index.AppendV2(SeriesWithOffsets{
- Offsets: offsets,
- Series: series,
- }); err != nil {
- return false, errors.Wrapf(err, "writing index for series %v", series.Fingerprint)
- }
-
- full, _, err := b.writer.Full(b.opts.BlockSize)
- if err != nil {
- return false, errors.Wrap(err, "checking if block is full")
- }
-
- return full, nil
-}
-
-// Now the same for legacy V1
-type SeriesWithBloom struct {
- Series *Series
- Bloom *Bloom
-}
-
-//nolint:revive
-type V1Builder struct {
- opts BlockOptions
-
- writer BlockWriter
- index *IndexBuilder
- blooms *BloomBlockBuilder
-}
-
-func NewBlockBuilderV1(opts BlockOptions, writer BlockWriter) (*V1Builder, error) {
- if opts.Schema.version != V1 {
- return nil, errors.Errorf("schema mismatch creating v1 builder, expected %v, got %v", V1, opts.Schema.version)
- }
-
- index, err := writer.Index()
- if err != nil {
- return nil, errors.Wrap(err, "initializing index writer")
- }
- blooms, err := writer.Blooms()
- if err != nil {
- return nil, errors.Wrap(err, "initializing blooms writer")
- }
-
- return &V1Builder{
- opts: opts,
- writer: writer,
- index: NewIndexBuilder(opts, index),
- blooms: NewBloomBlockBuilder(opts, blooms),
- }, nil
-}
-
-func (b *V1Builder) BuildFrom(itr iter.Iterator[SeriesWithBloom]) (uint32, error) {
- for itr.Next() {
- at := itr.At()
- offset, err := b.AddBloom(at.Bloom)
- if err != nil {
- return 0, errors.Wrap(err, "writing bloom")
- }
-
- blockFull, err := b.AddSeries(*at.Series, offset)
-
- if err != nil {
- return 0, errors.Wrapf(err, "writing series")
- }
- if blockFull {
- break
- }
- }
-
- if err := itr.Err(); err != nil {
- return 0, errors.Wrap(err, "iterating series")
- }
-
- return b.Close()
-}
-
-func (b *V1Builder) Close() (uint32, error) {
- bloomChecksum, err := b.blooms.Close()
- if err != nil {
- return 0, errors.Wrap(err, "closing bloom file")
- }
- indexCheckSum, err := b.index.Close()
- if err != nil {
- return 0, errors.Wrap(err, "closing series file")
- }
- return combineChecksums(indexCheckSum, bloomChecksum), nil
-}
-
-func (b *V1Builder) AddBloom(bloom *Bloom) (BloomOffset, error) {
- return b.blooms.Append(bloom)
-}
-
-func (b *V1Builder) AddSeries(series Series, offset BloomOffset) (bool, error) {
- if err := b.index.AppendV1(SeriesWithOffset{
+func (b *V3Builder) AddSeries(series Series, offsets []BloomOffset, fields Set[Field]) (bool, error) {
+ if err := b.index.Append(SeriesWithMeta{
Series: series,
- Offset: offset,
+ Meta: Meta{
+ Offsets: offsets,
+ Fields: fields,
+ },
}); err != nil {
return false, errors.Wrapf(err, "writing index for series %v", series.Fingerprint)
}
diff --git a/pkg/storage/bloom/v1/versioned_builder_test.go b/pkg/storage/bloom/v1/versioned_builder_test.go
index 4b1103f1bbdac..2fb45e13d63e2 100644
--- a/pkg/storage/bloom/v1/versioned_builder_test.go
+++ b/pkg/storage/bloom/v1/versioned_builder_test.go
@@ -28,9 +28,9 @@ func smallBlockOpts(v Version, enc chunkenc.Encoding) BlockOptions {
}
}
-func setup(v Version) (BlockOptions, []SeriesWithLiteralBlooms, BlockWriter, BlockReader) {
+func setup(v Version) (BlockOptions, []SeriesWithBlooms, BlockWriter, BlockReader) {
numSeries := 100
- data, _ := MkBasicSeriesWithLiteralBlooms(numSeries, 0, 0xffff, 0, 10000)
+ data, _ := MkBasicSeriesWithBlooms(numSeries, 0, 0xffff, 0, 10000)
indexBuf := bytes.NewBuffer(nil)
bloomsBuf := bytes.NewBuffer(nil)
writer := NewMemoryBlockWriter(indexBuf, bloomsBuf)
@@ -38,111 +38,48 @@ func setup(v Version) (BlockOptions, []SeriesWithLiteralBlooms, BlockWriter, Blo
return smallBlockOpts(v, chunkenc.EncNone), data, writer, reader
}
-// Tests v1 format by encoding a block into v1 then decoding it back and comparing the results
-// to the source data.
-// NB(owen-d): This also tests that the block querier can "up cast" the v1 format to the v2 format
-// in the sense that v1 uses a single bloom per series and v2 uses multiple blooms per series and therefore
-// v1 can be interpreted as v2 with a single bloom per series.
-func TestV1RoundTrip(t *testing.T) {
- opts, data, writer, reader := setup(V1)
- b, err := NewBlockBuilderV1(opts, writer)
- require.NoError(t, err)
+func TestV3Roundtrip(t *testing.T) {
+ opts, sourceData, writer, reader := setup(V3)
- mapped := v2.NewMapIter[SeriesWithLiteralBlooms](
- v2.NewSliceIter(data),
- func(s SeriesWithLiteralBlooms) SeriesWithBloom {
- return SeriesWithBloom{
- Series: s.Series,
- Bloom: s.Blooms[0],
- }
- },
- )
+ // SeriesWithBlooms holds an interator of blooms,
+ // which will be exhausted after being consumed by the block builder
+ // therefore we need a deepcopy of the original data, or - and that's easier to achieve -
+ // we simply create the same data twice.
+ _, unmodifiedData, _, _ := setup(V3)
- _, err = b.BuildFrom(mapped)
+ b, err := NewBlockBuilderV3(opts, writer)
require.NoError(t, err)
- // Ensure Equality
- block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()
-
- CompareIterators[SeriesWithLiteralBlooms, *SeriesWithBlooms](
- t,
- func(t *testing.T, a SeriesWithLiteralBlooms, b *SeriesWithBlooms) {
- require.Equal(t, a.Series, b.Series) // ensure series equality
- bs, err := v2.Collect(b.Blooms)
- require.NoError(t, err)
-
- // ensure we only have one bloom in v1
- require.Equal(t, 1, len(a.Blooms))
- require.Equal(t, 1, len(bs))
-
- var encA, encB encoding.Encbuf
- require.NoError(t, a.Blooms[0].Encode(&encA))
- require.NoError(t, bs[0].Encode(&encB))
-
- require.Equal(t, encA.Get(), encB.Get())
- },
- v2.NewSliceIter(data),
- querier,
- )
-}
-
-func TestV2Roundtrip(t *testing.T) {
- opts, data, writer, reader := setup(V2)
-
- data, err := v2.Collect(
- v2.NewMapIter[SeriesWithLiteralBlooms, SeriesWithLiteralBlooms](
- v2.NewSliceIter(data),
- func(swlb SeriesWithLiteralBlooms) SeriesWithLiteralBlooms {
- return SeriesWithLiteralBlooms{
- Series: swlb.Series,
- // hack(owen-d): data currently only creates one bloom per series, but I want to test multiple.
- // we're not checking the contents here, so ensuring the same bloom is used twice is fine.
- Blooms: []*Bloom{swlb.Blooms[0], swlb.Blooms[0]},
- }
- },
- ),
- )
- require.NoError(t, err)
-
- b, err := NewBlockBuilderV2(opts, writer)
- require.NoError(t, err)
-
- mapped := v2.NewMapIter[SeriesWithLiteralBlooms](
- v2.NewSliceIter(data),
- func(s SeriesWithLiteralBlooms) SeriesWithBlooms {
- return s.SeriesWithBlooms()
- },
- )
-
- _, err = b.BuildFrom(mapped)
+ _, err = b.BuildFrom(v2.NewSliceIter(sourceData))
require.NoError(t, err)
// Ensure Equality
block := NewBlock(reader, NewMetrics(nil))
querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()
- CompareIterators[SeriesWithLiteralBlooms, *SeriesWithBlooms](
+ CompareIterators[SeriesWithBlooms, *SeriesWithBlooms](
t,
- func(t *testing.T, a SeriesWithLiteralBlooms, b *SeriesWithBlooms) {
- require.Equal(t, a.Series, b.Series) // ensure series equality
- bs, err := v2.Collect(b.Blooms)
+ func(t *testing.T, a SeriesWithBlooms, b *SeriesWithBlooms) {
+ require.Equal(t, a.Series.Series.Fingerprint, b.Series.Series.Fingerprint)
+ require.ElementsMatch(t, a.Series.Series.Chunks, b.Series.Series.Chunks)
+ bloomsA, err := v2.Collect(a.Blooms)
+ require.NoError(t, err)
+ bloomsB, err := v2.Collect(b.Blooms)
require.NoError(t, err)
- // ensure we only have one bloom in v1
- require.Equal(t, 2, len(a.Blooms))
- require.Equal(t, 2, len(bs))
+ require.Equal(t, 2, len(bloomsA))
+ require.Equal(t, 2, len(bloomsB))
var encA, encB encoding.Encbuf
- for i := range a.Blooms {
- require.NoError(t, a.Blooms[i].Encode(&encA))
- require.NoError(t, bs[i].Encode(&encB))
+ for i := range bloomsA {
+ require.NoError(t, bloomsA[i].Encode(&encA))
+ require.NoError(t, bloomsB[i].Encode(&encB))
require.Equal(t, encA.Get(), encB.Get())
encA.Reset()
encB.Reset()
}
},
- v2.NewSliceIter(data),
+ v2.NewSliceIter(unmodifiedData),
querier,
)
}
diff --git a/pkg/storage/chunk/cache/embeddedcache_test.go b/pkg/storage/chunk/cache/embeddedcache_test.go
index 7e109efea4f97..b264f6e44bc8e 100644
--- a/pkg/storage/chunk/cache/embeddedcache_test.go
+++ b/pkg/storage/chunk/cache/embeddedcache_test.go
@@ -48,7 +48,7 @@ func TestEmbeddedCacheEviction(t *testing.T) {
for _, test := range tests {
removedEntriesCount := atomic.NewInt64(0)
- onEntryRemoved := func(key string, value []byte) {
+ onEntryRemoved := func(_ string, _ []byte) {
removedEntriesCount.Inc()
}
c := NewTypedEmbeddedCache[string, []byte](test.name, test.cfg, nil, log.NewNopLogger(), "test", sizeOf, onEntryRemoved)
@@ -187,7 +187,7 @@ func TestEmbeddedCacheExpiry(t *testing.T) {
}
removedEntriesCount := atomic.NewInt64(0)
- onEntryRemoved := func(key string, value []byte) {
+ onEntryRemoved := func(_ string, _ []byte) {
removedEntriesCount.Inc()
}
c := NewTypedEmbeddedCache[string, []byte]("cache_exprity_test", cfg, nil, log.NewNopLogger(), "test", sizeOf, onEntryRemoved)
diff --git a/pkg/storage/chunk/cache/redis_client_test.go b/pkg/storage/chunk/cache/redis_client_test.go
index 2f5494193e572..2a8b7426f56dd 100644
--- a/pkg/storage/chunk/cache/redis_client_test.go
+++ b/pkg/storage/chunk/cache/redis_client_test.go
@@ -118,7 +118,7 @@ func Test_deriveEndpoints(t *testing.T) {
{
name: "single endpoint",
endpoints: fmt.Sprintf("%s:6379", upstream),
- lookup: func(host string) ([]string, error) {
+ lookup: func(_ string) ([]string, error) {
return []string{upstream}, nil
},
want: []string{fmt.Sprintf("%s:6379", upstream)},
@@ -136,7 +136,7 @@ func Test_deriveEndpoints(t *testing.T) {
{
name: "all loopback",
endpoints: fmt.Sprintf("%s:6379", lookback),
- lookup: func(host string) ([]string, error) {
+ lookup: func(_ string) ([]string, error) {
return []string{"::1", "127.0.0.1"}, nil
},
want: []string{fmt.Sprintf("%s:6379", lookback)},
@@ -145,7 +145,7 @@ func Test_deriveEndpoints(t *testing.T) {
{
name: "non-loopback address resolving to multiple addresses",
endpoints: fmt.Sprintf("%s:6379", upstream),
- lookup: func(host string) ([]string, error) {
+ lookup: func(_ string) ([]string, error) {
return []string{upstream, downstream}, nil
},
want: []string{fmt.Sprintf("%s:6379", upstream), fmt.Sprintf("%s:6379", downstream)},
@@ -154,7 +154,7 @@ func Test_deriveEndpoints(t *testing.T) {
{
name: "no such host",
endpoints: fmt.Sprintf("%s:6379", upstream),
- lookup: func(host string) ([]string, error) {
+ lookup: func(_ string) ([]string, error) {
return nil, fmt.Errorf("no such host")
},
want: nil,
diff --git a/pkg/storage/chunk/cache/resultscache/cache.go b/pkg/storage/chunk/cache/resultscache/cache.go
index aaf1d47fa88eb..0dfc4d49aae0a 100644
--- a/pkg/storage/chunk/cache/resultscache/cache.go
+++ b/pkg/storage/chunk/cache/resultscache/cache.go
@@ -105,7 +105,7 @@ func (s ResultsCache) Do(ctx context.Context, r Request) (Response, error) {
defer sp.Finish()
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
if s.shouldCacheReq != nil && !s.shouldCacheReq(ctx, r) {
@@ -200,7 +200,7 @@ func (s ResultsCache) handleHit(ctx context.Context, r Request, extents []Extent
tenantIDs, err := tenant.TenantIDs(ctx)
if err != nil {
- return nil, nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ return nil, nil, httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error())
}
reqResps, err = DoRequests(ctx, s.next, requests, s.parallelismForReq(ctx, tenantIDs, r))
diff --git a/pkg/storage/chunk/cache/resultscache/cache_test.go b/pkg/storage/chunk/cache/resultscache/cache_test.go
index 0febe48020867..efb1358cca661 100644
--- a/pkg/storage/chunk/cache/resultscache/cache_test.go
+++ b/pkg/storage/chunk/cache/resultscache/cache_test.go
@@ -491,7 +491,7 @@ func TestHandleHit(t *testing.T) {
minCacheExtent: 10,
limits: mockLimits{},
merger: MockMerger{},
- parallelismForReq: func(_ context.Context, tenantIDs []string, r Request) int { return 1 },
+ parallelismForReq: func(_ context.Context, _ []string, _ Request) int { return 1 },
next: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
return mkAPIResponse(req.GetStart().UnixMilli(), req.GetEnd().UnixMilli(), req.GetStep()), nil
}),
@@ -514,7 +514,7 @@ func TestHandleHit_queryLengthServed(t *testing.T) {
extractor: MockExtractor{},
limits: mockLimits{},
merger: MockMerger{},
- parallelismForReq: func(_ context.Context, tenantIDs []string, r Request) int { return 1 },
+ parallelismForReq: func(_ context.Context, _ []string, _ Request) int { return 1 },
next: HandlerFunc(func(_ context.Context, req Request) (Response, error) {
return mkAPIResponse(req.GetStart().UnixMilli(), req.GetEnd().UnixMilli(), req.GetStep()), nil
}),
@@ -602,7 +602,7 @@ func TestResultsCacheMaxFreshness(t *testing.T) {
MockExtractor{},
nil,
nil,
- func(_ context.Context, tenantIDs []string, r Request) int {
+ func(_ context.Context, _ []string, _ Request) int {
return 10
},
nil,
@@ -646,7 +646,7 @@ func Test_resultsCache_MissingData(t *testing.T) {
MockExtractor{},
nil,
nil,
- func(_ context.Context, tenantIDs []string, r Request) int {
+ func(_ context.Context, _ []string, _ Request) int {
return 10
},
nil,
@@ -700,7 +700,7 @@ func Test_shouldCacheReq(t *testing.T) {
MockExtractor{},
nil,
nil,
- func(_ context.Context, tenantIDs []string, r Request) int {
+ func(_ context.Context, _ []string, _ Request) int {
return 10
},
nil,
diff --git a/pkg/storage/chunk/chunk.go b/pkg/storage/chunk/chunk.go
index 6f050f8cbd01d..aadfe6ea937b2 100644
--- a/pkg/storage/chunk/chunk.go
+++ b/pkg/storage/chunk/chunk.go
@@ -5,7 +5,6 @@ import (
"encoding/binary"
"fmt"
"hash/crc32"
- "reflect"
"strconv"
"strings"
"sync"
@@ -215,11 +214,7 @@ func readOneHexPart(hex []byte) (part []byte, i int) {
}
func unsafeGetBytes(s string) []byte {
- var buf []byte
- p := unsafe.Pointer(&buf)
- *(*string)(p) = s
- (*reflect.SliceHeader)(p).Cap = len(s)
- return buf
+ return unsafe.Slice(unsafe.StringData(s), len(s))
}
func unsafeGetString(buf []byte) string {
diff --git a/pkg/storage/chunk/client/alibaba/oss_object_client.go b/pkg/storage/chunk/client/alibaba/oss_object_client.go
index cbe449fca9e5c..423a7348086e4 100644
--- a/pkg/storage/chunk/client/alibaba/oss_object_client.go
+++ b/pkg/storage/chunk/client/alibaba/oss_object_client.go
@@ -74,7 +74,7 @@ func (s *OssObjectClient) Stop() {
func (s *OssObjectClient) ObjectExists(ctx context.Context, objectKey string) (bool, error) {
var options []oss.Option
- err := instrument.CollectedRequest(ctx, "OSS.ObjectExists", ossRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "OSS.ObjectExists", ossRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
_, requestErr := s.defaultBucket.GetObjectMeta(objectKey, options...)
return requestErr
})
@@ -89,7 +89,7 @@ func (s *OssObjectClient) ObjectExists(ctx context.Context, objectKey string) (b
func (s *OssObjectClient) GetObject(ctx context.Context, objectKey string) (io.ReadCloser, int64, error) {
var resp *oss.GetObjectResult
var options []oss.Option
- err := instrument.CollectedRequest(ctx, "OSS.GetObject", ossRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "OSS.GetObject", ossRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var requestErr error
resp, requestErr = s.defaultBucket.DoGetObject(&oss.GetObjectRequest{ObjectKey: objectKey}, options)
if requestErr != nil {
@@ -114,7 +114,7 @@ func (s *OssObjectClient) GetObjectRange(ctx context.Context, objectKey string,
options := []oss.Option{
oss.Range(offset, offset+length-1),
}
- err := instrument.CollectedRequest(ctx, "OSS.GetObject", ossRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "OSS.GetObject", ossRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var requestErr error
resp, requestErr = s.defaultBucket.DoGetObject(&oss.GetObjectRequest{ObjectKey: objectKey}, options)
if requestErr != nil {
@@ -130,7 +130,7 @@ func (s *OssObjectClient) GetObjectRange(ctx context.Context, objectKey string,
// PutObject puts the specified bytes into the configured OSS bucket at the provided key
func (s *OssObjectClient) PutObject(ctx context.Context, objectKey string, object io.Reader) error {
- return instrument.CollectedRequest(ctx, "OSS.PutObject", ossRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ return instrument.CollectedRequest(ctx, "OSS.PutObject", ossRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
if err := s.defaultBucket.PutObject(objectKey, object); err != nil {
return errors.Wrap(err, "failed to put oss object")
}
@@ -173,7 +173,7 @@ func (s *OssObjectClient) List(ctx context.Context, prefix, delimiter string) ([
// DeleteObject deletes the specified object key from the configured OSS bucket.
func (s *OssObjectClient) DeleteObject(ctx context.Context, objectKey string) error {
- return instrument.CollectedRequest(ctx, "OSS.DeleteObject", ossRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ return instrument.CollectedRequest(ctx, "OSS.DeleteObject", ossRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
err := s.defaultBucket.DeleteObject(objectKey)
if err != nil {
return err
diff --git a/pkg/storage/chunk/client/aws/dynamodb_index_reader.go b/pkg/storage/chunk/client/aws/dynamodb_index_reader.go
index 4b1c4cd8a9e2d..19ea676451a9e 100644
--- a/pkg/storage/chunk/client/aws/dynamodb_index_reader.go
+++ b/pkg/storage/chunk/client/aws/dynamodb_index_reader.go
@@ -93,7 +93,7 @@ func (r *dynamodbIndexReader) ReadIndexEntries(ctx context.Context, tableName st
withRetrys := func(req *request.Request) {
req.Retryer = client.DefaultRetryer{NumMaxRetries: r.maxRetries}
}
- err := r.DynamoDB.ScanPagesWithContext(ctx, input, func(page *dynamodb.ScanOutput, lastPage bool) bool {
+ err := r.DynamoDB.ScanPagesWithContext(ctx, input, func(page *dynamodb.ScanOutput, _ bool) bool {
if cc := page.ConsumedCapacity; cc != nil {
r.metrics.dynamoConsumedCapacity.WithLabelValues("DynamoDB.ScanTable", *cc.TableName).
Add(*cc.CapacityUnits)
diff --git a/pkg/storage/chunk/client/aws/dynamodb_storage_client.go b/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
index 87fd24e127db0..064116cf2ed00 100644
--- a/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
+++ b/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
@@ -194,7 +194,7 @@ func (a dynamoDBStorageClient) BatchWrite(ctx context.Context, input index.Write
ReturnConsumedCapacity: aws.String(dynamodb.ReturnConsumedCapacityTotal),
})
- err := instrument.CollectedRequest(ctx, "DynamoDB.BatchWriteItem", a.metrics.dynamoRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "DynamoDB.BatchWriteItem", a.metrics.dynamoRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
return request.Send()
})
resp := request.Data().(*dynamodb.BatchWriteItemOutput)
@@ -450,7 +450,7 @@ func (a dynamoDBStorageClient) getDynamoDBChunks(ctx context.Context, chunks []c
ReturnConsumedCapacity: aws.String(dynamodb.ReturnConsumedCapacityTotal),
})
- err := instrument.CollectedRequest(ctx, "DynamoDB.BatchGetItemPages", a.metrics.dynamoRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "DynamoDB.BatchGetItemPages", a.metrics.dynamoRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
return request.Send()
})
response := request.Data().(*dynamodb.BatchGetItemOutput)
diff --git a/pkg/storage/chunk/client/aws/s3_storage_client.go b/pkg/storage/chunk/client/aws/s3_storage_client.go
index 11696f67eddb4..12fea874e311f 100644
--- a/pkg/storage/chunk/client/aws/s3_storage_client.go
+++ b/pkg/storage/chunk/client/aws/s3_storage_client.go
@@ -124,7 +124,7 @@ func (cfg *S3Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.DurationVar(&cfg.BackoffConfig.MinBackoff, prefix+"s3.min-backoff", 100*time.Millisecond, "Minimum backoff time when s3 get Object")
f.DurationVar(&cfg.BackoffConfig.MaxBackoff, prefix+"s3.max-backoff", 3*time.Second, "Maximum backoff time when s3 get Object")
- f.IntVar(&cfg.BackoffConfig.MaxRetries, prefix+"s3.max-retries", 5, "Maximum number of times to retry when s3 get Object")
+ f.IntVar(&cfg.BackoffConfig.MaxRetries, prefix+"s3.max-retries", 5, "Maximum number of times to retry for s3 GetObject or ObjectExists")
}
// Validate config and returns error on failure
@@ -307,16 +307,34 @@ func buckets(cfg S3Config) ([]string, error) {
func (a *S3ObjectClient) Stop() {}
func (a *S3ObjectClient) ObjectExists(ctx context.Context, objectKey string) (bool, error) {
- err := instrument.CollectedRequest(ctx, "S3.ObjectExists", s3RequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
- headObjectInput := &s3.HeadObjectInput{
- Bucket: aws.String(a.bucketFromKey(objectKey)),
- Key: aws.String(objectKey),
+ var lastErr error
+
+ retries := backoff.New(ctx, a.cfg.BackoffConfig)
+ for retries.Ongoing() {
+ if ctx.Err() != nil {
+ return false, errors.Wrap(ctx.Err(), "ctx related error during s3 objectExists")
}
- _, err := a.S3.HeadObject(headObjectInput)
- return err
- })
- if err != nil {
- return false, err
+ lastErr = instrument.CollectedRequest(ctx, "S3.ObjectExists", s3RequestDuration, instrument.ErrorCode, func(_ context.Context) error {
+ headObjectInput := &s3.HeadObjectInput{
+ Bucket: aws.String(a.bucketFromKey(objectKey)),
+ Key: aws.String(objectKey),
+ }
+ _, requestErr := a.S3.HeadObject(headObjectInput)
+ return requestErr
+ })
+ if lastErr == nil {
+ return true, nil
+ }
+
+ if a.IsObjectNotFoundErr(lastErr) {
+ return false, lastErr
+ }
+
+ retries.Wait()
+ }
+
+ if lastErr != nil {
+ return false, lastErr
}
return true, nil
diff --git a/pkg/storage/chunk/client/aws/s3_storage_client_test.go b/pkg/storage/chunk/client/aws/s3_storage_client_test.go
index d966f1a2f9f9c..3a2c1e8dc33c3 100644
--- a/pkg/storage/chunk/client/aws/s3_storage_client_test.go
+++ b/pkg/storage/chunk/client/aws/s3_storage_client_test.go
@@ -177,8 +177,8 @@ func Test_Hedging(t *testing.T) {
SecretAccessKey: flagext.SecretWithValue("bar"),
BackoffConfig: backoff.Config{MaxRetries: 1},
BucketNames: "foo",
- Inject: func(next http.RoundTripper) http.RoundTripper {
- return RoundTripperFunc(func(req *http.Request) (*http.Response, error) {
+ Inject: func(_ http.RoundTripper) http.RoundTripper {
+ return RoundTripperFunc(func(_ *http.Request) (*http.Response, error) {
count.Inc()
time.Sleep(200 * time.Millisecond)
return nil, errors.New("foo")
@@ -196,6 +196,112 @@ func Test_Hedging(t *testing.T) {
}
}
+type MockS3Client struct {
+ s3.S3
+ HeadObjectFunc func(*s3.HeadObjectInput) (*s3.HeadObjectOutput, error)
+}
+
+func (m *MockS3Client) HeadObject(input *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
+ return m.HeadObjectFunc(input)
+}
+
+func Test_RetryLogic(t *testing.T) {
+ for _, tc := range []struct {
+ name string
+ maxRetries int
+ exists bool
+ do func(c *S3ObjectClient) error
+ }{
+ {
+ "get object with retries",
+ 3,
+ true,
+ func(c *S3ObjectClient) error {
+ _, _, err := c.GetObject(context.Background(), "foo")
+ return err
+ },
+ },
+ {
+ "object exists with retries",
+ 3,
+ true,
+ func(c *S3ObjectClient) error {
+ _, err := c.ObjectExists(context.Background(), "foo")
+ return err
+ },
+ },
+ {
+ "object doesn't exist with retries",
+ 3,
+ false,
+ func(c *S3ObjectClient) error {
+ _, err := c.ObjectExists(context.Background(), "foo")
+ return err
+ },
+ },
+ } {
+ t.Run(tc.name, func(t *testing.T) {
+ callCount := atomic.NewInt32(0)
+
+ mockS3 := &MockS3Client{
+ HeadObjectFunc: func(_ *s3.HeadObjectInput) (*s3.HeadObjectOutput, error) {
+ callNum := callCount.Inc()
+ if !tc.exists {
+ rfIn := awserr.NewRequestFailure(
+ awserr.New("NotFound", "Not Found", nil), 404, "abc",
+ )
+ return nil, rfIn
+ }
+
+ // Fail the first set of calls
+ if int(callNum) <= tc.maxRetries-1 {
+ time.Sleep(200 * time.Millisecond) // Simulate latency
+ return nil, errors.New("simulated error on mock call")
+ }
+
+ // Succeed on the last call
+ return &s3.HeadObjectOutput{}, nil
+ },
+ }
+
+ c, err := NewS3ObjectClient(S3Config{
+ AccessKeyID: "foo",
+ SecretAccessKey: flagext.SecretWithValue("bar"),
+ BackoffConfig: backoff.Config{MaxRetries: tc.maxRetries},
+ BucketNames: "foo",
+ Inject: func(_ http.RoundTripper) http.RoundTripper {
+ return RoundTripperFunc(func(_ *http.Request) (*http.Response, error) {
+ // Increment the call counter
+ callNum := callCount.Inc()
+
+ // Fail the first set of calls
+ if int(callNum) <= tc.maxRetries-1 {
+ time.Sleep(200 * time.Millisecond) // Simulate latency
+ return nil, errors.New("simulated error on call")
+ }
+
+ // Succeed on the last call
+ return &http.Response{
+ StatusCode: http.StatusOK,
+ Body: io.NopCloser(bytes.NewReader([]byte("object content"))),
+ }, nil
+ })
+ },
+ }, hedging.Config{})
+ require.NoError(t, err)
+ c.S3 = mockS3
+ err = tc.do(c)
+ if tc.exists {
+ require.NoError(t, err)
+ require.Equal(t, tc.maxRetries, int(callCount.Load()))
+ } else {
+ require.True(t, c.IsObjectNotFoundErr(err))
+ require.Equal(t, 1, int(callCount.Load()))
+ }
+ })
+ }
+}
+
func Test_ConfigRedactsCredentials(t *testing.T) {
underTest := S3Config{
AccessKeyID: "access key id",
diff --git a/pkg/storage/chunk/client/azure/blob_storage_client.go b/pkg/storage/chunk/client/azure/blob_storage_client.go
index 2e66e1e89da36..0a9d6300b1634 100644
--- a/pkg/storage/chunk/client/azure/blob_storage_client.go
+++ b/pkg/storage/chunk/client/azure/blob_storage_client.go
@@ -361,7 +361,7 @@ func (b *BlobStorage) newPipeline(hedgingCfg hedging.Config, hedging bool) (pipe
client := defaultClientFactory()
- opts.HTTPSender = pipeline.FactoryFunc(func(next pipeline.Policy, po *pipeline.PolicyOptions) pipeline.PolicyFunc {
+ opts.HTTPSender = pipeline.FactoryFunc(func(_ pipeline.Policy, _ *pipeline.PolicyOptions) pipeline.PolicyFunc {
return func(ctx context.Context, request pipeline.Request) (pipeline.Response, error) {
resp, err := client.Do(request.WithContext(ctx))
return pipeline.NewHTTPResponse(resp), err
@@ -373,7 +373,7 @@ func (b *BlobStorage) newPipeline(hedgingCfg hedging.Config, hedging bool) (pipe
if err != nil {
return nil, err
}
- opts.HTTPSender = pipeline.FactoryFunc(func(next pipeline.Policy, po *pipeline.PolicyOptions) pipeline.PolicyFunc {
+ opts.HTTPSender = pipeline.FactoryFunc(func(_ pipeline.Policy, _ *pipeline.PolicyOptions) pipeline.PolicyFunc {
return func(ctx context.Context, request pipeline.Request) (pipeline.Response, error) {
resp, err := client.Do(request.WithContext(ctx))
return pipeline.NewHTTPResponse(resp), err
@@ -450,7 +450,7 @@ func (b *BlobStorage) getServicePrincipalToken(authFunctions authFunctions) (*ad
if b.cfg.UseFederatedToken {
token, err := b.servicePrincipalTokenFromFederatedToken(resource, authFunctions.NewOAuthConfigFunc, authFunctions.NewServicePrincipalTokenFromFederatedTokenFunc)
- var customRefreshFunc adal.TokenRefresh = func(context context.Context, resource string) (*adal.Token, error) {
+ var customRefreshFunc adal.TokenRefresh = func(_ context.Context, resource string) (*adal.Token, error) {
newToken, err := b.servicePrincipalTokenFromFederatedToken(resource, authFunctions.NewOAuthConfigFunc, authFunctions.NewServicePrincipalTokenFromFederatedTokenFunc)
if err != nil {
return nil, err
diff --git a/pkg/storage/chunk/client/azure/blob_storage_client_test.go b/pkg/storage/chunk/client/azure/blob_storage_client_test.go
index 2f59934aabf20..cedc5057e85bc 100644
--- a/pkg/storage/chunk/client/azure/blob_storage_client_test.go
+++ b/pkg/storage/chunk/client/azure/blob_storage_client_test.go
@@ -66,7 +66,7 @@ func (suite *FederatedTokenTestSuite) TestGetServicePrincipalToken() {
return suite.mockOAuthConfig, nil
}
- servicePrincipalTokenFromFederatedTokenFunc := func(oauthConfig adal.OAuthConfig, clientID string, jwt string, resource string, callbacks ...adal.TokenRefreshCallback) (*adal.ServicePrincipalToken, error) {
+ servicePrincipalTokenFromFederatedTokenFunc := func(oauthConfig adal.OAuthConfig, clientID string, jwt string, resource string, _ ...adal.TokenRefreshCallback) (*adal.ServicePrincipalToken, error) {
require.True(suite.T(), *suite.mockOAuthConfig == oauthConfig, "should return the mocked object")
require.Equal(suite.T(), "myClientId", clientID)
require.Equal(suite.T(), "myJwtToken", jwt)
diff --git a/pkg/storage/chunk/client/baidubce/bos_storage_client.go b/pkg/storage/chunk/client/baidubce/bos_storage_client.go
index edb6870033db9..b76db38e47c60 100644
--- a/pkg/storage/chunk/client/baidubce/bos_storage_client.go
+++ b/pkg/storage/chunk/client/baidubce/bos_storage_client.go
@@ -80,7 +80,7 @@ func NewBOSObjectStorage(cfg *BOSStorageConfig) (*BOSObjectStorage, error) {
}
func (b *BOSObjectStorage) PutObject(ctx context.Context, objectKey string, object io.Reader) error {
- return instrument.CollectedRequest(ctx, "BOS.PutObject", bosRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ return instrument.CollectedRequest(ctx, "BOS.PutObject", bosRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
body, err := bce.NewBodyFromSizedReader(object, -1)
if err != nil {
return err
@@ -91,7 +91,7 @@ func (b *BOSObjectStorage) PutObject(ctx context.Context, objectKey string, obje
}
func (b *BOSObjectStorage) ObjectExists(ctx context.Context, objectKey string) (bool, error) {
- err := instrument.CollectedRequest(ctx, "BOS.ObjectExists", bosRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "BOS.ObjectExists", bosRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var requestErr error
_, requestErr = b.client.GetObjectMeta(b.cfg.BucketName, objectKey)
return requestErr
@@ -105,7 +105,7 @@ func (b *BOSObjectStorage) ObjectExists(ctx context.Context, objectKey string) (
func (b *BOSObjectStorage) GetObject(ctx context.Context, objectKey string) (io.ReadCloser, int64, error) {
var res *api.GetObjectResult
- err := instrument.CollectedRequest(ctx, "BOS.GetObject", bosRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "BOS.GetObject", bosRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var requestErr error
res, requestErr = b.client.BasicGetObject(b.cfg.BucketName, objectKey)
return requestErr
@@ -119,7 +119,7 @@ func (b *BOSObjectStorage) GetObject(ctx context.Context, objectKey string) (io.
func (b *BOSObjectStorage) GetObjectRange(ctx context.Context, objectKey string, offset, length int64) (io.ReadCloser, error) {
var res *api.GetObjectResult
- err := instrument.CollectedRequest(ctx, "BOS.GetObject", bosRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "BOS.GetObject", bosRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var requestErr error
res, requestErr = b.client.GetObject(b.cfg.BucketName, objectKey, nil, offset, offset+length-1)
return requestErr
@@ -134,7 +134,7 @@ func (b *BOSObjectStorage) List(ctx context.Context, prefix string, delimiter st
var storageObjects []client.StorageObject
var commonPrefixes []client.StorageCommonPrefix
- err := instrument.CollectedRequest(ctx, "BOS.List", bosRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "BOS.List", bosRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
args := new(api.ListObjectsArgs)
args.Prefix = prefix
args.Delimiter = delimiter
@@ -172,7 +172,7 @@ func (b *BOSObjectStorage) List(ctx context.Context, prefix string, delimiter st
}
func (b *BOSObjectStorage) DeleteObject(ctx context.Context, objectKey string) error {
- return instrument.CollectedRequest(ctx, "BOS.DeleteObject", bosRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ return instrument.CollectedRequest(ctx, "BOS.DeleteObject", bosRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
err := b.client.DeleteObject(b.cfg.BucketName, objectKey)
return err
})
diff --git a/pkg/storage/chunk/client/gcp/fixtures.go b/pkg/storage/chunk/client/gcp/fixtures.go
index 153a906776cf3..3fc03fb6e0158 100644
--- a/pkg/storage/chunk/client/gcp/fixtures.go
+++ b/pkg/storage/chunk/client/gcp/fixtures.go
@@ -49,10 +49,19 @@ func (f *fixture) Clients() (
}
f.gcssrv = fakestorage.NewServer(nil)
- opts := fakestorage.CreateBucketOpts{
- Name: "chunks",
- }
- f.gcssrv.CreateBucketWithOpts(opts)
+ /*
+ // Note: fake-gcs-server upgrade does not work in the `dist` tooling builds.
+ // Leave at v1.7.0 until the issue is resolved.
+ // Example failure: https://github.com/grafana/loki/actions/runs/10744853958/job/29802951861
+ // Open issue: https://github.com/fsouza/fake-gcs-server/issues/1739
+ // Once the issue is resolved, this code block can be used to replace the
+ // `CreateBucket` call below.
+ opts := fakestorage.CreateBucketOpts{
+ Name: "chunks",
+ }
+ f.gcssrv.CreateBucketWithOpts(opts)
+ */
+ f.gcssrv.CreateBucket("chunks")
conn, err := grpc.NewClient(f.btsrv.Addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
@@ -92,7 +101,7 @@ func (f *fixture) Clients() (
c, err = newGCSObjectClient(ctx, GCSConfig{
BucketName: "chunks",
Insecure: true,
- }, hedging.Config{}, func(ctx context.Context, opts ...option.ClientOption) (*storage.Client, error) {
+ }, hedging.Config{}, func(_ context.Context, _ ...option.ClientOption) (*storage.Client, error) {
return f.gcssrv.Client(), nil
})
if err != nil {
diff --git a/pkg/storage/chunk/client/gcp/gcs_object_client_test.go b/pkg/storage/chunk/client/gcp/gcs_object_client_test.go
index 230067f9e9508..5bece14a18dbb 100644
--- a/pkg/storage/chunk/client/gcp/gcs_object_client_test.go
+++ b/pkg/storage/chunk/client/gcp/gcs_object_client_test.go
@@ -80,7 +80,7 @@ func Test_Hedging(t *testing.T) {
}
func fakeServer(t *testing.T, returnIn time.Duration, counter *atomic.Int32) *httptest.Server {
- server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
counter.Inc()
time.Sleep(returnIn)
_, _ = w.Write([]byte(`{}`))
@@ -236,7 +236,7 @@ func TestTCPErrs(t *testing.T) {
}
func fakeHTTPRespondingServer(t *testing.T, code int) *httptest.Server {
- server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
w.WriteHeader(code)
}))
server.StartTLS()
@@ -246,7 +246,7 @@ func fakeHTTPRespondingServer(t *testing.T, code int) *httptest.Server {
}
func fakeSleepingServer(t *testing.T, responseSleep, connectSleep time.Duration, closeOnNew, closeOnActive bool) *httptest.Server {
- server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ server := httptest.NewUnstartedServer(http.HandlerFunc(func(_ http.ResponseWriter, _ *http.Request) {
// sleep on response to mimic server overload
time.Sleep(responseSleep)
}))
diff --git a/pkg/storage/chunk/client/grpc/grpc_client_test.go b/pkg/storage/chunk/client/grpc/grpc_client_test.go
index dc040cb5aecab..b0bcffce91ebf 100644
--- a/pkg/storage/chunk/client/grpc/grpc_client_test.go
+++ b/pkg/storage/chunk/client/grpc/grpc_client_test.go
@@ -157,7 +157,7 @@ func TestGrpcStore(t *testing.T) {
{TableName: "table", HashValue: "foo"},
}
results := 0
- err = storageClient.QueryPages(context.Background(), queries, func(query index.Query, batch index.ReadBatchResult) bool {
+ err = storageClient.QueryPages(context.Background(), queries, func(_ index.Query, batch index.ReadBatchResult) bool {
iter := batch.Iterator()
for iter.Next() {
results++
diff --git a/pkg/storage/chunk/client/grpc/grpc_server_mock_test.go b/pkg/storage/chunk/client/grpc/grpc_server_mock_test.go
index 4f8cb14762669..81a60201c8528 100644
--- a/pkg/storage/chunk/client/grpc/grpc_server_mock_test.go
+++ b/pkg/storage/chunk/client/grpc/grpc_server_mock_test.go
@@ -143,7 +143,7 @@ func NewTestStorageClient(cfg Config, schemaCfg config.SchemaConfig) (*StorageCl
//*********************** gRPC mock server *********************************//
-// NewTableClient returns a new TableClient.
+// NewTestTableClient returns a new TableClient.
func NewTestTableClient(cfg Config) (*TableClient, error) {
grpcClient, _, err := connectToGrpcServer(cfg.Address)
if err != nil {
@@ -155,7 +155,7 @@ func NewTestTableClient(cfg Config) (*TableClient, error) {
return client, nil
}
-// NewStorageClient returns a new StorageClient.
+// newTestStorageServer returns a new StorageServer.
func newTestStorageServer(cfg Config) *server {
client := &server{
Cfg: cfg,
diff --git a/pkg/storage/chunk/client/hedging/hedging_test.go b/pkg/storage/chunk/client/hedging/hedging_test.go
index 4524156e7309b..1baf0f757dbd0 100644
--- a/pkg/storage/chunk/client/hedging/hedging_test.go
+++ b/pkg/storage/chunk/client/hedging/hedging_test.go
@@ -34,7 +34,7 @@ func TestHedging(t *testing.T) {
}
count := atomic.NewInt32(0)
client, err := cfg.Client(&http.Client{
- Transport: RoundTripperFunc(func(r *http.Request) (*http.Response, error) {
+ Transport: RoundTripperFunc(func(_ *http.Request) (*http.Response, error) {
count.Inc()
time.Sleep(200 * time.Millisecond)
return &http.Response{
@@ -69,7 +69,7 @@ func TestHedgingRateLimit(t *testing.T) {
}
count := atomic.NewInt32(0)
client, err := cfg.Client(&http.Client{
- Transport: RoundTripperFunc(func(r *http.Request) (*http.Response, error) {
+ Transport: RoundTripperFunc(func(_ *http.Request) (*http.Response, error) {
count.Inc()
time.Sleep(200 * time.Millisecond)
return &http.Response{
diff --git a/pkg/storage/chunk/client/ibmcloud/cos_object_client.go b/pkg/storage/chunk/client/ibmcloud/cos_object_client.go
index a796ab88dea4e..d432071293054 100644
--- a/pkg/storage/chunk/client/ibmcloud/cos_object_client.go
+++ b/pkg/storage/chunk/client/ibmcloud/cos_object_client.go
@@ -321,7 +321,7 @@ func (c *COSObjectClient) DeleteObject(ctx context.Context, objectKey string) er
func (c *COSObjectClient) ObjectExists(ctx context.Context, objectKey string) (bool, error) {
bucket := c.bucketFromKey(objectKey)
- err := instrument.CollectedRequest(ctx, "COS.GetObject", cosRequestDuration, instrument.ErrorCode, func(ctx context.Context) error {
+ err := instrument.CollectedRequest(ctx, "COS.GetObject", cosRequestDuration, instrument.ErrorCode, func(_ context.Context) error {
var requestErr error
_, requestErr = c.hedgedCOS.HeadObject(&cos.HeadObjectInput{
Bucket: ibm.String(bucket),
diff --git a/pkg/storage/chunk/client/ibmcloud/cos_object_client_test.go b/pkg/storage/chunk/client/ibmcloud/cos_object_client_test.go
index f6959b3f31d81..3d6ee89af934b 100644
--- a/pkg/storage/chunk/client/ibmcloud/cos_object_client_test.go
+++ b/pkg/storage/chunk/client/ibmcloud/cos_object_client_test.go
@@ -584,7 +584,7 @@ func mockCOSServer(accessToken, tokenType, resp string) *httptest.Server {
}
func mockAuthServer(accessToken, tokenType string) *httptest.Server {
- return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
token := token.Token{
AccessToken: accessToken,
RefreshToken: "test",
diff --git a/pkg/storage/chunk/client/local/boltdb_table_client.go b/pkg/storage/chunk/client/local/boltdb_table_client.go
index df30db04d29ac..d2bdab807d4d0 100644
--- a/pkg/storage/chunk/client/local/boltdb_table_client.go
+++ b/pkg/storage/chunk/client/local/boltdb_table_client.go
@@ -20,7 +20,7 @@ func NewTableClient(directory string) (index.TableClient, error) {
func (c *TableClient) ListTables(_ context.Context) ([]string, error) {
boltDbFiles := []string{}
- err := filepath.Walk(c.directory, func(path string, info os.FileInfo, err error) error {
+ err := filepath.Walk(c.directory, func(_ string, info os.FileInfo, err error) error {
if err != nil {
return err
}
diff --git a/pkg/storage/chunk/client/local/fs_object_client.go b/pkg/storage/chunk/client/local/fs_object_client.go
index 751e1b94b37b1..0eb027e9fd3cf 100644
--- a/pkg/storage/chunk/client/local/fs_object_client.go
+++ b/pkg/storage/chunk/client/local/fs_object_client.go
@@ -230,7 +230,7 @@ func (f *FSObjectClient) DeleteObject(_ context.Context, objectKey string) error
// DeleteChunksBefore implements BucketClient
func (f *FSObjectClient) DeleteChunksBefore(_ context.Context, ts time.Time) error {
- return filepath.Walk(f.cfg.Directory, func(path string, info os.FileInfo, err error) error {
+ return filepath.Walk(f.cfg.Directory, func(path string, info os.FileInfo, _ error) error {
if !info.IsDir() && info.ModTime().Before(ts) {
level.Info(util_log.Logger).Log("msg", "file has exceeded the retention period, removing it", "filepath", info.Name())
if err := os.Remove(path); err != nil {
diff --git a/pkg/storage/chunk/client/util/parallel_chunk_fetch_test.go b/pkg/storage/chunk/client/util/parallel_chunk_fetch_test.go
index 98b654d9df074..8fec4518e18ae 100644
--- a/pkg/storage/chunk/client/util/parallel_chunk_fetch_test.go
+++ b/pkg/storage/chunk/client/util/parallel_chunk_fetch_test.go
@@ -13,7 +13,7 @@ func BenchmarkGetParallelChunks(b *testing.B) {
b.ResetTimer()
for i := 0; i < b.N; i++ {
res, err := GetParallelChunks(ctx, 150, in,
- func(_ context.Context, d *chunk.DecodeContext, c chunk.Chunk) (chunk.Chunk, error) {
+ func(_ context.Context, _ *chunk.DecodeContext, c chunk.Chunk) (chunk.Chunk, error) {
return c, nil
})
if err != nil {
diff --git a/pkg/storage/detected/fields.go b/pkg/storage/detected/fields.go
index 9d6a699bc1e14..8dd9dd1a15126 100644
--- a/pkg/storage/detected/fields.go
+++ b/pkg/storage/detected/fields.go
@@ -44,6 +44,9 @@ func (f *UnmarshaledDetectedField) Merge(df *logproto.DetectedField) error {
f.Parsers = append(f.Parsers, df.Parsers...)
slices.Sort(f.Parsers)
f.Parsers = slices.Compact(f.Parsers)
+ if len(f.Parsers) == 0 {
+ f.Parsers = nil
+ }
return f.Sketch.Merge(sketch)
}
diff --git a/pkg/storage/stores/series/series_index_store.go b/pkg/storage/stores/series/series_index_store.go
index 9fb64fe9b85b8..75fa2969e926c 100644
--- a/pkg/storage/stores/series/series_index_store.go
+++ b/pkg/storage/stores/series/series_index_store.go
@@ -501,12 +501,12 @@ func (c *IndexReaderWriter) lookupSeriesByMetricNameMatchers(ctx context.Context
}
func (c *IndexReaderWriter) lookupSeriesByMetricNameMatcher(ctx context.Context, from, through model.Time, userID, metricName string, matcher *labels.Matcher, shard *astmapper.ShardAnnotation) ([]string, error) {
- return c.lookupIdsByMetricNameMatcher(ctx, from, through, userID, metricName, matcher, func(queries []series_index.Query) []series_index.Query {
+ return c.lookupIDsByMetricNameMatcher(ctx, from, through, userID, metricName, matcher, func(queries []series_index.Query) []series_index.Query {
return c.schema.FilterReadQueries(queries, shard)
})
}
-func (c *IndexReaderWriter) lookupIdsByMetricNameMatcher(ctx context.Context, from, through model.Time, userID, metricName string, matcher *labels.Matcher, filter func([]series_index.Query) []series_index.Query) ([]string, error) {
+func (c *IndexReaderWriter) lookupIDsByMetricNameMatcher(ctx context.Context, from, through model.Time, userID, metricName string, matcher *labels.Matcher, filter func([]series_index.Query) []series_index.Query) ([]string, error) {
var err error
var queries []series_index.Query
var labelName string
diff --git a/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go b/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go
index fb802fd63b9a5..e7723b6d26536 100644
--- a/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go
@@ -181,7 +181,7 @@ func TestFetcher_DownloadQueue(t *testing.T) {
_, err := newDownloadQueue[bool, bool](
tc.size,
tc.workers,
- func(ctx context.Context, r downloadRequest[bool, bool]) {},
+ func(_ context.Context, _ downloadRequest[bool, bool]) {},
log.NewNopLogger(),
)
require.ErrorContains(t, err, tc.err)
diff --git a/pkg/storage/stores/shipper/bloomshipper/store.go b/pkg/storage/stores/shipper/bloomshipper/store.go
index 363fb7806ece3..b486b7ca8e524 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store.go
@@ -314,7 +314,7 @@ func NewBloomStore(
// sort by From time
sort.Slice(periodicConfigs, func(i, j int) bool {
- return periodicConfigs[i].From.Time.Before(periodicConfigs[i].From.Time)
+ return periodicConfigs[i].From.Time.Before(periodicConfigs[j].From.Time)
})
// TODO(chaudum): Remove wrapper
diff --git a/pkg/storage/stores/shipper/bloomshipper/store_test.go b/pkg/storage/stores/shipper/bloomshipper/store_test.go
index 15568e8763bd2..6a6705f8f0be0 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store_test.go
@@ -353,7 +353,7 @@ func TestBloomStore_TenantFilesForInterval(t *testing.T) {
tenantFiles, err := store.TenantFilesForInterval(
ctx,
NewInterval(parseTime("2024-01-18 00:00"), parseTime("2024-02-12 00:00")),
- func(tenant string, object client.StorageObject) bool {
+ func(tenant string, _ client.StorageObject) bool {
return tenant == "1"
},
)
diff --git a/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/table_compactor.go b/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/table_compactor.go
index bdd42afc935d6..174a48d498f4e 100644
--- a/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/table_compactor.go
+++ b/pkg/storage/stores/shipper/indexshipper/boltdb/compactor/table_compactor.go
@@ -280,7 +280,7 @@ func (t *tableCompactor) compactUserIndexes(idxSet compactor.IndexSet) (*Compact
}
// go through each file and dump records in the local bucket of the new compacted file
- err = concurrency.ForEachJob(t.ctx, len(indexes), readDBsConcurrency, func(ctx context.Context, idx int) error {
+ err = concurrency.ForEachJob(t.ctx, len(indexes), readDBsConcurrency, func(_ context.Context, idx int) error {
downloadAt, err := idxSet.GetSourceFile(indexes[idx])
if err != nil {
return err
@@ -296,7 +296,7 @@ func (t *tableCompactor) compactUserIndexes(idxSet compactor.IndexSet) (*Compact
}
dbPair.db = db
- err = readFile(idxSet.GetLogger(), dbPair, func(bucketName string, batch []indexEntry) error {
+ err = readFile(idxSet.GetLogger(), dbPair, func(_ string, batch []indexEntry) error {
return writeBatch(compactedFile, batch)
})
if err != nil {
@@ -341,7 +341,7 @@ func (t *tableCompactor) compactCommonIndexes(ctx context.Context) (*CompactedIn
var fetchStateMx sync.Mutex
defer func() {
- err := concurrency.ForEachJob(ctx, len(dbsToRead), readDBsConcurrency, func(ctx context.Context, idx int) error {
+ err := concurrency.ForEachJob(ctx, len(dbsToRead), readDBsConcurrency, func(_ context.Context, idx int) error {
dbsToRead[idx].cleanup(idxSet.GetLogger())
return nil
})
@@ -351,7 +351,7 @@ func (t *tableCompactor) compactCommonIndexes(ctx context.Context) (*CompactedIn
}()
// fetch common index files and extract information about tenants that have records in a given file
- err = concurrency.ForEachJob(ctx, len(indexes), readDBsConcurrency, func(ctx context.Context, idx int) error {
+ err = concurrency.ForEachJob(ctx, len(indexes), readDBsConcurrency, func(_ context.Context, idx int) error {
workNum := idx
// skip seed file
if workNum == compactedFileIdx {
@@ -378,7 +378,7 @@ func (t *tableCompactor) compactCommonIndexes(ctx context.Context) (*CompactedIn
dbsToRead[idx].db = db
return db.View(func(tx *bbolt.Tx) error {
- return tx.ForEach(func(name []byte, b *bbolt.Bucket) error {
+ return tx.ForEach(func(name []byte, _ *bbolt.Bucket) error {
bucketNameStr := string(name)
if bucketNameStr == shipper_util.GetUnsafeString(local.IndexBucketName) {
return nil
@@ -396,13 +396,13 @@ func (t *tableCompactor) compactCommonIndexes(ctx context.Context) (*CompactedIn
return nil, errors.Wrap(err, "unable to fetch index files and extract tenants: ")
}
- tenantIdsSlice := make([]string, 0, len(tenantsToFetch))
+ tenantIDsSlice := make([]string, 0, len(tenantsToFetch))
for tenant := range tenantsToFetch {
- tenantIdsSlice = append(tenantIdsSlice, tenant)
+ tenantIDsSlice = append(tenantIDsSlice, tenant)
}
- err = concurrency.ForEachJob(ctx, len(tenantIdsSlice), readDBsConcurrency, func(ctx context.Context, idx int) error {
- userID := tenantIdsSlice[idx]
+ err = concurrency.ForEachJob(ctx, len(tenantIDsSlice), readDBsConcurrency, func(_ context.Context, idx int) error {
+ userID := tenantIDsSlice[idx]
return t.fetchOrCreateUserCompactedIndexSet(userID)
})
@@ -411,7 +411,7 @@ func (t *tableCompactor) compactCommonIndexes(ctx context.Context) (*CompactedIn
}
// go through each file and build index in FORMAT1 from FORMAT1 indexes and FORMAT3 from FORMAT2 indexes
- err = concurrency.ForEachJob(ctx, len(indexes), readDBsConcurrency, func(ctx context.Context, idx int) error {
+ err = concurrency.ForEachJob(ctx, len(indexes), readDBsConcurrency, func(_ context.Context, idx int) error {
workNum := idx
// skip seed file
if workNum == compactedFileIdx {
diff --git a/pkg/storage/stores/shipper/indexshipper/boltdb/table.go b/pkg/storage/stores/shipper/indexshipper/boltdb/table.go
index f1893ccc33563..abb40f0bf1684 100644
--- a/pkg/storage/stores/shipper/indexshipper/boltdb/table.go
+++ b/pkg/storage/stores/shipper/indexshipper/boltdb/table.go
@@ -109,7 +109,7 @@ func (lt *Table) Snapshot() error {
for name, db := range lt.dbs {
level.Debug(util_log.Logger).Log("msg", fmt.Sprintf("checking db %s for snapshot", name))
srcWriteCount := int64(0)
- err := db.View(func(tx *bbolt.Tx) error {
+ err := db.View(func(_ *bbolt.Tx) error {
srcWriteCount = db.Stats().TxStats.Write
return nil
})
diff --git a/pkg/storage/stores/shipper/indexshipper/boltdb/table_manager_test.go b/pkg/storage/stores/shipper/indexshipper/boltdb/table_manager_test.go
index 9cd73fe3e60c6..4632a40c7c863 100644
--- a/pkg/storage/stores/shipper/indexshipper/boltdb/table_manager_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/boltdb/table_manager_test.go
@@ -143,7 +143,7 @@ func TestLoadTables(t *testing.T) {
for tableName, expectedIndex := range expectedTables {
// loaded tables should not have any index files, it should have handed them over to index shipper
testutil.VerifyIndexes(t, userID, []index.Query{{TableName: tableName}},
- func(ctx context.Context, table string, callback func(b *bbolt.DB) error) error {
+ func(ctx context.Context, _ string, callback func(_ *bbolt.DB) error) error {
return tm.tables[tableName].ForEach(ctx, callback)
},
0, 0)
@@ -187,7 +187,7 @@ func TestTableManager_BatchWrite(t *testing.T) {
for tableName, expectedIndex := range tc {
require.NoError(t, tm.tables[tableName].Snapshot())
testutil.VerifyIndexes(t, userID, []index.Query{{TableName: tableName}},
- func(ctx context.Context, table string, callback func(b *bbolt.DB) error) error {
+ func(_ context.Context, _ string, callback func(_ *bbolt.DB) error) error {
return tm.tables[tableName].ForEach(context.Background(), callback)
},
expectedIndex.start, expectedIndex.numRecords)
diff --git a/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go b/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go
index 8eae835b441c3..8edd121071c5e 100644
--- a/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go
+++ b/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go
@@ -20,7 +20,6 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk/client/util"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/index"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/storage"
- util_log "github.com/grafana/loki/v3/pkg/util/log"
"github.com/grafana/loki/v3/pkg/util/spanlogger"
)
@@ -32,7 +31,7 @@ const (
var errIndexListCacheTooStale = fmt.Errorf("index list cache too stale")
type IndexSet interface {
- Init(forQuerying bool) error
+ Init(forQuerying bool, logger log.Logger) error
Close()
ForEach(ctx context.Context, callback index.ForEachIndexCallback) error
ForEachConcurrent(ctx context.Context, callback index.ForEachIndexCallback) error
@@ -94,14 +93,12 @@ func NewIndexSet(tableName, userID, cacheLocation string, baseIndexSet storage.I
}
// Init downloads all the db files for the table from object storage.
-func (t *indexSet) Init(forQuerying bool) (err error) {
+func (t *indexSet) Init(forQuerying bool, logger log.Logger) (err error) {
// Using background context to avoid cancellation of download when request times out.
// We would anyways need the files for serving next requests.
ctx := context.Background()
ctx, t.cancelFunc = context.WithTimeout(ctx, downloadTimeout)
- logger, ctx := spanlogger.NewWithLogger(ctx, t.logger, "indexSet.Init")
-
defer func() {
if err != nil {
level.Error(logger).Log("msg", "failed to initialize table, cleaning it up", "table", t.tableName, "err", err)
@@ -186,7 +183,7 @@ func (t *indexSet) ForEach(ctx context.Context, callback index.ForEachIndexCallb
}
defer t.indexMtx.rUnlock()
- logger := util_log.WithContext(ctx, t.logger)
+ logger := spanlogger.FromContextWithFallback(ctx, t.logger)
level.Debug(logger).Log("index-files-count", len(t.index))
for _, idx := range t.index {
@@ -205,7 +202,7 @@ func (t *indexSet) ForEachConcurrent(ctx context.Context, callback index.ForEach
}
defer t.indexMtx.rUnlock()
- logger := util_log.WithContext(ctx, t.logger)
+ logger := spanlogger.FromContextWithFallback(ctx, t.logger)
level.Debug(logger).Log("index-files-count", len(t.index))
if len(t.index) == 0 {
diff --git a/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go b/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go
index 5a2f6522de9f2..988c0457fd190 100644
--- a/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go
@@ -26,7 +26,7 @@ func buildTestIndexSet(t *testing.T, userID, path string) (*indexSet, stopFunc)
}, util_log.Logger)
require.NoError(t, err)
- require.NoError(t, idxSet.Init(false))
+ require.NoError(t, idxSet.Init(false, util_log.Logger))
return idxSet.(*indexSet), idxSet.Close
}
diff --git a/pkg/storage/stores/shipper/indexshipper/downloads/table.go b/pkg/storage/stores/shipper/indexshipper/downloads/table.go
index f329c3b41dcd8..1bae83c51e0e9 100644
--- a/pkg/storage/stores/shipper/indexshipper/downloads/table.go
+++ b/pkg/storage/stores/shipper/indexshipper/downloads/table.go
@@ -109,13 +109,14 @@ func LoadTable(name, cacheLocation string, storageClient storage.Client, openInd
}
userID := entry.Name()
+ logger := loggerWithUserID(table.logger, userID)
userIndexSet, err := NewIndexSet(name, userID, filepath.Join(cacheLocation, userID),
- table.baseUserIndexSet, openIndexFileFunc, loggerWithUserID(table.logger, userID))
+ table.baseUserIndexSet, openIndexFileFunc, logger)
if err != nil {
return nil, err
}
- err = userIndexSet.Init(false)
+ err = userIndexSet.Init(false, logger)
if err != nil {
return nil, err
}
@@ -129,7 +130,7 @@ func LoadTable(name, cacheLocation string, storageClient storage.Client, openInd
return nil, err
}
- err = commonIndexSet.Init(false)
+ err = commonIndexSet.Init(false, table.logger)
if err != nil {
return nil, err
}
@@ -287,7 +288,7 @@ func (t *table) Sync(ctx context.Context) error {
// forQuerying must be set to true only getting the index for querying since
// it captures the amount of time it takes to download the index at query time.
func (t *table) getOrCreateIndexSet(ctx context.Context, id string, forQuerying bool) (IndexSet, error) {
- logger := spanlogger.FromContextWithFallback(ctx, log.With(t.logger, "user", id, "table", t.name))
+ logger := spanlogger.FromContextWithFallback(ctx, loggerWithUserID(t.logger, id))
t.indexSetsMtx.RLock()
indexSet, ok := t.indexSets[id]
@@ -311,7 +312,7 @@ func (t *table) getOrCreateIndexSet(ctx context.Context, id string, forQuerying
}
// instantiate the index set, add it to the map
- indexSet, err = NewIndexSet(t.name, id, filepath.Join(t.cacheLocation, id), baseIndexSet, t.openIndexFileFunc, logger)
+ indexSet, err = NewIndexSet(t.name, id, filepath.Join(t.cacheLocation, id), baseIndexSet, t.openIndexFileFunc, loggerWithUserID(t.logger, id))
if err != nil {
return nil, err
}
@@ -321,7 +322,7 @@ func (t *table) getOrCreateIndexSet(ctx context.Context, id string, forQuerying
// it is up to the caller to wait for its readiness using IndexSet.AwaitReady()
go func() {
start := time.Now()
- err := indexSet.Init(forQuerying)
+ err := indexSet.Init(forQuerying, logger)
duration := time.Since(start)
level.Info(logger).Log("msg", "init index set", "duration", duration, "success", err == nil)
diff --git a/pkg/storage/stores/shipper/indexshipper/testutil/testutil.go b/pkg/storage/stores/shipper/indexshipper/testutil/testutil.go
index 48f5990dc0790..8f602aad855e0 100644
--- a/pkg/storage/stores/shipper/indexshipper/testutil/testutil.go
+++ b/pkg/storage/stores/shipper/indexshipper/testutil/testutil.go
@@ -94,7 +94,7 @@ func VerifySingleIndexFile(t *testing.T, query index.Query, db *bbolt.DB, bucket
func makeTestCallback(t *testing.T, minValue, maxValue int, records map[string]string) index.QueryPagesCallback {
t.Helper()
recordsMtx := sync.Mutex{}
- return func(query index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
+ return func(_ index.Query, batch index.ReadBatchResult) (shouldContinue bool) {
itr := batch.Iterator()
for itr.Next() {
require.Equal(t, itr.RangeValue(), itr.Value())
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go b/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go
index 5c2ae28d89351..df8ea85465142 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/compactor.go
@@ -99,7 +99,7 @@ func (t *tableCompactor) CompactTable() error {
downloadPaths := make([]string, len(multiTenantIndexes))
// concurrently download and open all the multi-tenant indexes
- err := concurrency.ForEachJob(t.ctx, len(multiTenantIndexes), readDBsConcurrency, func(ctx context.Context, job int) error {
+ err := concurrency.ForEachJob(t.ctx, len(multiTenantIndexes), readDBsConcurrency, func(_ context.Context, job int) error {
downloadedAt, err := t.commonIndexSet.GetSourceFile(multiTenantIndexes[job])
if err != nil {
return err
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/compactor_test.go b/pkg/storage/stores/shipper/indexshipper/tsdb/compactor_test.go
index 5f8a5b1e6d9d5..23a951deacbd6 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/compactor_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/compactor_test.go
@@ -144,7 +144,7 @@ func setupMultiTenantIndex(t *testing.T, indexFormat int, userStreams map[string
_, err := b.Build(
context.Background(),
t.TempDir(),
- func(from, through model.Time, checksum uint32) Identifier {
+ func(_, _ model.Time, _ uint32) Identifier {
return dst
},
)
@@ -609,7 +609,7 @@ func TestCompactor_Compact(t *testing.T) {
require.NoError(t, err)
actualChunks = map[string]index.ChunkMetas{}
- err = indexFile.(*TSDBFile).Index.(*TSDBIndex).ForSeries(context.Background(), "", nil, 0, math.MaxInt64, func(lbls labels.Labels, fp model.Fingerprint, chks []index.ChunkMeta) (stop bool) {
+ err = indexFile.(*TSDBFile).Index.(*TSDBIndex).ForSeries(context.Background(), "", nil, 0, math.MaxInt64, func(lbls labels.Labels, _ model.Fingerprint, chks []index.ChunkMeta) (stop bool) {
actualChunks[lbls.String()] = chks
return false
}, labels.MustNewMatcher(labels.MatchEqual, "", ""))
@@ -824,7 +824,7 @@ func TestCompactedIndex(t *testing.T) {
require.NoError(t, err)
foundChunks := map[string]index.ChunkMetas{}
- err = indexFile.(*TSDBFile).Index.(*TSDBIndex).ForSeries(context.Background(), "", nil, 0, math.MaxInt64, func(lbls labels.Labels, fp model.Fingerprint, chks []index.ChunkMeta) (stop bool) {
+ err = indexFile.(*TSDBFile).Index.(*TSDBIndex).ForSeries(context.Background(), "", nil, 0, math.MaxInt64, func(lbls labels.Labels, _ model.Fingerprint, chks []index.ChunkMeta) (stop bool) {
foundChunks[lbls.String()] = append(index.ChunkMetas{}, chks...)
return false
}, labels.MustNewMatcher(labels.MatchEqual, "", ""))
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go b/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go
index f3cb7653cbe9f..7a34ecfdeb355 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/index/index.go
@@ -1347,44 +1347,37 @@ func newReader(b ByteSlice, c io.Closer) (*Reader, error) {
// Earlier V1 formats don't have a sorted postings offset table, so
// load the whole offset table into memory.
r.postingsV1 = map[string]map[string]uint64{}
- if err := ReadOffsetTable(r.b, r.toc.PostingsTable, func(key []string, off uint64, _ int) error {
- if len(key) != 2 {
- return errors.Errorf("unexpected key length for posting table %d", len(key))
+ if err := ReadOffsetTable(r.b, r.toc.PostingsTable, func(name, value []byte, off uint64, _ int) error {
+ if _, ok := r.postingsV1[string(name)]; !ok {
+ r.postingsV1[string(name)] = map[string]uint64{}
+ r.postings[string(name)] = nil // Used to get a list of labelnames in places.
}
- if _, ok := r.postingsV1[key[0]]; !ok {
- r.postingsV1[key[0]] = map[string]uint64{}
- r.postings[key[0]] = nil // Used to get a list of labelnames in places.
- }
- r.postingsV1[key[0]][key[1]] = off
+ r.postingsV1[string(name)][string(value)] = off
return nil
}); err != nil {
return nil, errors.Wrap(err, "read postings table")
}
} else {
- var lastKey []string
+ var lastName, lastValue []byte
lastOff := 0
valueCount := 0
// For the postings offset table we keep every label name but only every nth
// label value (plus the first and last one), to save memory.
- if err := ReadOffsetTable(r.b, r.toc.PostingsTable, func(key []string, _ uint64, off int) error {
- if len(key) != 2 {
- return errors.Errorf("unexpected key length for posting table %d", len(key))
- }
- if _, ok := r.postings[key[0]]; !ok {
+ if err := ReadOffsetTable(r.b, r.toc.PostingsTable, func(name, value []byte, _ uint64, off int) error {
+ if _, ok := r.postings[string(name)]; !ok {
// Next label name.
- r.postings[key[0]] = []postingOffset{}
- if lastKey != nil {
+ r.postings[string(name)] = []postingOffset{}
+ if lastName != nil {
// Always include last value for each label name.
- r.postings[lastKey[0]] = append(r.postings[lastKey[0]], postingOffset{value: lastKey[1], off: lastOff})
+ r.postings[string(lastName)] = append(r.postings[string(lastName)], postingOffset{value: string(lastValue), off: lastOff})
}
- lastKey = nil
valueCount = 0
}
if valueCount%symbolFactor == 0 {
- r.postings[key[0]] = append(r.postings[key[0]], postingOffset{value: key[1], off: off})
- lastKey = nil
+ r.postings[string(name)] = append(r.postings[string(name)], postingOffset{value: string(value), off: off})
+ lastName, lastValue = nil, nil
} else {
- lastKey = key
+ lastName, lastValue = name, value
lastOff = off
}
valueCount++
@@ -1392,8 +1385,8 @@ func newReader(b ByteSlice, c io.Closer) (*Reader, error) {
}); err != nil {
return nil, errors.Wrap(err, "read postings table")
}
- if lastKey != nil {
- r.postings[lastKey[0]] = append(r.postings[lastKey[0]], postingOffset{value: lastKey[1], off: lastOff})
+ if lastName != nil {
+ r.postings[string(lastName)] = append(r.postings[string(lastName)], postingOffset{value: string(lastValue), off: lastOff})
}
// Trim any extra space in the slices.
for k, v := range r.postings {
@@ -1443,15 +1436,12 @@ type Range struct {
// for all postings lists.
func (r *Reader) PostingsRanges() (map[labels.Label]Range, error) {
m := map[labels.Label]Range{}
- if err := ReadOffsetTable(r.b, r.toc.PostingsTable, func(key []string, off uint64, _ int) error {
- if len(key) != 2 {
- return errors.Errorf("unexpected key length for posting table %d", len(key))
- }
+ if err := ReadOffsetTable(r.b, r.toc.PostingsTable, func(name, value []byte, off uint64, _ int) error {
d := encoding.DecWrap(tsdb_enc.NewDecbufAt(r.b, int(off), castagnoliTable))
if d.Err() != nil {
return d.Err()
}
- m[labels.Label{Name: key[0], Value: key[1]}] = Range{
+ m[labels.Label{Name: string(name), Value: string(value)}] = Range{
Start: int64(off) + 4,
End: int64(off) + 4 + int64(d.Len()),
}
@@ -1606,27 +1596,24 @@ func (s symbolsIter) Err() error { return s.err }
// ReadOffsetTable reads an offset table and at the given position calls f for each
// found entry. If f returns an error it stops decoding and returns the received error.
-func ReadOffsetTable(bs ByteSlice, off uint64, f func([]string, uint64, int) error) error {
+func ReadOffsetTable(bs ByteSlice, off uint64, f func(name, value []byte, postingsOffset uint64, labelOffset int) error) error {
d := encoding.DecWrap(tsdb_enc.NewDecbufAt(bs, int(off), castagnoliTable))
startLen := d.Len()
cnt := d.Be32()
for d.Err() == nil && d.Len() > 0 && cnt > 0 {
offsetPos := startLen - d.Len()
- keyCount := d.Uvarint()
- // The Postings offset table takes only 2 keys per entry (name and value of label),
- // and the LabelIndices offset table takes only 1 key per entry (a label name).
- // Hence setting the size to max of both, i.e. 2.
- keys := make([]string, 0, 2)
-
- for i := 0; i < keyCount; i++ {
- keys = append(keys, d.UvarintStr())
+ if keyCount := d.Uvarint(); keyCount != 2 {
+ return fmt.Errorf("unexpected number of keys for postings offset table %d", keyCount)
}
+
+ name := d.UvarintBytes()
+ value := d.UvarintBytes()
o := d.Uvarint64()
if d.Err() != nil {
break
}
- if err := f(keys, o, offsetPos); err != nil {
+ if err := f(name, value, o, offsetPos); err != nil {
return err
}
cnt--
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/index/index_test.go b/pkg/storage/stores/shipper/indexshipper/tsdb/index/index_test.go
index 2f8576b825649..2a1b4f4d58dcc 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/index/index_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/index/index_test.go
@@ -17,6 +17,7 @@ import (
"context"
"fmt"
"hash/crc32"
+ "io"
"math"
"math/rand"
"os"
@@ -198,28 +199,30 @@ func TestIndexRW_Postings(t *testing.T) {
require.NoError(t, p.Err())
// The label indices are no longer used, so test them by hand here.
- labelIndices := map[string][]string{}
- require.NoError(t, ReadOffsetTable(ir.b, ir.toc.LabelIndicesTable, func(key []string, off uint64, _ int) error {
- if len(key) != 1 {
- return errors.Errorf("unexpected key length for label indices table %d", len(key))
- }
+ labelValuesOffsets := map[string]uint64{}
+ d := tsdb_enc.NewDecbufAt(ir.b, int(ir.toc.LabelIndicesTable), castagnoliTable)
+ cnt := d.Be32()
+
+ for d.Err() == nil && d.Len() > 0 && cnt > 0 {
+ require.Equal(t, 1, d.Uvarint(), "Unexpected number of keys for label indices table")
+ lbl := d.UvarintStr()
+ off := d.Uvarint64()
+ labelValuesOffsets[lbl] = off
+ cnt--
+ }
+ require.NoError(t, d.Err())
+ labelIndices := map[string][]string{}
+ for lbl, off := range labelValuesOffsets {
d := tsdb_enc.NewDecbufAt(ir.b, int(off), castagnoliTable)
- vals := []string{}
- nc := d.Be32int()
- if nc != 1 {
- return errors.Errorf("unexpected number of label indices table names %d", nc)
- }
- for i := d.Be32(); i > 0; i-- {
+ require.Equal(t, 1, d.Be32int(), "Unexpected number of label indices table names")
+ for i := d.Be32(); i > 0 && d.Err() == nil; i-- {
v, err := ir.lookupSymbol(d.Be32())
- if err != nil {
- return err
- }
- vals = append(vals, v)
+ require.NoError(t, err)
+ labelIndices[lbl] = append(labelIndices[lbl], v)
}
- labelIndices[key[0]] = vals
- return d.Err()
- }))
+ require.NoError(t, d.Err())
+ }
require.Equal(t, map[string][]string{
"a": {"1"},
"b": {"1", "2", "3", "4"},
@@ -940,3 +943,71 @@ func TestChunkSamples_getChunkSampleForQueryStarting(t *testing.T) {
})
}
}
+
+func BenchmarkInitReader_ReadOffsetTable(b *testing.B) {
+ dir := b.TempDir()
+ idxFile := filepath.Join(dir, IndexFilename)
+
+ lbls, err := labels.ReadLabels(filepath.Join("..", "testdata", "20kseries.json"), 1000)
+ require.NoError(b, err)
+
+ // Sort labels as the index writer expects series in sorted order by fingerprint.
+ sort.Slice(lbls, func(i, j int) bool {
+ return lbls[i].Hash() < lbls[j].Hash()
+ })
+
+ symbols := map[string]struct{}{}
+ for _, lset := range lbls {
+ for _, l := range lset {
+ symbols[l.Name] = struct{}{}
+ symbols[l.Value] = struct{}{}
+ }
+ }
+
+ var input indexWriterSeriesSlice
+
+ // Generate ChunkMetas for every label set.
+ for _, lset := range lbls {
+ input = append(input, &indexWriterSeries{
+ labels: lset,
+ chunks: []ChunkMeta{
+ {
+ MinTime: 0,
+ MaxTime: 1,
+ Checksum: rand.Uint32(),
+ },
+ },
+ })
+ }
+
+ iw, err := NewWriter(context.Background(), FormatV3, idxFile)
+ require.NoError(b, err)
+
+ var syms []string
+ for s := range symbols {
+ syms = append(syms, s)
+ }
+ sort.Strings(syms)
+ for _, s := range syms {
+ require.NoError(b, iw.AddSymbol(s))
+ }
+
+ for i, s := range input {
+ err = iw.AddSeries(storage.SeriesRef(i), s.labels, model.Fingerprint(s.labels.Hash()), s.chunks...)
+ require.NoError(b, err)
+ }
+
+ err = iw.Close()
+ require.NoError(b, err)
+
+ bs, err := os.ReadFile(idxFile)
+ require.NoError(b, err)
+
+ b.ResetTimer()
+ b.ReportAllocs()
+ for i := 0; i < b.N; i++ {
+ r, err := newReader(RealByteSlice(bs), io.NopCloser(nil))
+ require.NoError(b, err)
+ require.NoError(b, r.Close())
+ }
+}
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/manager.go b/pkg/storage/stores/shipper/indexshipper/tsdb/manager.go
index c50d3e00f12f7..96f56d7021f45 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/manager.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/manager.go
@@ -221,7 +221,7 @@ func (m *tsdbManager) buildFromHead(heads *tenantHeads, indexShipper indexshippe
_, err = b.Build(
context.Background(),
filepath.Join(managerScratchDir(m.dir), m.name),
- func(from, through model.Time, checksum uint32) Identifier {
+ func(_, _ model.Time, _ uint32) Identifier {
return dst
},
)
diff --git a/pkg/storage/stores/shipper/indexshipper/tsdb/single_file_index.go b/pkg/storage/stores/shipper/indexshipper/tsdb/single_file_index.go
index 255425b286f22..6bd7e6e79a251 100644
--- a/pkg/storage/stores/shipper/indexshipper/tsdb/single_file_index.go
+++ b/pkg/storage/stores/shipper/indexshipper/tsdb/single_file_index.go
@@ -218,7 +218,7 @@ func (i *TSDBIndex) GetChunkRefs(ctx context.Context, userID string, from, throu
}
res = res[:0]
- if err := i.ForSeries(ctx, "", fpFilter, from, through, func(ls labels.Labels, fp model.Fingerprint, chks []index.ChunkMeta) (stop bool) {
+ if err := i.ForSeries(ctx, "", fpFilter, from, through, func(_ labels.Labels, fp model.Fingerprint, chks []index.ChunkMeta) (stop bool) {
for _, chk := range chks {
res = append(res, ChunkRef{
diff --git a/pkg/storage/stores/shipper/indexshipper/util/queries_test.go b/pkg/storage/stores/shipper/indexshipper/util/queries_test.go
index a33da42c264f0..4622d027aa736 100644
--- a/pkg/storage/stores/shipper/indexshipper/util/queries_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/util/queries_test.go
@@ -64,7 +64,7 @@ func TestDoParallelQueries(t *testing.T) {
queries: map[string]index.Query{},
}
- err := DoParallelQueries(context.Background(), tableQuerier.MultiQueries, queries, func(query index.Query, batch index.ReadBatchResult) bool {
+ err := DoParallelQueries(context.Background(), tableQuerier.MultiQueries, queries, func(_ index.Query, _ index.ReadBatchResult) bool {
return false
})
require.NoError(t, err)
diff --git a/pkg/storage/wal/segment.go b/pkg/storage/wal/segment.go
index 93b824bbcb70e..83841aab1697e 100644
--- a/pkg/storage/wal/segment.go
+++ b/pkg/storage/wal/segment.go
@@ -177,7 +177,7 @@ func (b *SegmentWriter) Append(tenantID, labelsString string, lbls labels.Labels
b.lastAppend = now
for _, e := range entries {
- b.inputSize.Add(int64(len(e.Line)))
+ b.inputSize.Add(int64(len(e.Line))) // todo(cyriltovena): should add the size of structured metadata
}
id := streamID{labels: labelsString, tenant: tenantID}
s := b.getOrCreateStream(id, lbls)
diff --git a/pkg/tool/commands/rules.go b/pkg/tool/commands/rules.go
index 4abc14162eddd..2eb5c0eb6cdd3 100644
--- a/pkg/tool/commands/rules.go
+++ b/pkg/tool/commands/rules.go
@@ -628,7 +628,7 @@ func (r *RuleCommand) prepare(_ *kingpin.ParseContext) error {
}
// Do not apply the aggregation label to excluded rule groups.
- applyTo := func(group rwrulefmt.RuleGroup, rule rulefmt.RuleNode) bool {
+ applyTo := func(group rwrulefmt.RuleGroup, _ rulefmt.RuleNode) bool {
_, excluded := r.aggregationLabelExcludedRuleGroupsList[group.Name]
return !excluded
}
diff --git a/pkg/tool/rules/rules.go b/pkg/tool/rules/rules.go
index eccfbdabe45a4..4ac84f7da92c4 100644
--- a/pkg/tool/rules/rules.go
+++ b/pkg/tool/rules/rules.go
@@ -148,7 +148,7 @@ func (r RuleNamespace) AggregateBy(label string, applyTo func(group rwrulefmt.Ru
// exprNodeInspectorFunc returns a PromQL inspector.
// It modifies most PromQL expressions to include a given label.
func exprNodeInspectorFunc(rule rulefmt.RuleNode, label string) func(node parser.Node, path []parser.Node) error {
- return func(node parser.Node, path []parser.Node) error {
+ return func(node parser.Node, _ []parser.Node) error {
var err error
switch n := node.(type) {
case *parser.AggregateExpr:
diff --git a/pkg/tool/rules/rules_test.go b/pkg/tool/rules/rules_test.go
index fba13040d49b8..8c24a7d8ab490 100644
--- a/pkg/tool/rules/rules_test.go
+++ b/pkg/tool/rules/rules_test.go
@@ -176,7 +176,7 @@ func TestAggregateBy(t *testing.T) {
},
},
},
- applyTo: func(group rwrulefmt.RuleGroup, rule rulefmt.RuleNode) bool {
+ applyTo: func(group rwrulefmt.RuleGroup, _ rulefmt.RuleNode) bool {
return group.Name != "CountSkipped"
},
expectedExpr: []string{`count by (namespace, cluster) (test_series) > 1`, `count by (namespace) (test_series) > 1`},
diff --git a/pkg/util/atomicfs/fsync.go b/pkg/util/atomicfs/fsync.go
new file mode 100644
index 0000000000000..5d85865c495cc
--- /dev/null
+++ b/pkg/util/atomicfs/fsync.go
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package atomicfs
+
+import (
+ "io"
+ "os"
+ "path/filepath"
+
+ "github.com/grafana/dskit/multierror"
+)
+
+// Create creates a new file at a temporary path that will be renamed to the
+// supplied path on close from a temporary file in the same directory, ensuring
+// all data and the containing directory have been fsynced to disk.
+func Create(path string) (*File, error) {
+ // We rename from a temporary file in the same directory to because rename
+ // can only operate on two files that are on the same filesystem. Creating
+ // a temporary file in the same directory is an easy way to guarantee that.
+ final := filepath.Clean(path)
+ tmp := tempPath(final)
+
+ file, err := os.Create(tmp)
+ if err != nil {
+ return nil, err
+ }
+
+ return &File{
+ File: file,
+ finalPath: final,
+ }, nil
+}
+
+// tempPath returns a path for the temporary version of a file. This function exists
+// to ensure the logic here stays in sync with unit tests that check for this file being
+// cleaned up.
+func tempPath(final string) string {
+ return final + ".tmp"
+}
+
+// File is a wrapper around an os.File instance that uses a temporary file for writes
+// that is renamed to its final path when Close is called. The Close method will also
+// ensure that all data from the file has been fsynced as well as the containing
+// directory. If the temporary file cannot be renamed or fsynced on Close, it is
+// removed.
+type File struct {
+ *os.File
+ finalPath string
+}
+
+func (a *File) Close() error {
+ cleanup := true
+ defer func() {
+ if cleanup {
+ _ = os.Remove(a.File.Name())
+ }
+ }()
+
+ merr := multierror.New()
+ merr.Add(a.File.Sync())
+ merr.Add(a.File.Close())
+ if err := merr.Err(); err != nil {
+ return err
+ }
+
+ if err := os.Rename(a.File.Name(), a.finalPath); err != nil {
+ return err
+ }
+
+ cleanup = false
+ // After writing the file and calling fsync on it, fsync the containing directory
+ // to ensure the directory entry is persisted to disk.
+ //
+ // From https://man7.org/linux/man-pages/man2/fsync.2.html
+ // > Calling fsync() does not necessarily ensure that the entry in the
+ // > directory containing the file has also reached disk. For that an
+ // > explicit fsync() on a file descriptor for the directory is also
+ // > needed.
+ dir, err := os.Open(filepath.Dir(a.finalPath))
+ if err != nil {
+ return err
+ }
+
+ merr.Add(dir.Sync())
+ merr.Add(dir.Close())
+ return merr.Err()
+}
+
+// CreateFile safely writes the contents of data to filePath, ensuring that all data
+// has been fsynced as well as the containing directory of the file.
+func CreateFile(filePath string, data io.Reader) error {
+ f, err := Create(filePath)
+ if err != nil {
+ return err
+ }
+
+ _, err = io.Copy(f, data)
+ merr := multierror.New(err)
+ merr.Add(f.Close())
+ return merr.Err()
+}
diff --git a/pkg/util/atomicfs/fsync_test.go b/pkg/util/atomicfs/fsync_test.go
new file mode 100644
index 0000000000000..65933b3d9c16e
--- /dev/null
+++ b/pkg/util/atomicfs/fsync_test.go
@@ -0,0 +1,84 @@
+// SPDX-License-Identifier: AGPL-3.0-only
+
+package atomicfs
+
+import (
+ "os"
+ "path/filepath"
+ "strings"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+func TestCreateFile(t *testing.T) {
+ path := filepath.Join(t.TempDir(), "TestCreateFile")
+ require.NoError(t, CreateFile(path, strings.NewReader("test")))
+
+ // Ensure the temporary file created by CreateFile has been removed.
+ _, err := os.Stat(tempPath(path))
+ require.ErrorIs(t, err, os.ErrNotExist)
+
+ // Ensure the directory entry for the file exists.
+ entries, err := os.ReadDir(filepath.Dir(path))
+ require.NoError(t, err)
+ requireContainsFile(t, entries, path)
+
+ // Check the contents of the file.
+ contents, err := os.ReadFile(path)
+ require.NoError(t, err)
+ require.Equal(t, "test", string(contents))
+}
+
+func TestCreate(t *testing.T) {
+ t.Run("success", func(t *testing.T) {
+ path := filepath.Join(t.TempDir(), "TestCreate")
+ f, err := Create(path)
+ require.NoError(t, err)
+
+ _, err = f.WriteString("test")
+ require.NoError(t, err)
+ require.NoError(t, f.Close())
+
+ // Ensure the directory entry for the file exists.
+ entries, err := os.ReadDir(filepath.Dir(path))
+ require.NoError(t, err)
+ requireContainsFile(t, entries, path)
+
+ // Check the contents of the file.
+ contents, err := os.ReadFile(path)
+ require.NoError(t, err)
+ require.Equal(t, "test", string(contents))
+ })
+
+ t.Run("duplicate close", func(t *testing.T) {
+ path := filepath.Join(t.TempDir(), "TestCreate")
+ f, err := Create(path)
+ require.NoError(t, err)
+
+ _, err = f.WriteString("test")
+ require.NoError(t, err)
+ require.NoError(t, f.Close())
+
+ // File has already been closed, our attempt to fsync and close again should fail.
+ require.ErrorIs(t, f.Close(), os.ErrClosed)
+
+ // Original file _should not_ have been modified by trying to close again.
+ contents, err := os.ReadFile(path)
+ require.NoError(t, err)
+ require.Equal(t, "test", string(contents))
+ })
+
+}
+
+func requireContainsFile(t *testing.T, entries []os.DirEntry, path string) {
+ name := filepath.Base(path)
+
+ for _, entry := range entries {
+ if entry.Name() == name {
+ return
+ }
+ }
+
+ t.Fatalf("expected to find %s in %+v", name, entries)
+}
diff --git a/pkg/util/cfg/dynamic_test.go b/pkg/util/cfg/dynamic_test.go
index b76cc2e79ca94..ab2f568bbf3f8 100644
--- a/pkg/util/cfg/dynamic_test.go
+++ b/pkg/util/cfg/dynamic_test.go
@@ -52,7 +52,7 @@ server:
t.Run("calls ApplyDynamicConfig on provided DynamicCloneable", func(t *testing.T) {
applyDynamicConfigCalled := false
- mockApplyDynamicConfig := func(dst Cloneable) error {
+ mockApplyDynamicConfig := func(_ Cloneable) error {
applyDynamicConfigCalled = true
return nil
}
@@ -113,7 +113,7 @@ type DynamicConfig struct {
func NewDynamicConfig(applyDynamicConfig Source) DynamicConfig {
if applyDynamicConfig == nil {
- applyDynamicConfig = func(config Cloneable) error {
+ applyDynamicConfig = func(_ Cloneable) error {
return nil
}
}
diff --git a/pkg/util/cfg/flag.go b/pkg/util/cfg/flag.go
index c95798883692c..6315ca137beea 100644
--- a/pkg/util/cfg/flag.go
+++ b/pkg/util/cfg/flag.go
@@ -36,7 +36,7 @@ func Flags(args []string, fs *flag.FlagSet) Source {
// dFlags parses the flagset, applying all values set on the slice
func dFlags(fs *flag.FlagSet, args []string) Source {
- return func(dst Cloneable) error {
+ return func(_ Cloneable) error {
// parse the final flagset
return fs.Parse(args)
}
diff --git a/pkg/util/fakeauth/fake_auth.go b/pkg/util/fakeauth/fake_auth.go
index 8f836c9a0c181..250c5726c4ef1 100644
--- a/pkg/util/fakeauth/fake_auth.go
+++ b/pkg/util/fakeauth/fake_auth.go
@@ -55,7 +55,7 @@ var fakeHTTPAuthMiddleware = middleware.Func(func(next http.Handler) http.Handle
})
})
-var fakeGRPCAuthUniaryMiddleware = func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
+var fakeGRPCAuthUniaryMiddleware = func(ctx context.Context, req interface{}, _ *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
ctx = user.InjectOrgID(ctx, "fake")
return handler(ctx, req)
}
diff --git a/pkg/util/httpreq/tags_test.go b/pkg/util/httpreq/tags_test.go
index 830e13c84af49..430d616451f7e 100644
--- a/pkg/util/httpreq/tags_test.go
+++ b/pkg/util/httpreq/tags_test.go
@@ -44,7 +44,7 @@ func TestQueryTags(t *testing.T) {
w := httptest.NewRecorder()
checked := false
- mware := ExtractQueryTagsMiddleware().Wrap(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ mware := ExtractQueryTagsMiddleware().Wrap(http.HandlerFunc(func(_ http.ResponseWriter, req *http.Request) {
require.Equal(t, tc.exp, req.Context().Value(QueryTagsHTTPHeader).(string))
checked = true
}))
@@ -85,7 +85,7 @@ func TestQueryMetrics(t *testing.T) {
w := httptest.NewRecorder()
checked := false
- mware := ExtractQueryMetricsMiddleware().Wrap(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ mware := ExtractQueryMetricsMiddleware().Wrap(http.HandlerFunc(func(_ http.ResponseWriter, req *http.Request) {
require.Equal(t, tc.exp, req.Context().Value(QueryQueueTimeHTTPHeader))
checked = true
}))
diff --git a/pkg/util/jumphash/memcached_client_selector_test.go b/pkg/util/jumphash/memcached_client_selector_test.go
index 0708f06d763e4..939106ad5aac8 100644
--- a/pkg/util/jumphash/memcached_client_selector_test.go
+++ b/pkg/util/jumphash/memcached_client_selector_test.go
@@ -47,7 +47,7 @@ var ips = map[string][]byte{
"microsoft.com:80": net.ParseIP("172.12.34.56"),
}
-var mockTCPResolver = func(network, address string) (*net.TCPAddr, error) {
+var mockTCPResolver = func(_, address string) (*net.TCPAddr, error) {
return &net.TCPAddr{
IP: ips[address],
Port: 0,
diff --git a/pkg/util/limiter/combined_limits.go b/pkg/util/limiter/combined_limits.go
index 92caf2c19d681..3ea2a230634e8 100644
--- a/pkg/util/limiter/combined_limits.go
+++ b/pkg/util/limiter/combined_limits.go
@@ -3,7 +3,6 @@ package limiter
import (
bloombuilder "github.com/grafana/loki/v3/pkg/bloombuild/builder"
bloomplanner "github.com/grafana/loki/v3/pkg/bloombuild/planner"
- "github.com/grafana/loki/v3/pkg/bloomcompactor"
"github.com/grafana/loki/v3/pkg/bloomgateway"
"github.com/grafana/loki/v3/pkg/compactor"
"github.com/grafana/loki/v3/pkg/distributor"
@@ -27,7 +26,6 @@ type CombinedLimits interface {
storage.StoreLimits
indexgateway.Limits
bloomgateway.Limits
- bloomcompactor.Limits
bloomplanner.Limits
bloombuilder.Limits
}
diff --git a/pkg/util/limiter/query_limiter.go b/pkg/util/limiter/query_limiter.go
index 430eee3ebc8be..47f0276c1731a 100644
--- a/pkg/util/limiter/query_limiter.go
+++ b/pkg/util/limiter/query_limiter.go
@@ -97,7 +97,7 @@ func (ql *QueryLimiter) AddChunks(count int) error {
}
if ql.chunkCount.Add(int64(count)) > int64(ql.maxChunksPerQuery) {
- return fmt.Errorf(fmt.Sprintf(ErrMaxChunksPerQueryLimit, ql.maxChunksPerQuery))
+ return fmt.Errorf("%s %d", ErrMaxChunksPerQueryLimit, ql.maxChunksPerQuery)
}
return nil
}
diff --git a/pkg/util/marshal/marshal_test.go b/pkg/util/marshal/marshal_test.go
index c749677f77026..0e08239b20ad5 100644
--- a/pkg/util/marshal/marshal_test.go
+++ b/pkg/util/marshal/marshal_test.go
@@ -1056,7 +1056,7 @@ func Test_WriteTailResponseJSON(t *testing.T) {
{Timestamp: time.Unix(0, 2), Labels: `{app="dropped"}`},
},
},
- NewWebsocketJSONWriter(WebsocketWriterFunc(func(i int, b []byte) error {
+ NewWebsocketJSONWriter(WebsocketWriterFunc(func(_ int, b []byte) error {
require.Equal(t, `{"streams":[{"stream":{"app":"foo"},"values":[["1","foobar"]]}],"dropped_entries":[{"timestamp":"2","labels":{"app":"dropped"}}]}`, string(b))
return nil
})),
diff --git a/pkg/util/metrics_helper.go b/pkg/util/metrics_helper.go
index e4572b4e4a15c..7bf7d3029a260 100644
--- a/pkg/util/metrics_helper.go
+++ b/pkg/util/metrics_helper.go
@@ -5,8 +5,10 @@ import (
"errors"
"fmt"
"math"
+ "strings"
"sync"
+ humanize "github.com/dustin/go-humanize"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
dto "github.com/prometheus/client_model/go"
@@ -841,3 +843,8 @@ func RegisterCounterVec(registerer prometheus.Registerer, namespace, name, help
}
return vec
}
+
+// HumanizeBytes returns a human readable string representation of the given byte value and removes all whitespaces.
+func HumanizeBytes(val uint64) string {
+ return strings.Replace(humanize.Bytes(val), " ", "", 1)
+}
diff --git a/pkg/util/metrics_helper_test.go b/pkg/util/metrics_helper_test.go
index 7ca74ab7b0225..09e80a2afa580 100644
--- a/pkg/util/metrics_helper_test.go
+++ b/pkg/util/metrics_helper_test.go
@@ -1158,3 +1158,18 @@ func verifyLabels(t *testing.T, m prometheus.Collector, filter map[string]string
require.Equal(t, expectedLabels, result)
}
+
+func TestHumanizeBytes(t *testing.T) {
+ tests := map[uint64]string{
+ 1024: "1.0kB",
+ 1024 * 1000: "1.0MB",
+ 1024 * 1000 * 1000: "1.0GB",
+ 10: "10B",
+ }
+
+ for bytes, humanizedBytes := range tests {
+ t.Run(fmt.Sprintf("%d", bytes), func(t *testing.T) {
+ require.Equal(t, humanizedBytes, HumanizeBytes(bytes))
+ })
+ }
+}
diff --git a/pkg/util/querylimits/middleware_test.go b/pkg/util/querylimits/middleware_test.go
index 1861df3ce1f81..acea9fd5d3ebb 100644
--- a/pkg/util/querylimits/middleware_test.go
+++ b/pkg/util/querylimits/middleware_test.go
@@ -12,7 +12,7 @@ import (
)
func Test_MiddlewareWithoutHeader(t *testing.T) {
- nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ nextHandler := http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
limits := ExtractQueryLimitsContext(r.Context())
require.Nil(t, limits)
})
@@ -28,7 +28,7 @@ func Test_MiddlewareWithoutHeader(t *testing.T) {
}
func Test_MiddlewareWithBrokenHeader(t *testing.T) {
- nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ nextHandler := http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
limits := ExtractQueryLimitsContext(r.Context())
require.Nil(t, limits)
})
@@ -56,7 +56,7 @@ func Test_MiddlewareWithHeader(t *testing.T) {
10,
}
- nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ nextHandler := http.HandlerFunc(func(_ http.ResponseWriter, r *http.Request) {
actual := ExtractQueryLimitsContext(r.Context())
require.Equal(t, limits, *actual)
})
diff --git a/pkg/util/ring/ringmanager.go b/pkg/util/ring/ringmanager.go
index b9e0fb9c2a5f5..2834d8ef623b1 100644
--- a/pkg/util/ring/ringmanager.go
+++ b/pkg/util/ring/ringmanager.go
@@ -151,7 +151,7 @@ func (rm *RingManager) startClientMode() error {
rm.Service = services.NewIdleService(func(ctx context.Context) error {
return services.StartManagerAndAwaitHealthy(ctx, rm.subservices)
- }, func(failureCase error) error {
+ }, func(_ error) error {
return services.StopManagerAndAwaitStopped(context.Background(), rm.subservices)
})
diff --git a/pkg/util/server/error_test.go b/pkg/util/server/error_test.go
index 69f2bff163c6c..b0f593c094f14 100644
--- a/pkg/util/server/error_test.go
+++ b/pkg/util/server/error_test.go
@@ -44,7 +44,7 @@ func Test_writeError(t *testing.T) {
{"mixed context and rpc deadline", util.MultiError{context.DeadlineExceeded, status.New(codes.DeadlineExceeded, context.DeadlineExceeded.Error()).Err()}, ErrDeadlineExceeded, http.StatusGatewayTimeout},
{"mixed context, rpc deadline and another", util.MultiError{errors.New("standard error"), context.DeadlineExceeded, status.New(codes.DeadlineExceeded, context.DeadlineExceeded.Error()).Err()}, "3 errors: standard error; context deadline exceeded; rpc error: code = DeadlineExceeded desc = context deadline exceeded", http.StatusInternalServerError},
{"parse error", logqlmodel.ParseError{}, "parse error : ", http.StatusBadRequest},
- {"httpgrpc", httpgrpc.Errorf(http.StatusBadRequest, errors.New("foo").Error()), "foo", http.StatusBadRequest},
+ {"httpgrpc", httpgrpc.Errorf(http.StatusBadRequest, "%s", errors.New("foo").Error()), "foo", http.StatusBadRequest},
{"internal", errors.New("foo"), "foo", http.StatusInternalServerError},
{"query error", storage_errors.ErrQueryMustContainMetricName, storage_errors.ErrQueryMustContainMetricName.Error(), http.StatusBadRequest},
{"wrapped query error", fmt.Errorf("wrapped: %w", storage_errors.ErrQueryMustContainMetricName), "wrapped: " + storage_errors.ErrQueryMustContainMetricName.Error(), http.StatusBadRequest},
diff --git a/pkg/util/server/middleware.go b/pkg/util/server/middleware.go
index 4dd241a6d54d7..9b0ad7071f345 100644
--- a/pkg/util/server/middleware.go
+++ b/pkg/util/server/middleware.go
@@ -14,7 +14,7 @@ func NewPrepopulateMiddleware() middleware.Interface {
return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
err := req.ParseForm()
if err != nil {
- WriteError(httpgrpc.Errorf(http.StatusBadRequest, err.Error()), w)
+ WriteError(httpgrpc.Errorf(http.StatusBadRequest, "%s", err.Error()), w)
return
}
diff --git a/pkg/util/server/middleware_test.go b/pkg/util/server/middleware_test.go
index b2267c919926a..c142775fb13c3 100644
--- a/pkg/util/server/middleware_test.go
+++ b/pkg/util/server/middleware_test.go
@@ -12,7 +12,7 @@ import (
)
func TestPrepopulate(t *testing.T) {
- success := http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+ success := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
_, err := w.Write([]byte("ok"))
require.Nil(t, err)
})
diff --git a/pkg/util/server/recovery_test.go b/pkg/util/server/recovery_test.go
index a8d1d3f1b6b9d..c1717ac2fa7ee 100644
--- a/pkg/util/server/recovery_test.go
+++ b/pkg/util/server/recovery_test.go
@@ -26,17 +26,17 @@ func Test_onPanic(t *testing.T) {
ServeHTTP(rec, req)
require.Equal(t, http.StatusInternalServerError, rec.Code)
- require.Error(t, RecoveryGRPCStreamInterceptor(nil, fakeStream{}, nil, grpc.StreamHandler(func(srv interface{}, stream grpc.ServerStream) error {
+ require.Error(t, RecoveryGRPCStreamInterceptor(nil, fakeStream{}, nil, grpc.StreamHandler(func(_ interface{}, _ grpc.ServerStream) error {
panic("foo")
})))
- _, err = RecoveryGRPCUnaryInterceptor(context.Background(), nil, nil, grpc.UnaryHandler(func(ctx context.Context, req interface{}) (interface{}, error) {
+ _, err = RecoveryGRPCUnaryInterceptor(context.Background(), nil, nil, grpc.UnaryHandler(func(_ context.Context, _ interface{}) (interface{}, error) {
panic("foo")
}))
require.Error(t, err)
_, err = RecoveryMiddleware.
- Wrap(queryrangebase.HandlerFunc(func(ctx context.Context, req queryrangebase.Request) (res queryrangebase.Response, err error) {
+ Wrap(queryrangebase.HandlerFunc(func(_ context.Context, _ queryrangebase.Request) (_ queryrangebase.Response, _ error) {
panic("foo")
})).
Do(context.Background(), nil)
diff --git a/pkg/validation/exporter_test.go b/pkg/validation/exporter_test.go
index 59b4537533e53..45484bbc13b08 100644
--- a/pkg/validation/exporter_test.go
+++ b/pkg/validation/exporter_test.go
@@ -35,8 +35,8 @@ func TestOverridesExporter_noConfig(t *testing.T) {
func TestOverridesExporter_withConfig(t *testing.T) {
tenantLimits := map[string]*Limits{
"tenant-a": {
- MaxQueriersPerTenant: 5,
- BloomCompactorEnabled: true,
+ MaxQueriersPerTenant: 5,
+ BloomCreationEnabled: true,
},
}
overrides, _ := NewOverrides(Limits{}, newMockTenantLimits(tenantLimits))
diff --git a/pkg/validation/limits.go b/pkg/validation/limits.go
index 1ae6ce3a47c63..75128607b882e 100644
--- a/pkg/validation/limits.go
+++ b/pkg/validation/limits.go
@@ -59,8 +59,8 @@ const (
defaultMaxStructuredMetadataSize = "64kb"
defaultMaxStructuredMetadataCount = 128
- defaultBloomCompactorMaxBlockSize = "200MB"
- defaultBloomCompactorMaxBloomSize = "128MB"
+ defaultBloomBuildMaxBlockSize = "200MB"
+ defaultBloomBuildMaxBloomSize = "128MB"
defaultBlockedIngestionStatusCode = 260 // 260 is a custom status code to indicate blocked ingestion
)
@@ -203,21 +203,19 @@ type Limits struct {
BloomGatewayEnabled bool `yaml:"bloom_gateway_enable_filtering" json:"bloom_gateway_enable_filtering" category:"experimental"`
BloomGatewayCacheKeyInterval time.Duration `yaml:"bloom_gateway_cache_key_interval" json:"bloom_gateway_cache_key_interval" category:"experimental"`
- BloomCompactorShardSize int `yaml:"bloom_compactor_shard_size" json:"bloom_compactor_shard_size" category:"experimental"`
- BloomCompactorEnabled bool `yaml:"bloom_compactor_enable_compaction" json:"bloom_compactor_enable_compaction" category:"experimental"`
- BloomCompactorMaxBlockSize flagext.ByteSize `yaml:"bloom_compactor_max_block_size" json:"bloom_compactor_max_block_size" category:"experimental"`
- BloomCompactorMaxBloomSize flagext.ByteSize `yaml:"bloom_compactor_max_bloom_size" json:"bloom_compactor_max_bloom_size" category:"experimental"`
+ BloomBuildMaxBuilders int `yaml:"bloom_build_max_builders" json:"bloom_build_max_builders" category:"experimental"`
+ BloomBuildTaskMaxRetries int `yaml:"bloom_build_task_max_retries" json:"bloom_build_task_max_retries" category:"experimental"`
+ BloomBuilderResponseTimeout time.Duration `yaml:"bloom_build_builder_response_timeout" json:"bloom_build_builder_response_timeout" category:"experimental"`
- BloomCreationEnabled bool `yaml:"bloom_creation_enabled" json:"bloom_creation_enabled" category:"experimental"`
- BloomSplitSeriesKeyspaceBy int `yaml:"bloom_split_series_keyspace_by" json:"bloom_split_series_keyspace_by" category:"experimental"`
- BloomBuildMaxBuilders int `yaml:"bloom_build_max_builders" json:"bloom_build_max_builders" category:"experimental"`
- BuilderResponseTimeout time.Duration `yaml:"bloom_build_builder_response_timeout" json:"bloom_build_builder_response_timeout" category:"experimental"`
- BloomTaskMaxRetries int `yaml:"bloom_build_task_max_retries" json:"bloom_build_task_max_retries" category:"experimental"`
+ BloomCreationEnabled bool `yaml:"bloom_creation_enabled" json:"bloom_creation_enabled" category:"experimental"`
+ BloomSplitSeriesKeyspaceBy int `yaml:"bloom_split_series_keyspace_by" json:"bloom_split_series_keyspace_by" category:"experimental"`
+ BloomNGramLength int `yaml:"bloom_ngram_length" json:"bloom_ngram_length" category:"experimental"`
+ BloomNGramSkip int `yaml:"bloom_ngram_skip" json:"bloom_ngram_skip" category:"experimental"`
+ BloomFalsePositiveRate float64 `yaml:"bloom_false_positive_rate" json:"bloom_false_positive_rate" category:"experimental"`
+ BloomBlockEncoding string `yaml:"bloom_block_encoding" json:"bloom_block_encoding" category:"experimental"`
- BloomNGramLength int `yaml:"bloom_ngram_length" json:"bloom_ngram_length" category:"experimental"`
- BloomNGramSkip int `yaml:"bloom_ngram_skip" json:"bloom_ngram_skip" category:"experimental"`
- BloomFalsePositiveRate float64 `yaml:"bloom_false_positive_rate" json:"bloom_false_positive_rate" category:"experimental"`
- BloomBlockEncoding string `yaml:"bloom_block_encoding" json:"bloom_block_encoding" category:"experimental"`
+ BloomMaxBlockSize flagext.ByteSize `yaml:"bloom_max_block_size" json:"bloom_max_block_size" category:"experimental"`
+ BloomMaxBloomSize flagext.ByteSize `yaml:"bloom_max_bloom_size" json:"bloom_max_bloom_size" category:"experimental"`
AllowStructuredMetadata bool `yaml:"allow_structured_metadata,omitempty" json:"allow_structured_metadata,omitempty" doc:"description=Allow user to send structured metadata in push payload."`
MaxStructuredMetadataSize flagext.ByteSize `yaml:"max_structured_metadata_size" json:"max_structured_metadata_size" doc:"description=Maximum size accepted for structured metadata per log line."`
@@ -265,9 +263,11 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
"app_kubernetes_io_name",
"container",
"container_name",
+ "k8s_container_name",
"component",
"workload",
"job",
+ "k8s_job_name",
}
f.Var((*dskit_flagext.StringSlice)(&l.DiscoverServiceName), "validation.discover-service-name", "If no service_name label exists, Loki maps a single label from the configured list to service_name. If none of the configured labels exist in the stream, label is set to unknown_service. Empty list disables setting the label.")
f.BoolVar(&l.DiscoverLogLevels, "validation.discover-log-levels", true, "Discover and add log levels during ingestion, if not present already. Levels would be added to Structured Metadata with name level/LEVEL/Level/Severity/severity/SEVERITY/lvl/LVL/Lvl (case-sensitive) and one of the values from 'trace', 'debug', 'info', 'warn', 'error', 'critical', 'fatal' (case insensitive).")
@@ -377,33 +377,32 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&l.BloomGatewayShardSize, "bloom-gateway.shard-size", 0, "Experimental. The shard size defines how many bloom gateways should be used by a tenant for querying.")
f.BoolVar(&l.BloomGatewayEnabled, "bloom-gateway.enable-filtering", false, "Experimental. Whether to use the bloom gateway component in the read path to filter chunks.")
-
- f.IntVar(&l.BloomCompactorShardSize, "bloom-compactor.shard-size", 0, "Experimental. The shard size defines how many bloom compactors should be used by a tenant when computing blooms. If it's set to 0, shuffle sharding is disabled.")
- f.BoolVar(&l.BloomCompactorEnabled, "bloom-compactor.enable-compaction", false, "Experimental. Whether to compact chunks into bloom filters.")
- f.IntVar(&l.BloomNGramLength, "bloom-compactor.ngram-length", 4, "Experimental. Length of the n-grams created when computing blooms from log lines.")
- f.IntVar(&l.BloomNGramSkip, "bloom-compactor.ngram-skip", 1, "Experimental. Skip factor for the n-grams created when computing blooms from log lines.")
- f.Float64Var(&l.BloomFalsePositiveRate, "bloom-compactor.false-positive-rate", 0.01, "Experimental. Scalable Bloom Filter desired false-positive rate.")
- f.StringVar(&l.BloomBlockEncoding, "bloom-compactor.block-encoding", "none", "Experimental. Compression algorithm for bloom block pages.")
f.DurationVar(&l.BloomGatewayCacheKeyInterval, "bloom-gateway.cache-key-interval", 15*time.Minute, "Experimental. Interval for computing the cache key in the Bloom Gateway.")
- _ = l.BloomCompactorMaxBlockSize.Set(defaultBloomCompactorMaxBlockSize)
- f.Var(&l.BloomCompactorMaxBlockSize, "bloom-compactor.max-block-size",
+
+ f.IntVar(&l.BloomNGramLength, "bloom-build.ngram-length", 4, "Experimental. Length of the n-grams created when computing blooms from log lines.")
+ f.IntVar(&l.BloomNGramSkip, "bloom-build.ngram-skip", 1, "Experimental. Skip factor for the n-grams created when computing blooms from log lines.")
+ f.Float64Var(&l.BloomFalsePositiveRate, "bloom-build.false-positive-rate", 0.01, "Experimental. Scalable Bloom Filter desired false-positive rate.")
+ f.StringVar(&l.BloomBlockEncoding, "bloom-build.block-encoding", "none", "Experimental. Compression algorithm for bloom block pages.")
+
+ _ = l.BloomMaxBlockSize.Set(defaultBloomBuildMaxBlockSize)
+ f.Var(&l.BloomMaxBlockSize, "bloom-build.max-block-size",
fmt.Sprintf(
"Experimental. The maximum bloom block size. A value of 0 sets an unlimited size. Default is %s. The actual block size might exceed this limit since blooms will be added to blocks until the block exceeds the maximum block size.",
- defaultBloomCompactorMaxBlockSize,
+ defaultBloomBuildMaxBlockSize,
),
)
f.BoolVar(&l.BloomCreationEnabled, "bloom-build.enable", false, "Experimental. Whether to create blooms for the tenant.")
f.IntVar(&l.BloomSplitSeriesKeyspaceBy, "bloom-build.split-keyspace-by", 256, "Experimental. Number of splits to create for the series keyspace when building blooms. The series keyspace is split into this many parts to parallelize bloom creation.")
f.IntVar(&l.BloomBuildMaxBuilders, "bloom-build.max-builders", 0, "Experimental. Maximum number of builders to use when building blooms. 0 allows unlimited builders.")
- f.DurationVar(&l.BuilderResponseTimeout, "bloom-build.builder-response-timeout", 0, "Experimental. Timeout for a builder to finish a task. If a builder does not respond within this time, it is considered failed and the task will be requeued. 0 disables the timeout.")
- f.IntVar(&l.BloomTaskMaxRetries, "bloom-build.task-max-retries", 3, "Experimental. Maximum number of retries for a failed task. If a task fails more than this number of times, it is considered failed and will not be retried. A value of 0 disables this limit.")
+ f.DurationVar(&l.BloomBuilderResponseTimeout, "bloom-build.builder-response-timeout", 0, "Experimental. Timeout for a builder to finish a task. If a builder does not respond within this time, it is considered failed and the task will be requeued. 0 disables the timeout.")
+ f.IntVar(&l.BloomBuildTaskMaxRetries, "bloom-build.task-max-retries", 3, "Experimental. Maximum number of retries for a failed task. If a task fails more than this number of times, it is considered failed and will not be retried. A value of 0 disables this limit.")
- _ = l.BloomCompactorMaxBloomSize.Set(defaultBloomCompactorMaxBloomSize)
- f.Var(&l.BloomCompactorMaxBloomSize, "bloom-compactor.max-bloom-size",
+ _ = l.BloomMaxBloomSize.Set(defaultBloomBuildMaxBloomSize)
+ f.Var(&l.BloomMaxBloomSize, "bloom-build.max-bloom-size",
fmt.Sprintf(
"Experimental. The maximum bloom size per log stream. A log stream whose generated bloom filter exceeds this size will be discarded. A value of 0 sets an unlimited size. Default is %s.",
- defaultBloomCompactorMaxBloomSize,
+ defaultBloomBuildMaxBloomSize,
),
)
@@ -991,14 +990,6 @@ func (o *Overrides) BloomGatewayEnabled(userID string) bool {
return o.getOverridesForUser(userID).BloomGatewayEnabled
}
-func (o *Overrides) BloomCompactorShardSize(userID string) int {
- return o.getOverridesForUser(userID).BloomCompactorShardSize
-}
-
-func (o *Overrides) BloomCompactorEnabled(userID string) bool {
- return o.getOverridesForUser(userID).BloomCompactorEnabled
-}
-
func (o *Overrides) BloomCreationEnabled(userID string) bool {
return o.getOverridesForUser(userID).BloomCreationEnabled
}
@@ -1012,11 +1003,11 @@ func (o *Overrides) BloomBuildMaxBuilders(userID string) int {
}
func (o *Overrides) BuilderResponseTimeout(userID string) time.Duration {
- return o.getOverridesForUser(userID).BuilderResponseTimeout
+ return o.getOverridesForUser(userID).BloomBuilderResponseTimeout
}
func (o *Overrides) BloomTaskMaxRetries(userID string) int {
- return o.getOverridesForUser(userID).BloomTaskMaxRetries
+ return o.getOverridesForUser(userID).BloomBuildTaskMaxRetries
}
func (o *Overrides) BloomNGramLength(userID string) int {
@@ -1027,12 +1018,12 @@ func (o *Overrides) BloomNGramSkip(userID string) int {
return o.getOverridesForUser(userID).BloomNGramSkip
}
-func (o *Overrides) BloomCompactorMaxBlockSize(userID string) int {
- return o.getOverridesForUser(userID).BloomCompactorMaxBlockSize.Val()
+func (o *Overrides) BloomMaxBlockSize(userID string) int {
+ return o.getOverridesForUser(userID).BloomMaxBlockSize.Val()
}
-func (o *Overrides) BloomCompactorMaxBloomSize(userID string) int {
- return o.getOverridesForUser(userID).BloomCompactorMaxBloomSize.Val()
+func (o *Overrides) BloomMaxBloomSize(userID string) int {
+ return o.getOverridesForUser(userID).BloomMaxBloomSize.Val()
}
func (o *Overrides) BloomFalsePositiveRate(userID string) float64 {
diff --git a/production/docker/docker-compose.yaml b/production/docker/docker-compose.yaml
index 30de7f739a28e..3200b4cab6e97 100644
--- a/production/docker/docker-compose.yaml
+++ b/production/docker/docker-compose.yaml
@@ -182,7 +182,7 @@ services:
# alertmanager to enable receiving alerts
alertmanager:
- image: prom/alertmanager:v0.23.0
+ image: prom/alertmanager:v0.27.0
restart: unless-stopped
ports:
- "9093:9093"
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index a9cead21ece46..1b68e6a5b1ca8 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,18 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # ( : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries bellow this line.)
+## 6.12.0
+
+- [ENHANCEMENT] Replace Bloom Compactor component with Bloom Planner and Bloom Builder. These are the new components to build bloom blocks.
+
+## 6.11.0
+
+- [FEATURE] Add support for configuring persistence for memcached.
+
+## 6.10.2
+
+- [CHANGE] Bumped version of `nginxinc/nginx-unprivileged` to 1.27-alpine; this remediates several CVE
+
## 6.10.1
- [CHANGE] Bumped version of `kiwigrid/k8s-sidecar` to 1.27.5; this remediates several CVE
@@ -23,7 +35,6 @@ Entries should include a reference to the pull request that introduced the chang
- [CHANGE] Changed version of Grafana Loki to 3.1.1
- [ENHANCEMENT] Added ability to disable AWS S3 dualstack endpoint usage.
-
## 6.9.0
- [BUGFIX] Fixed how we set imagePullSecrets for the memcached and provisioner.
diff --git a/production/helm/loki/Chart.yaml b/production/helm/loki/Chart.yaml
index 52b52d5af56a6..dcef3406eaac2 100644
--- a/production/helm/loki/Chart.yaml
+++ b/production/helm/loki/Chart.yaml
@@ -3,7 +3,7 @@ name: loki
description: Helm chart for Grafana Loki and Grafana Enterprise Logs supporting both simple, scalable and distributed modes.
type: application
appVersion: 3.1.1
-version: 6.10.1
+version: 6.12.0
home: https://grafana.github.io/helm-charts
sources:
- https://github.com/grafana/loki
diff --git a/production/helm/loki/README.md b/production/helm/loki/README.md
index d0cb010b446b5..766cb151c4969 100644
--- a/production/helm/loki/README.md
+++ b/production/helm/loki/README.md
@@ -1,6 +1,6 @@
# loki
-![Version: 6.10.1](https://img.shields.io/badge/Version-6.10.1-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 3.1.1](https://img.shields.io/badge/AppVersion-3.1.1-informational?style=flat-square)
+![Version: 6.12.0](https://img.shields.io/badge/Version-6.12.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 3.1.1](https://img.shields.io/badge/AppVersion-3.1.1-informational?style=flat-square)
Helm chart for Grafana Loki and Grafana Enterprise Logs supporting both simple, scalable and distributed modes.
diff --git a/production/helm/loki/distributed-values.yaml b/production/helm/loki/distributed-values.yaml
index 0016b724ce573..78a1f111cd246 100644
--- a/production/helm/loki/distributed-values.yaml
+++ b/production/helm/loki/distributed-values.yaml
@@ -47,7 +47,10 @@ indexGateway:
replicas: 2
maxUnavailable: 1
-bloomCompactor:
+# optional experimental components
+bloomPlanner:
+ replicas: 0
+bloomBuilder:
replicas: 0
bloomGateway:
replicas: 0
@@ -66,5 +69,3 @@ write:
singleBinary:
replicas: 0
-
-
diff --git a/production/helm/loki/scenarios/README.md b/production/helm/loki/scenarios/README.md
new file mode 100644
index 0000000000000..1ec8692618613
--- /dev/null
+++ b/production/helm/loki/scenarios/README.md
@@ -0,0 +1,19 @@
+These scenarios are used by Github Workflow: [Publish Rendered Helm Chart Diff](../../../../.github/workflows/helm-loki-ci.yml).
+
+Each scenario is used as the values file for the Loki Helm chart to render Kubernetes manifests in `base` and `PR's` branch to compare the content and report the diff on Pull Request as a comment([example](https://github.com/grafana/loki/pull/14127#issuecomment-2348360828)). It gives the ability to the reviewer to understand how the changes in the chart modify resulting manifests.
+
+![img.png](images/img.png)
+
+The workflow reports three types of changes for each scenario:
+
+1. Added files - the manifests that are added in the current PR and that did not exist in `base` branch.
+
+![added.png](images/added.png)
+
+
+2. Modified files - the manifests that exist in both branches but the changes in PRs branch modify them.
+![modified.png](images/modified.png)
+
+3. Removed files - the manifests that exist in `base` branch but do not exist in PRs branch.
+
+![removed.png](images/removed.png)
\ No newline at end of file
diff --git a/production/helm/loki/scenarios/default-single-binary-values.yaml b/production/helm/loki/scenarios/default-single-binary-values.yaml
new file mode 100644
index 0000000000000..78a1f111cd246
--- /dev/null
+++ b/production/helm/loki/scenarios/default-single-binary-values.yaml
@@ -0,0 +1,71 @@
+---
+loki:
+ schemaConfig:
+ configs:
+ - from: 2024-04-01
+ store: tsdb
+ object_store: s3
+ schema: v13
+ index:
+ prefix: loki_index_
+ period: 24h
+ ingester:
+ chunk_encoding: snappy
+ tracing:
+ enabled: true
+ querier:
+ # Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
+ max_concurrent: 4
+
+#gateway:
+# ingress:
+# enabled: true
+# hosts:
+# - host: FIXME
+# paths:
+# - path: /
+# pathType: Prefix
+
+deploymentMode: Distributed
+
+ingester:
+ replicas: 3
+querier:
+ replicas: 3
+ maxUnavailable: 2
+queryFrontend:
+ replicas: 2
+ maxUnavailable: 1
+queryScheduler:
+ replicas: 2
+distributor:
+ replicas: 3
+ maxUnavailable: 2
+compactor:
+ replicas: 1
+indexGateway:
+ replicas: 2
+ maxUnavailable: 1
+
+# optional experimental components
+bloomPlanner:
+ replicas: 0
+bloomBuilder:
+ replicas: 0
+bloomGateway:
+ replicas: 0
+
+# Enable minio for storage
+minio:
+ enabled: true
+
+# Zero out replica counts of other deployment modes
+backend:
+ replicas: 0
+read:
+ replicas: 0
+write:
+ replicas: 0
+
+singleBinary:
+ replicas: 0
diff --git a/production/helm/loki/scenarios/default-values.yaml b/production/helm/loki/scenarios/default-values.yaml
new file mode 100644
index 0000000000000..a79baee503989
--- /dev/null
+++ b/production/helm/loki/scenarios/default-values.yaml
@@ -0,0 +1,16 @@
+---
+loki:
+ commonConfig:
+ replication_factor: 1
+ useTestSchema: true
+ storage:
+ bucketNames:
+ chunks: chunks
+ ruler: ruler
+ admin: admin
+read:
+ replicas: 1
+write:
+ replicas: 1
+backend:
+ replicas: 1
diff --git a/production/helm/loki/scenarios/images/added.png b/production/helm/loki/scenarios/images/added.png
new file mode 100644
index 0000000000000..ced9f9554a8f8
Binary files /dev/null and b/production/helm/loki/scenarios/images/added.png differ
diff --git a/production/helm/loki/scenarios/images/img.png b/production/helm/loki/scenarios/images/img.png
new file mode 100644
index 0000000000000..81ba701da26a0
Binary files /dev/null and b/production/helm/loki/scenarios/images/img.png differ
diff --git a/production/helm/loki/scenarios/images/modified.png b/production/helm/loki/scenarios/images/modified.png
new file mode 100644
index 0000000000000..39a25bae35b20
Binary files /dev/null and b/production/helm/loki/scenarios/images/modified.png differ
diff --git a/production/helm/loki/scenarios/images/removed.png b/production/helm/loki/scenarios/images/removed.png
new file mode 100644
index 0000000000000..219d64c32c983
Binary files /dev/null and b/production/helm/loki/scenarios/images/removed.png differ
diff --git a/production/helm/loki/scenarios/ingress-values.yaml b/production/helm/loki/scenarios/ingress-values.yaml
new file mode 100644
index 0000000000000..ff5ff1efd9ce7
--- /dev/null
+++ b/production/helm/loki/scenarios/ingress-values.yaml
@@ -0,0 +1,30 @@
+---
+gateway:
+ ingress:
+ enabled: true
+ annotations: {}
+ hosts:
+ - host: gateway.loki.example.com
+ paths:
+ - path: /
+ pathType: Prefix
+loki:
+ commonConfig:
+ replication_factor: 1
+ useTestSchema: true
+ storage:
+ bucketNames:
+ chunks: chunks
+ ruler: ruler
+ admin: admin
+read:
+ replicas: 1
+write:
+ replicas: 1
+backend:
+ replicas: 1
+monitoring:
+ lokiCanary:
+ enabled: false
+test:
+ enabled: false
diff --git a/production/helm/loki/scenarios/legacy-monitoring-values.yaml b/production/helm/loki/scenarios/legacy-monitoring-values.yaml
new file mode 100644
index 0000000000000..ad520e57f2f44
--- /dev/null
+++ b/production/helm/loki/scenarios/legacy-monitoring-values.yaml
@@ -0,0 +1,27 @@
+---
+loki:
+ commonConfig:
+ replication_factor: 1
+ useTestSchema: true
+ storage:
+ bucketNames:
+ chunks: chunks
+ ruler: ruler
+ admin: admin
+read:
+ replicas: 1
+write:
+ replicas: 1
+backend:
+ replicas: 1
+monitoring:
+ enabled: true
+ selfMonitoring:
+ enabled: true
+ grafanaAgent:
+ installOperator: true
+ serviceMonitor:
+ labels:
+ release: "prometheus"
+test:
+ prometheusAddress: "http://prometheus-kube-prometheus-prometheus.prometheus.svc.cluster.local.:9090"
diff --git a/production/helm/loki/scenarios/simple-scalable-aws-kube-irsa-values.yaml b/production/helm/loki/scenarios/simple-scalable-aws-kube-irsa-values.yaml
new file mode 100644
index 0000000000000..28c6c3bbe9162
--- /dev/null
+++ b/production/helm/loki/scenarios/simple-scalable-aws-kube-irsa-values.yaml
@@ -0,0 +1,67 @@
+loki:
+ # -- Storage config. Providing this will automatically populate all necessary storage configs in the templated config.
+ storage:
+ # Loki requires a bucket for chunks and the ruler. GEL requires a third bucket for the admin API.
+ # Please provide these values if you are using object storage.
+ bucketNames:
+ chunks: aws-s3-chunks-bucket
+ ruler: aws-s3-ruler-bucket
+ admin: aws-s3-admin-bucket
+ type: s3
+ s3:
+ region: eu-central-1
+ # -- Check https://grafana.com/docs/loki/latest/configuration/#schema_config for more info on how to configure schemas
+ schemaConfig:
+ configs:
+ - from: "2023-09-19"
+ index:
+ period: 1d
+ prefix: tsdb_index_
+ object_store: s3
+ schema: v13
+ store: tsdb
+######################################################################################################################
+#
+# Enterprise Loki Configs
+#
+######################################################################################################################
+
+# -- Configuration for running Enterprise Loki
+enterprise:
+ # Enable enterprise features, license must be provided
+ enabled: true
+ # -- Grafana Enterprise Logs license
+ license:
+ contents: "content of licence"
+ tokengen:
+ annotations: {
+ eks.amazonaws.com/role-arn: arn:aws:iam::2222222:role/test-role
+ }
+ # -- Configuration for `provisioner` target
+ provisioner:
+ # -- Additional annotations for the `provisioner` Job
+ annotations: {
+ eks.amazonaws.com/role-arn: arn:aws:iam::2222222:role/test-role
+ }
+######################################################################################################################
+#
+# Service Accounts and Kubernetes RBAC
+#
+######################################################################################################################
+serviceAccount:
+ # -- Annotations for the service account
+ annotations:
+ eks.amazonaws.com/role-arn: arn:aws:iam::2222222:role/test-role
+
+# Configuration for the write pod(s)
+write:
+ persistence:
+ storageClass: gp2
+# -- Configuration for the read pod(s)
+read:
+ persistence:
+ storageClass: gp2
+# -- Configuration for the backend pod(s)
+backend:
+ persistence:
+ storageClass: gp2
diff --git a/production/helm/loki/templates/bloom-builder/_helpers-bloom-builder.tpl b/production/helm/loki/templates/bloom-builder/_helpers-bloom-builder.tpl
new file mode 100644
index 0000000000000..46359dffdf004
--- /dev/null
+++ b/production/helm/loki/templates/bloom-builder/_helpers-bloom-builder.tpl
@@ -0,0 +1,32 @@
+{{/*
+bloom-builder fullname
+*/}}
+{{- define "loki.bloomBuilderFullname" -}}
+{{ include "loki.fullname" . }}-bloom-builder
+{{- end }}
+
+{{/*
+bloom-builder common labels
+*/}}
+{{- define "loki.bloomBuilderLabels" -}}
+{{ include "loki.labels" . }}
+app.kubernetes.io/component: bloom-builder
+{{- end }}
+
+{{/*
+bloom-builder selector labels
+*/}}
+{{- define "loki.bloomBuilderSelectorLabels" -}}
+{{ include "loki.selectorLabels" . }}
+app.kubernetes.io/component: bloom-builder
+{{- end }}
+
+{{/*
+bloom-builder priority class name
+*/}}
+{{- define "loki.bloomBuilderPriorityClassName" -}}
+{{- $pcn := coalesce .Values.global.priorityClassName .Values.bloomBuilder.priorityClassName -}}
+{{- if $pcn }}
+priorityClassName: {{ $pcn }}
+{{- end }}
+{{- end }}
diff --git a/production/helm/loki/templates/bloom-builder/deployment-bloom-builder.yaml b/production/helm/loki/templates/bloom-builder/deployment-bloom-builder.yaml
new file mode 100644
index 0000000000000..5735de5da23d4
--- /dev/null
+++ b/production/helm/loki/templates/bloom-builder/deployment-bloom-builder.yaml
@@ -0,0 +1,142 @@
+{{- $isDistributed := eq (include "loki.deployment.isDistributed" .) "true" -}}
+{{- if $isDistributed -}}
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "loki.bloomBuilderFullname" . }}
+ namespace: {{ .Release.Namespace }}
+ labels:
+ {{- include "loki.bloomBuilderLabels" . | nindent 4 }}
+ {{- with .Values.loki.annotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+{{- if not .Values.bloomBuilder.autoscaling.enabled }}
+ replicas: {{ .Values.bloomBuilder.replicas }}
+{{- end }}
+ strategy:
+ rollingUpdate:
+ maxSurge: 0
+ maxUnavailable: 1
+ revisionHistoryLimit: {{ .Values.loki.revisionHistoryLimit }}
+ selector:
+ matchLabels:
+ {{- include "loki.bloomBuilderSelectorLabels" . | nindent 6 }}
+ template:
+ metadata:
+ annotations:
+ {{- include "loki.config.checksum" . | nindent 8 }}
+ {{- with .Values.loki.podAnnotations }}
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.podAnnotations }}
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ labels:
+ {{- include "loki.bloomBuilderSelectorLabels" . | nindent 8 }}
+ app.kubernetes.io/part-of: memberlist
+ {{- with .Values.loki.podLabels }}
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.podLabels }}
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ spec:
+ serviceAccountName: {{ include "loki.serviceAccountName" . }}
+ {{- with .Values.imagePullSecrets }}
+ imagePullSecrets:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.hostAliases }}
+ hostAliases:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- include "loki.bloomBuilderPriorityClassName" . | nindent 6 }}
+ securityContext:
+ {{- toYaml .Values.loki.podSecurityContext | nindent 8 }}
+ terminationGracePeriodSeconds: {{ .Values.bloomBuilder.terminationGracePeriodSeconds }}
+ containers:
+ - name: bloom-builder
+ image: {{ include "loki.image" . }}
+ imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
+ {{- if or .Values.loki.command .Values.bloomBuilder.command }}
+ command:
+ - {{ coalesce .Values.bloomBuilder.command .Values.loki.command | quote }}
+ {{- end }}
+ args:
+ - -config.file=/etc/loki/config/config.yaml
+ - -target=bloom-builder
+ {{- with .Values.bloomBuilder.extraArgs }}
+ {{- toYaml . | nindent 12 }}
+ {{- end }}
+ ports:
+ - name: http-metrics
+ containerPort: 3100
+ protocol: TCP
+ - name: grpc
+ containerPort: 9095
+ protocol: TCP
+ - name: http-memberlist
+ containerPort: 7946
+ protocol: TCP
+ {{- with .Values.bloomBuilder.extraEnv }}
+ env:
+ {{- toYaml . | nindent 12 }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.extraEnvFrom }}
+ envFrom:
+ {{- toYaml . | nindent 12 }}
+ {{- end }}
+ securityContext:
+ {{- toYaml .Values.loki.containerSecurityContext | nindent 12 }}
+ readinessProbe:
+ {{- toYaml .Values.loki.readinessProbe | nindent 12 }}
+ volumeMounts:
+ - name: config
+ mountPath: /etc/loki/config
+ - name: runtime-config
+ mountPath: /etc/loki/runtime-config
+ {{- if .Values.enterprise.enabled }}
+ - name: license
+ mountPath: /etc/loki/license
+ {{- end }}
+ {{- with .Values.bloomBuilder.extraVolumeMounts }}
+ {{- toYaml . | nindent 12 }}
+ {{- end }}
+ resources:
+ {{- toYaml .Values.bloomBuilder.resources | nindent 12 }}
+ {{- if .Values.bloomBuilder.extraContainers }}
+ {{- toYaml .Values.bloomBuilder.extraContainers | nindent 8}}
+ {{- end }}
+ {{- with .Values.bloomBuilder.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ volumes:
+ - name: config
+ {{- include "loki.configVolume" . | nindent 10 }}
+ - name: runtime-config
+ configMap:
+ name: {{ template "loki.name" . }}-runtime
+ {{- if .Values.enterprise.enabled }}
+ - name: license
+ secret:
+ {{- if .Values.enterprise.useExternalLicense }}
+ secretName: {{ .Values.enterprise.externalLicenseName }}
+ {{- else }}
+ secretName: enterprise-logs-license
+ {{- end }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.extraVolumes }}
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+{{- end -}}
diff --git a/production/helm/loki/templates/bloom-builder/hpa.yaml b/production/helm/loki/templates/bloom-builder/hpa.yaml
new file mode 100644
index 0000000000000..2b04647d2aa61
--- /dev/null
+++ b/production/helm/loki/templates/bloom-builder/hpa.yaml
@@ -0,0 +1,55 @@
+{{- $isDistributed := eq (include "loki.deployment.isDistributed" .) "true" -}}
+{{- if and $isDistributed .Values.bloomBuilder.autoscaling.enabled }}
+{{- $apiVersion := include "loki.hpa.apiVersion" . -}}
+apiVersion: {{ $apiVersion }}
+kind: HorizontalPodAutoscaler
+metadata:
+ name: {{ include "loki.bloomBuilderFullname" . }}
+ namespace: {{ .Release.Namespace }}
+ labels:
+ {{- include "loki.bloomBuilderLabels" . | nindent 4 }}
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: {{ include "loki.bloomBuilderFullname" . }}
+ minReplicas: {{ .Values.bloomBuilder.autoscaling.minReplicas }}
+ maxReplicas: {{ .Values.bloomBuilder.autoscaling.maxReplicas }}
+ metrics:
+ {{- with .Values.bloomBuilder.autoscaling.targetMemoryUtilizationPercentage }}
+ - type: Resource
+ resource:
+ name: memory
+ {{- if (eq $apiVersion "autoscaling/v2") }}
+ target:
+ type: Utilization
+ averageUtilization: {{ . }}
+ {{- else }}
+ targetAverageUtilization: {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.autoscaling.targetCPUUtilizationPercentage }}
+ - type: Resource
+ resource:
+ name: cpu
+ {{- if (eq $apiVersion "autoscaling/v2") }}
+ target:
+ type: Utilization
+ averageUtilization: {{ . }}
+ {{- else }}
+ targetAverageUtilization: {{ . }}
+ {{- end }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.autoscaling.customMetrics }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- if .Values.bloomBuilder.autoscaling.behavior.enabled }}
+ behavior:
+ {{- with .Values.bloomBuilder.autoscaling.behavior.scaleDown }}
+ scaleDown: {{ toYaml . | nindent 6 }}
+ {{- end }}
+ {{- with .Values.bloomBuilder.autoscaling.behavior.scaleUp }}
+ scaleUp: {{ toYaml . | nindent 6 }}
+ {{- end }}
+ {{- end }}
+{{- end }}
diff --git a/production/helm/loki/templates/bloom-builder/poddisruptionbudget-bloom-builder.yaml b/production/helm/loki/templates/bloom-builder/poddisruptionbudget-bloom-builder.yaml
new file mode 100644
index 0000000000000..e66d762c0e64d
--- /dev/null
+++ b/production/helm/loki/templates/bloom-builder/poddisruptionbudget-bloom-builder.yaml
@@ -0,0 +1,21 @@
+{{- $isDistributed := eq (include "loki.deployment.isDistributed" .) "true" -}}
+{{- if and $isDistributed (gt (int .Values.bloomBuilder.replicas) 1) }}
+{{- if kindIs "invalid" .Values.bloomBuilder.maxUnavailable }}
+{{- fail "`.Values.bloomBuilder.maxUnavailable` must be set when `.Values.bloomBuilder.replicas` is greater than 1." }}
+{{- else }}
+apiVersion: {{ include "loki.pdb.apiVersion" . }}
+kind: PodDisruptionBudget
+metadata:
+ name: {{ include "loki.bloomBuilderFullname" . }}
+ namespace: {{ .Release.Namespace }}
+ labels:
+ {{- include "loki.bloomBuilderLabels" . | nindent 4 }}
+spec:
+ selector:
+ matchLabels:
+ {{- include "loki.bloomBuilderSelectorLabels" . | nindent 6 }}
+ {{- with .Values.bloomBuilder.maxUnavailable }}
+ maxUnavailable: {{ . }}
+ {{- end }}
+{{- end }}
+{{- end }}
diff --git a/production/helm/loki/templates/bloom-builder/service-bloom-builder-headless.yaml b/production/helm/loki/templates/bloom-builder/service-bloom-builder-headless.yaml
new file mode 100644
index 0000000000000..e089d4d2de40c
--- /dev/null
+++ b/production/helm/loki/templates/bloom-builder/service-bloom-builder-headless.yaml
@@ -0,0 +1,43 @@
+{{- $isDistributed := eq (include "loki.deployment.isDistributed" .) "true" -}}
+{{- if $isDistributed -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "loki.bloomBuilderFullname" . }}-headless
+ namespace: {{ .Release.Namespace }}
+ labels:
+ {{- include "loki.bloomBuilderLabels" . | nindent 4 }}
+ {{- with .Values.bloomBuilder.serviceLabels }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ prometheus.io/service-monitor: "false"
+ {{- with .Values.loki.serviceAnnotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+ clusterIP: None
+ type: ClusterIP
+ publishNotReadyAddresses: true
+ ports:
+ - name: http-metrics
+ port: 3100
+ targetPort: http-metrics
+ protocol: TCP
+ - name: grpc
+ port: 9095
+ targetPort: grpc
+ protocol: TCP
+ {{- if .Values.bloomBuilder.appProtocol.grpc }}
+ appProtocol: {{ .Values.bloomBuilder.appProtocol.grpc }}
+ {{- end }}
+ - name: grpclb
+ port: 9096
+ targetPort: grpc
+ protocol: TCP
+ {{- if .Values.bloomBuilder.appProtocol.grpc }}
+ appProtocol: {{ .Values.bloomBuilder.appProtocol.grpc }}
+ {{- end }}
+ selector:
+ {{- include "loki.bloomBuilderSelectorLabels" . | nindent 4 }}
+{{- end -}}
diff --git a/production/helm/loki/templates/bloom-builder/service-bloom-builder.yaml b/production/helm/loki/templates/bloom-builder/service-bloom-builder.yaml
new file mode 100644
index 0000000000000..aab082d72293f
--- /dev/null
+++ b/production/helm/loki/templates/bloom-builder/service-bloom-builder.yaml
@@ -0,0 +1,41 @@
+{{- $isDistributed := eq (include "loki.deployment.isDistributed" .) "true" -}}
+{{- if $isDistributed -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "loki.bloomBuilderFullname" . }}
+ namespace: {{ .Release.Namespace }}
+ labels:
+ {{- include "loki.bloomBuilderLabels" . | nindent 4 }}
+ {{- with .Values.bloomBuilder.serviceLabels }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- with .Values.loki.serviceAnnotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+ type: ClusterIP
+ publishNotReadyAddresses: true
+ ports:
+ - name: http-metrics
+ port: 3100
+ targetPort: http-metrics
+ protocol: TCP
+ - name: grpc
+ port: 9095
+ targetPort: grpc
+ protocol: TCP
+ {{- if .Values.bloomBuilder.appProtocol.grpc }}
+ appProtocol: {{ .Values.bloomBuilder.appProtocol.grpc }}
+ {{- end }}
+ - name: grpclb
+ port: 9096
+ targetPort: grpc
+ protocol: TCP
+ {{- if .Values.bloomBuilder.appProtocol.grpc }}
+ appProtocol: {{ .Values.bloomBuilder.appProtocol.grpc }}
+ {{- end }}
+ selector:
+ {{- include "loki.bloomBuilderSelectorLabels" . | nindent 4 }}
+{{- end -}}
diff --git a/production/helm/loki/templates/bloom-compactor/_helpers-bloom-compactor.tpl b/production/helm/loki/templates/bloom-compactor/_helpers-bloom-compactor.tpl
deleted file mode 100644
index 193a8f883b128..0000000000000
--- a/production/helm/loki/templates/bloom-compactor/_helpers-bloom-compactor.tpl
+++ /dev/null
@@ -1,58 +0,0 @@
-{{/*
-bloom compactor fullname
-*/}}
-{{- define "loki.bloomCompactorFullname" -}}
-{{ include "loki.fullname" . }}-bloom-compactor
-{{- end }}
-
-{{/*
-bloom compactor common labels
-*/}}
-{{- define "loki.bloomCompactorLabels" -}}
-{{ include "loki.labels" . }}
-app.kubernetes.io/component: bloom-compactor
-{{- end }}
-
-{{/*
-bloom compactor selector labels
-*/}}
-{{- define "loki.bloomCompactorSelectorLabels" -}}
-{{ include "loki.selectorLabels" . }}
-app.kubernetes.io/component: bloom-compactor
-{{- end }}
-
-{{/*
-bloom compactor readinessProbe
-*/}}
-{{- define "loki.bloomCompactor.readinessProbe" -}}
-{{- with .Values.bloomCompactor.readinessProbe }}
-readinessProbe:
- {{- toYaml . | nindent 2 }}
-{{- else }}
-{{- with .Values.loki.readinessProbe }}
-readinessProbe:
- {{- toYaml . | nindent 2 }}
-{{- end }}
-{{- end }}
-{{- end -}}
-
-{{/*
-bloom compactor priority class name
-*/}}
-{{- define "loki.bloomCompactorPriorityClassName" }}
-{{- $pcn := coalesce .Values.global.priorityClassName .Values.bloomCompactor.priorityClassName -}}
-{{- if $pcn }}
-priorityClassName: {{ $pcn }}
-{{- end }}
-{{- end }}
-
-{{/*
-Create the name of the bloom compactor service account
-*/}}
-{{- define "loki.bloomCompactorServiceAccountName" -}}
-{{- if .Values.bloomCompactor.serviceAccount.create -}}
- {{ default (print (include "loki.serviceAccountName" .) "-bloom-compactor") .Values.bloomCompactor.serviceAccount.name }}
-{{- else -}}
- {{ default (include "loki.serviceAccountName" .) .Values.bloomCompactor.serviceAccount.name }}
-{{- end -}}
-{{- end -}}
diff --git a/production/helm/loki/templates/bloom-planner/_helpers-bloom-planner.tpl b/production/helm/loki/templates/bloom-planner/_helpers-bloom-planner.tpl
new file mode 100644
index 0000000000000..a4a8c6e4f9d20
--- /dev/null
+++ b/production/helm/loki/templates/bloom-planner/_helpers-bloom-planner.tpl
@@ -0,0 +1,58 @@
+{{/*
+bloom planner fullname
+*/}}
+{{- define "loki.bloomPlannerFullname" -}}
+{{ include "loki.fullname" . }}-bloom-planner
+{{- end }}
+
+{{/*
+bloom planner common labels
+*/}}
+{{- define "loki.bloomPlannerLabels" -}}
+{{ include "loki.labels" . }}
+app.kubernetes.io/component: bloom-planner
+{{- end }}
+
+{{/*
+bloom planner selector labels
+*/}}
+{{- define "loki.bloomPlannerSelectorLabels" -}}
+{{ include "loki.selectorLabels" . }}
+app.kubernetes.io/component: bloom-planner
+{{- end }}
+
+{{/*
+bloom planner readinessProbe
+*/}}
+{{- define "loki.bloomPlanner.readinessProbe" -}}
+{{- with .Values.bloomPlanner.readinessProbe }}
+readinessProbe:
+ {{- toYaml . | nindent 2 }}
+{{- else }}
+{{- with .Values.loki.readinessProbe }}
+readinessProbe:
+ {{- toYaml . | nindent 2 }}
+{{- end }}
+{{- end }}
+{{- end -}}
+
+{{/*
+bloom planner priority class name
+*/}}
+{{- define "loki.bloomPlannerPriorityClassName" }}
+{{- $pcn := coalesce .Values.global.priorityClassName .Values.bloomPlanner.priorityClassName -}}
+{{- if $pcn }}
+priorityClassName: {{ $pcn }}
+{{- end }}
+{{- end }}
+
+{{/*
+Create the name of the bloom planner service account
+*/}}
+{{- define "loki.bloomPlannerServiceAccountName" -}}
+{{- if .Values.bloomPlanner.serviceAccount.create -}}
+ {{ default (print (include "loki.serviceAccountName" .) "-bloom-planner") .Values.bloomPlanner.serviceAccount.name }}
+{{- else -}}
+ {{ default (include "loki.serviceAccountName" .) .Values.bloomPlanner.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
diff --git a/production/helm/loki/templates/bloom-planner/service-bloom-planner-headless.yaml b/production/helm/loki/templates/bloom-planner/service-bloom-planner-headless.yaml
new file mode 100644
index 0000000000000..fd02c64acd502
--- /dev/null
+++ b/production/helm/loki/templates/bloom-planner/service-bloom-planner-headless.yaml
@@ -0,0 +1,36 @@
+{{- $isDistributed := eq (include "loki.deployment.isDistributed" .) "true" -}}
+{{- if $isDistributed -}}
+{{- if (gt (int .Values.bloomPlanner.replicas) 0) -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "loki.bloomPlannerFullname" . }}-headless
+ namespace: {{ .Release.Namespace }}
+ labels:
+ {{- include "loki.bloomPlannerSelectorLabels" . | nindent 4 }}
+ {{- with .Values.bloomPlanner.serviceLabels }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- with .Values.loki.serviceAnnotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+ type: ClusterIP
+ clusterIP: None
+ ports:
+ - name: http-metrics
+ port: 3100
+ targetPort: http-metrics
+ protocol: TCP
+ - name: grpc
+ port: 9095
+ targetPort: grpc
+ protocol: TCP
+ {{- if .Values.bloomPlanner.appProtocol.grpc }}
+ appProtocol: {{ .Values.bloomPlanner.appProtocol.grpc }}
+ {{- end }}
+ selector:
+ {{- include "loki.bloomPlannerSelectorLabels" . | nindent 4 }}
+{{- end -}}
+{{- end -}}
diff --git a/production/helm/loki/templates/bloom-compactor/statefulset-bloom-compactor.yaml b/production/helm/loki/templates/bloom-planner/statefulset-bloom-planner.yaml
similarity index 67%
rename from production/helm/loki/templates/bloom-compactor/statefulset-bloom-compactor.yaml
rename to production/helm/loki/templates/bloom-planner/statefulset-bloom-planner.yaml
index 424fa4bb65d76..8d9a9f23998a5 100644
--- a/production/helm/loki/templates/bloom-compactor/statefulset-bloom-compactor.yaml
+++ b/production/helm/loki/templates/bloom-planner/statefulset-bloom-planner.yaml
@@ -1,33 +1,33 @@
{{- $isDistributed := eq (include "loki.deployment.isDistributed" .) "true" -}}
{{- if $isDistributed }}
-{{- if (gt (int .Values.bloomCompactor.replicas) 0) -}}
+{{- if (gt (int .Values.bloomPlanner.replicas) 0) -}}
apiVersion: apps/v1
kind: StatefulSet
metadata:
- name: {{ include "loki.bloomCompactorFullname" . }}
+ name: {{ include "loki.bloomPlannerFullname" . }}
namespace: {{ .Release.Namespace }}
labels:
- {{- include "loki.bloomCompactorLabels" . | nindent 4 }}
+ {{- include "loki.bloomPlannerLabels" . | nindent 4 }}
{{- with .Values.loki.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
- replicas: {{ .Values.bloomCompactor.replicas }}
+ replicas: {{ .Values.bloomPlanner.replicas }}
podManagementPolicy: Parallel
updateStrategy:
rollingUpdate:
partition: 0
- serviceName: {{ include "loki.bloomCompactorFullname" . }}-headless
+ serviceName: {{ include "loki.bloomPlannerFullname" . }}-headless
revisionHistoryLimit: {{ .Values.loki.revisionHistoryLimit }}
- {{- if and (semverCompare ">= 1.23-0" .Capabilities.KubeVersion.Version) (.Values.bloomCompactor.persistence.enableStatefulSetAutoDeletePVC) }}
+ {{- if and (semverCompare ">= 1.23-0" .Capabilities.KubeVersion.Version) (.Values.bloomPlanner.persistence.enableStatefulSetAutoDeletePVC) }}
persistentVolumeClaimRetentionPolicy:
- whenDeleted: {{ .Values.bloomCompactor.persistence.whenDeleted }}
- whenScaled: {{ .Values.bloomCompactor.persistence.whenScaled }}
+ whenDeleted: {{ .Values.bloomPlanner.persistence.whenDeleted }}
+ whenScaled: {{ .Values.bloomPlanner.persistence.whenScaled }}
{{- end }}
selector:
matchLabels:
- {{- include "loki.bloomCompactorSelectorLabels" . | nindent 6 }}
+ {{- include "loki.bloomPlannerSelectorLabels" . | nindent 6 }}
template:
metadata:
annotations:
@@ -35,16 +35,16 @@ spec:
{{- with .Values.loki.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
- {{- with .Values.bloomCompactor.podAnnotations }}
+ {{- with .Values.bloomPlanner.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
- {{- include "loki.bloomCompactorSelectorLabels" . | nindent 8 }}
+ {{- include "loki.bloomPlannerSelectorLabels" . | nindent 8 }}
app.kubernetes.io/part-of: memberlist
{{- with .Values.loki.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
- {{- with .Values.bloomCompactor.podLabels }}
+ {{- with .Values.bloomPlanner.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
@@ -53,30 +53,30 @@ spec:
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
- {{- with .Values.bloomCompactor.hostAliases }}
+ {{- with .Values.bloomPlanner.hostAliases }}
hostAliases:
{{- toYaml . | nindent 8 }}
{{- end }}
- {{- include "loki.bloomCompactorPriorityClassName" . | nindent 6 }}
+ {{- include "loki.bloomPlannerPriorityClassName" . | nindent 6 }}
securityContext:
{{- toYaml .Values.loki.podSecurityContext | nindent 8 }}
- terminationGracePeriodSeconds: {{ .Values.bloomCompactor.terminationGracePeriodSeconds }}
- {{- with .Values.bloomCompactor.initContainers }}
+ terminationGracePeriodSeconds: {{ .Values.bloomPlanner.terminationGracePeriodSeconds }}
+ {{- with .Values.bloomPlanner.initContainers }}
initContainers:
{{- toYaml . | nindent 8 }}
{{- end }}
containers:
- - name: bloom-compactor
+ - name: bloom-planner
image: {{ include "loki.image" . }}
imagePullPolicy: {{ .Values.loki.image.pullPolicy }}
- {{- if or .Values.loki.command .Values.bloomCompactor.command }}
+ {{- if or .Values.loki.command .Values.bloomPlanner.command }}
command:
- - {{ coalesce .Values.bloomCompactor.command .Values.loki.command | quote }}
+ - {{ coalesce .Values.bloomPlanner.command .Values.loki.command | quote }}
{{- end }}
args:
- -config.file=/etc/loki/config/config.yaml
- - -target=bloom-compactor
- {{- with .Values.bloomCompactor.extraArgs }}
+ - -target=bloom-planner
+ {{- with .Values.bloomPlanner.extraArgs }}
{{- toYaml . | nindent 12 }}
{{- end }}
ports:
@@ -89,17 +89,17 @@ spec:
- name: http-memberlist
containerPort: 7946
protocol: TCP
- {{- with .Values.bloomCompactor.extraEnv }}
+ {{- with .Values.bloomPlanner.extraEnv }}
env:
{{- toYaml . | nindent 12 }}
{{- end }}
- {{- with .Values.bloomCompactor.extraEnvFrom }}
+ {{- with .Values.bloomPlanner.extraEnvFrom }}
envFrom:
{{- toYaml . | nindent 12 }}
{{- end }}
securityContext:
{{- toYaml .Values.loki.containerSecurityContext | nindent 12 }}
- {{- include "loki.bloomCompactor.readinessProbe" . | nindent 10 }}
+ {{- include "loki.bloomPlanner.readinessProbe" . | nindent 10 }}
volumeMounts:
- name: temp
mountPath: /tmp
@@ -113,25 +113,25 @@ spec:
- name: license
mountPath: /etc/loki/license
{{- end }}
- {{- with .Values.bloomCompactor.extraVolumeMounts }}
+ {{- with .Values.bloomPlanner.extraVolumeMounts }}
{{- toYaml . | nindent 12 }}
{{- end }}
- {{- with .Values.bloomCompactor.resources }}
+ {{- with .Values.bloomPlanner.resources }}
resources:
{{- toYaml . | nindent 12 }}
{{- end }}
- {{- if .Values.bloomCompactor.extraContainers }}
- {{- toYaml .Values.bloomCompactor.extraContainers | nindent 8}}
+ {{- if .Values.bloomPlanner.extraContainers }}
+ {{- toYaml .Values.bloomPlanner.extraContainers | nindent 8}}
{{- end }}
- {{- with .Values.bloomCompactor.affinity }}
+ {{- with .Values.bloomPlanner.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
- {{- with .Values.bloomCompactor.nodeSelector }}
+ {{- with .Values.bloomPlanner.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
- {{- with .Values.bloomCompactor.tolerations }}
+ {{- with .Values.bloomPlanner.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
@@ -152,16 +152,16 @@ spec:
secretName: enterprise-logs-license
{{- end }}
{{- end }}
- {{- if not .Values.bloomCompactor.persistence.enabled }}
+ {{- if not .Values.bloomPlanner.persistence.enabled }}
- name: data
emptyDir: {}
{{- end }}
- {{- with .Values.bloomCompactor.extraVolumes }}
+ {{- with .Values.bloomPlanner.extraVolumes }}
{{- toYaml . | nindent 8 }}
{{- end }}
- {{- if .Values.bloomCompactor.persistence.enabled }}
+ {{- if .Values.bloomPlanner.persistence.enabled }}
volumeClaimTemplates:
- {{- range .Values.bloomCompactor.persistence.claims }}
+ {{- range .Values.bloomPlanner.persistence.claims }}
- metadata:
name: {{ .name }}
{{- with .annotations }}
@@ -180,4 +180,4 @@ spec:
{{- end }}
{{- end }}
{{- end -}}
-{{- end -}}
\ No newline at end of file
+{{- end -}}
diff --git a/production/helm/loki/templates/memcached/_memcached-statefulset.tpl b/production/helm/loki/templates/memcached/_memcached-statefulset.tpl
index 8e2479af8a05f..ce490ee6cd713 100644
--- a/production/helm/loki/templates/memcached/_memcached-statefulset.tpl
+++ b/production/helm/loki/templates/memcached/_memcached-statefulset.tpl
@@ -104,7 +104,7 @@ spec:
name: client
args:
- -m {{ .allocatedMemory }}
- - --extended=modern,track_sizes{{ with .extraExtendedOptions }},{{ . }}{{ end }}
+ - --extended=modern,track_sizes{{ if .persistence.enabled }},ext_path={{ .persistence.mountPath }}/file:{{ .persistence.storageSize }}{{ end }}{{ with .extraExtendedOptions }},{{ . }}{{ end }}
- -I {{ .maxItemMemory }}m
- -c {{ .connectionLimit }}
- -v
@@ -122,10 +122,16 @@ spec:
{{- end }}
securityContext:
{{- toYaml $.ctx.Values.memcached.containerSecurityContext | nindent 12 }}
- {{- if .extraVolumeMounts }}
+ {{- if or .persistence.enabled .extraVolumeMounts }}
volumeMounts:
+ {{- if .persistence.enabled }}
+ - name: data
+ mountPath: {{ .persistence.mountPath }}
+ {{- end }}
+ {{- if .extraVolumeMounts }}
{{- toYaml .extraVolumeMounts | nindent 12 }}
{{- end }}
+ {{- end }}
{{- if $.ctx.Values.memcachedExporter.enabled }}
- name: exporter
@@ -151,6 +157,19 @@ spec:
{{- toYaml .extraVolumeMounts | nindent 12 }}
{{- end }}
{{- end }}
+ {{- if .persistence.enabled }}
+ volumeClaimTemplates:
+ - metadata:
+ name: data
+ spec:
+ accessModes: [ "ReadWriteOnce" ]
+ {{- with .persistence.storageClass }}
+ storageClassName: {{ if (eq "-" .) }}""{{ else }}{{ . }}{{ end }}
+ {{- end }}
+ resources:
+ requests:
+ storage: {{ .persistence.storageSize | quote }}
+ {{- end }}
{{- end -}}
{{- end -}}
{{- end -}}
diff --git a/production/helm/loki/values.yaml b/production/helm/loki/values.yaml
index 9a5b451b9a638..ed65339cb33ad 100644
--- a/production/helm/loki/values.yaml
+++ b/production/helm/loki/values.yaml
@@ -440,6 +440,10 @@ loki:
# -- Enable tracing
tracing:
enabled: false
+ bloom_build:
+ enabled: false
+ bloom_gateway:
+ enabled: false
######################################################################################################################
#
# Enterprise Loki Configs
@@ -898,7 +902,7 @@ gateway:
# -- The gateway image repository
repository: nginxinc/nginx-unprivileged
# -- The gateway image tag
- tag: 1.24-alpine
+ tag: 1.27-alpine
# -- Overrides the gateway image tag with an image digest
digest: null
# -- The gateway image pull policy
@@ -2376,9 +2380,9 @@ compactor:
annotations: {}
# -- Set this toggle to false to opt out of automounting API credentials for the service account
automountServiceAccountToken: true
-# -- Configuration for the bloom gateway
+# -- Configuration for the bloom-gateway
bloomGateway:
- # -- Number of replicas for the bloom gateway
+ # -- Number of replicas for the bloom-gateway
replicas: 0
# -- hostAliases to add
hostAliases: []
@@ -2386,21 +2390,21 @@ bloomGateway:
# hostnames:
# - domain.tld
image:
- # -- The Docker registry for the bloom gateway image. Overrides `loki.image.registry`
+ # -- The Docker registry for the bloom-gateway image. Overrides `loki.image.registry`
registry: null
- # -- Docker image repository for the bloom gateway image. Overrides `loki.image.repository`
+ # -- Docker image repository for the bloom-gateway image. Overrides `loki.image.repository`
repository: null
- # -- Docker image tag for the bloom gateway image. Overrides `loki.image.tag`
+ # -- Docker image tag for the bloom-gateway image. Overrides `loki.image.tag`
tag: null
# -- Command to execute instead of defined in Docker image
command: null
- # -- The name of the PriorityClass for bloom gateway pods
+ # -- The name of the PriorityClass for bloom-gateway pods
priorityClassName: null
- # -- Labels for bloom gateway pods
+ # -- Labels for bloom-gateway pods
podLabels: {}
- # -- Annotations for bloom gateway pods
+ # -- Annotations for bloom-gateway pods
podAnnotations: {}
- # -- Affinity for bloom gateway pods.
+ # -- Affinity for bloom-gateway pods.
# @default -- Hard node anti-affinity
affinity:
podAntiAffinity:
@@ -2409,39 +2413,39 @@ bloomGateway:
matchLabels:
app.kubernetes.io/component: bloom-gateway
topologyKey: kubernetes.io/hostname
- # -- Labels for bloom gateway service
+ # -- Labels for bloom-gateway service
serviceLabels: {}
- # -- Additional CLI args for the bloom gateway
+ # -- Additional CLI args for the bloom-gateway
extraArgs: []
- # -- Environment variables to add to the bloom gateway pods
+ # -- Environment variables to add to the bloom-gateway pods
extraEnv: []
- # -- Environment variables from secrets or configmaps to add to the bloom gateway pods
+ # -- Environment variables from secrets or configmaps to add to the bloom-gateway pods
extraEnvFrom: []
- # -- Volume mounts to add to the bloom gateway pods
+ # -- Volume mounts to add to the bloom-gateway pods
extraVolumeMounts: []
- # -- Volumes to add to the bloom gateway pods
+ # -- Volumes to add to the bloom-gateway pods
extraVolumes: []
# -- readiness probe settings for ingester pods. If empty, use `loki.readinessProbe`
readinessProbe: {}
# -- liveness probe settings for ingester pods. If empty use `loki.livenessProbe`
livenessProbe: {}
- # -- Resource requests and limits for the bloom gateway
+ # -- Resource requests and limits for the bloom-gateway
resources: {}
- # -- Containers to add to the bloom gateway pods
+ # -- Containers to add to the bloom-gateway pods
extraContainers: []
- # -- Init containers to add to the bloom gateway pods
+ # -- Init containers to add to the bloom-gateway pods
initContainers: []
- # -- Grace period to allow the bloom gateway to shutdown before it is killed
+ # -- Grace period to allow the bloom-gateway to shutdown before it is killed
terminationGracePeriodSeconds: 30
- # -- Node selector for bloom gateway pods
+ # -- Node selector for bloom-gateway pods
nodeSelector: {}
- # -- Tolerations for bloom gateway pods
+ # -- Tolerations for bloom-gateway pods
tolerations: []
# -- Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"
appProtocol:
grpc: ""
persistence:
- # -- Enable creating PVCs for the bloom gateway
+ # -- Enable creating PVCs for the bloom-gateway
enabled: false
# -- Size of persistent disk
size: 10Gi
@@ -2451,9 +2455,9 @@ bloomGateway:
# If empty or set to null, no storageClassName spec is
# set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
storageClass: null
- # -- Annotations for bloom gateway PVCs
+ # -- Annotations for bloom-gateway PVCs
annotations: {}
- # -- List of the bloom gateway PVCs
+ # -- List of the bloom-gateway PVCs
# @notationType -- list
claims:
- name: data
@@ -2472,19 +2476,19 @@ bloomGateway:
whenScaled: Retain
serviceAccount:
create: false
- # -- The name of the ServiceAccount to use for the bloom gateway.
+ # -- The name of the ServiceAccount to use for the bloom-gateway.
# If not set and create is true, a name is generated by appending
# "-bloom-gateway" to the common ServiceAccount.
name: null
- # -- Image pull secrets for the bloom gateway service account
+ # -- Image pull secrets for the bloom-gateway service account
imagePullSecrets: []
- # -- Annotations for the bloom gateway service account
+ # -- Annotations for the bloom-gateway service account
annotations: {}
# -- Set this toggle to false to opt out of automounting API credentials for the service account
automountServiceAccountToken: true
-# -- Configuration for the bloom compactor
-bloomCompactor:
- # -- Number of replicas for the bloom compactor
+# -- Configuration for the bloom-planner
+bloomPlanner:
+ # -- Number of replicas for the bloom-planner
replicas: 0
# -- hostAliases to add
hostAliases: []
@@ -2492,62 +2496,62 @@ bloomCompactor:
# hostnames:
# - domain.tld
image:
- # -- The Docker registry for the bloom compactor image. Overrides `loki.image.registry`
+ # -- The Docker registry for the bloom-planner image. Overrides `loki.image.registry`
registry: null
- # -- Docker image repository for the bloom compactor image. Overrides `loki.image.repository`
+ # -- Docker image repository for the bloom-planner image. Overrides `loki.image.repository`
repository: null
- # -- Docker image tag for the bloom compactor image. Overrides `loki.image.tag`
+ # -- Docker image tag for the bloom-planner image. Overrides `loki.image.tag`
tag: null
# -- Command to execute instead of defined in Docker image
command: null
- # -- The name of the PriorityClass for bloom compactor pods
+ # -- The name of the PriorityClass for bloom-planner pods
priorityClassName: null
- # -- Labels for bloom compactor pods
+ # -- Labels for bloom-planner pods
podLabels: {}
- # -- Annotations for bloom compactor pods
+ # -- Annotations for bloom-planner pods
podAnnotations: {}
- # -- Affinity for bloom compactor pods.
+ # -- Affinity for bloom-planner pods.
# @default -- Hard node anti-affinity
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
- app.kubernetes.io/component: bloom-compactor
+ app.kubernetes.io/component: bloom-planner
topologyKey: kubernetes.io/hostname
- # -- Labels for bloom compactor service
+ # -- Labels for bloom-planner service
serviceLabels: {}
- # -- Additional CLI args for the bloom compactor
+ # -- Additional CLI args for the bloom-planner
extraArgs: []
- # -- Environment variables to add to the bloom compactor pods
+ # -- Environment variables to add to the bloom-planner pods
extraEnv: []
- # -- Environment variables from secrets or configmaps to add to the bloom compactor pods
+ # -- Environment variables from secrets or configmaps to add to the bloom-planner pods
extraEnvFrom: []
- # -- Volume mounts to add to the bloom compactor pods
+ # -- Volume mounts to add to the bloom-planner pods
extraVolumeMounts: []
- # -- Volumes to add to the bloom compactor pods
+ # -- Volumes to add to the bloom-planner pods
extraVolumes: []
# -- readiness probe settings for ingester pods. If empty, use `loki.readinessProbe`
readinessProbe: {}
# -- liveness probe settings for ingester pods. If empty use `loki.livenessProbe`
livenessProbe: {}
- # -- Resource requests and limits for the bloom compactor
+ # -- Resource requests and limits for the bloom-planner
resources: {}
- # -- Containers to add to the bloom compactor pods
+ # -- Containers to add to the bloom-planner pods
extraContainers: []
- # -- Init containers to add to the bloom compactor pods
+ # -- Init containers to add to the bloom-planner pods
initContainers: []
- # -- Grace period to allow the bloom compactor to shutdown before it is killed
+ # -- Grace period to allow the bloom-planner to shutdown before it is killed
terminationGracePeriodSeconds: 30
- # -- Node selector for bloom compactor pods
+ # -- Node selector for bloom-planner pods
nodeSelector: {}
- # -- Tolerations for bloom compactor pods
+ # -- Tolerations for bloom-planner pods
tolerations: []
# -- Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"
appProtocol:
grpc: ""
persistence:
- # -- Enable creating PVCs for the bloom compactor
+ # -- Enable creating PVCs for the bloom-planner
enabled: false
# -- Size of persistent disk
size: 10Gi
@@ -2557,37 +2561,115 @@ bloomCompactor:
# If empty or set to null, no storageClassName spec is
# set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
storageClass: null
- # -- Annotations for bloom compactor PVCs
+ # -- Annotations for bloom-planner PVCs
annotations: {}
- # -- List of the bloom compactor PVCs
+ # -- List of the bloom-planner PVCs
# @notationType -- list
- claims:
- - name: data
- size: 10Gi
- # -- Storage class to be used.
- # If defined, storageClassName: .
- # If set to "-", storageClassName: "", which disables dynamic provisioning.
- # If empty or set to null, no storageClassName spec is
- # set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
- storageClass: null
- # - name: wal
- # size: 150Gi
+ claims: []
# -- Enable StatefulSetAutoDeletePVC feature
enableStatefulSetAutoDeletePVC: false
whenDeleted: Retain
whenScaled: Retain
serviceAccount:
create: false
- # -- The name of the ServiceAccount to use for the bloom compactor.
+ # -- The name of the ServiceAccount to use for the bloom-planner.
# If not set and create is true, a name is generated by appending
- # "-bloom-compactor" to the common ServiceAccount.
+ # "-bloom-planner" to the common ServiceAccount.
name: null
- # -- Image pull secrets for the bloom compactor service account
+ # -- Image pull secrets for the bloom-planner service account
imagePullSecrets: []
- # -- Annotations for the bloom compactor service account
+ # -- Annotations for the bloom-planner service account
annotations: {}
# -- Set this toggle to false to opt out of automounting API credentials for the service account
automountServiceAccountToken: true
+# -- Configuration for the bloom-builder
+bloomBuilder:
+ # -- Number of replicas for the bloom-builder
+ replicas: 0
+ # -- hostAliases to add
+ hostAliases: []
+ # - ip: 1.2.3.4
+ # hostnames:
+ # - domain.tld
+ autoscaling:
+ # -- Enable autoscaling for the bloom-builder
+ enabled: false
+ # -- Minimum autoscaling replicas for the bloom-builder
+ minReplicas: 1
+ # -- Maximum autoscaling replicas for the bloom-builder
+ maxReplicas: 3
+ # -- Target CPU utilisation percentage for the bloom-builder
+ targetCPUUtilizationPercentage: 60
+ # -- Target memory utilisation percentage for the bloom-builder
+ targetMemoryUtilizationPercentage: null
+ # -- Allows one to define custom metrics using the HPA/v2 schema (for example, Pods, Object or External metrics)
+ customMetrics: []
+ # - type: Pods
+ # pods:
+ # metric:
+ # name: loki_query_rate
+ # target:
+ # type: AverageValue
+ # averageValue: 100
+ behavior:
+ # -- Enable autoscaling behaviours
+ enabled: false
+ # -- define scale down policies, must conform to HPAScalingRules
+ scaleDown: {}
+ # -- define scale up policies, must conform to HPAScalingRules
+ scaleUp: {}
+ image:
+ # -- The Docker registry for the bloom-builder image. Overrides `loki.image.registry`
+ registry: null
+ # -- Docker image repository for the bloom-builder image. Overrides `loki.image.repository`
+ repository: null
+ # -- Docker image tag for the bloom-builder image. Overrides `loki.image.tag`
+ tag: null
+ # -- Command to execute instead of defined in Docker image
+ command: null
+ # -- The name of the PriorityClass for bloom-builder pods
+ priorityClassName: null
+ # -- Labels for bloom-builder pods
+ podLabels: {}
+ # -- Annotations for bloom-builder pods
+ podAnnotations: {}
+ # -- Labels for bloom-builder service
+ serviceLabels: {}
+ # -- Additional CLI args for the bloom-builder
+ extraArgs: []
+ # -- Environment variables to add to the bloom-builder pods
+ extraEnv: []
+ # -- Environment variables from secrets or configmaps to add to the bloom-builder pods
+ extraEnvFrom: []
+ # -- Volume mounts to add to the bloom-builder pods
+ extraVolumeMounts: []
+ # -- Volumes to add to the bloom-builder pods
+ extraVolumes: []
+ # -- Resource requests and limits for the bloom-builder
+ resources: {}
+ # -- Containers to add to the bloom-builder pods
+ extraContainers: []
+ # -- Grace period to allow the bloom-builder to shutdown before it is killed
+ terminationGracePeriodSeconds: 30
+ # -- Affinity for bloom-builder pods.
+ # @default -- Hard node anti-affinity
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app.kubernetes.io/component: bloom-builder
+ topologyKey: kubernetes.io/hostname
+ # -- Pod Disruption Budget maxUnavailable
+ maxUnavailable: null
+ # -- Node selector for bloom-builder pods
+ nodeSelector: {}
+ # -- Tolerations for bloom-builder pods
+ tolerations: []
+ # -- Adds the appProtocol field to the queryFrontend service. This allows bloomBuilder to work with istio protocol selection.
+ appProtocol:
+ # -- Set the optional grpc service protocol. Ex: "grpc", "http2" or "https"
+ grpc: ""
# -- Configuration for the pattern ingester
patternIngester:
# -- Number of replicas for the pattern ingester
@@ -2967,6 +3049,20 @@ resultsCache:
service:
annotations: {}
labels: {}
+ # -- Persistence settings for the results-cache
+ persistence:
+ # -- Enable creating PVCs for the results-cache
+ enabled: false
+ # -- Size of persistent disk
+ storageSize: 10G
+ # -- Storage class to be used.
+ # If defined, storageClassName: .
+ # If set to "-", storageClassName: "", which disables dynamic provisioning.
+ # If empty or set to null, no storageClassName spec is
+ # set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
+ storageClass: null
+ # -- Volume mount path
+ mountPath: /data
chunksCache:
# -- Specifies whether memcached based chunks-cache should be enabled
enabled: true
@@ -3055,6 +3151,20 @@ chunksCache:
service:
annotations: {}
labels: {}
+ # -- Persistence settings for the chunks-cache
+ persistence:
+ # -- Enable creating PVCs for the chunks-cache
+ enabled: false
+ # -- Size of persistent disk
+ storageSize: 10G
+ # -- Storage class to be used.
+ # If defined, storageClassName: .
+ # If set to "-", storageClassName: "", which disables dynamic provisioning.
+ # If empty or set to null, no storageClassName spec is
+ # set, choosing the default provisioner (gp2 on AWS, standard on GKE, AWS, and OpenStack).
+ storageClass: null
+ # -- Volume mount path
+ mountPath: /data
######################################################################################################################
#
# Subchart configurations
diff --git a/production/terraform/modules/s3/versions.tf b/production/terraform/modules/s3/versions.tf
index 1a8a148737367..ffbae7a8f9edf 100644
--- a/production/terraform/modules/s3/versions.tf
+++ b/production/terraform/modules/s3/versions.tf
@@ -2,7 +2,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
- version = "~> 5.64.0"
+ version = "~> 5.65.0"
}
random = {
diff --git a/tools/bloom/inspector/main.go b/tools/bloom/inspector/main.go
index 8f60422cd6487..9bc193526a9bc 100644
--- a/tools/bloom/inspector/main.go
+++ b/tools/bloom/inspector/main.go
@@ -3,36 +3,66 @@ package main
import (
"fmt"
"os"
+ "strings"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/util/mempool"
)
func main() {
- if len(os.Args) < 2 {
+ if len(os.Args) < 2 || os.Args[1] == "-h" {
fmt.Println("Usage: go run main.go BLOCK_DIRECTORY")
os.Exit(2)
}
-
path := os.Args[1]
- fmt.Printf("Block directory: %s\n", path)
+
+ fmt.Printf("Block: %s\n", path)
r := v1.NewDirectoryBlockReader(path)
b := v1.NewBlock(r, v1.NewMetrics(nil))
- q := v1.NewBlockQuerier(b, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize)
+ q := v1.NewBlockQuerier(b, &mempool.SimpleHeapAllocator{}, 256<<20)
+ qIter := q.Iter()
md, err := q.Metadata()
if err != nil {
panic(err)
}
- fmt.Printf("Metadata: %+v\n", md)
+ fmt.Printf("Checksum: 0x%x\n", md.Checksum)
+ fmt.Printf("Series: %+v\n", md.Series)
+ fmt.Printf("Options: %+v\n", md.Options)
+ fmt.Println("")
- for q.Next() {
- swb := q.At()
- fmt.Printf("%s (%d)\n", swb.Series.Fingerprint, swb.Series.Chunks.Len())
+ count := 0
+ for qIter.Next() {
+ swb := qIter.At()
+ series := swb.Series
+ fmt.Printf(
+ "%s chunks=%d fields=%+v\n",
+ series.Fingerprint,
+ series.Chunks.Len(),
+ series.Meta.Fields.Items(),
+ )
+ p := 0
+ for swb.Blooms.Next() {
+ bloom := swb.Blooms.At()
+ fmt.Printf(
+ "%s page=%d size=%v count=%v fill=%v\n",
+ strings.Repeat(" ", 16), // padding
+ p,
+ bloom.Capacity()/8,
+ bloom.Count(),
+ bloom.FillRatio(),
+ )
+ p++
+ }
+ count++
}
- if q.Err() != nil {
+
+ if qIter.Err() != nil {
fmt.Printf("error: %s\n", q.Err())
}
+
+ fmt.Println("")
+ fmt.Printf("Stream count: %4d\n", count)
}
diff --git a/tools/deprecated-config-checker/checker/checker.go b/tools/deprecated-config-checker/checker/checker.go
index 5651ab49bbe82..d49b55584d31e 100644
--- a/tools/deprecated-config-checker/checker/checker.go
+++ b/tools/deprecated-config-checker/checker/checker.go
@@ -35,7 +35,7 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
func (c *Config) Validate() error {
if c.ConfigFile == "" && c.RuntimeConfigFile == "" {
- return fmt.Errorf(configRequiredErrorMsg)
+ return fmt.Errorf("%s", configRequiredErrorMsg)
}
return nil
}
diff --git a/tools/deprecated-config-checker/deprecated-config.yaml b/tools/deprecated-config-checker/deprecated-config.yaml
index 46b89971bdd2e..0e82f7f8b1f7c 100644
--- a/tools/deprecated-config-checker/deprecated-config.yaml
+++ b/tools/deprecated-config-checker/deprecated-config.yaml
@@ -62,3 +62,6 @@ limits_config:
per_tenant_override_config: "Feature renamed to 'runtime configuration', flag deprecated in favor of runtime_config.file"
per_tenant_override_period: "Feature renamed to 'runtime configuration', flag deprecated in favor of runtime_config.period"
allow_deletes: "Use deletion_mode per tenant configuration instead."
+
+server:
+ grpc_server_stats_tracking_enabled: "Deprecated, currently doesn't do anything, will be removed in a future version."
\ No newline at end of file
diff --git a/tools/doc-generator/parse/root_blocks.go b/tools/doc-generator/parse/root_blocks.go
index 85e81705848cc..1bfcc57bc8965 100644
--- a/tools/doc-generator/parse/root_blocks.go
+++ b/tools/doc-generator/parse/root_blocks.go
@@ -15,7 +15,7 @@ import (
"golang.org/x/exp/slices"
"github.com/grafana/loki/v3/pkg/analytics"
- "github.com/grafana/loki/v3/pkg/bloomcompactor"
+ "github.com/grafana/loki/v3/pkg/bloombuild"
"github.com/grafana/loki/v3/pkg/bloomgateway"
"github.com/grafana/loki/v3/pkg/compactor"
"github.com/grafana/loki/v3/pkg/distributor"
@@ -123,16 +123,16 @@ var (
StructType: []reflect.Type{reflect.TypeOf(compactor.Config{})},
Desc: "The compactor block configures the compactor component, which compacts index shards for performance.",
},
- {
- Name: "bloom_compactor",
- StructType: []reflect.Type{reflect.TypeOf(bloomcompactor.Config{})},
- Desc: "Experimental: The bloom_compactor block configures the Loki bloom compactor server, responsible for compacting stream indexes into bloom filters and merging them as bloom blocks.",
- },
{
Name: "bloom_gateway",
StructType: []reflect.Type{reflect.TypeOf(bloomgateway.Config{})},
Desc: "Experimental: The bloom_gateway block configures the Loki bloom gateway server, responsible for serving queries for filtering chunks based on filter expressions.",
},
+ {
+ Name: "bloom_build",
+ StructType: []reflect.Type{reflect.TypeOf(bloombuild.Config{})},
+ Desc: "Experimental: The bloom_build block configures the Loki bloom planner and builder servers, responsible for building bloom filters.",
+ },
{
Name: "limits_config",
StructType: []reflect.Type{reflect.TypeOf(validation.Limits{})},
diff --git a/tools/gcplog/main.tf b/tools/gcplog/main.tf
index b6cb8bea2e771..fa2f2fbe646ae 100644
--- a/tools/gcplog/main.tf
+++ b/tools/gcplog/main.tf
@@ -2,7 +2,7 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
- version = "5.42.0"
+ version = "6.0.1"
}
}
}
diff --git a/tools/lambda-promtail/go.mod b/tools/lambda-promtail/go.mod
index fb35609574b72..c548f6149945b 100644
--- a/tools/lambda-promtail/go.mod
+++ b/tools/lambda-promtail/go.mod
@@ -5,12 +5,12 @@ go 1.22
require (
github.com/aws/aws-lambda-go v1.47.0
github.com/aws/aws-sdk-go-v2 v1.30.4
- github.com/aws/aws-sdk-go-v2/config v1.27.28
- github.com/aws/aws-sdk-go-v2/service/s3 v1.59.0
+ github.com/aws/aws-sdk-go-v2/config v1.27.31
+ github.com/aws/aws-sdk-go-v2/service/s3 v1.61.0
github.com/go-kit/log v0.2.1
github.com/gogo/protobuf v1.3.2
github.com/golang/snappy v0.0.4
- github.com/grafana/dskit v0.0.0-20240814201308-442170dfed1b
+ github.com/grafana/dskit v0.0.0-20240905221822-931a021fb06b
github.com/grafana/loki/v3 v3.0.0-20240809103847-9315b3d03d79
github.com/prometheus/common v0.55.0
github.com/stretchr/testify v1.9.0
@@ -24,7 +24,7 @@ require (
github.com/alecthomas/units v0.0.0-20240626203959-61d1e3462e30 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4 // indirect
- github.com/aws/aws-sdk-go-v2/credentials v1.17.28 // indirect
+ github.com/aws/aws-sdk-go-v2/credentials v1.17.30 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.16 // indirect
@@ -36,7 +36,7 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.22.5 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5 // indirect
- github.com/aws/aws-sdk-go-v2/service/sts v1.30.4 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sts v1.30.5 // indirect
github.com/aws/smithy-go v1.20.4 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/buger/jsonparser v1.1.1 // indirect
@@ -67,7 +67,7 @@ require (
github.com/grafana/gomemcache v0.0.0-20240229205252-cd6a66d6fb56 // indirect
github.com/grafana/jsonparser v0.0.0-20240425183733-ea80629e1a32 // indirect
github.com/grafana/loki/pkg/push v0.0.0-20231124142027-e52380921608 // indirect
- github.com/grafana/pyroscope-go/godeltaprof v0.1.7 // indirect
+ github.com/grafana/pyroscope-go/godeltaprof v0.1.8 // indirect
github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc // indirect
github.com/hashicorp/consul/api v1.29.2 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
diff --git a/tools/lambda-promtail/go.sum b/tools/lambda-promtail/go.sum
index 2627682cc9454..c1088c8692cc5 100644
--- a/tools/lambda-promtail/go.sum
+++ b/tools/lambda-promtail/go.sum
@@ -52,10 +52,10 @@ github.com/aws/aws-sdk-go-v2 v1.30.4 h1:frhcagrVNrzmT95RJImMHgabt99vkXGslubDaDag
github.com/aws/aws-sdk-go-v2 v1.30.4/go.mod h1:CT+ZPWXbYrci8chcARI3OmI/qgd+f6WtuLOoaIA8PR0=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4 h1:70PVAiL15/aBMh5LThwgXdSQorVr91L127ttckI9QQU=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4/go.mod h1:/MQxMqci8tlqDH+pjmoLu1i0tbWCUP1hhyMRuFxpQCw=
-github.com/aws/aws-sdk-go-v2/config v1.27.28 h1:OTxWGW/91C61QlneCtnD62NLb4W616/NM1jA8LhJqbg=
-github.com/aws/aws-sdk-go-v2/config v1.27.28/go.mod h1:uzVRVtJSU5EFv6Fu82AoVFKozJi2ZCY6WRCXj06rbvs=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.28 h1:m8+AHY/ND8CMHJnPoH7PJIRakWGa4gbfbxuY9TGTUXM=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.28/go.mod h1:6TF7dSc78ehD1SL6KpRIPKMA1GyyWflIkjqg+qmf4+c=
+github.com/aws/aws-sdk-go-v2/config v1.27.31 h1:kxBoRsjhT3pq0cKthgj6RU6bXTm/2SgdoUMyrVw0rAI=
+github.com/aws/aws-sdk-go-v2/config v1.27.31/go.mod h1:z04nZdSWFPaDwK3DdJOG2r+scLQzMYuJeW0CujEm9FM=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.30 h1:aau/oYFtibVovr2rDt8FHlU17BTicFEMAi29V1U+L5Q=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.30/go.mod h1:BPJ/yXV92ZVq6G8uYvbU0gSl8q94UB63nMT5ctNO38g=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12 h1:yjwoSyDZF8Jth+mUk5lSPJCkMC0lMy6FaCD51jm6ayE=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12/go.mod h1:fuR57fAgMk7ot3WcNQfb6rSEn+SUffl7ri+aa8uKysI=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16 h1:TNyt/+X43KJ9IJJMjKfa3bNTiZbUP7DeCxfbTROESwY=
@@ -74,14 +74,14 @@ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18 h1:tJ5RnkHC
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18/go.mod h1:++NHzT+nAF7ZPrHPsA+ENvsXkOO8wEu+C6RXltAG4/c=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16 h1:jg16PhLPUiHIj8zYIW6bqzeQSuHVEiWnGA0Brz5Xv2I=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16/go.mod h1:Uyk1zE1VVdsHSU7096h/rwnXDzOzYQVl+FNPhPw7ShY=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.59.0 h1:Cso4Ev/XauMVsbwdhYEoxg8rxZWw43CFqqaPB5w3W2c=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.59.0/go.mod h1:BSPI0EfnYUuNHPS0uqIo5VrRwzie+Fp+YhQOUs16sKI=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.61.0 h1:Wb544Wh+xfSXqJ/j3R4aX9wrKUoZsJNmilBYZb3mKQ4=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.61.0/go.mod h1:BSPI0EfnYUuNHPS0uqIo5VrRwzie+Fp+YhQOUs16sKI=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.5 h1:zCsFCKvbj25i7p1u94imVoO447I/sFv8qq+lGJhRN0c=
github.com/aws/aws-sdk-go-v2/service/sso v1.22.5/go.mod h1:ZeDX1SnKsVlejeuz41GiajjZpRSWR7/42q/EyA/QEiM=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5 h1:SKvPgvdvmiTWoi0GAJ7AsJfOz3ngVkD/ERbs5pUnHNI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5/go.mod h1:20sz31hv/WsPa3HhU3hfrIet2kxM4Pe0r20eBZ20Tac=
-github.com/aws/aws-sdk-go-v2/service/sts v1.30.4 h1:iAckBT2OeEK/kBDyN/jDtpEExhjeeA/Im2q4X0rJZT8=
-github.com/aws/aws-sdk-go-v2/service/sts v1.30.4/go.mod h1:vmSqFK+BVIwVpDAGZB3CoCXHzurt4qBE8lf+I/kRTh0=
+github.com/aws/aws-sdk-go-v2/service/sts v1.30.5 h1:OMsEmCyz2i89XwRwPouAJvhj81wINh+4UK+k/0Yo/q8=
+github.com/aws/aws-sdk-go-v2/service/sts v1.30.5/go.mod h1:vmSqFK+BVIwVpDAGZB3CoCXHzurt4qBE8lf+I/kRTh0=
github.com/aws/smithy-go v1.20.4 h1:2HK1zBdPgRbjFOHlfeQZfpC4r72MOb9bZkiFwggKO+4=
github.com/aws/smithy-go v1.20.4/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps=
@@ -216,8 +216,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
-github.com/grafana/dskit v0.0.0-20240814201308-442170dfed1b h1:w3iQfdftNWfmU86f3Y4Cjzjx/+3AnKfpXzzq2cV8H/Y=
-github.com/grafana/dskit v0.0.0-20240814201308-442170dfed1b/go.mod h1:c4ASJAo1QFmXGydDzNed2o0+Fncx+x4YmQ1r9HfYU3c=
+github.com/grafana/dskit v0.0.0-20240905221822-931a021fb06b h1:x2HCzk29I0o5pRPfqWP/qwhXaPGlcz8pohq5kO1NZoE=
+github.com/grafana/dskit v0.0.0-20240905221822-931a021fb06b/go.mod h1:SPLNCARd4xdjCkue0O6hvuoveuS1dGJjDnfxYe405YQ=
github.com/grafana/gomemcache v0.0.0-20240229205252-cd6a66d6fb56 h1:X8IKQ0wu40wpvYcKfBcc5T4QnhdQjUhtUtB/1CY89lE=
github.com/grafana/gomemcache v0.0.0-20240229205252-cd6a66d6fb56/go.mod h1:PGk3RjYHpxMM8HFPhKKo+vve3DdlPUELZLSDEFehPuU=
github.com/grafana/jsonparser v0.0.0-20240425183733-ea80629e1a32 h1:NznuPwItog+rwdVg8hAuGKP29ndRSzJAwhxKldkP8oQ=
@@ -226,8 +226,8 @@ github.com/grafana/loki/pkg/push v0.0.0-20231124142027-e52380921608 h1:ZYk42718k
github.com/grafana/loki/pkg/push v0.0.0-20231124142027-e52380921608/go.mod h1:f3JSoxBTPXX5ec4FxxeC19nTBSxoTz+cBgS3cYLMcr0=
github.com/grafana/loki/v3 v3.0.0-20240809103847-9315b3d03d79 h1:5/FOzaJLAKXnQzN0MTi41s9irM7iCeKTGJ3d9kYKpu4=
github.com/grafana/loki/v3 v3.0.0-20240809103847-9315b3d03d79/go.mod h1:QgSsIqWyevcORssKdnuWnq/eg6vmYj2M8TCenSPfgQk=
-github.com/grafana/pyroscope-go/godeltaprof v0.1.7 h1:C11j63y7gymiW8VugJ9ZW0pWfxTZugdSJyC48olk5KY=
-github.com/grafana/pyroscope-go/godeltaprof v0.1.7/go.mod h1:Tk376Nbldo4Cha9RgiU7ik8WKFkNpfds98aUzS8omLE=
+github.com/grafana/pyroscope-go/godeltaprof v0.1.8 h1:iwOtYXeeVSAeYefJNaxDytgjKtUuKQbJqgAIjlnicKg=
+github.com/grafana/pyroscope-go/godeltaprof v0.1.8/go.mod h1:2+l7K7twW49Ct4wFluZD3tZ6e0SjanjcUUBPVD/UuGU=
github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc h1:GN2Lv3MGO7AS6PrRoT6yV5+wkrOpcszoIsO4+4ds248=
github.com/grafana/regexp v0.0.0-20240518133315-a468a5bfb3bc/go.mod h1:+JKpmjMGhpgPL+rXZ5nsZieVzvarn86asRlBg4uNGnk=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
@@ -300,7 +300,6 @@ github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
-github.com/klauspost/compress v1.17.3/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
diff --git a/tools/querytee/proxy_endpoint_test.go b/tools/querytee/proxy_endpoint_test.go
index 85bc45d066ae5..728f8fec415c9 100644
--- a/tools/querytee/proxy_endpoint_test.go
+++ b/tools/querytee/proxy_endpoint_test.go
@@ -189,8 +189,8 @@ func Test_ProxyEndpoint_Requests(t *testing.T) {
require.NoError(t, err)
return r
},
- handler: func(t *testing.T) http.HandlerFunc {
- return func(w http.ResponseWriter, r *http.Request) {
+ handler: func(_ *testing.T) http.HandlerFunc {
+ return func(w http.ResponseWriter, _ *http.Request) {
_, _ = w.Write([]byte("ok"))
}
},
@@ -224,7 +224,7 @@ func Test_ProxyEndpoint_Requests(t *testing.T) {
wg.Add(tc.counts)
if tc.handler == nil {
- testHandler = func(w http.ResponseWriter, r *http.Request) {
+ testHandler = func(w http.ResponseWriter, _ *http.Request) {
_, _ = w.Write([]byte("ok"))
}
@@ -320,7 +320,7 @@ func Test_ProxyEndpoint_SummaryMetrics(t *testing.T) {
requestCount.Store(0)
wg.Add(tc.counts)
- testHandler = func(w http.ResponseWriter, r *http.Request) {
+ testHandler = func(w http.ResponseWriter, _ *http.Request) {
_, _ = w.Write([]byte("ok"))
}
diff --git a/tools/tsdb/index-analyzer/analytics.go b/tools/tsdb/index-analyzer/analytics.go
index de01d47d6ec00..b574a58341924 100644
--- a/tools/tsdb/index-analyzer/analytics.go
+++ b/tools/tsdb/index-analyzer/analytics.go
@@ -73,7 +73,7 @@ func analyze(indexShipper indexshipper.IndexShipper, tableName string, tenants [
"", nil,
model.Earliest,
model.Latest,
- func(ls labels.Labels, fp model.Fingerprint, chks []tsdb_index.ChunkMeta) (stop bool) {
+ func(_ labels.Labels, _ model.Fingerprint, chks []tsdb_index.ChunkMeta) (stop bool) {
if len(chks) > maxChunksPerSeries {
maxChunksPerSeries = len(chks)
if len(chks) > 1000 {
diff --git a/tools/tsdb/tsdb-map/main.go b/tools/tsdb/tsdb-map/main.go
index 0a72ac98db13d..9f35b53fe48c6 100644
--- a/tools/tsdb/tsdb-map/main.go
+++ b/tools/tsdb/tsdb-map/main.go
@@ -93,7 +93,7 @@ func main() {
}
log.Println("writing index")
- if _, err := builder.Build(context.Background(), *dest, func(from, through model.Time, checksum uint32) tsdb.Identifier {
+ if _, err := builder.Build(context.Background(), *dest, func(_, _ model.Time, _ uint32) tsdb.Identifier {
panic("todo")
}); err != nil {
panic(err)
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/authenticator_factory.go b/vendor/github.com/IBM/go-sdk-core/v5/core/authenticator_factory.go
index 455fba064cb8e..cd1a63c905b90 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/authenticator_factory.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/authenticator_factory.go
@@ -1,6 +1,6 @@
package core
-// (C) Copyright IBM Corp. 2019, 2023.
+// (C) Copyright IBM Corp. 2019, 2024.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -22,6 +22,7 @@ import (
// GetAuthenticatorFromEnvironment instantiates an Authenticator using service properties
// retrieved from external config sources.
func GetAuthenticatorFromEnvironment(credentialKey string) (authenticator Authenticator, err error) {
+ GetLogger().Debug("Get authenticator from environment, key=%s\n", credentialKey)
properties, err := getServiceProperties(credentialKey)
if len(properties) == 0 {
return
@@ -71,5 +72,9 @@ func GetAuthenticatorFromEnvironment(credentialKey string) (authenticator Authen
)
}
+ if authenticator != nil {
+ GetLogger().Debug("Returning authenticator, type=%s\n", authenticator.AuthenticationType())
+ }
+
return
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/base_service.go b/vendor/github.com/IBM/go-sdk-core/v5/core/base_service.go
index 6a9f85a72bf8e..62e362d4103c4 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/base_service.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/base_service.go
@@ -1,6 +1,6 @@
package core
-// (C) Copyright IBM Corp. 2019, 2022.
+// (C) Copyright IBM Corp. 2019, 2024.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -18,7 +18,6 @@ import (
"bytes"
"context"
"crypto/tls"
- "crypto/x509"
"encoding/json"
"errors"
"fmt"
@@ -131,6 +130,8 @@ func (service *BaseService) Clone() *BaseService {
// ConfigureService updates the service with external configuration values.
func (service *BaseService) ConfigureService(serviceName string) error {
+ GetLogger().Debug("Configuring BaseService instance with service name: %s\n", serviceName)
+
// Try to load service properties from external config.
serviceProps, err := getServiceProperties(serviceName)
if err != nil {
@@ -144,7 +145,7 @@ func (service *BaseService) ConfigureService(serviceName string) error {
// URL
if url, ok := serviceProps[PROPNAME_SVC_URL]; ok && url != "" {
- err := service.SetURL(url)
+ err := service.SetServiceURL(url)
if err != nil {
err = RepurposeSDKProblem(err, "set-url-fail")
return err
@@ -221,6 +222,7 @@ func (service *BaseService) SetServiceURL(url string) error {
}
service.Options.URL = url
+ GetLogger().Debug("Set service URL: %s\n", url)
return nil
}
@@ -289,6 +291,7 @@ func (service *BaseService) DisableSSLVerification() {
// Disable server ssl cert & hostname verification.
tr.TLSClientConfig.InsecureSkipVerify = true // #nosec G402
}
+ GetLogger().Debug("Disabled SSL verification in HTTP client")
}
// IsSSLDisabled returns true if and only if the service's http.Client instance
@@ -332,11 +335,12 @@ func (service *BaseService) buildUserAgent() string {
}
// SetUserAgent sets the user agent value.
-func (service *BaseService) SetUserAgent(userAgentString string) {
- if userAgentString == "" {
- userAgentString = service.buildUserAgent()
+func (service *BaseService) SetUserAgent(userAgent string) {
+ if userAgent == "" {
+ userAgent = service.buildUserAgent()
}
- service.UserAgent = userAgentString
+ service.UserAgent = userAgent
+ GetLogger().Debug("Set User-Agent: %s\n", userAgent)
}
// Request invokes the specified HTTP request and returns the response.
@@ -414,9 +418,9 @@ func (service *BaseService) Request(req *http.Request, result interface{}) (deta
}
// Invoke the request, then check for errors during the invocation.
+ GetLogger().Debug("Sending HTTP request message...")
var httpResponse *http.Response
httpResponse, err = service.Client.Do(req)
-
if err != nil {
if strings.Contains(err.Error(), SSL_CERTIFICATION_ERROR) {
err = fmt.Errorf(ERRORMSG_SSL_VERIFICATION_FAILED + "\n" + err.Error())
@@ -424,6 +428,7 @@ func (service *BaseService) Request(req *http.Request, result interface{}) (deta
err = SDKErrorf(err, "", "no-connection-made", getComponentInfo())
return
}
+ GetLogger().Debug("Received HTTP response message, status code %d", httpResponse.StatusCode)
// If debug is enabled, then dump the response.
if GetLogger().IsLogLevelEnabled(LevelDebug) {
@@ -754,6 +759,7 @@ func (service *BaseService) EnableRetries(maxRetries int, maxRetryInterval time.
// Hang the retryable client off the base service via the "shim" client.
service.Client = client.StandardClient()
}
+ GetLogger().Debug("Enabled retries; maxRetries=%d, maxRetryInterval=%s\n", maxRetries, maxRetryInterval.String())
}
// DisableRetries will disable automatic retries in the service.
@@ -765,6 +771,8 @@ func (service *BaseService) DisableRetries() {
// the retryable client instance.
tr := service.Client.Transport.(*retryablehttp.RoundTripper)
service.Client = tr.Client.HTTPClient
+
+ GetLogger().Debug("Disabled retries\n")
}
}
@@ -826,16 +834,46 @@ var (
// scheme specified in the URL is invalid. This error isn't typed
// specifically so we resort to matching on the error string.
schemeErrorRe = regexp.MustCompile(`unsupported protocol scheme`)
+
+ // A regular expression to match the error returned by net/http when a
+ // request header or value is invalid. This error isn't typed
+ // specifically so we resort to matching on the error string.
+ invalidHeaderErrorRe = regexp.MustCompile(`invalid header`)
+
+ // A regular expression to match the error returned by net/http when the
+ // TLS certificate is not trusted. This error isn't typed
+ // specifically so we resort to matching on the error string.
+ notTrustedErrorRe = regexp.MustCompile(`certificate is not trusted|certificate is signed by unknown authority`)
)
// IBMCloudSDKRetryPolicy provides a default implementation of the CheckRetry interface
// associated with a retryablehttp.Client.
// This function will return true if the specified request/response should be retried.
func IBMCloudSDKRetryPolicy(ctx context.Context, resp *http.Response, err error) (bool, error) {
- // This logic was adapted from go-relyablehttp.ErrorPropagatedRetryPolicy().
+ // This logic was adapted from go-retryablehttp.ErrorPropagatedRetryPolicy().
+
+ if GetLogger().IsLogLevelEnabled(LevelDebug) {
+ // Compile the details to be included in the debug message.
+ var details []string
+ if resp != nil {
+ details = append(details, fmt.Sprintf("status_code=%d", resp.StatusCode))
+ if resp.Request != nil {
+ details = append(details, fmt.Sprintf("method=%s", resp.Request.Method))
+ details = append(details, fmt.Sprintf("url=%s", resp.Request.URL.Redacted()))
+ }
+ }
+ if err != nil {
+ details = append(details, fmt.Sprintf("error=%s", err.Error()))
+ } else {
+ details = append(details, "error=nil")
+ }
+
+ GetLogger().Debug("Considering retry attempt; %s\n", strings.Join(details, ", "))
+ }
// Do not retry on a Context-related error (Canceled or DeadlineExceeded).
if ctx.Err() != nil {
+ GetLogger().Debug("No retry, Context error: %s\n", ctx.Err().Error())
return false, ctx.Err()
}
@@ -844,21 +882,35 @@ func IBMCloudSDKRetryPolicy(ctx context.Context, resp *http.Response, err error)
if v, ok := err.(*url.Error); ok {
// Don't retry if the error was due to too many redirects.
if redirectsErrorRe.MatchString(v.Error()) {
+ GetLogger().Debug("No retry, too many redirects: %s\n", v.Error())
return false, SDKErrorf(v, "", "too-many-redirects", getComponentInfo())
}
// Don't retry if the error was due to an invalid protocol scheme.
if schemeErrorRe.MatchString(v.Error()) {
+ GetLogger().Debug("No retry, invalid protocol scheme: %s\n", v.Error())
return false, SDKErrorf(v, "", "invalid-scheme", getComponentInfo())
}
+ // Don't retry if the error was due to an invalid header.
+ if invalidHeaderErrorRe.MatchString(v.Error()) {
+ GetLogger().Debug("No retry, invalid header: %s\n", v.Error())
+ return false, SDKErrorf(v, "", "invalid-header", getComponentInfo())
+ }
+
// Don't retry if the error was due to TLS cert verification failure.
- if _, ok := v.Err.(x509.UnknownAuthorityError); ok {
+ if notTrustedErrorRe.MatchString(v.Error()) {
+ GetLogger().Debug("No retry, TLS certificate is not trusted: %s\n", v.Error())
+ return false, SDKErrorf(v, "", "cert-not-trusted", getComponentInfo())
+ }
+ if _, ok := v.Err.(*tls.CertificateVerificationError); ok {
+ GetLogger().Debug("No retry, TLS certificate validation error: %s\n", v.Error())
return false, SDKErrorf(v, "", "cert-failure", getComponentInfo())
}
}
// The error is likely recoverable so retry.
+ GetLogger().Debug("Retry will be attempted...")
return true, nil
}
@@ -867,9 +919,11 @@ func IBMCloudSDKRetryPolicy(ctx context.Context, resp *http.Response, err error)
// A 429 should be retryable.
// All codes in the 500's range except for 501 (Not Implemented) should be retryable.
if resp.StatusCode == 429 || (resp.StatusCode >= 500 && resp.StatusCode <= 599 && resp.StatusCode != 501) {
+ GetLogger().Debug("Retry will be attempted")
return true, nil
}
+ GetLogger().Debug("No retry for status code: %d\n")
return false, nil
}
@@ -880,6 +934,8 @@ func IBMCloudSDKBackoffPolicy(min, max time.Duration, attemptNum int, resp *http
// Check for a Retry-After header.
if resp != nil {
if s, ok := resp.Header["Retry-After"]; ok {
+ GetLogger().Debug("Found Retry-After header: %s\n", s)
+
// First, try to parse the value as an integer (number of seconds to wait)
if sleep, err := strconv.ParseInt(s[0], 10, 64); err == nil {
return time.Second * time.Duration(sleep)
@@ -893,7 +949,6 @@ func IBMCloudSDKBackoffPolicy(min, max time.Duration, attemptNum int, resp *http
}
return sleep
}
-
}
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/basic_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/basic_authenticator.go
index b0c12eb9bf9db..4506eb1847ba7 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/basic_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/basic_authenticator.go
@@ -64,8 +64,9 @@ func (BasicAuthenticator) AuthenticationType() string {
// Basic Authorization will be added to the request's headers in the form:
//
// Authorization: Basic
-func (this *BasicAuthenticator) Authenticate(request *http.Request) error {
- request.SetBasicAuth(this.Username, this.Password)
+func (authenticator *BasicAuthenticator) Authenticate(request *http.Request) error {
+ request.SetBasicAuth(authenticator.Username, authenticator.Password)
+ GetLogger().Debug("Authenticated outbound request (type=%s)\n", authenticator.AuthenticationType())
return nil
}
@@ -73,23 +74,23 @@ func (this *BasicAuthenticator) Authenticate(request *http.Request) error {
//
// Ensures the username and password are not Nil. Additionally, ensures
// they do not contain invalid characters.
-func (this BasicAuthenticator) Validate() error {
- if this.Username == "" {
+func (authenticator BasicAuthenticator) Validate() error {
+ if authenticator.Username == "" {
err := fmt.Errorf(ERRORMSG_PROP_MISSING, "Username")
return SDKErrorf(err, "", "no-user", getComponentInfo())
}
- if this.Password == "" {
+ if authenticator.Password == "" {
err := fmt.Errorf(ERRORMSG_PROP_MISSING, "Password")
return SDKErrorf(err, "", "no-pass", getComponentInfo())
}
- if HasBadFirstOrLastChar(this.Username) {
+ if HasBadFirstOrLastChar(authenticator.Username) {
err := fmt.Errorf(ERRORMSG_PROP_INVALID, "Username")
return SDKErrorf(err, "", "bad-user", getComponentInfo())
}
- if HasBadFirstOrLastChar(this.Password) {
+ if HasBadFirstOrLastChar(authenticator.Password) {
err := fmt.Errorf(ERRORMSG_PROP_INVALID, "Password")
return SDKErrorf(err, "", "bad-pass", getComponentInfo())
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/bearer_token_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/bearer_token_authenticator.go
index ab26448da144e..ce37cb9d678d2 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/bearer_token_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/bearer_token_authenticator.go
@@ -61,16 +61,17 @@ func (BearerTokenAuthenticator) AuthenticationType() string {
// The bearer token will be added to the request's headers in the form:
//
// Authorization: Bearer
-func (this *BearerTokenAuthenticator) Authenticate(request *http.Request) error {
- request.Header.Set("Authorization", fmt.Sprintf(`Bearer %s`, this.BearerToken))
+func (authenticator *BearerTokenAuthenticator) Authenticate(request *http.Request) error {
+ request.Header.Set("Authorization", fmt.Sprintf(`Bearer %s`, authenticator.BearerToken))
+ GetLogger().Debug("Authenticated outbound request (type=%s)\n", authenticator.AuthenticationType())
return nil
}
// Validate the authenticator's configuration.
//
// Ensures the bearer token is not Nil.
-func (this BearerTokenAuthenticator) Validate() error {
- if this.BearerToken == "" {
+func (authenticator BearerTokenAuthenticator) Validate() error {
+ if authenticator.BearerToken == "" {
err := fmt.Errorf(ERRORMSG_PROP_MISSING, "BearerToken")
return SDKErrorf(err, "", "no-token", getComponentInfo())
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/config_utils.go b/vendor/github.com/IBM/go-sdk-core/v5/core/config_utils.go
index 24d535dd966b5..39e720701191d 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/config_utils.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/config_utils.go
@@ -63,6 +63,8 @@ func getServiceProperties(serviceName string) (serviceProps map[string]string, e
return
}
+ GetLogger().Debug("Retrieving config properties for service '%s'\n", serviceName)
+
// First try to retrieve service properties from a credential file.
serviceProps = getServicePropertiesFromCredentialFile(serviceName)
@@ -76,6 +78,8 @@ func getServiceProperties(serviceName string) (serviceProps map[string]string, e
serviceProps = getServicePropertiesFromVCAP(serviceName)
}
+ GetLogger().Debug("Retrieved %d properties\n", len(serviceProps))
+
return
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go
index 2d7c3f561713d..5919889c073f3 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go
@@ -255,6 +255,7 @@ func (authenticator *ContainerAuthenticator) Authenticate(request *http.Request)
}
request.Header.Set("Authorization", "Bearer "+token)
+ GetLogger().Debug("Authenticated outbound request (type=%s)\n", authenticator.AuthenticationType())
return nil
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/cp4d_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/cp4d_authenticator.go
index 8f9a8117e7c34..66c1f1037bc77 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/cp4d_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/cp4d_authenticator.go
@@ -207,6 +207,7 @@ func (authenticator *CloudPakForDataAuthenticator) Authenticate(request *http.Re
}
request.Header.Set("Authorization", fmt.Sprintf(`Bearer %s`, token))
+ GetLogger().Debug("Authenticated outbound request (type=%s)\n", authenticator.AuthenticationType())
return nil
}
@@ -231,15 +232,19 @@ func (authenticator *CloudPakForDataAuthenticator) setTokenData(tokenData *cp4dT
// or the existing token has expired), a new access token is fetched from the token server.
func (authenticator *CloudPakForDataAuthenticator) GetToken() (string, error) {
if authenticator.getTokenData() == nil || !authenticator.getTokenData().isTokenValid() {
+ GetLogger().Debug("Performing synchronous token fetch...")
// synchronously request the token
err := authenticator.synchronizedRequestToken()
if err != nil {
return "", RepurposeSDKProblem(err, "request-token-fail")
}
} else if authenticator.getTokenData().needsRefresh() {
+ GetLogger().Debug("Performing background asynchronous token fetch...")
// If refresh needed, kick off a go routine in the background to get a new token
//nolint: errcheck
go authenticator.invokeRequestTokenData()
+ } else {
+ GetLogger().Debug("Using cached access token...")
}
// return an error if the access token is not valid or was not fetched
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/iam_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/iam_authenticator.go
index da9feb6014480..ff73a693962a1 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/iam_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/iam_authenticator.go
@@ -249,9 +249,9 @@ func (*IamAuthenticator) AuthenticationType() string {
// Authenticate adds IAM authentication information to the request.
//
-// The IAM bearer token will be added to the request's headers in the form:
+// The IAM access token will be added to the request's headers in the form:
//
-// Authorization: Bearer
+// Authorization: Bearer
func (authenticator *IamAuthenticator) Authenticate(request *http.Request) error {
token, err := authenticator.GetToken()
if err != nil {
@@ -259,6 +259,7 @@ func (authenticator *IamAuthenticator) Authenticate(request *http.Request) error
}
request.Header.Set("Authorization", "Bearer "+token)
+ GetLogger().Debug("Authenticated outbound request (type=%s)\n", authenticator.AuthenticationType())
return nil
}
@@ -362,15 +363,19 @@ func (authenticator *IamAuthenticator) Validate() error {
// or the existing token has expired), a new access token is fetched from the token server.
func (authenticator *IamAuthenticator) GetToken() (string, error) {
if authenticator.getTokenData() == nil || !authenticator.getTokenData().isTokenValid() {
+ GetLogger().Debug("Performing synchronous token fetch...")
// synchronously request the token
err := authenticator.synchronizedRequestToken()
if err != nil {
return "", RepurposeSDKProblem(err, "request-token-fail")
}
} else if authenticator.getTokenData().needsRefresh() {
+ GetLogger().Debug("Performing background asynchronous token fetch...")
// If refresh needed, kick off a go routine in the background to get a new token
//nolint: errcheck
go authenticator.invokeRequestTokenData()
+ } else {
+ GetLogger().Debug("Using cached access token...")
}
// return an error if the access token is not valid or was not fetched
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/mcsp_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/mcsp_authenticator.go
index 3bd2a2f4f0166..38845074af0fb 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/mcsp_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/mcsp_authenticator.go
@@ -183,6 +183,7 @@ func (authenticator *MCSPAuthenticator) Authenticate(request *http.Request) erro
}
request.Header.Set("Authorization", "Bearer "+token)
+ GetLogger().Debug("Authenticated outbound request (type=%s)\n", authenticator.AuthenticationType())
return nil
}
@@ -227,15 +228,19 @@ func (authenticator *MCSPAuthenticator) Validate() error {
// or the existing token has expired), a new access token is fetched from the token server.
func (authenticator *MCSPAuthenticator) GetToken() (string, error) {
if authenticator.getTokenData() == nil || !authenticator.getTokenData().isTokenValid() {
+ GetLogger().Debug("Performing synchronous token fetch...")
// synchronously request the token
err := authenticator.synchronizedRequestToken()
if err != nil {
return "", RepurposeSDKProblem(err, "request-token-fail")
}
} else if authenticator.getTokenData().needsRefresh() {
+ GetLogger().Debug("Performing background asynchronous token fetch...")
// If refresh needed, kick off a go routine in the background to get a new token.
//nolint: errcheck
go authenticator.invokeRequestTokenData()
+ } else {
+ GetLogger().Debug("Using cached access token...")
}
// return an error if the access token is not valid or was not fetched
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/parameterized_url.go b/vendor/github.com/IBM/go-sdk-core/v5/core/parameterized_url.go
index 849dad66a3fd8..81b650615c9a0 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/parameterized_url.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/parameterized_url.go
@@ -39,6 +39,7 @@ func ConstructServiceURL(
defaultUrlVariables map[string]string,
providedUrlVariables map[string]string,
) (string, error) {
+ GetLogger().Debug("Constructing service URL from parameterized URL: %s\n", parameterizedUrl)
// Verify the provided variable names.
for providedName := range providedUrlVariables {
@@ -70,5 +71,6 @@ func ConstructServiceURL(
}
formattedUrl = strings.Replace(formattedUrl, "{"+name+"}", providedValue, 1)
}
+ GetLogger().Debug("Returning service URL: %s\n", formattedUrl)
return formattedUrl, nil
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/version.go b/vendor/github.com/IBM/go-sdk-core/v5/core/version.go
index b7869180380f2..3921e502e8614 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/version.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/version.go
@@ -15,4 +15,4 @@ package core
// limitations under the License.
// Version of the SDK
-const __VERSION__ = "5.17.4"
+const __VERSION__ = "5.17.5"
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/vpc_instance_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/vpc_instance_authenticator.go
index 562318b2ceb47..4c60ee721773c 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/vpc_instance_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/vpc_instance_authenticator.go
@@ -189,6 +189,7 @@ func (authenticator *VpcInstanceAuthenticator) Authenticate(request *http.Reques
}
request.Header.Set("Authorization", "Bearer "+token)
+ GetLogger().Debug("Authenticated outbound request (type=%s)\n", authenticator.AuthenticationType())
return nil
}
diff --git a/vendor/github.com/baidubce/bce-sdk-go/bce/config.go b/vendor/github.com/baidubce/bce-sdk-go/bce/config.go
index a4f608c6e7600..44f7b2fab77d5 100644
--- a/vendor/github.com/baidubce/bce-sdk-go/bce/config.go
+++ b/vendor/github.com/baidubce/bce-sdk-go/bce/config.go
@@ -26,7 +26,7 @@ import (
// Constants and default values for the package bce
const (
- SDK_VERSION = "0.9.187"
+ SDK_VERSION = "0.9.189"
URI_PREFIX = "/" // now support uri without prefix "v1" so just set root path
DEFAULT_DOMAIN = "baidubce.com"
DEFAULT_PROTOCOL = "http"
diff --git a/vendor/github.com/efficientgo/core/testutil/testorbench.go b/vendor/github.com/efficientgo/core/testutil/testorbench.go
index c36b8877a1263..69f3fe60f8c05 100644
--- a/vendor/github.com/efficientgo/core/testutil/testorbench.go
+++ b/vendor/github.com/efficientgo/core/testutil/testorbench.go
@@ -30,7 +30,13 @@ type TB interface {
SetBytes(n int64)
N() int
+
ResetTimer()
+ StartTimer()
+ StopTimer()
+
+ ReportAllocs()
+ ReportMetric(n float64, unit string)
}
// tb implements TB as well as testing.TB interfaces.
@@ -78,8 +84,36 @@ func (t *tb) ResetTimer() {
}
}
+// StartTimer starts a timer, if it's a benchmark, noop otherwise.
+func (t *tb) StartTimer() {
+ if b, ok := t.TB.(*testing.B); ok {
+ b.StartTimer()
+ }
+}
+
+// StopTimer stops a timer, if it's a benchmark, noop otherwise.
+func (t *tb) StopTimer() {
+ if b, ok := t.TB.(*testing.B); ok {
+ b.StopTimer()
+ }
+}
+
// IsBenchmark returns true if it's a benchmark.
func (t *tb) IsBenchmark() bool {
_, ok := t.TB.(*testing.B)
return ok
}
+
+// ReportAllocs reports allocs if it's a benchmark, noop otherwise.
+func (t *tb) ReportAllocs() {
+ if b, ok := t.TB.(*testing.B); ok {
+ b.ReportAllocs()
+ }
+}
+
+// ReportMetric reports metrics if it's a benchmark, noop otherwise.
+func (t *tb) ReportMetric(n float64, unit string) {
+ if b, ok := t.TB.(*testing.B); ok {
+ b.ReportMetric(n, unit)
+ }
+}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/LICENSE b/vendor/github.com/fsouza/fake-gcs-server/LICENSE
index a619aaecef9d1..529faa468606e 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/LICENSE
+++ b/vendor/github.com/fsouza/fake-gcs-server/LICENSE
@@ -1,4 +1,4 @@
-Copyright (c) Francisco Souza
+Copyright (c) 2017-2019, Francisco Souza
All rights reserved.
Redistribution and use in source and binary forms, with or without
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/bucket.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/bucket.go
index 4026f1a4a0deb..e2fa2ad3716ee 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/bucket.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/bucket.go
@@ -6,161 +6,49 @@ package fakestorage
import (
"encoding/json"
- "errors"
- "fmt"
- "io"
"net/http"
- "regexp"
- "github.com/fsouza/fake-gcs-server/internal/backend"
"github.com/gorilla/mux"
)
-var bucketRegexp = regexp.MustCompile(`^[a-zA-Z0-9][a-zA-Z0-9._-]*[a-zA-Z0-9]$`)
-
// CreateBucket creates a bucket inside the server, so any API calls that
// require the bucket name will recognize this bucket.
//
// If the bucket already exists, this method does nothing.
-//
-// Deprecated: use CreateBucketWithOpts.
func (s *Server) CreateBucket(name string) {
- err := s.backend.CreateBucket(name, backend.BucketAttrs{VersioningEnabled: false, DefaultEventBasedHold: false})
- if err != nil {
- panic(err)
- }
-}
-
-func (s *Server) updateBucket(r *http.Request) jsonResponse {
- bucketName := unescapeMuxVars(mux.Vars(r))["bucketName"]
- attrsToUpdate := getBucketAttrsToUpdate(r.Body)
- err := s.backend.UpdateBucket(bucketName, attrsToUpdate)
+ s.mtx.Lock()
+ defer s.mtx.Unlock()
+ err := s.backend.CreateBucket(name)
if err != nil {
panic(err)
}
- return jsonResponse{}
-}
-
-func getBucketAttrsToUpdate(body io.ReadCloser) backend.BucketAttrs {
- var data struct {
- DefaultEventBasedHold bool `json:"defaultEventBasedHold,omitempty"`
- Versioning bucketVersioning `json:"versioning,omitempty"`
- }
- err := json.NewDecoder(body).Decode(&data)
- if err != nil {
- panic(err)
- }
- attrsToUpdate := backend.BucketAttrs{
- DefaultEventBasedHold: data.DefaultEventBasedHold,
- VersioningEnabled: data.Versioning.Enabled,
- }
- return attrsToUpdate
-}
-
-// CreateBucketOpts defines the properties of a bucket you can create with
-// CreateBucketWithOpts.
-type CreateBucketOpts struct {
- Name string
- VersioningEnabled bool
- DefaultEventBasedHold bool
-}
-
-// CreateBucketWithOpts creates a bucket inside the server, so any API calls that
-// require the bucket name will recognize this bucket. Use CreateBucketOpts to
-// customize the options for this bucket
-//
-// If the underlying backend returns an error, this method panics.
-func (s *Server) CreateBucketWithOpts(opts CreateBucketOpts) {
- err := s.backend.CreateBucket(opts.Name, backend.BucketAttrs{VersioningEnabled: opts.VersioningEnabled, DefaultEventBasedHold: opts.DefaultEventBasedHold})
- if err != nil {
- panic(err)
- }
-}
-
-func (s *Server) createBucketByPost(r *http.Request) jsonResponse {
- // Minimal version of Bucket from google.golang.org/api/storage/v1
-
- var data struct {
- Name string `json:"name,omitempty"`
- Versioning *bucketVersioning `json:"versioning,omitempty"`
- DefaultEventBasedHold bool `json:"defaultEventBasedHold,omitempty"`
- }
-
- // Read the bucket props from the request body JSON
- decoder := json.NewDecoder(r.Body)
- if err := decoder.Decode(&data); err != nil {
- return jsonResponse{errorMessage: err.Error(), status: http.StatusBadRequest}
- }
- name := data.Name
- versioning := false
- if data.Versioning != nil {
- versioning = data.Versioning.Enabled
- }
- defaultEventBasedHold := data.DefaultEventBasedHold
- if err := validateBucketName(name); err != nil {
- return jsonResponse{errorMessage: err.Error(), status: http.StatusBadRequest}
- }
-
- _, err := s.backend.GetBucket(name)
- if err == nil {
- return jsonResponse{
- errorMessage: fmt.Sprintf(
- "A Cloud Storage bucket named '%s' already exists. "+
- "Try another name. Bucket names must be globally unique "+
- "across all Google Cloud projects, including those "+
- "outside of your organization.", name),
- status: http.StatusConflict,
- }
- }
-
- // Create the named bucket
- if err := s.backend.CreateBucket(name, backend.BucketAttrs{VersioningEnabled: versioning, DefaultEventBasedHold: defaultEventBasedHold}); err != nil {
- return jsonResponse{errorMessage: err.Error()}
- }
-
- // Return the created bucket:
- bucket, err := s.backend.GetBucket(name)
- if err != nil {
- return jsonResponse{errorMessage: err.Error()}
- }
- return jsonResponse{data: newBucketResponse(bucket, s.options.BucketsLocation)}
}
-func (s *Server) listBuckets(r *http.Request) jsonResponse {
- buckets, err := s.backend.ListBuckets()
- if err != nil {
- return jsonResponse{errorMessage: err.Error()}
- }
- return jsonResponse{data: newListBucketsResponse(buckets, s.options.BucketsLocation)}
-}
+func (s *Server) listBuckets(w http.ResponseWriter, r *http.Request) {
+ s.mtx.RLock()
+ defer s.mtx.RUnlock()
-func (s *Server) getBucket(r *http.Request) jsonResponse {
- bucketName := unescapeMuxVars(mux.Vars(r))["bucketName"]
- bucket, err := s.backend.GetBucket(bucketName)
- if err != nil {
- return jsonResponse{status: http.StatusNotFound}
- }
- return jsonResponse{data: newBucketResponse(bucket, s.options.BucketsLocation)}
-}
-
-func (s *Server) deleteBucket(r *http.Request) jsonResponse {
- bucketName := unescapeMuxVars(mux.Vars(r))["bucketName"]
- err := s.backend.DeleteBucket(bucketName)
- if err == backend.BucketNotFound {
- return jsonResponse{status: http.StatusNotFound}
- }
- if err == backend.BucketNotEmpty {
- return jsonResponse{status: http.StatusPreconditionFailed, errorMessage: err.Error()}
- }
+ bucketNames, err := s.backend.ListBuckets()
if err != nil {
- return jsonResponse{status: http.StatusInternalServerError, errorMessage: err.Error()}
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
- return jsonResponse{}
+ resp := newListBucketsResponse(bucketNames)
+ json.NewEncoder(w).Encode(resp)
}
-func validateBucketName(bucketName string) error {
- if !bucketRegexp.MatchString(bucketName) {
- return errors.New("invalid bucket name")
+func (s *Server) getBucket(w http.ResponseWriter, r *http.Request) {
+ bucketName := mux.Vars(r)["bucketName"]
+ s.mtx.RLock()
+ defer s.mtx.RUnlock()
+ encoder := json.NewEncoder(w)
+ if err := s.backend.GetBucket(bucketName); err != nil {
+ w.WriteHeader(http.StatusNotFound)
+ err := newErrorResponse(http.StatusNotFound, "Not found", nil)
+ encoder.Encode(err)
+ return
}
- return nil
+ resp := newBucketResponse(bucketName)
+ w.WriteHeader(http.StatusOK)
+ encoder.Encode(resp)
}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/config.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/config.go
deleted file mode 100644
index a57d154279a5e..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/config.go
+++ /dev/null
@@ -1,30 +0,0 @@
-package fakestorage
-
-import (
- "encoding/json"
- "net/http"
-)
-
-func (s *Server) updateServerConfig(r *http.Request) jsonResponse {
- var configOptions struct {
- ExternalUrl string `json:"externalUrl,omitempty"`
- PublicHost string `json:"publicHost,omitempty"`
- }
- err := json.NewDecoder(r.Body).Decode(&configOptions)
- if err != nil {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: "Update server config payload can not be parsed.",
- }
- }
-
- if configOptions.ExternalUrl != "" {
- s.externalURL = configOptions.ExternalUrl
- }
-
- if configOptions.PublicHost != "" {
- s.publicHost = configOptions.PublicHost
- }
-
- return jsonResponse{status: http.StatusOK}
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/json_response.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/json_response.go
deleted file mode 100644
index f16a7c5c10180..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/json_response.go
+++ /dev/null
@@ -1,72 +0,0 @@
-package fakestorage
-
-import (
- "encoding/json"
- "errors"
- "net/http"
- "os"
- "syscall"
-
- "github.com/fsouza/fake-gcs-server/internal/backend"
-)
-
-type jsonResponse struct {
- status int
- header http.Header
- data any
- errorMessage string
-}
-
-type jsonHandler = func(r *http.Request) jsonResponse
-
-func jsonToHTTPHandler(h jsonHandler) http.HandlerFunc {
- return func(w http.ResponseWriter, r *http.Request) {
- resp := h(r)
- w.Header().Set("Content-Type", "application/json")
- for name, values := range resp.header {
- for _, value := range values {
- w.Header().Add(name, value)
- }
- }
-
- status := resp.getStatus()
- var data any
- if status > 399 {
- data = newErrorResponse(status, resp.getErrorMessage(status), nil)
- } else {
- data = resp.data
- }
-
- w.WriteHeader(status)
- json.NewEncoder(w).Encode(data)
- }
-}
-
-func (r *jsonResponse) getStatus() int {
- if r.status > 0 {
- return r.status
- }
- if r.errorMessage != "" {
- return http.StatusInternalServerError
- }
- return http.StatusOK
-}
-
-func (r *jsonResponse) getErrorMessage(status int) string {
- if r.errorMessage != "" {
- return r.errorMessage
- }
- return http.StatusText(status)
-}
-
-func errToJsonResponse(err error) jsonResponse {
- status := 0
- var pathError *os.PathError
- if errors.As(err, &pathError) && pathError.Err == syscall.ENAMETOOLONG {
- status = http.StatusBadRequest
- }
- if err == backend.PreConditionFailed {
- status = http.StatusPreconditionFailed
- }
- return jsonResponse{errorMessage: err.Error(), status: status}
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_tranport.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_tranport.go
index b228c787ae682..afaa2efeac76a 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_tranport.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_tranport.go
@@ -7,14 +7,16 @@ package fakestorage
import (
"net/http"
"net/http/httptest"
+
+ "github.com/gorilla/mux"
)
type muxTransport struct {
- handler http.Handler
+ router *mux.Router
}
func (t *muxTransport) RoundTrip(r *http.Request) (*http.Response, error) {
w := httptest.NewRecorder()
- t.handler.ServeHTTP(w, r)
+ t.router.ServeHTTP(w, r)
return w.Result(), nil
}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go
index 6c1533ffea45a..bc1d472f36e30 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go
@@ -5,365 +5,84 @@
package fakestorage
import (
- "bytes"
- "compress/gzip"
"encoding/json"
- "encoding/xml"
- "errors"
"fmt"
- "io"
"net/http"
"sort"
"strconv"
"strings"
- "time"
- "cloud.google.com/go/storage"
"github.com/fsouza/fake-gcs-server/internal/backend"
- "github.com/fsouza/fake-gcs-server/internal/notification"
"github.com/gorilla/mux"
)
-var errInvalidGeneration = errors.New("invalid generation ID")
-
-// ObjectAttrs returns only the meta-data about an object without its contents.
-type ObjectAttrs struct {
- BucketName string
- Name string
- Size int64
- ContentType string
- ContentEncoding string
+// Object represents the object that is stored within the fake server.
+type Object struct {
+ BucketName string `json:"-"`
+ Name string `json:"name"`
+ Content []byte `json:"-"`
// Crc32c checksum of Content. calculated by server when it's upload methods are used.
- Crc32c string
- Md5Hash string
- Etag string
- ACL []storage.ACLRule
- // Dates and generation can be manually injected, so you can do assertions on them,
- // or let us fill these fields for you
- Created time.Time
- Updated time.Time
- Deleted time.Time
- CustomTime time.Time
- Generation int64
- Metadata map[string]string
+ Crc32c string `json:"crc32c,omitempty"`
+ Md5Hash string `json:"md5hash,omitempty"`
}
-func (o *ObjectAttrs) id() string {
+func (o *Object) id() string {
return o.BucketName + "/" + o.Name
}
-type jsonObject struct {
- BucketName string `json:"bucket"`
- Name string `json:"name"`
- Size int64 `json:"size,string"`
- ContentType string `json:"contentType"`
- ContentEncoding string `json:"contentEncoding"`
- Crc32c string `json:"crc32c,omitempty"`
- Md5Hash string `json:"md5Hash,omitempty"`
- Etag string `json:"etag,omitempty"`
- ACL []aclRule `json:"acl,omitempty"`
- Created time.Time `json:"created,omitempty"`
- Updated time.Time `json:"updated,omitempty"`
- Deleted time.Time `json:"deleted,omitempty"`
- CustomTime time.Time `json:"customTime,omitempty"`
- Generation int64 `json:"generation,omitempty,string"`
- Metadata map[string]string `json:"metadata,omitempty"`
-}
-
-// MarshalJSON for ObjectAttrs to use ACLRule instead of storage.ACLRule
-func (o ObjectAttrs) MarshalJSON() ([]byte, error) {
- temp := jsonObject{
- BucketName: o.BucketName,
- Name: o.Name,
- ContentType: o.ContentType,
- ContentEncoding: o.ContentEncoding,
- Size: o.Size,
- Crc32c: o.Crc32c,
- Md5Hash: o.Md5Hash,
- Etag: o.Etag,
- Created: o.Created,
- Updated: o.Updated,
- Deleted: o.Deleted,
- CustomTime: o.CustomTime,
- Generation: o.Generation,
- Metadata: o.Metadata,
- }
- temp.ACL = make([]aclRule, len(o.ACL))
- for i, ACL := range o.ACL {
- temp.ACL[i] = aclRule(ACL)
- }
- return json.Marshal(temp)
-}
-
-// UnmarshalJSON for ObjectAttrs to use ACLRule instead of storage.ACLRule
-func (o *ObjectAttrs) UnmarshalJSON(data []byte) error {
- var temp jsonObject
- if err := json.Unmarshal(data, &temp); err != nil {
- return err
- }
- o.BucketName = temp.BucketName
- o.Name = temp.Name
- o.ContentType = temp.ContentType
- o.ContentEncoding = temp.ContentEncoding
- o.Size = temp.Size
- o.Crc32c = temp.Crc32c
- o.Md5Hash = temp.Md5Hash
- o.Etag = temp.Etag
- o.Created = temp.Created
- o.Updated = temp.Updated
- o.Deleted = temp.Deleted
- o.Generation = temp.Generation
- o.Metadata = temp.Metadata
- o.CustomTime = temp.CustomTime
- o.ACL = make([]storage.ACLRule, len(temp.ACL))
- for i, ACL := range temp.ACL {
- o.ACL[i] = storage.ACLRule(ACL)
- }
-
- return nil
-}
-
-// Object represents an object that is stored within the fake server. The
-// content of this type is stored is buffered, i.e. it's stored in memory.
-// Use StreamingObject to stream the content from a reader, e.g a file.
-type Object struct {
- ObjectAttrs
- Content []byte `json:"-"`
-}
-
-type noopSeekCloser struct {
- io.ReadSeeker
-}
-
-func (n noopSeekCloser) Close() error {
- return nil
-}
-
-func (o Object) StreamingObject() StreamingObject {
- return StreamingObject{
- ObjectAttrs: o.ObjectAttrs,
- Content: noopSeekCloser{bytes.NewReader(o.Content)},
- }
-}
-
-// StreamingObject is the streaming version of Object.
-type StreamingObject struct {
- ObjectAttrs
- Content io.ReadSeekCloser `json:"-"`
-}
-
-func (o *StreamingObject) Close() error {
- if o != nil && o.Content != nil {
- return o.Content.Close()
- }
- return nil
-}
-
-func (o *StreamingObject) BufferedObject() (Object, error) {
- data, err := io.ReadAll(o.Content)
- return Object{
- ObjectAttrs: o.ObjectAttrs,
- Content: data,
- }, err
-}
-
-// ACLRule is an alias of storage.ACLRule to have custom JSON marshal
-type aclRule storage.ACLRule
-
-// ProjectTeam is an alias of storage.ProjectTeam to have custom JSON marshal
-type projectTeam storage.ProjectTeam
-
-// MarshalJSON for ACLRule to customize field names
-func (acl aclRule) MarshalJSON() ([]byte, error) {
- temp := struct {
- Entity storage.ACLEntity `json:"entity"`
- EntityID string `json:"entityId"`
- Role storage.ACLRole `json:"role"`
- Domain string `json:"domain"`
- Email string `json:"email"`
- ProjectTeam *projectTeam `json:"projectTeam"`
- }{
- Entity: acl.Entity,
- EntityID: acl.EntityID,
- Role: acl.Role,
- Domain: acl.Domain,
- Email: acl.Email,
- ProjectTeam: (*projectTeam)(acl.ProjectTeam),
- }
- return json.Marshal(temp)
-}
-
-// UnmarshalJSON for ACLRule to customize field names
-func (acl *aclRule) UnmarshalJSON(data []byte) error {
- temp := struct {
- Entity storage.ACLEntity `json:"entity"`
- EntityID string `json:"entityId"`
- Role storage.ACLRole `json:"role"`
- Domain string `json:"domain"`
- Email string `json:"email"`
- ProjectTeam *projectTeam `json:"projectTeam"`
- }{}
- if err := json.Unmarshal(data, &temp); err != nil {
- return err
- }
- acl.Entity = temp.Entity
- acl.EntityID = temp.EntityID
- acl.Role = temp.Role
- acl.Domain = temp.Domain
- acl.Email = temp.Email
- acl.ProjectTeam = (*storage.ProjectTeam)(temp.ProjectTeam)
- return nil
-}
-
-// MarshalJSON for ProjectTeam to customize field names
-func (team projectTeam) MarshalJSON() ([]byte, error) {
- temp := struct {
- ProjectNumber string `json:"projectNumber"`
- Team string `json:"team"`
- }{
- ProjectNumber: team.ProjectNumber,
- Team: team.Team,
- }
- return json.Marshal(temp)
-}
-
-// UnmarshalJSON for ProjectTeam to customize field names
-func (team *projectTeam) UnmarshalJSON(data []byte) error {
- temp := struct {
- ProjectNumber string `json:"projectNumber"`
- Team string `json:"team"`
- }{}
- if err := json.Unmarshal(data, &temp); err != nil {
- return err
- }
- team.ProjectNumber = temp.ProjectNumber
- team.Team = temp.Team
- return nil
-}
+type objectList []Object
-type objectAttrsList []ObjectAttrs
-
-func (o objectAttrsList) Len() int {
+func (o objectList) Len() int {
return len(o)
}
-func (o objectAttrsList) Less(i int, j int) bool {
+func (o objectList) Less(i int, j int) bool {
return o[i].Name < o[j].Name
}
-func (o *objectAttrsList) Swap(i int, j int) {
+func (o *objectList) Swap(i int, j int) {
d := *o
d[i], d[j] = d[j], d[i]
}
-// CreateObject is the non-streaming version of CreateObjectStreaming.
+// CreateObject stores the given object internally.
//
-// In addition to streaming, CreateObjectStreaming returns an error instead of
-// panicking when an error occurs.
+// If the bucket within the object doesn't exist, it also creates it. If the
+// object already exists, it overrides the object.
func (s *Server) CreateObject(obj Object) {
- err := s.CreateObjectStreaming(obj.StreamingObject())
+ s.mtx.Lock()
+ defer s.mtx.Unlock()
+ err := s.createObject(obj)
if err != nil {
panic(err)
}
}
-// CreateObjectStreaming stores the given object internally.
-//
-// If the bucket within the object doesn't exist, it also creates it. If the
-// object already exists, it overwrites the object.
-func (s *Server) CreateObjectStreaming(obj StreamingObject) error {
- obj, err := s.createObject(obj, backend.NoConditions{})
- if err != nil {
- return err
- }
- obj.Close()
- return nil
-}
-
-func (s *Server) createObject(obj StreamingObject, conditions backend.Conditions) (StreamingObject, error) {
- oldBackendObj, err := s.backend.GetObject(obj.BucketName, obj.Name)
- // Calling Close before checking err is okay on objects, and the object
- // may need to be closed whether or not there's an error.
- defer oldBackendObj.Close() //lint:ignore SA5001 // see above
-
- prevVersionExisted := err == nil
-
- // The caller is responsible for closing the created object.
- newBackendObj, err := s.backend.CreateObject(toBackendObjects([]StreamingObject{obj})[0], conditions)
- if err != nil {
- return StreamingObject{}, err
- }
-
- var newObjEventAttr map[string]string
- if prevVersionExisted {
- newObjEventAttr = map[string]string{
- "overwroteGeneration": strconv.FormatInt(oldBackendObj.Generation, 10),
- }
-
- oldObjEventAttr := map[string]string{
- "overwrittenByGeneration": strconv.FormatInt(newBackendObj.Generation, 10),
- }
-
- bucket, _ := s.backend.GetBucket(obj.BucketName)
- if bucket.VersioningEnabled {
- s.eventManager.Trigger(&oldBackendObj, notification.EventArchive, oldObjEventAttr)
- } else {
- s.eventManager.Trigger(&oldBackendObj, notification.EventDelete, oldObjEventAttr)
- }
- }
-
- newObj := fromBackendObjects([]backend.StreamingObject{newBackendObj})[0]
- s.eventManager.Trigger(&newBackendObj, notification.EventFinalize, newObjEventAttr)
- return newObj, nil
-}
-
-type ListOptions struct {
- Prefix string
- Delimiter string
- Versions bool
- StartOffset string
- EndOffset string
- IncludeTrailingDelimiter bool
+func (s *Server) createObject(obj Object) error {
+ return s.backend.CreateObject(toBackendObjects([]Object{obj})[0])
}
// ListObjects returns a sorted list of objects that match the given criteria,
// or an error if the bucket doesn't exist.
-//
-// Deprecated: use ListObjectsWithOptions.
-func (s *Server) ListObjects(bucketName, prefix, delimiter string, versions bool) ([]ObjectAttrs, []string, error) {
- return s.ListObjectsWithOptions(bucketName, ListOptions{
- Prefix: prefix,
- Delimiter: delimiter,
- Versions: versions,
- })
-}
-
-func (s *Server) ListObjectsWithOptions(bucketName string, options ListOptions) ([]ObjectAttrs, []string, error) {
- backendObjects, err := s.backend.ListObjects(bucketName, options.Prefix, options.Versions)
+func (s *Server) ListObjects(bucketName, prefix, delimiter string) ([]Object, []string, error) {
+ s.mtx.RLock()
+ defer s.mtx.RUnlock()
+ backendObjects, err := s.backend.ListObjects(bucketName)
if err != nil {
return nil, nil, err
}
- objects := fromBackendObjectsAttrs(backendObjects)
- olist := objectAttrsList(objects)
+ objects := fromBackendObjects(backendObjects)
+ olist := objectList(objects)
sort.Sort(&olist)
- var respObjects []ObjectAttrs
+ var respObjects []Object
prefixes := make(map[string]bool)
for _, obj := range olist {
- if !strings.HasPrefix(obj.Name, options.Prefix) {
- continue
- }
- objName := strings.Replace(obj.Name, options.Prefix, "", 1)
- delimPos := strings.Index(objName, options.Delimiter)
- if options.Delimiter != "" && delimPos > -1 {
- prefix := obj.Name[:len(options.Prefix)+delimPos+1]
- if isInOffset(prefix, options.StartOffset, options.EndOffset) {
- prefixes[prefix] = true
- }
- if options.IncludeTrailingDelimiter && obj.Name == prefix {
- respObjects = append(respObjects, obj)
- }
- } else {
- if isInOffset(obj.Name, options.StartOffset, options.EndOffset) {
+ if strings.HasPrefix(obj.Name, prefix) {
+ objName := strings.Replace(obj.Name, prefix, "", 1)
+ delimPos := strings.Index(objName, delimiter)
+ if delimiter != "" && delimPos > -1 {
+ prefixes[obj.Name[:len(prefix)+delimPos+1]] = true
+ } else {
respObjects = append(respObjects, obj)
}
}
@@ -376,781 +95,143 @@ func (s *Server) ListObjectsWithOptions(bucketName string, options ListOptions)
return respObjects, respPrefixes, nil
}
-func isInOffset(name, startOffset, endOffset string) bool {
- if endOffset != "" && startOffset != "" {
- return strings.Compare(name, endOffset) < 0 && strings.Compare(name, startOffset) >= 0
- } else if endOffset != "" {
- return strings.Compare(name, endOffset) < 0
- } else if startOffset != "" {
- return strings.Compare(name, startOffset) >= 0
- } else {
- return true
- }
-}
-
-func getCurrentIfZero(date time.Time) time.Time {
- if date.IsZero() {
- return time.Now()
- }
- return date
-}
-
-func toBackendObjects(objects []StreamingObject) []backend.StreamingObject {
- backendObjects := make([]backend.StreamingObject, 0, len(objects))
+func toBackendObjects(objects []Object) []backend.Object {
+ backendObjects := []backend.Object{}
for _, o := range objects {
- backendObjects = append(backendObjects, backend.StreamingObject{
- ObjectAttrs: backend.ObjectAttrs{
- BucketName: o.BucketName,
- Name: o.Name,
- ContentType: o.ContentType,
- ContentEncoding: o.ContentEncoding,
- ACL: o.ACL,
- Created: getCurrentIfZero(o.Created).Format(timestampFormat),
- Deleted: o.Deleted.Format(timestampFormat),
- Updated: getCurrentIfZero(o.Updated).Format(timestampFormat),
- CustomTime: o.CustomTime.Format(timestampFormat),
- Generation: o.Generation,
- Metadata: o.Metadata,
- },
- Content: o.Content,
- })
- }
- return backendObjects
-}
-
-func bufferedObjectsToBackendObjects(objects []Object) []backend.StreamingObject {
- backendObjects := make([]backend.StreamingObject, 0, len(objects))
- for _, bufferedObject := range objects {
- o := bufferedObject.StreamingObject()
- backendObjects = append(backendObjects, backend.StreamingObject{
- ObjectAttrs: backend.ObjectAttrs{
- BucketName: o.BucketName,
- Name: o.Name,
- ContentType: o.ContentType,
- ContentEncoding: o.ContentEncoding,
- ACL: o.ACL,
- Created: getCurrentIfZero(o.Created).Format(timestampFormat),
- Deleted: o.Deleted.Format(timestampFormat),
- Updated: getCurrentIfZero(o.Updated).Format(timestampFormat),
- CustomTime: o.CustomTime.Format(timestampFormat),
- Generation: o.Generation,
- Metadata: o.Metadata,
- Crc32c: o.Crc32c,
- Md5Hash: o.Md5Hash,
- Size: o.Size,
- Etag: o.Etag,
- },
- Content: o.Content,
+ backendObjects = append(backendObjects, backend.Object{
+ BucketName: o.BucketName,
+ Name: o.Name,
+ Content: o.Content,
+ Crc32c: o.Crc32c,
+ Md5Hash: o.Md5Hash,
})
}
return backendObjects
}
-func fromBackendObjects(objects []backend.StreamingObject) []StreamingObject {
- backendObjects := make([]StreamingObject, 0, len(objects))
+func fromBackendObjects(objects []backend.Object) []Object {
+ backendObjects := []Object{}
for _, o := range objects {
- backendObjects = append(backendObjects, StreamingObject{
- ObjectAttrs: ObjectAttrs{
- BucketName: o.BucketName,
- Name: o.Name,
- Size: o.Size,
- ContentType: o.ContentType,
- ContentEncoding: o.ContentEncoding,
- Crc32c: o.Crc32c,
- Md5Hash: o.Md5Hash,
- Etag: o.Etag,
- ACL: o.ACL,
- Created: convertTimeWithoutError(o.Created),
- Deleted: convertTimeWithoutError(o.Deleted),
- Updated: convertTimeWithoutError(o.Updated),
- CustomTime: convertTimeWithoutError(o.CustomTime),
- Generation: o.Generation,
- Metadata: o.Metadata,
- },
- Content: o.Content,
+ backendObjects = append(backendObjects, Object{
+ BucketName: o.BucketName,
+ Name: o.Name,
+ Content: o.Content,
+ Crc32c: o.Crc32c,
+ Md5Hash: o.Md5Hash,
})
}
return backendObjects
}
-func fromBackendObjectsAttrs(objectAttrs []backend.ObjectAttrs) []ObjectAttrs {
- oattrs := make([]ObjectAttrs, 0, len(objectAttrs))
- for _, o := range objectAttrs {
- oattrs = append(oattrs, ObjectAttrs{
- BucketName: o.BucketName,
- Name: o.Name,
- Size: o.Size,
- ContentType: o.ContentType,
- ContentEncoding: o.ContentEncoding,
- Crc32c: o.Crc32c,
- Md5Hash: o.Md5Hash,
- Etag: o.Etag,
- ACL: o.ACL,
- Created: convertTimeWithoutError(o.Created),
- Deleted: convertTimeWithoutError(o.Deleted),
- Updated: convertTimeWithoutError(o.Updated),
- CustomTime: convertTimeWithoutError(o.CustomTime),
- Generation: o.Generation,
- Metadata: o.Metadata,
- })
- }
- return oattrs
-}
-
-func convertTimeWithoutError(t string) time.Time {
- r, _ := time.Parse(timestampFormat, t)
- return r
-}
-
-// GetObject is the non-streaming version of GetObjectStreaming.
+// GetObject returns the object with the given name in the given bucket, or an
+// error if the object doesn't exist.
func (s *Server) GetObject(bucketName, objectName string) (Object, error) {
- streamingObject, err := s.GetObjectStreaming(bucketName, objectName)
- if err != nil {
- return Object{}, err
- }
- return streamingObject.BufferedObject()
-}
-
-// GetObjectStreaming returns the object with the given name in the given
-// bucket, or an error if the object doesn't exist.
-func (s *Server) GetObjectStreaming(bucketName, objectName string) (StreamingObject, error) {
backendObj, err := s.backend.GetObject(bucketName, objectName)
- if err != nil {
- return StreamingObject{}, err
- }
- obj := fromBackendObjects([]backend.StreamingObject{backendObj})[0]
- return obj, nil
-}
-
-// GetObjectWithGeneration is the non-streaming version of
-// GetObjectWithGenerationStreaming.
-func (s *Server) GetObjectWithGeneration(bucketName, objectName string, generation int64) (Object, error) {
- streamingObject, err := s.GetObjectWithGenerationStreaming(bucketName, objectName, generation)
if err != nil {
return Object{}, err
}
- return streamingObject.BufferedObject()
-}
-
-// GetObjectWithGenerationStreaming returns the object with the given name and
-// given generation ID in the given bucket, or an error if the object doesn't
-// exist.
-//
-// If versioning is enabled, archived versions are considered.
-func (s *Server) GetObjectWithGenerationStreaming(bucketName, objectName string, generation int64) (StreamingObject, error) {
- backendObj, err := s.backend.GetObjectWithGeneration(bucketName, objectName, generation)
- if err != nil {
- return StreamingObject{}, err
- }
- obj := fromBackendObjects([]backend.StreamingObject{backendObj})[0]
+ obj := fromBackendObjects([]backend.Object{backendObj})[0]
return obj, nil
}
-func (s *Server) objectWithGenerationOnValidGeneration(bucketName, objectName, generationStr string) (StreamingObject, error) {
- generation, err := strconv.ParseInt(generationStr, 10, 64)
- if err != nil && generationStr != "" {
- return StreamingObject{}, errInvalidGeneration
- } else if generation > 0 {
- return s.GetObjectWithGenerationStreaming(bucketName, objectName, generation)
- }
- return s.GetObjectStreaming(bucketName, objectName)
-}
-
-func (s *Server) listObjects(r *http.Request) jsonResponse {
- bucketName := unescapeMuxVars(mux.Vars(r))["bucketName"]
- objs, prefixes, err := s.ListObjectsWithOptions(bucketName, ListOptions{
- Prefix: r.URL.Query().Get("prefix"),
- Delimiter: r.URL.Query().Get("delimiter"),
- Versions: r.URL.Query().Get("versions") == "true",
- StartOffset: r.URL.Query().Get("startOffset"),
- EndOffset: r.URL.Query().Get("endOffset"),
- IncludeTrailingDelimiter: r.URL.Query().Get("includeTrailingDelimiter") == "true",
- })
+func (s *Server) listObjects(w http.ResponseWriter, r *http.Request) {
+ bucketName := mux.Vars(r)["bucketName"]
+ prefix := r.URL.Query().Get("prefix")
+ delimiter := r.URL.Query().Get("delimiter")
+ objs, prefixes, err := s.ListObjects(bucketName, prefix, delimiter)
+ encoder := json.NewEncoder(w)
if err != nil {
- return jsonResponse{status: http.StatusNotFound}
- }
- return jsonResponse{data: newListObjectsResponse(objs, prefixes)}
-}
-
-func (s *Server) xmlListObjects(r *http.Request) xmlResponse {
- bucketName := unescapeMuxVars(mux.Vars(r))["bucketName"]
-
- opts := ListOptions{
- Prefix: r.URL.Query().Get("prefix"),
- Delimiter: r.URL.Query().Get("delimiter"),
- Versions: r.URL.Query().Get("versions") == "true",
- }
-
- objs, prefixes, err := s.ListObjectsWithOptions(bucketName, opts)
- if err != nil {
- return xmlResponse{
- status: http.StatusInternalServerError,
- errorMessage: err.Error(),
- }
- }
-
- result := ListBucketResult{
- Name: bucketName,
- Delimiter: opts.Delimiter,
- Prefix: opts.Prefix,
- KeyCount: len(objs),
- }
-
- if opts.Delimiter != "" {
- for _, prefix := range prefixes {
- result.CommonPrefixes = append(result.CommonPrefixes, CommonPrefix{Prefix: prefix})
- }
- }
-
- for _, obj := range objs {
- result.Contents = append(result.Contents, Contents{
- Key: obj.Name,
- Generation: obj.Generation,
- Size: obj.Size,
- LastModified: obj.Updated.Format(time.RFC3339),
- ETag: ETag{Value: obj.Etag},
- })
- }
-
- raw, err := xml.Marshal(result)
- if err != nil {
- return xmlResponse{
- status: http.StatusInternalServerError,
- errorMessage: err.Error(),
- }
- }
-
- return xmlResponse{
- status: http.StatusOK,
- data: []byte(xml.Header + string(raw)),
- }
-}
-
-func (s *Server) getObject(w http.ResponseWriter, r *http.Request) {
- if alt := r.URL.Query().Get("alt"); alt == "media" || r.Method == http.MethodHead {
- s.downloadObject(w, r)
+ w.WriteHeader(http.StatusNotFound)
+ errResp := newErrorResponse(http.StatusNotFound, "Not Found", nil)
+ encoder.Encode(errResp)
return
}
-
- handler := jsonToHTTPHandler(func(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
-
- obj, err := s.objectWithGenerationOnValidGeneration(vars["bucketName"], vars["objectName"], r.FormValue("generation"))
- // Calling Close before checking err is okay on objects, and the object
- // may need to be closed whether or not there's an error.
- defer obj.Close() //lint:ignore SA5001 // see above
- if err != nil {
- statusCode := http.StatusNotFound
- var errMessage string
- if errors.Is(err, errInvalidGeneration) {
- statusCode = http.StatusBadRequest
- errMessage = err.Error()
- }
- return jsonResponse{
- status: statusCode,
- errorMessage: errMessage,
- }
- }
- header := make(http.Header)
- header.Set("Accept-Ranges", "bytes")
- return jsonResponse{
- header: header,
- data: newObjectResponse(obj.ObjectAttrs),
- }
- })
-
- handler(w, r)
-}
-
-func (s *Server) deleteObject(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
- obj, err := s.GetObjectStreaming(vars["bucketName"], vars["objectName"])
- // Calling Close before checking err is okay on objects, and the object
- // may need to be closed whether or not there's an error.
- defer obj.Close() //lint:ignore SA5001 // see above
- if err == nil {
- err = s.backend.DeleteObject(vars["bucketName"], vars["objectName"])
- }
- if err != nil {
- return jsonResponse{status: http.StatusNotFound}
- }
- bucket, _ := s.backend.GetBucket(obj.BucketName)
- backendObj := toBackendObjects([]StreamingObject{obj})[0]
- if bucket.VersioningEnabled {
- s.eventManager.Trigger(&backendObj, notification.EventArchive, nil)
- } else {
- s.eventManager.Trigger(&backendObj, notification.EventDelete, nil)
- }
- return jsonResponse{}
+ encoder.Encode(newListObjectsResponse(objs, prefixes))
}
-func (s *Server) listObjectACL(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
-
- obj, err := s.GetObjectStreaming(vars["bucketName"], vars["objectName"])
+func (s *Server) getObject(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ encoder := json.NewEncoder(w)
+ obj, err := s.GetObject(vars["bucketName"], vars["objectName"])
if err != nil {
- return jsonResponse{status: http.StatusNotFound}
+ errResp := newErrorResponse(http.StatusNotFound, "Not Found", nil)
+ w.WriteHeader(http.StatusNotFound)
+ encoder.Encode(errResp)
+ return
}
- defer obj.Close()
-
- return jsonResponse{data: newACLListResponse(obj.ObjectAttrs)}
+ w.Header().Set("Accept-Ranges", "bytes")
+ encoder.Encode(newObjectResponse(obj))
}
-func (s *Server) setObjectACL(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
-
- obj, err := s.GetObjectStreaming(vars["bucketName"], vars["objectName"])
+func (s *Server) deleteObject(w http.ResponseWriter, r *http.Request) {
+ s.mtx.Lock()
+ defer s.mtx.Unlock()
+ vars := mux.Vars(r)
+ err := s.backend.DeleteObject(vars["bucketName"], vars["objectName"])
if err != nil {
- return jsonResponse{status: http.StatusNotFound}
- }
- defer obj.Close()
-
- var data struct {
- Entity string
- Role string
- }
-
- decoder := json.NewDecoder(r.Body)
- if err := decoder.Decode(&data); err != nil {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: err.Error(),
- }
- }
-
- entity := storage.ACLEntity(data.Entity)
- role := storage.ACLRole(data.Role)
- obj.ACL = []storage.ACLRule{{
- Entity: entity,
- Role: role,
- }}
-
- obj, err = s.createObject(obj, backend.NoConditions{})
- if err != nil {
- return errToJsonResponse(err)
+ errResp := newErrorResponse(http.StatusNotFound, "Not Found", nil)
+ w.WriteHeader(http.StatusNotFound)
+ json.NewEncoder(w).Encode(errResp)
+ return
}
- defer obj.Close()
-
- return jsonResponse{data: newACLListResponse(obj.ObjectAttrs)}
+ w.WriteHeader(http.StatusOK)
}
-func (s *Server) rewriteObject(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
- obj, err := s.objectWithGenerationOnValidGeneration(vars["sourceBucket"], vars["sourceObject"], r.FormValue("sourceGeneration"))
- // Calling Close before checking err is okay on objects, and the object
- // may need to be closed whether or not there's an error.
- defer obj.Close() //lint:ignore SA5001 // see above
+func (s *Server) rewriteObject(w http.ResponseWriter, r *http.Request) {
+ vars := mux.Vars(r)
+ obj, err := s.GetObject(vars["sourceBucket"], vars["sourceObject"])
if err != nil {
- statusCode := http.StatusNotFound
- var errMessage string
- if errors.Is(err, errInvalidGeneration) {
- statusCode = http.StatusBadRequest
- errMessage = err.Error()
- }
- return jsonResponse{errorMessage: errMessage, status: statusCode}
- }
-
- var metadata multipartMetadata
- err = json.NewDecoder(r.Body).Decode(&metadata)
- if err != nil && err != io.EOF { // The body is optional
- return jsonResponse{errorMessage: "Invalid metadata", status: http.StatusBadRequest}
- }
-
- // Only supplied metadata overwrites the new object's metdata
- if len(metadata.Metadata) == 0 {
- metadata.Metadata = obj.Metadata
- }
- if metadata.ContentType == "" {
- metadata.ContentType = obj.ContentType
- }
- if metadata.ContentEncoding == "" {
- metadata.ContentEncoding = obj.ContentEncoding
+ http.Error(w, "not found", http.StatusNotFound)
+ return
}
-
dstBucket := vars["destinationBucket"]
- newObject := StreamingObject{
- ObjectAttrs: ObjectAttrs{
- BucketName: dstBucket,
- Name: vars["destinationObject"],
- ACL: obj.ACL,
- ContentType: metadata.ContentType,
- ContentEncoding: metadata.ContentEncoding,
- Metadata: metadata.Metadata,
- },
- Content: obj.Content,
+ newObject := Object{
+ BucketName: dstBucket,
+ Name: vars["destinationObject"],
+ Content: append([]byte(nil), obj.Content...),
+ Crc32c: obj.Crc32c,
+ Md5Hash: obj.Md5Hash,
}
-
- created, err := s.createObject(newObject, backend.NoConditions{})
- if err != nil {
- return errToJsonResponse(err)
- }
- defer created.Close()
-
- if vars["copyType"] == "copyTo" {
- return jsonResponse{data: newObjectResponse(created.ObjectAttrs)}
- }
- return jsonResponse{data: newObjectRewriteResponse(created.ObjectAttrs)}
+ s.CreateObject(newObject)
+ w.Header().Set("Content-Type", "application/json")
+ json.NewEncoder(w).Encode(newObjectRewriteResponse(newObject))
}
func (s *Server) downloadObject(w http.ResponseWriter, r *http.Request) {
- vars := unescapeMuxVars(mux.Vars(r))
- obj, err := s.objectWithGenerationOnValidGeneration(vars["bucketName"], vars["objectName"], r.FormValue("generation"))
- // Calling Close before checking err is okay on objects, and the object
- // may need to be closed whether or not there's an error.
- defer obj.Close() //lint:ignore SA5001 // see above
+ vars := mux.Vars(r)
+ obj, err := s.GetObject(vars["bucketName"], vars["objectName"])
if err != nil {
- statusCode := http.StatusNotFound
- message := http.StatusText(statusCode)
- if errors.Is(err, errInvalidGeneration) {
- statusCode = http.StatusBadRequest
- message = err.Error()
- }
- http.Error(w, message, statusCode)
+ http.Error(w, "not found", http.StatusNotFound)
return
}
-
- var content io.Reader
- content = obj.Content
status := http.StatusOK
-
- transcoded := false
- ranged := false
- start := int64(0)
- lastByte := int64(0)
- satisfiable := true
- contentLength := int64(0)
-
- handledTranscoding := func() bool {
- // This should also be false if the Cache-Control metadata field == "no-transform",
- // but we don't currently support that field.
- // See https://cloud.google.com/storage/docs/transcoding
-
- if obj.ContentEncoding == "gzip" && !strings.Contains(r.Header.Get("accept-encoding"), "gzip") {
- // GCS will transparently decompress gzipped content, see
- // https://cloud.google.com/storage/docs/transcoding
- // In this case, any Range header is ignored and the full content is returned.
-
- // If the content is not a valid gzip file, ignore errors and continue
- // without transcoding. Otherwise, return decompressed content.
- gzipReader, err := gzip.NewReader(content)
- if err == nil {
- rawContent, err := io.ReadAll(gzipReader)
- if err == nil {
- transcoded = true
- content = bytes.NewReader(rawContent)
- contentLength = int64(len(rawContent))
- obj.Size = contentLength
- return true
- }
- }
- }
- return false
- }
-
- if !handledTranscoding() {
- ranged, start, lastByte, satisfiable = s.handleRange(obj, r)
- contentLength = lastByte - start + 1
- }
-
- if ranged && satisfiable {
- _, err = obj.Content.Seek(start, io.SeekStart)
- if err != nil {
- http.Error(w, "could not seek", http.StatusInternalServerError)
- return
- }
- content = io.LimitReader(obj.Content, contentLength)
+ start, end, content := s.handleRange(obj, r)
+ if len(content) != len(obj.Content) {
status = http.StatusPartialContent
- w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", start, lastByte, obj.Size))
+ w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", start, end, len(obj.Content)))
}
w.Header().Set("Accept-Ranges", "bytes")
- w.Header().Set("Content-Length", strconv.FormatInt(contentLength, 10))
- w.Header().Set("X-Goog-Generation", strconv.FormatInt(obj.Generation, 10))
- w.Header().Set("X-Goog-Hash", fmt.Sprintf("crc32c=%s,md5=%s", obj.Crc32c, obj.Md5Hash))
- w.Header().Set("Last-Modified", obj.Updated.Format(http.TimeFormat))
- w.Header().Set("ETag", obj.Etag)
- for name, value := range obj.Metadata {
- w.Header().Set("X-Goog-Meta-"+name, value)
- }
- w.Header().Set("Access-Control-Allow-Origin", "*")
-
- if ranged && !satisfiable {
- status = http.StatusRequestedRangeNotSatisfiable
- content = bytes.NewReader([]byte(fmt.Sprintf(``+
- `InvalidRange
`+
- `The requested range cannot be satisfied. `+
- `%s `, r.Header.Get("Range"))))
- w.Header().Set(contentTypeHeader, "application/xml; charset=UTF-8")
- } else {
- if obj.ContentType != "" {
- w.Header().Set(contentTypeHeader, obj.ContentType)
- }
- // If content was transcoded, the underlying encoding was removed so we shouldn't report it.
- if obj.ContentEncoding != "" && !transcoded {
- w.Header().Set("Content-Encoding", obj.ContentEncoding)
- }
- // X-Goog-Stored-Content-Encoding must be set to the original encoding,
- // defaulting to "identity" if no encoding was set.
- storedContentEncoding := "identity"
- if obj.ContentEncoding != "" {
- storedContentEncoding = obj.ContentEncoding
- }
- w.Header().Set("X-Goog-Stored-Content-Encoding", storedContentEncoding)
- }
-
+ w.Header().Set("Content-Length", strconv.Itoa(len(content)))
w.WriteHeader(status)
if r.Method == http.MethodGet {
- io.Copy(w, content)
+ w.Write(content)
}
}
-func (s *Server) handleRange(obj StreamingObject, r *http.Request) (ranged bool, start int64, lastByte int64, satisfiable bool) {
- start, end, err := parseRange(r.Header.Get("Range"), obj.Size)
- if err != nil {
- // If the range isn't valid, GCS returns all content.
- return false, 0, obj.Size - 1, false
- }
- // GCS is pretty flexible when it comes to invalid ranges. A 416 http
- // response is only returned when the range start is beyond the length of
- // the content. Otherwise, the range is ignored.
- switch {
- // Invalid start. Return 416 and NO content.
- // Examples:
- // Length: 40, Range: bytes=50-60
- // Length: 40, Range: bytes=50-
- case start >= obj.Size:
- // This IS a ranged request, but it ISN'T satisfiable.
- return true, 0, 0, false
- // Negative range, ignore range and return all content.
- // Examples:
- // Length: 40, Range: bytes=30-20
- case end < start:
- return false, 0, obj.Size - 1, false
- // Return range. Clamp start and end.
- // Examples:
- // Length: 40, Range: bytes=-100
- // Length: 40, Range: bytes=0-100
- default:
- if start < 0 {
- start = 0
- }
- if end >= obj.Size {
- end = obj.Size - 1
- }
- return true, start, end, true
- }
-}
-
-// parseRange parses the range header and returns the corresponding start and
-// end indices in the content. The end index is inclusive. This function
-// doesn't validate that the start and end indices fall within the content
-// bounds. The content length is only used to handle "suffix length" and
-// range-to-end ranges.
-func parseRange(rangeHeaderValue string, contentLength int64) (start int64, end int64, err error) {
- // For information about the range header, see:
- // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Range
- // https://httpwg.org/specs/rfc7233.html#header.range
- // https://httpwg.org/specs/rfc7233.html#byte.ranges
- // https://httpwg.org/specs/rfc7233.html#status.416
- //
- // =
- //
- // The following ranges are parsed:
- // "bytes=40-50" (range with given start and end)
- // "bytes=40-" (range to end of content)
- // "bytes=-40" (suffix length, offset from end of string)
- //
- // The unit MUST be "bytes".
- parts := strings.SplitN(rangeHeaderValue, "=", 2)
- if len(parts) != 2 {
- return 0, 0, fmt.Errorf("expecting `=` in range header, got: %s", rangeHeaderValue)
- }
- if parts[0] != "bytes" {
- return 0, 0, fmt.Errorf("invalid range unit, expecting `bytes`, got: %s", parts[0])
- }
- rangeSpec := parts[1]
- if len(rangeSpec) == 0 {
- return 0, 0, errors.New("empty range")
- }
- if rangeSpec[0] == '-' {
- offsetFromEnd, err := strconv.ParseInt(rangeSpec, 10, 64)
- if err != nil {
- return 0, 0, fmt.Errorf("invalid suffix length, got: %s", rangeSpec)
- }
- start = contentLength + offsetFromEnd
- end = contentLength - 1
- } else {
- rangeParts := strings.SplitN(rangeSpec, "-", 2)
- if len(rangeParts) != 2 {
- return 0, 0, fmt.Errorf("only one range supported, got: %s", rangeSpec)
- }
- start, err = strconv.ParseInt(rangeParts[0], 10, 64)
- if err != nil {
- return 0, 0, fmt.Errorf("invalid range start, got: %s", rangeParts[0])
- }
- if rangeParts[1] == "" {
- end = contentLength - 1
- } else {
- end, err = strconv.ParseInt(rangeParts[1], 10, 64)
- if err != nil {
- return 0, 0, fmt.Errorf("invalid range end, got: %s", rangeParts[1])
+func (s *Server) handleRange(obj Object, r *http.Request) (start, end int, content []byte) {
+ if reqRange := r.Header.Get("Range"); reqRange != "" {
+ parts := strings.SplitN(reqRange, "=", 2)
+ if len(parts) == 2 && parts[0] == "bytes" {
+ rangeParts := strings.SplitN(parts[1], "-", 2)
+ if len(rangeParts) == 2 {
+ start, _ = strconv.Atoi(rangeParts[0])
+ end, _ = strconv.Atoi(rangeParts[1])
+ if end < 1 {
+ end = len(obj.Content)
+ }
+ return start, end, obj.Content[start:end]
}
}
}
- return start, end, nil
-}
-
-func (s *Server) patchObject(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
- bucketName := vars["bucketName"]
- objectName := vars["objectName"]
-
- type acls struct {
- Entity string
- Role string
- }
-
- var payload struct {
- ContentType string
- ContentEncoding string
- Metadata map[string]string `json:"metadata"`
- CustomTime string
- Acl []acls
- }
- err := json.NewDecoder(r.Body).Decode(&payload)
- if err != nil {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: "Metadata in the request couldn't decode",
- }
- }
-
- var attrsToUpdate backend.ObjectAttrs
-
- attrsToUpdate.ContentType = payload.ContentType
- attrsToUpdate.ContentEncoding = payload.ContentEncoding
- attrsToUpdate.Metadata = payload.Metadata
- attrsToUpdate.CustomTime = payload.CustomTime
-
- if len(payload.Acl) > 0 {
- attrsToUpdate.ACL = []storage.ACLRule{}
- for _, aclData := range payload.Acl {
- newAcl := storage.ACLRule{Entity: storage.ACLEntity(aclData.Entity), Role: storage.ACLRole(aclData.Role)}
- attrsToUpdate.ACL = append(attrsToUpdate.ACL, newAcl)
- }
- }
-
- backendObj, err := s.backend.PatchObject(bucketName, objectName, attrsToUpdate)
- if err != nil {
- return jsonResponse{
- status: http.StatusNotFound,
- errorMessage: "Object not found to be PATCHed",
- }
- }
- defer backendObj.Close()
-
- s.eventManager.Trigger(&backendObj, notification.EventMetadata, nil)
- return jsonResponse{data: fromBackendObjects([]backend.StreamingObject{backendObj})[0]}
-}
-
-func (s *Server) updateObject(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
- bucketName := vars["bucketName"]
- objectName := vars["objectName"]
-
- type acls struct {
- Entity string
- Role string
- }
-
- var payload struct {
- Metadata map[string]string `json:"metadata"`
- ContentType string `json:"contentType"`
- CustomTime string
- Acl []acls
- }
- err := json.NewDecoder(r.Body).Decode(&payload)
- if err != nil {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: "Metadata in the request couldn't decode",
- }
- }
-
- var attrsToUpdate backend.ObjectAttrs
-
- attrsToUpdate.Metadata = payload.Metadata
- attrsToUpdate.CustomTime = payload.CustomTime
- attrsToUpdate.ContentType = payload.ContentType
- if len(payload.Acl) > 0 {
- attrsToUpdate.ACL = []storage.ACLRule{}
- for _, aclData := range payload.Acl {
- newAcl := storage.ACLRule{Entity: storage.ACLEntity(aclData.Entity), Role: storage.ACLRole(aclData.Role)}
- attrsToUpdate.ACL = append(attrsToUpdate.ACL, newAcl)
- }
- }
- backendObj, err := s.backend.UpdateObject(bucketName, objectName, attrsToUpdate)
- if err != nil {
- return jsonResponse{
- status: http.StatusNotFound,
- errorMessage: "Object not found to be updated",
- }
- }
- defer backendObj.Close()
-
- s.eventManager.Trigger(&backendObj, notification.EventMetadata, nil)
- return jsonResponse{data: fromBackendObjects([]backend.StreamingObject{backendObj})[0]}
-}
-
-func (s *Server) composeObject(r *http.Request) jsonResponse {
- vars := unescapeMuxVars(mux.Vars(r))
- bucketName := vars["bucketName"]
- destinationObject := vars["destinationObject"]
-
- var composeRequest struct {
- SourceObjects []struct {
- Name string
- }
- Destination struct {
- Bucket string
- ContentType string
- Metadata map[string]string
- }
- }
-
- decoder := json.NewDecoder(r.Body)
- err := decoder.Decode(&composeRequest)
- if err != nil {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: "Error parsing request body",
- }
- }
-
- const maxComposeObjects = 32
- if len(composeRequest.SourceObjects) > maxComposeObjects {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: fmt.Sprintf("The number of source components provided (%d) exceeds the maximum (%d)", len(composeRequest.SourceObjects), maxComposeObjects),
- }
- }
-
- sourceNames := make([]string, 0, len(composeRequest.SourceObjects))
- for _, n := range composeRequest.SourceObjects {
- sourceNames = append(sourceNames, n.Name)
- }
-
- backendObj, err := s.backend.ComposeObject(bucketName, sourceNames, destinationObject, composeRequest.Destination.Metadata, composeRequest.Destination.ContentType)
- if err != nil {
- return jsonResponse{
- status: http.StatusInternalServerError,
- errorMessage: "Error running compose",
- }
- }
- defer backendObj.Close()
-
- obj := fromBackendObjects([]backend.StreamingObject{backendObj})[0]
-
- s.eventManager.Trigger(&backendObj, notification.EventFinalize, nil)
-
- return jsonResponse{data: newObjectResponse(obj.ObjectAttrs)}
+ return 0, 0, obj.Content
}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/response.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/response.go
index e28b84eeb73af..92164cafb1057 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/response.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/response.go
@@ -4,72 +4,44 @@
package fakestorage
-import (
- "time"
-
- "github.com/fsouza/fake-gcs-server/internal/backend"
-)
-
-const timestampFormat = "2006-01-02T15:04:05.999999Z07:00"
-
-func formatTime(t time.Time) string {
- if t.IsZero() {
- return ""
- }
- return t.Format(timestampFormat)
-}
+import "sort"
type listResponse struct {
- Kind string `json:"kind"`
- Items []any `json:"items,omitempty"`
- Prefixes []string `json:"prefixes,omitempty"`
+ Kind string `json:"kind"`
+ Items []interface{} `json:"items"`
+ Prefixes []string `json:"prefixes"`
}
-func newListBucketsResponse(buckets []backend.Bucket, location string) listResponse {
+func newListBucketsResponse(bucketNames []string) listResponse {
resp := listResponse{
Kind: "storage#buckets",
- Items: make([]any, len(buckets)),
+ Items: make([]interface{}, len(bucketNames)),
}
- for i, bucket := range buckets {
- resp.Items[i] = newBucketResponse(bucket, location)
+ sort.Strings(bucketNames)
+ for i, name := range bucketNames {
+ resp.Items[i] = newBucketResponse(name)
}
return resp
}
type bucketResponse struct {
- Kind string `json:"kind"`
- ID string `json:"id"`
- DefaultEventBasedHold bool `json:"defaultEventBasedHold"`
- Name string `json:"name"`
- Versioning *bucketVersioning `json:"versioning,omitempty"`
- TimeCreated string `json:"timeCreated,omitempty"`
- Updated string `json:"updated,omitempty"`
- Location string `json:"location,omitempty"`
- StorageClass string `json:"storageClass,omitempty"`
+ Kind string `json:"kind"`
+ ID string `json:"id"`
+ Name string `json:"name"`
}
-type bucketVersioning struct {
- Enabled bool `json:"enabled,omitempty"`
-}
-
-func newBucketResponse(bucket backend.Bucket, location string) bucketResponse {
+func newBucketResponse(bucketName string) bucketResponse {
return bucketResponse{
- Kind: "storage#bucket",
- ID: bucket.Name,
- Name: bucket.Name,
- DefaultEventBasedHold: bucket.DefaultEventBasedHold,
- Versioning: &bucketVersioning{bucket.VersioningEnabled},
- TimeCreated: formatTime(bucket.TimeCreated),
- Updated: formatTime(bucket.TimeCreated), // not tracking update times yet, reporting `updated` = `timeCreated`
- Location: location,
- StorageClass: "STANDARD",
+ Kind: "storage#bucket",
+ ID: bucketName,
+ Name: bucketName,
}
}
-func newListObjectsResponse(objs []ObjectAttrs, prefixes []string) listResponse {
+func newListObjectsResponse(objs []Object, prefixes []string) listResponse {
resp := listResponse{
Kind: "storage#objects",
- Items: make([]any, len(objs)),
+ Items: make([]interface{}, len(objs)),
Prefixes: prefixes,
}
for i, obj := range objs {
@@ -78,93 +50,27 @@ func newListObjectsResponse(objs []ObjectAttrs, prefixes []string) listResponse
return resp
}
-// objectAccessControl is copied from the Google SDK to avoid direct
-// dependency.
-type objectAccessControl struct {
- Bucket string `json:"bucket,omitempty"`
- Domain string `json:"domain,omitempty"`
- Email string `json:"email,omitempty"`
- Entity string `json:"entity,omitempty"`
- EntityID string `json:"entityId,omitempty"`
- Etag string `json:"etag,omitempty"`
- Generation int64 `json:"generation,omitempty,string"`
- ID string `json:"id,omitempty"`
- Kind string `json:"kind,omitempty"`
- Object string `json:"object,omitempty"`
- ProjectTeam struct {
- ProjectNumber string `json:"projectNumber,omitempty"`
- Team string `json:"team,omitempty"`
- } `json:"projectTeam,omitempty"`
- Role string `json:"role,omitempty"`
- SelfLink string `json:"selfLink,omitempty"`
-}
-
type objectResponse struct {
- Kind string `json:"kind"`
- Name string `json:"name"`
- ID string `json:"id"`
- Bucket string `json:"bucket"`
- Size int64 `json:"size,string"`
- ContentType string `json:"contentType,omitempty"`
- ContentEncoding string `json:"contentEncoding,omitempty"`
- Crc32c string `json:"crc32c,omitempty"`
- ACL []*objectAccessControl `json:"acl,omitempty"`
- Md5Hash string `json:"md5Hash,omitempty"`
- Etag string `json:"etag,omitempty"`
- TimeCreated string `json:"timeCreated,omitempty"`
- TimeDeleted string `json:"timeDeleted,omitempty"`
- Updated string `json:"updated,omitempty"`
- Generation int64 `json:"generation,string"`
- CustomTime string `json:"customTime,omitempty"`
- Metadata map[string]string `json:"metadata,omitempty"`
+ Kind string `json:"kind"`
+ Name string `json:"name"`
+ ID string `json:"id"`
+ Bucket string `json:"bucket"`
+ Size int64 `json:"size,string"`
+ // Crc32c: CRC32c checksum, same as in google storage client code
+ Crc32c string `json:"crc32c,omitempty"`
+ Md5Hash string `json:"md5hash,omitempty"`
}
-func newObjectResponse(obj ObjectAttrs) objectResponse {
- acl := getAccessControlsListFromObject(obj)
-
+func newObjectResponse(obj Object) objectResponse {
return objectResponse{
- Kind: "storage#object",
- ID: obj.id(),
- Bucket: obj.BucketName,
- Name: obj.Name,
- Size: obj.Size,
- ContentType: obj.ContentType,
- ContentEncoding: obj.ContentEncoding,
- Crc32c: obj.Crc32c,
- Md5Hash: obj.Md5Hash,
- Etag: obj.Etag,
- ACL: acl,
- Metadata: obj.Metadata,
- TimeCreated: formatTime(obj.Created),
- TimeDeleted: formatTime(obj.Deleted),
- Updated: formatTime(obj.Updated),
- CustomTime: formatTime(obj.CustomTime),
- Generation: obj.Generation,
- }
-}
-
-type aclListResponse struct {
- Items []*objectAccessControl `json:"items"`
-}
-
-func newACLListResponse(obj ObjectAttrs) aclListResponse {
- if len(obj.ACL) == 0 {
- return aclListResponse{}
- }
- return aclListResponse{Items: getAccessControlsListFromObject(obj)}
-}
-
-func getAccessControlsListFromObject(obj ObjectAttrs) []*objectAccessControl {
- aclItems := make([]*objectAccessControl, len(obj.ACL))
- for idx, aclRule := range obj.ACL {
- aclItems[idx] = &objectAccessControl{
- Bucket: obj.BucketName,
- Entity: string(aclRule.Entity),
- Object: obj.Name,
- Role: string(aclRule.Role),
- }
+ Kind: "storage#object",
+ ID: obj.id(),
+ Bucket: obj.BucketName,
+ Name: obj.Name,
+ Size: int64(len(obj.Content)),
+ Crc32c: obj.Crc32c,
+ Md5Hash: obj.Md5Hash,
}
- return aclItems
}
type rewriteResponse struct {
@@ -176,11 +82,11 @@ type rewriteResponse struct {
Resource objectResponse `json:"resource"`
}
-func newObjectRewriteResponse(obj ObjectAttrs) rewriteResponse {
+func newObjectRewriteResponse(obj Object) rewriteResponse {
return rewriteResponse{
Kind: "storage#rewriteResponse",
- TotalBytesRewritten: obj.Size,
- ObjectSize: obj.Size,
+ TotalBytesRewritten: int64(len(obj.Content)),
+ ObjectSize: int64(len(obj.Content)),
Done: true,
RewriteToken: "",
Resource: newObjectResponse(obj),
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/server.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/server.go
index 7d5f1da33aa2d..165d9d7ec2ed4 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/server.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/server.go
@@ -5,52 +5,30 @@
package fakestorage
import (
- "bufio"
- "bytes"
- "compress/gzip"
"context"
"crypto/tls"
- "errors"
"fmt"
- "io"
- "mime"
- "mime/multipart"
"net"
"net/http"
"net/http/httptest"
- "net/http/httputil"
- "net/textproto"
- "net/url"
- "os"
- "path/filepath"
- "strings"
"sync"
"cloud.google.com/go/storage"
"github.com/fsouza/fake-gcs-server/internal/backend"
- "github.com/fsouza/fake-gcs-server/internal/checksum"
- "github.com/fsouza/fake-gcs-server/internal/notification"
- "github.com/gorilla/handlers"
"github.com/gorilla/mux"
- "golang.org/x/oauth2/google"
"google.golang.org/api/option"
)
-const defaultPublicHost = "storage.googleapis.com"
-
// Server is the fake server.
//
// It provides a fake implementation of the Google Cloud Storage API.
type Server struct {
- backend backend.Storage
- uploads sync.Map
- transport http.RoundTripper
- ts *httptest.Server
- handler http.Handler
- options Options
- externalURL string
- publicHost string
- eventManager notification.EventManager
+ backend backend.Storage
+ uploads map[string]Object
+ transport http.RoundTripper
+ ts *httptest.Server
+ mux *mux.Router
+ mtx sync.RWMutex
}
// NewServer creates a new instance of the server, pre-loaded with the given
@@ -63,8 +41,6 @@ func NewServer(objects []Object) *Server {
}
// NewServerWithHostPort creates a new server that listens on a custom host and port
-//
-// Deprecated: use NewServerWithOptions.
func NewServerWithHostPort(objects []Object, host string, port uint16) (*Server, error) {
return NewServerWithOptions(Options{
InitialObjects: objects,
@@ -73,102 +49,30 @@ func NewServerWithHostPort(objects []Object, host string, port uint16) (*Server,
})
}
-// Options are used to configure the server on creation.
+// Options are used to configure the server on creation
type Options struct {
InitialObjects []Object
StorageRoot string
- Seed string
- Scheme string
Host string
Port uint16
// when set to true, the server will not actually start a TCP listener,
// client requests will get processed by an internal mocked transport.
NoListener bool
-
- // Optional external URL, such as https://gcs.127.0.0.1.nip.io:4443
- // Returned in the Location header for resumable uploads
- // The "real" value is https://www.googleapis.com, the JSON API
- // The default is whatever the server is bound to, such as https://0.0.0.0:4443
- ExternalURL string
-
- // Optional URL for public access
- // An example is "storage.gcs.127.0.0.1.nip.io:4443", which will configure
- // the server to serve objects at:
- // https://storage.gcs.127.0.0.1.nip.io:4443//
- // https://.storage.gcs.127.0.0.1.nip.io:4443>/
- // If unset, the default is "storage.googleapis.com", the XML API
- PublicHost string
-
- // Optional list of headers to add to the CORS header allowlist
- // An example is "X-Goog-Meta-Uploader", which will allow a
- // custom metadata header named "X-Goog-Meta-Uploader" to be
- // sent through the browser
- AllowedCORSHeaders []string
-
- // Destination for writing log.
- Writer io.Writer
-
- // EventOptions contains the events that should be published and the URL
- // of the Google cloud function such events should be published to.
- EventOptions notification.EventManagerOptions
-
- // Location used for buckets in the server.
- BucketsLocation string
-
- CertificateLocation string
-
- PrivateKeyLocation string
}
-// NewServerWithOptions creates a new server configured according to the
-// provided options.
+// NewServerWithOptions creates a new server with custom options
func NewServerWithOptions(options Options) (*Server, error) {
- s, err := newServer(options)
+ s, err := newServer(options.InitialObjects, options.StorageRoot)
if err != nil {
return nil, err
}
-
- allowedHeaders := []string{"Content-Type", "Content-Encoding", "Range", "Content-Range"}
- allowedHeaders = append(allowedHeaders, options.AllowedCORSHeaders...)
-
- cors := handlers.CORS(
- handlers.AllowedMethods([]string{
- http.MethodHead,
- http.MethodGet,
- http.MethodPost,
- http.MethodPut,
- http.MethodPatch,
- http.MethodDelete,
- }),
- handlers.AllowedHeaders(allowedHeaders),
- handlers.AllowedOrigins([]string{"*"}),
- handlers.AllowCredentials(),
- handlers.ExposedHeaders([]string{"Location"}),
- )
-
- s.handler = cors(s.handler)
- if options.Writer != nil {
- s.handler = handlers.LoggingHandler(options.Writer, s.handler)
- }
- s.handler = requestCompressHandler(s.handler)
- s.transport = &muxTransport{handler: s.handler}
-
- s.eventManager, err = notification.NewPubsubEventManager(options.EventOptions, options.Writer)
- if err != nil {
- return nil, err
- }
-
if options.NoListener {
+ s.setTransportToMux()
return s, nil
}
- s.ts = httptest.NewUnstartedServer(s.handler)
- startFunc := s.ts.StartTLS
- if options.Scheme == "http" {
- startFunc = s.ts.Start
- }
-
+ s.ts = httptest.NewUnstartedServer(s.mux)
if options.Port != 0 {
addr := fmt.Sprintf("%s:%d", options.Host, options.Port)
l, err := net.Listen("tcp", addr)
@@ -177,254 +81,64 @@ func NewServerWithOptions(options Options) (*Server, error) {
}
s.ts.Listener.Close()
s.ts.Listener = l
+ s.ts.StartTLS()
+ } else {
+ s.ts.StartTLS()
}
- if options.CertificateLocation != "" && options.PrivateKeyLocation != "" {
- cert, err := tls.LoadX509KeyPair(options.CertificateLocation, options.PrivateKeyLocation)
- if err != nil {
- return nil, err
- }
- s.ts.TLS = &tls.Config{Certificates: []tls.Certificate{cert}}
- }
- startFunc()
-
+ s.setTransportToAddr(s.ts.Listener.Addr().String())
return s, nil
}
-func newServer(options Options) (*Server, error) {
- if len(options.InitialObjects) > 0 && options.Seed != "" {
- return nil, errors.New("please provide either a seed directory or a list of initial objects")
- }
-
- var backendObjects []backend.StreamingObject
- if len(options.InitialObjects) > 0 {
- backendObjects = bufferedObjectsToBackendObjects(options.InitialObjects)
- }
-
+func newServer(objects []Object, storageRoot string) (*Server, error) {
+ backendObjects := toBackendObjects(objects)
var backendStorage backend.Storage
var err error
- if options.StorageRoot != "" {
- backendStorage, err = backend.NewStorageFS(backendObjects, options.StorageRoot)
+ if storageRoot != "" {
+ backendStorage, err = backend.NewStorageFS(backendObjects, storageRoot)
} else {
- backendStorage, err = backend.NewStorageMemory(backendObjects)
+ backendStorage = backend.NewStorageMemory(backendObjects)
}
if err != nil {
return nil, err
}
- publicHost := options.PublicHost
- if publicHost == "" {
- publicHost = defaultPublicHost
- }
-
s := Server{
- backend: backendStorage,
- uploads: sync.Map{},
- externalURL: options.ExternalURL,
- publicHost: publicHost,
- options: options,
- eventManager: ¬ification.PubsubEventManager{},
+ backend: backendStorage,
+ uploads: make(map[string]Object),
}
s.buildMuxer()
- _, err = s.seed()
- if err != nil {
- return nil, err
- }
return &s, nil
}
-func unescapeMuxVars(vars map[string]string) map[string]string {
- m := make(map[string]string)
- for k, v := range vars {
- r, err := url.PathUnescape(v)
- if err == nil {
- m[k] = r
- } else {
- m[k] = v
- }
+func (s *Server) setTransportToAddr(addr string) {
+ // #nosec
+ tlsConfig := tls.Config{InsecureSkipVerify: true}
+ s.transport = &http.Transport{
+ TLSClientConfig: &tlsConfig,
+ DialTLS: func(string, string) (net.Conn, error) {
+ return tls.Dial("tcp", addr, &tlsConfig)
+ },
}
- return m
}
-func (s *Server) buildMuxer() {
- const apiPrefix = "/storage/v1"
- handler := mux.NewRouter().SkipClean(true).UseEncodedPath()
-
- // healthcheck
- handler.Path("/_internal/healthcheck").Methods(http.MethodGet).HandlerFunc(s.healthcheck)
-
- routers := []*mux.Router{
- handler.PathPrefix(apiPrefix).Subrouter(),
- handler.MatcherFunc(s.publicHostMatcher).PathPrefix(apiPrefix).Subrouter(),
- }
-
- for _, r := range routers {
- r.Path("/b").Methods(http.MethodGet).HandlerFunc(jsonToHTTPHandler(s.listBuckets))
- r.Path("/b/").Methods(http.MethodGet).HandlerFunc(jsonToHTTPHandler(s.listBuckets))
- r.Path("/b").Methods(http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.createBucketByPost))
- r.Path("/b/").Methods(http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.createBucketByPost))
- r.Path("/b/{bucketName}").Methods(http.MethodGet).HandlerFunc(jsonToHTTPHandler(s.getBucket))
- r.Path("/b/{bucketName}").Methods(http.MethodPatch).HandlerFunc(jsonToHTTPHandler(s.updateBucket))
- r.Path("/b/{bucketName}").Methods(http.MethodDelete).HandlerFunc(jsonToHTTPHandler(s.deleteBucket))
- r.Path("/b/{bucketName}/o").Methods(http.MethodGet).HandlerFunc(jsonToHTTPHandler(s.listObjects))
- r.Path("/b/{bucketName}/o/").Methods(http.MethodGet).HandlerFunc(jsonToHTTPHandler(s.listObjects))
- r.Path("/b/{bucketName}/o/{objectName:.+}").Methods(http.MethodPatch).HandlerFunc(jsonToHTTPHandler(s.patchObject))
- r.Path("/b/{bucketName}/o/{objectName:.+}/acl").Methods(http.MethodGet).HandlerFunc(jsonToHTTPHandler(s.listObjectACL))
- r.Path("/b/{bucketName}/o/{objectName:.+}/acl").Methods(http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.setObjectACL))
- r.Path("/b/{bucketName}/o/{objectName:.+}/acl/{entity}").Methods(http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.setObjectACL))
- r.Path("/b/{bucketName}/o/{objectName:.+}").Methods(http.MethodGet, http.MethodHead).HandlerFunc(s.getObject)
- r.Path("/b/{bucketName}/o/{objectName:.+}").Methods(http.MethodDelete).HandlerFunc(jsonToHTTPHandler(s.deleteObject))
- r.Path("/b/{sourceBucket}/o/{sourceObject:.+}/{copyType:rewriteTo|copyTo}/b/{destinationBucket}/o/{destinationObject:.+}").Methods(http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.rewriteObject))
- r.Path("/b/{bucketName}/o/{destinationObject:.+}/compose").Methods(http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.composeObject))
- r.Path("/b/{bucketName}/o/{objectName:.+}").Methods(http.MethodPut, http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.updateObject))
- }
-
- // Internal / update server configuration
- handler.Path("/_internal/config").Methods(http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.updateServerConfig))
- handler.MatcherFunc(s.publicHostMatcher).Path("/_internal/config").Methods(http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.updateServerConfig))
- handler.Path("/_internal/reseed").Methods(http.MethodPut, http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.reseedServer))
- // Internal - end
-
- // XML API
- xmlApiRouters := []*mux.Router{
- handler.Host(fmt.Sprintf("{bucketName}.%s", s.publicHost)).Subrouter(),
- handler.MatcherFunc(s.publicHostMatcher).PathPrefix(`/{bucketName}`).Subrouter(),
- }
- for _, r := range xmlApiRouters {
- r.Path("/").Methods(http.MethodGet).HandlerFunc(xmlToHTTPHandler(s.xmlListObjects))
- r.Path("").Methods(http.MethodGet).HandlerFunc(xmlToHTTPHandler(s.xmlListObjects))
- }
-
- bucketHost := fmt.Sprintf("{bucketName}.%s", s.publicHost)
- handler.Host(bucketHost).Path("/{objectName:.+}").Methods(http.MethodGet, http.MethodHead).HandlerFunc(s.downloadObject)
- handler.Path("/download/storage/v1/b/{bucketName}/o/{objectName:.+}").Methods(http.MethodGet, http.MethodHead).HandlerFunc(s.downloadObject)
- handler.Path("/upload/storage/v1/b/{bucketName}/o").Methods(http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.insertObject))
- handler.Path("/upload/storage/v1/b/{bucketName}/o/").Methods(http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.insertObject))
- handler.Path("/upload/storage/v1/b/{bucketName}/o").Methods(http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.uploadFileContent))
- handler.Path("/upload/storage/v1/b/{bucketName}/o/").Methods(http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.uploadFileContent))
- handler.Path("/upload/resumable/{uploadId}").Methods(http.MethodPut, http.MethodPost).HandlerFunc(jsonToHTTPHandler(s.uploadFileContent))
-
- // Batch endpoint
- handler.MatcherFunc(s.publicHostMatcher).Path("/batch/storage/v1").Methods(http.MethodPost).HandlerFunc(s.handleBatchCall)
- handler.Path("/batch/storage/v1").Methods(http.MethodPost).HandlerFunc(s.handleBatchCall)
-
- handler.MatcherFunc(s.publicHostMatcher).Path("/{bucketName}/{objectName:.+}").Methods(http.MethodGet, http.MethodHead).HandlerFunc(s.downloadObject)
- handler.Host("{bucketName:.+}").Path("/{objectName:.+}").Methods(http.MethodGet, http.MethodHead).HandlerFunc(s.downloadObject)
-
- // Form Uploads
- handler.Host(s.publicHost).Path("/{bucketName}").MatcherFunc(matchFormData).Methods(http.MethodPost, http.MethodPut).HandlerFunc(xmlToHTTPHandler(s.insertFormObject))
- handler.Host(bucketHost).MatcherFunc(matchFormData).Methods(http.MethodPost, http.MethodPut).HandlerFunc(xmlToHTTPHandler(s.insertFormObject))
-
- // Signed URLs (upload and download)
- handler.MatcherFunc(s.publicHostMatcher).Path("/{bucketName}/{objectName:.+}").Methods(http.MethodPost, http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.insertObject))
- handler.MatcherFunc(s.publicHostMatcher).Path("/{bucketName}/{objectName:.+}").Methods(http.MethodGet, http.MethodHead).HandlerFunc(s.getObject)
- handler.Host(bucketHost).Path("/{objectName:.+}").Methods(http.MethodPost, http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.insertObject))
- handler.Host("{bucketName:.+}").Path("/{objectName:.+}").Methods(http.MethodPost, http.MethodPut).HandlerFunc(jsonToHTTPHandler(s.insertObject))
-
- s.handler = handler
+func (s *Server) setTransportToMux() {
+ s.transport = &muxTransport{router: s.mux}
}
-func (s *Server) seed() ([]backend.StreamingObject, error) {
- if s.options.Seed == "" {
- return nil, nil
- }
-
- initialObjects, emptyBuckets := generateObjectsFromFiles(s.options.Seed)
-
- backendObjects := bufferedObjectsToBackendObjects(initialObjects)
-
- var err error
- if s.options.StorageRoot != "" {
- s.backend, err = backend.NewStorageFS(backendObjects, s.options.StorageRoot)
- } else {
- s.backend, err = backend.NewStorageMemory(backendObjects)
- }
- if err != nil {
- return nil, err
- }
-
- for _, bucketName := range emptyBuckets {
- s.CreateBucketWithOpts(CreateBucketOpts{Name: bucketName})
- }
- return backendObjects, nil
-}
-
-func (s *Server) reseedServer(r *http.Request) jsonResponse {
- backendObjects, err := s.seed()
- if err != nil {
- return errToJsonResponse(err)
- }
-
- return jsonResponse{data: fromBackendObjects(backendObjects)}
-}
-
-func generateObjectsFromFiles(folder string) ([]Object, []string) {
- var objects []Object
- var emptyBuckets []string
- if files, err := os.ReadDir(folder); err == nil {
- for _, f := range files {
- if !f.IsDir() {
- continue
- }
- bucketName := f.Name()
- localBucketPath := filepath.Join(folder, bucketName)
-
- bucketObjects, err := objectsFromBucket(localBucketPath, bucketName)
- if err != nil {
- continue
- }
-
- if len(bucketObjects) < 1 {
- emptyBuckets = append(emptyBuckets, bucketName)
- }
- objects = append(objects, bucketObjects...)
- }
- }
- return objects, emptyBuckets
-}
-
-func objectsFromBucket(localBucketPath, bucketName string) ([]Object, error) {
- var objects []Object
- err := filepath.Walk(localBucketPath, func(path string, info os.FileInfo, _ error) error {
- if info.Mode().IsRegular() {
- // Rel() should never return error since path always descend from localBucketPath
- relPath, _ := filepath.Rel(localBucketPath, path)
- objectKey := filepath.ToSlash(relPath)
- fileContent, err := os.ReadFile(path)
- if err != nil {
- return fmt.Errorf("could not read file %q: %w", path, err)
- }
- objects = append(objects, Object{
- ObjectAttrs: ObjectAttrs{
- ACL: []storage.ACLRule{
- {
- Entity: "projectOwner-test-project",
- Role: "OWNER",
- },
- },
- BucketName: bucketName,
- Name: objectKey,
- ContentType: mime.TypeByExtension(filepath.Ext(path)),
- Crc32c: checksum.EncodedCrc32cChecksum(fileContent),
- Md5Hash: checksum.EncodedMd5Hash(fileContent),
- },
- Content: fileContent,
- })
- }
- return nil
- })
- return objects, err
-}
-
-func (s *Server) healthcheck(w http.ResponseWriter, r *http.Request) {
- w.WriteHeader(http.StatusOK)
-}
-
-// publicHostMatcher matches incoming requests against the currently specified server publicHost.
-func (s *Server) publicHostMatcher(r *http.Request, rm *mux.RouteMatch) bool {
- if strings.Contains(s.publicHost, ":") || !strings.Contains(r.Host, ":") {
- return r.Host == s.publicHost
- }
- idx := strings.IndexByte(r.Host, ':')
- return r.Host[:idx] == s.publicHost
+func (s *Server) buildMuxer() {
+ s.mux = mux.NewRouter()
+ s.mux.Host("storage.googleapis.com").Path("/{bucketName}/{objectName:.+}").Methods("GET", "HEAD").HandlerFunc(s.downloadObject)
+ s.mux.Host("{bucketName}.storage.googleapis.com").Path("/{objectName:.+}").Methods("GET", "HEAD").HandlerFunc(s.downloadObject)
+ r := s.mux.PathPrefix("/storage/v1").Subrouter()
+ r.Path("/b").Methods("GET").HandlerFunc(s.listBuckets)
+ r.Path("/b/{bucketName}").Methods("GET").HandlerFunc(s.getBucket)
+ r.Path("/b/{bucketName}/o").Methods("GET").HandlerFunc(s.listObjects)
+ r.Path("/b/{bucketName}/o").Methods("POST").HandlerFunc(s.insertObject)
+ r.Path("/b/{bucketName}/o/{objectName:.+}").Methods("GET").HandlerFunc(s.getObject)
+ r.Path("/b/{bucketName}/o/{objectName:.+}").Methods("DELETE").HandlerFunc(s.deleteObject)
+ r.Path("/b/{sourceBucket}/o/{sourceObject:.+}/rewriteTo/b/{destinationBucket}/o/{destinationObject:.+}").HandlerFunc(s.rewriteObject)
+ s.mux.Path("/download/storage/v1/b/{bucketName}/o/{objectName}").Methods("GET").HandlerFunc(s.downloadObject)
+ s.mux.Path("/upload/storage/v1/b/{bucketName}/o").Methods("POST").HandlerFunc(s.insertObject)
+ s.mux.Path("/upload/resumable/{uploadId}").Methods("PUT", "POST").HandlerFunc(s.uploadFileContent)
}
// Stop stops the server, closing all connections.
@@ -439,136 +153,20 @@ func (s *Server) Stop() {
// URL returns the server URL.
func (s *Server) URL() string {
- if s.externalURL != "" {
- return s.externalURL
- }
if s.ts != nil {
return s.ts.URL
}
return ""
}
-// PublicURL returns the server's public download URL.
-func (s *Server) PublicURL() string {
- return fmt.Sprintf("%s://%s", s.scheme(), s.publicHost)
-}
-
-func (s *Server) Backend() backend.Storage {
- return s.backend
-}
-
-func (s *Server) scheme() string {
- if s.options.Scheme == "http" {
- return "http"
- }
- return "https"
-}
-
// HTTPClient returns an HTTP client configured to talk to the server.
func (s *Server) HTTPClient() *http.Client {
return &http.Client{Transport: s.transport}
}
-// HTTPHandler returns an HTTP handler that behaves like GCS.
-func (s *Server) HTTPHandler() http.Handler {
- return s.handler
-}
-
// Client returns a GCS client configured to talk to the server.
func (s *Server) Client() *storage.Client {
- client, err := storage.NewClient(context.Background(), option.WithHTTPClient(s.HTTPClient()), option.WithCredentials(&google.Credentials{}))
- if err != nil {
- panic(err)
- }
+ opt := option.WithHTTPClient(s.HTTPClient())
+ client, _ := storage.NewClient(context.Background(), opt)
return client
}
-
-func (s *Server) handleBatchCall(w http.ResponseWriter, r *http.Request) {
- reader, err := r.MultipartReader()
- if err != nil {
- http.Error(w, "invalid Content-Type header", http.StatusBadRequest)
- return
- }
-
- var b bytes.Buffer
- mw := multipart.NewWriter(&b)
- defer mw.Close()
- w.Header().Set("Content-Type", "multipart/mixed; boundary="+mw.Boundary())
-
- w.WriteHeader(http.StatusOK)
- part, err := reader.NextPart()
- for ; err == nil; part, err = reader.NextPart() {
- contentID := part.Header.Get("Content-ID")
- if contentID == "" {
- // missing content ID, skip
- continue
- }
-
- partHeaders := textproto.MIMEHeader{}
- partHeaders.Set("Content-Type", "application/http")
- partHeaders.Set("Content-ID", strings.Replace(contentID, "<", "@,;:\"/[]?=
-// (including space). gsutil likes to use `=` in the boundary, but incorrectly
-// quotes it using single quotes.
-//
-// We do exclude \ and " from the regexp because those are not supported by the
-// mime package.
-//
-// This has been reported to gsutil
-// (https://github.com/GoogleCloudPlatform/gsutil/issues/1466). If that issue
-// ever gets closed, we should be able to get rid of this hack.
-var gsutilBoundary = regexp.MustCompile(`boundary='([^']*[()<>@,;:"/\[\]?= ]+[^']*)'`)
-
type multipartMetadata struct {
- ContentType string `json:"contentType"`
- ContentEncoding string `json:"contentEncoding"`
- CustomTime time.Time `json:"customTime,omitempty"`
- Name string `json:"name"`
- Metadata map[string]string `json:"metadata"`
-}
-
-type contentRange struct {
- KnownRange bool // Is the range known, or "*"?
- KnownTotal bool // Is the total known, or "*"?
- Start int // Start of the range, -1 if unknown
- End int // End of the range, -1 if unknown
- Total int // Total bytes expected, -1 if unknown
+ Name string `json:"name"`
}
-type generationCondition struct {
- ifGenerationMatch *int64
- ifGenerationNotMatch *int64
-}
-
-func (c generationCondition) ConditionsMet(activeGeneration int64) bool {
- if c.ifGenerationMatch != nil && *c.ifGenerationMatch != activeGeneration {
- return false
- }
- if c.ifGenerationNotMatch != nil && *c.ifGenerationNotMatch == activeGeneration {
- return false
- }
- return true
-}
-
-func (s *Server) insertObject(r *http.Request) jsonResponse {
- bucketName := unescapeMuxVars(mux.Vars(r))["bucketName"]
-
- if _, err := s.backend.GetBucket(bucketName); err != nil {
- return jsonResponse{status: http.StatusNotFound}
+func (s *Server) insertObject(w http.ResponseWriter, r *http.Request) {
+ s.mtx.Lock()
+ defer s.mtx.Unlock()
+ bucketName := mux.Vars(r)["bucketName"]
+ if err := s.backend.GetBucket(bucketName); err != nil {
+ w.WriteHeader(http.StatusNotFound)
+ err := newErrorResponse(http.StatusNotFound, "Not found", nil)
+ json.NewEncoder(w).Encode(err)
+ return
}
uploadType := r.URL.Query().Get("uploadType")
- if uploadType == "" && r.Header.Get("X-Goog-Upload-Protocol") == uploadTypeResumable {
- uploadType = uploadTypeResumable
- }
-
switch uploadType {
- case uploadTypeMedia:
- return s.simpleUpload(bucketName, r)
- case uploadTypeMultipart:
- return s.multipartUpload(bucketName, r)
- case uploadTypeResumable:
- return s.resumableUpload(bucketName, r)
+ case "media":
+ s.simpleUpload(bucketName, w, r)
+ case "multipart":
+ s.multipartUpload(bucketName, w, r)
+ case "resumable":
+ s.resumableUpload(bucketName, w, r)
default:
- // Support Signed URL Uploads
- if r.URL.Query().Get("X-Goog-Algorithm") != "" {
- switch r.Method {
- case http.MethodPost:
- return s.resumableUpload(bucketName, r)
- case http.MethodPut:
- return s.signedUpload(bucketName, r)
- }
- }
- return jsonResponse{errorMessage: "invalid uploadType", status: http.StatusBadRequest}
+ http.Error(w, "invalid uploadType", http.StatusBadRequest)
}
}
-func (s *Server) insertFormObject(r *http.Request) xmlResponse {
- bucketName := unescapeMuxVars(mux.Vars(r))["bucketName"]
-
- if err := r.ParseMultipartForm(32 << 20); nil != err {
- return xmlResponse{errorMessage: "invalid form", status: http.StatusBadRequest}
- }
-
- // Load metadata
- var name string
- if keys, ok := r.MultipartForm.Value["key"]; ok {
- name = keys[0]
- }
+func (s *Server) simpleUpload(bucketName string, w http.ResponseWriter, r *http.Request) {
+ defer r.Body.Close()
+ name := r.URL.Query().Get("name")
if name == "" {
- return xmlResponse{errorMessage: "missing key", status: http.StatusBadRequest}
- }
- var predefinedACL string
- if acls, ok := r.MultipartForm.Value["acl"]; ok {
- predefinedACL = acls[0]
- }
- var contentEncoding string
- if contentEncodings, ok := r.MultipartForm.Value["Content-Encoding"]; ok {
- contentEncoding = contentEncodings[0]
- }
- var contentType string
- if contentTypes, ok := r.MultipartForm.Value["Content-Type"]; ok {
- contentType = contentTypes[0]
- }
- successActionStatus := http.StatusNoContent
- if successActionStatuses, ok := r.MultipartForm.Value["success_action_status"]; ok {
- successInt, err := strconv.Atoi(successActionStatuses[0])
- if err != nil {
- return xmlResponse{errorMessage: err.Error(), status: http.StatusBadRequest}
- }
- if successInt != http.StatusOK && successInt != http.StatusCreated && successInt != http.StatusNoContent {
- return xmlResponse{errorMessage: "invalid success action status", status: http.StatusBadRequest}
- }
- successActionStatus = successInt
+ http.Error(w, "name is required for simple uploads", http.StatusBadRequest)
+ return
}
- metaData := make(map[string]string)
- for key := range r.MultipartForm.Value {
- lowerKey := strings.ToLower(key)
- if metaDataKey := strings.TrimPrefix(lowerKey, "x-goog-meta-"); metaDataKey != lowerKey {
- metaData[metaDataKey] = r.MultipartForm.Value[key][0]
- }
- }
-
- // Load file
- var file *multipart.FileHeader
- if files, ok := r.MultipartForm.File["file"]; ok {
- file = files[0]
- }
- if file == nil {
- return xmlResponse{errorMessage: "missing file", status: http.StatusBadRequest}
- }
- infile, err := file.Open()
+ data, err := ioutil.ReadAll(r.Body)
if err != nil {
- return xmlResponse{errorMessage: err.Error()}
- }
- obj := StreamingObject{
- ObjectAttrs: ObjectAttrs{
- BucketName: bucketName,
- Name: name,
- ContentType: contentType,
- ContentEncoding: contentEncoding,
- ACL: getObjectACL(predefinedACL),
- Metadata: metaData,
- },
- Content: infile,
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
- obj, err = s.createObject(obj, backend.NoConditions{})
+ obj := Object{BucketName: bucketName, Name: name, Content: data, Crc32c: encodedCrc32cChecksum(data), Md5Hash: encodedMd5Hash(data)}
+ err = s.createObject(obj)
if err != nil {
- return xmlResponse{errorMessage: err.Error()}
- }
- defer obj.Close()
-
- if successActionStatus == 201 {
- objectURI := fmt.Sprintf("%s/%s%s", s.URL(), bucketName, name)
- xmlBody := createXmlResponseBody(bucketName, obj.Etag, strings.TrimPrefix(name, "/"), objectURI)
- return xmlResponse{status: successActionStatus, data: xmlBody}
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
- return xmlResponse{status: successActionStatus}
+ w.WriteHeader(http.StatusOK)
+ json.NewEncoder(w).Encode(obj)
}
-func (s *Server) wrapUploadPreconditions(r *http.Request, bucketName string, objectName string) (generationCondition, error) {
- result := generationCondition{
- ifGenerationMatch: nil,
- ifGenerationNotMatch: nil,
- }
- ifGenerationMatch := r.URL.Query().Get("ifGenerationMatch")
-
- if ifGenerationMatch != "" {
- gen, err := strconv.ParseInt(ifGenerationMatch, 10, 64)
- if err != nil {
- return generationCondition{}, err
- }
- result.ifGenerationMatch = &gen
- }
-
- ifGenerationNotMatch := r.URL.Query().Get("ifGenerationNotMatch")
-
- if ifGenerationNotMatch != "" {
- gen, err := strconv.ParseInt(ifGenerationNotMatch, 10, 64)
- if err != nil {
- return generationCondition{}, err
- }
- result.ifGenerationNotMatch = &gen
- }
+var crc32cTable = crc32.MakeTable(crc32.Castagnoli)
- return result, nil
+func crc32cChecksum(content []byte) []byte {
+ checksummer := crc32.New(crc32cTable)
+ checksummer.Write(content)
+ return checksummer.Sum(make([]byte, 0, 4))
}
-func (s *Server) simpleUpload(bucketName string, r *http.Request) jsonResponse {
- defer r.Body.Close()
- name := r.URL.Query().Get("name")
- predefinedACL := r.URL.Query().Get("predefinedAcl")
- contentEncoding := r.URL.Query().Get("contentEncoding")
- customTime := r.URL.Query().Get("customTime")
- if name == "" {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: "name is required for simple uploads",
- }
- }
- obj := StreamingObject{
- ObjectAttrs: ObjectAttrs{
- BucketName: bucketName,
- Name: name,
- ContentType: r.Header.Get(contentTypeHeader),
- ContentEncoding: contentEncoding,
- CustomTime: convertTimeWithoutError(customTime),
- ACL: getObjectACL(predefinedACL),
- },
- Content: notImplementedSeeker{r.Body},
- }
- obj, err := s.createObject(obj, backend.NoConditions{})
- if err != nil {
- return errToJsonResponse(err)
- }
- obj.Close()
- return jsonResponse{data: newObjectResponse(obj.ObjectAttrs)}
+func encodedChecksum(checksum []byte) string {
+ return base64.StdEncoding.EncodeToString(checksum)
}
-type notImplementedSeeker struct {
- io.ReadCloser
+func encodedCrc32cChecksum(content []byte) string {
+ return encodedChecksum(crc32cChecksum(content))
}
-func (s notImplementedSeeker) Seek(offset int64, whence int) (int64, error) {
- return 0, errors.New("not implemented")
+func md5Hash(b []byte) []byte {
+ /* #nosec G401 */
+ h := md5.New()
+ h.Write(b)
+ return h.Sum(nil)
}
-func (s *Server) signedUpload(bucketName string, r *http.Request) jsonResponse {
- defer r.Body.Close()
- name := unescapeMuxVars(mux.Vars(r))["objectName"]
- predefinedACL := r.URL.Query().Get("predefinedAcl")
- contentEncoding := r.URL.Query().Get("contentEncoding")
- customTime := r.URL.Query().Get("customTime")
-
- // Load data from HTTP Headers
- if contentEncoding == "" {
- contentEncoding = r.Header.Get("Content-Encoding")
- }
-
- metaData := make(map[string]string)
- for key := range r.Header {
- lowerKey := strings.ToLower(key)
- if metaDataKey := strings.TrimPrefix(lowerKey, "x-goog-meta-"); metaDataKey != lowerKey {
- metaData[metaDataKey] = r.Header.Get(key)
- }
- }
-
- obj := StreamingObject{
- ObjectAttrs: ObjectAttrs{
- BucketName: bucketName,
- Name: name,
- ContentType: r.Header.Get(contentTypeHeader),
- ContentEncoding: contentEncoding,
- CustomTime: convertTimeWithoutError(customTime),
- ACL: getObjectACL(predefinedACL),
- Metadata: metaData,
- },
- Content: notImplementedSeeker{r.Body},
- }
- obj, err := s.createObject(obj, backend.NoConditions{})
- if err != nil {
- return errToJsonResponse(err)
- }
- obj.Close()
- return jsonResponse{data: newObjectResponse(obj.ObjectAttrs)}
+func encodedHash(hash []byte) string {
+ return base64.StdEncoding.EncodeToString(hash)
}
-func getObjectACL(predefinedACL string) []storage.ACLRule {
- if predefinedACL == "publicRead" {
- return []storage.ACLRule{
- {
- Entity: "allUsers",
- Role: "READER",
- },
- }
- }
-
- return []storage.ACLRule{
- {
- Entity: "projectOwner-test-project",
- Role: "OWNER",
- },
- }
+func encodedMd5Hash(content []byte) string {
+ return encodedHash(md5Hash(content))
}
-func (s *Server) multipartUpload(bucketName string, r *http.Request) jsonResponse {
+func (s *Server) multipartUpload(bucketName string, w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
- params, err := parseContentTypeParams(r.Header.Get(contentTypeHeader))
+ _, params, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
if err != nil {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: "invalid Content-Type header",
- }
+ http.Error(w, "invalid Content-Type header", http.StatusBadRequest)
+ return
}
var (
metadata *multipartMetadata
content []byte
)
- var contentType string
reader := multipart.NewReader(r.Body, params["boundary"])
-
- var partReaders []io.Reader
-
part, err := reader.NextPart()
for ; err == nil; part, err = reader.NextPart() {
if metadata == nil {
metadata, err = loadMetadata(part)
- contentType = metadata.ContentType
} else {
- contentType = part.Header.Get(contentTypeHeader)
content, err = loadContent(part)
- partReaders = append(partReaders, bytes.NewReader(content))
}
if err != nil {
break
}
}
if err != io.EOF {
- return jsonResponse{errorMessage: err.Error()}
- }
-
- objName := r.URL.Query().Get("name")
- predefinedACL := r.URL.Query().Get("predefinedAcl")
- if objName == "" {
- objName = metadata.Name
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
-
- conditions, err := s.wrapUploadPreconditions(r, bucketName, objName)
+ obj := Object{BucketName: bucketName, Name: metadata.Name, Content: content, Crc32c: encodedCrc32cChecksum(content), Md5Hash: encodedMd5Hash(content)}
+ err = s.createObject(obj)
if err != nil {
- return jsonResponse{
- status: http.StatusBadRequest,
- errorMessage: err.Error(),
- }
- }
-
- obj := StreamingObject{
- ObjectAttrs: ObjectAttrs{
- BucketName: bucketName,
- Name: objName,
- ContentType: contentType,
- ContentEncoding: metadata.ContentEncoding,
- CustomTime: metadata.CustomTime,
- ACL: getObjectACL(predefinedACL),
- Metadata: metadata.Metadata,
- },
- Content: notImplementedSeeker{io.NopCloser(io.MultiReader(partReaders...))},
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
-
- obj, err = s.createObject(obj, conditions)
- if err != nil {
- return errToJsonResponse(err)
- }
- defer obj.Close()
- return jsonResponse{data: newObjectResponse(obj.ObjectAttrs)}
-}
-
-func parseContentTypeParams(requestContentType string) (map[string]string, error) {
- requestContentType = gsutilBoundary.ReplaceAllString(requestContentType, `boundary="$1"`)
- _, params, err := mime.ParseMediaType(requestContentType)
- return params, err
+ w.WriteHeader(http.StatusOK)
+ json.NewEncoder(w).Encode(obj)
}
-func (s *Server) resumableUpload(bucketName string, r *http.Request) jsonResponse {
- if r.URL.Query().Has("upload_id") {
- return s.uploadFileContent(r)
- }
- predefinedACL := r.URL.Query().Get("predefinedAcl")
- contentEncoding := r.URL.Query().Get("contentEncoding")
- metadata := new(multipartMetadata)
- if r.Body != http.NoBody {
- var err error
- metadata, err = loadMetadata(r.Body)
- if err != nil {
- return jsonResponse{errorMessage: err.Error()}
- }
- }
+func (s *Server) resumableUpload(bucketName string, w http.ResponseWriter, r *http.Request) {
objName := r.URL.Query().Get("name")
if objName == "" {
+ metadata, err := loadMetadata(r.Body)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
+ }
objName = metadata.Name
}
- if contentEncoding == "" {
- contentEncoding = metadata.ContentEncoding
- }
- obj := Object{
- ObjectAttrs: ObjectAttrs{
- BucketName: bucketName,
- Name: objName,
- ContentType: metadata.ContentType,
- ContentEncoding: contentEncoding,
- CustomTime: metadata.CustomTime,
- ACL: getObjectACL(predefinedACL),
- Metadata: metadata.Metadata,
- },
- }
+ obj := Object{BucketName: bucketName, Name: objName}
uploadID, err := generateUploadID()
if err != nil {
- return jsonResponse{errorMessage: err.Error()}
- }
- s.uploads.Store(uploadID, obj)
- header := make(http.Header)
- location := fmt.Sprintf(
- "%s/upload/storage/v1/b/%s/o?uploadType=resumable&name=%s&upload_id=%s",
- s.URL(),
- bucketName,
- url.PathEscape(objName),
- uploadID,
- )
- header.Set("Location", location)
- if r.Header.Get("X-Goog-Upload-Command") == "start" {
- header.Set("X-Goog-Upload-URL", location)
- header.Set("X-Goog-Upload-Status", "active")
- }
- return jsonResponse{
- data: newObjectResponse(obj.ObjectAttrs),
- header: header,
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
+ s.uploads[uploadID] = obj
+ w.Header().Set("Location", s.URL()+"/upload/resumable/"+uploadID)
+ w.WriteHeader(http.StatusOK)
+ json.NewEncoder(w).Encode(obj)
}
-// uploadFileContent accepts a chunk of a resumable upload
-//
-// A resumable upload is sent in one or more chunks. The request's
-// "Content-Range" header is used to determine if more data is expected.
-//
-// When sending streaming content, the total size is unknown until the stream
-// is exhausted. The Go client always sends streaming content. The sequence of
-// "Content-Range" headers for 2600-byte content sent in 1000-byte chunks are:
-//
-// Content-Range: bytes 0-999/*
-// Content-Range: bytes 1000-1999/*
-// Content-Range: bytes 2000-2599/*
-// Content-Range: bytes */2600
-//
-// When sending chunked content of a known size, the total size is sent as
-// well. The Python client uses this method to upload files and in-memory
-// content. The sequence of "Content-Range" headers for the 2600-byte content
-// sent in 1000-byte chunks are:
-//
-// Content-Range: bytes 0-999/2600
-// Content-Range: bytes 1000-1999/2600
-// Content-Range: bytes 2000-2599/2600
-//
-// The server collects the content, analyzes the "Content-Range", and returns a
-// "308 Permanent Redirect" response if more chunks are expected, and a
-// "200 OK" response if the upload is complete (the Go client also accepts a
-// "201 Created" response). The "Range" header in the response should be set to
-// the size of the content received so far, such as:
-//
-// Range: bytes 0-2000
-//
-// The client (such as the Go client) can send a header "X-Guploader-No-308" if
-// it can't process a native "308 Permanent Redirect". The in-process response
-// then has a status of "200 OK", with a header "X-Http-Status-Code-Override"
-// set to "308".
-func (s *Server) uploadFileContent(r *http.Request) jsonResponse {
- uploadID := r.URL.Query().Get("upload_id")
- rawObj, ok := s.uploads.Load(uploadID)
+func (s *Server) uploadFileContent(w http.ResponseWriter, r *http.Request) {
+ uploadID := mux.Vars(r)["uploadId"]
+ s.mtx.Lock()
+ defer s.mtx.Unlock()
+ obj, ok := s.uploads[uploadID]
if !ok {
- return jsonResponse{status: http.StatusNotFound}
+ http.Error(w, "upload not found", http.StatusNotFound)
+ return
}
- obj := rawObj.(Object)
- // TODO: stream upload file content to and from disk (when using the FS
- // backend, at least) instead of loading the entire content into memory.
content, err := loadContent(r.Body)
if err != nil {
- return jsonResponse{errorMessage: err.Error()}
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
commit := true
- status := http.StatusOK
+ status := http.StatusCreated
+ objLength := len(obj.Content)
obj.Content = append(obj.Content, content...)
- obj.Crc32c = checksum.EncodedCrc32cChecksum(obj.Content)
- obj.Md5Hash = checksum.EncodedMd5Hash(obj.Content)
- obj.Etag = fmt.Sprintf("%q", obj.Md5Hash)
- contentTypeHeader := r.Header.Get(contentTypeHeader)
- if contentTypeHeader != "" {
- obj.ContentType = contentTypeHeader
- } else {
- obj.ContentType = "application/octet-stream"
- }
- responseHeader := make(http.Header)
+ obj.Crc32c = encodedCrc32cChecksum(obj.Content)
+ obj.Md5Hash = encodedMd5Hash(obj.Content)
if contentRange := r.Header.Get("Content-Range"); contentRange != "" {
- parsed, err := parseContentRange(contentRange)
+ commit, err = parseRange(contentRange, objLength, len(content), w)
if err != nil {
- return jsonResponse{errorMessage: err.Error(), status: http.StatusBadRequest}
- }
- if parsed.KnownRange {
- // Middle of streaming request, or any part of chunked request
- responseHeader.Set("Range", fmt.Sprintf("bytes=0-%d", parsed.End))
- // Complete if the range covers the known total
- commit = parsed.KnownTotal && (parsed.End+1 >= parsed.Total)
- } else {
- // End of a streaming request
- responseHeader.Set("Range", fmt.Sprintf("bytes=0-%d", len(obj.Content)))
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return
}
}
if commit {
- s.uploads.Delete(uploadID)
- streamingObject, err := s.createObject(obj.StreamingObject(), backend.NoConditions{})
+ delete(s.uploads, uploadID)
+ err = s.createObject(obj)
if err != nil {
- return errToJsonResponse(err)
- }
- defer streamingObject.Close()
- obj, err = streamingObject.BufferedObject()
- if err != nil {
- return errToJsonResponse(err)
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ return
}
} else {
- if _, no308 := r.Header["X-Guploader-No-308"]; no308 {
- // Go client
- responseHeader.Set("X-Http-Status-Code-Override", "308")
- } else {
- // Python client
- status = http.StatusPermanentRedirect
- }
- s.uploads.Store(uploadID, obj)
- }
- if r.Header.Get("X-Goog-Upload-Command") == "upload, finalize" {
- responseHeader.Set("X-Goog-Upload-Status", "final")
- }
- return jsonResponse{
- status: status,
- data: newObjectResponse(obj.ObjectAttrs),
- header: responseHeader,
- }
+ status = http.StatusOK
+ w.Header().Set("X-Http-Status-Code-Override", "308")
+ s.uploads[uploadID] = obj
+ }
+ data, _ := json.Marshal(obj)
+ w.Header().Set("Content-Type", "application/json")
+ w.Header().Set("Content-Length", strconv.Itoa(len(data)))
+ w.WriteHeader(status)
+ w.Write(data)
}
-// Parse a Content-Range header
-// Some possible valid header values:
-//
-// bytes 0-1023/4096 (first 1024 bytes of a 4096-byte document)
-// bytes 1024-2047/* (second 1024 bytes of a streaming document)
-// bytes */4096 (The end of 4096 byte streaming document)
-// bytes 0-*/* (start and end of a streaming document as sent by nodeJS client lib)
-// bytes */* (start and end of a streaming document as sent by the C++ SDK)
-func parseContentRange(r string) (parsed contentRange, err error) {
+func parseRange(r string, objLength, bodyLength int, w http.ResponseWriter) (finished bool, err error) {
invalidErr := fmt.Errorf("invalid Content-Range: %v", r)
-
- // Require that units == "bytes"
const bytesPrefix = "bytes "
+ var contentLength int
if !strings.HasPrefix(r, bytesPrefix) {
- return parsed, invalidErr
+ return false, invalidErr
}
-
- // Split range from total length
parts := strings.SplitN(r[len(bytesPrefix):], "/", 2)
if len(parts) != 2 {
- return parsed, invalidErr
+ return false, invalidErr
}
+ var rangeStart, rangeEnd int
- // Process range
if parts[0] == "*" {
- parsed.Start = -1
- parsed.End = -1
+ rangeStart = objLength
+ rangeEnd = objLength + bodyLength
} else {
rangeParts := strings.SplitN(parts[0], "-", 2)
if len(rangeParts) != 2 {
- return parsed, invalidErr
+ return false, invalidErr
}
-
- parsed.Start, err = strconv.Atoi(rangeParts[0])
+ rangeStart, err = strconv.Atoi(rangeParts[0])
if err != nil {
- return parsed, invalidErr
+ return false, invalidErr
}
-
- if rangeParts[1] == "*" {
- parsed.End = -1
- } else {
- parsed.KnownRange = true
- parsed.End, err = strconv.Atoi(rangeParts[1])
- if err != nil {
- return parsed, invalidErr
- }
- }
- }
-
- // Process total length
- if parts[1] == "*" {
- parsed.Total = -1
- } else {
- parsed.KnownTotal = true
- parsed.Total, err = strconv.Atoi(parts[1])
+ rangeEnd, err = strconv.Atoi(rangeParts[1])
if err != nil {
- return parsed, invalidErr
+ return false, invalidErr
}
}
- return parsed, nil
+ contentLength = objLength + bodyLength
+ finished = rangeEnd == contentLength
+ w.Header().Set("Range", fmt.Sprintf("bytes=%d-%d", rangeStart, rangeEnd))
+
+ return finished, nil
}
func loadMetadata(rc io.ReadCloser) (*multipartMetadata, error) {
@@ -630,7 +255,7 @@ func loadMetadata(rc io.ReadCloser) (*multipartMetadata, error) {
func loadContent(rc io.ReadCloser) ([]byte, error) {
defer rc.Close()
- return io.ReadAll(rc)
+ return ioutil.ReadAll(rc)
}
func generateUploadID() (string, error) {
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/xml_response.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/xml_response.go
deleted file mode 100644
index 50d9661df84c8..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/xml_response.go
+++ /dev/null
@@ -1,123 +0,0 @@
-package fakestorage
-
-import (
- "encoding/xml"
- "net/http"
- "strings"
-)
-
-type xmlResponse struct {
- status int
- header http.Header
- data any
- errorMessage string
-}
-
-type xmlResponseBody struct {
- XMLName xml.Name `xml:"PostResponse"`
- Bucket string
- Etag struct {
- Value string `xml:",innerxml"`
- }
- Key string
- Location string
-}
-
-type ListBucketResult struct {
- XMLName xml.Name `xml:"ListBucketResult"`
- Name string `xml:"Name"`
- CommonPrefixes []CommonPrefix `xml:"CommonPrefixes,omitempty"`
- Delimiter string `xml:"Delimiter"`
- Prefix string `xml:"Prefix"`
- KeyCount int `xml:"KeyCount"`
- Contents []Contents `xml:"Contents"`
-}
-
-type Contents struct {
- XMLName xml.Name `xml:"Contents"`
- Key string `xml:"Key"`
- Generation int64 `xml:"Generation"`
- LastModified string `xml:"LastModified"`
- ETag ETag
- Size int64 `xml:"Size"`
-}
-
-type CommonPrefix struct {
- Prefix string `xml:"Prefix"`
-}
-
-type ETag struct {
- Value string `xml:",innerxml"`
-}
-
-func (e *ETag) Equals(etag string) bool {
- trim := func(s string) string {
- return strings.TrimPrefix(strings.TrimSuffix(s, "\""), "\"")
- }
- return trim(e.Value) == trim(etag)
-}
-
-type xmlHandler = func(r *http.Request) xmlResponse
-
-func xmlToHTTPHandler(h xmlHandler) http.HandlerFunc {
- return func(w http.ResponseWriter, r *http.Request) {
- resp := h(r)
- w.Header().Set("Content-Type", "application/xml")
- for name, values := range resp.header {
- for _, value := range values {
- w.Header().Add(name, value)
- }
- }
-
- status := resp.getStatus()
- var data any
- if status > 399 {
- data = newErrorResponse(status, resp.getErrorMessage(status), nil)
- } else {
- data = resp.data
- }
-
- w.WriteHeader(status)
-
- dataBytes, ok := data.([]byte)
- if ok {
- w.Write(dataBytes)
- } else {
- xml.NewEncoder(w).Encode(data)
- }
- }
-}
-
-func createXmlResponseBody(bucketName, etag, key, location string) []byte {
- responseBody := xmlResponseBody{
- Bucket: bucketName,
- Etag: struct {
- Value string `xml:",innerxml"`
- }{etag},
- Location: location,
- Key: key,
- }
- x, err := xml.Marshal(responseBody)
- if err != nil {
- return nil
- }
-
- return []byte(xml.Header + string(x))
-}
-
-func (r *xmlResponse) getStatus() int {
- if r.status > 0 {
- return r.status
- }
- if r.errorMessage != "" {
- return http.StatusInternalServerError
- }
- return http.StatusOK
-}
-
-func (r *xmlResponse) getErrorMessage(status int) string {
- if r.errorMessage != "" {
- return r.errorMessage
- }
- return http.StatusText(status)
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/bucket.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/bucket.go
deleted file mode 100644
index e56a7aa7950a5..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/bucket.go
+++ /dev/null
@@ -1,22 +0,0 @@
-// Copyright 2019 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import "time"
-
-// Bucket represents the bucket that is stored within the fake server.
-type Bucket struct {
- Name string
- VersioningEnabled bool
- TimeCreated time.Time
- DefaultEventBasedHold bool
-}
-
-const bucketMetadataSuffix = ".bucketMetadata"
-
-type BucketAttrs struct {
- DefaultEventBasedHold bool
- VersioningEnabled bool
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go
index 64f1106820011..24b1b2cb9437e 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go
@@ -5,465 +5,129 @@
package backend
import (
- "bytes"
"encoding/json"
- "errors"
"fmt"
- "io"
- "io/fs"
+ "io/ioutil"
"net/url"
"os"
+ "path"
"path/filepath"
"strings"
- "sync"
- "syscall"
- "time"
-
- "github.com/fsouza/fake-gcs-server/internal/checksum"
- "github.com/pkg/xattr"
)
-// storageFS is an implementation of the backend storage that stores data on disk
-//
+// StorageFS is an implementation of the backend storage that stores data on disk
// The layout is the following:
-//
// - rootDir
-//
-// |- bucket1
-// \- bucket2
-// |- object1
-// \- object2
-//
+// |- bucket1
+// \- bucket2
+// |- object1
+// \- object2
// Bucket and object names are url path escaped, so there's no special meaning of forward slashes.
-type storageFS struct {
+type StorageFS struct {
rootDir string
- mtx sync.RWMutex
- mh metadataHandler
}
-// NewStorageFS creates an instance of the filesystem-backed storage backend.
-func NewStorageFS(objects []StreamingObject, rootDir string) (Storage, error) {
+// NewStorageFS creates an instance of StorageMemory
+func NewStorageFS(objects []Object, rootDir string) (Storage, error) {
if !strings.HasSuffix(rootDir, "/") {
rootDir += "/"
}
- err := os.MkdirAll(rootDir, 0o700)
- if err != nil {
- return nil, err
+ s := &StorageFS{
+ rootDir: rootDir,
}
-
- var mh metadataHandler = metadataFile{}
- // Use xattr for metadata if rootDir supports it.
- if xattr.XATTR_SUPPORTED {
- xattrHandler := metadataXattr{}
- var xerr *xattr.Error
- _, err = xattrHandler.read(rootDir)
- if err == nil || (errors.As(err, &xerr) && xerr.Err == xattr.ENOATTR) {
- mh = xattrHandler
- }
- }
-
- s := &storageFS{rootDir: rootDir, mh: mh}
for _, o := range objects {
- obj, err := s.CreateObject(o, NoConditions{})
+ err := s.CreateObject(o)
if err != nil {
return nil, err
}
- obj.Close()
}
return s, nil
}
-// CreateBucket creates a bucket in the fs backend. A bucket is a folder in the
-// root directory.
-func (s *storageFS) CreateBucket(name string, bucketAttrs BucketAttrs) error {
- s.mtx.Lock()
- defer s.mtx.Unlock()
- return s.createBucket(name, bucketAttrs)
-}
-
-func (s *storageFS) createBucket(name string, bucketAttrs BucketAttrs) error {
- if bucketAttrs.VersioningEnabled {
- return errors.New("not implemented: fs storage type does not support versioning yet")
- }
- path := filepath.Join(s.rootDir, url.PathEscape(name))
- err := os.MkdirAll(path, 0o700)
- if err != nil {
- return err
- }
- encoded, err := json.Marshal(bucketAttrs)
- if err != nil {
- return err
- }
- return writeFile(path+bucketMetadataSuffix, encoded, 0o600)
+// CreateBucket creates a bucket
+func (s *StorageFS) CreateBucket(name string) error {
+ return os.MkdirAll(filepath.Join(s.rootDir, url.PathEscape(name)), 0700)
}
-// ListBuckets returns a list of buckets from the list of directories in the
-// root directory.
-func (s *storageFS) ListBuckets() ([]Bucket, error) {
- s.mtx.RLock()
- defer s.mtx.RUnlock()
- infos, err := os.ReadDir(s.rootDir)
+// ListBuckets lists buckets
+func (s *StorageFS) ListBuckets() ([]string, error) {
+ infos, err := ioutil.ReadDir(s.rootDir)
if err != nil {
return nil, err
}
- buckets := []Bucket{}
+ buckets := []string{}
for _, info := range infos {
if info.IsDir() {
unescaped, err := url.PathUnescape(info.Name())
if err != nil {
- return nil, fmt.Errorf("failed to unescape object name %s: %w", info.Name(), err)
- }
- fileInfo, err := info.Info()
- if err != nil {
- return nil, fmt.Errorf("failed to get file info for %s: %w", info.Name(), err)
+ return nil, fmt.Errorf("failed to unescape object name %s: %s", info.Name(), err)
}
- buckets = append(buckets, Bucket{Name: unescaped, TimeCreated: timespecToTime(createTimeFromFileInfo(fileInfo))})
+ buckets = append(buckets, unescaped)
}
}
return buckets, nil
}
-func timespecToTime(ts syscall.Timespec) time.Time {
- return time.Unix(int64(ts.Sec), int64(ts.Nsec))
+// GetBucket checks if a bucket exists
+func (s *StorageFS) GetBucket(name string) error {
+ _, err := os.Stat(filepath.Join(s.rootDir, url.PathEscape(name)))
+ return err
}
-func (s *storageFS) UpdateBucket(bucketName string, attrsToUpdate BucketAttrs) error {
- if attrsToUpdate.VersioningEnabled {
- return errors.New("not implemented: fs storage type does not support versioning yet")
- }
- encoded, err := json.Marshal(attrsToUpdate)
+// CreateObject stores an object
+func (s *StorageFS) CreateObject(obj Object) error {
+ err := s.CreateBucket(obj.BucketName)
if err != nil {
return err
}
- path := filepath.Join(s.rootDir, url.PathEscape(bucketName))
- return writeFile(path+bucketMetadataSuffix, encoded, 0o600)
-}
-
-// GetBucket returns information about the given bucket, or an error if it
-// doesn't exist.
-func (s *storageFS) GetBucket(name string) (Bucket, error) {
- s.mtx.RLock()
- defer s.mtx.RUnlock()
- path := filepath.Join(s.rootDir, url.PathEscape(name))
- dirInfo, err := os.Stat(path)
- if err != nil {
- return Bucket{}, err
- }
- attrs, err := getBucketAttributes(path)
- if err != nil {
- return Bucket{}, err
- }
- return Bucket{Name: name, VersioningEnabled: false, TimeCreated: timespecToTime(createTimeFromFileInfo(dirInfo)), DefaultEventBasedHold: attrs.DefaultEventBasedHold}, err
-}
-
-func getBucketAttributes(path string) (BucketAttrs, error) {
- content, err := os.ReadFile(path + bucketMetadataSuffix)
+ encoded, err := json.Marshal(obj)
if err != nil {
- if os.IsNotExist(err) {
- return BucketAttrs{}, nil
- }
- return BucketAttrs{}, err
- }
- var attrs BucketAttrs
- err = json.Unmarshal(content, &attrs)
- if err != nil {
- return BucketAttrs{}, err
- }
- return attrs, nil
-}
-
-// DeleteBucket removes the bucket from the backend.
-func (s *storageFS) DeleteBucket(name string) error {
- objs, err := s.ListObjects(name, "", false)
- if err != nil {
- return BucketNotFound
- }
- if len(objs) > 0 {
- return BucketNotEmpty
+ return err
}
-
- s.mtx.Lock()
- defer s.mtx.Unlock()
- return os.RemoveAll(filepath.Join(s.rootDir, url.PathEscape(name)))
+ return ioutil.WriteFile(filepath.Join(s.rootDir, url.PathEscape(obj.BucketName), url.PathEscape(obj.Name)), encoded, 0664)
}
-// CreateObject stores an object as a regular file on disk. The backing content
-// for the object may be in the same file that's being updated, so a temporary
-// file is first created and then moved into place. This also makes it so any
-// object content readers currently open continue reading from the original
-// file instead of the newly created file.
-//
-// The crc32c checksum and md5 hash of the object content is calculated when
-// reading the object content. Any checksum or hash in the passed-in object
-// metadata is overwritten.
-func (s *storageFS) CreateObject(obj StreamingObject, conditions Conditions) (StreamingObject, error) {
- if obj.Generation > 0 {
- return StreamingObject{}, errors.New("not implemented: fs storage type does not support objects generation yet")
- }
-
- // Note: this was a quick fix for issue #701. Now that we have a way to
- // persist object attributes, we should implement versioning in the
- // filesystem backend and handle generations outside of the backends.
- obj.Generation = time.Now().UnixNano() / 1000
-
- s.mtx.Lock()
- defer s.mtx.Unlock()
- err := s.createBucket(obj.BucketName, BucketAttrs{VersioningEnabled: false})
- if err != nil {
- return StreamingObject{}, err
- }
-
- var activeGeneration int64
- existingObj, err := s.getObject(obj.BucketName, obj.Name)
- if err != nil {
- activeGeneration = 0
- } else {
- activeGeneration = existingObj.Generation
- }
-
- if !conditions.ConditionsMet(activeGeneration) {
- return StreamingObject{}, PreConditionFailed
- }
-
- path := filepath.Join(s.rootDir, url.PathEscape(obj.BucketName), obj.Name)
- if err = os.MkdirAll(filepath.Dir(path), 0o700); err != nil {
- return StreamingObject{}, err
- }
-
- // Nothing to do if this operation only creates directories
- if strings.HasSuffix(obj.Name, "/") {
- // TODO: populate Crc32c, Md5Hash, and Etag
- return StreamingObject{obj.ObjectAttrs, noopSeekCloser{bytes.NewReader([]byte{})}}, nil
- }
-
- var buf bytes.Buffer
- hasher := checksum.NewStreamingHasher()
- objectContent := io.TeeReader(obj.Content, hasher)
-
- if _, err = io.Copy(&buf, objectContent); err != nil {
- return StreamingObject{}, err
- }
-
- if obj.Crc32c == "" {
- obj.Crc32c = hasher.EncodedCrc32cChecksum()
- }
- if obj.Md5Hash == "" {
- obj.Md5Hash = hasher.EncodedMd5Hash()
- }
- if obj.Etag == "" {
- obj.Etag = fmt.Sprintf("%q", obj.Md5Hash)
- }
-
- // TODO: Handle if metadata is not present more gracefully?
- encoded, err := json.Marshal(obj.ObjectAttrs)
+// ListObjects lists the objects in a given bucket with a given prefix and delimeter
+func (s *StorageFS) ListObjects(bucketName string) ([]Object, error) {
+ infos, err := ioutil.ReadDir(path.Join(s.rootDir, url.PathEscape(bucketName)))
if err != nil {
- return StreamingObject{}, err
- }
-
- if err := writeFile(path, buf.Bytes(), 0o600); err != nil {
- return StreamingObject{}, err
- }
-
- if err = s.mh.write(path, encoded); err != nil {
- return StreamingObject{}, err
+ return nil, err
}
-
- err = openObjectAndSetSize(&obj, path)
-
- return obj, err
-}
-
-// ListObjects lists the objects in a given bucket with a given prefix and
-// delimeter.
-func (s *storageFS) ListObjects(bucketName string, prefix string, versions bool) ([]ObjectAttrs, error) {
- s.mtx.RLock()
- defer s.mtx.RUnlock()
-
- objects := []ObjectAttrs{}
- bucketPath := filepath.Join(s.rootDir, url.PathEscape(bucketName))
- if err := filepath.Walk(bucketPath, func(path string, info fs.FileInfo, err error) error {
+ objects := []Object{}
+ for _, info := range infos {
+ unescaped, err := url.PathUnescape(info.Name())
if err != nil {
- return err
- }
-
- objName, _ := filepath.Rel(bucketPath, path)
- if s.mh.isSpecialFile(info.Name()) {
- return nil
- }
- if info.IsDir() {
- return nil
+ return nil, fmt.Errorf("failed to unescape object name %s: %s", info.Name(), err)
}
- if prefix != "" && !strings.HasPrefix(objName, prefix) {
- return nil
- }
- objAttrs, err := s.getObjectAttrs(bucketName, objName)
+ object, err := s.GetObject(bucketName, unescaped)
if err != nil {
- return err
+ return nil, err
}
- objects = append(objects, objAttrs)
- return nil
- }); err != nil {
- return nil, err
+ objects = append(objects, object)
}
return objects, nil
}
-// GetObject get an object by bucket and name.
-func (s *storageFS) GetObject(bucketName, objectName string) (StreamingObject, error) {
- s.mtx.RLock()
- defer s.mtx.RUnlock()
- return s.getObject(bucketName, objectName)
-}
-
-// GetObjectWithGeneration retrieves an specific version of the object. Not
-// implemented for this backend.
-func (s *storageFS) GetObjectWithGeneration(bucketName, objectName string, generation int64) (StreamingObject, error) {
- obj, err := s.GetObject(bucketName, objectName)
- if err != nil {
- return obj, err
- }
- if obj.Generation != generation {
- return obj, fmt.Errorf("generation mismatch, object generation is %v, requested generation is %v (note: filesystem backend does not support versioning)", obj.Generation, generation)
- }
- return obj, nil
-}
-
-func (s *storageFS) getObject(bucketName, objectName string) (StreamingObject, error) {
- attrs, err := s.getObjectAttrs(bucketName, objectName)
- if err != nil {
- return StreamingObject{}, err
- }
-
- obj := StreamingObject{ObjectAttrs: attrs}
- path := filepath.Join(s.rootDir, url.PathEscape(bucketName), objectName)
- err = openObjectAndSetSize(&obj, path)
-
- return obj, err
-}
-
-func openObjectAndSetSize(obj *StreamingObject, path string) error {
- info, err := os.Stat(path)
+// GetObject get an object by bucket and name
+func (s *StorageFS) GetObject(bucketName, objectName string) (Object, error) {
+ encoded, err := ioutil.ReadFile(filepath.Join(s.rootDir, url.PathEscape(bucketName), url.PathEscape(objectName)))
if err != nil {
- return err
+ return Object{}, err
}
-
- obj.Content = newLazyReader(path)
- obj.Size = info.Size()
-
- return nil
-}
-
-func (s *storageFS) getObjectAttrs(bucketName, objectName string) (ObjectAttrs, error) {
- path := filepath.Join(s.rootDir, url.PathEscape(bucketName), objectName)
- encoded, err := s.mh.read(path)
- if err != nil {
- return ObjectAttrs{}, err
- }
-
- var attrs ObjectAttrs
- if err = json.Unmarshal(encoded, &attrs); err != nil {
- return ObjectAttrs{}, err
- }
-
- info, err := os.Stat(path)
+ var obj Object
+ err = json.Unmarshal(encoded, &obj)
if err != nil {
- return ObjectAttrs{}, fmt.Errorf("failed to stat: %w", err)
+ return Object{}, err
}
-
- attrs.Name = filepath.ToSlash(objectName)
- attrs.BucketName = bucketName
- attrs.Size = info.Size()
- return attrs, nil
+ obj.Name = objectName
+ obj.BucketName = bucketName
+ return obj, nil
}
-// DeleteObject deletes an object by bucket and name.
-func (s *storageFS) DeleteObject(bucketName, objectName string) error {
- s.mtx.Lock()
- defer s.mtx.Unlock()
+// DeleteObject deletes an object by bucket and name
+func (s *StorageFS) DeleteObject(bucketName, objectName string) error {
if objectName == "" {
- return errors.New("can't delete object with empty name")
+ return fmt.Errorf("can't delete object with empty name")
}
- path := filepath.Join(s.rootDir, url.PathEscape(bucketName), objectName)
- if err := s.mh.remove(path); err != nil {
- return err
- }
- return os.Remove(path)
-}
-
-func (s *storageFS) PatchObject(bucketName, objectName string, attrsToUpdate ObjectAttrs) (StreamingObject, error) {
- obj, err := s.GetObject(bucketName, objectName)
- if err != nil {
- return StreamingObject{}, err
- }
- defer obj.Close()
-
- obj.patch(attrsToUpdate)
- obj.Generation = 0 // reset generation id
- return s.CreateObject(obj, NoConditions{})
-}
-
-func (s *storageFS) UpdateObject(bucketName, objectName string, attrsToUpdate ObjectAttrs) (StreamingObject, error) {
- obj, err := s.GetObject(bucketName, objectName)
- if err != nil {
- return StreamingObject{}, err
- }
- defer obj.Close()
-
- if attrsToUpdate.Metadata != nil {
- obj.Metadata = map[string]string{}
- }
- obj.patch(attrsToUpdate)
- obj.Generation = 0 // reset generation id
- return s.CreateObject(obj, NoConditions{})
-}
-
-type concatenatedContent struct {
- io.Reader
-}
-
-func (c concatenatedContent) Close() error {
- return errors.New("not implemented")
-}
-
-func (c concatenatedContent) Seek(offset int64, whence int) (int64, error) {
- return 0, errors.New("not implemented")
-}
-
-func concatObjectReaders(objects []StreamingObject) io.ReadSeekCloser {
- readers := make([]io.Reader, len(objects))
- for i := range objects {
- readers[i] = objects[i].Content
- }
- return concatenatedContent{io.MultiReader(readers...)}
-}
-
-func (s *storageFS) ComposeObject(bucketName string, objectNames []string, destinationName string, metadata map[string]string, contentType string) (StreamingObject, error) {
- var sourceObjects []StreamingObject
- for _, n := range objectNames {
- obj, err := s.GetObject(bucketName, n)
- if err != nil {
- return StreamingObject{}, err
- }
- defer obj.Close()
- sourceObjects = append(sourceObjects, obj)
- }
-
- dest := StreamingObject{
- ObjectAttrs: ObjectAttrs{
- BucketName: bucketName,
- Name: destinationName,
- ContentType: contentType,
- Created: time.Now().String(),
- },
- }
-
- dest.Content = concatObjectReaders(sourceObjects)
- dest.Metadata = metadata
-
- result, err := s.CreateObject(dest, NoConditions{})
- if err != nil {
- return result, err
- }
-
- return result, nil
+ return os.Remove(filepath.Join(s.rootDir, url.PathEscape(bucketName), url.PathEscape(objectName)))
}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/lazy_file.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/lazy_file.go
deleted file mode 100644
index 8c30a3149213c..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/lazy_file.go
+++ /dev/null
@@ -1,53 +0,0 @@
-// Copyright 2022 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import (
- "io"
- "os"
- "sync"
-)
-
-type lazyReader struct {
- filename string
- once *sync.Once
- f *os.File
- err error
-}
-
-func newLazyReader(filename string) io.ReadSeekCloser {
- return &lazyReader{
- filename: filename,
- once: &sync.Once{},
- }
-}
-
-func (r *lazyReader) open() {
- r.f, r.err = os.Open(r.filename)
-}
-
-func (r *lazyReader) Read(p []byte) (int, error) {
- r.once.Do(r.open)
- if r.err != nil {
- return 0, r.err
- }
- return r.f.Read(p)
-}
-
-func (r *lazyReader) Seek(offset int64, whence int) (int64, error) {
- r.once.Do(r.open)
- if r.err != nil {
- return 0, r.err
- }
- return r.f.Seek(offset, whence)
-}
-
-func (r *lazyReader) Close() error {
- r.once.Do(r.open)
- if r.err != nil {
- return r.err
- }
- return r.f.Close()
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/memory.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/memory.go
index 5075bf19653a2..257843ad36308 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/memory.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/memory.go
@@ -7,386 +7,118 @@ package backend
import (
"errors"
"fmt"
- "io"
- "strings"
"sync"
- "time"
-
- "github.com/fsouza/fake-gcs-server/internal/checksum"
)
-const timestampFormat = "2006-01-02T15:04:05.999999Z07:00"
-
-// storageMemory is an implementation of the backend storage that stores data
-// in memory.
-type storageMemory struct {
- buckets map[string]bucketInMemory
+// StorageMemory is an implementation of the backend storage that stores data in memory
+type StorageMemory struct {
+ buckets map[string][]Object
mtx sync.RWMutex
}
-type bucketInMemory struct {
- Bucket
- // maybe we can refactor how the memory backend works? no need to store
- // Object instances.
- activeObjects []Object
- archivedObjects []Object
-}
-
-func newBucketInMemory(name string, versioningEnabled bool, bucketAttrs BucketAttrs) bucketInMemory {
- return bucketInMemory{Bucket{name, versioningEnabled, time.Now(), bucketAttrs.DefaultEventBasedHold}, []Object{}, []Object{}}
-}
-
-func (bm *bucketInMemory) addObject(obj Object) Object {
- if obj.Crc32c == "" {
- obj.Crc32c = checksum.EncodedCrc32cChecksum(obj.Content)
- }
- if obj.Md5Hash == "" {
- obj.Md5Hash = checksum.EncodedMd5Hash(obj.Content)
- }
- if obj.Etag == "" {
- obj.Etag = fmt.Sprintf("%q", obj.Md5Hash)
- }
- if obj.Size == 0 {
- obj.Size = int64(len(obj.Content))
- }
- obj.Generation = getNewGenerationIfZero(obj.Generation)
- index := findObject(obj, bm.activeObjects, false)
- if index >= 0 {
- if bm.VersioningEnabled {
- bm.activeObjects[index].Deleted = time.Now().Format(timestampFormat)
- bm.cpToArchive(bm.activeObjects[index])
- }
- bm.activeObjects[index] = obj
- } else {
- bm.activeObjects = append(bm.activeObjects, obj)
- }
-
- return obj
-}
-
-func getNewGenerationIfZero(generation int64) int64 {
- if generation == 0 {
- return time.Now().UnixNano() / 1000
- }
- return generation
-}
-
-func (bm *bucketInMemory) deleteObject(obj Object, matchGeneration bool) {
- index := findObject(obj, bm.activeObjects, matchGeneration)
- if index < 0 {
- return
- }
- if bm.VersioningEnabled {
- obj.Deleted = time.Now().Format(timestampFormat)
- bm.mvToArchive(obj)
- } else {
- bm.deleteFromObjectList(obj, true)
- }
-}
-
-func (bm *bucketInMemory) cpToArchive(obj Object) {
- bm.archivedObjects = append(bm.archivedObjects, obj)
-}
-
-func (bm *bucketInMemory) mvToArchive(obj Object) {
- bm.cpToArchive(obj)
- bm.deleteFromObjectList(obj, true)
-}
-
-func (bm *bucketInMemory) deleteFromObjectList(obj Object, active bool) {
- objects := bm.activeObjects
- if !active {
- objects = bm.archivedObjects
- }
- index := findObject(obj, objects, !active)
- objects[index] = objects[len(objects)-1]
- if active {
- bm.activeObjects = objects[:len(objects)-1]
- } else {
- bm.archivedObjects = objects[:len(objects)-1]
- }
-}
-
-// findObject looks for an object in the given list and return the index where it
-// was found, or -1 if the object doesn't exist.
-func findObject(obj Object, objectList []Object, matchGeneration bool) int {
- for i, o := range objectList {
- if matchGeneration && obj.ID() == o.ID() {
- return i
- }
- if !matchGeneration && obj.IDNoGen() == o.IDNoGen() {
- return i
- }
- }
- return -1
-}
-
-// findLastObjectGeneration looks for an object in the given list and return the index where it
-// was found, or -1 if the object doesn't exist.
-func findLastObjectGeneration(obj Object, objectList []Object) int64 {
- highScore := int64(0)
- for _, o := range objectList {
- if obj.IDNoGen() == o.IDNoGen() && o.Generation > highScore {
- highScore = o.Generation
- }
- }
- return highScore
-}
-
-// NewStorageMemory creates an instance of StorageMemory.
-func NewStorageMemory(objects []StreamingObject) (Storage, error) {
- s := &storageMemory{
- buckets: make(map[string]bucketInMemory),
+// NewStorageMemory creates an instance of StorageMemory
+func NewStorageMemory(objects []Object) Storage {
+ s := &StorageMemory{
+ buckets: make(map[string][]Object),
}
for _, o := range objects {
- bufferedObject, err := o.BufferedObject()
- if err != nil {
- return nil, err
- }
- s.CreateBucket(o.BucketName, BucketAttrs{false, false})
- bucket := s.buckets[o.BucketName]
- bucket.addObject(bufferedObject)
- s.buckets[o.BucketName] = bucket
- }
- return s, nil
-}
-
-func (s *storageMemory) UpdateBucket(bucketName string, attrsToUpdate BucketAttrs) error {
- bucketInMemory, err := s.getBucketInMemory(bucketName)
- if err != nil {
- return err
+ s.buckets[o.BucketName] = append(s.buckets[o.BucketName], o)
}
- bucketInMemory.DefaultEventBasedHold = attrsToUpdate.DefaultEventBasedHold
- bucketInMemory.VersioningEnabled = attrsToUpdate.VersioningEnabled
- s.buckets[bucketName] = bucketInMemory
- return nil
+ return s
}
-// CreateBucket creates a bucket.
-func (s *storageMemory) CreateBucket(name string, bucketAttrs BucketAttrs) error {
+// CreateBucket creates a bucket
+func (s *StorageMemory) CreateBucket(name string) error {
s.mtx.Lock()
defer s.mtx.Unlock()
- bucket, err := s.getBucketInMemory(name)
- if err == nil {
- if bucket.VersioningEnabled != bucketAttrs.VersioningEnabled {
- return fmt.Errorf("a bucket named %s already exists, but with different properties", name)
- }
- return nil
+ if _, ok := s.buckets[name]; !ok {
+ s.buckets[name] = nil
}
- s.buckets[name] = newBucketInMemory(name, bucketAttrs.VersioningEnabled, bucketAttrs)
return nil
}
-// ListBuckets lists buckets currently registered in the backend.
-func (s *storageMemory) ListBuckets() ([]Bucket, error) {
- s.mtx.RLock()
- defer s.mtx.RUnlock()
- buckets := []Bucket{}
- for _, bucketInMemory := range s.buckets {
- buckets = append(buckets, Bucket{bucketInMemory.Name, bucketInMemory.VersioningEnabled, bucketInMemory.TimeCreated, false})
+// ListBuckets lists buckets
+func (s *StorageMemory) ListBuckets() ([]string, error) {
+ s.mtx.Lock()
+ defer s.mtx.Unlock()
+ buckets := []string{}
+ for bucket := range s.buckets {
+ buckets = append(buckets, bucket)
}
return buckets, nil
}
-// GetBucket retrieves the bucket information from the backend.
-func (s *storageMemory) GetBucket(name string) (Bucket, error) {
- s.mtx.RLock()
- defer s.mtx.RUnlock()
- bucketInMemory, err := s.getBucketInMemory(name)
- return Bucket{bucketInMemory.Name, bucketInMemory.VersioningEnabled, bucketInMemory.TimeCreated, bucketInMemory.DefaultEventBasedHold}, err
-}
+// GetBucket checks if a bucket exists
+func (s *StorageMemory) GetBucket(name string) error {
+ s.mtx.Lock()
+ defer s.mtx.Unlock()
-func (s *storageMemory) getBucketInMemory(name string) (bucketInMemory, error) {
- if bucketInMemory, found := s.buckets[name]; found {
- return bucketInMemory, nil
+ if _, ok := s.buckets[name]; !ok {
+ return fmt.Errorf("no bucket named %s", name)
}
- return bucketInMemory{}, fmt.Errorf("no bucket named %s", name)
+ return nil
}
-// DeleteBucket removes the bucket from the backend.
-func (s *storageMemory) DeleteBucket(name string) error {
- objs, err := s.ListObjects(name, "", false)
- if err != nil {
- return BucketNotFound
- }
- if len(objs) > 0 {
- return BucketNotEmpty
- }
-
+// CreateObject stores an object
+func (s *StorageMemory) CreateObject(obj Object) error {
s.mtx.Lock()
defer s.mtx.Unlock()
- delete(s.buckets, name)
+
+ index := s.findObject(obj)
+ if index < 0 {
+ s.buckets[obj.BucketName] = append(s.buckets[obj.BucketName], obj)
+ } else {
+ s.buckets[obj.BucketName][index] = obj
+ }
return nil
}
-// CreateObject stores an object in the backend.
-func (s *storageMemory) CreateObject(obj StreamingObject, conditions Conditions) (StreamingObject, error) {
- s.mtx.Lock()
- defer s.mtx.Unlock()
- bucketInMemory, err := s.getBucketInMemory(obj.BucketName)
- if err != nil {
- bucketInMemory = newBucketInMemory(obj.BucketName, false, BucketAttrs{})
- }
- bufferedObj, err := obj.BufferedObject()
- currentGeneration := findLastObjectGeneration(bufferedObj, bucketInMemory.activeObjects)
- if !conditions.ConditionsMet(currentGeneration) {
- return StreamingObject{}, PreConditionFailed
- }
- if err != nil {
- return StreamingObject{}, err
+// findObject looks for an object in its bucket and return the index where it
+// was found, or -1 if the object doesn't exist.
+//
+// It doesn't lock the mutex, callers must lock the mutex before calling this
+// method.
+func (s *StorageMemory) findObject(obj Object) int {
+ for i, o := range s.buckets[obj.BucketName] {
+ if obj.ID() == o.ID() {
+ return i
+ }
}
- newObj := bucketInMemory.addObject(bufferedObj)
- s.buckets[obj.BucketName] = bucketInMemory
- return newObj.StreamingObject(), nil
+ return -1
}
-// ListObjects lists the objects in a given bucket with a given prefix and
-// delimiter.
-func (s *storageMemory) ListObjects(bucketName string, prefix string, versions bool) ([]ObjectAttrs, error) {
+// ListObjects lists the objects in a given bucket with a given prefix and delimeter
+func (s *StorageMemory) ListObjects(bucketName string) ([]Object, error) {
s.mtx.RLock()
defer s.mtx.RUnlock()
- bucketInMemory, err := s.getBucketInMemory(bucketName)
- if err != nil {
- return []ObjectAttrs{}, err
- }
- objAttrs := make([]ObjectAttrs, 0, len(bucketInMemory.activeObjects))
- for _, obj := range bucketInMemory.activeObjects {
- if prefix != "" && !strings.HasPrefix(obj.Name, prefix) {
- continue
- }
- objAttrs = append(objAttrs, obj.ObjectAttrs)
- }
- if !versions {
- return objAttrs, nil
+ objects, ok := s.buckets[bucketName]
+ if !ok {
+ return nil, errors.New("bucket not found")
}
-
- archvObjs := make([]ObjectAttrs, 0, len(bucketInMemory.archivedObjects))
- for _, obj := range bucketInMemory.archivedObjects {
- if prefix != "" && !strings.HasPrefix(obj.Name, prefix) {
- continue
- }
- archvObjs = append(archvObjs, obj.ObjectAttrs)
- }
- return append(objAttrs, archvObjs...), nil
+ return objects, nil
}
-func (s *storageMemory) GetObject(bucketName, objectName string) (StreamingObject, error) {
- return s.GetObjectWithGeneration(bucketName, objectName, 0)
-}
-
-// GetObjectWithGeneration retrieves a specific version of the object.
-func (s *storageMemory) GetObjectWithGeneration(bucketName, objectName string, generation int64) (StreamingObject, error) {
+// GetObject get an object by bucket and name
+func (s *StorageMemory) GetObject(bucketName, objectName string) (Object, error) {
+ obj := Object{BucketName: bucketName, Name: objectName}
s.mtx.RLock()
defer s.mtx.RUnlock()
- bucketInMemory, err := s.getBucketInMemory(bucketName)
- if err != nil {
- return StreamingObject{}, err
- }
- matchGeneration := false
- obj := Object{ObjectAttrs: ObjectAttrs{BucketName: bucketName, Name: objectName}}
- listToConsider := bucketInMemory.activeObjects
- if generation != 0 {
- matchGeneration = true
- obj.Generation = generation
- listToConsider = append(listToConsider, bucketInMemory.archivedObjects...)
- }
- index := findObject(obj, listToConsider, matchGeneration)
+ index := s.findObject(obj)
if index < 0 {
- return obj.StreamingObject(), errors.New("object not found")
+ return obj, errors.New("object not found")
}
-
- return listToConsider[index].StreamingObject(), nil
+ return s.buckets[bucketName][index], nil
}
-func (s *storageMemory) DeleteObject(bucketName, objectName string) error {
- obj, err := s.GetObject(bucketName, objectName)
- if err != nil {
- return err
- }
- s.mtx.Lock()
- defer s.mtx.Unlock()
- bucketInMemory, err := s.getBucketInMemory(bucketName)
- if err != nil {
- return err
- }
- bufferedObject, err := obj.BufferedObject()
- if err != nil {
- return err
+// DeleteObject deletes an object by bucket and name
+func (s *StorageMemory) DeleteObject(bucketName, objectName string) error {
+ obj := Object{BucketName: bucketName, Name: objectName}
+ index := s.findObject(obj)
+ if index < 0 {
+ return fmt.Errorf("no such object in bucket %s: %s", bucketName, objectName)
}
- bucketInMemory.deleteObject(bufferedObject, true)
- s.buckets[bucketName] = bucketInMemory
+ bucket := s.buckets[obj.BucketName]
+ bucket[index] = bucket[len(bucket)-1]
+ s.buckets[obj.BucketName] = bucket[:len(bucket)-1]
return nil
}
-
-func (s *storageMemory) PatchObject(bucketName, objectName string, attrsToUpdate ObjectAttrs) (StreamingObject, error) {
- obj, err := s.GetObject(bucketName, objectName)
- if err != nil {
- return StreamingObject{}, err
- }
-
- obj.patch(attrsToUpdate)
- s.CreateObject(obj, NoConditions{})
- return obj, nil
-}
-
-// UpdateObject replaces an object metadata, custom time, and acl.
-func (s *storageMemory) UpdateObject(bucketName, objectName string, attrsToUpdate ObjectAttrs) (StreamingObject, error) {
- obj, err := s.GetObject(bucketName, objectName)
- if err != nil {
- return StreamingObject{}, err
- }
-
- if attrsToUpdate.Metadata != nil {
- obj.Metadata = map[string]string{}
- }
- obj.patch(attrsToUpdate)
- s.CreateObject(obj, NoConditions{})
- return obj, nil
-}
-
-func (s *storageMemory) ComposeObject(bucketName string, objectNames []string, destinationName string, metadata map[string]string, contentType string) (StreamingObject, error) {
- var data []byte
- for _, n := range objectNames {
- obj, err := s.GetObject(bucketName, n)
- if err != nil {
- return StreamingObject{}, err
- }
- objectContent, err := io.ReadAll(obj.Content)
- if err != nil {
- return StreamingObject{}, err
- }
- data = append(data, objectContent...)
- }
-
- var dest Object
- streamingDest, err := s.GetObject(bucketName, destinationName)
- if err != nil {
- dest = Object{
- ObjectAttrs: ObjectAttrs{
- BucketName: bucketName,
- Name: destinationName,
- ContentType: contentType,
- Created: time.Now().String(),
- },
- }
- } else {
- dest, err = streamingDest.BufferedObject()
- if err != nil {
- return StreamingObject{}, err
- }
- }
-
- dest.Content = data
- dest.Crc32c = ""
- dest.Md5Hash = ""
- dest.Etag = ""
- dest.Size = 0
- dest.Metadata = metadata
-
- result, err := s.CreateObject(dest.StreamingObject(), NoConditions{})
- if err != nil {
- return result, err
- }
-
- return result, nil
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata.go
deleted file mode 100644
index 6d9d2313d27dc..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata.go
+++ /dev/null
@@ -1,13 +0,0 @@
-// Copyright 2022 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-type metadataHandler interface {
- write(path string, encoded []byte) error
- read(path string) ([]byte, error)
- remove(path string) error
- isSpecialFile(path string) bool
- rename(pathSrc, pathDst string) error
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata_file.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata_file.go
deleted file mode 100644
index 94cce654a8c69..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata_file.go
+++ /dev/null
@@ -1,34 +0,0 @@
-// Copyright 2022 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import (
- "os"
- "strings"
-)
-
-const metadataSuffix = ".metadata"
-
-type metadataFile struct{}
-
-func (m metadataFile) write(path string, encoded []byte) error {
- return writeFile(path+metadataSuffix, encoded, 0o600)
-}
-
-func (m metadataFile) read(path string) ([]byte, error) {
- return os.ReadFile(path + metadataSuffix)
-}
-
-func (m metadataFile) isSpecialFile(path string) bool {
- return strings.HasSuffix(path, metadataSuffix)
-}
-
-func (m metadataFile) remove(path string) error {
- return os.Remove(path + metadataSuffix)
-}
-
-func (m metadataFile) rename(pathSrc, pathDst string) error {
- return os.Rename(pathSrc+metadataSuffix, pathDst+metadataSuffix)
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata_xattr.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata_xattr.go
deleted file mode 100644
index 9d40580120be6..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/metadata_xattr.go
+++ /dev/null
@@ -1,33 +0,0 @@
-// Copyright 2022 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import (
- "github.com/pkg/xattr"
-)
-
-const xattrKey = "user.metadata"
-
-type metadataXattr struct{}
-
-func (m metadataXattr) write(path string, encoded []byte) error {
- return xattr.Set(path, xattrKey, encoded)
-}
-
-func (m metadataXattr) read(path string) ([]byte, error) {
- return xattr.Get(path, xattrKey)
-}
-
-func (m metadataXattr) isSpecialFile(path string) bool {
- return false
-}
-
-func (m metadataXattr) remove(path string) error {
- return nil
-}
-
-func (m metadataXattr) rename(pathSrc, pathDst string) error {
- return nil
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/object.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/object.go
index 7dc742e9d1d3a..e0ca2b12ec571 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/object.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/object.go
@@ -4,102 +4,16 @@
package backend
-import (
- "bytes"
- "fmt"
- "io"
- "reflect"
-
- "cloud.google.com/go/storage"
-)
-
-// ObjectAttrs represents the meta-data without its contents.
-type ObjectAttrs struct {
- BucketName string `json:"-"`
- Name string `json:"-"`
- Size int64 `json:"-"`
- ContentType string
- ContentEncoding string
- Crc32c string
- Md5Hash string
- Etag string
- ACL []storage.ACLRule
- Metadata map[string]string
- Created string
- Deleted string
- Updated string
- CustomTime string
- Generation int64
-}
-
-// ID is used for comparing objects.
-func (o *ObjectAttrs) ID() string {
- return fmt.Sprintf("%s#%d", o.IDNoGen(), o.Generation)
-}
-
-// IDNoGen does not consider the generation field.
-func (o *ObjectAttrs) IDNoGen() string {
- return fmt.Sprintf("%s/%s", o.BucketName, o.Name)
-}
-
// Object represents the object that is stored within the fake server.
type Object struct {
- ObjectAttrs
- Content []byte
-}
-
-type noopSeekCloser struct {
- io.ReadSeeker
-}
-
-func (n noopSeekCloser) Close() error {
- return nil
-}
-
-func (o Object) StreamingObject() StreamingObject {
- return StreamingObject{
- ObjectAttrs: o.ObjectAttrs,
- Content: noopSeekCloser{bytes.NewReader(o.Content)},
- }
-}
-
-type StreamingObject struct {
- ObjectAttrs
- Content io.ReadSeekCloser
-}
-
-func (o *StreamingObject) Close() error {
- if o != nil && o.Content != nil {
- return o.Content.Close()
- }
- return nil
-}
-
-// Convert this StreamingObject to a (buffered) Object.
-func (o *StreamingObject) BufferedObject() (Object, error) {
- data, err := io.ReadAll(o.Content)
- return Object{
- ObjectAttrs: o.ObjectAttrs,
- Content: data,
- }, err
+ BucketName string `json:"-"`
+ Name string `json:"-"`
+ Content []byte
+ Crc32c string
+ Md5Hash string
}
-func (o *StreamingObject) patch(attrsToUpdate ObjectAttrs) {
- currObjValues := reflect.ValueOf(&(o.ObjectAttrs)).Elem()
- currObjType := currObjValues.Type()
- newObjValues := reflect.ValueOf(attrsToUpdate)
- for i := 0; i < newObjValues.NumField(); i++ {
- if reflect.Value.IsZero(newObjValues.Field(i)) {
- continue
- } else if currObjType.Field(i).Name == "Metadata" {
- if o.Metadata == nil {
- o.Metadata = map[string]string{}
- }
- for k, v := range attrsToUpdate.Metadata {
- o.Metadata[k] = v
- }
- } else {
- currObjValues.Field(i).Set(newObjValues.Field(i))
- }
- }
+// ID is useful for comparing objects
+func (o *Object) ID() string {
+ return o.BucketName + "/" + o.Name
}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go
index da8e8e51e2128..c77583462fdb5 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go
@@ -2,43 +2,15 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Package backend proides the backends used by fake-gcs-server.
package backend
-type Conditions interface {
- ConditionsMet(activeGeneration int64) bool
-}
-
-type NoConditions struct{}
-
-func (NoConditions) ConditionsMet(int64) bool {
- return true
-}
-
-// Storage is the generic interface for implementing the backend storage of the
-// server.
+// Storage is the generic interface for implementing the backend storage of the server
type Storage interface {
- CreateBucket(name string, bucketAttrs BucketAttrs) error
- ListBuckets() ([]Bucket, error)
- GetBucket(name string) (Bucket, error)
- UpdateBucket(name string, attrsToUpdate BucketAttrs) error
- DeleteBucket(name string) error
- CreateObject(obj StreamingObject, conditions Conditions) (StreamingObject, error)
- ListObjects(bucketName string, prefix string, versions bool) ([]ObjectAttrs, error)
- GetObject(bucketName, objectName string) (StreamingObject, error)
- GetObjectWithGeneration(bucketName, objectName string, generation int64) (StreamingObject, error)
+ CreateBucket(name string) error
+ ListBuckets() ([]string, error)
+ GetBucket(name string) error
+ CreateObject(obj Object) error
+ ListObjects(bucketName string) ([]Object, error)
+ GetObject(bucketName, objectName string) (Object, error)
DeleteObject(bucketName, objectName string) error
- PatchObject(bucketName, objectName string, attrsToUpdate ObjectAttrs) (StreamingObject, error)
- UpdateObject(bucketName, objectName string, attrsToUpdate ObjectAttrs) (StreamingObject, error)
- ComposeObject(bucketName string, objectNames []string, destinationName string, metadata map[string]string, contentType string) (StreamingObject, error)
}
-
-type Error string
-
-func (e Error) Error() string { return string(e) }
-
-const (
- BucketNotFound = Error("bucket not found")
- BucketNotEmpty = Error("bucket must be empty prior to deletion")
- PreConditionFailed = Error("Precondition failed")
-)
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_darwin.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_darwin.go
deleted file mode 100644
index cb3998a95ccc6..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_darwin.go
+++ /dev/null
@@ -1,17 +0,0 @@
-// Copyright 2019 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import (
- "os"
- "syscall"
-)
-
-func createTimeFromFileInfo(input os.FileInfo) syscall.Timespec {
- if statT, ok := input.Sys().(*syscall.Stat_t); ok {
- return statT.Ctimespec
- }
- return syscall.Timespec{}
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_linux.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_linux.go
deleted file mode 100644
index 0f959e9b74c6c..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_linux.go
+++ /dev/null
@@ -1,18 +0,0 @@
-// Copyright 2019 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import (
- "os"
- "syscall"
-)
-
-func createTimeFromFileInfo(input os.FileInfo) syscall.Timespec {
- if statT, ok := input.Sys().(*syscall.Stat_t); ok {
- // not true: Ctime is not created time, but not creating a file to persist this metadata, yet...
- return statT.Ctim
- }
- return syscall.Timespec{}
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_windows.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_windows.go
deleted file mode 100644
index 54c7bc9b0badd..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/time_windows.go
+++ /dev/null
@@ -1,18 +0,0 @@
-// Copyright 2019 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import (
- "os"
- "syscall"
-)
-
-func createTimeFromFileInfo(input os.FileInfo) syscall.Timespec {
- if statT, ok := input.Sys().(*syscall.Win32FileAttributeData); ok {
- nsec := statT.CreationTime.Nanoseconds()
- return syscall.NsecToTimespec(nsec)
- }
- return syscall.Timespec{}
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/writefile_unix.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/writefile_unix.go
deleted file mode 100644
index 2e5e510fbc3d4..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/writefile_unix.go
+++ /dev/null
@@ -1,17 +0,0 @@
-// Copyright 2022 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-//go:build !windows
-
-package backend
-
-import (
- "os"
-
- "github.com/google/renameio/v2"
-)
-
-func writeFile(filename string, data []byte, perm os.FileMode) error {
- return renameio.WriteFile(filename, data, perm)
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/writefile_windows.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/writefile_windows.go
deleted file mode 100644
index 2d6600c803024..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/writefile_windows.go
+++ /dev/null
@@ -1,13 +0,0 @@
-// Copyright 2022 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package backend
-
-import (
- "os"
-)
-
-func writeFile(filename string, data []byte, perm os.FileMode) error {
- return os.WriteFile(filename, data, perm)
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/checksum/checksum.go b/vendor/github.com/fsouza/fake-gcs-server/internal/checksum/checksum.go
deleted file mode 100644
index c247336d8e65e..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/checksum/checksum.go
+++ /dev/null
@@ -1,70 +0,0 @@
-// Copyright 2021 Francisco Souza. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package checksum
-
-import (
- "crypto/md5"
- "encoding/base64"
- "hash"
- "hash/crc32"
-)
-
-var crc32cTable = crc32.MakeTable(crc32.Castagnoli)
-
-func crc32cChecksum(content []byte) []byte {
- checksummer := crc32.New(crc32cTable)
- checksummer.Write(content)
- return checksummer.Sum(make([]byte, 0, 4))
-}
-
-func EncodedChecksum(checksum []byte) string {
- return base64.StdEncoding.EncodeToString(checksum)
-}
-
-func EncodedCrc32cChecksum(content []byte) string {
- return EncodedChecksum(crc32cChecksum(content))
-}
-
-func MD5Hash(b []byte) []byte {
- h := md5.New()
- h.Write(b)
- return h.Sum(nil)
-}
-
-func EncodedHash(hash []byte) string {
- return base64.StdEncoding.EncodeToString(hash)
-}
-
-func EncodedMd5Hash(content []byte) string {
- return EncodedHash(MD5Hash(content))
-}
-
-type StreamingHasher struct {
- crc32 hash.Hash32
- md5 hash.Hash
-}
-
-func NewStreamingHasher() *StreamingHasher {
- return &StreamingHasher{
- crc32: crc32.New(crc32cTable),
- md5: md5.New(),
- }
-}
-
-func (s *StreamingHasher) Write(p []byte) (n int, err error) {
- n, err = s.crc32.Write(p)
- if err != nil {
- return n, err
- }
- return s.md5.Write(p)
-}
-
-func (s *StreamingHasher) EncodedCrc32cChecksum() string {
- return EncodedChecksum(s.crc32.Sum(nil))
-}
-
-func (s *StreamingHasher) EncodedMd5Hash() string {
- return EncodedHash(s.md5.Sum(nil))
-}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/notification/event.go b/vendor/github.com/fsouza/fake-gcs-server/internal/notification/event.go
deleted file mode 100644
index f20ac8c87a40a..0000000000000
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/notification/event.go
+++ /dev/null
@@ -1,222 +0,0 @@
-package notification
-
-import (
- "context"
- "encoding/json"
- "fmt"
- "io"
- "strconv"
- "strings"
- "time"
-
- "cloud.google.com/go/pubsub"
- "github.com/fsouza/fake-gcs-server/internal/backend"
-)
-
-// EventType is the type of event to trigger. The descriptions of the events
-// can be found here:
-// https://cloud.google.com/storage/docs/pubsub-notifications#events.
-type EventType string
-
-const (
- // EventFinalize is triggered when an object is added.
- EventFinalize EventType = "OBJECT_FINALIZE"
- // EventDelete is triggered when an object is deleted.
- EventDelete = "OBJECT_DELETE"
- // EventMetadata is triggered when an object's metadata is changed.
- EventMetadata = "OBJECT_METADATA_UPDATE"
- // EventArchive bucket versioning must be enabled. is triggered when an object becomes the non current version
- EventArchive = "OBJECT_ARCHIVE"
-)
-
-// EventNotificationOptions contains flags for events, that if true, will create
-// trigger notifications when they occur.
-type EventNotificationOptions struct {
- Finalize bool
- Delete bool
- MetadataUpdate bool
- Archive bool
-}
-
-// EventManagerOptions determines what events are triggered and where.
-type EventManagerOptions struct {
- // ProjectID is the project ID containing the pubsub topic.
- ProjectID string
- // TopicName is the pubsub topic name to publish events on.
- TopicName string
- // Bucket is the name of the bucket to publish events from.
- Bucket string
- // ObjectPrefix, if not empty, only objects having this prefix will generate
- // trigger events.
- ObjectPrefix string
- // NotifyOn determines what events to trigger.
- NotifyOn EventNotificationOptions
-}
-
-type EventManager interface {
- Trigger(o *backend.StreamingObject, eventType EventType, extraEventAttr map[string]string)
-}
-
-// PubsubEventManager checks if an event should be published.
-type PubsubEventManager struct {
- // publishSynchronously is a flag that if true, events will be published
- // synchronously and not in a goroutine. It is used during tests to prevent
- // race conditions.
- publishSynchronously bool
- // notifyOn determines what events are triggered.
- notifyOn EventNotificationOptions
- // writer is where logs are written to.
- writer io.Writer
- // bucket, if not empty, only objects from this bucker will generate trigger events.
- bucket string
- // objectPrefix, if not empty, only objects having this prefix will generate
- // trigger events.
- objectPrefix string
- // publisher is used to publish events on.
- publisher eventPublisher
-}
-
-func NewPubsubEventManager(options EventManagerOptions, w io.Writer) (*PubsubEventManager, error) {
- manager := &PubsubEventManager{
- writer: w,
- notifyOn: options.NotifyOn,
- bucket: options.Bucket,
- objectPrefix: options.ObjectPrefix,
- }
- if options.ProjectID != "" && options.TopicName != "" {
- ctx := context.Background()
- client, err := pubsub.NewClient(ctx, options.ProjectID)
- if err != nil {
- return nil, fmt.Errorf("error creating pubsub client: %v", err)
- }
- manager.publisher = client.Topic(options.TopicName)
- }
- return manager, nil
-}
-
-// eventPublisher is the interface to publish triggered events.
-type eventPublisher interface {
- Publish(ctx context.Context, msg *pubsub.Message) *pubsub.PublishResult
-}
-
-// Trigger checks if an event should be triggered. If so, it publishes the
-// event to a pubsub queue.
-func (m *PubsubEventManager) Trigger(o *backend.StreamingObject, eventType EventType, extraEventAttr map[string]string) {
- if m.publisher == nil {
- return
- }
- if m.bucket != "" && o.BucketName != m.bucket {
- return
- }
- if m.objectPrefix != "" && !strings.HasPrefix(o.Name, m.objectPrefix) {
- return
- }
- switch eventType {
- case EventFinalize:
- if !m.notifyOn.Finalize {
- return
- }
- case EventDelete:
- if !m.notifyOn.Delete {
- return
- }
- case EventMetadata:
- if !m.notifyOn.MetadataUpdate {
- return
- }
- case EventArchive:
- if !m.notifyOn.Archive {
- return
- }
- }
- eventTime := time.Now().Format(time.RFC3339)
- publishFunc := func() {
- err := m.publish(o, eventType, eventTime, extraEventAttr)
- if m.writer != nil {
- if err != nil {
- fmt.Fprintf(m.writer, "error publishing event: %v", err)
- } else {
- fmt.Fprintf(m.writer, "sent event %s for object %s\n", string(eventType), o.ID())
- }
- }
- }
- if m.publishSynchronously {
- publishFunc()
- } else {
- go publishFunc()
- }
-}
-
-func (m *PubsubEventManager) publish(o *backend.StreamingObject, eventType EventType, eventTime string, extraEventAttr map[string]string) error {
- ctx := context.Background()
- data, attributes, err := generateEvent(o, eventType, eventTime, extraEventAttr)
- if err != nil {
- return err
- }
- if r := m.publisher.Publish(ctx, &pubsub.Message{
- Data: data,
- Attributes: attributes,
- }); r != nil {
- _, err = r.Get(ctx)
- return err
- }
- return nil
-}
-
-// gcsEvent is the payload of a GCS event. Note that all properties are string-quoted.
-// The description of the full object can be found here:
-// https://cloud.google.com/storage/docs/json_api/v1/objects#resource-representations.
-type gcsEvent struct {
- Kind string `json:"kind"`
- ID string `json:"id"`
- Name string `json:"name"`
- Bucket string `json:"bucket"`
- Generation int64 `json:"generation,string,omitempty"`
- ContentType string `json:"contentType"`
- ContentEncoding string `json:"contentEncoding,omitempty"`
- Created string `json:"timeCreated,omitempty"`
- Updated string `json:"updated,omitempty"`
- StorageClass string `json:"storageClass"`
- Size int64 `json:"size,string"`
- MD5Hash string `json:"md5Hash,omitempty"`
- CRC32c string `json:"crc32c,omitempty"`
- MetaData map[string]string `json:"metadata,omitempty"`
-}
-
-func generateEvent(o *backend.StreamingObject, eventType EventType, eventTime string, extraEventAttr map[string]string) ([]byte, map[string]string, error) {
- payload := gcsEvent{
- Kind: "storage#object",
- ID: o.ID(),
- Name: o.Name,
- Bucket: o.BucketName,
- Generation: o.Generation,
- ContentType: o.ContentType,
- ContentEncoding: o.ContentEncoding,
- Created: o.Created,
- Updated: o.Updated,
- StorageClass: "STANDARD",
- Size: o.Size,
- MD5Hash: o.Md5Hash,
- CRC32c: o.Crc32c,
- MetaData: o.Metadata,
- }
- attributes := map[string]string{
- "bucketId": o.BucketName,
- "eventTime": eventTime,
- "eventType": string(eventType),
- "objectGeneration": strconv.FormatInt(o.Generation, 10),
- "objectId": o.Name,
- "payloadFormat": "JSON_API_V1",
- }
- for k, v := range extraEventAttr {
- if _, exists := attributes[k]; exists {
- return nil, nil, fmt.Errorf("cannot overwrite duplicate event attribute %s", k)
- }
- attributes[k] = v
- }
- data, err := json.Marshal(&payload)
- if err != nil {
- return nil, nil, err
- }
- return data, attributes, nil
-}
diff --git a/vendor/github.com/gorilla/handlers/.editorconfig b/vendor/github.com/gorilla/handlers/.editorconfig
deleted file mode 100644
index c6b74c3e0d0c7..0000000000000
--- a/vendor/github.com/gorilla/handlers/.editorconfig
+++ /dev/null
@@ -1,20 +0,0 @@
-; https://editorconfig.org/
-
-root = true
-
-[*]
-insert_final_newline = true
-charset = utf-8
-trim_trailing_whitespace = true
-indent_style = space
-indent_size = 2
-
-[{Makefile,go.mod,go.sum,*.go,.gitmodules}]
-indent_style = tab
-indent_size = 4
-
-[*.md]
-indent_size = 4
-trim_trailing_whitespace = false
-
-eclint_indent_style = unset
\ No newline at end of file
diff --git a/vendor/github.com/gorilla/handlers/.gitignore b/vendor/github.com/gorilla/handlers/.gitignore
deleted file mode 100644
index 577a89e813831..0000000000000
--- a/vendor/github.com/gorilla/handlers/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-# Output of the go test coverage tool
-coverage.coverprofile
diff --git a/vendor/github.com/gorilla/handlers/LICENSE b/vendor/github.com/gorilla/handlers/LICENSE
deleted file mode 100644
index bb9d80bc9b6bc..0000000000000
--- a/vendor/github.com/gorilla/handlers/LICENSE
+++ /dev/null
@@ -1,27 +0,0 @@
-Copyright (c) 2023 The Gorilla Authors. All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
- * Redistributions of source code must retain the above copyright
-notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above
-copyright notice, this list of conditions and the following disclaimer
-in the documentation and/or other materials provided with the
-distribution.
- * Neither the name of Google Inc. nor the names of its
-contributors may be used to endorse or promote products derived from
-this software without specific prior written permission.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/gorilla/handlers/Makefile b/vendor/github.com/gorilla/handlers/Makefile
deleted file mode 100644
index 003b784f7edbf..0000000000000
--- a/vendor/github.com/gorilla/handlers/Makefile
+++ /dev/null
@@ -1,34 +0,0 @@
-GO_LINT=$(shell which golangci-lint 2> /dev/null || echo '')
-GO_LINT_URI=github.com/golangci/golangci-lint/cmd/golangci-lint@latest
-
-GO_SEC=$(shell which gosec 2> /dev/null || echo '')
-GO_SEC_URI=github.com/securego/gosec/v2/cmd/gosec@latest
-
-GO_VULNCHECK=$(shell which govulncheck 2> /dev/null || echo '')
-GO_VULNCHECK_URI=golang.org/x/vuln/cmd/govulncheck@latest
-
-.PHONY: verify
-verify: sec govulncheck lint test
-
-.PHONY: lint
-lint:
- $(if $(GO_LINT), ,go install $(GO_LINT_URI))
- @echo "##### Running golangci-lint #####"
- golangci-lint run -v
-
-.PHONY: sec
-sec:
- $(if $(GO_SEC), ,go install $(GO_SEC_URI))
- @echo "##### Running gosec #####"
- gosec ./...
-
-.PHONY: govulncheck
-govulncheck:
- $(if $(GO_VULNCHECK), ,go install $(GO_VULNCHECK_URI))
- @echo "##### Running govulncheck #####"
- govulncheck ./...
-
-.PHONY: test
-test:
- @echo "##### Running tests #####"
- go test -race -cover -coverprofile=coverage.coverprofile -covermode=atomic -v ./...
diff --git a/vendor/github.com/gorilla/handlers/README.md b/vendor/github.com/gorilla/handlers/README.md
deleted file mode 100644
index 02555b2642c5f..0000000000000
--- a/vendor/github.com/gorilla/handlers/README.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# gorilla/handlers
-
-![Testing](https://github.com/gorilla/handlers/actions/workflows/test.yml/badge.svg)
-[![Codecov](https://codecov.io/github/gorilla/handlers/branch/main/graph/badge.svg)](https://codecov.io/github/gorilla/handlers)
-[![GoDoc](https://godoc.org/github.com/gorilla/handlers?status.svg)](https://godoc.org/github.com/gorilla/handlers)
-[![Sourcegraph](https://sourcegraph.com/github.com/gorilla/handlers/-/badge.svg)](https://sourcegraph.com/github.com/gorilla/handlers?badge)
-
-Package handlers is a collection of handlers (aka "HTTP middleware") for use
-with Go's `net/http` package (or any framework supporting `http.Handler`), including:
-
-* [**LoggingHandler**](https://godoc.org/github.com/gorilla/handlers#LoggingHandler) for logging HTTP requests in the Apache [Common Log
- Format](http://httpd.apache.org/docs/2.2/logs.html#common).
-* [**CombinedLoggingHandler**](https://godoc.org/github.com/gorilla/handlers#CombinedLoggingHandler) for logging HTTP requests in the Apache [Combined Log
- Format](http://httpd.apache.org/docs/2.2/logs.html#combined) commonly used by
- both Apache and nginx.
-* [**CompressHandler**](https://godoc.org/github.com/gorilla/handlers#CompressHandler) for gzipping responses.
-* [**ContentTypeHandler**](https://godoc.org/github.com/gorilla/handlers#ContentTypeHandler) for validating requests against a list of accepted
- content types.
-* [**MethodHandler**](https://godoc.org/github.com/gorilla/handlers#MethodHandler) for matching HTTP methods against handlers in a
- `map[string]http.Handler`
-* [**ProxyHeaders**](https://godoc.org/github.com/gorilla/handlers#ProxyHeaders) for populating `r.RemoteAddr` and `r.URL.Scheme` based on the
- `X-Forwarded-For`, `X-Real-IP`, `X-Forwarded-Proto` and RFC7239 `Forwarded`
- headers when running a Go server behind a HTTP reverse proxy.
-* [**CanonicalHost**](https://godoc.org/github.com/gorilla/handlers#CanonicalHost) for re-directing to the preferred host when handling multiple
- domains (i.e. multiple CNAME aliases).
-* [**RecoveryHandler**](https://godoc.org/github.com/gorilla/handlers#RecoveryHandler) for recovering from unexpected panics.
-
-Other handlers are documented [on the Gorilla
-website](https://www.gorillatoolkit.org/pkg/handlers).
-
-## Example
-
-A simple example using `handlers.LoggingHandler` and `handlers.CompressHandler`:
-
-```go
-import (
- "net/http"
- "github.com/gorilla/handlers"
-)
-
-func main() {
- r := http.NewServeMux()
-
- // Only log requests to our admin dashboard to stdout
- r.Handle("/admin", handlers.LoggingHandler(os.Stdout, http.HandlerFunc(ShowAdminDashboard)))
- r.HandleFunc("/", ShowIndex)
-
- // Wrap our server with our gzip handler to gzip compress all responses.
- http.ListenAndServe(":8000", handlers.CompressHandler(r))
-}
-```
-
-## License
-
-BSD licensed. See the included LICENSE file for details.
-
diff --git a/vendor/github.com/gorilla/handlers/canonical.go b/vendor/github.com/gorilla/handlers/canonical.go
deleted file mode 100644
index 7121f5307bec9..0000000000000
--- a/vendor/github.com/gorilla/handlers/canonical.go
+++ /dev/null
@@ -1,73 +0,0 @@
-package handlers
-
-import (
- "net/http"
- "net/url"
- "strings"
-)
-
-type canonical struct {
- h http.Handler
- domain string
- code int
-}
-
-// CanonicalHost is HTTP middleware that re-directs requests to the canonical
-// domain. It accepts a domain and a status code (e.g. 301 or 302) and
-// re-directs clients to this domain. The existing request path is maintained.
-//
-// Note: If the provided domain is considered invalid by url.Parse or otherwise
-// returns an empty scheme or host, clients are not re-directed.
-//
-// Example:
-//
-// r := mux.NewRouter()
-// canonical := handlers.CanonicalHost("http://www.gorillatoolkit.org", 302)
-// r.HandleFunc("/route", YourHandler)
-//
-// log.Fatal(http.ListenAndServe(":7000", canonical(r)))
-func CanonicalHost(domain string, code int) func(h http.Handler) http.Handler {
- fn := func(h http.Handler) http.Handler {
- return canonical{h, domain, code}
- }
-
- return fn
-}
-
-func (c canonical) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- dest, err := url.Parse(c.domain)
- if err != nil {
- // Call the next handler if the provided domain fails to parse.
- c.h.ServeHTTP(w, r)
- return
- }
-
- if dest.Scheme == "" || dest.Host == "" {
- // Call the next handler if the scheme or host are empty.
- // Note that url.Parse won't fail on in this case.
- c.h.ServeHTTP(w, r)
- return
- }
-
- if !strings.EqualFold(cleanHost(r.Host), dest.Host) {
- // Re-build the destination URL
- dest := dest.Scheme + "://" + dest.Host + r.URL.Path
- if r.URL.RawQuery != "" {
- dest += "?" + r.URL.RawQuery
- }
- http.Redirect(w, r, dest, c.code)
- return
- }
-
- c.h.ServeHTTP(w, r)
-}
-
-// cleanHost cleans invalid Host headers by stripping anything after '/' or ' '.
-// This is backported from Go 1.5 (in response to issue #11206) and attempts to
-// mitigate malformed Host headers that do not match the format in RFC7230.
-func cleanHost(in string) string {
- if i := strings.IndexAny(in, " /"); i != -1 {
- return in[:i]
- }
- return in
-}
diff --git a/vendor/github.com/gorilla/handlers/compress.go b/vendor/github.com/gorilla/handlers/compress.go
deleted file mode 100644
index d6f589503b5ea..0000000000000
--- a/vendor/github.com/gorilla/handlers/compress.go
+++ /dev/null
@@ -1,143 +0,0 @@
-// Copyright 2013 The Gorilla Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package handlers
-
-import (
- "compress/flate"
- "compress/gzip"
- "io"
- "net/http"
- "strings"
-
- "github.com/felixge/httpsnoop"
-)
-
-const acceptEncoding string = "Accept-Encoding"
-
-type compressResponseWriter struct {
- compressor io.Writer
- w http.ResponseWriter
-}
-
-func (cw *compressResponseWriter) WriteHeader(c int) {
- cw.w.Header().Del("Content-Length")
- cw.w.WriteHeader(c)
-}
-
-func (cw *compressResponseWriter) Write(b []byte) (int, error) {
- h := cw.w.Header()
- if h.Get("Content-Type") == "" {
- h.Set("Content-Type", http.DetectContentType(b))
- }
- h.Del("Content-Length")
-
- return cw.compressor.Write(b)
-}
-
-func (cw *compressResponseWriter) ReadFrom(r io.Reader) (int64, error) {
- return io.Copy(cw.compressor, r)
-}
-
-type flusher interface {
- Flush() error
-}
-
-func (cw *compressResponseWriter) Flush() {
- // Flush compressed data if compressor supports it.
- if f, ok := cw.compressor.(flusher); ok {
- _ = f.Flush()
- }
- // Flush HTTP response.
- if f, ok := cw.w.(http.Flusher); ok {
- f.Flush()
- }
-}
-
-// CompressHandler gzip compresses HTTP responses for clients that support it
-// via the 'Accept-Encoding' header.
-//
-// Compressing TLS traffic may leak the page contents to an attacker if the
-// page contains user input: http://security.stackexchange.com/a/102015/12208
-func CompressHandler(h http.Handler) http.Handler {
- return CompressHandlerLevel(h, gzip.DefaultCompression)
-}
-
-// CompressHandlerLevel gzip compresses HTTP responses with specified compression level
-// for clients that support it via the 'Accept-Encoding' header.
-//
-// The compression level should be gzip.DefaultCompression, gzip.NoCompression,
-// or any integer value between gzip.BestSpeed and gzip.BestCompression inclusive.
-// gzip.DefaultCompression is used in case of invalid compression level.
-func CompressHandlerLevel(h http.Handler, level int) http.Handler {
- if level < gzip.DefaultCompression || level > gzip.BestCompression {
- level = gzip.DefaultCompression
- }
-
- const (
- gzipEncoding = "gzip"
- flateEncoding = "deflate"
- )
-
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- // detect what encoding to use
- var encoding string
- for _, curEnc := range strings.Split(r.Header.Get(acceptEncoding), ",") {
- curEnc = strings.TrimSpace(curEnc)
- if curEnc == gzipEncoding || curEnc == flateEncoding {
- encoding = curEnc
- break
- }
- }
-
- // always add Accept-Encoding to Vary to prevent intermediate caches corruption
- w.Header().Add("Vary", acceptEncoding)
-
- // if we weren't able to identify an encoding we're familiar with, pass on the
- // request to the handler and return
- if encoding == "" {
- h.ServeHTTP(w, r)
- return
- }
-
- if r.Header.Get("Upgrade") != "" {
- h.ServeHTTP(w, r)
- return
- }
-
- // wrap the ResponseWriter with the writer for the chosen encoding
- var encWriter io.WriteCloser
- if encoding == gzipEncoding {
- encWriter, _ = gzip.NewWriterLevel(w, level)
- } else if encoding == flateEncoding {
- encWriter, _ = flate.NewWriter(w, level)
- }
- defer encWriter.Close()
-
- w.Header().Set("Content-Encoding", encoding)
- r.Header.Del(acceptEncoding)
-
- cw := &compressResponseWriter{
- w: w,
- compressor: encWriter,
- }
-
- w = httpsnoop.Wrap(w, httpsnoop.Hooks{
- Write: func(httpsnoop.WriteFunc) httpsnoop.WriteFunc {
- return cw.Write
- },
- WriteHeader: func(httpsnoop.WriteHeaderFunc) httpsnoop.WriteHeaderFunc {
- return cw.WriteHeader
- },
- Flush: func(httpsnoop.FlushFunc) httpsnoop.FlushFunc {
- return cw.Flush
- },
- ReadFrom: func(rff httpsnoop.ReadFromFunc) httpsnoop.ReadFromFunc {
- return cw.ReadFrom
- },
- })
-
- h.ServeHTTP(w, r)
- })
-}
diff --git a/vendor/github.com/gorilla/handlers/cors.go b/vendor/github.com/gorilla/handlers/cors.go
deleted file mode 100644
index 8af9c096e5e40..0000000000000
--- a/vendor/github.com/gorilla/handlers/cors.go
+++ /dev/null
@@ -1,352 +0,0 @@
-package handlers
-
-import (
- "net/http"
- "strconv"
- "strings"
-)
-
-// CORSOption represents a functional option for configuring the CORS middleware.
-type CORSOption func(*cors) error
-
-type cors struct {
- h http.Handler
- allowedHeaders []string
- allowedMethods []string
- allowedOrigins []string
- allowedOriginValidator OriginValidator
- exposedHeaders []string
- maxAge int
- ignoreOptions bool
- allowCredentials bool
- optionStatusCode int
-}
-
-// OriginValidator takes an origin string and returns whether or not that origin is allowed.
-type OriginValidator func(string) bool
-
-var (
- defaultCorsOptionStatusCode = http.StatusOK
- defaultCorsMethods = []string{http.MethodGet, http.MethodHead, http.MethodPost}
- defaultCorsHeaders = []string{"Accept", "Accept-Language", "Content-Language", "Origin"}
- // (WebKit/Safari v9 sends the Origin header by default in AJAX requests).
-)
-
-const (
- corsOptionMethod string = http.MethodOptions
- corsAllowOriginHeader string = "Access-Control-Allow-Origin"
- corsExposeHeadersHeader string = "Access-Control-Expose-Headers"
- corsMaxAgeHeader string = "Access-Control-Max-Age"
- corsAllowMethodsHeader string = "Access-Control-Allow-Methods"
- corsAllowHeadersHeader string = "Access-Control-Allow-Headers"
- corsAllowCredentialsHeader string = "Access-Control-Allow-Credentials"
- corsRequestMethodHeader string = "Access-Control-Request-Method"
- corsRequestHeadersHeader string = "Access-Control-Request-Headers"
- corsOriginHeader string = "Origin"
- corsVaryHeader string = "Vary"
- corsOriginMatchAll string = "*"
-)
-
-func (ch *cors) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- origin := r.Header.Get(corsOriginHeader)
- if !ch.isOriginAllowed(origin) {
- if r.Method != corsOptionMethod || ch.ignoreOptions {
- ch.h.ServeHTTP(w, r)
- }
-
- return
- }
-
- if r.Method == corsOptionMethod {
- if ch.ignoreOptions {
- ch.h.ServeHTTP(w, r)
- return
- }
-
- if _, ok := r.Header[corsRequestMethodHeader]; !ok {
- w.WriteHeader(http.StatusBadRequest)
- return
- }
-
- method := r.Header.Get(corsRequestMethodHeader)
- if !ch.isMatch(method, ch.allowedMethods) {
- w.WriteHeader(http.StatusMethodNotAllowed)
- return
- }
-
- requestHeaders := strings.Split(r.Header.Get(corsRequestHeadersHeader), ",")
- allowedHeaders := []string{}
- for _, v := range requestHeaders {
- canonicalHeader := http.CanonicalHeaderKey(strings.TrimSpace(v))
- if canonicalHeader == "" || ch.isMatch(canonicalHeader, defaultCorsHeaders) {
- continue
- }
-
- if !ch.isMatch(canonicalHeader, ch.allowedHeaders) {
- w.WriteHeader(http.StatusForbidden)
- return
- }
-
- allowedHeaders = append(allowedHeaders, canonicalHeader)
- }
-
- if len(allowedHeaders) > 0 {
- w.Header().Set(corsAllowHeadersHeader, strings.Join(allowedHeaders, ","))
- }
-
- if ch.maxAge > 0 {
- w.Header().Set(corsMaxAgeHeader, strconv.Itoa(ch.maxAge))
- }
-
- if !ch.isMatch(method, defaultCorsMethods) {
- w.Header().Set(corsAllowMethodsHeader, method)
- }
- } else if len(ch.exposedHeaders) > 0 {
- w.Header().Set(corsExposeHeadersHeader, strings.Join(ch.exposedHeaders, ","))
- }
-
- if ch.allowCredentials {
- w.Header().Set(corsAllowCredentialsHeader, "true")
- }
-
- if len(ch.allowedOrigins) > 1 {
- w.Header().Set(corsVaryHeader, corsOriginHeader)
- }
-
- returnOrigin := origin
- if ch.allowedOriginValidator == nil && len(ch.allowedOrigins) == 0 {
- returnOrigin = "*"
- } else {
- for _, o := range ch.allowedOrigins {
- // A configuration of * is different than explicitly setting an allowed
- // origin. Returning arbitrary origin headers in an access control allow
- // origin header is unsafe and is not required by any use case.
- if o == corsOriginMatchAll {
- returnOrigin = "*"
- break
- }
- }
- }
- w.Header().Set(corsAllowOriginHeader, returnOrigin)
-
- if r.Method == corsOptionMethod {
- w.WriteHeader(ch.optionStatusCode)
- return
- }
- ch.h.ServeHTTP(w, r)
-}
-
-// CORS provides Cross-Origin Resource Sharing middleware.
-// Example:
-//
-// import (
-// "net/http"
-//
-// "github.com/gorilla/handlers"
-// "github.com/gorilla/mux"
-// )
-//
-// func main() {
-// r := mux.NewRouter()
-// r.HandleFunc("/users", UserEndpoint)
-// r.HandleFunc("/projects", ProjectEndpoint)
-//
-// // Apply the CORS middleware to our top-level router, with the defaults.
-// http.ListenAndServe(":8000", handlers.CORS()(r))
-// }
-func CORS(opts ...CORSOption) func(http.Handler) http.Handler {
- return func(h http.Handler) http.Handler {
- ch := parseCORSOptions(opts...)
- ch.h = h
- return ch
- }
-}
-
-func parseCORSOptions(opts ...CORSOption) *cors {
- ch := &cors{
- allowedMethods: defaultCorsMethods,
- allowedHeaders: defaultCorsHeaders,
- allowedOrigins: []string{},
- optionStatusCode: defaultCorsOptionStatusCode,
- }
-
- for _, option := range opts {
- _ = option(ch) //TODO: @bharat-rajani, return error to caller if not nil?
- }
-
- return ch
-}
-
-//
-// Functional options for configuring CORS.
-//
-
-// AllowedHeaders adds the provided headers to the list of allowed headers in a
-// CORS request.
-// This is an append operation so the headers Accept, Accept-Language,
-// and Content-Language are always allowed.
-// Content-Type must be explicitly declared if accepting Content-Types other than
-// application/x-www-form-urlencoded, multipart/form-data, or text/plain.
-func AllowedHeaders(headers []string) CORSOption {
- return func(ch *cors) error {
- for _, v := range headers {
- normalizedHeader := http.CanonicalHeaderKey(strings.TrimSpace(v))
- if normalizedHeader == "" {
- continue
- }
-
- if !ch.isMatch(normalizedHeader, ch.allowedHeaders) {
- ch.allowedHeaders = append(ch.allowedHeaders, normalizedHeader)
- }
- }
-
- return nil
- }
-}
-
-// AllowedMethods can be used to explicitly allow methods in the
-// Access-Control-Allow-Methods header.
-// This is a replacement operation so you must also
-// pass GET, HEAD, and POST if you wish to support those methods.
-func AllowedMethods(methods []string) CORSOption {
- return func(ch *cors) error {
- ch.allowedMethods = []string{}
- for _, v := range methods {
- normalizedMethod := strings.ToUpper(strings.TrimSpace(v))
- if normalizedMethod == "" {
- continue
- }
-
- if !ch.isMatch(normalizedMethod, ch.allowedMethods) {
- ch.allowedMethods = append(ch.allowedMethods, normalizedMethod)
- }
- }
-
- return nil
- }
-}
-
-// AllowedOrigins sets the allowed origins for CORS requests, as used in the
-// 'Allow-Access-Control-Origin' HTTP header.
-// Note: Passing in a []string{"*"} will allow any domain.
-func AllowedOrigins(origins []string) CORSOption {
- return func(ch *cors) error {
- for _, v := range origins {
- if v == corsOriginMatchAll {
- ch.allowedOrigins = []string{corsOriginMatchAll}
- return nil
- }
- }
-
- ch.allowedOrigins = origins
- return nil
- }
-}
-
-// AllowedOriginValidator sets a function for evaluating allowed origins in CORS requests, represented by the
-// 'Allow-Access-Control-Origin' HTTP header.
-func AllowedOriginValidator(fn OriginValidator) CORSOption {
- return func(ch *cors) error {
- ch.allowedOriginValidator = fn
- return nil
- }
-}
-
-// OptionStatusCode sets a custom status code on the OPTIONS requests.
-// Default behaviour sets it to 200 to reflect best practices. This is option is not mandatory
-// and can be used if you need a custom status code (i.e 204).
-//
-// More informations on the spec:
-// https://fetch.spec.whatwg.org/#cors-preflight-fetch
-func OptionStatusCode(code int) CORSOption {
- return func(ch *cors) error {
- ch.optionStatusCode = code
- return nil
- }
-}
-
-// ExposedHeaders can be used to specify headers that are available
-// and will not be stripped out by the user-agent.
-func ExposedHeaders(headers []string) CORSOption {
- return func(ch *cors) error {
- ch.exposedHeaders = []string{}
- for _, v := range headers {
- normalizedHeader := http.CanonicalHeaderKey(strings.TrimSpace(v))
- if normalizedHeader == "" {
- continue
- }
-
- if !ch.isMatch(normalizedHeader, ch.exposedHeaders) {
- ch.exposedHeaders = append(ch.exposedHeaders, normalizedHeader)
- }
- }
-
- return nil
- }
-}
-
-// MaxAge determines the maximum age (in seconds) between preflight requests. A
-// maximum of 10 minutes is allowed. An age above this value will default to 10
-// minutes.
-func MaxAge(age int) CORSOption {
- return func(ch *cors) error {
- // Maximum of 10 minutes.
- if age > 600 {
- age = 600
- }
-
- ch.maxAge = age
- return nil
- }
-}
-
-// IgnoreOptions causes the CORS middleware to ignore OPTIONS requests, instead
-// passing them through to the next handler. This is useful when your application
-// or framework has a pre-existing mechanism for responding to OPTIONS requests.
-func IgnoreOptions() CORSOption {
- return func(ch *cors) error {
- ch.ignoreOptions = true
- return nil
- }
-}
-
-// AllowCredentials can be used to specify that the user agent may pass
-// authentication details along with the request.
-func AllowCredentials() CORSOption {
- return func(ch *cors) error {
- ch.allowCredentials = true
- return nil
- }
-}
-
-func (ch *cors) isOriginAllowed(origin string) bool {
- if origin == "" {
- return false
- }
-
- if ch.allowedOriginValidator != nil {
- return ch.allowedOriginValidator(origin)
- }
-
- if len(ch.allowedOrigins) == 0 {
- return true
- }
-
- for _, allowedOrigin := range ch.allowedOrigins {
- if allowedOrigin == origin || allowedOrigin == corsOriginMatchAll {
- return true
- }
- }
-
- return false
-}
-
-func (ch *cors) isMatch(needle string, haystack []string) bool {
- for _, v := range haystack {
- if v == needle {
- return true
- }
- }
-
- return false
-}
diff --git a/vendor/github.com/gorilla/handlers/doc.go b/vendor/github.com/gorilla/handlers/doc.go
deleted file mode 100644
index 944e5a8ae9982..0000000000000
--- a/vendor/github.com/gorilla/handlers/doc.go
+++ /dev/null
@@ -1,9 +0,0 @@
-/*
-Package handlers is a collection of handlers (aka "HTTP middleware") for use
-with Go's net/http package (or any framework supporting http.Handler).
-
-The package includes handlers for logging in standardised formats, compressing
-HTTP responses, validating content types and other useful tools for manipulating
-requests and responses.
-*/
-package handlers
diff --git a/vendor/github.com/gorilla/handlers/handlers.go b/vendor/github.com/gorilla/handlers/handlers.go
deleted file mode 100644
index 9b92fce3333e7..0000000000000
--- a/vendor/github.com/gorilla/handlers/handlers.go
+++ /dev/null
@@ -1,150 +0,0 @@
-// Copyright 2013 The Gorilla Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package handlers
-
-import (
- "bufio"
- "fmt"
- "net"
- "net/http"
- "sort"
- "strings"
-)
-
-// MethodHandler is an http.Handler that dispatches to a handler whose key in the
-// MethodHandler's map matches the name of the HTTP request's method, eg: GET
-//
-// If the request's method is OPTIONS and OPTIONS is not a key in the map then
-// the handler responds with a status of 200 and sets the Allow header to a
-// comma-separated list of available methods.
-//
-// If the request's method doesn't match any of its keys the handler responds
-// with a status of HTTP 405 "Method Not Allowed" and sets the Allow header to a
-// comma-separated list of available methods.
-type MethodHandler map[string]http.Handler
-
-func (h MethodHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
- if handler, ok := h[req.Method]; ok {
- handler.ServeHTTP(w, req)
- } else {
- allow := []string{}
- for k := range h {
- allow = append(allow, k)
- }
- sort.Strings(allow)
- w.Header().Set("Allow", strings.Join(allow, ", "))
- if req.Method == http.MethodOptions {
- w.WriteHeader(http.StatusOK)
- } else {
- http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
- }
- }
-}
-
-// responseLogger is wrapper of http.ResponseWriter that keeps track of its HTTP
-// status code and body size.
-type responseLogger struct {
- w http.ResponseWriter
- status int
- size int
-}
-
-func (l *responseLogger) Write(b []byte) (int, error) {
- size, err := l.w.Write(b)
- l.size += size
- return size, err
-}
-
-func (l *responseLogger) WriteHeader(s int) {
- l.w.WriteHeader(s)
- l.status = s
-}
-
-func (l *responseLogger) Status() int {
- return l.status
-}
-
-func (l *responseLogger) Size() int {
- return l.size
-}
-
-func (l *responseLogger) Hijack() (net.Conn, *bufio.ReadWriter, error) {
- conn, rw, err := l.w.(http.Hijacker).Hijack()
- if err == nil && l.status == 0 {
- // The status will be StatusSwitchingProtocols if there was no error and
- // WriteHeader has not been called yet
- l.status = http.StatusSwitchingProtocols
- }
- return conn, rw, err
-}
-
-// isContentType validates the Content-Type header matches the supplied
-// contentType. That is, its type and subtype match.
-func isContentType(h http.Header, contentType string) bool {
- ct := h.Get("Content-Type")
- if i := strings.IndexRune(ct, ';'); i != -1 {
- ct = ct[0:i]
- }
- return ct == contentType
-}
-
-// ContentTypeHandler wraps and returns a http.Handler, validating the request
-// content type is compatible with the contentTypes list. It writes a HTTP 415
-// error if that fails.
-//
-// Only PUT, POST, and PATCH requests are considered.
-func ContentTypeHandler(h http.Handler, contentTypes ...string) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- if !(r.Method == http.MethodPut || r.Method == http.MethodPost || r.Method == http.MethodPatch) {
- h.ServeHTTP(w, r)
- return
- }
-
- for _, ct := range contentTypes {
- if isContentType(r.Header, ct) {
- h.ServeHTTP(w, r)
- return
- }
- }
- http.Error(w, fmt.Sprintf("Unsupported content type %q; expected one of %q",
- r.Header.Get("Content-Type"),
- contentTypes),
- http.StatusUnsupportedMediaType)
- })
-}
-
-const (
- // HTTPMethodOverrideHeader is a commonly used
- // http header to override a request method.
- HTTPMethodOverrideHeader = "X-HTTP-Method-Override"
- // HTTPMethodOverrideFormKey is a commonly used
- // HTML form key to override a request method.
- HTTPMethodOverrideFormKey = "_method"
-)
-
-// HTTPMethodOverrideHandler wraps and returns a http.Handler which checks for
-// the X-HTTP-Method-Override header or the _method form key, and overrides (if
-// valid) request.Method with its value.
-//
-// This is especially useful for HTTP clients that don't support many http verbs.
-// It isn't secure to override e.g a GET to a POST, so only POST requests are
-// considered. Likewise, the override method can only be a "write" method: PUT,
-// PATCH or DELETE.
-//
-// Form method takes precedence over header method.
-func HTTPMethodOverrideHandler(h http.Handler) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- if r.Method == http.MethodPost {
- om := r.FormValue(HTTPMethodOverrideFormKey)
- if om == "" {
- om = r.Header.Get(HTTPMethodOverrideHeader)
- }
- if om == http.MethodPut || om == http.MethodPatch || om == http.MethodDelete {
- r.Method = om
- }
- }
- h.ServeHTTP(w, r)
- })
-}
diff --git a/vendor/github.com/gorilla/handlers/logging.go b/vendor/github.com/gorilla/handlers/logging.go
deleted file mode 100644
index 2badb6fbff844..0000000000000
--- a/vendor/github.com/gorilla/handlers/logging.go
+++ /dev/null
@@ -1,246 +0,0 @@
-// Copyright 2013 The Gorilla Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package handlers
-
-import (
- "io"
- "net"
- "net/http"
- "net/url"
- "strconv"
- "time"
- "unicode/utf8"
-
- "github.com/felixge/httpsnoop"
-)
-
-// Logging
-
-// LogFormatterParams is the structure any formatter will be handed when time to log comes.
-type LogFormatterParams struct {
- Request *http.Request
- URL url.URL
- TimeStamp time.Time
- StatusCode int
- Size int
-}
-
-// LogFormatter gives the signature of the formatter function passed to CustomLoggingHandler.
-type LogFormatter func(writer io.Writer, params LogFormatterParams)
-
-// loggingHandler is the http.Handler implementation for LoggingHandlerTo and its
-// friends
-
-type loggingHandler struct {
- writer io.Writer
- handler http.Handler
- formatter LogFormatter
-}
-
-func (h loggingHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
- t := time.Now()
- logger, w := makeLogger(w)
- url := *req.URL
-
- h.handler.ServeHTTP(w, req)
- if req.MultipartForm != nil {
- err := req.MultipartForm.RemoveAll()
- if err != nil {
- return
- }
- }
-
- params := LogFormatterParams{
- Request: req,
- URL: url,
- TimeStamp: t,
- StatusCode: logger.Status(),
- Size: logger.Size(),
- }
-
- h.formatter(h.writer, params)
-}
-
-func makeLogger(w http.ResponseWriter) (*responseLogger, http.ResponseWriter) {
- logger := &responseLogger{w: w, status: http.StatusOK}
- return logger, httpsnoop.Wrap(w, httpsnoop.Hooks{
- Write: func(httpsnoop.WriteFunc) httpsnoop.WriteFunc {
- return logger.Write
- },
- WriteHeader: func(httpsnoop.WriteHeaderFunc) httpsnoop.WriteHeaderFunc {
- return logger.WriteHeader
- },
- })
-}
-
-const lowerhex = "0123456789abcdef"
-
-func appendQuoted(buf []byte, s string) []byte {
- var runeTmp [utf8.UTFMax]byte
- for width := 0; len(s) > 0; s = s[width:] { //nolint: wastedassign //TODO: why width starts from 0and reassigned as 1
- r := rune(s[0])
- width = 1
- if r >= utf8.RuneSelf {
- r, width = utf8.DecodeRuneInString(s)
- }
- if width == 1 && r == utf8.RuneError {
- buf = append(buf, `\x`...)
- buf = append(buf, lowerhex[s[0]>>4])
- buf = append(buf, lowerhex[s[0]&0xF])
- continue
- }
- if r == rune('"') || r == '\\' { // always backslashed
- buf = append(buf, '\\')
- buf = append(buf, byte(r))
- continue
- }
- if strconv.IsPrint(r) {
- n := utf8.EncodeRune(runeTmp[:], r)
- buf = append(buf, runeTmp[:n]...)
- continue
- }
- switch r {
- case '\a':
- buf = append(buf, `\a`...)
- case '\b':
- buf = append(buf, `\b`...)
- case '\f':
- buf = append(buf, `\f`...)
- case '\n':
- buf = append(buf, `\n`...)
- case '\r':
- buf = append(buf, `\r`...)
- case '\t':
- buf = append(buf, `\t`...)
- case '\v':
- buf = append(buf, `\v`...)
- default:
- switch {
- case r < ' ':
- buf = append(buf, `\x`...)
- buf = append(buf, lowerhex[s[0]>>4])
- buf = append(buf, lowerhex[s[0]&0xF])
- case r > utf8.MaxRune:
- r = 0xFFFD
- fallthrough
- case r < 0x10000:
- buf = append(buf, `\u`...)
- for s := 12; s >= 0; s -= 4 {
- buf = append(buf, lowerhex[r>>uint(s)&0xF])
- }
- default:
- buf = append(buf, `\U`...)
- for s := 28; s >= 0; s -= 4 {
- buf = append(buf, lowerhex[r>>uint(s)&0xF])
- }
- }
- }
- }
- return buf
-}
-
-// buildCommonLogLine builds a log entry for req in Apache Common Log Format.
-// ts is the timestamp with which the entry should be logged.
-// status and size are used to provide the response HTTP status and size.
-func buildCommonLogLine(req *http.Request, url url.URL, ts time.Time, status int, size int) []byte {
- username := "-"
- if url.User != nil {
- if name := url.User.Username(); name != "" {
- username = name
- }
- }
-
- host, _, err := net.SplitHostPort(req.RemoteAddr)
- if err != nil {
- host = req.RemoteAddr
- }
-
- uri := req.RequestURI
-
- // Requests using the CONNECT method over HTTP/2.0 must use
- // the authority field (aka r.Host) to identify the target.
- // Refer: https://httpwg.github.io/specs/rfc7540.html#CONNECT
- if req.ProtoMajor == 2 && req.Method == "CONNECT" {
- uri = req.Host
- }
- if uri == "" {
- uri = url.RequestURI()
- }
-
- buf := make([]byte, 0, 3*(len(host)+len(username)+len(req.Method)+len(uri)+len(req.Proto)+50)/2)
- buf = append(buf, host...)
- buf = append(buf, " - "...)
- buf = append(buf, username...)
- buf = append(buf, " ["...)
- buf = append(buf, ts.Format("02/Jan/2006:15:04:05 -0700")...)
- buf = append(buf, `] "`...)
- buf = append(buf, req.Method...)
- buf = append(buf, " "...)
- buf = appendQuoted(buf, uri)
- buf = append(buf, " "...)
- buf = append(buf, req.Proto...)
- buf = append(buf, `" `...)
- buf = append(buf, strconv.Itoa(status)...)
- buf = append(buf, " "...)
- buf = append(buf, strconv.Itoa(size)...)
- return buf
-}
-
-// writeLog writes a log entry for req to w in Apache Common Log Format.
-// ts is the timestamp with which the entry should be logged.
-// status and size are used to provide the response HTTP status and size.
-func writeLog(writer io.Writer, params LogFormatterParams) {
- buf := buildCommonLogLine(params.Request, params.URL, params.TimeStamp, params.StatusCode, params.Size)
- buf = append(buf, '\n')
- _, _ = writer.Write(buf)
-}
-
-// writeCombinedLog writes a log entry for req to w in Apache Combined Log Format.
-// ts is the timestamp with which the entry should be logged.
-// status and size are used to provide the response HTTP status and size.
-func writeCombinedLog(writer io.Writer, params LogFormatterParams) {
- buf := buildCommonLogLine(params.Request, params.URL, params.TimeStamp, params.StatusCode, params.Size)
- buf = append(buf, ` "`...)
- buf = appendQuoted(buf, params.Request.Referer())
- buf = append(buf, `" "`...)
- buf = appendQuoted(buf, params.Request.UserAgent())
- buf = append(buf, '"', '\n')
- _, _ = writer.Write(buf)
-}
-
-// CombinedLoggingHandler return a http.Handler that wraps h and logs requests to out in
-// Apache Combined Log Format.
-//
-// See http://httpd.apache.org/docs/2.2/logs.html#combined for a description of this format.
-//
-// LoggingHandler always sets the ident field of the log to -.
-func CombinedLoggingHandler(out io.Writer, h http.Handler) http.Handler {
- return loggingHandler{out, h, writeCombinedLog}
-}
-
-// LoggingHandler return a http.Handler that wraps h and logs requests to out in
-// Apache Common Log Format (CLF).
-//
-// See http://httpd.apache.org/docs/2.2/logs.html#common for a description of this format.
-//
-// LoggingHandler always sets the ident field of the log to -
-//
-// Example:
-//
-// r := mux.NewRouter()
-// r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
-// w.Write([]byte("This is a catch-all route"))
-// })
-// loggedRouter := handlers.LoggingHandler(os.Stdout, r)
-// http.ListenAndServe(":1123", loggedRouter)
-func LoggingHandler(out io.Writer, h http.Handler) http.Handler {
- return loggingHandler{out, h, writeLog}
-}
-
-// CustomLoggingHandler provides a way to supply a custom log formatter
-// while taking advantage of the mechanisms in this package.
-func CustomLoggingHandler(out io.Writer, h http.Handler, f LogFormatter) http.Handler {
- return loggingHandler{out, h, f}
-}
diff --git a/vendor/github.com/gorilla/handlers/proxy_headers.go b/vendor/github.com/gorilla/handlers/proxy_headers.go
deleted file mode 100644
index 281d753e95a28..0000000000000
--- a/vendor/github.com/gorilla/handlers/proxy_headers.go
+++ /dev/null
@@ -1,120 +0,0 @@
-package handlers
-
-import (
- "net/http"
- "regexp"
- "strings"
-)
-
-var (
- // De-facto standard header keys.
- xForwardedFor = http.CanonicalHeaderKey("X-Forwarded-For")
- xForwardedHost = http.CanonicalHeaderKey("X-Forwarded-Host")
- xForwardedProto = http.CanonicalHeaderKey("X-Forwarded-Proto")
- xForwardedScheme = http.CanonicalHeaderKey("X-Forwarded-Scheme")
- xRealIP = http.CanonicalHeaderKey("X-Real-IP")
-)
-
-var (
- // RFC7239 defines a new "Forwarded: " header designed to replace the
- // existing use of X-Forwarded-* headers.
- // e.g. Forwarded: for=192.0.2.60;proto=https;by=203.0.113.43.
- forwarded = http.CanonicalHeaderKey("Forwarded")
- // Allows for a sub-match of the first value after 'for=' to the next
- // comma, semi-colon or space. The match is case-insensitive.
- forRegex = regexp.MustCompile(`(?i)(?:for=)([^(;|,| )]+)`)
- // Allows for a sub-match for the first instance of scheme (http|https)
- // prefixed by 'proto='. The match is case-insensitive.
- protoRegex = regexp.MustCompile(`(?i)(?:proto=)(https|http)`)
-)
-
-// ProxyHeaders inspects common reverse proxy headers and sets the corresponding
-// fields in the HTTP request struct. These are X-Forwarded-For and X-Real-IP
-// for the remote (client) IP address, X-Forwarded-Proto or X-Forwarded-Scheme
-// for the scheme (http|https), X-Forwarded-Host for the host and the RFC7239
-// Forwarded header, which may include both client IPs and schemes.
-//
-// NOTE: This middleware should only be used when behind a reverse
-// proxy like nginx, HAProxy or Apache. Reverse proxies that don't (or are
-// configured not to) strip these headers from client requests, or where these
-// headers are accepted "as is" from a remote client (e.g. when Go is not behind
-// a proxy), can manifest as a vulnerability if your application uses these
-// headers for validating the 'trustworthiness' of a request.
-func ProxyHeaders(h http.Handler) http.Handler {
- fn := func(w http.ResponseWriter, r *http.Request) {
- // Set the remote IP with the value passed from the proxy.
- if fwd := getIP(r); fwd != "" {
- r.RemoteAddr = fwd
- }
-
- // Set the scheme (proto) with the value passed from the proxy.
- if scheme := getScheme(r); scheme != "" {
- r.URL.Scheme = scheme
- }
- // Set the host with the value passed by the proxy
- if r.Header.Get(xForwardedHost) != "" {
- r.Host = r.Header.Get(xForwardedHost)
- }
- // Call the next handler in the chain.
- h.ServeHTTP(w, r)
- }
-
- return http.HandlerFunc(fn)
-}
-
-// getIP retrieves the IP from the X-Forwarded-For, X-Real-IP and RFC7239
-// Forwarded headers (in that order).
-func getIP(r *http.Request) string {
- var addr string
-
- switch {
- case r.Header.Get(xForwardedFor) != "":
- fwd := r.Header.Get(xForwardedFor)
- // Only grab the first (client) address. Note that '192.168.0.1,
- // 10.1.1.1' is a valid key for X-Forwarded-For where addresses after
- // the first may represent forwarding proxies earlier in the chain.
- s := strings.Index(fwd, ", ")
- if s == -1 {
- s = len(fwd)
- }
- addr = fwd[:s]
- case r.Header.Get(xRealIP) != "":
- addr = r.Header.Get(xRealIP)
- case r.Header.Get(forwarded) != "":
- // match should contain at least two elements if the protocol was
- // specified in the Forwarded header. The first element will always be
- // the 'for=' capture, which we ignore. In the case of multiple IP
- // addresses (for=8.8.8.8, 8.8.4.4,172.16.1.20 is valid) we only
- // extract the first, which should be the client IP.
- if match := forRegex.FindStringSubmatch(r.Header.Get(forwarded)); len(match) > 1 {
- // IPv6 addresses in Forwarded headers are quoted-strings. We strip
- // these quotes.
- addr = strings.Trim(match[1], `"`)
- }
- }
-
- return addr
-}
-
-// getScheme retrieves the scheme from the X-Forwarded-Proto and RFC7239
-// Forwarded headers (in that order).
-func getScheme(r *http.Request) string {
- var scheme string
-
- // Retrieve the scheme from X-Forwarded-Proto.
- if proto := r.Header.Get(xForwardedProto); proto != "" {
- scheme = strings.ToLower(proto)
- } else if proto = r.Header.Get(xForwardedScheme); proto != "" {
- scheme = strings.ToLower(proto)
- } else if proto = r.Header.Get(forwarded); proto != "" {
- // match should contain at least two elements if the protocol was
- // specified in the Forwarded header. The first element will always be
- // the 'proto=' capture, which we ignore. In the case of multiple proto
- // parameters (invalid) we only extract the first.
- if match := protoRegex.FindStringSubmatch(proto); len(match) > 1 {
- scheme = strings.ToLower(match[1])
- }
- }
-
- return scheme
-}
diff --git a/vendor/github.com/gorilla/handlers/recovery.go b/vendor/github.com/gorilla/handlers/recovery.go
deleted file mode 100644
index 0d4f955ecbda0..0000000000000
--- a/vendor/github.com/gorilla/handlers/recovery.go
+++ /dev/null
@@ -1,98 +0,0 @@
-package handlers
-
-import (
- "log"
- "net/http"
- "runtime/debug"
-)
-
-// RecoveryHandlerLogger is an interface used by the recovering handler to print logs.
-type RecoveryHandlerLogger interface {
- Println(...interface{})
-}
-
-type recoveryHandler struct {
- handler http.Handler
- logger RecoveryHandlerLogger
- printStack bool
-}
-
-// RecoveryOption provides a functional approach to define
-// configuration for a handler; such as setting the logging
-// whether or not to print stack traces on panic.
-type RecoveryOption func(http.Handler)
-
-func parseRecoveryOptions(h http.Handler, opts ...RecoveryOption) http.Handler {
- for _, option := range opts {
- option(h)
- }
-
- return h
-}
-
-// RecoveryHandler is HTTP middleware that recovers from a panic,
-// logs the panic, writes http.StatusInternalServerError, and
-// continues to the next handler.
-//
-// Example:
-//
-// r := mux.NewRouter()
-// r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
-// panic("Unexpected error!")
-// })
-//
-// http.ListenAndServe(":1123", handlers.RecoveryHandler()(r))
-func RecoveryHandler(opts ...RecoveryOption) func(h http.Handler) http.Handler {
- return func(h http.Handler) http.Handler {
- r := &recoveryHandler{handler: h}
- return parseRecoveryOptions(r, opts...)
- }
-}
-
-// RecoveryLogger is a functional option to override
-// the default logger.
-func RecoveryLogger(logger RecoveryHandlerLogger) RecoveryOption {
- return func(h http.Handler) {
- r := h.(*recoveryHandler) //nolint:errcheck //TODO:
- // @bharat-rajani should return type-assertion error but would break the API?
- r.logger = logger
- }
-}
-
-// PrintRecoveryStack is a functional option to enable
-// or disable printing stack traces on panic.
-func PrintRecoveryStack(shouldPrint bool) RecoveryOption {
- return func(h http.Handler) {
- r := h.(*recoveryHandler) //nolint:errcheck //TODO:
- // @bharat-rajani should return type-assertion error but would break the API?
- r.printStack = shouldPrint
- }
-}
-
-func (h recoveryHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
- defer func() {
- if err := recover(); err != nil {
- w.WriteHeader(http.StatusInternalServerError)
- h.log(err)
- }
- }()
-
- h.handler.ServeHTTP(w, req)
-}
-
-func (h recoveryHandler) log(v ...interface{}) {
- if h.logger != nil {
- h.logger.Println(v...)
- } else {
- log.Println(v...)
- }
-
- if h.printStack {
- stack := string(debug.Stack())
- if h.logger != nil {
- h.logger.Println(stack)
- } else {
- log.Println(stack)
- }
- }
-}
diff --git a/vendor/github.com/grafana/dskit/httpgrpc/httpgrpc.go b/vendor/github.com/grafana/dskit/httpgrpc/httpgrpc.go
index 02e6e493736b4..616023899b7ec 100644
--- a/vendor/github.com/grafana/dskit/httpgrpc/httpgrpc.go
+++ b/vendor/github.com/grafana/dskit/httpgrpc/httpgrpc.go
@@ -106,16 +106,23 @@ func FromHeader(hs http.Header) []*Header {
return result
}
-// Errorf returns a HTTP gRPC error than is correctly forwarded over
+// Error returns a HTTP gRPC error that is correctly forwarded over
// gRPC, and can eventually be converted back to a HTTP response with
// HTTPResponseFromError.
-func Errorf(code int, tmpl string, args ...interface{}) error {
+func Error(code int, msg string) error {
return ErrorFromHTTPResponse(&HTTPResponse{
Code: int32(code),
- Body: []byte(fmt.Sprintf(tmpl, args...)),
+ Body: []byte(msg),
})
}
+// Errorf returns a HTTP gRPC error that is correctly forwarded over
+// gRPC, and can eventually be converted back to a HTTP response with
+// HTTPResponseFromError.
+func Errorf(code int, tmpl string, args ...interface{}) error {
+ return Error(code, fmt.Sprintf(tmpl, args...))
+}
+
// ErrorFromHTTPResponse converts an HTTP response into a grpc error, and uses HTTP response body as an error message.
// Note that if HTTP response body contains non-utf8 string, then returned error cannot be marshalled by protobuf.
func ErrorFromHTTPResponse(resp *HTTPResponse) error {
diff --git a/vendor/github.com/grafana/dskit/httpgrpc/server/server.go b/vendor/github.com/grafana/dskit/httpgrpc/server/server.go
index 6a831dac0f8fd..935ec0fc5e313 100644
--- a/vendor/github.com/grafana/dskit/httpgrpc/server/server.go
+++ b/vendor/github.com/grafana/dskit/httpgrpc/server/server.go
@@ -186,7 +186,7 @@ func NewClient(address string) (*Client, error) {
),
}
- conn, err := grpc.Dial(address, dialOptions...)
+ conn, err := grpc.NewClient(address, dialOptions...)
if err != nil {
return nil, err
}
diff --git a/vendor/github.com/grafana/dskit/ring/batch.go b/vendor/github.com/grafana/dskit/ring/batch.go
index f982bd6c68c3e..e107cab830fce 100644
--- a/vendor/github.com/grafana/dskit/ring/batch.go
+++ b/vendor/github.com/grafana/dskit/ring/batch.go
@@ -131,7 +131,7 @@ func DoBatchWithOptions(ctx context.Context, op Operation, r DoBatchRing, keys [
// Get call below takes ~1 microsecond for ~500 instances.
// Checking every 10K calls would be every 10ms.
if i%10e3 == 0 {
- if err := ctx.Err(); err != nil {
+ if err := context.Cause(ctx); err != nil {
o.Cleanup()
return err
}
@@ -161,7 +161,7 @@ func DoBatchWithOptions(ctx context.Context, op Operation, r DoBatchRing, keys [
}
// One last check before calling the callbacks: it doesn't make sense if context is canceled.
- if err := ctx.Err(); err != nil {
+ if err := context.Cause(ctx); err != nil {
o.Cleanup()
return err
}
@@ -196,7 +196,7 @@ func DoBatchWithOptions(ctx context.Context, op Operation, r DoBatchRing, keys [
case <-tracker.done:
return nil
case <-ctx.Done():
- return ctx.Err()
+ return context.Cause(ctx)
}
}
diff --git a/vendor/github.com/grafana/dskit/ring/lifecycler.go b/vendor/github.com/grafana/dskit/ring/lifecycler.go
index 1ff80d99ac807..083f112bdf137 100644
--- a/vendor/github.com/grafana/dskit/ring/lifecycler.go
+++ b/vendor/github.com/grafana/dskit/ring/lifecycler.go
@@ -415,6 +415,11 @@ func (i *Lifecycler) setReadOnlyState(readOnly bool, readOnlyLastUpdated time.Ti
defer i.stateMtx.Unlock()
i.readOnly = readOnly
i.readOnlyLastUpdated = readOnlyLastUpdated
+ if readOnly {
+ i.lifecyclerMetrics.readonly.Set(1)
+ } else {
+ i.lifecyclerMetrics.readonly.Set(0)
+ }
}
// ClaimTokensFor takes all the tokens for the supplied ingester and assigns them to this ingester.
@@ -678,8 +683,8 @@ func (i *Lifecycler) initRing(ctx context.Context) error {
now := time.Now()
// The instance doesn't exist in the ring, so it's safe to set the registered timestamp as of now.
i.setRegisteredAt(now)
- // Clear read-only state, and set last update time to "now".
- i.setReadOnlyState(false, now)
+ // Clear read-only state, and set last update time to "zero".
+ i.setReadOnlyState(false, time.Time{})
// We use the tokens from the file only if it does not exist in the ring yet.
if len(tokensFromFile) > 0 {
@@ -719,8 +724,8 @@ func (i *Lifecycler) initRing(ctx context.Context) error {
}
tokens := Tokens(instanceDesc.Tokens)
- level.Info(i.logger).Log("msg", "existing instance found in ring", "state", instanceDesc.State, "tokens",
- len(tokens), "ring", i.RingName)
+ ro, rots := instanceDesc.GetReadOnlyState()
+ level.Info(i.logger).Log("msg", "existing instance found in ring", "state", instanceDesc.State, "tokens", len(tokens), "ring", i.RingName, "readOnly", ro, "readOnlyStateUpdate", rots)
// If the ingester fails to clean its ring entry up or unregister_on_shutdown=false, it can leave behind its
// ring state as LEAVING. Make sure to switch to the ACTIVE state.
diff --git a/vendor/github.com/grafana/dskit/ring/lifecycler_metrics.go b/vendor/github.com/grafana/dskit/ring/lifecycler_metrics.go
index fe29cdfd5fc80..e5f85e4e42387 100644
--- a/vendor/github.com/grafana/dskit/ring/lifecycler_metrics.go
+++ b/vendor/github.com/grafana/dskit/ring/lifecycler_metrics.go
@@ -8,6 +8,7 @@ import (
type LifecyclerMetrics struct {
consulHeartbeats prometheus.Counter
shutdownDuration *prometheus.HistogramVec
+ readonly prometheus.Gauge
}
func NewLifecyclerMetrics(ringName string, reg prometheus.Registerer) *LifecyclerMetrics {
@@ -23,6 +24,11 @@ func NewLifecyclerMetrics(ringName string, reg prometheus.Registerer) *Lifecycle
Buckets: prometheus.ExponentialBuckets(10, 2, 8), // Biggest bucket is 10*2^(9-1) = 2560, or 42 mins.
ConstLabels: prometheus.Labels{"name": ringName},
}, []string{"op", "status"}),
+ readonly: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
+ Name: "lifecycler_read_only",
+ Help: "Set to 1 if this lifecycler's instance entry is in read-only state.",
+ ConstLabels: prometheus.Labels{"name": ringName},
+ }),
}
}
diff --git a/vendor/github.com/grafana/dskit/ring/model.go b/vendor/github.com/grafana/dskit/ring/model.go
index fb3095172b55b..c4ba6446693b9 100644
--- a/vendor/github.com/grafana/dskit/ring/model.go
+++ b/vendor/github.com/grafana/dskit/ring/model.go
@@ -594,6 +594,29 @@ func (d *Desc) writableInstancesWithTokensCountPerZone() map[string]int {
return instancesCountPerZone
}
+func (d *Desc) readOnlyInstancesAndOldestReadOnlyUpdatedTimestamp() (int, int64) {
+ readOnlyInstances := 0
+ oldestReadOnlyUpdatedTimestamp := int64(0)
+ first := true
+
+ if d != nil {
+ for _, ingester := range d.Ingesters {
+ if !ingester.ReadOnly {
+ continue
+ }
+
+ readOnlyInstances++
+ if first {
+ oldestReadOnlyUpdatedTimestamp = ingester.ReadOnlyUpdatedTimestamp
+ } else {
+ oldestReadOnlyUpdatedTimestamp = min(oldestReadOnlyUpdatedTimestamp, ingester.ReadOnlyUpdatedTimestamp)
+ }
+ first = false
+ }
+ }
+ return readOnlyInstances, oldestReadOnlyUpdatedTimestamp
+}
+
type CompareResult int
// CompareResult responses
diff --git a/vendor/github.com/grafana/dskit/ring/replication_set.go b/vendor/github.com/grafana/dskit/ring/replication_set.go
index ffdcf80ab5268..ae37820202561 100644
--- a/vendor/github.com/grafana/dskit/ring/replication_set.go
+++ b/vendor/github.com/grafana/dskit/ring/replication_set.go
@@ -316,7 +316,7 @@ func DoUntilQuorumWithoutSuccessfulContextCancellation[T any](ctx context.Contex
ext.Error.Set(cfg.Logger.Span, true)
}
- contextTracker.cancelAllContexts(cancellation.NewErrorf(cause))
+ contextTracker.cancelAllContexts(cancellation.NewError(errors.New(cause)))
cleanupResultsAlreadyReceived()
return nil, err
}
diff --git a/vendor/github.com/grafana/dskit/ring/ring.go b/vendor/github.com/grafana/dskit/ring/ring.go
index bb7e29c28a410..c8db7da50c61b 100644
--- a/vendor/github.com/grafana/dskit/ring/ring.go
+++ b/vendor/github.com/grafana/dskit/ring/ring.go
@@ -191,6 +191,12 @@ type Ring struct {
// then this value will be 0.
oldestRegisteredTimestamp int64
+ readOnlyInstances *int // Number of instances with ReadOnly flag set. Only valid if not nil.
+ // Oldest value of ReadOnlyUpdatedTimestamp for read-only instances. If there are no read-only instances,
+ // or if any read-only instance has ReadOnlyUpdatedTimestamp == 0 (which should not happen), then this value will be 0.
+ // Only valid if not nil.
+ oldestReadOnlyUpdatedTimestamp *int64
+
// Maps a token with the information of the instance holding it. This map is immutable and
// cannot be changed in place because it's shared "as is" between subrings (the only way to
// change it is to create a new one and replace it).
@@ -315,7 +321,7 @@ func (r *Ring) starting(ctx context.Context) error {
func (r *Ring) loop(ctx context.Context) error {
// Update the ring metrics at start of the main loop.
r.mtx.Lock()
- r.updateRingMetrics(Different)
+ r.updateRingMetrics()
r.mtx.Unlock()
r.KVClient.WatchKey(ctx, r.key, func(value interface{}) bool {
@@ -356,11 +362,17 @@ func (r *Ring) updateRingState(ringDesc *Desc) {
// when watching the ring for updates).
r.mtx.Lock()
r.ringDesc = ringDesc
- r.updateRingMetrics(rc)
+ if rc != Equal {
+ r.updateRingMetrics()
+ }
r.mtx.Unlock()
return
}
+ r.setRingStateFromDesc(ringDesc, true, true, true)
+}
+
+func (r *Ring) setRingStateFromDesc(ringDesc *Desc, updateMetrics, updateRegisteredTimestampCache, updateReadOnlyInstances bool) {
now := time.Now()
ringTokens := ringDesc.GetTokens()
ringTokensByZone := ringDesc.getTokensByZone()
@@ -372,6 +384,7 @@ func (r *Ring) updateRingState(ringDesc *Desc) {
instancesWithTokensCountPerZone := ringDesc.instancesWithTokensCountPerZone()
writableInstancesWithTokensCount := ringDesc.writableInstancesWithTokensCount()
writableInstancesWithTokensCountPerZone := ringDesc.writableInstancesWithTokensCountPerZone()
+ readOnlyInstances, oldestReadOnlyUpdatedTimestamp := ringDesc.readOnlyInstancesAndOldestReadOnlyUpdatedTimestamp()
r.mtx.Lock()
defer r.mtx.Unlock()
@@ -385,8 +398,14 @@ func (r *Ring) updateRingState(ringDesc *Desc) {
r.instancesWithTokensCountPerZone = instancesWithTokensCountPerZone
r.writableInstancesWithTokensCount = writableInstancesWithTokensCount
r.writableInstancesWithTokensCountPerZone = writableInstancesWithTokensCountPerZone
- r.oldestRegisteredTimestamp = oldestRegisteredTimestamp
+ if updateRegisteredTimestampCache {
+ r.oldestRegisteredTimestamp = oldestRegisteredTimestamp
+ }
r.lastTopologyChange = now
+ if updateReadOnlyInstances {
+ r.readOnlyInstances = &readOnlyInstances
+ r.oldestReadOnlyUpdatedTimestamp = &oldestReadOnlyUpdatedTimestamp
+ }
// Invalidate all cached subrings.
if r.shuffledSubringCache != nil {
@@ -396,7 +415,9 @@ func (r *Ring) updateRingState(ringDesc *Desc) {
r.shuffledSubringWithLookbackCache = make(map[subringCacheKey]cachedSubringWithLookback[*Ring])
}
- r.updateRingMetrics(rc)
+ if updateMetrics {
+ r.updateRingMetrics()
+ }
}
// Get returns n (or more) instances which form the replicas for the given key.
@@ -636,11 +657,7 @@ func (r *Desc) CountTokens() map[string]int64 {
}
// updateRingMetrics updates ring metrics. Caller must be holding the Write lock!
-func (r *Ring) updateRingMetrics(compareResult CompareResult) {
- if compareResult == Equal {
- return
- }
-
+func (r *Ring) updateRingMetrics() {
numByState := map[string]int{}
oldestTimestampByState := map[string]int64{}
@@ -668,10 +685,6 @@ func (r *Ring) updateRingMetrics(compareResult CompareResult) {
r.oldestTimestampGaugeVec.WithLabelValues(state).Set(float64(timestamp))
}
- if compareResult == EqualButStatesAndTimestamps {
- return
- }
-
r.totalTokensGauge.Set(float64(len(r.ringTokens)))
}
@@ -697,18 +710,16 @@ func (r *Ring) updateRingMetrics(compareResult CompareResult) {
//
// Subring returned by this method does not contain instances that have read-only field set.
func (r *Ring) ShuffleShard(identifier string, size int) ReadRing {
- // Use all possible instances if shuffle sharding is disabled. We don't set size to r.InstancesCount(), because
- // that could lead to not all instances being returned when ring zones are unbalanced.
- // Reason for not returning entire ring directly is that we need to filter out read-only instances.
- if size <= 0 {
- size = math.MaxInt
- }
-
if cached := r.getCachedShuffledSubring(identifier, size); cached != nil {
return cached
}
- result := r.shuffleShard(identifier, size, 0, time.Now())
+ var result *Ring
+ if size <= 0 {
+ result = r.filterOutReadOnlyInstances(0, time.Now())
+ } else {
+ result = r.shuffleShard(identifier, size, 0, time.Now())
+ }
// Only cache subring if it is different from this ring, to avoid deadlocks in getCachedShuffledSubring,
// when we update the cached ring.
if result != r {
@@ -725,17 +736,20 @@ func (r *Ring) ShuffleShard(identifier string, size int) ReadRing {
//
// This function supports caching, but the cache will only be effective if successive calls for the
// same identifier are with the same lookbackPeriod and increasing values of now.
+//
+// Subring returned by this method does not contain read-only instances that have changed their state
+// before the lookback period.
func (r *Ring) ShuffleShardWithLookback(identifier string, size int, lookbackPeriod time.Duration, now time.Time) ReadRing {
- // Nothing to do if the shard size is not smaller than the actual ring.
- if size <= 0 || r.InstancesCount() <= size {
- return r
- }
-
if cached := r.getCachedShuffledSubringWithLookback(identifier, size, lookbackPeriod, now); cached != nil {
return cached
}
- result := r.shuffleShard(identifier, size, lookbackPeriod, now)
+ var result *Ring
+ if size <= 0 {
+ result = r.filterOutReadOnlyInstances(lookbackPeriod, now)
+ } else {
+ result = r.shuffleShard(identifier, size, lookbackPeriod, now)
+ }
if result != r {
r.setCachedShuffledSubringWithLookback(identifier, size, lookbackPeriod, now, result)
@@ -756,6 +770,9 @@ func (r *Ring) shuffleShard(identifier string, size int, lookbackPeriod time.Dur
//
// If any instance had RegisteredTimestamp equal to 0 (it would not cause additional lookup of next instance),
// then r.oldestRegisteredTimestamp is zero too, and we skip this optimization.
+ //
+ // Even if some instances are read-only, they must have changed their read-only status within lookback window
+ // (because they were all registered within lookback window), so they would be included in the result.
if lookbackPeriod > 0 && r.oldestRegisteredTimestamp > 0 && r.oldestRegisteredTimestamp >= lookbackUntil {
return r
}
@@ -778,6 +795,19 @@ func (r *Ring) shuffleShard(identifier string, size int, lookbackPeriod time.Dur
var tokens []uint32
if r.cfg.ZoneAwarenessEnabled {
+ // If we're going to include all instances from this zone, we can simply filter out
+ // unwanted instances, and avoid iterating through tokens.
+ if numInstancesPerZone >= r.instancesCountPerZone[zone] {
+ for id, inst := range r.ringDesc.Ingesters {
+ if inst.Zone == zone && shouldIncludeReadonlyInstanceInTheShard(inst, lookbackPeriod, lookbackUntil) {
+ shard[id] = inst
+ }
+ }
+
+ // We can go to the next zone, no need to iterate tokens.
+ continue
+ }
+
tokens = r.ringTokensByZone[zone]
} else {
// When zone-awareness is disabled, we just iterate over 1 single fake zone
@@ -819,11 +849,9 @@ func (r *Ring) shuffleShard(identifier string, size int, lookbackPeriod time.Dur
instanceID := info.InstanceID
instance := r.ringDesc.Ingesters[instanceID]
- // The lookbackPeriod is 0 when this function is called by ShuffleShard(). In this case, we want read only instances excluded.
- if lookbackPeriod == 0 && instance.ReadOnly {
+ if !shouldIncludeReadonlyInstanceInTheShard(instance, lookbackPeriod, lookbackUntil) {
continue
}
-
// Include instance in the subring.
shard[instanceID] = instance
@@ -855,7 +883,56 @@ func (r *Ring) shuffleShard(identifier string, size int, lookbackPeriod time.Dur
}
}
- // Build a read-only ring for the shard.
+ return r.buildRingForTheShard(shard)
+}
+
+// shouldIncludeReadonlyInstanceInTheShard returns true if instance is not read-only, or when it is read-only and should be included in the shuffle shard.
+func shouldIncludeReadonlyInstanceInTheShard(instance InstanceDesc, lookbackPeriod time.Duration, lookbackUntil int64) bool {
+ if !instance.ReadOnly {
+ return true
+ }
+ // The lookbackPeriod is 0 when this function is called by ShuffleShard(). In this case, we want read only instances excluded.
+ if lookbackPeriod == 0 {
+ return false
+ }
+ // With lookback period >0, read only instances are only included if they have not changed read-only status in the lookback window.
+ // If ReadOnlyUpdatedTimestamp is not set, we include the instance, and extend the shard later.
+ if lookbackPeriod > 0 && instance.ReadOnlyUpdatedTimestamp > 0 && instance.ReadOnlyUpdatedTimestamp < lookbackUntil {
+ return false
+ }
+ return true
+}
+
+// filterOutReadOnlyInstances removes all read-only instances from the ring, and returns the resulting ring.
+func (r *Ring) filterOutReadOnlyInstances(lookbackPeriod time.Duration, now time.Time) *Ring {
+ lookbackUntil := now.Add(-lookbackPeriod).Unix()
+
+ r.mtx.RLock()
+ defer r.mtx.RUnlock()
+
+ // If there are no read-only instances, there's no need to do any filtering.
+ if r.readOnlyInstances != nil && *r.readOnlyInstances == 0 {
+ return r
+ }
+
+ // If all readOnlyUpdatedTimestamp values are within lookback window, we can return the ring without any filtering.
+ if lookbackPeriod > 0 && r.oldestReadOnlyUpdatedTimestamp != nil && *r.oldestReadOnlyUpdatedTimestamp >= lookbackUntil {
+ return r
+ }
+
+ shard := make(map[string]InstanceDesc, len(r.ringDesc.Ingesters))
+
+ for id, inst := range r.ringDesc.Ingesters {
+ if shouldIncludeReadonlyInstanceInTheShard(inst, lookbackPeriod, lookbackUntil) {
+ shard[id] = inst
+ }
+ }
+
+ return r.buildRingForTheShard(shard)
+}
+
+// buildRingForTheShard builds read-only ring for the shard (this ring won't be updated in the future).
+func (r *Ring) buildRingForTheShard(shard map[string]InstanceDesc) *Ring {
shardDesc := &Desc{Ingesters: shard}
shardTokensByZone := shardDesc.getTokensByZone()
shardTokens := mergeTokenGroups(shardTokensByZone)
diff --git a/vendor/github.com/grafana/dskit/runtimeconfig/manager.go b/vendor/github.com/grafana/dskit/runtimeconfig/manager.go
index 84b69de766d00..e74d011d99501 100644
--- a/vendor/github.com/grafana/dskit/runtimeconfig/manager.go
+++ b/vendor/github.com/grafana/dskit/runtimeconfig/manager.go
@@ -2,12 +2,14 @@ package runtimeconfig
import (
"bytes"
+ "compress/gzip"
"context"
"crypto/sha256"
"flag"
"fmt"
"io"
"os"
+ "strings"
"sync"
"time"
@@ -183,8 +185,8 @@ func (om *Manager) loadConfig() error {
mergedConfig := map[string]interface{}{}
for _, f := range om.cfg.LoadPath {
- yamlFile := map[string]interface{}{}
- err := yaml.Unmarshal(rawData[f], &yamlFile)
+ data := rawData[f]
+ yamlFile, err := om.unmarshalMaybeGzipped(f, data)
if err != nil {
om.configLoadSuccess.Set(0)
return errors.Wrapf(err, "unmarshal file %q", f)
@@ -218,6 +220,32 @@ func (om *Manager) loadConfig() error {
return nil
}
+func (om *Manager) unmarshalMaybeGzipped(filename string, data []byte) (map[string]any, error) {
+ yamlFile := map[string]any{}
+ if strings.HasSuffix(filename, ".gz") {
+ r, err := gzip.NewReader(bytes.NewReader(data))
+ if err != nil {
+ return nil, errors.Wrap(err, "read gzipped file")
+ }
+ defer r.Close()
+ err = yaml.NewDecoder(r).Decode(&yamlFile)
+ return yamlFile, errors.Wrap(err, "uncompress/unmarshal gzipped file")
+ }
+
+ if err := yaml.Unmarshal(data, &yamlFile); err != nil {
+ // Give a hint if we think that file is gzipped.
+ if isGzip(data) {
+ return nil, errors.Wrap(err, "file looks gzipped but doesn't have a .gz extension")
+ }
+ return nil, err
+ }
+ return yamlFile, nil
+}
+
+func isGzip(data []byte) bool {
+ return len(data) > 2 && data[0] == 0x1f && data[1] == 0x8b
+}
+
func mergeConfigMaps(a, b map[string]interface{}) map[string]interface{} {
out := make(map[string]interface{}, len(a))
for k, v := range a {
diff --git a/vendor/github.com/grafana/dskit/server/server.go b/vendor/github.com/grafana/dskit/server/server.go
index a23eead3891e4..7b8e7593d9eef 100644
--- a/vendor/github.com/grafana/dskit/server/server.go
+++ b/vendor/github.com/grafana/dskit/server/server.go
@@ -31,7 +31,6 @@ import (
"golang.org/x/net/netutil"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
- "google.golang.org/grpc/experimental"
"google.golang.org/grpc/keepalive"
"github.com/grafana/dskit/httpgrpc"
@@ -197,7 +196,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&cfg.GRPCServerMinTimeBetweenPings, "server.grpc.keepalive.min-time-between-pings", 5*time.Minute, "Minimum amount of time a client should wait before sending a keepalive ping. If client sends keepalive ping more often, server will send GOAWAY and close the connection.")
f.BoolVar(&cfg.GRPCServerPingWithoutStreamAllowed, "server.grpc.keepalive.ping-without-stream-allowed", false, "If true, server allows keepalive pings even when there are no active streams(RPCs). If false, and client sends ping when there are no active streams, server will send GOAWAY and close the connection.")
f.BoolVar(&cfg.GRPCServerStatsTrackingEnabled, "server.grpc.stats-tracking-enabled", true, "If true, the request_message_bytes, response_message_bytes, and inflight_requests metrics will be tracked. Enabling this option prevents the use of memory pools for parsing gRPC request bodies and may lead to more memory allocations.")
- f.BoolVar(&cfg.GRPCServerRecvBufferPoolsEnabled, "server.grpc.recv-buffer-pools-enabled", false, "If true, gGPC's buffer pools will be used to handle incoming requests. Enabling this feature can reduce memory allocation, but also requires disabling GRPC server stats tracking by setting `server.grpc.stats-tracking-enabled=false`. This is an experimental gRPC feature, so it might be removed in a future version of the gRPC library.")
+ f.BoolVar(&cfg.GRPCServerRecvBufferPoolsEnabled, "server.grpc.recv-buffer-pools-enabled", false, "Deprecated option, has no effect and will be removed in a future version.")
f.IntVar(&cfg.GRPCServerNumWorkers, "server.grpc.num-workers", 0, "If non-zero, configures the amount of GRPC server workers used to serve the requests.")
f.StringVar(&cfg.PathPrefix, "server.path-prefix", "", "Base path to serve all API routes from (e.g. /v1/)")
f.StringVar(&cfg.LogFormat, "log.format", log.LogfmtFormat, "Output log messages in the given format. Valid formats: [logfmt, json]")
@@ -439,10 +438,7 @@ func newServer(cfg Config, metrics *Metrics) (*Server, error) {
}
if cfg.GRPCServerRecvBufferPoolsEnabled {
- if cfg.GRPCServerStatsTrackingEnabled {
- return nil, fmt.Errorf("grpc_server_stats_tracking_enabled must be set to false if grpc_server_recv_buffer_pools_enabled is true")
- }
- grpcOptions = append(grpcOptions, experimental.RecvBufferPool(grpc.NewSharedBufferPool()))
+ level.Warn(logger).Log("msg", "'server.grpc.recv-buffer-pools-enabled' is a deprecated option that currently has no effect and will be removed in a future version")
}
grpcOptions = append(grpcOptions, cfg.GRPCOptions...)
diff --git a/vendor/github.com/hashicorp/consul/api/config_entry_mesh.go b/vendor/github.com/hashicorp/consul/api/config_entry_mesh.go
index 1a1ebb8b536bc..e035d15967783 100644
--- a/vendor/github.com/hashicorp/consul/api/config_entry_mesh.go
+++ b/vendor/github.com/hashicorp/consul/api/config_entry_mesh.go
@@ -26,6 +26,14 @@ type MeshConfigEntry struct {
// MutualTLSMode=permissive in either service-defaults or proxy-defaults.
AllowEnablingPermissiveMutualTLS bool `json:",omitempty" alias:"allow_enabling_permissive_mutual_tls"`
+ // ValidateClusters controls whether the clusters the route table refers to are validated. The default value is
+ // false. When set to false and a route refers to a cluster that does not exist, the route table loads and routing
+ // to a non-existent cluster results in a 404. When set to true and the route is set to a cluster that do not exist,
+ // the route table will not load. For more information, refer to
+ // [HTTP route configuration in the Envoy docs](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route.proto#envoy-v3-api-field-config-route-v3-routeconfiguration-validate-clusters)
+ // for more details.
+ ValidateClusters bool `json:",omitempty" alias:"validate_clusters"`
+
TLS *MeshTLSConfig `json:",omitempty"`
HTTP *MeshHTTPConfig `json:",omitempty"`
diff --git a/vendor/github.com/hashicorp/go-msgpack/v2/codec/doc.go b/vendor/github.com/hashicorp/go-msgpack/v2/codec/doc.go
index deac00010c2bf..325c6e1ed9570 100644
--- a/vendor/github.com/hashicorp/go-msgpack/v2/codec/doc.go
+++ b/vendor/github.com/hashicorp/go-msgpack/v2/codec/doc.go
@@ -61,11 +61,10 @@ Rich Feature Set includes:
- Drop-in replacement for encoding/json. `json:` key in struct tag supported.
- Provides a RPC Server and Client Codec for net/rpc communication protocol.
- Handle unique idiosyncrasies of codecs e.g.
- - For messagepack, configure how ambiguities in handling raw bytes are resolved
- - For messagepack, provide rpc server/client codec to support
- msgpack-rpc protocol defined at:
- https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
-
+ - For messagepack, configure how ambiguities in handling raw bytes are resolved
+ - For messagepack, provide rpc server/client codec to support
+ msgpack-rpc protocol defined at:
+ https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
## Extension Support
@@ -75,11 +74,13 @@ custom types.
There are no restrictions on what the custom type can be. Some examples:
```go
- type BisSet []int
- type BitSet64 uint64
- type UUID string
- type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
- type GifImage struct { ... }
+
+ type BisSet []int
+ type BitSet64 uint64
+ type UUID string
+ type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
+ type GifImage struct { ... }
+
```
As an illustration, MyStructWithUnexportedFields would normally be encoded
@@ -87,7 +88,6 @@ as an empty map because it has no exported fields, while UUID would be
encoded as a string. However, with extension support, you can encode any of
these however you like.
-
## Custom Encoding and Decoding
This package maintains symmetry in the encoding and decoding halfs. We
@@ -108,13 +108,11 @@ Consequently, if a type only defines one-half of the symmetry (e.g. it
implements UnmarshalJSON() but not MarshalJSON() ), then that type doesn't
satisfy the check and we will continue walking down the decision tree.
-
## RPC
RPC Client and Server Codecs are implemented, so the codecs can be used with
the standard net/rpc package.
-
## Usage
The Handle is SAFE for concurrent READ, but NOT SAFE for concurrent
@@ -135,85 +133,93 @@ Consequently, the usage model is basically:
Sample usage model:
```go
- // create and configure Handle
- var (
- mh codec.MsgpackHandle
- )
-
- mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
-
- // configure extensions
- // e.g. for msgpack, define functions and enable Time support for tag 1
- mh.SetExt(reflect.TypeOf(time.Time{}), 1, myExt)
-
- // create and use decoder/encoder
- var (
- r io.Reader
- w io.Writer
- b []byte
- h = &mh
- )
-
- dec = codec.NewDecoder(r, h)
- dec = codec.NewDecoderBytes(b, h)
- err = dec.Decode(&v)
-
- enc = codec.NewEncoder(w, h)
- enc = codec.NewEncoderBytes(&b, h)
- err = enc.Encode(v)
-
- //RPC Server
- go func() {
- for {
- conn, err := listener.Accept()
- rpcCodec := codec.GoRpc.ServerCodec(conn, h)
- //OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
- rpc.ServeCodec(rpcCodec)
- }
- }()
-
- //RPC Communication (client side)
- conn, err = net.Dial("tcp", "localhost:5555")
- rpcCodec := codec.GoRpc.ClientCodec(conn, h)
- //OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
- client := rpc.NewClientWithCodec(rpcCodec)
-```
+ // create and configure Handle
+ var (
+ mh codec.MsgpackHandle
+ )
+
+ mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
+
+ // configure extensions
+ // e.g. for msgpack, define functions and enable Time support for tag 1
+ mh.SetExt(reflect.TypeOf(time.Time{}), 1, myExt)
+
+ // create and use decoder/encoder
+ var (
+ r io.Reader
+ w io.Writer
+ b []byte
+ h = &mh
+ )
+
+ dec = codec.NewDecoder(r, h)
+ dec = codec.NewDecoderBytes(b, h)
+ err = dec.Decode(&v)
+
+ enc = codec.NewEncoder(w, h)
+ enc = codec.NewEncoderBytes(&b, h)
+ err = enc.Encode(v)
+
+ //RPC Server
+ go func() {
+ for {
+ conn, err := listener.Accept()
+ rpcCodec := codec.GoRpc.ServerCodec(conn, h)
+ //OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
+ rpc.ServeCodec(rpcCodec)
+ }
+ }()
+
+ //RPC Communication (client side)
+ conn, err = net.Dial("tcp", "localhost:5555")
+ rpcCodec := codec.GoRpc.ClientCodec(conn, h)
+ //OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
+ client := rpc.NewClientWithCodec(rpcCodec)
+
+```
## Running Tests
To run tests, use the following:
```
- go test
+
+ go test
+
```
To run the full suite of tests, use the following:
```
- go test -tags alltests -run Suite
+
+ go test -tags alltests -run Suite
+
```
You can run the tag 'safe' to run tests or build in safe mode. e.g.
```
- go test -tags safe -run Json
- go test -tags "alltests safe" -run Suite
+
+ go test -tags safe -run Json
+ go test -tags "alltests safe" -run Suite
+
```
## Running Benchmarks
```
- cd codec/bench
- ./bench.sh -d
- ./bench.sh -c
- ./bench.sh -s
- go test -bench . -benchmem -benchtime 1s
+
+ cd codec/bench
+ ./bench.sh -d
+ ./bench.sh -c
+ ./bench.sh -s
+ go test -bench . -benchmem -benchtime 1s
+
```
Please see http://github.com/hashicorp/go-codec-bench .
-
## Caveats
Struct fields matching the following are ignored during encoding and
diff --git a/vendor/github.com/hashicorp/go-msgpack/v2/codec/gen.go b/vendor/github.com/hashicorp/go-msgpack/v2/codec/gen.go
index efaff02e3ab24..92e631cad6ecb 100644
--- a/vendor/github.com/hashicorp/go-msgpack/v2/codec/gen.go
+++ b/vendor/github.com/hashicorp/go-msgpack/v2/codec/gen.go
@@ -7,7 +7,7 @@
package codec
import (
- "encoding/base64"
+ "encoding/base32"
"errors"
"fmt"
"io"
@@ -132,7 +132,8 @@ var (
errGenAllTypesSamePkg = errors.New("All types must be in the same package")
errGenExpectArrayOrMap = errors.New("unexpected type. Expecting array/map/slice")
- genBase64enc = base64.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789__")
+ // base64 requires 64 unique characters in Go 1.22+, which is not possible for Go identifiers.
+ genBase32enc = base32.NewEncoding("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef")
genQNameRegex = regexp.MustCompile(`[A-Za-z_.]+`)
)
@@ -1829,7 +1830,7 @@ func genMethodNameT(t reflect.Type, tRef reflect.Type) (n string) {
} else {
// best way to get the package name inclusive
// return ptrPfx + strings.Replace(tstr, ".", "_", 1000)
- // return ptrPfx + genBase64enc.EncodeToString([]byte(tstr))
+ // return ptrPfx + genBase32enc.EncodeToString([]byte(tstr))
if t.Name() != "" && genQNameRegex.MatchString(tstr) {
return ptrPfx + strings.Replace(tstr, ".", "_", 1000)
} else {
@@ -1840,12 +1841,12 @@ func genMethodNameT(t reflect.Type, tRef reflect.Type) (n string) {
}
}
-// genCustomNameForType base64encodes the t.String() value in such a way
+// genCustomNameForType base32encodes the t.String() value in such a way
// that it can be used within a function name.
func genCustomTypeName(tstr string) string {
- len2 := genBase64enc.EncodedLen(len(tstr))
+ len2 := genBase32enc.EncodedLen(len(tstr))
bufx := make([]byte, len2)
- genBase64enc.Encode(bufx, []byte(tstr))
+ genBase32enc.Encode(bufx, []byte(tstr))
for i := len2 - 1; i >= 0; i-- {
if bufx[i] == '=' {
len2--
diff --git a/vendor/github.com/hashicorp/raft/raft.go b/vendor/github.com/hashicorp/raft/raft.go
index 183f041a42254..cbc9a59afe15e 100644
--- a/vendor/github.com/hashicorp/raft/raft.go
+++ b/vendor/github.com/hashicorp/raft/raft.go
@@ -1749,7 +1749,7 @@ func (r *Raft) requestPreVote(rpc RPC, req *RequestPreVoteRequest) {
}()
// Check if we have an existing leader [who's not the candidate] and also
- var candidate ServerAddress
+ candidate := r.trans.DecodePeer(req.GetRPCHeader().Addr)
candidateID := ServerID(req.ID)
// if the Servers list is empty that mean the cluster is very likely trying to bootstrap,
@@ -1805,7 +1805,6 @@ func (r *Raft) requestPreVote(rpc RPC, req *RequestPreVoteRequest) {
}
resp.Granted = true
- r.setLastContact()
}
// installSnapshot is invoked when we get a InstallSnapshot RPC call.
diff --git a/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go b/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go
index 5f117afa49346..a70cbea9e57ea 100644
--- a/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go
+++ b/vendor/github.com/minio/minio-go/v7/api-put-object-multipart.go
@@ -24,7 +24,6 @@ import (
"encoding/hex"
"encoding/xml"
"fmt"
- "hash/crc32"
"io"
"net/http"
"net/url"
@@ -87,7 +86,7 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj
if opts.UserMetadata == nil {
opts.UserMetadata = make(map[string]string, 1)
}
- opts.UserMetadata["X-Amz-Checksum-Algorithm"] = "CRC32C"
+ opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String()
}
// Initiate a new multipart upload.
@@ -116,7 +115,7 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj
// CRC32C is ~50% faster on AMD64 @ 30GB/s
var crcBytes []byte
customHeader := make(http.Header)
- crc := crc32.New(crc32.MakeTable(crc32.Castagnoli))
+ crc := opts.AutoChecksum.Hasher()
for partNumber <= totalPartsCount {
length, rErr := readFull(reader, buf)
if rErr == io.EOF && partNumber > 1 {
@@ -154,7 +153,7 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj
crc.Reset()
crc.Write(buf[:length])
cSum := crc.Sum(nil)
- customHeader.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(cSum))
+ customHeader.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(cSum))
crcBytes = append(crcBytes, cSum...)
}
@@ -202,12 +201,13 @@ func (c *Client) putObjectMultipartNoStream(ctx context.Context, bucketName, obj
sort.Sort(completedParts(complMultipartUpload.Parts))
opts = PutObjectOptions{
ServerSideEncryption: opts.ServerSideEncryption,
+ AutoChecksum: opts.AutoChecksum,
}
if len(crcBytes) > 0 {
// Add hash of hashes.
crc.Reset()
crc.Write(crcBytes)
- opts.UserMetadata = map[string]string{"X-Amz-Checksum-Crc32c": base64.StdEncoding.EncodeToString(crc.Sum(nil))}
+ opts.UserMetadata = map[string]string{opts.AutoChecksum.Key(): base64.StdEncoding.EncodeToString(crc.Sum(nil))}
}
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts)
if err != nil {
diff --git a/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go b/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go
index 51226630d2b5f..7f316564b3ab5 100644
--- a/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go
+++ b/vendor/github.com/minio/minio-go/v7/api-put-object-streaming.go
@@ -22,7 +22,6 @@ import (
"context"
"encoding/base64"
"fmt"
- "hash/crc32"
"io"
"net/http"
"net/url"
@@ -115,7 +114,7 @@ func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketN
if opts.UserMetadata == nil {
opts.UserMetadata = make(map[string]string, 1)
}
- opts.UserMetadata["X-Amz-Checksum-Algorithm"] = "CRC32C"
+ opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String()
}
// Initiate a new multipart upload.
uploadID, err := c.newUploadID(ctx, bucketName, objectName, opts)
@@ -195,10 +194,10 @@ func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketN
sectionReader := newHook(io.NewSectionReader(reader, readOffset, partSize), opts.Progress)
trailer := make(http.Header, 1)
if withChecksum {
- crc := crc32.New(crc32.MakeTable(crc32.Castagnoli))
- trailer.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(crc.Sum(nil)))
+ crc := opts.AutoChecksum.Hasher()
+ trailer.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(crc.Sum(nil)))
sectionReader = newHashReaderWrapper(sectionReader, crc, func(hash []byte) {
- trailer.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(hash))
+ trailer.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(hash))
})
}
@@ -271,17 +270,18 @@ func (c *Client) putObjectMultipartStreamFromReadAt(ctx context.Context, bucketN
opts = PutObjectOptions{
ServerSideEncryption: opts.ServerSideEncryption,
+ AutoChecksum: opts.AutoChecksum,
}
if withChecksum {
// Add hash of hashes.
- crc := crc32.New(crc32.MakeTable(crc32.Castagnoli))
+ crc := opts.AutoChecksum.Hasher()
for _, part := range complMultipartUpload.Parts {
- cs, err := base64.StdEncoding.DecodeString(part.ChecksumCRC32C)
+ cs, err := base64.StdEncoding.DecodeString(part.Checksum(opts.AutoChecksum))
if err == nil {
crc.Write(cs)
}
}
- opts.UserMetadata = map[string]string{"X-Amz-Checksum-Crc32c": base64.StdEncoding.EncodeToString(crc.Sum(nil))}
+ opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))}
}
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts)
@@ -308,7 +308,7 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b
if opts.UserMetadata == nil {
opts.UserMetadata = make(map[string]string, 1)
}
- opts.UserMetadata["X-Amz-Checksum-Algorithm"] = "CRC32C"
+ opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String()
}
// Calculate the optimal parts info for a given size.
@@ -337,7 +337,7 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b
// CRC32C is ~50% faster on AMD64 @ 30GB/s
var crcBytes []byte
customHeader := make(http.Header)
- crc := crc32.New(crc32.MakeTable(crc32.Castagnoli))
+ crc := opts.AutoChecksum.Hasher()
md5Hash := c.md5Hasher()
defer md5Hash.Close()
@@ -381,7 +381,7 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b
crc.Reset()
crc.Write(buf[:length])
cSum := crc.Sum(nil)
- customHeader.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(cSum))
+ customHeader.Set(opts.AutoChecksum.KeyCapitalized(), base64.StdEncoding.EncodeToString(cSum))
crcBytes = append(crcBytes, cSum...)
}
@@ -433,12 +433,13 @@ func (c *Client) putObjectMultipartStreamOptionalChecksum(ctx context.Context, b
opts = PutObjectOptions{
ServerSideEncryption: opts.ServerSideEncryption,
+ AutoChecksum: opts.AutoChecksum,
}
if len(crcBytes) > 0 {
// Add hash of hashes.
crc.Reset()
crc.Write(crcBytes)
- opts.UserMetadata = map[string]string{"X-Amz-Checksum-Crc32c": base64.StdEncoding.EncodeToString(crc.Sum(nil))}
+ opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))}
}
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts)
if err != nil {
@@ -467,7 +468,7 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam
if opts.UserMetadata == nil {
opts.UserMetadata = make(map[string]string, 1)
}
- opts.UserMetadata["X-Amz-Checksum-Algorithm"] = "CRC32C"
+ opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String()
}
// Cancel all when an error occurs.
@@ -500,7 +501,7 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam
// Create checksums
// CRC32C is ~50% faster on AMD64 @ 30GB/s
var crcBytes []byte
- crc := crc32.New(crc32.MakeTable(crc32.Castagnoli))
+ crc := opts.AutoChecksum.Hasher()
// Total data read and written to server. should be equal to 'size' at the end of the call.
var totalUploadedSize int64
@@ -558,7 +559,7 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam
crc.Reset()
crc.Write(buf[:length])
cSum := crc.Sum(nil)
- customHeader.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(cSum))
+ customHeader.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(cSum))
crcBytes = append(crcBytes, cSum...)
}
@@ -639,12 +640,13 @@ func (c *Client) putObjectMultipartStreamParallel(ctx context.Context, bucketNam
opts = PutObjectOptions{
ServerSideEncryption: opts.ServerSideEncryption,
+ AutoChecksum: opts.AutoChecksum,
}
if len(crcBytes) > 0 {
// Add hash of hashes.
crc.Reset()
crc.Write(crcBytes)
- opts.UserMetadata = map[string]string{"X-Amz-Checksum-Crc32c": base64.StdEncoding.EncodeToString(crc.Sum(nil))}
+ opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))}
}
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts)
if err != nil {
@@ -765,7 +767,10 @@ func (c *Client) putObjectDo(ctx context.Context, bucketName, objectName string,
contentMD5Base64: md5Base64,
contentSHA256Hex: sha256Hex,
streamSha256: !opts.DisableContentSha256,
- addCrc: addCrc,
+ }
+ if addCrc {
+ opts.AutoChecksum.SetDefault(ChecksumCRC32C)
+ reqMetadata.addCrc = &opts.AutoChecksum
}
if opts.Internal.SourceVersionID != "" {
if opts.Internal.SourceVersionID != nullVersionID {
diff --git a/vendor/github.com/minio/minio-go/v7/api-put-object.go b/vendor/github.com/minio/minio-go/v7/api-put-object.go
index 6ccb58156701f..a792cfe390ce3 100644
--- a/vendor/github.com/minio/minio-go/v7/api-put-object.go
+++ b/vendor/github.com/minio/minio-go/v7/api-put-object.go
@@ -23,7 +23,6 @@ import (
"encoding/base64"
"errors"
"fmt"
- "hash/crc32"
"io"
"net/http"
"sort"
@@ -90,6 +89,11 @@ type PutObjectOptions struct {
DisableContentSha256 bool
DisableMultipart bool
+ // AutoChecksum is the type of checksum that will be added if no other checksum is added,
+ // like MD5 or SHA256 streaming checksum, and it is feasible for the upload type.
+ // If none is specified CRC32C is used, since it is generally the fastest.
+ AutoChecksum ChecksumType
+
// ConcurrentStreamParts will create NumThreads buffers of PartSize bytes,
// fill them serially and upload them in parallel.
// This can be used for faster uploads on non-seekable or slow-to-seek input.
@@ -300,6 +304,7 @@ func (c *Client) putObjectCommon(ctx context.Context, bucketName, objectName str
if size > int64(maxMultipartPutObjectSize) {
return UploadInfo{}, errEntityTooLarge(size, maxMultipartPutObjectSize, bucketName, objectName)
}
+ opts.AutoChecksum.SetDefault(ChecksumCRC32C)
// NOTE: Streaming signature is not supported by GCS.
if s3utils.IsGoogleEndpoint(*c.endpointURL) {
@@ -361,7 +366,7 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam
if opts.UserMetadata == nil {
opts.UserMetadata = make(map[string]string, 1)
}
- opts.UserMetadata["X-Amz-Checksum-Algorithm"] = "CRC32C"
+ opts.UserMetadata["X-Amz-Checksum-Algorithm"] = opts.AutoChecksum.String()
}
// Initiate a new multipart upload.
@@ -390,7 +395,7 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam
// CRC32C is ~50% faster on AMD64 @ 30GB/s
var crcBytes []byte
customHeader := make(http.Header)
- crc := crc32.New(crc32.MakeTable(crc32.Castagnoli))
+ crc := opts.AutoChecksum.Hasher()
for partNumber <= totalPartsCount {
length, rerr := readFull(reader, buf)
@@ -413,7 +418,7 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam
crc.Reset()
crc.Write(buf[:length])
cSum := crc.Sum(nil)
- customHeader.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(cSum))
+ customHeader.Set(opts.AutoChecksum.Key(), base64.StdEncoding.EncodeToString(cSum))
crcBytes = append(crcBytes, cSum...)
}
@@ -466,12 +471,13 @@ func (c *Client) putObjectMultipartStreamNoLength(ctx context.Context, bucketNam
opts = PutObjectOptions{
ServerSideEncryption: opts.ServerSideEncryption,
+ AutoChecksum: opts.AutoChecksum,
}
if len(crcBytes) > 0 {
// Add hash of hashes.
crc.Reset()
crc.Write(crcBytes)
- opts.UserMetadata = map[string]string{"X-Amz-Checksum-Crc32c": base64.StdEncoding.EncodeToString(crc.Sum(nil))}
+ opts.UserMetadata = map[string]string{opts.AutoChecksum.KeyCapitalized(): base64.StdEncoding.EncodeToString(crc.Sum(nil))}
}
uploadInfo, err := c.completeMultipartUpload(ctx, bucketName, objectName, uploadID, complMultipartUpload, opts)
if err != nil {
diff --git a/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go b/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go
index 1527b746e6969..790606c509da5 100644
--- a/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go
+++ b/vendor/github.com/minio/minio-go/v7/api-s3-datatypes.go
@@ -340,6 +340,22 @@ type CompletePart struct {
ChecksumSHA256 string `xml:"ChecksumSHA256,omitempty"`
}
+// Checksum will return the checksum for the given type.
+// Will return the empty string if not set.
+func (c CompletePart) Checksum(t ChecksumType) string {
+ switch {
+ case t.Is(ChecksumCRC32C):
+ return c.ChecksumCRC32C
+ case t.Is(ChecksumCRC32):
+ return c.ChecksumCRC32
+ case t.Is(ChecksumSHA1):
+ return c.ChecksumSHA1
+ case t.Is(ChecksumSHA256):
+ return c.ChecksumSHA256
+ }
+ return ""
+}
+
// completeMultipartUpload container for completing multipart upload.
type completeMultipartUpload struct {
XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ CompleteMultipartUpload" json:"-"`
diff --git a/vendor/github.com/minio/minio-go/v7/api.go b/vendor/github.com/minio/minio-go/v7/api.go
index 13c493d0fb7b4..be28e3fdfc41c 100644
--- a/vendor/github.com/minio/minio-go/v7/api.go
+++ b/vendor/github.com/minio/minio-go/v7/api.go
@@ -23,7 +23,6 @@ import (
"encoding/base64"
"errors"
"fmt"
- "hash/crc32"
"io"
"math/rand"
"net"
@@ -129,7 +128,7 @@ type Options struct {
// Global constants.
const (
libraryName = "minio-go"
- libraryVersion = "v7.0.75"
+ libraryVersion = "v7.0.76"
)
// User Agent should always following the below style.
@@ -471,7 +470,7 @@ type requestMetadata struct {
contentMD5Base64 string // carries base64 encoded md5sum
contentSHA256Hex string // carries hex encoded sha256sum
streamSha256 bool
- addCrc bool
+ addCrc *ChecksumType
trailer http.Header // (http.Request).Trailer. Requires v4 signature.
}
@@ -616,16 +615,16 @@ func (c *Client) executeMethod(ctx context.Context, method string, metadata requ
}
}
- if metadata.addCrc && metadata.contentLength > 0 {
+ if metadata.addCrc != nil && metadata.contentLength > 0 {
if metadata.trailer == nil {
metadata.trailer = make(http.Header, 1)
}
- crc := crc32.New(crc32.MakeTable(crc32.Castagnoli))
+ crc := metadata.addCrc.Hasher()
metadata.contentBody = newHashReaderWrapper(metadata.contentBody, crc, func(hash []byte) {
// Update trailer when done.
- metadata.trailer.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(hash))
+ metadata.trailer.Set(metadata.addCrc.Key(), base64.StdEncoding.EncodeToString(hash))
})
- metadata.trailer.Set("x-amz-checksum-crc32c", base64.StdEncoding.EncodeToString(crc.Sum(nil)))
+ metadata.trailer.Set(metadata.addCrc.Key(), base64.StdEncoding.EncodeToString(crc.Sum(nil)))
}
// Create cancel context to control 'newRetryTimer' go routine.
diff --git a/vendor/github.com/minio/minio-go/v7/checksum.go b/vendor/github.com/minio/minio-go/v7/checksum.go
index a1f6f434f68e1..7eb1bf25abfd6 100644
--- a/vendor/github.com/minio/minio-go/v7/checksum.go
+++ b/vendor/github.com/minio/minio-go/v7/checksum.go
@@ -25,6 +25,7 @@ import (
"hash/crc32"
"io"
"math/bits"
+ "net/http"
)
// ChecksumType contains information about the checksum type.
@@ -78,6 +79,11 @@ func (c ChecksumType) Key() string {
return ""
}
+// KeyCapitalized returns the capitalized key as used in HTTP headers.
+func (c ChecksumType) KeyCapitalized() string {
+ return http.CanonicalHeaderKey(c.Key())
+}
+
// RawByteLen returns the size of the un-encoded checksum.
func (c ChecksumType) RawByteLen() int {
switch c & checksumMask {
@@ -112,6 +118,13 @@ func (c ChecksumType) IsSet() bool {
return bits.OnesCount32(uint32(c)) == 1
}
+// SetDefault will set the checksum if not already set.
+func (c *ChecksumType) SetDefault(t ChecksumType) {
+ if !c.IsSet() {
+ *c = t
+ }
+}
+
// String returns the type as a string.
// CRC32, CRC32C, SHA1, and SHA256 for valid values.
// Empty string for unset and "" if not valid.
diff --git a/vendor/github.com/minio/minio-go/v7/functional_tests.go b/vendor/github.com/minio/minio-go/v7/functional_tests.go
index 871034bc7ec3a..8a908e3fd8fa1 100644
--- a/vendor/github.com/minio/minio-go/v7/functional_tests.go
+++ b/vendor/github.com/minio/minio-go/v7/functional_tests.go
@@ -24,7 +24,6 @@ import (
"archive/zip"
"bytes"
"context"
- "crypto/sha1"
"crypto/sha256"
"encoding/base64"
"errors"
@@ -166,7 +165,7 @@ func logError(testName, function string, args map[string]interface{}, startTime
}
}
-// log failed test runs
+// Log failed test runs, do not call this directly, use logError instead, as that correctly stops the test run
func logFailure(testName, function string, args map[string]interface{}, startTime time.Time, alert, message string, err error) {
l := baseLogger(testName, function, args, startTime).With(
"status", "FAIL",
@@ -2199,22 +2198,15 @@ func testPutObjectWithChecksums() {
defer cleanupBucket(bucketName, c)
tests := []struct {
- header string
- hasher hash.Hash
-
- // Checksum values
- ChecksumCRC32 string
- ChecksumCRC32C string
- ChecksumSHA1 string
- ChecksumSHA256 string
+ cs minio.ChecksumType
}{
- {header: "x-amz-checksum-crc32", hasher: crc32.NewIEEE()},
- {header: "x-amz-checksum-crc32c", hasher: crc32.New(crc32.MakeTable(crc32.Castagnoli))},
- {header: "x-amz-checksum-sha1", hasher: sha1.New()},
- {header: "x-amz-checksum-sha256", hasher: sha256.New()},
+ {cs: minio.ChecksumCRC32C},
+ {cs: minio.ChecksumCRC32},
+ {cs: minio.ChecksumSHA1},
+ {cs: minio.ChecksumSHA256},
}
- for i, test := range tests {
+ for _, test := range tests {
bufSize := dataFileMap["datafile-10-kB"]
// Save the data
@@ -2235,29 +2227,27 @@ func testPutObjectWithChecksums() {
logError(testName, function, args, startTime, "", "Read failed", err)
return
}
- h := test.hasher
+ h := test.cs.Hasher()
h.Reset()
- // Wrong CRC.
- meta[test.header] = base64.StdEncoding.EncodeToString(h.Sum(nil))
+
+ // Test with Wrong CRC.
+ meta[test.cs.Key()] = base64.StdEncoding.EncodeToString(h.Sum(nil))
args["metadata"] = meta
args["range"] = "false"
+ args["checksum"] = test.cs.String()
resp, err := c.PutObject(context.Background(), bucketName, objectName, bytes.NewReader(b), int64(bufSize), minio.PutObjectOptions{
DisableMultipart: true,
UserMetadata: meta,
})
if err == nil {
- if i == 0 && resp.ChecksumCRC32 == "" {
- logIgnored(testName, function, args, startTime, "Checksums does not appear to be supported by backend")
- return
- }
- logError(testName, function, args, startTime, "", "PutObject failed", err)
+ logError(testName, function, args, startTime, "", "PutObject did not fail on wrong CRC", err)
return
}
// Set correct CRC.
h.Write(b)
- meta[test.header] = base64.StdEncoding.EncodeToString(h.Sum(nil))
+ meta[test.cs.Key()] = base64.StdEncoding.EncodeToString(h.Sum(nil))
reader.Close()
resp, err = c.PutObject(context.Background(), bucketName, objectName, bytes.NewReader(b), int64(bufSize), minio.PutObjectOptions{
@@ -2419,17 +2409,12 @@ func testPutMultipartObjectWithChecksums() {
}
defer cleanupBucket(bucketName, c)
tests := []struct {
- header string
- hasher hash.Hash
-
- // Checksum values
- ChecksumCRC32 string
- ChecksumCRC32C string
- ChecksumSHA1 string
- ChecksumSHA256 string
+ cs minio.ChecksumType
}{
- // Currently there is no way to override the checksum type.
- {header: "x-amz-checksum-crc32c", hasher: crc32.New(crc32.MakeTable(crc32.Castagnoli)), ChecksumCRC32C: "OpEx0Q==-13"},
+ {cs: minio.ChecksumCRC32C},
+ {cs: minio.ChecksumCRC32},
+ {cs: minio.ChecksumSHA1},
+ {cs: minio.ChecksumSHA256},
}
for _, test := range tests {
@@ -2438,11 +2423,12 @@ func testPutMultipartObjectWithChecksums() {
// Save the data
objectName := randString(60, rand.NewSource(time.Now().UnixNano()), "")
args["objectName"] = objectName
+ args["checksum"] = test.cs.String()
cmpChecksum := func(got, want string) {
if want != got {
- // logError(testName, function, args, startTime, "", "checksum mismatch", fmt.Errorf("want %s, got %s", want, got))
- fmt.Printf("want %s, got %s\n", want, got)
+ logError(testName, function, args, startTime, "", "checksum mismatch", fmt.Errorf("want %s, got %s", want, got))
+ //fmt.Printf("want %s, got %s\n", want, got)
return
}
}
@@ -2455,9 +2441,9 @@ func testPutMultipartObjectWithChecksums() {
return
}
reader.Close()
- h := test.hasher
+ h := test.cs.Hasher()
h.Reset()
- test.ChecksumCRC32C = hashMultiPart(b, partSize, test.hasher)
+ want := hashMultiPart(b, partSize, test.cs.Hasher())
// Set correct CRC.
@@ -2466,15 +2452,40 @@ func testPutMultipartObjectWithChecksums() {
DisableMultipart: false,
UserMetadata: nil,
PartSize: partSize,
+ AutoChecksum: test.cs,
})
if err != nil {
logError(testName, function, args, startTime, "", "PutObject failed", err)
return
}
- cmpChecksum(resp.ChecksumSHA256, test.ChecksumSHA256)
- cmpChecksum(resp.ChecksumSHA1, test.ChecksumSHA1)
- cmpChecksum(resp.ChecksumCRC32, test.ChecksumCRC32)
- cmpChecksum(resp.ChecksumCRC32C, test.ChecksumCRC32C)
+
+ switch test.cs {
+ case minio.ChecksumCRC32C:
+ cmpChecksum(resp.ChecksumCRC32C, want)
+ case minio.ChecksumCRC32:
+ cmpChecksum(resp.ChecksumCRC32, want)
+ case minio.ChecksumSHA1:
+ cmpChecksum(resp.ChecksumSHA1, want)
+ case minio.ChecksumSHA256:
+ cmpChecksum(resp.ChecksumSHA256, want)
+ }
+
+ s, err := c.GetObjectAttributes(context.Background(), bucketName, objectName, minio.ObjectAttributesOptions{})
+ if err != nil {
+ logError(testName, function, args, startTime, "", "GetObjectAttributes failed", err)
+ return
+ }
+ want = want[:strings.IndexByte(want, '-')]
+ switch test.cs {
+ case minio.ChecksumCRC32C:
+ cmpChecksum(s.Checksum.ChecksumCRC32C, want)
+ case minio.ChecksumCRC32:
+ cmpChecksum(s.Checksum.ChecksumCRC32, want)
+ case minio.ChecksumSHA1:
+ cmpChecksum(s.Checksum.ChecksumSHA1, want)
+ case minio.ChecksumSHA256:
+ cmpChecksum(s.Checksum.ChecksumSHA256, want)
+ }
// Read the data back
gopts := minio.GetObjectOptions{Checksum: true}
@@ -2496,18 +2507,17 @@ func testPutMultipartObjectWithChecksums() {
// Test part 2 checksum...
h.Reset()
h.Write(b[partSize : 2*partSize])
- got := base64.StdEncoding.EncodeToString(h.Sum(nil))
- if test.ChecksumSHA256 != "" {
- cmpChecksum(st.ChecksumSHA256, got)
- }
- if test.ChecksumSHA1 != "" {
- cmpChecksum(st.ChecksumSHA1, got)
- }
- if test.ChecksumCRC32 != "" {
- cmpChecksum(st.ChecksumCRC32, got)
- }
- if test.ChecksumCRC32C != "" {
- cmpChecksum(st.ChecksumCRC32C, got)
+ want = base64.StdEncoding.EncodeToString(h.Sum(nil))
+
+ switch test.cs {
+ case minio.ChecksumCRC32C:
+ cmpChecksum(st.ChecksumCRC32C, want)
+ case minio.ChecksumCRC32:
+ cmpChecksum(st.ChecksumCRC32, want)
+ case minio.ChecksumSHA1:
+ cmpChecksum(st.ChecksumSHA1, want)
+ case minio.ChecksumSHA256:
+ cmpChecksum(st.ChecksumSHA256, want)
}
delete(args, "metadata")
@@ -13500,7 +13510,7 @@ func testCors() {
Secure: mustParseBool(os.Getenv(enableHTTPS)),
})
if err != nil {
- logFailure(testName, function, args, startTime, "", "MinIO client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
@@ -13516,7 +13526,7 @@ func testCors() {
bucketName = randString(60, rand.NewSource(time.Now().UnixNano()), "minio-go-test-")
err = c.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: "us-east-1"})
if err != nil {
- logFailure(testName, function, args, startTime, "", "MakeBucket failed", err)
+ logError(testName, function, args, startTime, "", "MakeBucket failed", err)
return
}
}
@@ -13526,7 +13536,7 @@ func testCors() {
publicPolicy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["*"]},"Action":["s3:*"],"Resource":["arn:aws:s3:::` + bucketName + `", "arn:aws:s3:::` + bucketName + `/*"]}]}`
err = c.SetBucketPolicy(ctx, bucketName, publicPolicy)
if err != nil {
- logFailure(testName, function, args, startTime, "", "SetBucketPolicy failed", err)
+ logError(testName, function, args, startTime, "", "SetBucketPolicy failed", err)
return
}
@@ -13540,7 +13550,7 @@ func testCors() {
_, err = c.PutObject(ctx, bucketName, objectName, reader, int64(bufSize), minio.PutObjectOptions{ContentType: "binary/octet-stream"})
if err != nil {
- logFailure(testName, function, args, startTime, "", "PutObject call failed", err)
+ logError(testName, function, args, startTime, "", "PutObject call failed", err)
return
}
bucketURL := c.EndpointURL().String() + "/" + bucketName + "/"
@@ -13548,7 +13558,7 @@ func testCors() {
transport, err := minio.DefaultTransport(mustParseBool(os.Getenv(enableHTTPS)))
if err != nil {
- logFailure(testName, function, args, startTime, "", "DefaultTransport failed", err)
+ logError(testName, function, args, startTime, "", "DefaultTransport failed", err)
return
}
httpClient := &http.Client{
@@ -14156,7 +14166,7 @@ func testCors() {
}
err = c.SetBucketCors(ctx, bucketName, corsConfig)
if err != nil {
- logFailure(testName, function, args, startTime, "", "SetBucketCors failed to apply", err)
+ logError(testName, function, args, startTime, "", "SetBucketCors failed to apply", err)
return
}
}
@@ -14165,7 +14175,7 @@ func testCors() {
if test.method != "" && test.url != "" {
req, err := http.NewRequestWithContext(ctx, test.method, test.url, nil)
if err != nil {
- logFailure(testName, function, args, startTime, "", "HTTP request creation failed", err)
+ logError(testName, function, args, startTime, "", "HTTP request creation failed", err)
return
}
req.Header.Set("User-Agent", "MinIO-go-FunctionalTest/"+appVersion)
@@ -14175,7 +14185,7 @@ func testCors() {
}
resp, err := httpClient.Do(req)
if err != nil {
- logFailure(testName, function, args, startTime, "", "HTTP request failed", err)
+ logError(testName, function, args, startTime, "", "HTTP request failed", err)
return
}
defer resp.Body.Close()
@@ -14183,7 +14193,7 @@ func testCors() {
// Check returned status code
if resp.StatusCode != test.wantStatus {
errStr := fmt.Sprintf(" incorrect status code in response, want: %d, got: %d", test.wantStatus, resp.StatusCode)
- logFailure(testName, function, args, startTime, "", errStr, nil)
+ logError(testName, function, args, startTime, "", errStr, nil)
return
}
@@ -14191,12 +14201,12 @@ func testCors() {
if test.wantBodyContains != "" {
body, err := io.ReadAll(resp.Body)
if err != nil {
- logFailure(testName, function, args, startTime, "", "Failed to read response body", err)
+ logError(testName, function, args, startTime, "", "Failed to read response body", err)
return
}
if !strings.Contains(string(body), test.wantBodyContains) {
errStr := fmt.Sprintf(" incorrect body in response, want: %s, in got: %s", test.wantBodyContains, string(body))
- logFailure(testName, function, args, startTime, "", errStr, nil)
+ logError(testName, function, args, startTime, "", errStr, nil)
return
}
}
@@ -14213,7 +14223,7 @@ func testCors() {
gotVal = strings.ReplaceAll(gotVal, " ", "")
if gotVal != v {
errStr := fmt.Sprintf(" incorrect header in response, want: %s: '%s', got: '%s'", k, v, gotVal)
- logFailure(testName, function, args, startTime, "", errStr, nil)
+ logError(testName, function, args, startTime, "", errStr, nil)
return
}
}
@@ -14241,7 +14251,7 @@ func testCorsSetGetDelete() {
Secure: mustParseBool(os.Getenv(enableHTTPS)),
})
if err != nil {
- logFailure(testName, function, args, startTime, "", "MinIO client object creation failed", err)
+ logError(testName, function, args, startTime, "", "MinIO client object creation failed", err)
return
}
@@ -14258,7 +14268,7 @@ func testCorsSetGetDelete() {
// Make a new bucket.
err = c.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: "us-east-1"})
if err != nil {
- logFailure(testName, function, args, startTime, "", "MakeBucket failed", err)
+ logError(testName, function, args, startTime, "", "MakeBucket failed", err)
return
}
defer cleanupBucket(bucketName, c)
@@ -14284,37 +14294,37 @@ func testCorsSetGetDelete() {
corsConfig := cors.NewConfig(corsRules)
err = c.SetBucketCors(ctx, bucketName, corsConfig)
if err != nil {
- logFailure(testName, function, args, startTime, "", "SetBucketCors failed to apply", err)
+ logError(testName, function, args, startTime, "", "SetBucketCors failed to apply", err)
return
}
// Get the rules and check they match what we set
gotCorsConfig, err := c.GetBucketCors(ctx, bucketName)
if err != nil {
- logFailure(testName, function, args, startTime, "", "GetBucketCors failed", err)
+ logError(testName, function, args, startTime, "", "GetBucketCors failed", err)
return
}
if !reflect.DeepEqual(corsConfig, gotCorsConfig) {
msg := fmt.Sprintf("GetBucketCors returned unexpected rules, expected: %+v, got: %+v", corsConfig, gotCorsConfig)
- logFailure(testName, function, args, startTime, "", msg, nil)
+ logError(testName, function, args, startTime, "", msg, nil)
return
}
// Delete the rules
err = c.SetBucketCors(ctx, bucketName, nil)
if err != nil {
- logFailure(testName, function, args, startTime, "", "SetBucketCors failed to delete", err)
+ logError(testName, function, args, startTime, "", "SetBucketCors failed to delete", err)
return
}
// Get the rules and check they are now empty
gotCorsConfig, err = c.GetBucketCors(ctx, bucketName)
if err != nil {
- logFailure(testName, function, args, startTime, "", "GetBucketCors failed", err)
+ logError(testName, function, args, startTime, "", "GetBucketCors failed", err)
return
}
if gotCorsConfig != nil {
- logFailure(testName, function, args, startTime, "", "GetBucketCors returned unexpected rules", nil)
+ logError(testName, function, args, startTime, "", "GetBucketCors returned unexpected rules", nil)
return
}
diff --git a/vendor/github.com/pierrec/lz4/v4/README.md b/vendor/github.com/pierrec/lz4/v4/README.md
index 4629c9d0e03d6..dee77545b0c21 100644
--- a/vendor/github.com/pierrec/lz4/v4/README.md
+++ b/vendor/github.com/pierrec/lz4/v4/README.md
@@ -21,7 +21,7 @@ go get github.com/pierrec/lz4/v4
There is a command line interface tool to compress and decompress LZ4 files.
```
-go install github.com/pierrec/lz4/v4/cmd/lz4c
+go install github.com/pierrec/lz4/v4/cmd/lz4c@latest
```
Usage
diff --git a/vendor/github.com/pierrec/lz4/v4/compressing_reader.go b/vendor/github.com/pierrec/lz4/v4/compressing_reader.go
new file mode 100644
index 0000000000000..8df0dc76d0072
--- /dev/null
+++ b/vendor/github.com/pierrec/lz4/v4/compressing_reader.go
@@ -0,0 +1,222 @@
+package lz4
+
+import (
+ "errors"
+ "io"
+
+ "github.com/pierrec/lz4/v4/internal/lz4block"
+ "github.com/pierrec/lz4/v4/internal/lz4errors"
+ "github.com/pierrec/lz4/v4/internal/lz4stream"
+)
+
+type crState int
+
+const (
+ crStateInitial crState = iota
+ crStateReading
+ crStateFlushing
+ crStateDone
+)
+
+type CompressingReader struct {
+ state crState
+ src io.ReadCloser // source reader
+ level lz4block.CompressionLevel // how hard to try
+ frame *lz4stream.Frame // frame being built
+ in []byte
+ out ovWriter
+ handler func(int)
+}
+
+// NewCompressingReader creates a reader which reads compressed data from
+// raw stream. This makes it a logical opposite of a normal lz4.Reader.
+// We require an io.ReadCloser as an underlying source for compatibility
+// with Go's http.Request.
+func NewCompressingReader(src io.ReadCloser) *CompressingReader {
+ zrd := &CompressingReader {
+ frame: lz4stream.NewFrame(),
+ }
+
+ _ = zrd.Apply(DefaultBlockSizeOption, DefaultChecksumOption, defaultOnBlockDone)
+ zrd.Reset(src)
+
+ return zrd
+}
+
+// Source exposes the underlying source stream for introspection and control.
+func (zrd *CompressingReader) Source() io.ReadCloser {
+ return zrd.src
+}
+
+// Close simply invokes the underlying stream Close method. This method is
+// provided for the benefit of Go http client/server, which relies on Close
+// for goroutine termination.
+func (zrd *CompressingReader) Close() error {
+ return zrd.src.Close()
+}
+
+// Apply applies useful options to the lz4 encoder.
+func (zrd *CompressingReader) Apply(options ...Option) (err error) {
+ if zrd.state != crStateInitial {
+ return lz4errors.ErrOptionClosedOrError
+ }
+
+ zrd.Reset(zrd.src)
+
+ for _, o := range options {
+ if err = o(zrd); err != nil {
+ return
+ }
+ }
+ return
+}
+
+func (*CompressingReader) private() {}
+
+func (zrd *CompressingReader) init() error {
+ zrd.frame.InitW(&zrd.out, 1, false)
+ size := zrd.frame.Descriptor.Flags.BlockSizeIndex()
+ zrd.in = size.Get()
+ return zrd.frame.Descriptor.Write(zrd.frame, &zrd.out)
+}
+
+// Read allows reading of lz4 compressed data
+func (zrd *CompressingReader) Read(p []byte) (n int, err error) {
+ defer func() {
+ if err != nil {
+ zrd.state = crStateDone
+ }
+ }()
+
+ if !zrd.out.reset(p) {
+ return len(p), nil
+ }
+
+ switch zrd.state {
+ case crStateInitial:
+ err = zrd.init()
+ if err != nil {
+ return
+ }
+ zrd.state = crStateReading
+ case crStateDone:
+ return 0, errors.New("This reader is done")
+ case crStateFlushing:
+ if zrd.out.dataPos > 0 {
+ n = zrd.out.dataPos
+ zrd.out.data = nil
+ zrd.out.dataPos = 0
+ return
+ } else {
+ zrd.state = crStateDone
+ return 0, io.EOF
+ }
+ }
+
+ for zrd.state == crStateReading {
+ block := zrd.frame.Blocks.Block
+
+ var rCount int
+ rCount, err = io.ReadFull(zrd.src, zrd.in)
+ switch err {
+ case nil:
+ err = block.Compress(
+ zrd.frame, zrd.in[ : rCount], zrd.level,
+ ).Write(zrd.frame, &zrd.out)
+ zrd.handler(len(block.Data))
+ if err != nil {
+ return
+ }
+
+ if zrd.out.dataPos == len(zrd.out.data) {
+ n = zrd.out.dataPos
+ zrd.out.dataPos = 0
+ zrd.out.data = nil
+ return
+ }
+ case io.EOF, io.ErrUnexpectedEOF: // read may be partial
+ if rCount > 0 {
+ err = block.Compress(
+ zrd.frame, zrd.in[ : rCount], zrd.level,
+ ).Write(zrd.frame, &zrd.out)
+ zrd.handler(len(block.Data))
+ if err != nil {
+ return
+ }
+ }
+
+ err = zrd.frame.CloseW(&zrd.out, 1)
+ if err != nil {
+ return
+ }
+ zrd.state = crStateFlushing
+
+ n = zrd.out.dataPos
+ zrd.out.dataPos = 0
+ zrd.out.data = nil
+ return
+ default:
+ return
+ }
+ }
+
+ err = lz4errors.ErrInternalUnhandledState
+ return
+}
+
+// Reset makes the stream usable again; mostly handy to reuse lz4 encoder
+// instances.
+func (zrd *CompressingReader) Reset(src io.ReadCloser) {
+ zrd.frame.Reset(1)
+ zrd.state = crStateInitial
+ zrd.src = src
+ zrd.out.clear()
+}
+
+type ovWriter struct {
+ data []byte
+ ov []byte
+ dataPos int
+ ovPos int
+}
+
+func (wr *ovWriter) Write(p []byte) (n int, err error) {
+ count := copy(wr.data[wr.dataPos : ], p)
+ wr.dataPos += count
+
+ if count < len(p) {
+ wr.ov = append(wr.ov, p[count : ]...)
+ }
+
+ return len(p), nil
+}
+
+func (wr *ovWriter) reset(out []byte) bool {
+ ovRem := len(wr.ov) - wr.ovPos
+
+ if ovRem >= len(out) {
+ wr.ovPos += copy(out, wr.ov[wr.ovPos : ])
+ return false
+ }
+
+ if ovRem > 0 {
+ copy(out, wr.ov[wr.ovPos : ])
+ wr.ov = wr.ov[ : 0]
+ wr.ovPos = 0
+ wr.dataPos = ovRem
+ } else if wr.ovPos > 0 {
+ wr.ov = wr.ov[ : 0]
+ wr.ovPos = 0
+ wr.dataPos = 0
+ }
+
+ wr.data = out
+ return true
+}
+
+func (wr *ovWriter) clear() {
+ wr.data = nil
+ wr.dataPos = 0
+ wr.ov = wr.ov[ : 0]
+ wr.ovPos = 0
+}
diff --git a/vendor/github.com/pierrec/lz4/v4/internal/lz4block/blocks.go b/vendor/github.com/pierrec/lz4/v4/internal/lz4block/blocks.go
index a1bfa99e4b451..138083d9479c8 100644
--- a/vendor/github.com/pierrec/lz4/v4/internal/lz4block/blocks.go
+++ b/vendor/github.com/pierrec/lz4/v4/internal/lz4block/blocks.go
@@ -8,12 +8,9 @@ const (
Block256Kb
Block1Mb
Block4Mb
+ Block8Mb = 2 * Block4Mb
)
-// In legacy mode all blocks are compressed regardless
-// of the compressed size: use the bound size.
-var Block8Mb = uint32(CompressBlockBound(8 << 20))
-
var (
BlockPool64K = sync.Pool{New: func() interface{} { return make([]byte, Block64Kb) }}
BlockPool256K = sync.Pool{New: func() interface{} { return make([]byte, Block256Kb) }}
diff --git a/vendor/github.com/pierrec/lz4/v4/internal/lz4stream/block.go b/vendor/github.com/pierrec/lz4/v4/internal/lz4stream/block.go
index 459086f09b294..e96465460c5d4 100644
--- a/vendor/github.com/pierrec/lz4/v4/internal/lz4stream/block.go
+++ b/vendor/github.com/pierrec/lz4/v4/internal/lz4stream/block.go
@@ -224,9 +224,7 @@ func (b *FrameDataBlock) Close(f *Frame) {
func (b *FrameDataBlock) Compress(f *Frame, src []byte, level lz4block.CompressionLevel) *FrameDataBlock {
data := b.data
if f.isLegacy() {
- // In legacy mode, the buffer is sized according to CompressBlockBound,
- // but only 8Mb is buffered for compression.
- src = src[:8<<20]
+ data = data[:cap(data)]
} else {
data = data[:len(src)] // trigger the incompressible flag in CompressBlock
}
diff --git a/vendor/github.com/pierrec/lz4/v4/options.go b/vendor/github.com/pierrec/lz4/v4/options.go
index 46a87380313f4..57a44e767dc6e 100644
--- a/vendor/github.com/pierrec/lz4/v4/options.go
+++ b/vendor/github.com/pierrec/lz4/v4/options.go
@@ -57,6 +57,13 @@ func BlockSizeOption(size BlockSize) Option {
}
w.frame.Descriptor.Flags.BlockSizeIndexSet(lz4block.Index(size))
return nil
+ case *CompressingReader:
+ size := uint32(size)
+ if !lz4block.IsValid(size) {
+ return fmt.Errorf("%w: %d", lz4errors.ErrOptionInvalidBlockSize, size)
+ }
+ w.frame.Descriptor.Flags.BlockSizeIndexSet(lz4block.Index(size))
+ return nil
}
return lz4errors.ErrOptionNotApplicable
}
@@ -72,6 +79,9 @@ func BlockChecksumOption(flag bool) Option {
case *Writer:
w.frame.Descriptor.Flags.BlockChecksumSet(flag)
return nil
+ case *CompressingReader:
+ w.frame.Descriptor.Flags.BlockChecksumSet(flag)
+ return nil
}
return lz4errors.ErrOptionNotApplicable
}
@@ -87,6 +97,9 @@ func ChecksumOption(flag bool) Option {
case *Writer:
w.frame.Descriptor.Flags.ContentChecksumSet(flag)
return nil
+ case *CompressingReader:
+ w.frame.Descriptor.Flags.ContentChecksumSet(flag)
+ return nil
}
return lz4errors.ErrOptionNotApplicable
}
@@ -104,6 +117,10 @@ func SizeOption(size uint64) Option {
w.frame.Descriptor.Flags.SizeSet(size > 0)
w.frame.Descriptor.ContentSize = size
return nil
+ case *CompressingReader:
+ w.frame.Descriptor.Flags.SizeSet(size > 0)
+ w.frame.Descriptor.ContentSize = size
+ return nil
}
return lz4errors.ErrOptionNotApplicable
}
@@ -162,6 +179,14 @@ func CompressionLevelOption(level CompressionLevel) Option {
}
w.level = lz4block.CompressionLevel(level)
return nil
+ case *CompressingReader:
+ switch level {
+ case Fast, Level1, Level2, Level3, Level4, Level5, Level6, Level7, Level8, Level9:
+ default:
+ return fmt.Errorf("%w: %d", lz4errors.ErrOptionInvalidCompressionLevel, level)
+ }
+ w.level = lz4block.CompressionLevel(level)
+ return nil
}
return lz4errors.ErrOptionNotApplicable
}
@@ -186,6 +211,9 @@ func OnBlockDoneOption(handler func(size int)) Option {
case *Reader:
rw.handler = handler
return nil
+ case *CompressingReader:
+ rw.handler = handler
+ return nil
}
return lz4errors.ErrOptionNotApplicable
}
diff --git a/vendor/github.com/pierrec/lz4/v4/writer.go b/vendor/github.com/pierrec/lz4/v4/writer.go
index 77699f2b54aa1..4358adee1093f 100644
--- a/vendor/github.com/pierrec/lz4/v4/writer.go
+++ b/vendor/github.com/pierrec/lz4/v4/writer.go
@@ -150,6 +150,10 @@ func (w *Writer) Flush() (err error) {
case writeState:
case errorState:
return w.state.err
+ case newState:
+ if err = w.init(); w.state.next(err) {
+ return
+ }
default:
return nil
}
diff --git a/vendor/github.com/pkg/xattr/.gitignore b/vendor/github.com/pkg/xattr/.gitignore
deleted file mode 100644
index d8b32652e5a92..0000000000000
--- a/vendor/github.com/pkg/xattr/.gitignore
+++ /dev/null
@@ -1,26 +0,0 @@
-# Compiled Object files, Static and Dynamic libs (Shared Objects)
-*.o
-*.a
-*.so
-
-# Folders
-_obj
-_test
-.DS_Store
-
-# Architecture specific extensions/prefixes
-*.[568vq]
-[568vq].out
-
-*.cgo1.go
-*.cgo2.c
-_cgo_defun.c
-_cgo_gotypes.go
-_cgo_export.*
-
-_testmain.go
-
-*.exe
-*.test
-
-*.swp
diff --git a/vendor/github.com/pkg/xattr/LICENSE b/vendor/github.com/pkg/xattr/LICENSE
deleted file mode 100644
index 99d2e9dc8ff27..0000000000000
--- a/vendor/github.com/pkg/xattr/LICENSE
+++ /dev/null
@@ -1,25 +0,0 @@
-Copyright (c) 2012 Dave Cheney. All rights reserved.
-Copyright (c) 2014 Kuba Podgórski. All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
- * Redistributions of source code must retain the above copyright
-notice, this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above
-copyright notice, this list of conditions and the following disclaimer
-in the documentation and/or other materials provided with the
-distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/pkg/xattr/README.md b/vendor/github.com/pkg/xattr/README.md
deleted file mode 100644
index 0662c0208c572..0000000000000
--- a/vendor/github.com/pkg/xattr/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-[![GoDoc](https://godoc.org/github.com/pkg/xattr?status.svg)](http://godoc.org/github.com/pkg/xattr)
-[![Go Report Card](https://goreportcard.com/badge/github.com/pkg/xattr)](https://goreportcard.com/report/github.com/pkg/xattr)
-[![Build Status](https://github.com/pkg/xattr/workflows/build/badge.svg)](https://github.com/pkg/xattr/actions?query=workflow%3Abuild)
-[![Codecov](https://codecov.io/gh/pkg/xattr/branch/master/graph/badge.svg)](https://codecov.io/gh/pkg/xattr)
-
-xattr
-=====
-Extended attribute support for Go (linux + darwin + freebsd + netbsd + solaris).
-
-"Extended attributes are name:value pairs associated permanently with files and directories, similar to the environment strings associated with a process. An attribute may be defined or undefined. If it is defined, its value may be empty or non-empty." [See more...](https://en.wikipedia.org/wiki/Extended_file_attributes)
-
-`SetWithFlags` allows to additionally pass system flags to be forwarded to the underlying calls. FreeBSD and NetBSD do not support this and the parameter will be ignored.
-
-The `L` variants of all functions (`LGet/LSet/...`) are identical to `Get/Set/...` except that they
-do not reference a symlink that appears at the end of a path. See
-[GoDoc](http://godoc.org/github.com/pkg/xattr) for details.
-
-### Example
-```go
- const path = "/tmp/myfile"
- const prefix = "user."
-
- if err := xattr.Set(path, prefix+"test", []byte("test-attr-value")); err != nil {
- log.Fatal(err)
- }
-
- var list []string
- if list, err = xattr.List(path); err != nil {
- log.Fatal(err)
- }
-
- var data []byte
- if data, err = xattr.Get(path, prefix+"test"); err != nil {
- log.Fatal(err)
- }
-
- if err = xattr.Remove(path, prefix+"test"); err != nil {
- log.Fatal(err)
- }
-
- // One can also specify the flags parameter to be passed to the OS.
- if err := xattr.SetWithFlags(path, prefix+"test", []byte("test-attr-value"), xattr.XATTR_CREATE); err != nil {
- log.Fatal(err)
- }
-```
diff --git a/vendor/github.com/pkg/xattr/xattr.go b/vendor/github.com/pkg/xattr/xattr.go
deleted file mode 100644
index e34e274d51373..0000000000000
--- a/vendor/github.com/pkg/xattr/xattr.go
+++ /dev/null
@@ -1,258 +0,0 @@
-/*
-Package xattr provides support for extended attributes on linux, darwin and freebsd.
-Extended attributes are name:value pairs associated permanently with files and directories,
-similar to the environment strings associated with a process.
-An attribute may be defined or undefined. If it is defined, its value may be empty or non-empty.
-More details you can find here: https://en.wikipedia.org/wiki/Extended_file_attributes .
-
-All functions are provided in triples: Get/LGet/FGet, Set/LSet/FSet etc. The "L"
-variant will not follow a symlink at the end of the path, and "F" variant accepts
-a file descriptor instead of a path.
-
-Example for "L" variant, assuming path is "/symlink1/symlink2", where both components are
-symlinks:
-Get will follow "symlink1" and "symlink2" and operate on the target of
-"symlink2". LGet will follow "symlink1" but operate directly on "symlink2".
-*/
-package xattr
-
-import (
- "os"
- "syscall"
-)
-
-// Error records an error and the operation, file path and attribute that caused it.
-type Error struct {
- Op string
- Path string
- Name string
- Err error
-}
-
-func (e *Error) Unwrap() error { return e.Err }
-
-func (e *Error) Error() (errstr string) {
- if e.Op != "" {
- errstr += e.Op
- }
- if e.Path != "" {
- if errstr != "" {
- errstr += " "
- }
- errstr += e.Path
- }
- if e.Name != "" {
- if errstr != "" {
- errstr += " "
- }
- errstr += e.Name
- }
- if e.Err != nil {
- if errstr != "" {
- errstr += ": "
- }
- errstr += e.Err.Error()
- }
- return
-}
-
-// Get retrieves extended attribute data associated with path. It will follow
-// all symlinks along the path.
-func Get(path, name string) ([]byte, error) {
- return get(path, name, func(name string, data []byte) (int, error) {
- return getxattr(path, name, data)
- })
-}
-
-// LGet is like Get but does not follow a symlink at the end of the path.
-func LGet(path, name string) ([]byte, error) {
- return get(path, name, func(name string, data []byte) (int, error) {
- return lgetxattr(path, name, data)
- })
-}
-
-// FGet is like Get but accepts a os.File instead of a file path.
-func FGet(f *os.File, name string) ([]byte, error) {
- return get(f.Name(), name, func(name string, data []byte) (int, error) {
- return fgetxattr(f, name, data)
- })
-}
-
-type getxattrFunc func(name string, data []byte) (int, error)
-
-// get contains the buffer allocation logic used by both Get and LGet.
-func get(path string, name string, getxattrFunc getxattrFunc) ([]byte, error) {
- const (
- // Start with a 1 KB buffer for the xattr value
- initialBufSize = 1024
-
- // The theoretical maximum xattr value size on MacOS is 64 MB. On Linux it's
- // much smaller: documented at 64 KB. However, at least on TrueNAS SCALE, a
- // Debian-based Linux distro, it can be larger.
- maxBufSize = 64 * 1024 * 1024
-
- // Function name as reported in error messages
- myname = "xattr.get"
- )
-
- size := initialBufSize
- for {
- data := make([]byte, size)
- read, err := getxattrFunc(name, data)
-
- // If the buffer was too small to fit the value, Linux and MacOS react
- // differently:
- // Linux: returns an ERANGE error and "-1" bytes. However, the TrueNAS
- // SCALE distro sometimes returns E2BIG.
- // MacOS: truncates the value and returns "size" bytes. If the value
- // happens to be exactly as big as the buffer, we cannot know if it was
- // truncated, and we retry with a bigger buffer. Contrary to documentation,
- // MacOS never seems to return ERANGE!
- // To keep the code simple, we always check both conditions, and sometimes
- // double the buffer size without it being strictly necessary.
- if err == syscall.ERANGE || err == syscall.E2BIG || read == size {
- // The buffer was too small. Try again.
- size <<= 1
- if size >= maxBufSize {
- return nil, &Error{myname, path, name, syscall.EOVERFLOW}
- }
- continue
- }
- if err != nil {
- return nil, &Error{myname, path, name, err}
- }
- return data[:read], nil
- }
-}
-
-// Set associates name and data together as an attribute of path.
-func Set(path, name string, data []byte) error {
- if err := setxattr(path, name, data, 0); err != nil {
- return &Error{"xattr.Set", path, name, err}
- }
- return nil
-}
-
-// LSet is like Set but does not follow a symlink at
-// the end of the path.
-func LSet(path, name string, data []byte) error {
- if err := lsetxattr(path, name, data, 0); err != nil {
- return &Error{"xattr.LSet", path, name, err}
- }
- return nil
-}
-
-// FSet is like Set but accepts a os.File instead of a file path.
-func FSet(f *os.File, name string, data []byte) error {
- if err := fsetxattr(f, name, data, 0); err != nil {
- return &Error{"xattr.FSet", f.Name(), name, err}
- }
- return nil
-}
-
-// SetWithFlags associates name and data together as an attribute of path.
-// Forwards the flags parameter to the syscall layer.
-func SetWithFlags(path, name string, data []byte, flags int) error {
- if err := setxattr(path, name, data, flags); err != nil {
- return &Error{"xattr.SetWithFlags", path, name, err}
- }
- return nil
-}
-
-// LSetWithFlags is like SetWithFlags but does not follow a symlink at
-// the end of the path.
-func LSetWithFlags(path, name string, data []byte, flags int) error {
- if err := lsetxattr(path, name, data, flags); err != nil {
- return &Error{"xattr.LSetWithFlags", path, name, err}
- }
- return nil
-}
-
-// FSetWithFlags is like SetWithFlags but accepts a os.File instead of a file path.
-func FSetWithFlags(f *os.File, name string, data []byte, flags int) error {
- if err := fsetxattr(f, name, data, flags); err != nil {
- return &Error{"xattr.FSetWithFlags", f.Name(), name, err}
- }
- return nil
-}
-
-// Remove removes the attribute associated with the given path.
-func Remove(path, name string) error {
- if err := removexattr(path, name); err != nil {
- return &Error{"xattr.Remove", path, name, err}
- }
- return nil
-}
-
-// LRemove is like Remove but does not follow a symlink at the end of the
-// path.
-func LRemove(path, name string) error {
- if err := lremovexattr(path, name); err != nil {
- return &Error{"xattr.LRemove", path, name, err}
- }
- return nil
-}
-
-// FRemove is like Remove but accepts a os.File instead of a file path.
-func FRemove(f *os.File, name string) error {
- if err := fremovexattr(f, name); err != nil {
- return &Error{"xattr.FRemove", f.Name(), name, err}
- }
- return nil
-}
-
-// List retrieves a list of names of extended attributes associated
-// with the given path in the file system.
-func List(path string) ([]string, error) {
- return list(path, func(data []byte) (int, error) {
- return listxattr(path, data)
- })
-}
-
-// LList is like List but does not follow a symlink at the end of the
-// path.
-func LList(path string) ([]string, error) {
- return list(path, func(data []byte) (int, error) {
- return llistxattr(path, data)
- })
-}
-
-// FList is like List but accepts a os.File instead of a file path.
-func FList(f *os.File) ([]string, error) {
- return list(f.Name(), func(data []byte) (int, error) {
- return flistxattr(f, data)
- })
-}
-
-type listxattrFunc func(data []byte) (int, error)
-
-// list contains the buffer allocation logic used by both List and LList.
-func list(path string, listxattrFunc listxattrFunc) ([]string, error) {
- myname := "xattr.list"
- // find size.
- size, err := listxattrFunc(nil)
- if err != nil {
- return nil, &Error{myname, path, "", err}
- }
- if size > 0 {
- // `size + 1` because of ERANGE error when reading
- // from a SMB1 mount point (https://github.com/pkg/xattr/issues/16).
- buf := make([]byte, size+1)
- // Read into buffer of that size.
- read, err := listxattrFunc(buf)
- if err != nil {
- return nil, &Error{myname, path, "", err}
- }
- return stringsFromByteSlice(buf[:read]), nil
- }
- return []string{}, nil
-}
-
-// bytePtrFromSlice returns a pointer to array of bytes and a size.
-func bytePtrFromSlice(data []byte) (ptr *byte, size int) {
- size = len(data)
- if size > 0 {
- ptr = &data[0]
- }
- return
-}
diff --git a/vendor/github.com/pkg/xattr/xattr_bsd.go b/vendor/github.com/pkg/xattr/xattr_bsd.go
deleted file mode 100644
index f4a3f95390490..0000000000000
--- a/vendor/github.com/pkg/xattr/xattr_bsd.go
+++ /dev/null
@@ -1,201 +0,0 @@
-//go:build freebsd || netbsd
-// +build freebsd netbsd
-
-package xattr
-
-import (
- "os"
- "syscall"
- "unsafe"
-)
-
-const (
- // XATTR_SUPPORTED will be true if the current platform is supported
- XATTR_SUPPORTED = true
-
- EXTATTR_NAMESPACE_USER = 1
-
- // ENOATTR is not exported by the syscall package on Linux, because it is
- // an alias for ENODATA. We export it here so it is available on all
- // our supported platforms.
- ENOATTR = syscall.ENOATTR
-)
-
-func getxattr(path string, name string, data []byte) (int, error) {
- return sysGet(syscall.SYS_EXTATTR_GET_FILE, path, name, data)
-}
-
-func lgetxattr(path string, name string, data []byte) (int, error) {
- return sysGet(syscall.SYS_EXTATTR_GET_LINK, path, name, data)
-}
-
-func fgetxattr(f *os.File, name string, data []byte) (int, error) {
- return getxattr(f.Name(), name, data)
-}
-
-// sysGet is called by getxattr and lgetxattr with the appropriate syscall
-// number. This works because syscalls have the same signature and return
-// values.
-func sysGet(syscallNum uintptr, path string, name string, data []byte) (int, error) {
- ptr, nbytes := bytePtrFromSlice(data)
- /*
- ssize_t extattr_get_file(
- const char *path,
- int attrnamespace,
- const char *attrname,
- void *data,
- size_t nbytes);
-
- ssize_t extattr_get_link(
- const char *path,
- int attrnamespace,
- const char *attrname,
- void *data,
- size_t nbytes);
- */
- r0, _, err := syscall.Syscall6(syscallNum, uintptr(unsafe.Pointer(syscall.StringBytePtr(path))),
- EXTATTR_NAMESPACE_USER, uintptr(unsafe.Pointer(syscall.StringBytePtr(name))),
- uintptr(unsafe.Pointer(ptr)), uintptr(nbytes), 0)
- if err != syscall.Errno(0) {
- return int(r0), err
- }
- return int(r0), nil
-}
-
-func setxattr(path string, name string, data []byte, flags int) error {
- return sysSet(syscall.SYS_EXTATTR_SET_FILE, path, name, data)
-}
-
-func lsetxattr(path string, name string, data []byte, flags int) error {
- return sysSet(syscall.SYS_EXTATTR_SET_LINK, path, name, data)
-}
-
-func fsetxattr(f *os.File, name string, data []byte, flags int) error {
- return setxattr(f.Name(), name, data, flags)
-}
-
-// sysSet is called by setxattr and lsetxattr with the appropriate syscall
-// number. This works because syscalls have the same signature and return
-// values.
-func sysSet(syscallNum uintptr, path string, name string, data []byte) error {
- ptr, nbytes := bytePtrFromSlice(data)
- /*
- ssize_t extattr_set_file(
- const char *path,
- int attrnamespace,
- const char *attrname,
- const void *data,
- size_t nbytes
- );
-
- ssize_t extattr_set_link(
- const char *path,
- int attrnamespace,
- const char *attrname,
- const void *data,
- size_t nbytes
- );
- */
- r0, _, err := syscall.Syscall6(syscallNum, uintptr(unsafe.Pointer(syscall.StringBytePtr(path))),
- EXTATTR_NAMESPACE_USER, uintptr(unsafe.Pointer(syscall.StringBytePtr(name))),
- uintptr(unsafe.Pointer(ptr)), uintptr(nbytes), 0)
- if err != syscall.Errno(0) {
- return err
- }
- if int(r0) != nbytes {
- return syscall.E2BIG
- }
- return nil
-}
-
-func removexattr(path string, name string) error {
- return sysRemove(syscall.SYS_EXTATTR_DELETE_FILE, path, name)
-}
-
-func lremovexattr(path string, name string) error {
- return sysRemove(syscall.SYS_EXTATTR_DELETE_LINK, path, name)
-}
-
-func fremovexattr(f *os.File, name string) error {
- return removexattr(f.Name(), name)
-}
-
-// sysSet is called by removexattr and lremovexattr with the appropriate syscall
-// number. This works because syscalls have the same signature and return
-// values.
-func sysRemove(syscallNum uintptr, path string, name string) error {
- /*
- int extattr_delete_file(
- const char *path,
- int attrnamespace,
- const char *attrname
- );
-
- int extattr_delete_link(
- const char *path,
- int attrnamespace,
- const char *attrname
- );
- */
- _, _, err := syscall.Syscall(syscallNum, uintptr(unsafe.Pointer(syscall.StringBytePtr(path))),
- EXTATTR_NAMESPACE_USER, uintptr(unsafe.Pointer(syscall.StringBytePtr(name))),
- )
- if err != syscall.Errno(0) {
- return err
- }
- return nil
-}
-
-func listxattr(path string, data []byte) (int, error) {
- return sysList(syscall.SYS_EXTATTR_LIST_FILE, path, data)
-}
-
-func llistxattr(path string, data []byte) (int, error) {
- return sysList(syscall.SYS_EXTATTR_LIST_LINK, path, data)
-}
-
-func flistxattr(f *os.File, data []byte) (int, error) {
- return listxattr(f.Name(), data)
-}
-
-// sysSet is called by listxattr and llistxattr with the appropriate syscall
-// number. This works because syscalls have the same signature and return
-// values.
-func sysList(syscallNum uintptr, path string, data []byte) (int, error) {
- ptr, nbytes := bytePtrFromSlice(data)
- /*
- ssize_t extattr_list_file(
- const char *path,
- int attrnamespace,
- void *data,
- size_t nbytes
- );
-
- ssize_t extattr_list_link(
- const char *path,
- int attrnamespace,
- void *data,
- size_t nbytes
- );
- */
- r0, _, err := syscall.Syscall6(syscallNum, uintptr(unsafe.Pointer(syscall.StringBytePtr(path))),
- EXTATTR_NAMESPACE_USER, uintptr(unsafe.Pointer(ptr)), uintptr(nbytes), 0, 0)
- if err != syscall.Errno(0) {
- return int(r0), err
- }
- return int(r0), nil
-}
-
-// stringsFromByteSlice converts a sequence of attributes to a []string.
-// On FreeBSD, each entry consists of a single byte containing the length
-// of the attribute name, followed by the attribute name.
-// The name is _not_ terminated by NULL.
-func stringsFromByteSlice(buf []byte) (result []string) {
- index := 0
- for index < len(buf) {
- next := index + 1 + int(buf[index])
- result = append(result, string(buf[index+1:next]))
- index = next
- }
- return
-}
diff --git a/vendor/github.com/pkg/xattr/xattr_darwin.go b/vendor/github.com/pkg/xattr/xattr_darwin.go
deleted file mode 100644
index ee7a501dae5cb..0000000000000
--- a/vendor/github.com/pkg/xattr/xattr_darwin.go
+++ /dev/null
@@ -1,90 +0,0 @@
-//go:build darwin
-// +build darwin
-
-package xattr
-
-import (
- "os"
- "syscall"
-
- "golang.org/x/sys/unix"
-)
-
-// See https://opensource.apple.com/source/xnu/xnu-1504.15.3/bsd/sys/xattr.h.auto.html
-const (
- // XATTR_SUPPORTED will be true if the current platform is supported
- XATTR_SUPPORTED = true
-
- XATTR_NOFOLLOW = 0x0001
- XATTR_CREATE = 0x0002
- XATTR_REPLACE = 0x0004
- XATTR_NOSECURITY = 0x0008
- XATTR_NODEFAULT = 0x0010
- XATTR_SHOWCOMPRESSION = 0x0020
-
- // ENOATTR is not exported by the syscall package on Linux, because it is
- // an alias for ENODATA. We export it here so it is available on all
- // our supported platforms.
- ENOATTR = syscall.ENOATTR
-)
-
-func getxattr(path string, name string, data []byte) (int, error) {
- return unix.Getxattr(path, name, data)
-}
-
-func lgetxattr(path string, name string, data []byte) (int, error) {
- return unix.Lgetxattr(path, name, data)
-}
-
-func fgetxattr(f *os.File, name string, data []byte) (int, error) {
- return getxattr(f.Name(), name, data)
-}
-
-func setxattr(path string, name string, data []byte, flags int) error {
- return unix.Setxattr(path, name, data, flags)
-}
-
-func lsetxattr(path string, name string, data []byte, flags int) error {
- return unix.Lsetxattr(path, name, data, flags)
-}
-
-func fsetxattr(f *os.File, name string, data []byte, flags int) error {
- return setxattr(f.Name(), name, data, flags)
-}
-
-func removexattr(path string, name string) error {
- return unix.Removexattr(path, name)
-}
-
-func lremovexattr(path string, name string) error {
- return unix.Lremovexattr(path, name)
-}
-
-func fremovexattr(f *os.File, name string) error {
- return removexattr(f.Name(), name)
-}
-
-func listxattr(path string, data []byte) (int, error) {
- return unix.Listxattr(path, data)
-}
-
-func llistxattr(path string, data []byte) (int, error) {
- return unix.Llistxattr(path, data)
-}
-
-func flistxattr(f *os.File, data []byte) (int, error) {
- return listxattr(f.Name(), data)
-}
-
-// stringsFromByteSlice converts a sequence of attributes to a []string.
-// On Darwin and Linux, each entry is a NULL-terminated string.
-func stringsFromByteSlice(buf []byte) (result []string) {
- offset := 0
- for index, b := range buf {
- if b == 0 {
- result = append(result, string(buf[offset:index]))
- offset = index + 1
- }
- }
- return
-}
diff --git a/vendor/github.com/pkg/xattr/xattr_linux.go b/vendor/github.com/pkg/xattr/xattr_linux.go
deleted file mode 100644
index 879085ee5d453..0000000000000
--- a/vendor/github.com/pkg/xattr/xattr_linux.go
+++ /dev/null
@@ -1,142 +0,0 @@
-//go:build linux
-// +build linux
-
-package xattr
-
-import (
- "os"
- "syscall"
-
- "golang.org/x/sys/unix"
-)
-
-const (
- // XATTR_SUPPORTED will be true if the current platform is supported
- XATTR_SUPPORTED = true
-
- XATTR_CREATE = unix.XATTR_CREATE
- XATTR_REPLACE = unix.XATTR_REPLACE
-
- // ENOATTR is not exported by the syscall package on Linux, because it is
- // an alias for ENODATA. We export it here so it is available on all
- // our supported platforms.
- ENOATTR = syscall.ENODATA
-)
-
-// On Linux, FUSE and CIFS filesystems can return EINTR for interrupted system
-// calls. This function works around this by retrying system calls until they
-// stop returning EINTR.
-//
-// See https://github.com/golang/go/commit/6b420169d798c7ebe733487b56ea5c3fa4aab5ce.
-func ignoringEINTR(fn func() error) (err error) {
- for {
- err = fn()
- if err != unix.EINTR {
- break
- }
- }
- return err
-}
-
-func getxattr(path string, name string, data []byte) (int, error) {
- var r int
- err := ignoringEINTR(func() (err error) {
- r, err = unix.Getxattr(path, name, data)
- return err
- })
- return r, err
-}
-
-func lgetxattr(path string, name string, data []byte) (int, error) {
- var r int
- err := ignoringEINTR(func() (err error) {
- r, err = unix.Lgetxattr(path, name, data)
- return err
- })
- return r, err
-}
-
-func fgetxattr(f *os.File, name string, data []byte) (int, error) {
- var r int
- err := ignoringEINTR(func() (err error) {
- r, err = unix.Fgetxattr(int(f.Fd()), name, data)
- return err
- })
- return r, err
-}
-
-func setxattr(path string, name string, data []byte, flags int) error {
- return ignoringEINTR(func() (err error) {
- return unix.Setxattr(path, name, data, flags)
- })
-}
-
-func lsetxattr(path string, name string, data []byte, flags int) error {
- return ignoringEINTR(func() (err error) {
- return unix.Lsetxattr(path, name, data, flags)
- })
-}
-
-func fsetxattr(f *os.File, name string, data []byte, flags int) error {
- return ignoringEINTR(func() (err error) {
- return unix.Fsetxattr(int(f.Fd()), name, data, flags)
- })
-}
-
-func removexattr(path string, name string) error {
- return ignoringEINTR(func() (err error) {
- return unix.Removexattr(path, name)
- })
-}
-
-func lremovexattr(path string, name string) error {
- return ignoringEINTR(func() (err error) {
- return unix.Lremovexattr(path, name)
- })
-}
-
-func fremovexattr(f *os.File, name string) error {
- return ignoringEINTR(func() (err error) {
- return unix.Fremovexattr(int(f.Fd()), name)
- })
-}
-
-func listxattr(path string, data []byte) (int, error) {
- var r int
- err := ignoringEINTR(func() (err error) {
- r, err = unix.Listxattr(path, data)
- return err
- })
- return r, err
-}
-
-func llistxattr(path string, data []byte) (int, error) {
- var r int
- err := ignoringEINTR(func() (err error) {
- r, err = unix.Llistxattr(path, data)
- return err
- })
- return r, err
-}
-
-func flistxattr(f *os.File, data []byte) (int, error) {
- var r int
- err := ignoringEINTR(func() (err error) {
- r, err = unix.Flistxattr(int(f.Fd()), data)
- return err
- })
- return r, err
-}
-
-// stringsFromByteSlice converts a sequence of attributes to a []string.
-// On Darwin and Linux, each entry is a NULL-terminated string.
-func stringsFromByteSlice(buf []byte) (result []string) {
- offset := 0
- for index, b := range buf {
- if b == 0 {
- result = append(result, string(buf[offset:index]))
- offset = index + 1
- }
- }
- return
-}
diff --git a/vendor/github.com/pkg/xattr/xattr_solaris.go b/vendor/github.com/pkg/xattr/xattr_solaris.go
deleted file mode 100644
index 7c98b4afbac25..0000000000000
--- a/vendor/github.com/pkg/xattr/xattr_solaris.go
+++ /dev/null
@@ -1,175 +0,0 @@
-//go:build solaris
-// +build solaris
-
-package xattr
-
-import (
- "os"
- "syscall"
-
- "golang.org/x/sys/unix"
-)
-
-const (
- // XATTR_SUPPORTED will be true if the current platform is supported
- XATTR_SUPPORTED = true
-
- XATTR_CREATE = 0x1
- XATTR_REPLACE = 0x2
-
- // ENOATTR is not exported by the syscall package on Linux, because it is
- // an alias for ENODATA. We export it here so it is available on all
- // our supported platforms.
- ENOATTR = syscall.ENODATA
-)
-
-func getxattr(path string, name string, data []byte) (int, error) {
- f, err := openNonblock(path)
- if err != nil {
- return 0, err
- }
- defer func() {
- _ = f.Close()
- }()
- return fgetxattr(f, name, data)
-}
-
-func lgetxattr(path string, name string, data []byte) (int, error) {
- return 0, unix.ENOTSUP
-}
-
-func fgetxattr(f *os.File, name string, data []byte) (int, error) {
- fd, err := unix.Openat(int(f.Fd()), name, unix.O_RDONLY|unix.O_XATTR, 0)
- if err != nil {
- return 0, err
- }
- defer func() {
- _ = unix.Close(fd)
- }()
- return unix.Read(fd, data)
-}
-
-func setxattr(path string, name string, data []byte, flags int) error {
- f, err := openNonblock(path)
- if err != nil {
- return err
- }
- err = fsetxattr(f, name, data, flags)
- if err != nil {
- _ = f.Close()
- return err
- }
- return f.Close()
-}
-
-func lsetxattr(path string, name string, data []byte, flags int) error {
- return unix.ENOTSUP
-}
-
-func fsetxattr(f *os.File, name string, data []byte, flags int) error {
- mode := unix.O_WRONLY | unix.O_XATTR
- if flags&XATTR_REPLACE != 0 {
- mode |= unix.O_TRUNC
- } else if flags&XATTR_CREATE != 0 {
- mode |= unix.O_CREAT | unix.O_EXCL
- } else {
- mode |= unix.O_CREAT | unix.O_TRUNC
- }
- fd, err := unix.Openat(int(f.Fd()), name, mode, 0666)
- if err != nil {
- return err
- }
- if _, err = unix.Write(fd, data); err != nil {
- _ = unix.Close(fd)
- return err
- }
- return unix.Close(fd)
-}
-
-func removexattr(path string, name string) error {
- mode := unix.O_RDONLY | unix.O_XATTR | unix.O_NONBLOCK | unix.O_CLOEXEC
- fd, err := unix.Open(path, mode, 0)
- if err != nil {
- return err
- }
- f := os.NewFile(uintptr(fd), path)
- defer func() {
- _ = f.Close()
- }()
- return fremovexattr(f, name)
-}
-
-func lremovexattr(path string, name string) error {
- return unix.ENOTSUP
-}
-
-func fremovexattr(f *os.File, name string) error {
- fd, err := unix.Openat(int(f.Fd()), ".", unix.O_XATTR, 0)
- if err != nil {
- return err
- }
- defer func() {
- _ = unix.Close(fd)
- }()
- return unix.Unlinkat(fd, name, 0)
-}
-
-func listxattr(path string, data []byte) (int, error) {
- f, err := openNonblock(path)
- if err != nil {
- return 0, err
- }
- defer func() {
- _ = f.Close()
- }()
- return flistxattr(f, data)
-}
-
-func llistxattr(path string, data []byte) (int, error) {
- return 0, unix.ENOTSUP
-}
-
-func flistxattr(f *os.File, data []byte) (int, error) {
- fd, err := unix.Openat(int(f.Fd()), ".", unix.O_RDONLY|unix.O_XATTR, 0)
- if err != nil {
- return 0, unix.ENOTSUP
- }
- xf := os.NewFile(uintptr(fd), f.Name())
- defer func() {
- _ = xf.Close()
- }()
- names, err := xf.Readdirnames(-1)
- if err != nil {
- return 0, err
- }
- var buf []byte
- for _, name := range names {
- buf = append(buf, append([]byte(name), '\000')...)
- }
- if data == nil {
- return len(buf), nil
- }
- return copy(data, buf), nil
-}
-
-// Like os.Open, but passes O_NONBLOCK to the open(2) syscall.
-func openNonblock(path string) (*os.File, error) {
- fd, err := unix.Open(path, unix.O_RDONLY|unix.O_CLOEXEC|unix.O_NONBLOCK, 0)
- if err != nil {
- return nil, err
- }
- return os.NewFile(uintptr(fd), path), err
-}
-
-// stringsFromByteSlice converts a sequence of attributes to a []string.
-// We simulate Linux/Darwin, where each entry is a NULL-terminated string.
-func stringsFromByteSlice(buf []byte) (result []string) {
- offset := 0
- for index, b := range buf {
- if b == 0 {
- result = append(result, string(buf[offset:index]))
- offset = index + 1
- }
- }
- return
-}
diff --git a/vendor/github.com/pkg/xattr/xattr_unsupported.go b/vendor/github.com/pkg/xattr/xattr_unsupported.go
deleted file mode 100644
index 8886fbdc4216e..0000000000000
--- a/vendor/github.com/pkg/xattr/xattr_unsupported.go
+++ /dev/null
@@ -1,70 +0,0 @@
-//go:build !linux && !freebsd && !netbsd && !darwin && !solaris
-// +build !linux,!freebsd,!netbsd,!darwin,!solaris
-
-package xattr
-
-import (
- "os"
- "syscall"
-)
-
-const (
- // We need to use the default for non supported operating systems
- ENOATTR = syscall.Errno(0x59)
-)
-
-// XATTR_SUPPORTED will be true if the current platform is supported
-const XATTR_SUPPORTED = false
-
-func getxattr(path string, name string, data []byte) (int, error) {
- return 0, nil
-}
-
-func lgetxattr(path string, name string, data []byte) (int, error) {
- return 0, nil
-}
-
-func fgetxattr(f *os.File, name string, data []byte) (int, error) {
- return 0, nil
-}
-
-func setxattr(path string, name string, data []byte, flags int) error {
- return nil
-}
-
-func lsetxattr(path string, name string, data []byte, flags int) error {
- return nil
-}
-
-func fsetxattr(f *os.File, name string, data []byte, flags int) error {
- return nil
-}
-
-func removexattr(path string, name string) error {
- return nil
-}
-
-func lremovexattr(path string, name string) error {
- return nil
-}
-
-func fremovexattr(f *os.File, name string) error {
- return nil
-}
-
-func listxattr(path string, data []byte) (int, error) {
- return 0, nil
-}
-
-func llistxattr(path string, data []byte) (int, error) {
- return 0, nil
-}
-
-func flistxattr(f *os.File, data []byte) (int, error) {
- return 0, nil
-}
-
-// dummy
-func stringsFromByteSlice(buf []byte) (result []string) {
- return []string{}
-}
diff --git a/vendor/github.com/rs/xid/.gitignore b/vendor/github.com/rs/xid/.gitignore
new file mode 100644
index 0000000000000..81be9277fc7eb
--- /dev/null
+++ b/vendor/github.com/rs/xid/.gitignore
@@ -0,0 +1,3 @@
+/.idea
+/.vscode
+.DS_Store
\ No newline at end of file
diff --git a/vendor/github.com/rs/xid/README.md b/vendor/github.com/rs/xid/README.md
index 974e67d29b355..1bf45bd11b34e 100644
--- a/vendor/github.com/rs/xid/README.md
+++ b/vendor/github.com/rs/xid/README.md
@@ -4,7 +4,7 @@
Package xid is a globally unique id generator library, ready to safely be used directly in your server code.
-Xid uses the Mongo Object ID algorithm to generate globally unique ids with a different serialization (base64) to make it shorter when transported as a string:
+Xid uses the Mongo Object ID algorithm to generate globally unique ids with a different serialization ([base32hex](https://datatracker.ietf.org/doc/html/rfc4648#page-10)) to make it shorter when transported as a string:
https://docs.mongodb.org/manual/reference/object-id/
- 4-byte value representing the seconds since the Unix epoch,
@@ -13,7 +13,7 @@ https://docs.mongodb.org/manual/reference/object-id/
- 3-byte counter, starting with a random value.
The binary representation of the id is compatible with Mongo 12 bytes Object IDs.
-The string representation is using base32 hex (w/o padding) for better space efficiency
+The string representation is using [base32hex](https://datatracker.ietf.org/doc/html/rfc4648#page-10) (w/o padding) for better space efficiency
when stored in that form (20 bytes). The hex variant of base32 is used to retain the
sortable property of the id.
@@ -71,8 +71,10 @@ References:
- Java port by [0xShamil](https://github.com/0xShamil/): https://github.com/0xShamil/java-xid
- Dart port by [Peter Bwire](https://github.com/pitabwire): https://pub.dev/packages/xid
- PostgreSQL port by [Rasmus Holm](https://github.com/crholm): https://github.com/modfin/pg-xid
-- Swift port by [Uditha Atukorala](https://github.com/uditha-atukorala): https://github.com/uditha-atukorala/swift-xid
-- C++ port by [Uditha Atukorala](https://github.com/uditha-atukorala): https://github.com/uditha-atukorala/libxid
+- Swift port by [Uditha Atukorala](https://github.com/uatuko): https://github.com/uatuko/swift-xid
+- C++ port by [Uditha Atukorala](https://github.com/uatuko): https://github.com/uatuko/libxid
+- Typescript & Javascript port by [Yiwen AI](https://github.com/yiwen-ai): https://github.com/yiwen-ai/xid-ts
+- Gleam port by [Alexandre Del Vecchio](https://github.com/defgenx): https://github.com/defgenx/gxid
## Install
diff --git a/vendor/github.com/rs/xid/hostid_darwin.go b/vendor/github.com/rs/xid/hostid_darwin.go
index 08351ff72c679..17351563a814f 100644
--- a/vendor/github.com/rs/xid/hostid_darwin.go
+++ b/vendor/github.com/rs/xid/hostid_darwin.go
@@ -2,8 +2,33 @@
package xid
-import "syscall"
+import (
+ "errors"
+ "os/exec"
+ "strings"
+)
func readPlatformMachineID() (string, error) {
- return syscall.Sysctl("kern.uuid")
+ ioreg, err := exec.LookPath("ioreg")
+ if err != nil {
+ return "", err
+ }
+
+ cmd := exec.Command(ioreg, "-rd1", "-c", "IOPlatformExpertDevice")
+ out, err := cmd.CombinedOutput()
+ if err != nil {
+ return "", err
+ }
+
+ for _, line := range strings.Split(string(out), "\n") {
+ if strings.Contains(line, "IOPlatformUUID") {
+ parts := strings.SplitAfter(line, `" = "`)
+ if len(parts) == 2 {
+ uuid := strings.TrimRight(parts[1], `"`)
+ return strings.ToLower(uuid), nil
+ }
+ }
+ }
+
+ return "", errors.New("cannot find host id")
}
diff --git a/vendor/github.com/rs/xid/hostid_windows.go b/vendor/github.com/rs/xid/hostid_windows.go
index ec2593ee31ce5..a4d98ab0e7b83 100644
--- a/vendor/github.com/rs/xid/hostid_windows.go
+++ b/vendor/github.com/rs/xid/hostid_windows.go
@@ -11,11 +11,17 @@ import (
func readPlatformMachineID() (string, error) {
// source: https://github.com/shirou/gopsutil/blob/master/host/host_syscall.go
var h syscall.Handle
- err := syscall.RegOpenKeyEx(syscall.HKEY_LOCAL_MACHINE, syscall.StringToUTF16Ptr(`SOFTWARE\Microsoft\Cryptography`), 0, syscall.KEY_READ|syscall.KEY_WOW64_64KEY, &h)
+
+ regKeyCryptoPtr, err := syscall.UTF16PtrFromString(`SOFTWARE\Microsoft\Cryptography`)
+ if err != nil {
+ return "", fmt.Errorf(`error reading registry key "SOFTWARE\Microsoft\Cryptography": %w`, err)
+ }
+
+ err = syscall.RegOpenKeyEx(syscall.HKEY_LOCAL_MACHINE, regKeyCryptoPtr, 0, syscall.KEY_READ|syscall.KEY_WOW64_64KEY, &h)
if err != nil {
return "", err
}
- defer syscall.RegCloseKey(h)
+ defer func() { _ = syscall.RegCloseKey(h) }()
const syscallRegBufLen = 74 // len(`{`) + len(`abcdefgh-1234-456789012-123345456671` * 2) + len(`}`) // 2 == bytes/UTF16
const uuidLen = 36
@@ -23,9 +29,15 @@ func readPlatformMachineID() (string, error) {
var regBuf [syscallRegBufLen]uint16
bufLen := uint32(syscallRegBufLen)
var valType uint32
- err = syscall.RegQueryValueEx(h, syscall.StringToUTF16Ptr(`MachineGuid`), nil, &valType, (*byte)(unsafe.Pointer(®Buf[0])), &bufLen)
+
+ mGuidPtr, err := syscall.UTF16PtrFromString(`MachineGuid`)
if err != nil {
- return "", err
+ return "", fmt.Errorf("error reading machine GUID: %w", err)
+ }
+
+ err = syscall.RegQueryValueEx(h, mGuidPtr, nil, &valType, (*byte)(unsafe.Pointer(®Buf[0])), &bufLen)
+ if err != nil {
+ return "", fmt.Errorf("error parsing ")
}
hostID := syscall.UTF16ToString(regBuf[:])
diff --git a/vendor/github.com/rs/xid/id.go b/vendor/github.com/rs/xid/id.go
index fcd7a0413519e..e88984d9f1f81 100644
--- a/vendor/github.com/rs/xid/id.go
+++ b/vendor/github.com/rs/xid/id.go
@@ -54,7 +54,6 @@ import (
"sort"
"sync/atomic"
"time"
- "unsafe"
)
// Code inspired from mgo/bson ObjectId
@@ -172,7 +171,7 @@ func FromString(id string) (ID, error) {
func (id ID) String() string {
text := make([]byte, encodedLen)
encode(text, id[:])
- return *(*string)(unsafe.Pointer(&text))
+ return string(text)
}
// Encode encodes the id using base32 encoding, writing 20 bytes to dst and return it.
@@ -206,23 +205,23 @@ func encode(dst, id []byte) {
dst[19] = encoding[(id[11]<<4)&0x1F]
dst[18] = encoding[(id[11]>>1)&0x1F]
- dst[17] = encoding[(id[11]>>6)&0x1F|(id[10]<<2)&0x1F]
+ dst[17] = encoding[(id[11]>>6)|(id[10]<<2)&0x1F]
dst[16] = encoding[id[10]>>3]
dst[15] = encoding[id[9]&0x1F]
dst[14] = encoding[(id[9]>>5)|(id[8]<<3)&0x1F]
dst[13] = encoding[(id[8]>>2)&0x1F]
dst[12] = encoding[id[8]>>7|(id[7]<<1)&0x1F]
- dst[11] = encoding[(id[7]>>4)&0x1F|(id[6]<<4)&0x1F]
+ dst[11] = encoding[(id[7]>>4)|(id[6]<<4)&0x1F]
dst[10] = encoding[(id[6]>>1)&0x1F]
- dst[9] = encoding[(id[6]>>6)&0x1F|(id[5]<<2)&0x1F]
+ dst[9] = encoding[(id[6]>>6)|(id[5]<<2)&0x1F]
dst[8] = encoding[id[5]>>3]
dst[7] = encoding[id[4]&0x1F]
dst[6] = encoding[id[4]>>5|(id[3]<<3)&0x1F]
dst[5] = encoding[(id[3]>>2)&0x1F]
dst[4] = encoding[id[3]>>7|(id[2]<<1)&0x1F]
- dst[3] = encoding[(id[2]>>4)&0x1F|(id[1]<<4)&0x1F]
+ dst[3] = encoding[(id[2]>>4)|(id[1]<<4)&0x1F]
dst[2] = encoding[(id[1]>>1)&0x1F]
- dst[1] = encoding[(id[1]>>6)&0x1F|(id[0]<<2)&0x1F]
+ dst[1] = encoding[(id[1]>>6)|(id[0]<<2)&0x1F]
dst[0] = encoding[id[0]>>3]
}
diff --git a/vendor/github.com/thanos-io/objstore/CHANGELOG.md b/vendor/github.com/thanos-io/objstore/CHANGELOG.md
index b5d7d12c8d81f..6aae677ac289f 100644
--- a/vendor/github.com/thanos-io/objstore/CHANGELOG.md
+++ b/vendor/github.com/thanos-io/objstore/CHANGELOG.md
@@ -49,6 +49,7 @@ We use *breaking :warning:* to mark changes that are not backward compatible (re
- [#100](https://github.com/thanos-io/objstore/pull/100) s3: add DisableMultipart option
- [#116](https://github.com/thanos-io/objstore/pull/116) Azure: Add new storage_create_container configuration property
- [#128](https://github.com/thanos-io/objstore/pull/128) GCS: Add support for `ChunkSize` for writer.
+- [#130](https://github.com/thanos-io/objstore/pull/130) feat: Decouple creating bucket metrics from instrumenting the bucket
### Changed
- [#38](https://github.com/thanos-io/objstore/pull/38) *: Upgrade minio-go version to `v7.0.45`.
diff --git a/vendor/github.com/thanos-io/objstore/objstore.go b/vendor/github.com/thanos-io/objstore/objstore.go
index 31c167ebd7754..87ec9e9863561 100644
--- a/vendor/github.com/thanos-io/objstore/objstore.go
+++ b/vendor/github.com/thanos-io/objstore/objstore.go
@@ -400,11 +400,8 @@ type IsOpFailureExpectedFunc func(error) bool
var _ InstrumentedBucket = &metricBucket{}
-// WrapWithMetrics takes a bucket and registers metrics with the given registry for
-// operations run against the bucket.
-func WrapWithMetrics(b Bucket, reg prometheus.Registerer, name string) *metricBucket {
- bkt := &metricBucket{
- bkt: b,
+func BucketMetrics(reg prometheus.Registerer, name string) *Metrics {
+ return &Metrics{
isOpFailureExpected: func(err error) bool { return false },
ops: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
Name: "objstore_bucket_operations_total",
@@ -430,8 +427,8 @@ func WrapWithMetrics(b Bucket, reg prometheus.Registerer, name string) *metricBu
ConstLabels: prometheus.Labels{"bucket": name},
Buckets: prometheus.ExponentialBuckets(2<<14, 2, 16), // 32KiB, 64KiB, ... 1GiB
// Use factor=2 for native histograms, which gives similar buckets as the original exponential buckets.
- NativeHistogramBucketFactor: 2,
- NativeHistogramMaxBucketNumber: 100,
+ NativeHistogramBucketFactor: 2,
+ NativeHistogramMaxBucketNumber: 100,
NativeHistogramMinResetDuration: 1 * time.Hour,
}, []string{"operation"}),
@@ -441,8 +438,8 @@ func WrapWithMetrics(b Bucket, reg prometheus.Registerer, name string) *metricBu
ConstLabels: prometheus.Labels{"bucket": name},
Buckets: []float64{0.001, 0.01, 0.1, 0.3, 0.6, 1, 3, 6, 9, 20, 30, 60, 90, 120},
// Use the recommended defaults for native histograms with 10% growth factor.
- NativeHistogramBucketFactor: 1.1,
- NativeHistogramMaxBucketNumber: 100,
+ NativeHistogramBucketFactor: 1.1,
+ NativeHistogramMaxBucketNumber: 100,
NativeHistogramMinResetDuration: 1 * time.Hour,
}, []string{"operation"}),
@@ -452,6 +449,27 @@ func WrapWithMetrics(b Bucket, reg prometheus.Registerer, name string) *metricBu
ConstLabels: prometheus.Labels{"bucket": name},
}),
}
+}
+
+// WrapWithMetrics takes a bucket and registers metrics with the given registry for
+// operations run against the bucket.
+func WrapWithMetrics(b Bucket, reg prometheus.Registerer, name string) *metricBucket {
+ metrics := BucketMetrics(reg, name)
+ return wrapWithMetrics(b, metrics)
+}
+
+// WrapWith takes a `bucket` and `metrics` that returns instrumented bucket.
+// Similar to WrapWithMetrics, but `metrics` can be passed separately as an argument.
+func WrapWith(b Bucket, metrics *Metrics) *metricBucket {
+ return wrapWithMetrics(b, metrics)
+}
+
+func wrapWithMetrics(b Bucket, metrics *Metrics) *metricBucket {
+ bkt := &metricBucket{
+ bkt: b,
+ metrics: metrics,
+ }
+
for _, op := range []string{
OpIter,
OpGet,
@@ -461,10 +479,10 @@ func WrapWithMetrics(b Bucket, reg prometheus.Registerer, name string) *metricBu
OpDelete,
OpAttributes,
} {
- bkt.ops.WithLabelValues(op)
- bkt.opsFailures.WithLabelValues(op)
- bkt.opsDuration.WithLabelValues(op)
- bkt.opsFetchedBytes.WithLabelValues(op)
+ bkt.metrics.ops.WithLabelValues(op)
+ bkt.metrics.opsFailures.WithLabelValues(op)
+ bkt.metrics.opsDuration.WithLabelValues(op)
+ bkt.metrics.opsFetchedBytes.WithLabelValues(op)
}
// fetched bytes only relevant for get, getrange and upload
@@ -473,14 +491,12 @@ func WrapWithMetrics(b Bucket, reg prometheus.Registerer, name string) *metricBu
OpGetRange,
OpUpload,
} {
- bkt.opsTransferredBytes.WithLabelValues(op)
+ bkt.metrics.opsTransferredBytes.WithLabelValues(op)
}
return bkt
}
-type metricBucket struct {
- bkt Bucket
-
+type Metrics struct {
ops *prometheus.CounterVec
opsFailures *prometheus.CounterVec
isOpFailureExpected IsOpFailureExpectedFunc
@@ -491,16 +507,23 @@ type metricBucket struct {
lastSuccessfulUploadTime prometheus.Gauge
}
+type metricBucket struct {
+ bkt Bucket
+ metrics *Metrics
+}
+
func (b *metricBucket) WithExpectedErrs(fn IsOpFailureExpectedFunc) Bucket {
return &metricBucket{
- bkt: b.bkt,
- ops: b.ops,
- opsFailures: b.opsFailures,
- opsFetchedBytes: b.opsFetchedBytes,
- opsTransferredBytes: b.opsTransferredBytes,
- isOpFailureExpected: fn,
- opsDuration: b.opsDuration,
- lastSuccessfulUploadTime: b.lastSuccessfulUploadTime,
+ bkt: b.bkt,
+ metrics: &Metrics{
+ ops: b.metrics.ops,
+ opsFailures: b.metrics.opsFailures,
+ opsFetchedBytes: b.metrics.opsFetchedBytes,
+ opsTransferredBytes: b.metrics.opsTransferredBytes,
+ isOpFailureExpected: fn,
+ opsDuration: b.metrics.opsDuration,
+ lastSuccessfulUploadTime: b.metrics.lastSuccessfulUploadTime,
+ },
}
}
@@ -510,43 +533,43 @@ func (b *metricBucket) ReaderWithExpectedErrs(fn IsOpFailureExpectedFunc) Bucket
func (b *metricBucket) Iter(ctx context.Context, dir string, f func(name string) error, options ...IterOption) error {
const op = OpIter
- b.ops.WithLabelValues(op).Inc()
+ b.metrics.ops.WithLabelValues(op).Inc()
start := time.Now()
err := b.bkt.Iter(ctx, dir, f, options...)
if err != nil {
- if !b.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
- b.opsFailures.WithLabelValues(op).Inc()
+ if !b.metrics.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
+ b.metrics.opsFailures.WithLabelValues(op).Inc()
}
}
- b.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
+ b.metrics.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
return err
}
func (b *metricBucket) Attributes(ctx context.Context, name string) (ObjectAttributes, error) {
const op = OpAttributes
- b.ops.WithLabelValues(op).Inc()
+ b.metrics.ops.WithLabelValues(op).Inc()
start := time.Now()
attrs, err := b.bkt.Attributes(ctx, name)
if err != nil {
- if !b.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
- b.opsFailures.WithLabelValues(op).Inc()
+ if !b.metrics.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
+ b.metrics.opsFailures.WithLabelValues(op).Inc()
}
return attrs, err
}
- b.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
+ b.metrics.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
return attrs, nil
}
func (b *metricBucket) Get(ctx context.Context, name string) (io.ReadCloser, error) {
const op = OpGet
- b.ops.WithLabelValues(op).Inc()
+ b.metrics.ops.WithLabelValues(op).Inc()
rc, err := b.bkt.Get(ctx, name)
if err != nil {
- if !b.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
- b.opsFailures.WithLabelValues(op).Inc()
+ if !b.metrics.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
+ b.metrics.opsFailures.WithLabelValues(op).Inc()
}
return nil, err
}
@@ -554,22 +577,22 @@ func (b *metricBucket) Get(ctx context.Context, name string) (io.ReadCloser, err
rc,
true,
op,
- b.opsDuration,
- b.opsFailures,
- b.isOpFailureExpected,
- b.opsFetchedBytes,
- b.opsTransferredBytes,
+ b.metrics.opsDuration,
+ b.metrics.opsFailures,
+ b.metrics.isOpFailureExpected,
+ b.metrics.opsFetchedBytes,
+ b.metrics.opsTransferredBytes,
), nil
}
func (b *metricBucket) GetRange(ctx context.Context, name string, off, length int64) (io.ReadCloser, error) {
const op = OpGetRange
- b.ops.WithLabelValues(op).Inc()
+ b.metrics.ops.WithLabelValues(op).Inc()
rc, err := b.bkt.GetRange(ctx, name, off, length)
if err != nil {
- if !b.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
- b.opsFailures.WithLabelValues(op).Inc()
+ if !b.metrics.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
+ b.metrics.opsFailures.WithLabelValues(op).Inc()
}
return nil, err
}
@@ -577,69 +600,69 @@ func (b *metricBucket) GetRange(ctx context.Context, name string, off, length in
rc,
true,
op,
- b.opsDuration,
- b.opsFailures,
- b.isOpFailureExpected,
- b.opsFetchedBytes,
- b.opsTransferredBytes,
+ b.metrics.opsDuration,
+ b.metrics.opsFailures,
+ b.metrics.isOpFailureExpected,
+ b.metrics.opsFetchedBytes,
+ b.metrics.opsTransferredBytes,
), nil
}
func (b *metricBucket) Exists(ctx context.Context, name string) (bool, error) {
const op = OpExists
- b.ops.WithLabelValues(op).Inc()
+ b.metrics.ops.WithLabelValues(op).Inc()
start := time.Now()
ok, err := b.bkt.Exists(ctx, name)
if err != nil {
- if !b.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
- b.opsFailures.WithLabelValues(op).Inc()
+ if !b.metrics.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
+ b.metrics.opsFailures.WithLabelValues(op).Inc()
}
return false, err
}
- b.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
+ b.metrics.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
return ok, nil
}
func (b *metricBucket) Upload(ctx context.Context, name string, r io.Reader) error {
const op = OpUpload
- b.ops.WithLabelValues(op).Inc()
+ b.metrics.ops.WithLabelValues(op).Inc()
trc := newTimingReader(
r,
false,
op,
- b.opsDuration,
- b.opsFailures,
- b.isOpFailureExpected,
+ b.metrics.opsDuration,
+ b.metrics.opsFailures,
+ b.metrics.isOpFailureExpected,
nil,
- b.opsTransferredBytes,
+ b.metrics.opsTransferredBytes,
)
defer trc.Close()
err := b.bkt.Upload(ctx, name, trc)
if err != nil {
- if !b.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
- b.opsFailures.WithLabelValues(op).Inc()
+ if !b.metrics.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
+ b.metrics.opsFailures.WithLabelValues(op).Inc()
}
return err
}
- b.lastSuccessfulUploadTime.SetToCurrentTime()
+ b.metrics.lastSuccessfulUploadTime.SetToCurrentTime()
return nil
}
func (b *metricBucket) Delete(ctx context.Context, name string) error {
const op = OpDelete
- b.ops.WithLabelValues(op).Inc()
+ b.metrics.ops.WithLabelValues(op).Inc()
start := time.Now()
if err := b.bkt.Delete(ctx, name); err != nil {
- if !b.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
- b.opsFailures.WithLabelValues(op).Inc()
+ if !b.metrics.isOpFailureExpected(err) && ctx.Err() != context.Canceled {
+ b.metrics.opsFailures.WithLabelValues(op).Inc()
}
return err
}
- b.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
+ b.metrics.opsDuration.WithLabelValues(op).Observe(time.Since(start).Seconds())
return nil
}
diff --git a/vendor/github.com/twmb/franz-go/LICENSE b/vendor/github.com/twmb/franz-go/LICENSE
new file mode 100644
index 0000000000000..36e18034325d5
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/LICENSE
@@ -0,0 +1,24 @@
+Copyright 2020, Travis Bischel.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the library nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/LICENSE b/vendor/github.com/twmb/franz-go/pkg/kadm/LICENSE
new file mode 100644
index 0000000000000..36e18034325d5
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/LICENSE
@@ -0,0 +1,24 @@
+Copyright 2020, Travis Bischel.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the library nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/acls.go b/vendor/github.com/twmb/franz-go/pkg/kadm/acls.go
new file mode 100644
index 0000000000000..62676b5b8c074
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/acls.go
@@ -0,0 +1,1117 @@
+package kadm
+
+import (
+ "context"
+ "fmt"
+ "strings"
+ "sync"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// ACLBuilder is a builder that is used for batch creating / listing / deleting
+// ACLS.
+//
+// An ACL consists of five components:
+//
+// - the user (principal)
+// - the host the user runs on
+// - what resource to access (topic name, group id, etc.)
+// - the operation (read, write)
+// - whether to allow or deny the above
+//
+// This builder allows for adding the above five components in batches and then
+// creating, listing, or deleting a batch of ACLs in one go. This builder
+// merges the fifth component (allowing or denying) into allowing principals
+// and hosts and denying principals and hosts. The builder must always have an
+// Allow or Deny. For creating, the host is optional and defaults to the
+// wildcard * that allows or denies all hosts. For listing / deleting, the host
+// is also required (specifying no hosts matches all hosts, but you must
+// specify this).
+//
+// Building works on a multiplying factor: every user, every host, every
+// resource, and every operation is combined (principals * hosts * resources *
+// operations).
+//
+// With the Kafka simple authorizer (and most reimplementations), all
+// principals are required to have the "User:" prefix. The PrefixUserExcept
+// function can be used to easily add the "User:" prefix if missing.
+//
+// The full set of operations and which requests require what operations is
+// described in a large doc comment on the ACLOperation type.
+//
+// Lastly, resources to access / deny access to can be created / matched based
+// on literal (exact) names, or on prefix names, or more. See the ACLPattern
+// docs for more information.
+type ACLBuilder struct {
+ any []string
+ anyResource bool
+ topics []string
+ anyTopic bool
+ groups []string
+ anyGroup bool
+ anyCluster bool
+ txnIDs []string
+ anyTxn bool
+ tokens []string
+ anyToken bool
+
+ allow []string
+ anyAllow bool
+ allowHosts []string
+ anyAllowHosts bool
+ deny []string
+ anyDeny bool
+ denyHosts []string
+ anyDenyHosts bool
+
+ ops []ACLOperation
+
+ pattern ACLPattern
+}
+
+// PrefixUser prefixes all allowed and denied principals with "User:".
+func (b *ACLBuilder) PrefixUser() {
+ b.PrefixUserExcept()
+}
+
+// PrefixUserExcept prefixes all allowed and denied principals with "User:",
+// unless they have any of the given except prefixes.
+func (b *ACLBuilder) PrefixUserExcept(except ...string) {
+ replace := func(u string) string {
+ if !strings.HasPrefix(u, "User:") {
+ for _, e := range except {
+ if strings.HasPrefix(u, e) {
+ return u
+ }
+ }
+ return "User:" + u
+ }
+ return u
+ }
+
+ for i, u := range b.allow {
+ b.allow[i] = replace(u)
+ }
+ for i, u := range b.deny {
+ b.deny[i] = replace(u)
+ }
+}
+
+// NewACLs returns a new ACL builder.
+func NewACLs() *ACLBuilder {
+ return new(ACLBuilder)
+}
+
+// AnyResource lists & deletes ACLs of any type matching the given names
+// (pending other filters). If no names are given, this matches all names.
+//
+// This returns the input pointer.
+//
+// This function does nothing for creating.
+func (b *ACLBuilder) AnyResource(name ...string) *ACLBuilder {
+ b.any = name
+ if len(name) == 0 {
+ b.anyResource = true
+ }
+ return b
+}
+
+// Topics lists/deletes/creates ACLs of resource type "topic" for the given
+// topics.
+//
+// This returns the input pointer.
+//
+// For listing or deleting, if this is provided no topics, all "topic" resource
+// type ACLs are matched. For creating, if no topics are provided, this
+// function does nothing.
+func (b *ACLBuilder) Topics(t ...string) *ACLBuilder {
+ b.topics = t
+ if len(t) == 0 {
+ b.anyTopic = true
+ }
+ return b
+}
+
+// MaybeTopics is the same as Topics, but does not match all topics if none are
+// provided.
+func (b *ACLBuilder) MaybeTopics(t ...string) *ACLBuilder { b.topics = t; return b }
+
+// Groups lists/deletes/creates ACLs of resource type "group" for the given
+// groups.
+//
+// This returns the input pointer.
+//
+// For listing or deleting, if this is provided no groups, all "group" resource
+// type ACLs are matched. For creating, if no groups are provided, this
+// function does nothing.
+func (b *ACLBuilder) Groups(g ...string) *ACLBuilder {
+ b.groups = g
+ if len(g) == 0 {
+ b.anyGroup = true
+ }
+ return b
+}
+
+// MaybeGroups is the same as Groups, but does not match all groups if none are
+// provided.
+func (b *ACLBuilder) MaybeGroups(g ...string) *ACLBuilder { b.groups = g; return b }
+
+// Clusters lists/deletes/creates ACLs of resource type "cluster".
+//
+// This returns the input pointer.
+//
+// There is only one type of cluster in Kafka, "kafka-cluster". Opting in to
+// listing or deleting by cluster inherently matches all ACLS of resource type
+// cluster. For creating, this function allows for creating cluster ACLs.
+func (b *ACLBuilder) Clusters() *ACLBuilder {
+ b.anyCluster = true
+ return b
+}
+
+// MaybeClusters is the same as Clusters, but only matches clusters if c is
+// true.
+func (b *ACLBuilder) MaybeClusters(c bool) *ACLBuilder { b.anyCluster = c; return b }
+
+// TransactionalIDs lists/deletes/creates ACLs of resource type
+// "transactional_id" for the given transactional IDs.
+//
+// This returns the input pointer.
+//
+// For listing or deleting, if this is provided no IDs, all "transactional_id"
+// resource type ACLs matched. For creating, if no IDs are provided, this
+// function does nothing.
+func (b *ACLBuilder) TransactionalIDs(x ...string) *ACLBuilder {
+ b.txnIDs = x
+ if len(x) == 0 {
+ b.anyTxn = true
+ }
+ return b
+}
+
+// MaybeTransactionalIDs is the same as TransactionalIDs, but does not match
+// all transactional ID's if none are provided.
+func (b *ACLBuilder) MaybeTransactionalIDs(x ...string) *ACLBuilder { b.txnIDs = x; return b }
+
+// DelegationTokens lists/deletes/creates ACLs of resource type
+// "delegation_token" for the given delegation tokens.
+//
+// This returns the input pointer.
+//
+// For listing or deleting, if this is provided no tokens, all
+// "delegation_token" resource type ACLs are matched. For creating, if no
+// tokens are provided, this function does nothing.
+func (b *ACLBuilder) DelegationTokens(t ...string) *ACLBuilder {
+ b.tokens = t
+ if len(t) == 0 {
+ b.anyToken = true
+ }
+ return b
+}
+
+// MaybeDelegationTokens is the same as DelegationTokens, but does not match
+// all tokens if none are provided.
+func (b *ACLBuilder) MaybeDelegationTokens(t ...string) *ACLBuilder { b.tokens = t; return b }
+
+// Allow sets the principals to add allow permissions for. For listing and
+// deleting, you must also use AllowHosts.
+//
+// This returns the input pointer.
+//
+// For creating, if this is not paired with AllowHosts, the user will have
+// access to all hosts (the wildcard *).
+//
+// For listing & deleting, if the principals are empty, this matches any user.
+func (b *ACLBuilder) Allow(principals ...string) *ACLBuilder {
+ b.allow = principals
+ if len(principals) == 0 {
+ b.anyAllow = true
+ }
+ return b
+}
+
+// MaybeAllow is the same as Allow, but does not match all allowed principals
+// if none are provided.
+func (b *ACLBuilder) MaybeAllow(principals ...string) *ACLBuilder { b.allow = principals; return b }
+
+// AllowHosts sets the hosts to add allow permissions for. If using this, you
+// must also use Allow.
+//
+// This returns the input pointer.
+//
+// For creating, if this is empty, the user will have access to all hosts (the
+// wildcard *) and this function is actually not necessary.
+//
+// For listing & deleting, if the hosts are empty, this matches any host.
+func (b *ACLBuilder) AllowHosts(hosts ...string) *ACLBuilder {
+ b.allowHosts = hosts
+ if len(hosts) == 0 {
+ b.anyAllowHosts = true
+ }
+ return b
+}
+
+// MaybeAllowHosts is the same as AllowHosts, but does not match all allowed
+// hosts if none are provided.
+func (b *ACLBuilder) MaybeAllowHosts(hosts ...string) *ACLBuilder { b.allowHosts = hosts; return b }
+
+// Deny sets the principals to add deny permissions for. For listing and
+// deleting, you must also use DenyHosts.
+//
+// This returns the input pointer.
+//
+// For creating, if this is not paired with DenyHosts, the user will be denied
+// access to all hosts (the wildcard *).
+//
+// For listing & deleting, if the principals are empty, this matches any user.
+func (b *ACLBuilder) Deny(principals ...string) *ACLBuilder {
+ b.deny = principals
+ if len(principals) == 0 {
+ b.anyDeny = true
+ }
+ return b
+}
+
+// MaybeDeny is the same as Deny, but does not match all denied principals if
+// none are provided.
+func (b *ACLBuilder) MaybeDeny(principals ...string) *ACLBuilder { b.deny = principals; return b }
+
+// DenyHosts sets the hosts to add deny permissions for. If using this, you
+// must also use Deny.
+//
+// This returns the input pointer.
+//
+// For creating, if this is empty, the user will be denied access to all hosts
+// (the wildcard *) and this function is actually not necessary.
+//
+// For listing & deleting, if the hosts are empty, this matches any host.
+func (b *ACLBuilder) DenyHosts(hosts ...string) *ACLBuilder {
+ b.denyHosts = hosts
+ if len(hosts) == 0 {
+ b.anyDenyHosts = true
+ }
+ return b
+}
+
+// MaybeDenyHosts is the same as DenyHosts, but does not match all denied
+// hosts if none are provided.
+func (b *ACLBuilder) MaybeDenyHosts(hosts ...string) *ACLBuilder { b.denyHosts = hosts; return b }
+
+// ACLOperation is a type alias for kmsg.ACLOperation, which is an enum
+// containing all Kafka ACL operations and has helper functions.
+//
+// Kafka requests require the following operations (broker <=> broker ACLs
+// elided):
+//
+// PRODUCING/CONSUMING
+// ===================
+// Produce WRITE on TOPIC for topics
+// WRITE on TRANSACTIONAL_ID for txn id (if transactionally producing)
+//
+// Fetch READ on TOPIC for topics
+//
+// ListOffsets DESCRIBE on TOPIC for topics
+//
+// Metadata DESCRIBE on TOPIC for topics
+// CREATE on CLUSTER for kafka-cluster (if automatically creating new topics)
+// CREATE on TOPIC for topics (if automatically creating new topics)
+//
+// OffsetForLeaderEpoch DESCRIBE on TOPIC for topics
+//
+// GROUPS
+// ======
+// FindCoordinator DESCRIBE on GROUP for group (if finding group coordinator)
+// DESCRIBE on TRANSACTIONAL_ID for id (if finding transactiona coordinator)
+//
+// OffsetCommit READ on GROUP for group
+// READ on TOPIC for topics
+//
+// OffsetFetch DESCRIBE on GROUP for group
+// DESCRIBE on TOPIC for topics
+//
+// OffsetDelete DELETE on GROUP For group
+// READ on TOPIC for topics
+//
+// JoinGroup READ on GROUP for group
+// Heartbeat READ on GROUP for group
+// LeaveGroup READ on GROUP for group
+// SyncGroup READ on GROUP for group
+//
+// DescribeGroup DESCRIBE on GROUP for groups
+//
+// ListGroups DESCRIBE on GROUP for groups
+// or, DESCRIBE on CLUSTER for kafka-cluster
+//
+// DeleteGroups DELETE on GROUP for groups
+//
+// TRANSACTIONS (including FindCoordinator above)
+// ============
+// InitProducerID WRITE on TRANSACTIONAL_ID for id, if using transactions
+// or, IDEMPOTENT_WRITE on CLUSTER for kafka-cluster, if pre Kafka 3.0
+// or, WRITE on TOPIC for any topic, if Kafka 3.0+
+//
+// AddPartitionsToTxn WRITE on TRANSACTIONAL_ID for id
+// WRITE on TOPIC for topics
+//
+// AddOffsetsToTxn WRITE on TRANSACTIONAL_ID for id
+// READ on GROUP for group
+//
+// EndTxn WRITE on TRANSACTIONAL_ID for id
+//
+// TxnOffsetCommit WRITE on TRANSACTIONAL_ID for id
+// READ on GROUP for group
+// READ on TOPIC for topics
+//
+// TOPIC ADMIN
+// ===========
+// CreateTopics CREATE on CLUSTER for kafka-cluster
+// CREATE on TOPIC for topics
+// DESCRIBE_CONFIGS on TOPIC for topics, for returning topic configs on create
+//
+// CreatePartitions ALTER on TOPIC for topics
+//
+// DeleteTopics DELETE on TOPIC for topics
+// DESCRIBE on TOPIC for topics, if deleting by topic id (in addition to prior ACL)
+//
+// DeleteRecords DELETE on TOPIC for topics
+//
+// CONFIG ADMIN
+// ============
+// DescribeConfigs DESCRIBE_CONFIGS on CLUSTER for kafka-cluster, for broker or broker-logger describing
+// DESCRIBE_CONFIGS on TOPIC for topics, for topic describing
+//
+// AlterConfigs ALTER_CONFIGS on CLUSTER for kafka-cluster, for broker altering
+// ALTER_CONFIGS on TOPIC for topics, for topic altering
+//
+// IncrementalAlterConfigs ALTER_CONFIGS on CLUSTER for kafka-cluster, for broker or broker-logger altering
+// ALTER_CONFIGS on TOPIC for topics, for topic altering
+//
+//
+// MISC ADMIN
+// ==========
+// AlterReplicaLogDirs ALTER on CLUSTER for kafka-cluster
+// DescribeLogDirs DESCRIBE on CLUSTER for kafka-cluster
+//
+// AlterPartitionAssignments ALTER on CLUSTER for kafka-cluster
+// ListPartitionReassignments DESCRIBE on CLUSTER for kafka-cluster
+//
+// DescribeDelegationTokens DESCRIBE on DELEGATION_TOKEN for id
+//
+// ElectLeaders ALTER on CLUSTER for kafka-cluster
+//
+// DescribeClientQuotas DESCRIBE_CONFIGS on CLUSTER for kafka-cluster
+// AlterClientQuotas ALTER_CONFIGS on CLUSTER for kafka-cluster
+//
+// DescribeUserScramCredentials DESCRIBE on CLUSTER for kafka-cluster
+// AlterUserScramCredentials ALTER on CLUSTER for kafka-cluster
+//
+// UpdateFeatures ALTER on CLUSTER for kafka-cluster
+//
+// DescribeCluster DESCRIBE on CLUSTER for kafka-cluster
+//
+// DescribeProducerIDs READ on TOPIC for topics
+// DescribeTransactions DESCRIBE on TRANSACTIONAL_ID for ids
+// DESCRIBE on TOPIC for topics
+// ListTransactions DESCRIBE on TRANSACTIONAL_ID for ids
+type ACLOperation = kmsg.ACLOperation
+
+const (
+ // OpUnknown is returned for unknown operations.
+ OpUnknown ACLOperation = kmsg.ACLOperationUnknown
+
+ // OpAny, used for listing and deleting, matches any operation.
+ OpAny ACLOperation = kmsg.ACLOperationAny
+
+ // OpAll is a shortcut for allowing / denying all operations.
+ OpAll ACLOperation = kmsg.ACLOperationAll
+
+ // OpRead is the READ operation.
+ OpRead ACLOperation = kmsg.ACLOperationRead
+
+ // OpWrite is the WRITE operation.
+ OpWrite ACLOperation = kmsg.ACLOperationWrite
+
+ // OpCreate is the CREATE operation.
+ OpCreate ACLOperation = kmsg.ACLOperationCreate
+
+ // OpDelete is the DELETE operation.
+ OpDelete ACLOperation = kmsg.ACLOperationDelete
+
+ // OpAlter is the ALTER operation.
+ OpAlter ACLOperation = kmsg.ACLOperationAlter
+
+ // OpDescribe is the DESCRIBE operation.
+ OpDescribe ACLOperation = kmsg.ACLOperationDescribe
+
+ // OpClusterAction is the CLUSTER_ACTION operation. This operation is
+ // used for any broker<=>broker communication and is not needed by
+ // clients.
+ OpClusterAction ACLOperation = kmsg.ACLOperationClusterAction
+
+ // OpDescribeConfigs is the DESCRIBE_CONFIGS operation.
+ OpDescribeConfigs ACLOperation = kmsg.ACLOperationDescribeConfigs
+
+ // OpAlterConfigs is the ALTER_CONFIGS operation.
+ OpAlterConfigs ACLOperation = kmsg.ACLOperationAlterConfigs
+
+ // OpIdempotentWrite is the IDEMPOTENT_WRITE operation. As of Kafka
+ // 3.0+, this has been deprecated and replaced by the ability to WRITE
+ // on any topic.
+ OpIdempotentWrite ACLOperation = kmsg.ACLOperationIdempotentWrite
+)
+
+// Operations sets operations to allow or deny. Passing no operations defaults
+// to OpAny.
+//
+// This returns the input pointer.
+//
+// For creating, OpAny returns an error, for it is strictly used for filters
+// (listing & deleting).
+func (b *ACLBuilder) Operations(operations ...ACLOperation) *ACLBuilder {
+ b.ops = operations
+ if len(operations) == 0 {
+ b.ops = []ACLOperation{OpAny}
+ }
+ return b
+}
+
+// MaybeOperations is the same as Operations, but does not match all operations
+// if none are provided.
+func (b *ACLBuilder) MaybeOperations(operations ...ACLOperation) *ACLBuilder {
+ if len(operations) > 0 {
+ b.Operations(operations...)
+ }
+ return b
+}
+
+// ACLPattern is a type alias for kmsg.ACLResourcePatternType, which is an enum
+// containing all Kafka ACL resource pattern options.
+//
+// Creating/listing/deleting ACLs works on a resource name basis: every ACL
+// created has a name, and every ACL filtered for listing / deleting matches by
+// name. The name by default is "literal", meaning created ACLs will have the
+// exact name, and matched ACLs must match completely.
+//
+// Prefixed names allow for creating an ACL that matches any prefix: principals
+// foo-bar and foo-baz both have the prefix "foo-", meaning a READ on TOPIC for
+// User:foo- with prefix pattern will allow both of those principals to read
+// the topic.
+//
+// Any and match are used for listing and deleting. Any will match any name, be
+// it literal or prefix or a wildcard name. There is no need for specifying
+// topics, groups, etc. when using any resource pattern.
+//
+// Alternatively, match requires a name, but it matches any literal name (exact
+// match), any prefix, and any wildcard.
+type ACLPattern = kmsg.ACLResourcePatternType
+
+const (
+ // ACLPatternUnknown is returned for unknown patterns.
+ ACLPatternUnknown ACLPattern = kmsg.ACLResourcePatternTypeUnknown
+
+ // ACLPatternAny is the ANY resource pattern.
+ ACLPatternAny ACLPattern = kmsg.ACLResourcePatternTypeAny
+
+ // ACLPatternMatch is the MATCH resource pattern.
+ ACLPatternMatch ACLPattern = kmsg.ACLResourcePatternTypeMatch
+
+ // ACLPatternLiteral is the LITERAL resource pattern, the default.
+ ACLPatternLiteral ACLPattern = kmsg.ACLResourcePatternTypeLiteral
+
+ // ACLPatternPrefixed is the PREFIXED resource pattern.
+ ACLPatternPrefixed ACLPattern = kmsg.ACLResourcePatternTypePrefixed
+)
+
+// ResourcePatternType sets the pattern type to use when creating or filtering
+// ACL resource names, overriding the default of LITERAL.
+//
+// This returns the input pointer.
+//
+// For creating, only LITERAL and PREFIXED are supported.
+func (b *ACLBuilder) ResourcePatternType(pattern ACLPattern) *ACLBuilder {
+ b.pattern = pattern
+ return b
+}
+
+// ValidateCreate returns an error if the builder is invalid for creating ACLs.
+func (b *ACLBuilder) ValidateCreate() error {
+ for _, op := range b.ops {
+ switch op {
+ case OpAny, OpUnknown:
+ return fmt.Errorf("invalid operation %s for creating ACLs", op)
+ }
+ }
+
+ switch b.pattern {
+ case ACLPatternLiteral, ACLPatternPrefixed:
+ default:
+ return fmt.Errorf("invalid acl resource pattern %s for creating ACLs", b.pattern)
+ }
+
+ if len(b.allowHosts) != 0 && len(b.allow) == 0 {
+ return fmt.Errorf("invalid allow hosts with no allow principals")
+ }
+ if len(b.denyHosts) != 0 && len(b.deny) == 0 {
+ return fmt.Errorf("invalid deny hosts with no deny principals")
+ }
+ return nil
+}
+
+// ValidateDelete is an alias for ValidateFilter.
+func (b *ACLBuilder) ValidateDelete() error { return b.ValidateFilter() }
+
+// ValidateDescribe is an alias for ValidateFilter.
+func (b *ACLBuilder) ValidateDescribe() error { return b.ValidateFilter() }
+
+// ValidateFilter returns an error if the builder is invalid for deleting or
+// describing ACLs (which both operate on a filter basis).
+func (b *ACLBuilder) ValidateFilter() error {
+ if len(b.allowHosts) != 0 && len(b.allow) == 0 && !b.anyAllow {
+ return fmt.Errorf("invalid allow hosts with no allow principals")
+ }
+ if len(b.allow) != 0 && len(b.allowHosts) == 0 && !b.anyAllowHosts {
+ return fmt.Errorf("invalid allow principals with no allow hosts")
+ }
+ if len(b.denyHosts) != 0 && len(b.deny) == 0 && !b.anyDeny {
+ return fmt.Errorf("invalid deny hosts with no deny principals")
+ }
+ if len(b.deny) != 0 && len(b.denyHosts) == 0 && !b.anyDenyHosts {
+ return fmt.Errorf("invalid deny principals with no deny hosts")
+ }
+ return nil
+}
+
+// HasAnyFilter returns whether any field in this builder is opted into "any",
+// meaning a wide glob. This would be if you used Topics with no topics, and so
+// on. This function can be used to detect if you accidentally opted into a
+// non-specific ACL.
+//
+// The evaluated fields are: resources, principals/hosts, a single OpAny
+// operation, and an Any pattern.
+func (b *ACLBuilder) HasAnyFilter() bool {
+ return b.anyResource ||
+ b.anyTopic ||
+ b.anyGroup ||
+ b.anyTxn ||
+ b.anyToken ||
+ b.anyAllow ||
+ b.anyAllowHosts ||
+ b.anyDeny ||
+ b.anyDenyHosts ||
+ b.hasOpAny() ||
+ b.pattern == ACLPatternAny
+}
+
+func (b *ACLBuilder) hasOpAny() bool {
+ for _, op := range b.ops {
+ if op == OpAny {
+ return true
+ }
+ }
+ return false
+}
+
+// HasResource returns true if the builder has a non-empty resource (topic,
+// group, ...), or if any resource has "any" set to true.
+func (b *ACLBuilder) HasResource() bool {
+ l := len(b.any) +
+ len(b.topics) +
+ len(b.groups) +
+ len(b.txnIDs) +
+ len(b.tokens)
+ return l > 0 ||
+ b.anyResource ||
+ b.anyTopic ||
+ b.anyGroup ||
+ b.anyCluster ||
+ b.anyTxn ||
+ b.anyToken
+}
+
+// HasPrincipals returns if any allow or deny principals have been set, or if
+// their "any" field is true.
+func (b *ACLBuilder) HasPrincipals() bool {
+ return len(b.allow) > 0 ||
+ b.anyAllow ||
+ len(b.deny) > 0 ||
+ b.anyDeny
+}
+
+// HasHosts returns if any allow or deny hosts have been set, or if their "any"
+// field is true.
+func (b *ACLBuilder) HasHosts() bool {
+ return len(b.allowHosts) > 0 ||
+ b.anyAllowHosts ||
+ len(b.denyHosts) > 0 ||
+ b.anyDenyHosts
+}
+
+func (b *ACLBuilder) dup() *ACLBuilder { // shallow copy
+ d := *b
+ return &d
+}
+
+// CreateACLsResult is a result for an individual ACL creation.
+type CreateACLsResult struct {
+ Principal string
+ Host string
+
+ Type kmsg.ACLResourceType // Type is the type of resource this is.
+ Name string // Name is the name of the resource allowed / denied.
+ Pattern ACLPattern // Pattern is the name pattern.
+ Operation ACLOperation // Operation is the operation allowed / denied.
+ Permission kmsg.ACLPermissionType // Permission is whether this is allowed / denied.
+
+ Err error // Err is the error for this ACL creation.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// CreateACLsResults contains all results to created ACLs.
+type CreateACLsResults []CreateACLsResult
+
+// CreateACLs creates a batch of ACLs using the ACL builder, validating the
+// input before issuing the CreateACLs request.
+//
+// If the input is invalid, or if the response fails, or if the response does
+// not contain as many ACLs as we issued in our create request, this returns an
+// error.
+func (cl *Client) CreateACLs(ctx context.Context, b *ACLBuilder) (CreateACLsResults, error) {
+ if err := b.ValidateCreate(); err != nil {
+ return nil, err
+ }
+ if len(b.allow) != 0 && len(b.allowHosts) == 0 {
+ b.allowHosts = []string{"*"}
+ }
+ if len(b.deny) != 0 && len(b.denyHosts) == 0 {
+ b.denyHosts = []string{"*"}
+ }
+
+ var clusters []string
+ if b.anyCluster {
+ clusters = []string{"kafka-cluster"}
+ }
+
+ req := kmsg.NewPtrCreateACLsRequest()
+ for _, typeNames := range []struct {
+ t kmsg.ACLResourceType
+ names []string
+ }{
+ {kmsg.ACLResourceTypeTopic, b.topics},
+ {kmsg.ACLResourceTypeGroup, b.groups},
+ {kmsg.ACLResourceTypeCluster, clusters},
+ {kmsg.ACLResourceTypeTransactionalId, b.txnIDs},
+ {kmsg.ACLResourceTypeDelegationToken, b.tokens},
+ } {
+ for _, name := range typeNames.names {
+ for _, op := range b.ops {
+ for _, perm := range []struct {
+ principals []string
+ hosts []string
+ permType kmsg.ACLPermissionType
+ }{
+ {b.allow, b.allowHosts, kmsg.ACLPermissionTypeAllow},
+ {b.deny, b.denyHosts, kmsg.ACLPermissionTypeDeny},
+ } {
+ for _, principal := range perm.principals {
+ for _, host := range perm.hosts {
+ c := kmsg.NewCreateACLsRequestCreation()
+ c.ResourceType = typeNames.t
+ c.ResourceName = name
+ c.ResourcePatternType = b.pattern
+ c.Operation = op
+ c.Principal = principal
+ c.Host = host
+ c.PermissionType = perm.permType
+ req.Creations = append(req.Creations, c)
+ }
+ }
+ }
+ }
+ }
+ }
+
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(resp.Results) != len(req.Creations) {
+ return nil, fmt.Errorf("received %d results to %d creations", len(resp.Results), len(req.Creations))
+ }
+
+ var rs CreateACLsResults
+ for i, r := range resp.Results {
+ c := &req.Creations[i]
+ rs = append(rs, CreateACLsResult{
+ Principal: c.Principal,
+ Host: c.Host,
+
+ Type: c.ResourceType,
+ Name: c.ResourceName,
+ Pattern: c.ResourcePatternType,
+ Operation: c.Operation,
+ Permission: c.PermissionType,
+
+ Err: kerr.ErrorForCode(r.ErrorCode),
+ ErrMessage: unptrStr(r.ErrorMessage),
+ })
+ }
+
+ return rs, nil
+}
+
+// DeletedACL an ACL that was deleted.
+type DeletedACL struct {
+ Principal string // Principal is this deleted ACL's principal.
+ Host string // Host is this deleted ACL's host.
+
+ Type kmsg.ACLResourceType // Type is this deleted ACL's resource type.
+ Name string // Name is this deleted ACL's resource name.
+ Pattern ACLPattern // Pattern is this deleted ACL's resource name pattern.
+ Operation ACLOperation // Operation is this deleted ACL's operation.
+ Permission kmsg.ACLPermissionType // Permission this deleted ACLs permission.
+
+ Err error // Err is non-nil if this match has an error.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// DeletedACLs contains ACLs that were deleted from a single delete filter.
+type DeletedACLs []DeletedACL
+
+// DeleteACLsResult contains the input used for a delete ACL filter, and the
+// deletes that the filter matched or the error for this filter.
+//
+// All fields but Deleted and Err are set from the request input. The response
+// sets either Deleted (potentially to nothing if the filter matched nothing)
+// or Err.
+type DeleteACLsResult struct {
+ Principal *string // Principal is the optional user that was used in this filter.
+ Host *string // Host is the optional host that was used in this filter.
+
+ Type kmsg.ACLResourceType // Type is the type of resource used for this filter.
+ Name *string // Name is the name of the resource used for this filter.
+ Pattern ACLPattern // Pattern is the name pattern used for this filter.
+ Operation ACLOperation // Operation is the operation used for this filter.
+ Permission kmsg.ACLPermissionType // Permission is permission used for this filter.
+
+ Deleted DeletedACLs // Deleted contains all ACLs this delete filter matched.
+
+ Err error // Err is non-nil if this filter has an error.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// DeleteACLsResults contains all results to deleted ACLs.
+type DeleteACLsResults []DeleteACLsResult
+
+// DeleteACLs deletes a batch of ACLs using the ACL builder, validating the
+// input before issuing the DeleteACLs request.
+//
+// If the input is invalid, or if the response fails, or if the response does
+// not contain as many ACL results as we issued in our delete request, this
+// returns an error.
+//
+// Deleting ACLs works on a filter basis: a single filter can match many ACLs.
+// For example, deleting with operation ANY matches any operation. For safety /
+// verification purposes, you an DescribeACLs with the same builder first to
+// see what would be deleted.
+func (cl *Client) DeleteACLs(ctx context.Context, b *ACLBuilder) (DeleteACLsResults, error) {
+ dels, _, err := createDelDescACL(b)
+ if err != nil {
+ return nil, err
+ }
+
+ req := kmsg.NewPtrDeleteACLsRequest()
+ req.Filters = dels
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if len(resp.Results) != len(req.Filters) {
+ return nil, fmt.Errorf("received %d results to %d filters", len(resp.Results), len(req.Filters))
+ }
+
+ var rs DeleteACLsResults
+ for i, r := range resp.Results {
+ f := &req.Filters[i]
+ var ms DeletedACLs
+ for _, m := range r.MatchingACLs {
+ ms = append(ms, DeletedACL{
+ Principal: m.Principal,
+ Host: m.Host,
+ Type: m.ResourceType,
+ Name: m.ResourceName,
+ Pattern: m.ResourcePatternType,
+ Operation: m.Operation,
+ Permission: m.PermissionType,
+ Err: kerr.ErrorForCode(m.ErrorCode),
+ ErrMessage: unptrStr(m.ErrorMessage),
+ })
+ }
+ rs = append(rs, DeleteACLsResult{
+ Principal: f.Principal,
+ Host: f.Host,
+ Type: f.ResourceType,
+ Name: f.ResourceName,
+ Pattern: f.ResourcePatternType,
+ Operation: f.Operation,
+ Permission: f.PermissionType,
+ Deleted: ms,
+ Err: kerr.ErrorForCode(r.ErrorCode),
+ ErrMessage: unptrStr(r.ErrorMessage),
+ })
+ }
+ return rs, nil
+}
+
+// DescribedACL is an ACL that was described.
+type DescribedACL struct {
+ Principal string // Principal is this described ACL's principal.
+ Host string // Host is this described ACL's host.
+
+ Type kmsg.ACLResourceType // Type is this described ACL's resource type.
+ Name string // Name is this described ACL's resource name.
+ Pattern ACLPattern // Pattern is this described ACL's resource name pattern.
+ Operation ACLOperation // Operation is this described ACL's operation.
+ Permission kmsg.ACLPermissionType // Permission this described ACLs permission.
+}
+
+// DescribedACLs contains ACLs that were described from a single describe
+// filter.
+type DescribedACLs []DescribedACL
+
+// DescribeACLsResults contains the input used for a describe ACL filter, and
+// the describes that the filter matched or the error for this filter.
+//
+// All fields but Described and Err are set from the request input. The
+// response sets either Described (potentially to nothing if the filter matched
+// nothing) or Err.
+type DescribeACLsResult struct {
+ Principal *string // Principal is the optional user that was used in this filter.
+ Host *string // Host is the optional host that was used in this filter.
+
+ Type kmsg.ACLResourceType // Type is the type of resource used for this filter.
+ Name *string // Name is the name of the resource used for this filter.
+ Pattern ACLPattern // Pattern is the name pattern used for this filter.
+ Operation ACLOperation // Operation is the operation used for this filter.
+ Permission kmsg.ACLPermissionType // Permission is permission used for this filter.
+
+ Described DescribedACLs // Described contains all ACLs this describe filter matched.
+
+ Err error // Err is non-nil if this filter has an error.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// DescribeACLsResults contains all results to described ACLs.
+type DescribeACLsResults []DescribeACLsResult
+
+// DescribeACLs describes a batch of ACLs using the ACL builder, validating the
+// input before issuing DescribeACLs requests.
+//
+// If the input is invalid, or if any response fails, this returns an error.
+//
+// Listing ACLs works on a filter basis: a single filter can match many ACLs.
+// For example, describing with operation ANY matches any operation. Under the
+// hood, this method issues one describe request per filter, because describing
+// ACLs does not work on a batch basis (unlike creating & deleting). The return
+// of this function can be used to see what would be deleted given the same
+// builder input.
+func (cl *Client) DescribeACLs(ctx context.Context, b *ACLBuilder) (DescribeACLsResults, error) {
+ _, descs, err := createDelDescACL(b)
+ if err != nil {
+ return nil, err
+ }
+
+ var (
+ ictx, cancel = context.WithCancel(ctx)
+ mu sync.Mutex
+ wg sync.WaitGroup
+ firstErr error
+ resps = make([]*kmsg.DescribeACLsResponse, len(descs))
+ )
+ defer cancel()
+ for i := range descs {
+ req := descs[i] // each req is unique per loop, we are not reusing req, this is safe
+ myIdx := i
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ resp, err := req.RequestWith(ictx, cl.cl)
+ resps[myIdx] = resp
+ if err == nil {
+ return
+ }
+ cancel()
+ mu.Lock()
+ defer mu.Unlock()
+ if firstErr == nil { // keep the first err
+ firstErr = err
+ }
+ }()
+ }
+ wg.Wait()
+ if firstErr != nil {
+ return nil, firstErr
+ }
+
+ var rs DescribeACLsResults
+ for i, r := range resps {
+ f := descs[i]
+ var ds DescribedACLs
+ for _, resource := range r.Resources {
+ for _, acl := range resource.ACLs {
+ ds = append(ds, DescribedACL{
+ Principal: acl.Principal,
+ Host: acl.Host,
+ Type: resource.ResourceType,
+ Name: resource.ResourceName,
+ Pattern: resource.ResourcePatternType,
+ Operation: acl.Operation,
+ Permission: acl.PermissionType,
+ })
+ }
+ }
+ rs = append(rs, DescribeACLsResult{
+ Principal: f.Principal,
+ Host: f.Host,
+ Type: f.ResourceType,
+ Name: f.ResourceName,
+ Pattern: f.ResourcePatternType,
+ Operation: f.Operation,
+ Permission: f.PermissionType,
+ Described: ds,
+ Err: kerr.ErrorForCode(r.ErrorCode),
+ ErrMessage: unptrStr(r.ErrorMessage),
+ })
+ }
+ return rs, nil
+}
+
+var sliceAny = []string{"any"}
+
+func createDelDescACL(b *ACLBuilder) ([]kmsg.DeleteACLsRequestFilter, []*kmsg.DescribeACLsRequest, error) {
+ if err := b.ValidateFilter(); err != nil {
+ return nil, nil, err
+ }
+
+ // As a special shortcut, if we have any allow and deny principals and
+ // hosts, we collapse these into one "any" group. The anyAny and
+ // anyAnyHosts vars are used in our looping below, and if we do this,
+ // we dup and set all the relevant fields to false to not expand them
+ // in our loops.
+ var anyAny, anyAnyHosts bool
+ if b.anyAllow && b.anyDeny && b.anyAllowHosts && b.anyDenyHosts {
+ anyAny = true
+ anyAnyHosts = true
+
+ b = b.dup()
+ b.allow = nil
+ b.allowHosts = nil
+ b.deny = nil
+ b.denyHosts = nil
+ b.anyAllow = false
+ b.anyAllowHosts = false
+ b.anyDeny = false
+ b.anyDenyHosts = false
+ }
+
+ var clusters []string
+ if b.anyCluster {
+ clusters = []string{"kafka-cluster"}
+ }
+ var deletions []kmsg.DeleteACLsRequestFilter
+ var describes []*kmsg.DescribeACLsRequest
+ for _, typeNames := range []struct {
+ t kmsg.ACLResourceType
+ names []string
+ any bool
+ }{
+ {kmsg.ACLResourceTypeAny, b.any, b.anyResource},
+ {kmsg.ACLResourceTypeTopic, b.topics, b.anyTopic},
+ {kmsg.ACLResourceTypeGroup, b.groups, b.anyGroup},
+ {kmsg.ACLResourceTypeCluster, clusters, b.anyCluster},
+ {kmsg.ACLResourceTypeTransactionalId, b.txnIDs, b.anyTxn},
+ {kmsg.ACLResourceTypeDelegationToken, b.tokens, b.anyToken},
+ } {
+ if typeNames.any {
+ typeNames.names = sliceAny
+ }
+ for _, name := range typeNames.names {
+ for _, op := range b.ops {
+ for _, perm := range []struct {
+ principals []string
+ anyPrincipal bool
+ hosts []string
+ anyHost bool
+ permType kmsg.ACLPermissionType
+ }{
+ {
+ b.allow,
+ b.anyAllow,
+ b.allowHosts,
+ b.anyAllowHosts,
+ kmsg.ACLPermissionTypeAllow,
+ },
+ {
+ b.deny,
+ b.anyDeny,
+ b.denyHosts,
+ b.anyDenyHosts,
+ kmsg.ACLPermissionTypeDeny,
+ },
+ {
+ nil,
+ anyAny,
+ nil,
+ anyAnyHosts,
+ kmsg.ACLPermissionTypeAny,
+ },
+ } {
+ if perm.anyPrincipal {
+ perm.principals = sliceAny
+ }
+ if perm.anyHost {
+ perm.hosts = sliceAny
+ }
+ for _, principal := range perm.principals {
+ for _, host := range perm.hosts {
+ deletion := kmsg.NewDeleteACLsRequestFilter()
+ describe := kmsg.NewPtrDescribeACLsRequest()
+
+ deletion.ResourceType = typeNames.t
+ describe.ResourceType = typeNames.t
+
+ if !typeNames.any {
+ deletion.ResourceName = kmsg.StringPtr(name)
+ describe.ResourceName = kmsg.StringPtr(name)
+ }
+
+ deletion.ResourcePatternType = b.pattern
+ describe.ResourcePatternType = b.pattern
+
+ deletion.Operation = op
+ describe.Operation = op
+
+ if !perm.anyPrincipal {
+ deletion.Principal = kmsg.StringPtr(principal)
+ describe.Principal = kmsg.StringPtr(principal)
+ }
+
+ if !perm.anyHost {
+ deletion.Host = kmsg.StringPtr(host)
+ describe.Host = kmsg.StringPtr(host)
+ }
+
+ deletion.PermissionType = perm.permType
+ describe.PermissionType = perm.permType
+
+ deletions = append(deletions, deletion)
+ describes = append(describes, describe)
+ }
+ }
+ }
+ }
+ }
+ }
+ return deletions, describes, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/configs.go b/vendor/github.com/twmb/franz-go/pkg/kadm/configs.go
new file mode 100644
index 0000000000000..e6e37245f7a6b
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/configs.go
@@ -0,0 +1,417 @@
+package kadm
+
+import (
+ "context"
+ "strconv"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// ConfigSynonym is a fallback value for a config.
+type ConfigSynonym struct {
+ Key string // Key is the fallback config name.
+ Value *string // Value is the fallback config value, if any (sensitive is elided).
+ Source kmsg.ConfigSource // Source is where this config synonym is defined from.
+}
+
+// Config is a configuration for a resource (topic, broker)
+type Config struct {
+ Key string // Key is the config name.
+ Value *string // Value is the config value, if any.
+ Sensitive bool // Sensitive is if this config is sensitive (if so, Value is nil).
+ Source kmsg.ConfigSource // Source is where this config is defined from.
+
+ // Synonyms contains fallback key/value pairs for this same
+ // configuration key in order or preference. That is, if a config entry
+ // is both dynamically defined and has a default value as well, the top
+ // level config will be the dynamic value, while the synonym will be
+ // the default.
+ Synonyms []ConfigSynonym
+}
+
+// MaybeValue returns the config's value if it is non-nil, otherwise an empty
+// string.
+func (c *Config) MaybeValue() string {
+ if c.Value != nil {
+ return *c.Value
+ }
+ return ""
+}
+
+// ResourceConfig contains the configuration values for a resource (topic,
+// broker, broker logger).
+type ResourceConfig struct {
+ Name string // Name is the name of this resource.
+ Configs []Config // Configs are the configs for this topic.
+ Err error // Err is any error preventing configs from loading (likely, an unknown topic).
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// ResourceConfigs contains the configuration values for many resources.
+type ResourceConfigs []ResourceConfig
+
+// On calls fn for the response config if it exists, returning the config and
+// the error returned from fn. If fn is nil, this simply returns the config.
+//
+// The fn is given a copy of the config. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the resource does not exist, this returns kerr.UnknownTopicOrPartition.
+func (rs ResourceConfigs) On(name string, fn func(*ResourceConfig) error) (ResourceConfig, error) {
+ for _, r := range rs {
+ if r.Name == name {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return ResourceConfig{}, kerr.UnknownTopicOrPartition
+}
+
+// DescribeTopicConfigs returns the configuration for the requested topics.
+//
+// This may return *ShardErrors.
+func (cl *Client) DescribeTopicConfigs(
+ ctx context.Context,
+ topics ...string,
+) (ResourceConfigs, error) {
+ if len(topics) == 0 {
+ return nil, nil
+ }
+ return cl.describeConfigs(ctx, kmsg.ConfigResourceTypeTopic, topics)
+}
+
+// DescribeBrokerConfigs returns configuration for the requested brokers. If no
+// brokers are requested, a single request is issued and any broker in the
+// cluster replies with the cluster-level dynamic config values.
+//
+// This may return *ShardErrors.
+func (cl *Client) DescribeBrokerConfigs(
+ ctx context.Context,
+ brokers ...int32,
+) (ResourceConfigs, error) {
+ var names []string
+ if len(brokers) == 0 {
+ names = append(names, "")
+ }
+ for _, b := range brokers {
+ names = append(names, strconv.Itoa(int(b)))
+ }
+ return cl.describeConfigs(ctx, kmsg.ConfigResourceTypeBroker, names)
+}
+
+func (cl *Client) describeConfigs(
+ ctx context.Context,
+ kind kmsg.ConfigResourceType,
+ names []string,
+) (ResourceConfigs, error) {
+ req := kmsg.NewPtrDescribeConfigsRequest()
+ req.IncludeSynonyms = true
+ for _, name := range names {
+ rr := kmsg.NewDescribeConfigsRequestResource()
+ rr.ResourceName = name
+ rr.ResourceType = kind
+ req.Resources = append(req.Resources, rr)
+ }
+ shards := cl.cl.RequestSharded(ctx, req)
+
+ var configs []ResourceConfig
+ return configs, shardErrEach(req, shards, func(kr kmsg.Response) error {
+ resp := kr.(*kmsg.DescribeConfigsResponse)
+ for _, r := range resp.Resources {
+ if err := maybeAuthErr(r.ErrorCode); err != nil {
+ return err
+ }
+ rc := ResourceConfig{
+ Name: r.ResourceName,
+ Err: kerr.ErrorForCode(r.ErrorCode),
+ ErrMessage: unptrStr(r.ErrorMessage),
+ }
+ for _, c := range r.Configs {
+ rcv := Config{
+ Key: c.Name,
+ Value: c.Value,
+ Sensitive: c.IsSensitive,
+ Source: c.Source,
+ }
+ for _, syn := range c.ConfigSynonyms {
+ rcv.Synonyms = append(rcv.Synonyms, ConfigSynonym{
+ Key: syn.Name,
+ Value: syn.Value,
+ Source: syn.Source,
+ })
+ }
+ rc.Configs = append(rc.Configs, rcv)
+ }
+ configs = append(configs, rc) // we are not storing in a map, no existence-check possible
+ }
+ return nil
+ })
+}
+
+// IncrementalOp is a typed int8 that is used for incrementally updating
+// configuration keys for topics and brokers.
+type IncrementalOp int8
+
+const (
+ // SetConfig is an incremental operation to set an individual config
+ // key.
+ SetConfig IncrementalOp = iota
+
+ // DeleteConfig is an incremental operation to delete an individual
+ // config key.
+ DeleteConfig
+
+ // AppendConfig is an incremental operation to append a value to a
+ // config key that is a list type.
+ AppendConfig
+
+ // SubtractConfig is an incremental operation to remove a value from a
+ // config key that is a list type.
+ SubtractConfig
+)
+
+// AlterConfig is an individual key/value operation to perform when altering
+// configs.
+//
+// This package includes a StringPtr function to aid in building config values.
+type AlterConfig struct {
+ Op IncrementalOp // Op is the incremental alter operation to perform. This is ignored for State alter functions.
+ Name string // Name is the name of the config to alter.
+ Value *string // Value is the value to use when altering, if any.
+}
+
+// AlteredConfigsResponse contains the response for an individual alteration.
+type AlterConfigsResponse struct {
+ Name string // Name is the name of this resource (topic name or broker number).
+ Err error // Err is non-nil if the config could not be altered.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// AlterConfigsResponses contains responses for many alterations.
+type AlterConfigsResponses []AlterConfigsResponse
+
+// On calls fn for the response name if it exists, returning the response and
+// the error returned from fn. If fn is nil, this simply returns the response.
+//
+// The fn is given a copy of the response. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the resource does not exist, this returns kerr.UnknownTopicOrPartition.
+func (rs AlterConfigsResponses) On(name string, fn func(*AlterConfigsResponse) error) (AlterConfigsResponse, error) {
+ for _, r := range rs {
+ if r.Name == name {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return AlterConfigsResponse{}, kerr.UnknownTopicOrPartition
+}
+
+// AlterTopicConfigs incrementally alters topic configuration values.
+//
+// This method requires talking to a cluster that supports
+// IncrementalAlterConfigs (officially introduced in Kafka v2.3, but many
+// broker reimplementations support this request even if they do not support
+// all other requests from Kafka v2.3).
+//
+// If you want to alter the entire configs state using the older AlterConfigs
+// request, use AlterTopicConfigsState.
+//
+// This may return *ShardErrors. You may consider checking
+// ValidateAlterTopicConfigs before using this method.
+func (cl *Client) AlterTopicConfigs(ctx context.Context, configs []AlterConfig, topics ...string) (AlterConfigsResponses, error) {
+ return cl.alterConfigs(ctx, false, configs, kmsg.ConfigResourceTypeTopic, topics)
+}
+
+// ValidateAlterTopicConfigs validates an incremental alter config for the given
+// topics.
+//
+// This returns exactly what AlterTopicConfigs returns, but does not actually
+// alter configurations.
+func (cl *Client) ValidateAlterTopicConfigs(ctx context.Context, configs []AlterConfig, topics ...string) (AlterConfigsResponses, error) {
+ return cl.alterConfigs(ctx, true, configs, kmsg.ConfigResourceTypeTopic, topics)
+}
+
+// AlterBrokerConfigs incrementally alters broker configuration values. If
+// brokers are specified, this updates each specific broker. If no brokers are
+// specified, this updates whole-cluster broker configuration values.
+//
+// This method requires talking to a cluster that supports
+// IncrementalAlterConfigs (officially introduced in Kafka v2.3, but many
+// broker reimplementations support this request even if they do not support
+// all other requests from Kafka v2.3).
+//
+// If you want to alter the entire configs state using the older AlterConfigs
+// request, use AlterBrokerConfigsState.
+//
+// This may return *ShardErrors. You may consider checking
+// ValidateAlterBrokerConfigs before using this method.
+func (cl *Client) AlterBrokerConfigs(ctx context.Context, configs []AlterConfig, brokers ...int32) (AlterConfigsResponses, error) {
+ var names []string
+ if len(brokers) == 0 {
+ names = append(names, "")
+ }
+ for _, broker := range brokers {
+ names = append(names, strconv.Itoa(int(broker)))
+ }
+ return cl.alterConfigs(ctx, false, configs, kmsg.ConfigResourceTypeBroker, names)
+}
+
+// ValidateAlterBrokerConfigs validates an incremental alter config for the given
+// brokers.
+//
+// This returns exactly what AlterBrokerConfigs returns, but does not actually
+// alter configurations.
+func (cl *Client) ValidateAlterBrokerConfigs(ctx context.Context, configs []AlterConfig, brokers ...int32) (AlterConfigsResponses, error) {
+ var names []string
+ if len(brokers) == 0 {
+ names = append(names, "")
+ }
+ for _, broker := range brokers {
+ names = append(names, strconv.Itoa(int(broker)))
+ }
+ return cl.alterConfigs(ctx, true, configs, kmsg.ConfigResourceTypeBroker, names)
+}
+
+func (cl *Client) alterConfigs(
+ ctx context.Context,
+ dry bool,
+ configs []AlterConfig,
+ kind kmsg.ConfigResourceType,
+ names []string,
+) (AlterConfigsResponses, error) {
+ req := kmsg.NewPtrIncrementalAlterConfigsRequest()
+ req.ValidateOnly = dry
+ for _, name := range names {
+ rr := kmsg.NewIncrementalAlterConfigsRequestResource()
+ rr.ResourceType = kind
+ rr.ResourceName = name
+ for _, config := range configs {
+ rc := kmsg.NewIncrementalAlterConfigsRequestResourceConfig()
+ rc.Name = config.Name
+ rc.Value = config.Value
+ switch config.Op {
+ case SetConfig:
+ rc.Op = kmsg.IncrementalAlterConfigOpSet
+ case DeleteConfig:
+ rc.Op = kmsg.IncrementalAlterConfigOpDelete
+ case AppendConfig:
+ rc.Op = kmsg.IncrementalAlterConfigOpAppend
+ case SubtractConfig:
+ rc.Op = kmsg.IncrementalAlterConfigOpSubtract
+ }
+ rr.Configs = append(rr.Configs, rc)
+ }
+ req.Resources = append(req.Resources, rr)
+ }
+
+ shards := cl.cl.RequestSharded(ctx, req)
+
+ var rs []AlterConfigsResponse
+ return rs, shardErrEach(req, shards, func(kr kmsg.Response) error {
+ resp := kr.(*kmsg.IncrementalAlterConfigsResponse)
+ for _, r := range resp.Resources {
+ rs = append(rs, AlterConfigsResponse{ // we are not storing in a map, no existence check possible
+ Name: r.ResourceName,
+ Err: kerr.ErrorForCode(r.ErrorCode),
+ ErrMessage: unptrStr(r.ErrorMessage),
+ })
+ }
+ return nil
+ })
+}
+
+// AlterTopicConfigsState alters the full state of topic configurations.
+// All prior configuration is lost.
+//
+// This may return *ShardErrors. You may consider checking
+// ValidateAlterTopicConfigs before using this method.
+func (cl *Client) AlterTopicConfigsState(ctx context.Context, configs []AlterConfig, topics ...string) (AlterConfigsResponses, error) {
+ return cl.alterConfigsState(ctx, false, configs, kmsg.ConfigResourceTypeTopic, topics)
+}
+
+// ValidateAlterTopicConfigs validates an AlterTopicConfigsState for the given
+// topics.
+//
+// This returns exactly what AlterTopicConfigsState returns, but does not
+// actually alter configurations.
+func (cl *Client) ValidateAlterTopicConfigsState(ctx context.Context, configs []AlterConfig, topics ...string) (AlterConfigsResponses, error) {
+ return cl.alterConfigsState(ctx, true, configs, kmsg.ConfigResourceTypeTopic, topics)
+}
+
+// AlterBrokerConfigs alters the full state of broker configurations. If
+// broker are specified, this updates each specific broker. If no brokers are
+// specified, this updates whole-cluster broker configuration values.
+// All prior configuration is lost.
+//
+// This may return *ShardErrors. You may consider checking
+// ValidateAlterBrokerConfigs before using this method.
+func (cl *Client) AlterBrokerConfigsState(ctx context.Context, configs []AlterConfig, brokers ...int32) (AlterConfigsResponses, error) {
+ var names []string
+ if len(brokers) == 0 {
+ names = append(names, "")
+ }
+ for _, broker := range brokers {
+ names = append(names, strconv.Itoa(int(broker)))
+ }
+ return cl.alterConfigsState(ctx, false, configs, kmsg.ConfigResourceTypeBroker, names)
+}
+
+// ValidateAlterBrokerConfigs validates an AlterBrokerconfigsState for the
+// given brokers.
+//
+// This returns exactly what AlterBrokerConfigs returns, but does not actually
+// alter configurations.
+func (cl *Client) ValidateAlterBrokerConfigsState(ctx context.Context, configs []AlterConfig, brokers ...int32) (AlterConfigsResponses, error) {
+ var names []string
+ if len(brokers) == 0 {
+ names = append(names, "")
+ }
+ for _, broker := range brokers {
+ names = append(names, strconv.Itoa(int(broker)))
+ }
+ return cl.alterConfigsState(ctx, true, configs, kmsg.ConfigResourceTypeBroker, names)
+}
+
+func (cl *Client) alterConfigsState(
+ ctx context.Context,
+ dry bool,
+ configs []AlterConfig,
+ kind kmsg.ConfigResourceType,
+ names []string,
+) (AlterConfigsResponses, error) {
+ req := kmsg.NewPtrAlterConfigsRequest()
+ req.ValidateOnly = dry
+ for _, name := range names {
+ rr := kmsg.NewAlterConfigsRequestResource()
+ rr.ResourceType = kind
+ rr.ResourceName = name
+ for _, config := range configs {
+ rc := kmsg.NewAlterConfigsRequestResourceConfig()
+ rc.Name = config.Name
+ rc.Value = config.Value
+ rr.Configs = append(rr.Configs, rc)
+ }
+ req.Resources = append(req.Resources, rr)
+ }
+
+ shards := cl.cl.RequestSharded(ctx, req)
+
+ var rs []AlterConfigsResponse
+ return rs, shardErrEach(req, shards, func(kr kmsg.Response) error {
+ resp := kr.(*kmsg.AlterConfigsResponse)
+ for _, r := range resp.Resources {
+ rs = append(rs, AlterConfigsResponse{ // we are not storing in a map, no existence check possible
+ Name: r.ResourceName,
+ Err: kerr.ErrorForCode(r.ErrorCode),
+ ErrMessage: unptrStr(r.ErrorMessage),
+ })
+ }
+ return nil
+ })
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/dtoken.go b/vendor/github.com/twmb/franz-go/pkg/kadm/dtoken.go
new file mode 100644
index 0000000000000..7591cf43a326d
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/dtoken.go
@@ -0,0 +1,229 @@
+package kadm
+
+import (
+ "context"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// Principal is a principal that owns or renews a delegation token. This is the
+// same as an ACL's principal, but rather than being a single string, the type
+// and name are split into two fields.
+type Principal struct {
+ Type string // Type is the type of a principal owner or renewer. If empty, this defaults to "User".
+ Name string // Name is the name of a principal owner or renewer.
+}
+
+// DelegationToken contains information about a delegation token.
+type DelegationToken struct {
+ // Owner is the owner of the delegation token.
+ Owner Principal
+ // TokenRequesterPrincipal is the principal of the creator of the
+ // token. This exists for v3+, where you can override the owner.
+ // For prior than v3, this is just the Owner.
+ TokenRequesterPrincipal Principal
+ // IssueTimestamp is timestamp the delegation token creation request
+ // is received within the broker.
+ IssueTimestamp time.Time
+ // ExpiryTimestamp is the timestamp the delegation token will expire.
+ // This field is:
+ // min(MaxTimestamp, IssueTimestamp+delegation.token.expiry.time.ms)
+ // where the default expiry is 24hr.
+ ExpiryTimestamp time.Time
+ // MaxTimestamp is the timestamp past which the delegation token cannot
+ // be renewed. This is either the requested MaxLifetime, or the
+ // broker's delegation.token.max.lifetime.ms which is 7d by default.
+ MaxTimestamp time.Time
+ // TokenID is the username of this token for use in authorization.
+ TokenID string
+ // HMAC is the password of this token for use for in authorization.
+ HMAC []byte
+ // Renewers is the list of principals that can renew this token in
+ // addition to the owner (which always can).
+ Renewers []Principal
+}
+
+// DelegationTokens contains a list of delegation tokens.
+type DelegationTokens []DelegationToken
+
+// CreateDelegationToken is a create delegation token request, allowing you to
+// create scoped tokens with the same ACLs as the creator. This allows you to
+// more easily manage authorization for a wide array of clients. All delegation
+// tokens use SCRAM-SHA-256 SASL for authorization.
+type CreateDelegationToken struct {
+ // Owner overrides the owner of the token from the principal issuing
+ // the request to the principal in this field. This allows a superuser
+ // to create tokens without requiring individual user credentials, and
+ // for a superuser to run clients on behalf of another user. These
+ // fields require Kafka 3.3+; see KIP-373 for more details.
+ Owner *Principal
+ // Renewers is a list of principals that can renew the delegation
+ // token in addition to the owner of the token. This list does not
+ // include the owner.
+ Renewers []Principal
+ // MaxLifetime is how long the delegation token is valid for.
+ // If -1, the default is the server's delegation.token.max.lifetime.ms,
+ // which is by default 7d.
+ MaxLifetime time.Duration
+}
+
+// CreateDelegationToken creates a delegation token, which is a scoped
+// SCRAM-SHA-256 username and password.
+//
+// Creating delegation tokens allows for an (ideally) quicker and easier method
+// of enabling authorization for a wide array of clients. Rather than having to
+// manage many passwords external to Kafka, you only need to manage a few
+// accounts and use those to create delegation tokens per client.
+//
+// Note that delegation tokens inherit the same ACLs as the user creating the
+// token. Thus, if you want to properly scope ACLs, you should not create
+// delegation tokens with admin accounts.
+//
+// This can return *AuthError.
+func (cl *Client) CreateDelegationToken(ctx context.Context, d CreateDelegationToken) (DelegationToken, error) {
+ req := kmsg.NewPtrCreateDelegationTokenRequest()
+ if d.Owner != nil {
+ req.OwnerPrincipalType = &d.Owner.Type
+ req.OwnerPrincipalName = &d.Owner.Name
+ }
+ for _, renewer := range d.Renewers {
+ rr := kmsg.NewCreateDelegationTokenRequestRenewer()
+ rr.PrincipalType = renewer.Type
+ rr.PrincipalName = renewer.Name
+ req.Renewers = append(req.Renewers, rr)
+ }
+ req.MaxLifetimeMillis = d.MaxLifetime.Milliseconds()
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return DelegationToken{}, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return DelegationToken{}, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return DelegationToken{}, err
+ }
+
+ t := DelegationToken{
+ Owner: Principal{
+ Type: resp.PrincipalType,
+ Name: resp.PrincipalName,
+ },
+ TokenRequesterPrincipal: Principal{
+ Type: resp.TokenRequesterPrincipalType,
+ Name: resp.TokenRequesterPrincipalName,
+ },
+ IssueTimestamp: time.UnixMilli(resp.IssueTimestamp).UTC(),
+ ExpiryTimestamp: time.UnixMilli(resp.ExpiryTimestamp).UTC(),
+ MaxTimestamp: time.UnixMilli(resp.MaxTimestamp).UTC(),
+ TokenID: resp.TokenID,
+ HMAC: resp.HMAC,
+ Renewers: append([]Principal(nil), d.Renewers...),
+ }
+ if resp.Version < 3 {
+ t.TokenRequesterPrincipal = t.Owner
+ }
+ return t, nil
+}
+
+// RenewDelegationToken renews a delegation token that has not yet hit its max
+// timestamp and returns the new expiry timestamp.
+//
+// This can return *AuthError.
+func (cl *Client) RenewDelegationToken(ctx context.Context, hmac []byte, renewTime time.Duration) (expiryTimestamp time.Time, err error) {
+ req := kmsg.NewPtrRenewDelegationTokenRequest()
+ req.HMAC = hmac
+ req.RenewTimeMillis = renewTime.Milliseconds()
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return time.Time{}, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return time.Time{}, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return time.Time{}, err
+ }
+ return time.UnixMilli(resp.ExpiryTimestamp).UTC(), nil
+}
+
+// ExpireDelegationToken changes a delegation token's expiry timestamp and
+// returns the new expiry timestamp, which is min(now+expiry, maxTimestamp).
+// This request can be used to force tokens to expire quickly, or to give
+// tokens a grace period before expiry. Using an expiry of -1 expires the token
+// immediately.
+//
+// This can return *AuthError.
+func (cl *Client) ExpireDelegationToken(ctx context.Context, hmac []byte, expiry time.Duration) (expiryTimestamp time.Time, err error) {
+ req := kmsg.NewPtrExpireDelegationTokenRequest()
+ req.HMAC = hmac
+ req.ExpiryPeriodMillis = expiry.Milliseconds()
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return time.Time{}, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return time.Time{}, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return time.Time{}, err
+ }
+ return time.UnixMilli(resp.ExpiryTimestamp).UTC(), nil
+}
+
+// DescribeDelegationTokens describes delegation tokens. This returns either
+// all delegation tokens, or returns only tokens with owners in the requested
+// owners list.
+//
+// This can return *AuthError.
+func (cl *Client) DescribeDelegationTokens(ctx context.Context, owners ...Principal) (DelegationTokens, error) {
+ req := kmsg.NewPtrDescribeDelegationTokenRequest()
+ for _, owner := range owners {
+ ro := kmsg.NewDescribeDelegationTokenRequestOwner()
+ ro.PrincipalType = owner.Type
+ ro.PrincipalName = owner.Name
+ req.Owners = append(req.Owners, ro)
+ }
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+
+ var ts DelegationTokens
+ for _, d := range resp.TokenDetails {
+ t := DelegationToken{
+ Owner: Principal{
+ Type: d.PrincipalType,
+ Name: d.PrincipalName,
+ },
+ TokenRequesterPrincipal: Principal{
+ Type: d.TokenRequesterPrincipalType,
+ Name: d.TokenRequesterPrincipalName,
+ },
+ IssueTimestamp: time.UnixMilli(d.IssueTimestamp).UTC(),
+ ExpiryTimestamp: time.UnixMilli(d.ExpiryTimestamp).UTC(),
+ MaxTimestamp: time.UnixMilli(d.MaxTimestamp).UTC(),
+ TokenID: d.TokenID,
+ HMAC: d.HMAC,
+ }
+ if resp.Version < 3 {
+ t.TokenRequesterPrincipal = t.Owner
+ }
+ for _, r := range d.Renewers {
+ t.Renewers = append(t.Renewers, Principal{
+ Type: r.PrincipalType,
+ Name: r.PrincipalName,
+ })
+ }
+ ts = append(ts, t)
+ }
+ return ts, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/errors.go b/vendor/github.com/twmb/franz-go/pkg/kadm/errors.go
new file mode 100644
index 0000000000000..878e62af93b05
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/errors.go
@@ -0,0 +1,134 @@
+package kadm
+
+import (
+ "errors"
+ "fmt"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kgo"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// AuthError can be returned from requests for resources that you are not
+// authorized for.
+type AuthError struct {
+ Err error // Err is the inner *kerr.Error authorization error.
+}
+
+func (a *AuthError) Error() string { return a.Err.Error() }
+func (a *AuthError) Unwrap() error { return a.Err }
+func (a *AuthError) Is(err error) bool { return a.Err == err }
+
+func maybeAuthErr(code int16) error {
+ switch err := kerr.ErrorForCode(code); err {
+ case kerr.ClusterAuthorizationFailed,
+ kerr.TopicAuthorizationFailed,
+ kerr.GroupAuthorizationFailed,
+ kerr.TransactionalIDAuthorizationFailed,
+ kerr.DelegationTokenAuthorizationFailed:
+ return &AuthError{err}
+ }
+ return nil
+}
+
+// ShardError is a piece of a request that failed. See ShardErrors for more
+// detail.
+type ShardError struct {
+ Req kmsg.Request // Req is a piece of the original request.
+ Err error // Err is the error that resulted in this request failing.
+
+ // Broker, if non-nil, is the broker this request was meant to be
+ // issued to. If the NodeID is -1, then this piece of the request
+ // failed before being mapped to a broker.
+ Broker BrokerDetail
+}
+
+// ShardErrors contains each individual error shard of a request.
+//
+// Under the hood, some requests to Kafka need to be mapped to brokers, split,
+// and sent to many brokers. The kgo.Client handles this all internally, but
+// returns the individual pieces that were requested as "shards". Internally,
+// each of these pieces can also fail, and they can all fail uniquely.
+//
+// The kadm package takes one further step and hides the failing pieces into
+// one meta error, the ShardErrors. Methods in this package that can return
+// this meta error are documented; if desired, you can use errors.As to check
+// and unwrap any ShardErrors return.
+//
+// If a request returns ShardErrors, it is possible that some aspects of the
+// request were still successful. You can check ShardErrors.AllFailed as a
+// shortcut for whether any of the response is usable or not.
+type ShardErrors struct {
+ Name string // Name is the name of the request these shard errors are for.
+ AllFailed bool // AllFailed indicates if the original request was entirely unsuccessful.
+ Errs []ShardError // Errs contains all individual shard errors.
+}
+
+func shardErrEach(req kmsg.Request, shards []kgo.ResponseShard, fn func(kmsg.Response) error) error {
+ return shardErrEachBroker(req, shards, func(_ BrokerDetail, resp kmsg.Response) error {
+ return fn(resp)
+ })
+}
+
+func shardErrEachBroker(req kmsg.Request, shards []kgo.ResponseShard, fn func(BrokerDetail, kmsg.Response) error) error {
+ se := ShardErrors{
+ Name: kmsg.NameForKey(req.Key()),
+ }
+ var ae *AuthError
+ for _, shard := range shards {
+ if shard.Err != nil {
+ se.Errs = append(se.Errs, ShardError{
+ Req: shard.Req,
+ Err: shard.Err,
+ Broker: shard.Meta,
+ })
+ continue
+ }
+ if err := fn(shard.Meta, shard.Resp); errors.As(err, &ae) {
+ return ae
+ }
+ }
+ se.AllFailed = len(shards) == len(se.Errs)
+ return se.into()
+}
+
+func (se *ShardErrors) into() error {
+ if se == nil || len(se.Errs) == 0 {
+ return nil
+ }
+ return se
+}
+
+// Merges two shard errors; the input errors should come from the same request.
+func mergeShardErrs(e1, e2 error) error {
+ var se1, se2 *ShardErrors
+ if !errors.As(e1, &se1) {
+ return e2
+ }
+ if !errors.As(e2, &se2) {
+ return e1
+ }
+ se1.Errs = append(se1.Errs, se2.Errs...)
+ se1.AllFailed = se1.AllFailed && se2.AllFailed
+ return se1
+}
+
+// Error returns an error indicating the name of the request that failed, the
+// number of separate errors, and the first error.
+func (e *ShardErrors) Error() string {
+ if len(e.Errs) == 0 {
+ return "INVALID: ShardErrors contains no errors!"
+ }
+ return fmt.Sprintf("request %s has %d separate shard errors, first: %s", e.Name, len(e.Errs), e.Errs[0].Err)
+}
+
+// Unwrap returns the underlying errors.
+func (e *ShardErrors) Unwrap() []error {
+ unwrapped := make([]error, 0, len(e.Errs))
+
+ for _, shardErr := range e.Errs {
+ unwrapped = append(unwrapped, shardErr.Err)
+ }
+
+ return unwrapped
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/groups.go b/vendor/github.com/twmb/franz-go/pkg/kadm/groups.go
new file mode 100644
index 0000000000000..9f1fc60b5a53b
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/groups.go
@@ -0,0 +1,1841 @@
+package kadm
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kgo"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// GroupMemberMetadata is the metadata that a client sent in a JoinGroup request.
+// This can have one of three types:
+//
+// *kmsg.ConsumerMemberMetadata, if the group's ProtocolType is "consumer"
+// *kmsg.ConnectMemberMetadata, if the group's ProtocolType is "connect"
+// []byte, if the group's ProtocolType is unknown
+type GroupMemberMetadata struct{ i any }
+
+// AsConsumer returns the metadata as a ConsumerMemberMetadata if possible.
+func (m GroupMemberMetadata) AsConsumer() (*kmsg.ConsumerMemberMetadata, bool) {
+ c, ok := m.i.(*kmsg.ConsumerMemberMetadata)
+ return c, ok
+}
+
+// AsConnect returns the metadata as ConnectMemberMetadata if possible.
+func (m GroupMemberMetadata) AsConnect() (*kmsg.ConnectMemberMetadata, bool) {
+ c, ok := m.i.(*kmsg.ConnectMemberMetadata)
+ return c, ok
+}
+
+// Raw returns the metadata as a raw byte slice, if it is neither of consumer
+// type nor connect type.
+func (m GroupMemberMetadata) Raw() ([]byte, bool) {
+ c, ok := m.i.([]byte)
+ return c, ok
+}
+
+// GroupMemberAssignment is the assignment that a leader sent / a member
+// received in a SyncGroup request. This can have one of three types:
+//
+// *kmsg.ConsumerMemberAssignment, if the group's ProtocolType is "consumer"
+// *kmsg.ConnectMemberAssignment, if the group's ProtocolType is "connect"
+// []byte, if the group's ProtocolType is unknown
+type GroupMemberAssignment struct{ i any }
+
+// AsConsumer returns the assignment as a ConsumerMemberAssignment if possible.
+func (m GroupMemberAssignment) AsConsumer() (*kmsg.ConsumerMemberAssignment, bool) {
+ c, ok := m.i.(*kmsg.ConsumerMemberAssignment)
+ return c, ok
+}
+
+// AsConnect returns the assignment as ConnectMemberAssignment if possible.
+func (m GroupMemberAssignment) AsConnect() (*kmsg.ConnectMemberAssignment, bool) {
+ c, ok := m.i.(*kmsg.ConnectMemberAssignment)
+ return c, ok
+}
+
+// Raw returns the assignment as a raw byte slice, if it is neither of consumer
+// type nor connect type.
+func (m GroupMemberAssignment) Raw() ([]byte, bool) {
+ c, ok := m.i.([]byte)
+ return c, ok
+}
+
+// DescribedGroupMember is the detail of an individual group member as returned
+// by a describe groups response.
+type DescribedGroupMember struct {
+ MemberID string // MemberID is the Kafka assigned member ID of this group member.
+ InstanceID *string // InstanceID is a potential user assigned instance ID of this group member (KIP-345).
+ ClientID string // ClientID is the Kafka client given ClientID of this group member.
+ ClientHost string // ClientHost is the host this member is running on.
+
+ Join GroupMemberMetadata // Join is what this member sent in its join group request; what it wants to consume.
+ Assigned GroupMemberAssignment // Assigned is what this member was assigned to consume by the leader.
+}
+
+// AssignedPartitions returns the set of unique topics and partitions that are
+// assigned across all members in this group.
+//
+// This function is only relevant if the group is of type "consumer".
+func (d *DescribedGroup) AssignedPartitions() TopicsSet {
+ s := make(TopicsSet)
+ for _, m := range d.Members {
+ if c, ok := m.Assigned.AsConsumer(); ok {
+ for _, t := range c.Topics {
+ s.Add(t.Topic, t.Partitions...)
+ }
+ }
+ }
+ return s
+}
+
+// DescribedGroup contains data from a describe groups response for a single
+// group.
+type DescribedGroup struct {
+ Group string // Group is the name of the described group.
+
+ Coordinator BrokerDetail // Coordinator is the coordinator broker for this group.
+ State string // State is the state this group is in (Empty, Dead, Stable, etc.).
+ ProtocolType string // ProtocolType is the type of protocol the group is using, "consumer" for normal consumers, "connect" for Kafka connect.
+ Protocol string // Protocol is the partition assignor strategy this group is using.
+ Members []DescribedGroupMember // Members contains the members of this group sorted first by InstanceID, or if nil, by MemberID.
+
+ Err error // Err is non-nil if the group could not be described.
+}
+
+// DescribedGroups contains data for multiple groups from a describe groups
+// response.
+type DescribedGroups map[string]DescribedGroup
+
+// AssignedPartitions returns the set of unique topics and partitions that are
+// assigned across all members in all groups. This is the all-group analogue to
+// DescribedGroup.AssignedPartitions.
+//
+// This function is only relevant for groups of type "consumer".
+func (ds DescribedGroups) AssignedPartitions() TopicsSet {
+ s := make(TopicsSet)
+ for _, g := range ds {
+ for _, m := range g.Members {
+ if c, ok := m.Assigned.AsConsumer(); ok {
+ for _, t := range c.Topics {
+ s.Add(t.Topic, t.Partitions...)
+ }
+ }
+ }
+ }
+ return s
+}
+
+// Sorted returns all groups sorted by group name.
+func (ds DescribedGroups) Sorted() []DescribedGroup {
+ s := make([]DescribedGroup, 0, len(ds))
+ for _, d := range ds {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].Group < s[j].Group })
+ return s
+}
+
+// On calls fn for the group if it exists, returning the group and the error
+// returned from fn. If fn is nil, this simply returns the group.
+//
+// The fn is given a shallow copy of the group. This function returns the copy
+// as well; any modifications within fn are modifications on the returned copy.
+// Modifications on a described group's inner fields are persisted to the
+// original map (because slices are pointers).
+//
+// If the group does not exist, this returns kerr.GroupIDNotFound.
+func (rs DescribedGroups) On(group string, fn func(*DescribedGroup) error) (DescribedGroup, error) {
+ if len(rs) > 0 {
+ r, ok := rs[group]
+ if ok {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return DescribedGroup{}, kerr.GroupIDNotFound
+}
+
+// Error iterates over all groups and returns the first error encountered, if
+// any.
+func (ds DescribedGroups) Error() error {
+ for _, d := range ds {
+ if d.Err != nil {
+ return d.Err
+ }
+ }
+ return nil
+}
+
+// Topics returns a sorted list of all group names.
+func (ds DescribedGroups) Names() []string {
+ all := make([]string, 0, len(ds))
+ for g := range ds {
+ all = append(all, g)
+ }
+ sort.Strings(all)
+ return all
+}
+
+// ListedGroup contains data from a list groups response for a single group.
+type ListedGroup struct {
+ Coordinator int32 // Coordinator is the node ID of the coordinator for this group.
+ Group string // Group is the name of this group.
+ ProtocolType string // ProtocolType is the type of protocol the group is using, "consumer" for normal consumers, "connect" for Kafka connect.
+ State string // State is the state this group is in (Empty, Dead, Stable, etc.; only if talking to Kafka 2.6+).
+}
+
+// ListedGroups contains information from a list groups response.
+type ListedGroups map[string]ListedGroup
+
+// Sorted returns all groups sorted by group name.
+func (ls ListedGroups) Sorted() []ListedGroup {
+ s := make([]ListedGroup, 0, len(ls))
+ for _, l := range ls {
+ s = append(s, l)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].Group < s[j].Group })
+ return s
+}
+
+// Groups returns a sorted list of all group names.
+func (ls ListedGroups) Groups() []string {
+ all := make([]string, 0, len(ls))
+ for g := range ls {
+ all = append(all, g)
+ }
+ sort.Strings(all)
+ return all
+}
+
+// ListGroups returns all groups in the cluster. If you are talking to Kafka
+// 2.6+, filter states can be used to return groups only in the requested
+// states. By default, this returns all groups. In almost all cases,
+// DescribeGroups is more useful.
+//
+// This may return *ShardErrors or *AuthError.
+func (cl *Client) ListGroups(ctx context.Context, filterStates ...string) (ListedGroups, error) {
+ req := kmsg.NewPtrListGroupsRequest()
+ req.StatesFilter = append(req.StatesFilter, filterStates...)
+ shards := cl.cl.RequestSharded(ctx, req)
+ list := make(ListedGroups)
+ return list, shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.ListGroupsResponse)
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return err
+ }
+ for _, g := range resp.Groups {
+ list[g.Group] = ListedGroup{ // group only lives on one broker, no need to exist-check
+ Coordinator: b.NodeID,
+ Group: g.Group,
+ ProtocolType: g.ProtocolType,
+ State: g.GroupState,
+ }
+ }
+ return nil
+ })
+}
+
+// DescribeGroups describes either all groups specified, or all groups in the
+// cluster if none are specified.
+//
+// This may return *ShardErrors or *AuthError.
+//
+// If no groups are specified and this method first lists groups, and list
+// groups returns a *ShardErrors, this function describes all successfully
+// listed groups and appends the list shard errors to any describe shard
+// errors.
+//
+// If only one group is described, there will be at most one request issued,
+// and there is no need to deeply inspect the error.
+func (cl *Client) DescribeGroups(ctx context.Context, groups ...string) (DescribedGroups, error) {
+ var seList *ShardErrors
+ if len(groups) == 0 {
+ listed, err := cl.ListGroups(ctx)
+ switch {
+ case err == nil:
+ case errors.As(err, &seList):
+ default:
+ return nil, err
+ }
+ groups = listed.Groups()
+ if len(groups) == 0 {
+ return nil, err
+ }
+ }
+
+ req := kmsg.NewPtrDescribeGroupsRequest()
+ req.Groups = groups
+
+ shards := cl.cl.RequestSharded(ctx, req)
+ described := make(DescribedGroups)
+ err := shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.DescribeGroupsResponse)
+ for _, rg := range resp.Groups {
+ if err := maybeAuthErr(rg.ErrorCode); err != nil {
+ return err
+ }
+ g := DescribedGroup{
+ Group: rg.Group,
+ Coordinator: b,
+ State: rg.State,
+ ProtocolType: rg.ProtocolType,
+ Protocol: rg.Protocol,
+ Err: kerr.ErrorForCode(rg.ErrorCode),
+ }
+ for _, rm := range rg.Members {
+ gm := DescribedGroupMember{
+ MemberID: rm.MemberID,
+ InstanceID: rm.InstanceID,
+ ClientID: rm.ClientID,
+ ClientHost: rm.ClientHost,
+ }
+
+ var mi, ai any
+ switch g.ProtocolType {
+ case "consumer":
+ m := new(kmsg.ConsumerMemberMetadata)
+ a := new(kmsg.ConsumerMemberAssignment)
+
+ m.ReadFrom(rm.ProtocolMetadata)
+ a.ReadFrom(rm.MemberAssignment)
+
+ mi, ai = m, a
+ case "connect":
+ m := new(kmsg.ConnectMemberMetadata)
+ a := new(kmsg.ConnectMemberAssignment)
+
+ m.ReadFrom(rm.ProtocolMetadata)
+ a.ReadFrom(rm.MemberAssignment)
+
+ mi, ai = m, a
+ default:
+ mi, ai = rm.ProtocolMetadata, rm.MemberAssignment
+ }
+
+ gm.Join = GroupMemberMetadata{mi}
+ gm.Assigned = GroupMemberAssignment{ai}
+ g.Members = append(g.Members, gm)
+ }
+ sort.Slice(g.Members, func(i, j int) bool {
+ if g.Members[i].InstanceID != nil {
+ if g.Members[j].InstanceID == nil {
+ return true
+ }
+ return *g.Members[i].InstanceID < *g.Members[j].InstanceID
+ }
+ if g.Members[j].InstanceID != nil {
+ return false
+ }
+ return g.Members[i].MemberID < g.Members[j].MemberID
+ })
+ described[g.Group] = g // group only lives on one broker, no need to exist-check
+ }
+ return nil
+ })
+
+ var seDesc *ShardErrors
+ switch {
+ case err == nil:
+ return described, seList.into()
+ case errors.As(err, &seDesc):
+ if seList != nil {
+ seDesc.Errs = append(seList.Errs, seDesc.Errs...)
+ }
+ return described, seDesc.into()
+ default:
+ return nil, err
+ }
+}
+
+// DeleteGroupResponse contains the response for an individual deleted group.
+type DeleteGroupResponse struct {
+ Group string // Group is the group this response is for.
+ Err error // Err is non-nil if the group failed to be deleted.
+}
+
+// DeleteGroupResponses contains per-group responses to deleted groups.
+type DeleteGroupResponses map[string]DeleteGroupResponse
+
+// Sorted returns all deleted group responses sorted by group name.
+func (ds DeleteGroupResponses) Sorted() []DeleteGroupResponse {
+ s := make([]DeleteGroupResponse, 0, len(ds))
+ for _, d := range ds {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].Group < s[j].Group })
+ return s
+}
+
+// On calls fn for the response group if it exists, returning the response and
+// the error returned from fn. If fn is nil, this simply returns the group.
+//
+// The fn is given a copy of the response. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the group does not exist, this returns kerr.GroupIDNotFound.
+func (rs DeleteGroupResponses) On(group string, fn func(*DeleteGroupResponse) error) (DeleteGroupResponse, error) {
+ if len(rs) > 0 {
+ r, ok := rs[group]
+ if ok {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return DeleteGroupResponse{}, kerr.GroupIDNotFound
+}
+
+// Error iterates over all groups and returns the first error encountered, if
+// any.
+func (rs DeleteGroupResponses) Error() error {
+ for _, r := range rs {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ return nil
+}
+
+// DeleteGroup deletes the specified group. This is similar to DeleteGroups,
+// but returns the kerr.ErrorForCode(response.ErrorCode) if the request/response
+// is successful.
+func (cl *Client) DeleteGroup(ctx context.Context, group string) (DeleteGroupResponse, error) {
+ rs, err := cl.DeleteGroups(ctx, group)
+ if err != nil {
+ return DeleteGroupResponse{}, err
+ }
+ g, exists := rs[group]
+ if !exists {
+ return DeleteGroupResponse{}, errors.New("requested group was not part of the delete group response")
+ }
+ return g, g.Err
+}
+
+// DeleteGroups deletes all groups specified.
+//
+// The purpose of this request is to allow operators a way to delete groups
+// after Kafka 1.1, which removed RetentionTimeMillis from offset commits. See
+// KIP-229 for more details.
+//
+// This may return *ShardErrors. This does not return on authorization
+// failures, instead, authorization failures are included in the responses.
+func (cl *Client) DeleteGroups(ctx context.Context, groups ...string) (DeleteGroupResponses, error) {
+ if len(groups) == 0 {
+ return nil, nil
+ }
+ req := kmsg.NewPtrDeleteGroupsRequest()
+ req.Groups = append(req.Groups, groups...)
+ shards := cl.cl.RequestSharded(ctx, req)
+
+ rs := make(map[string]DeleteGroupResponse)
+ return rs, shardErrEach(req, shards, func(kr kmsg.Response) error {
+ resp := kr.(*kmsg.DeleteGroupsResponse)
+ for _, g := range resp.Groups {
+ rs[g.Group] = DeleteGroupResponse{ // group is always on one broker, no need to exist-check
+ Group: g.Group,
+ Err: kerr.ErrorForCode(g.ErrorCode),
+ }
+ }
+ return nil
+ })
+}
+
+// LeaveGroupBuilder helps build a leave group request, rather than having
+// a function signature (string, string, ...string).
+//
+// All functions on this type accept and return the same pointer, allowing
+// for easy build-and-use usage.
+type LeaveGroupBuilder struct {
+ group string
+ reason *string
+ instanceIDs []*string
+}
+
+// LeaveGroup returns a LeaveGroupBuilder for the input group.
+func LeaveGroup(group string) *LeaveGroupBuilder {
+ return &LeaveGroupBuilder{
+ group: group,
+ }
+}
+
+// Reason attaches a reason to all members in the leave group request.
+// This requires Kafka 3.2+.
+func (b *LeaveGroupBuilder) Reason(reason string) *LeaveGroupBuilder {
+ b.reason = StringPtr(reason)
+ return b
+}
+
+// InstanceIDs are members to remove from a group.
+func (b *LeaveGroupBuilder) InstanceIDs(ids ...string) *LeaveGroupBuilder {
+ for _, id := range ids {
+ if id != "" {
+ b.instanceIDs = append(b.instanceIDs, StringPtr(id))
+ }
+ }
+ return b
+}
+
+// LeaveGroupResponse contains the response for an individual instance ID that
+// left a group.
+type LeaveGroupResponse struct {
+ Group string // Group is the group that was left.
+ InstanceID string // InstanceID is the instance ID that left the group.
+ MemberID string // MemberID is the member ID that left the group.
+ Err error // Err is non-nil if this member did not exist.
+}
+
+// LeaveGroupResponses contains responses for each member of a leave group
+// request. The map key is the instance ID that was removed from the group.
+type LeaveGroupResponses map[string]LeaveGroupResponse
+
+// Sorted returns all removed group members by instance ID.
+func (ls LeaveGroupResponses) Sorted() []LeaveGroupResponse {
+ s := make([]LeaveGroupResponse, 0, len(ls))
+ for _, l := range ls {
+ s = append(s, l)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].InstanceID < s[j].InstanceID })
+ return s
+}
+
+// EachError calls fn for every removed member that has a non-nil error.
+func (ls LeaveGroupResponses) EachError(fn func(l LeaveGroupResponse)) {
+ for _, l := range ls {
+ if l.Err != nil {
+ fn(l)
+ }
+ }
+}
+
+// Each calls fn for every removed member.
+func (ls LeaveGroupResponses) Each(fn func(l LeaveGroupResponse)) {
+ for _, l := range ls {
+ fn(l)
+ }
+}
+
+// Error iterates over all removed members and returns the first error
+// encountered, if any.
+func (ls LeaveGroupResponses) Error() error {
+ for _, l := range ls {
+ if l.Err != nil {
+ return l.Err
+ }
+ }
+ return nil
+}
+
+// Ok returns true if there are no errors. This is a shortcut for ls.Error() ==
+// nil.
+func (ls LeaveGroupResponses) Ok() bool {
+ return ls.Error() == nil
+}
+
+// LeaveGroup causes instance IDs to leave a group.
+//
+// This function allows manually removing members using instance IDs from a
+// group, which allows for fast scale down / host replacement (see KIP-345 for
+// more detail). This returns an *AuthErr if the use is not authorized to
+// remove members from groups.
+func (cl *Client) LeaveGroup(ctx context.Context, b *LeaveGroupBuilder) (LeaveGroupResponses, error) {
+ if b == nil || len(b.instanceIDs) == 0 {
+ return nil, nil
+ }
+ req := kmsg.NewPtrLeaveGroupRequest()
+ req.Group = b.group
+ for _, id := range b.instanceIDs {
+ m := kmsg.NewLeaveGroupRequestMember()
+ id := id
+ m.InstanceID = id
+ m.Reason = b.reason
+ req.Members = append(req.Members, m)
+ }
+
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+
+ resps := make(LeaveGroupResponses)
+ for _, m := range resp.Members {
+ if m.InstanceID == nil {
+ continue // highly unexpected, buggy kafka
+ }
+ resps[*m.InstanceID] = LeaveGroupResponse{
+ Group: b.group,
+ MemberID: m.MemberID,
+ InstanceID: *m.InstanceID,
+ Err: kerr.ErrorForCode(resp.ErrorCode),
+ }
+ }
+ return resps, err
+}
+
+// OffsetResponse contains the response for an individual offset for offset
+// methods.
+type OffsetResponse struct {
+ Offset
+ Err error // Err is non-nil if the offset operation failed.
+}
+
+// OffsetResponses contains per-partition responses to offset methods.
+type OffsetResponses map[string]map[int32]OffsetResponse
+
+// Lookup returns the offset at t and p and whether it exists.
+func (os OffsetResponses) Lookup(t string, p int32) (OffsetResponse, bool) {
+ if len(os) == 0 {
+ return OffsetResponse{}, false
+ }
+ ps := os[t]
+ if len(ps) == 0 {
+ return OffsetResponse{}, false
+ }
+ o, exists := ps[p]
+ return o, exists
+}
+
+// Keep filters the responses to only keep the input offsets.
+func (os OffsetResponses) Keep(o Offsets) {
+ os.DeleteFunc(func(r OffsetResponse) bool {
+ if len(o) == 0 {
+ return true // keep nothing, delete
+ }
+ ot := o[r.Topic]
+ if ot == nil {
+ return true // topic missing, delete
+ }
+ _, ok := ot[r.Partition]
+ return !ok // does not exist, delete
+ })
+}
+
+// Offsets returns these offset responses as offsets.
+func (os OffsetResponses) Offsets() Offsets {
+ i := make(Offsets)
+ os.Each(func(o OffsetResponse) {
+ i.Add(o.Offset)
+ })
+ return i
+}
+
+// KOffsets returns these offset responses as a kgo offset map.
+func (os OffsetResponses) KOffsets() map[string]map[int32]kgo.Offset {
+ return os.Offsets().KOffsets()
+}
+
+// DeleteFunc keeps only the offsets for which fn returns true.
+func (os OffsetResponses) KeepFunc(fn func(OffsetResponse) bool) {
+ for t, ps := range os {
+ for p, o := range ps {
+ if !fn(o) {
+ delete(ps, p)
+ }
+ }
+ if len(ps) == 0 {
+ delete(os, t)
+ }
+ }
+}
+
+// DeleteFunc deletes any offset for which fn returns true.
+func (os OffsetResponses) DeleteFunc(fn func(OffsetResponse) bool) {
+ os.KeepFunc(func(o OffsetResponse) bool { return !fn(o) })
+}
+
+// Add adds an offset for a given topic/partition to this OffsetResponses map
+// (even if it exists).
+func (os *OffsetResponses) Add(o OffsetResponse) {
+ if *os == nil {
+ *os = make(map[string]map[int32]OffsetResponse)
+ }
+ ot := (*os)[o.Topic]
+ if ot == nil {
+ ot = make(map[int32]OffsetResponse)
+ (*os)[o.Topic] = ot
+ }
+ ot[o.Partition] = o
+}
+
+// EachError calls fn for every offset that as a non-nil error.
+func (os OffsetResponses) EachError(fn func(o OffsetResponse)) {
+ for _, ps := range os {
+ for _, o := range ps {
+ if o.Err != nil {
+ fn(o)
+ }
+ }
+ }
+}
+
+// Sorted returns the responses sorted by topic and partition.
+func (os OffsetResponses) Sorted() []OffsetResponse {
+ var s []OffsetResponse
+ os.Each(func(o OffsetResponse) { s = append(s, o) })
+ sort.Slice(s, func(i, j int) bool {
+ return s[i].Topic < s[j].Topic ||
+ s[i].Topic == s[j].Topic && s[i].Partition < s[j].Partition
+ })
+ return s
+}
+
+// Each calls fn for every offset.
+func (os OffsetResponses) Each(fn func(OffsetResponse)) {
+ for _, ps := range os {
+ for _, o := range ps {
+ fn(o)
+ }
+ }
+}
+
+// Partitions returns the set of unique topics and partitions in these offsets.
+func (os OffsetResponses) Partitions() TopicsSet {
+ s := make(TopicsSet)
+ os.Each(func(o OffsetResponse) {
+ s.Add(o.Topic, o.Partition)
+ })
+ return s
+}
+
+// Error iterates over all offsets and returns the first error encountered, if
+// any. This can be used to check if an operation was entirely successful or
+// not.
+//
+// Note that offset operations can be partially successful. For example, some
+// offsets could succeed in an offset commit while others fail (maybe one topic
+// does not exist for some reason, or you are not authorized for one topic). If
+// this is something you need to worry about, you may need to check all offsets
+// manually.
+func (os OffsetResponses) Error() error {
+ for _, ps := range os {
+ for _, o := range ps {
+ if o.Err != nil {
+ return o.Err
+ }
+ }
+ }
+ return nil
+}
+
+// Ok returns true if there are no errors. This is a shortcut for os.Error() ==
+// nil.
+func (os OffsetResponses) Ok() bool {
+ return os.Error() == nil
+}
+
+// CommitOffsets issues an offset commit request for the input offsets.
+//
+// This function can be used to manually commit offsets when directly consuming
+// partitions outside of an actual consumer group. For example, if you assign
+// partitions manually, but want still use Kafka to checkpoint what you have
+// consumed, you can manually issue an offset commit request with this method.
+//
+// This does not return on authorization failures, instead, authorization
+// failures are included in the responses.
+func (cl *Client) CommitOffsets(ctx context.Context, group string, os Offsets) (OffsetResponses, error) {
+ req := kmsg.NewPtrOffsetCommitRequest()
+ req.Group = group
+ for t, ps := range os {
+ rt := kmsg.NewOffsetCommitRequestTopic()
+ rt.Topic = t
+ for p, o := range ps {
+ rp := kmsg.NewOffsetCommitRequestTopicPartition()
+ rp.Partition = p
+ rp.Offset = o.At
+ rp.LeaderEpoch = o.LeaderEpoch
+ if len(o.Metadata) > 0 {
+ rp.Metadata = kmsg.StringPtr(o.Metadata)
+ }
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+
+ rs := make(OffsetResponses)
+ for _, t := range resp.Topics {
+ rt := make(map[int32]OffsetResponse)
+ rs[t.Topic] = rt
+ for _, p := range t.Partitions {
+ rt[p.Partition] = OffsetResponse{
+ Offset: os[t.Topic][p.Partition],
+ Err: kerr.ErrorForCode(p.ErrorCode),
+ }
+ }
+ }
+
+ for t, ps := range os {
+ respt := rs[t]
+ if respt == nil {
+ respt = make(map[int32]OffsetResponse)
+ rs[t] = respt
+ }
+ for p, o := range ps {
+ if _, exists := respt[p]; exists {
+ continue
+ }
+ respt[p] = OffsetResponse{
+ Offset: o,
+ Err: errOffsetCommitMissing,
+ }
+ }
+ }
+
+ return rs, nil
+}
+
+var errOffsetCommitMissing = errors.New("partition missing in commit response")
+
+// CommitAllOffsets is identical to CommitOffsets, but returns an error if the
+// offset commit was successful, but some offset within the commit failed to be
+// committed.
+//
+// This is a shortcut function provided to avoid checking two errors, but you
+// must be careful with this if partially successful commits can be a problem
+// for you.
+func (cl *Client) CommitAllOffsets(ctx context.Context, group string, os Offsets) error {
+ commits, err := cl.CommitOffsets(ctx, group, os)
+ if err != nil {
+ return err
+ }
+ return commits.Error()
+}
+
+// FetchOffsets issues an offset fetch requests for all topics and partitions
+// in the group. Because Kafka returns only partitions you are authorized to
+// fetch, this only returns an auth error if you are not authorized to describe
+// the group at all.
+//
+// This method requires talking to Kafka v0.11+.
+func (cl *Client) FetchOffsets(ctx context.Context, group string) (OffsetResponses, error) {
+ req := kmsg.NewPtrOffsetFetchRequest()
+ req.Group = group
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ rs := make(OffsetResponses)
+ for _, t := range resp.Topics {
+ rt := make(map[int32]OffsetResponse)
+ rs[t.Topic] = rt
+ for _, p := range t.Partitions {
+ if err := maybeAuthErr(p.ErrorCode); err != nil {
+ return nil, err
+ }
+ var meta string
+ if p.Metadata != nil {
+ meta = *p.Metadata
+ }
+ rt[p.Partition] = OffsetResponse{
+ Offset: Offset{
+ Topic: t.Topic,
+ Partition: p.Partition,
+ At: p.Offset,
+ LeaderEpoch: p.LeaderEpoch,
+ Metadata: meta,
+ },
+ Err: kerr.ErrorForCode(p.ErrorCode),
+ }
+ }
+ }
+ return rs, nil
+}
+
+// FetchAllGroupTopics is a kadm "internal" topic name that can be used in
+// [FetchOffsetsForTopics]. By default, [FetchOffsetsForTopics] only returns
+// topics that are explicitly requested. Other topics that may be committed to
+// in the group are not returned. Using FetchAllRequestedTopics switches the
+// behavior to return the union of all committed topics and all requested
+// topics.
+const FetchAllGroupTopics = "|fetch-all-group-topics|"
+
+// FetchOffsetsForTopics is a helper function that returns the currently
+// committed offsets for the given group, as well as default -1 offsets for any
+// topic/partition that does not yet have a commit.
+//
+// If any partition fetched or listed has an error, this function returns an
+// error. The returned offset responses are ready to be used or converted
+// directly to pure offsets with `Into`, and again into kgo offsets with
+// another `Into`.
+//
+// By default, this function returns offsets for only the requested topics. You
+// can use the special "topic" [FetchAllGroupTopics] to return all committed-to
+// topics in addition to all requested topics.
+func (cl *Client) FetchOffsetsForTopics(ctx context.Context, group string, topics ...string) (OffsetResponses, error) {
+ os := make(Offsets)
+
+ var all bool
+ keept := topics[:0]
+ for _, topic := range topics {
+ if topic == FetchAllGroupTopics {
+ all = true
+ continue
+ }
+ keept = append(keept, topic)
+ }
+ topics = keept
+
+ if !all && len(topics) == 0 {
+ return make(OffsetResponses), nil
+ }
+
+ // We have to request metadata to learn all partitions in all the
+ // topics. The default returned offset for all partitions is filled in
+ // to be -1.
+ if len(topics) > 0 {
+ listed, err := cl.ListTopics(ctx, topics...)
+ if err != nil {
+ return nil, fmt.Errorf("unable to list topics: %w", err)
+ }
+
+ for _, topic := range topics {
+ t := listed[topic]
+ if t.Err != nil {
+ return nil, fmt.Errorf("unable to describe topics, topic err: %w", t.Err)
+ }
+ for _, p := range t.Partitions {
+ os.AddOffset(topic, p.Partition, -1, -1)
+ }
+ }
+ }
+
+ resps, err := cl.FetchOffsets(ctx, group)
+ if err != nil {
+ return nil, fmt.Errorf("unable to fetch offsets: %w", err)
+ }
+ if err := resps.Error(); err != nil {
+ return nil, fmt.Errorf("offset fetches had a load error, first error: %w", err)
+ }
+
+ // For any topic (and any partition) we explicitly asked for, if the
+ // partition does not exist in the response, we fill the default -1
+ // from above.
+ os.Each(func(o Offset) {
+ if _, ok := resps.Lookup(o.Topic, o.Partition); !ok {
+ resps.Add(OffsetResponse{Offset: o})
+ }
+ })
+
+ // If we are not requesting all group offsets, then we strip any topic
+ // that was not explicitly requested.
+ if !all {
+ tset := make(map[string]struct{})
+ for _, t := range topics {
+ tset[t] = struct{}{}
+ }
+ for t := range resps {
+ if _, ok := tset[t]; !ok {
+ delete(resps, t)
+ }
+ }
+ }
+ return resps, nil
+}
+
+// FetchOffsetsResponse contains a fetch offsets response for a single group.
+type FetchOffsetsResponse struct {
+ Group string // Group is the offsets these fetches correspond to.
+ Fetched OffsetResponses // Fetched contains offsets fetched for this group, if any.
+ Err error // Err contains any error preventing offsets from being fetched.
+}
+
+// CommittedPartitions returns the set of unique topics and partitions that
+// have been committed to in this group.
+func (r FetchOffsetsResponse) CommittedPartitions() TopicsSet {
+ return r.Fetched.Partitions()
+}
+
+// FetchOFfsetsResponses contains responses for many fetch offsets requests.
+type FetchOffsetsResponses map[string]FetchOffsetsResponse
+
+// EachError calls fn for every response that as a non-nil error.
+func (rs FetchOffsetsResponses) EachError(fn func(FetchOffsetsResponse)) {
+ for _, r := range rs {
+ if r.Err != nil {
+ fn(r)
+ }
+ }
+}
+
+// AllFailed returns whether all fetch offsets requests failed.
+func (rs FetchOffsetsResponses) AllFailed() bool {
+ var n int
+ rs.EachError(func(FetchOffsetsResponse) { n++ })
+ return len(rs) > 0 && n == len(rs)
+}
+
+// CommittedPartitions returns the set of unique topics and partitions that
+// have been committed to across all members in all responses. This is the
+// all-group analogue to FetchOffsetsResponse.CommittedPartitions.
+func (rs FetchOffsetsResponses) CommittedPartitions() TopicsSet {
+ s := make(TopicsSet)
+ for _, r := range rs {
+ s.Merge(r.CommittedPartitions())
+ }
+ return s
+}
+
+// On calls fn for the response group if it exists, returning the response and
+// the error returned from fn. If fn is nil, this simply returns the group.
+//
+// The fn is given a copy of the response. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the group does not exist, this returns kerr.GroupIDNotFound.
+func (rs FetchOffsetsResponses) On(group string, fn func(*FetchOffsetsResponse) error) (FetchOffsetsResponse, error) {
+ if len(rs) > 0 {
+ r, ok := rs[group]
+ if ok {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return FetchOffsetsResponse{}, kerr.GroupIDNotFound
+}
+
+// Error iterates over all responses and returns the first error encountered,
+// if any.
+func (rs FetchOffsetsResponses) Error() error {
+ for _, r := range rs {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ return nil
+}
+
+// FetchManyOffsets issues a fetch offsets requests for each group specified.
+//
+// This function is a batch version of FetchOffsets. FetchOffsets and
+// CommitOffsets are important to provide as simple APIs for users that manage
+// group offsets outside of a consumer group. Each individual group may have an
+// auth error.
+func (cl *Client) FetchManyOffsets(ctx context.Context, groups ...string) FetchOffsetsResponses {
+ fetched := make(FetchOffsetsResponses)
+ if len(groups) == 0 {
+ return fetched
+ }
+
+ req := kmsg.NewPtrOffsetFetchRequest()
+ for _, group := range groups {
+ rg := kmsg.NewOffsetFetchRequestGroup()
+ rg.Group = group
+ req.Groups = append(req.Groups, rg)
+ }
+
+ groupErr := func(g string, err error) {
+ fetched[g] = FetchOffsetsResponse{
+ Group: g,
+ Err: err,
+ }
+ }
+ allGroupsErr := func(req *kmsg.OffsetFetchRequest, err error) {
+ for _, g := range req.Groups {
+ groupErr(g.Group, err)
+ }
+ }
+
+ shards := cl.cl.RequestSharded(ctx, req)
+ for _, shard := range shards {
+ req := shard.Req.(*kmsg.OffsetFetchRequest)
+ if shard.Err != nil {
+ allGroupsErr(req, shard.Err)
+ continue
+ }
+ resp := shard.Resp.(*kmsg.OffsetFetchResponse)
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ allGroupsErr(req, err)
+ continue
+ }
+ for _, g := range resp.Groups {
+ if err := maybeAuthErr(g.ErrorCode); err != nil {
+ groupErr(g.Group, err)
+ continue
+ }
+ rs := make(OffsetResponses)
+ fg := FetchOffsetsResponse{
+ Group: g.Group,
+ Fetched: rs,
+ Err: kerr.ErrorForCode(g.ErrorCode),
+ }
+ fetched[g.Group] = fg // group coordinator owns all of a group, no need to check existence
+ for _, t := range g.Topics {
+ rt := make(map[int32]OffsetResponse)
+ rs[t.Topic] = rt
+ for _, p := range t.Partitions {
+ var meta string
+ if p.Metadata != nil {
+ meta = *p.Metadata
+ }
+ rt[p.Partition] = OffsetResponse{
+ Offset: Offset{
+ Topic: t.Topic,
+ Partition: p.Partition,
+ At: p.Offset,
+ LeaderEpoch: p.LeaderEpoch,
+ Metadata: meta,
+ },
+ Err: kerr.ErrorForCode(p.ErrorCode),
+ }
+ }
+ }
+ }
+ }
+ return fetched
+}
+
+// DeleteOffsetsResponses contains the per topic, per partition errors. If an
+// offset deletion for a partition was successful, the error will be nil.
+type DeleteOffsetsResponses map[string]map[int32]error
+
+// Lookup returns the response at t and p and whether it exists.
+func (ds DeleteOffsetsResponses) Lookup(t string, p int32) (error, bool) {
+ if len(ds) == 0 {
+ return nil, false
+ }
+ ps := ds[t]
+ if len(ps) == 0 {
+ return nil, false
+ }
+ r, exists := ps[p]
+ return r, exists
+}
+
+// EachError calls fn for every partition that as a non-nil deletion error.
+func (ds DeleteOffsetsResponses) EachError(fn func(string, int32, error)) {
+ for t, ps := range ds {
+ for p, err := range ps {
+ if err != nil {
+ fn(t, p, err)
+ }
+ }
+ }
+}
+
+// Error iterates over all responses and returns the first error encountered,
+// if any.
+func (ds DeleteOffsetsResponses) Error() error {
+ for _, ps := range ds {
+ for _, err := range ps {
+ if err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+// DeleteOffsets deletes offsets for the given group.
+//
+// Originally, offset commits were persisted in Kafka for some retention time.
+// This posed problematic for infrequently committing consumers, so the
+// retention time concept was removed in Kafka v2.1 in favor of deleting
+// offsets for a group only when the group became empty. However, if a group
+// stops consuming from a topic, then the offsets will persist and lag
+// monitoring for the group will notice an ever increasing amount of lag for
+// these no-longer-consumed topics. Thus, Kafka v2.4 introduced an OffsetDelete
+// request to allow admins to manually delete offsets for no longer consumed
+// topics.
+//
+// This method requires talking to Kafka v2.4+. This returns an *AuthErr if the
+// user is not authorized to delete offsets in the group at all. This does not
+// return on per-topic authorization failures, instead, per-topic authorization
+// failures are included in the responses.
+func (cl *Client) DeleteOffsets(ctx context.Context, group string, s TopicsSet) (DeleteOffsetsResponses, error) {
+ if len(s) == 0 {
+ return nil, nil
+ }
+
+ req := kmsg.NewPtrOffsetDeleteRequest()
+ req.Group = group
+ for t, ps := range s {
+ rt := kmsg.NewOffsetDeleteRequestTopic()
+ rt.Topic = t
+ for p := range ps {
+ rp := kmsg.NewOffsetDeleteRequestTopicPartition()
+ rp.Partition = p
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+
+ r := make(DeleteOffsetsResponses)
+ for _, t := range resp.Topics {
+ rt := make(map[int32]error)
+ r[t.Topic] = rt
+ for _, p := range t.Partitions {
+ rt[p.Partition] = kerr.ErrorForCode(p.ErrorCode)
+ }
+ }
+ return r, nil
+}
+
+// GroupMemberLag is the lag between a group member's current offset commit and
+// the current end offset.
+//
+// If either the offset commits have load errors, or the listed end offsets
+// have load errors, the Lag field will be -1 and the Err field will be set (to
+// the first of either the commit error, or else the list error).
+//
+// If the group is in the Empty state, lag is calculated for all partitions in
+// a topic, but the member is nil. The calculate function assumes that any
+// assigned topic is meant to be entirely consumed. If the group is Empty and
+// topics could not be listed, some partitions may be missing.
+type GroupMemberLag struct {
+ // Member is a reference to the group member consuming this partition.
+ // If the group is in state Empty, the member will be nil.
+ Member *DescribedGroupMember
+ Topic string // Topic is the topic this lag is for.
+ Partition int32 // Partition is the partition this lag is for.
+
+ Commit Offset // Commit is this member's current offset commit.
+ Start ListedOffset // Start is a reference to the start of this partition, if provided. Start offsets are optional; if not provided, Start.Err is a non-nil error saying this partition is missing from list offsets. This is always present if lag is calculated via Client.Lag.
+ End ListedOffset // End is a reference to the end offset of this partition.
+ Lag int64 // Lag is how far behind this member is, or -1 if there is a commit error or list offset error.
+
+ Err error // Err is either the commit error, or the list end offsets error, or nil.
+}
+
+// IsEmpty returns if the this lag is for a group in the Empty state.
+func (g *GroupMemberLag) IsEmpty() bool { return g.Member == nil }
+
+// GroupLag is the per-topic, per-partition lag of members in a group.
+type GroupLag map[string]map[int32]GroupMemberLag
+
+// Lookup returns the lag at t and p and whether it exists.
+func (l GroupLag) Lookup(t string, p int32) (GroupMemberLag, bool) {
+ if len(l) == 0 {
+ return GroupMemberLag{}, false
+ }
+ ps := l[t]
+ if len(ps) == 0 {
+ return GroupMemberLag{}, false
+ }
+ m, exists := ps[p]
+ return m, exists
+}
+
+// Sorted returns the per-topic, per-partition lag by member sorted in order by
+// topic then partition.
+func (l GroupLag) Sorted() []GroupMemberLag {
+ var all []GroupMemberLag
+ for _, ps := range l {
+ for _, l := range ps {
+ all = append(all, l)
+ }
+ }
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ if l.Topic < r.Topic {
+ return true
+ }
+ if l.Topic > r.Topic {
+ return false
+ }
+ return l.Partition < r.Partition
+ })
+ return all
+}
+
+// IsEmpty returns if the group is empty.
+func (l GroupLag) IsEmpty() bool {
+ for _, ps := range l {
+ for _, m := range ps {
+ return m.IsEmpty()
+ }
+ }
+ return false
+}
+
+// Total returns the total lag across all topics.
+func (l GroupLag) Total() int64 {
+ var tot int64
+ for _, tl := range l.TotalByTopic() {
+ tot += tl.Lag
+ }
+ return tot
+}
+
+// TotalByTopic returns the total lag for each topic.
+func (l GroupLag) TotalByTopic() GroupTopicsLag {
+ m := make(map[string]TopicLag)
+ for t, ps := range l {
+ mt := TopicLag{
+ Topic: t,
+ }
+ for _, l := range ps {
+ if l.Lag > 0 {
+ mt.Lag += l.Lag
+ }
+ }
+ m[t] = mt
+ }
+ return m
+}
+
+// GroupTopicsLag is the total lag per topic within a group.
+type GroupTopicsLag map[string]TopicLag
+
+// TopicLag is the lag for an individual topic within a group.
+type TopicLag struct {
+ Topic string
+ Lag int64
+}
+
+// Sorted returns the per-topic lag, sorted by topic.
+func (l GroupTopicsLag) Sorted() []TopicLag {
+ var all []TopicLag
+ for _, tl := range l {
+ all = append(all, tl)
+ }
+ sort.Slice(all, func(i, j int) bool {
+ return all[i].Topic < all[j].Topic
+ })
+ return all
+}
+
+// DescribedGroupLag contains a described group and its lag, or the errors that
+// prevent the lag from being calculated.
+type DescribedGroupLag struct {
+ Group string // Group is the group name.
+
+ Coordinator BrokerDetail // Coordinator is the coordinator broker for this group.
+ State string // State is the state this group is in (Empty, Dead, Stable, etc.).
+ ProtocolType string // ProtocolType is the type of protocol the group is using, "consumer" for normal consumers, "connect" for Kafka connect.
+ Protocol string // Protocol is the partition assignor strategy this group is using.
+ Members []DescribedGroupMember // Members contains the members of this group sorted first by InstanceID, or if nil, by MemberID.
+ Lag GroupLag // Lag is the lag for the group.
+
+ DescribeErr error // DescribeErr is the error returned from describing the group, if any.
+ FetchErr error // FetchErr is the error returned from fetching offsets, if any.
+}
+
+// Err returns the first of DescribeErr or FetchErr that is non-nil.
+func (l *DescribedGroupLag) Error() error {
+ if l.DescribeErr != nil {
+ return l.DescribeErr
+ }
+ return l.FetchErr
+}
+
+// DescribedGroupLags is a map of group names to the described group with its
+// lag, or error for those groups.
+type DescribedGroupLags map[string]DescribedGroupLag
+
+// Sorted returns all lags sorted by group name.
+func (ls DescribedGroupLags) Sorted() []DescribedGroupLag {
+ s := make([]DescribedGroupLag, 0, len(ls))
+ for _, l := range ls {
+ s = append(s, l)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].Group < s[j].Group })
+ return s
+}
+
+// EachError calls fn for every group that has a non-nil error.
+func (ls DescribedGroupLags) EachError(fn func(l DescribedGroupLag)) {
+ for _, l := range ls {
+ if l.Error() != nil {
+ fn(l)
+ }
+ }
+}
+
+// Each calls fn for every group.
+func (ls DescribedGroupLags) Each(fn func(l DescribedGroupLag)) {
+ for _, l := range ls {
+ fn(l)
+ }
+}
+
+// Error iterates over all groups and returns the first error encountered, if
+// any.
+func (ls DescribedGroupLags) Error() error {
+ for _, l := range ls {
+ if l.Error() != nil {
+ return l.Error()
+ }
+ }
+ return nil
+}
+
+// Ok returns true if there are no errors. This is a shortcut for ls.Error() ==
+// nil.
+func (ls DescribedGroupLags) Ok() bool {
+ return ls.Error() == nil
+}
+
+// Lag returns the lag for all input groups. This function is a shortcut for
+// the steps required to use CalculateGroupLagWithStartOffsets properly, with
+// some opinionated choices for error handling since calculating lag is
+// multi-request process. If a group cannot be described or the offsets cannot
+// be fetched, an error is returned for the group. If any topic cannot have its
+// end offsets listed, the lag for the partition has a corresponding error. If
+// any request fails with an auth error, this returns *AuthError.
+func (cl *Client) Lag(ctx context.Context, groups ...string) (DescribedGroupLags, error) {
+ set := make(map[string]struct{}, len(groups))
+ for _, g := range groups {
+ set[g] = struct{}{}
+ }
+ rem := func() []string {
+ groups = groups[:0]
+ for g := range set {
+ groups = append(groups, g)
+ }
+ return groups
+ }
+ lags := make(DescribedGroupLags)
+
+ described, err := cl.DescribeGroups(ctx, rem()...)
+ // For auth err: always return.
+ // For shard errors, if we had some partial success, then we continue
+ // to the rest of the logic in this function.
+ // If every shard failed, or on all other errors, we return.
+ var ae *AuthError
+ var se *ShardErrors
+ switch {
+ case errors.As(err, &ae):
+ return nil, err
+ case errors.As(err, &se) && !se.AllFailed:
+ for _, se := range se.Errs {
+ for _, g := range se.Req.(*kmsg.DescribeGroupsRequest).Groups {
+ lags[g] = DescribedGroupLag{
+ Group: g,
+ Coordinator: se.Broker,
+ DescribeErr: se.Err,
+ }
+ delete(set, g)
+ }
+ }
+ case err != nil:
+ return nil, err
+ }
+ for _, g := range described {
+ lags[g.Group] = DescribedGroupLag{
+ Group: g.Group,
+ Coordinator: g.Coordinator,
+ State: g.State,
+ ProtocolType: g.ProtocolType,
+ Protocol: g.Protocol,
+ Members: g.Members,
+ DescribeErr: g.Err,
+ }
+ if g.Err != nil {
+ delete(set, g.Group)
+ continue
+ }
+
+ // If the input set of groups is empty, DescribeGroups returns all groups.
+ // We add to `set` here so that the Lag function itself can calculate
+ // lag for all groups.
+ set[g.Group] = struct{}{}
+ }
+ if len(set) == 0 {
+ return lags, nil
+ }
+
+ // Same thought here. For auth errors, we always return.
+ // If a group offset fetch failed, we delete it from described
+ // because we cannot calculate lag for it.
+ fetched := cl.FetchManyOffsets(ctx, rem()...)
+ for _, r := range fetched {
+ switch {
+ case errors.As(r.Err, &ae):
+ return nil, err
+ case r.Err != nil:
+ l := lags[r.Group]
+ l.FetchErr = r.Err
+ lags[r.Group] = l
+ delete(set, r.Group)
+ delete(described, r.Group)
+ }
+ }
+ if len(set) == 0 {
+ return lags, nil
+ }
+
+ // We have to list the start & end offset for all assigned and
+ // committed partitions.
+ var startOffsets, endOffsets ListedOffsets
+ listPartitions := described.AssignedPartitions()
+ listPartitions.Merge(fetched.CommittedPartitions())
+ if topics := listPartitions.Topics(); len(topics) > 0 {
+ for _, list := range []struct {
+ fn func(context.Context, ...string) (ListedOffsets, error)
+ dst *ListedOffsets
+ }{
+ {cl.ListStartOffsets, &startOffsets},
+ {cl.ListEndOffsets, &endOffsets},
+ } {
+ listed, err := list.fn(ctx, topics...)
+ *list.dst = listed
+ // As above: return on auth error. If there are shard errors,
+ // the topics will be missing in the response and then
+ // CalculateGroupLag will return UnknownTopicOrPartition.
+ switch {
+ case errors.As(err, &ae):
+ return nil, err
+ case errors.As(err, &se):
+ // do nothing: these show up as errListMissing
+ case err != nil:
+ return nil, err
+ }
+ // For anything that lists with a single -1 partition, the
+ // topic does not exist. We add an UnknownTopicOrPartition
+ // error for all partitions that were committed to, so that
+ // this shows up in the lag output as UnknownTopicOrPartition
+ // rather than errListMissing.
+ for t, ps := range listed {
+ if len(ps) != 1 {
+ continue
+ }
+ if _, ok := ps[-1]; !ok {
+ continue
+ }
+ delete(ps, -1)
+ for p := range listPartitions[t] {
+ ps[p] = ListedOffset{
+ Topic: t,
+ Partition: p,
+ Err: kerr.UnknownTopicOrPartition,
+ }
+ }
+ }
+ }
+ }
+
+ for _, g := range described {
+ l := lags[g.Group]
+ l.Lag = CalculateGroupLagWithStartOffsets(g, fetched[g.Group].Fetched, startOffsets, endOffsets)
+ lags[g.Group] = l
+ }
+ return lags, nil
+}
+
+var noOffsets = make(ListedOffsets)
+
+// CalculateGroupLagWithStartOffsets returns the per-partition lag of all
+// members in a group. This function slightly expands on [CalculateGroupLag] to
+// handle calculating lag for partitions that (1) have no commits AND (2) have
+// some segments deleted (cleanup.policy=delete) such that the log start offset
+// is non-zero.
+//
+// As an example, if a group is consuming a partition with log end offset 30
+// and log start offset 10 and has not yet committed to the group, this
+// function can correctly tell you that the lag is 20, whereas
+// CalculateGroupLag would tell you the lag is 30.
+//
+// This function accepts 'nil' for startOffsets, which will result in the same
+// behavior as CalculateGroupLag. This function is useful if you have
+// infrequently committing groups against topics that have segments being
+// deleted.
+func CalculateGroupLagWithStartOffsets(
+ group DescribedGroup,
+ commit OffsetResponses,
+ startOffsets ListedOffsets,
+ endOffsets ListedOffsets,
+) GroupLag {
+ if commit == nil { // avoid panics below
+ commit = make(OffsetResponses)
+ }
+ if startOffsets == nil {
+ startOffsets = noOffsets
+ }
+ if endOffsets == nil {
+ endOffsets = noOffsets
+ }
+ if group.State == "Empty" {
+ return calculateEmptyLag(commit, startOffsets, endOffsets)
+ }
+
+ l := make(map[string]map[int32]GroupMemberLag)
+ for mi, m := range group.Members {
+ c, ok := m.Assigned.AsConsumer()
+ if !ok {
+ continue
+ }
+ for _, t := range c.Topics {
+ lt := l[t.Topic]
+ if lt == nil {
+ lt = make(map[int32]GroupMemberLag)
+ l[t.Topic] = lt
+ }
+
+ tcommit := commit[t.Topic]
+ tstart := startOffsets[t.Topic]
+ tend := endOffsets[t.Topic]
+ for _, p := range t.Partitions {
+ var (
+ pcommit = OffsetResponse{Offset: Offset{
+ Topic: t.Topic,
+ Partition: p,
+ At: -1,
+ }}
+ pend = ListedOffset{
+ Topic: t.Topic,
+ Partition: p,
+ Err: errListMissing,
+ }
+ pstart = pend
+ perr error
+ )
+
+ if tcommit != nil {
+ if pcommitActual, ok := tcommit[p]; ok {
+ pcommit = pcommitActual
+ }
+ }
+ perr = errListMissing
+ if tend != nil {
+ if pendActual, ok := tend[p]; ok {
+ pend = pendActual
+ perr = nil
+ }
+ }
+ if perr == nil {
+ if perr = pcommit.Err; perr == nil {
+ perr = pend.Err
+ }
+ }
+ if tstart != nil {
+ if pstartActual, ok := tstart[p]; ok {
+ pstart = pstartActual
+ }
+ }
+
+ lag := int64(-1)
+ if perr == nil {
+ lag = pend.Offset
+ if pstart.Err == nil {
+ lag = pend.Offset - pstart.Offset
+ }
+ if pcommit.At >= 0 {
+ lag = pend.Offset - pcommit.At
+ }
+ // It is possible for a commit to be after the
+ // end, in which case we will round to 0. We do
+ // this check here to also handle a potential non-commit
+ // weird pend < pstart scenario where a segment
+ // was deleted between listing offsets.
+ if lag < 0 {
+ lag = 0
+ }
+ }
+
+ lt[p] = GroupMemberLag{
+ Member: &group.Members[mi],
+ Topic: t.Topic,
+ Partition: p,
+ Commit: pcommit.Offset,
+ Start: pstart,
+ End: pend,
+ Lag: lag,
+ Err: perr,
+ }
+
+ }
+ }
+ }
+
+ return l
+}
+
+// CalculateGroupLag returns the per-partition lag of all members in a group.
+// The input to this method is the returns from the following methods (make
+// sure to check shard errors):
+//
+// // Note that FetchOffsets exists to fetch only one group's offsets,
+// // but some of the code below slightly changes.
+// groups := DescribeGroups(ctx, group)
+// commits := FetchManyOffsets(ctx, group)
+// var endOffsets ListedOffsets
+// listPartitions := described.AssignedPartitions()
+// listPartitions.Merge(commits.CommittedPartitions()
+// if topics := listPartitions.Topics(); len(topics) > 0 {
+// endOffsets = ListEndOffsets(ctx, listPartitions.Topics())
+// }
+// for _, group := range groups {
+// lag := CalculateGroupLag(group, commits[group.Group].Fetched, endOffsets)
+// }
+//
+// If assigned partitions are missing in the listed end offsets, the partition
+// will have an error indicating it is missing. A missing topic or partition in
+// the commits is assumed to be nothing committing yet.
+func CalculateGroupLag(
+ group DescribedGroup,
+ commit OffsetResponses,
+ endOffsets ListedOffsets,
+) GroupLag {
+ return CalculateGroupLagWithStartOffsets(group, commit, nil, endOffsets)
+}
+
+func calculateEmptyLag(commit OffsetResponses, startOffsets, endOffsets ListedOffsets) GroupLag {
+ l := make(map[string]map[int32]GroupMemberLag)
+ for t, ps := range commit {
+ lt := l[t]
+ if lt == nil {
+ lt = make(map[int32]GroupMemberLag)
+ l[t] = lt
+ }
+ tstart := startOffsets[t]
+ tend := endOffsets[t]
+ for p, pcommit := range ps {
+ var (
+ pend = ListedOffset{
+ Topic: t,
+ Partition: p,
+ Err: errListMissing,
+ }
+ pstart = pend
+ perr error
+ )
+
+ // In order of priority, perr (the error on the Lag
+ // calculation) is non-nil if:
+ //
+ // * The topic is missing from end ListOffsets
+ // * The partition is missing from end ListOffsets
+ // * OffsetFetch has an error on the partition
+ // * ListOffsets has an error on the partition
+ //
+ // If we have no error, then we can calculate lag.
+ // We *do* allow an error on start ListedOffsets;
+ // if there are no start offsets or the start offset
+ // has an error, it is not used for lag calculation.
+ perr = errListMissing
+ if tend != nil {
+ if pendActual, ok := tend[p]; ok {
+ pend = pendActual
+ perr = nil
+ }
+ }
+ if perr == nil {
+ if perr = pcommit.Err; perr == nil {
+ perr = pend.Err
+ }
+ }
+ if tstart != nil {
+ if pstartActual, ok := tstart[p]; ok {
+ pstart = pstartActual
+ }
+ }
+
+ lag := int64(-1)
+ if perr == nil {
+ lag = pend.Offset
+ if pstart.Err == nil {
+ lag = pend.Offset - pstart.Offset
+ }
+ if pcommit.At >= 0 {
+ lag = pend.Offset - pcommit.At
+ }
+ if lag < 0 {
+ lag = 0
+ }
+ }
+
+ lt[p] = GroupMemberLag{
+ Topic: t,
+ Partition: p,
+ Commit: pcommit.Offset,
+ Start: pstart,
+ End: pend,
+ Lag: lag,
+ Err: perr,
+ }
+ }
+ }
+
+ // Now we look at all topics that we calculated lag for, and check out
+ // the partitions we listed. If those partitions are missing from the
+ // lag calculations above, the partitions were not committed to and we
+ // count that as entirely lagging.
+ for t, lt := range l {
+ tstart := startOffsets[t]
+ tend := endOffsets[t]
+ for p, pend := range tend {
+ if _, ok := lt[p]; ok {
+ continue
+ }
+ pcommit := Offset{
+ Topic: t,
+ Partition: p,
+ At: -1,
+ LeaderEpoch: -1,
+ }
+ perr := pend.Err
+ lag := int64(-1)
+ if perr == nil {
+ lag = pend.Offset
+ }
+ pstart := ListedOffset{
+ Topic: t,
+ Partition: p,
+ Err: errListMissing,
+ }
+ if tstart != nil {
+ if pstartActual, ok := tstart[p]; ok {
+ pstart = pstartActual
+ if pstart.Err == nil {
+ lag = pend.Offset - pstart.Offset
+ if lag < 0 {
+ lag = 0
+ }
+ }
+ }
+ }
+ lt[p] = GroupMemberLag{
+ Topic: t,
+ Partition: p,
+ Commit: pcommit,
+ Start: pstart,
+ End: pend,
+ Lag: lag,
+ Err: perr,
+ }
+ }
+ }
+
+ return l
+}
+
+var errListMissing = errors.New("missing from list offsets")
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/kadm.go b/vendor/github.com/twmb/franz-go/pkg/kadm/kadm.go
new file mode 100644
index 0000000000000..432fc4c76a3c1
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/kadm.go
@@ -0,0 +1,655 @@
+// Package kadm provides a helper Kafka admin client around a *kgo.Client.
+//
+// This package is meant to cover the common use cases for dropping into an
+// "admin" like interface for Kafka. As with any admin client, this package
+// must make opinionated decisions on what to provide and what to hide. The
+// underlying Kafka protocol gives more detailed information in responses, or
+// allows more fine tuning in requests, but most of the time, these details are
+// unnecessary.
+//
+// By virtue of making opinionated decisions, this package cannot satisfy every
+// need for requests and responses. If you need more control than this admin
+// client provides, you can use the kmsg package directly.
+//
+// This package contains a lot of types, but the main two types type to know
+// are Client and ShardErrors. Every other type is used for inputs or outputs
+// to methods on the client.
+//
+// The Client type is a simple small wrapper around a *kgo.Client that exists
+// solely to namespace methods. The ShardErrors type is a bit more complicated.
+// When issuing requests, under the hood some of these requests actually need
+// to be mapped to brokers and split, issuing different pieces of the input
+// request to different brokers. The *kgo.Client handles this all internally,
+// but (if using RequestSharded as directed), returns each response to each of
+// these split requests individually. Each response can fail or be successful.
+// This package goes one step further and merges these failures into one meta
+// failure, ShardErrors. Any function that returns ShardErrors is documented as
+// such, and if a function returns a non-nil ShardErrors, it is possible that
+// the returned data is actually valid and usable. If you care to, you can log
+// / react to the partial failures and continue using the partial successful
+// result. This is in contrast to other clients, which either require to to
+// request individual brokers directly, or they completely hide individual
+// failures, or they completely fail on any individual failure.
+//
+// For methods that list or describe things, this package often completely
+// fails responses on auth failures. If you use a method that accepts two
+// topics, one that you are authorized to and one that you are not, you will
+// not receive a partial successful response. Instead, you will receive an
+// AuthError. Methods that do *not* fail on auth errors are explicitly
+// documented as such.
+//
+// Users may often find it easy to work with lists of topics or partitions.
+// Rather than needing to build deeply nested maps directly, this package has a
+// few helper types that are worth knowing:
+//
+// TopicsList - a slice of topics and their partitions
+// TopicsSet - a set of topics, each containing a set of partitions
+// Partitions - a slice of partitions
+// OffsetsList - a slice of offsets
+// Offsets - a map of offsets
+//
+// These types are meant to be easy to build and use, and can be used as the
+// starting point for other types.
+//
+// Many functions in this package are variadic and return either a map or a
+// list of responses, and you may only use one element as input and are only
+// interested in one element of output. This package provides the following
+// functions to help:
+//
+// Any(map)
+// AnyE(map, err)
+// First(slice)
+// FirstE(slice, err)
+//
+// The intended use case of these is something like `kadm.AnyE(kadm.CreateTopics(..., "my-one-topic"))`,
+// such that you can immediately get the response for the one topic you are
+// creating.
+package kadm
+
+import (
+ "errors"
+ "regexp"
+ "runtime/debug"
+ "sort"
+ "sync"
+
+ "github.com/twmb/franz-go/pkg/kgo"
+)
+
+func unptrStr(s *string) string {
+ if s == nil {
+ return ""
+ }
+ return *s
+}
+
+var (
+ reVersion *regexp.Regexp
+ reVersionOnce sync.Once
+)
+
+// Copied from kgo, but we use the kadm package version.
+func softwareVersion() string {
+ info, ok := debug.ReadBuildInfo()
+ if ok {
+ reVersionOnce.Do(func() { reVersion = regexp.MustCompile(`^[a-zA-Z0-9](?:[a-zA-Z0-9.-]*[a-zA-Z0-9])?$`) })
+ for _, dep := range info.Deps {
+ if dep.Path == "github.com/twmb/franz-go/pkg/kadm" {
+ if reVersion.MatchString(dep.Version) {
+ return dep.Version
+ }
+ }
+ }
+ }
+ return "unknown"
+}
+
+// Client is an admin client.
+//
+// This is a simple wrapper around a *kgo.Client to provide helper admin methods.
+type Client struct {
+ cl *kgo.Client
+
+ timeoutMillis int32
+}
+
+// NewClient returns an admin client.
+func NewClient(cl *kgo.Client) *Client {
+ return &Client{cl, 15000} // 15s timeout default, matching kmsg
+}
+
+// NewOptClient returns a new client directly from kgo options. This is a
+// wrapper around creating a new *kgo.Client and then creating an admin client.
+func NewOptClient(opts ...kgo.Opt) (*Client, error) {
+ cl, err := kgo.NewClient(opts...)
+ if err != nil {
+ return nil, err
+ }
+ return NewClient(cl), nil
+}
+
+// Close closes the underlying *kgo.Client.
+func (cl *Client) Close() {
+ cl.cl.Close()
+}
+
+// SetTimeoutMillis sets the timeout to use for requests that have a timeout,
+// overriding the default of 15,000 (15s).
+//
+// Not all requests have timeouts. Most requests are expected to return
+// immediately or are expected to deliberately hang. The following requests
+// have timeout fields:
+//
+// Produce
+// CreateTopics
+// DeleteTopics
+// DeleteRecords
+// CreatePartitions
+// ElectLeaders
+// AlterPartitionAssignments
+// ListPartitionReassignments
+// UpdateFeatures
+//
+// Not all requests above are supported in the admin API.
+func (cl *Client) SetTimeoutMillis(millis int32) {
+ cl.timeoutMillis = millis
+}
+
+// StringPtr is a shortcut function to aid building configs for creating or
+// altering topics.
+func StringPtr(s string) *string {
+ return &s
+}
+
+// BrokerDetail is a type alias for kgo.BrokerMetadata.
+type BrokerDetail = kgo.BrokerMetadata
+
+// BrokerDetails contains the details for many brokers.
+type BrokerDetails []BrokerDetail
+
+// NodeIDs returns the IDs of all nodes.
+func (ds BrokerDetails) NodeIDs() []int32 {
+ var all []int32
+ for _, d := range ds {
+ all = append(all, d.NodeID)
+ }
+ return int32s(all)
+}
+
+// Partition is a partition for a topic.
+type Partition struct {
+ Topic string // Topic is the topic for this partition.
+ Partition int32 // Partition is this partition's number.
+}
+
+// Offset is an offset for a topic.
+type Offset struct {
+ Topic string
+ Partition int32
+ At int64 // Offset is the partition to set.
+ LeaderEpoch int32 // LeaderEpoch is the broker leader epoch of the record at this offset.
+ Metadata string // Metadata, if non-empty, is used for offset commits.
+}
+
+// Partitions wraps many partitions.
+type Partitions []Partition
+
+// TopicsSet returns these partitions as TopicsSet.
+func (ps Partitions) TopicsSet() TopicsSet {
+ s := make(TopicsSet)
+ for _, p := range ps {
+ s.Add(p.Topic, p.Partition)
+ }
+ return s
+}
+
+// TopicsList returns these partitions as sorted TopicsList.
+func (ps Partitions) TopicsList() TopicsList {
+ return ps.TopicsSet().Sorted()
+}
+
+// OffsetsList wraps many offsets and is a helper for building Offsets.
+type OffsetsList []Offset
+
+// Offsets returns this list as the non-list Offsets. All fields in each
+// Offset must be set properly.
+func (l OffsetsList) Offsets() Offsets {
+ os := make(Offsets)
+ for _, o := range l {
+ os.Add(o)
+ }
+ return os
+}
+
+// KOffsets returns this list as a kgo offset map.
+func (l OffsetsList) KOffsets() map[string]map[int32]kgo.Offset {
+ return l.Offsets().KOffsets()
+}
+
+// Offsets wraps many offsets and is the type used for offset functions.
+type Offsets map[string]map[int32]Offset
+
+// Lookup returns the offset at t and p and whether it exists.
+func (os Offsets) Lookup(t string, p int32) (Offset, bool) {
+ if len(os) == 0 {
+ return Offset{}, false
+ }
+ ps := os[t]
+ if len(ps) == 0 {
+ return Offset{}, false
+ }
+ o, exists := ps[p]
+ return o, exists
+}
+
+// Add adds an offset for a given topic/partition to this Offsets map.
+//
+// If the partition already exists, the offset is only added if:
+//
+// - the new leader epoch is higher than the old, or
+// - the leader epochs equal, and the new offset is higher than the old
+//
+// If you would like to add offsets forcefully no matter what, use the Delete
+// method before this.
+func (os *Offsets) Add(o Offset) {
+ if *os == nil {
+ *os = make(map[string]map[int32]Offset)
+ }
+ ot := (*os)[o.Topic]
+ if ot == nil {
+ ot = make(map[int32]Offset)
+ (*os)[o.Topic] = ot
+ }
+
+ prior, exists := ot[o.Partition]
+ if !exists || prior.LeaderEpoch < o.LeaderEpoch ||
+ prior.LeaderEpoch == o.LeaderEpoch && prior.At < o.At {
+ ot[o.Partition] = o
+ }
+}
+
+// Delete removes any offset at topic t and partition p.
+func (os Offsets) Delete(t string, p int32) {
+ if os == nil {
+ return
+ }
+ ot := os[t]
+ if ot == nil {
+ return
+ }
+ delete(ot, p)
+ if len(ot) == 0 {
+ delete(os, t)
+ }
+}
+
+// AddOffset is a helper to add an offset for a given topic and partition. The
+// leader epoch field must be -1 if you do not know the leader epoch or if
+// you do not have an offset yet.
+func (os *Offsets) AddOffset(t string, p int32, o int64, leaderEpoch int32) {
+ os.Add(Offset{
+ Topic: t,
+ Partition: p,
+ At: o,
+ LeaderEpoch: leaderEpoch,
+ })
+}
+
+// KeepFunc calls fn for every offset, keeping the offset if fn returns true.
+func (os Offsets) KeepFunc(fn func(o Offset) bool) {
+ for t, ps := range os {
+ for p, o := range ps {
+ if !fn(o) {
+ delete(ps, p)
+ }
+ }
+ if len(ps) == 0 {
+ delete(os, t)
+ }
+ }
+}
+
+// DeleteFunc calls fn for every offset, deleting the offset if fn returns
+// true.
+func (os Offsets) DeleteFunc(fn func(o Offset) bool) {
+ os.KeepFunc(func(o Offset) bool { return !fn(o) })
+}
+
+// Topics returns the set of topics and partitions currently used in these
+// offsets.
+func (os Offsets) TopicsSet() TopicsSet {
+ s := make(TopicsSet)
+ os.Each(func(o Offset) { s.Add(o.Topic, o.Partition) })
+ return s
+}
+
+// Each calls fn for each offset in these offsets.
+func (os Offsets) Each(fn func(Offset)) {
+ for _, ps := range os {
+ for _, o := range ps {
+ fn(o)
+ }
+ }
+}
+
+// KOffsets returns these offsets as a kgo offset map.
+func (os Offsets) KOffsets() map[string]map[int32]kgo.Offset {
+ tskgo := make(map[string]map[int32]kgo.Offset)
+ for t, ps := range os {
+ pskgo := make(map[int32]kgo.Offset)
+ for p, o := range ps {
+ pskgo[p] = kgo.NewOffset().
+ At(o.At).
+ WithEpoch(o.LeaderEpoch)
+ }
+ tskgo[t] = pskgo
+ }
+ return tskgo
+}
+
+// Sorted returns the offsets sorted by topic and partition.
+func (os Offsets) Sorted() []Offset {
+ var s []Offset
+ os.Each(func(o Offset) { s = append(s, o) })
+ sort.Slice(s, func(i, j int) bool {
+ return s[i].Topic < s[j].Topic ||
+ s[i].Topic == s[j].Topic && s[i].Partition < s[j].Partition
+ })
+ return s
+}
+
+// OffsetsFromFetches returns Offsets for the final record in any partition in
+// the fetches. This is a helper to enable committing an entire returned batch.
+//
+// This function looks at only the last record per partition, assuming that the
+// last record is the highest offset (which is the behavior returned by kgo's
+// Poll functions). The returned offsets are one past the offset contained in
+// the records.
+func OffsetsFromFetches(fs kgo.Fetches) Offsets {
+ os := make(Offsets)
+ fs.EachPartition(func(p kgo.FetchTopicPartition) {
+ if len(p.Records) == 0 {
+ return
+ }
+ r := p.Records[len(p.Records)-1]
+ os.AddOffset(r.Topic, r.Partition, r.Offset+1, r.LeaderEpoch)
+ })
+ return os
+}
+
+// OffsetsFromRecords returns offsets for all given records, using the highest
+// offset per partition. The returned offsets are one past the offset contained
+// in the records.
+func OffsetsFromRecords(rs ...kgo.Record) Offsets {
+ os := make(Offsets)
+ for _, r := range rs {
+ os.AddOffset(r.Topic, r.Partition, r.Offset+1, r.LeaderEpoch)
+ }
+ return os
+}
+
+// TopicsSet is a set of topics and, per topic, a set of partitions.
+//
+// All methods provided for TopicsSet are safe to use on a nil (default) set.
+type TopicsSet map[string]map[int32]struct{}
+
+// Lookup returns whether the topic and partition exists.
+func (s TopicsSet) Lookup(t string, p int32) bool {
+ if len(s) == 0 {
+ return false
+ }
+ ps := s[t]
+ if len(ps) == 0 {
+ return false
+ }
+ _, exists := ps[p]
+ return exists
+}
+
+// Each calls fn for each topic / partition in the topics set.
+func (s TopicsSet) Each(fn func(t string, p int32)) {
+ for t, ps := range s {
+ for p := range ps {
+ fn(t, p)
+ }
+ }
+}
+
+// EachPartitions calls fn for each topic and its partitions in the topics set.
+func (s TopicsSet) EachPartitions(fn func(t string, ps []int32)) {
+ for t, ps := range s {
+ sliced := make([]int32, 0, len(ps))
+ for p := range ps {
+ sliced = append(sliced, p)
+ }
+ fn(t, sliced)
+ }
+}
+
+// EmptyTopics returns all topics with no partitions.
+func (s TopicsSet) EmptyTopics() []string {
+ var e []string
+ for t, ps := range s {
+ if len(ps) == 0 {
+ e = append(e, t)
+ }
+ }
+ return e
+}
+
+// Add adds partitions for a topic to the topics set. If no partitions are
+// added, this still creates the topic.
+func (s *TopicsSet) Add(t string, ps ...int32) {
+ if *s == nil {
+ *s = make(map[string]map[int32]struct{})
+ }
+ existing := (*s)[t]
+ if existing == nil {
+ existing = make(map[int32]struct{}, len(ps))
+ (*s)[t] = existing
+ }
+ for _, p := range ps {
+ existing[p] = struct{}{}
+ }
+}
+
+// Delete removes partitions from a topic from the topics set. If the topic
+// ends up with no partitions, the topic is removed from the set.
+func (s TopicsSet) Delete(t string, ps ...int32) {
+ if s == nil || len(ps) == 0 {
+ return
+ }
+ existing := s[t]
+ if existing == nil {
+ return
+ }
+ for _, p := range ps {
+ delete(existing, p)
+ }
+ if len(existing) == 0 {
+ delete(s, t)
+ }
+}
+
+// Topics returns all topics in this set in sorted order.
+func (s TopicsSet) Topics() []string {
+ ts := make([]string, 0, len(s))
+ for t := range s {
+ ts = append(ts, t)
+ }
+ sort.Strings(ts)
+ return ts
+}
+
+// Merge merges another topic set into this one.
+func (s TopicsSet) Merge(other TopicsSet) {
+ for t, ps := range other {
+ for p := range ps {
+ s.Add(t, p)
+ }
+ }
+}
+
+// IntoList returns this set as a list.
+func (s TopicsSet) IntoList() TopicsList {
+ l := make(TopicsList, 0, len(s))
+ for t, ps := range s {
+ lps := make([]int32, 0, len(ps))
+ for p := range ps {
+ lps = append(lps, p)
+ }
+ l = append(l, TopicPartitions{
+ Topic: t,
+ Partitions: lps,
+ })
+ }
+ return l
+}
+
+// Sorted returns this set as a list in topic-sorted order, with each topic
+// having sorted partitions.
+func (s TopicsSet) Sorted() TopicsList {
+ l := make(TopicsList, 0, len(s))
+ for t, ps := range s {
+ tps := TopicPartitions{
+ Topic: t,
+ Partitions: make([]int32, 0, len(ps)),
+ }
+ for p := range ps {
+ tps.Partitions = append(tps.Partitions, p)
+ }
+ tps.Partitions = int32s(tps.Partitions)
+ l = append(l, tps)
+ }
+ sort.Slice(l, func(i, j int) bool { return l[i].Topic < l[j].Topic })
+ return l
+}
+
+// TopicPartitions is a topic and partitions.
+type TopicPartitions struct {
+ Topic string
+ Partitions []int32
+}
+
+// TopicsList is a list of topics and partitions.
+type TopicsList []TopicPartitions
+
+// Each calls fn for each topic / partition in the topics list.
+func (l TopicsList) Each(fn func(t string, p int32)) {
+ for _, t := range l {
+ for _, p := range t.Partitions {
+ fn(t.Topic, p)
+ }
+ }
+}
+
+// EachPartitions calls fn for each topic and its partitions in the topics
+// list.
+func (l TopicsList) EachPartitions(fn func(t string, ps []int32)) {
+ for _, t := range l {
+ fn(t.Topic, t.Partitions)
+ }
+}
+
+// EmptyTopics returns all topics with no partitions.
+func (l TopicsList) EmptyTopics() []string {
+ var e []string
+ for _, t := range l {
+ if len(t.Partitions) == 0 {
+ e = append(e, t.Topic)
+ }
+ }
+ return e
+}
+
+// Topics returns all topics in this set in sorted order.
+func (l TopicsList) Topics() []string {
+ ts := make([]string, 0, len(l))
+ for _, t := range l {
+ ts = append(ts, t.Topic)
+ }
+ sort.Strings(ts)
+ return ts
+}
+
+// IntoSet returns this list as a set.
+func (l TopicsList) IntoSet() TopicsSet {
+ s := make(TopicsSet)
+ for _, t := range l {
+ s.Add(t.Topic, t.Partitions...)
+ }
+ return s
+}
+
+// First returns the first element of the input slice and whether it exists.
+// This is the non-error-accepting equivalent of FirstE.
+//
+// Many client methods in kadm accept a variadic amount of input arguments and
+// return either a slice or a map of responses, but you often use the method
+// with only one argument. This function can help extract the one response you
+// are interested in.
+func First[S ~[]T, T any](s S) (T, bool) {
+ if len(s) == 0 {
+ var t T
+ return t, false
+ }
+ return s[0], true
+}
+
+// Any returns the first range element of the input map and whether it exists.
+// This is the non-error-accepting equivalent of AnyE.
+//
+// Many client methods in kadm accept a variadic amount of input arguments and
+// return either a slice or a map of responses, but you often use the method
+// with only one argument. This function can help extract the one response you
+// are interested in.
+func Any[M ~map[K]V, K comparable, V any](m M) (V, bool) {
+ for _, v := range m {
+ return v, true
+ }
+ var v V
+ return v, false
+}
+
+// ErrEmpty is returned from FirstE or AnyE if the input is empty.
+var ErrEmpty = errors.New("empty")
+
+// FirstE returns the first element of the input slice, or the input error
+// if it is non-nil. If the error is nil but the slice is empty, this returns
+// ErrEmpty. This is the error-accepting equivalent of First.
+//
+// Many client methods in kadm accept a variadic amount of input arguments and
+// return either a slice or a map of responses, but you often use the method
+// with only one argument. This function can help extract the one response you
+// are interested in.
+func FirstE[S ~[]T, T any](s S, err error) (T, error) {
+ if err != nil {
+ var t T
+ return t, err
+ }
+ if len(s) == 0 {
+ var t T
+ return t, ErrEmpty
+ }
+ return s[0], err
+}
+
+// AnyE returns the first range element of the input map, or the input error if
+// it is non-nil. If the error is nil but the map is empty, this returns
+// ErrEmpty. This is the error-accepting equivalent of Any.
+//
+// Many client methods in kadm accept a variadic amount of input arguments and
+// return either a slice or a map of responses, but you often use the method
+// with only one argument. This function can help extract the one response you
+// are interested in.
+func AnyE[M ~map[K]V, K comparable, V any](m M, err error) (V, error) {
+ if err != nil {
+ var v V
+ return v, err
+ }
+ for _, v := range m {
+ return v, nil
+ }
+ var v V
+ return v, ErrEmpty
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/logdirs.go b/vendor/github.com/twmb/franz-go/pkg/kadm/logdirs.go
new file mode 100644
index 0000000000000..c1487cbcce755
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/logdirs.go
@@ -0,0 +1,592 @@
+package kadm
+
+import (
+ "context"
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// AlterReplicaLogDirsReq is the input for a request to alter replica log
+// directories. The key is the directory that all topics and partitions in
+// the topic set will move to.
+type AlterReplicaLogDirsReq map[string]TopicsSet
+
+// Add merges the input topic set into the given directory.
+func (r *AlterReplicaLogDirsReq) Add(d string, s TopicsSet) {
+ if *r == nil {
+ *r = make(map[string]TopicsSet)
+ }
+ existing := (*r)[d]
+ if existing == nil {
+ existing = make(TopicsSet)
+ (*r)[d] = existing
+ }
+ existing.Merge(s)
+}
+
+func (r AlterReplicaLogDirsReq) req() *kmsg.AlterReplicaLogDirsRequest {
+ req := kmsg.NewPtrAlterReplicaLogDirsRequest()
+ for dir, ts := range r {
+ rd := kmsg.NewAlterReplicaLogDirsRequestDir()
+ rd.Dir = dir
+ for t, ps := range ts {
+ rt := kmsg.NewAlterReplicaLogDirsRequestDirTopic()
+ rt.Topic = t
+ for p := range ps {
+ rt.Partitions = append(rt.Partitions, p)
+ }
+ rd.Topics = append(rd.Topics, rt)
+ }
+ req.Dirs = append(req.Dirs, rd)
+ }
+ return req
+}
+
+func (r AlterReplicaLogDirsReq) dirfor(t string, p int32) string {
+ for d, dts := range r {
+ if dts == nil {
+ continue
+ }
+ dtps, ok := dts[t] // does this dir contain this topic?
+ if !ok {
+ continue
+ }
+ if _, ok = dtps[p]; !ok { // does this topic in this dir contain this partition?
+ continue
+ }
+ return d // yes
+ }
+ return ""
+}
+
+// AlterAllReplicaLogDirsResponses contains per-broker responses to altered
+// partition directories.
+type AlterAllReplicaLogDirsResponses map[int32]AlterReplicaLogDirsResponses
+
+// Sorted returns the responses sorted by broker, topic, and partition.
+func (rs AlterAllReplicaLogDirsResponses) Sorted() []AlterReplicaLogDirsResponse {
+ var all []AlterReplicaLogDirsResponse
+ rs.Each(func(r AlterReplicaLogDirsResponse) {
+ all = append(all, r)
+ })
+ sort.Slice(all, func(i, j int) bool { return all[i].Less(all[j]) })
+ return all
+}
+
+// Each calls fn for every response.
+func (rs AlterAllReplicaLogDirsResponses) Each(fn func(AlterReplicaLogDirsResponse)) {
+ for _, ts := range rs {
+ ts.Each(fn)
+ }
+}
+
+// AlterReplicaLogDirsResponses contains responses to altered partition
+// directories for a single broker.
+type AlterReplicaLogDirsResponses map[string]map[int32]AlterReplicaLogDirsResponse
+
+// Sorted returns the responses sorted by topic and partition.
+func (rs AlterReplicaLogDirsResponses) Sorted() []AlterReplicaLogDirsResponse {
+ var all []AlterReplicaLogDirsResponse
+ rs.Each(func(r AlterReplicaLogDirsResponse) {
+ all = append(all, r)
+ })
+ sort.Slice(all, func(i, j int) bool { return all[i].Less(all[j]) })
+ return all
+}
+
+// Each calls fn for every response.
+func (rs AlterReplicaLogDirsResponses) Each(fn func(AlterReplicaLogDirsResponse)) {
+ for _, ps := range rs {
+ for _, r := range ps {
+ fn(r)
+ }
+ }
+}
+
+// AlterReplicaLogDirsResponse contains a the response for an individual
+// altered partition directory.
+type AlterReplicaLogDirsResponse struct {
+ Broker int32 // Broker is the broker this response came from.
+ Dir string // Dir is the directory this partition was requested to be moved to.
+ Topic string // Topic is the topic for this partition.
+ Partition int32 // Partition is the partition that was moved.
+ Err error // Err is non-nil if this move had an error.
+}
+
+// Less returns if the response is less than the other by broker, dir, topic,
+// and partition.
+func (a AlterReplicaLogDirsResponse) Less(other AlterReplicaLogDirsResponse) bool {
+ if a.Broker < other.Broker {
+ return true
+ }
+ if a.Broker > other.Broker {
+ return false
+ }
+ if a.Dir < other.Dir {
+ return true
+ }
+ if a.Dir > other.Dir {
+ return false
+ }
+ if a.Topic < other.Topic {
+ return true
+ }
+ if a.Topic > other.Topic {
+ return false
+ }
+ return a.Partition < other.Partition
+}
+
+func newAlterLogDirsResp(node int32, req AlterReplicaLogDirsReq, resp *kmsg.AlterReplicaLogDirsResponse) AlterReplicaLogDirsResponses {
+ a := make(AlterReplicaLogDirsResponses)
+ for _, kt := range resp.Topics {
+ ps := make(map[int32]AlterReplicaLogDirsResponse)
+ a[kt.Topic] = ps
+ for _, kp := range kt.Partitions {
+ ps[kp.Partition] = AlterReplicaLogDirsResponse{
+ Broker: node,
+ Dir: req.dirfor(kt.Topic, kp.Partition),
+ Topic: kt.Topic,
+ Partition: kp.Partition,
+ Err: kerr.ErrorForCode(kp.ErrorCode),
+ }
+ }
+ }
+ return a
+}
+
+// AlterAllReplicaLogDirs alters the log directories for the input topic
+// partitions, moving each partition to the requested directory. This function
+// moves all replicas on any broker.
+//
+// This may return *ShardErrors.
+func (cl *Client) AlterAllReplicaLogDirs(ctx context.Context, alter AlterReplicaLogDirsReq) (AlterAllReplicaLogDirsResponses, error) {
+ if len(alter) == 0 {
+ return make(AlterAllReplicaLogDirsResponses), nil
+ }
+ req := alter.req()
+ shards := cl.cl.RequestSharded(ctx, req)
+ resps := make(AlterAllReplicaLogDirsResponses)
+ return resps, shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.AlterReplicaLogDirsResponse)
+ resps[b.NodeID] = newAlterLogDirsResp(b.NodeID, alter, resp) // one node ID, no need to unique-check
+ return nil
+ })
+}
+
+// AlterBrokerReplicaLogDirs alters the log directories for the input topic on the
+// given broker, moving each partition to the requested directory.
+func (cl *Client) AlterBrokerReplicaLogDirs(ctx context.Context, broker int32, alter AlterReplicaLogDirsReq) (AlterReplicaLogDirsResponses, error) {
+ if len(alter) == 0 {
+ return make(AlterReplicaLogDirsResponses), nil
+ }
+ b := cl.cl.Broker(int(broker))
+ kresp, err := b.RetriableRequest(ctx, alter.req())
+ if err != nil {
+ return nil, err
+ }
+ resp := kresp.(*kmsg.AlterReplicaLogDirsResponse)
+ return newAlterLogDirsResp(broker, alter, resp), nil
+}
+
+func describeLogDirsReq(s TopicsSet) *kmsg.DescribeLogDirsRequest {
+ req := kmsg.NewPtrDescribeLogDirsRequest()
+ for t, ps := range s {
+ rt := kmsg.NewDescribeLogDirsRequestTopic()
+ rt.Topic = t
+ for p := range ps {
+ rt.Partitions = append(rt.Partitions, p)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+ return req
+}
+
+// DescribedAllLogDirs contains per-broker responses to described log
+// directories.
+type DescribedAllLogDirs map[int32]DescribedLogDirs
+
+// Sorted returns each log directory sorted by broker, then by directory.
+func (ds DescribedAllLogDirs) Sorted() []DescribedLogDir {
+ var all []DescribedLogDir
+ ds.Each(func(d DescribedLogDir) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Broker < r.Broker || l.Broker == r.Broker && l.Dir < r.Dir
+ })
+ return all
+}
+
+// Each calls fn for every described log dir in all responses.
+func (ds DescribedAllLogDirs) Each(fn func(DescribedLogDir)) {
+ for _, bds := range ds {
+ bds.Each(fn)
+ }
+}
+
+// DescribedLogDirs contains per-directory responses to described log
+// directories for a single broker.
+type DescribedLogDirs map[string]DescribedLogDir
+
+// Lookup returns the described partition if it exists.
+func (ds DescribedLogDirs) Lookup(d, t string, p int32) (DescribedLogDirPartition, bool) {
+ dir, exists := ds[d]
+ if !exists {
+ return DescribedLogDirPartition{}, false
+ }
+ ps, exists := dir.Topics[t]
+ if !exists {
+ return DescribedLogDirPartition{}, false
+ }
+ dp, exists := ps[p]
+ if !exists {
+ return DescribedLogDirPartition{}, false
+ }
+ return dp, true
+}
+
+// LookupPartition returns the described partition if it exists in any
+// directory. Brokers should only have one replica of a partition, so this
+// should always find at most one partition.
+func (ds DescribedLogDirs) LookupPartition(t string, p int32) (DescribedLogDirPartition, bool) {
+ for _, dir := range ds {
+ ps, exists := dir.Topics[t]
+ if !exists {
+ continue
+ }
+ dp, exists := ps[p]
+ if !exists {
+ continue
+ }
+ return dp, true
+ }
+ return DescribedLogDirPartition{}, false
+}
+
+// Size returns the total size of all directories.
+func (ds DescribedLogDirs) Size() int64 {
+ var tot int64
+ ds.EachPartition(func(d DescribedLogDirPartition) {
+ tot += d.Size
+ })
+ return tot
+}
+
+// Error iterates over all directories and returns the first error encounted,
+// if any. This can be used to check if describing was entirely successful or
+// not.
+func (ds DescribedLogDirs) Error() error {
+ for _, d := range ds {
+ if d.Err != nil {
+ return d.Err
+ }
+ }
+ return nil
+}
+
+// Ok returns true if there are no errors. This is a shortcut for ds.Error() ==
+// nil.
+func (ds DescribedLogDirs) Ok() bool {
+ return ds.Error() == nil
+}
+
+// Sorted returns all directories sorted by dir.
+func (ds DescribedLogDirs) Sorted() []DescribedLogDir {
+ var all []DescribedLogDir
+ ds.Each(func(d DescribedLogDir) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Broker < r.Broker || l.Broker == r.Broker && l.Dir < r.Dir
+ })
+ return all
+}
+
+// SortedPartitions returns all partitions sorted by dir, then topic, then
+// partition.
+func (ds DescribedLogDirs) SortedPartitions() []DescribedLogDirPartition {
+ var all []DescribedLogDirPartition
+ ds.EachPartition(func(d DescribedLogDirPartition) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool { return all[i].Less(all[j]) })
+ return all
+}
+
+// SortedBySize returns all directories sorted from smallest total directory
+// size to largest.
+func (ds DescribedLogDirs) SortedBySize() []DescribedLogDir {
+ var all []DescribedLogDir
+ ds.Each(func(d DescribedLogDir) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ ls, rs := l.Size(), r.Size()
+ return ls < rs || ls == rs &&
+ (l.Broker < r.Broker || l.Broker == r.Broker &&
+ l.Dir < r.Dir)
+ })
+ return all
+}
+
+// SortedPartitionsBySize returns all partitions across all directories sorted
+// by smallest to largest, falling back to by broker, dir, topic, and
+// partition.
+func (ds DescribedLogDirs) SortedPartitionsBySize() []DescribedLogDirPartition {
+ var all []DescribedLogDirPartition
+ ds.EachPartition(func(d DescribedLogDirPartition) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool { return all[i].LessBySize(all[j]) })
+ return all
+}
+
+// SmallestPartitionBySize returns the smallest partition by directory size, or
+// no partition if there are no partitions.
+func (ds DescribedLogDirs) SmallestPartitionBySize() (DescribedLogDirPartition, bool) {
+ sorted := ds.SortedPartitionsBySize()
+ if len(sorted) == 0 {
+ return DescribedLogDirPartition{}, false
+ }
+ return sorted[0], true
+}
+
+// LargestPartitionBySize returns the largest partition by directory size, or
+// no partition if there are no partitions.
+func (ds DescribedLogDirs) LargestPartitionBySize() (DescribedLogDirPartition, bool) {
+ sorted := ds.SortedPartitionsBySize()
+ if len(sorted) == 0 {
+ return DescribedLogDirPartition{}, false
+ }
+ return sorted[len(sorted)-1], true
+}
+
+// Each calls fn for each log directory.
+func (ds DescribedLogDirs) Each(fn func(DescribedLogDir)) {
+ for _, d := range ds {
+ fn(d)
+ }
+}
+
+// Each calls fn for each partition in any directory.
+func (ds DescribedLogDirs) EachPartition(fn func(d DescribedLogDirPartition)) {
+ for _, d := range ds {
+ d.Topics.Each(fn)
+ }
+}
+
+// EachError calls fn for every directory that has a non-nil error.
+func (ds DescribedLogDirs) EachError(fn func(DescribedLogDir)) {
+ for _, d := range ds {
+ if d.Err != nil {
+ fn(d)
+ }
+ }
+}
+
+// DescribedLogDir is a described log directory.
+type DescribedLogDir struct {
+ Broker int32 // Broker is the broker being described.
+ Dir string // Dir is the described directory.
+ Topics DescribedLogDirTopics // Partitions are the partitions in this directory.
+ Err error // Err is non-nil if this directory could not be described.
+}
+
+// Size returns the total size of all partitions in this directory. This is
+// a shortcut for .Topics.Size().
+func (ds DescribedLogDir) Size() int64 {
+ return ds.Topics.Size()
+}
+
+// DescribedLogDirTopics contains per-partition described log directories.
+type DescribedLogDirTopics map[string]map[int32]DescribedLogDirPartition
+
+// Lookup returns the described partition if it exists.
+func (ds DescribedLogDirTopics) Lookup(t string, p int32) (DescribedLogDirPartition, bool) {
+ ps, exists := ds[t]
+ if !exists {
+ return DescribedLogDirPartition{}, false
+ }
+ d, exists := ps[p]
+ return d, exists
+}
+
+// Size returns the total size of all partitions in this directory.
+func (ds DescribedLogDirTopics) Size() int64 {
+ var tot int64
+ ds.Each(func(d DescribedLogDirPartition) {
+ tot += d.Size
+ })
+ return tot
+}
+
+// Sorted returns all partitions sorted by topic then partition.
+func (ds DescribedLogDirTopics) Sorted() []DescribedLogDirPartition {
+ var all []DescribedLogDirPartition
+ ds.Each(func(d DescribedLogDirPartition) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool { return all[i].Less(all[j]) })
+ return all
+}
+
+// SortedBySize returns all partitions sorted by smallest size to largest. If
+// partitions are of equal size, the sorting is topic then partition.
+func (ds DescribedLogDirTopics) SortedBySize() []DescribedLogDirPartition {
+ var all []DescribedLogDirPartition
+ ds.Each(func(d DescribedLogDirPartition) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool { return all[i].LessBySize(all[j]) })
+ return all
+}
+
+// Each calls fn for every partition.
+func (ds DescribedLogDirTopics) Each(fn func(p DescribedLogDirPartition)) {
+ for _, ps := range ds {
+ for _, d := range ps {
+ fn(d)
+ }
+ }
+}
+
+// DescribedLogDirPartition is the information for a single partitions described
+// log directory.
+type DescribedLogDirPartition struct {
+ Broker int32 // Broker is the broker this partition is on.
+ Dir string // Dir is the directory this partition lives in.
+ Topic string // Topic is the topic for this partition.
+ Partition int32 // Partition is this partition.
+ Size int64 // Size is the total size of the log segments of this partition, in bytes.
+
+ // OffsetLag is how far behind the log end offset this partition is.
+ // The math is:
+ //
+ // if IsFuture {
+ // logEndOffset - futureLogEndOffset
+ // } else {
+ // max(highWaterMark - logEndOffset)
+ // }
+ //
+ OffsetLag int64
+ // IsFuture is true if this replica was created by an
+ // AlterReplicaLogDirsRequest and will replace the current log of the
+ // replica in the future.
+ IsFuture bool
+}
+
+// Less returns if one dir partition is less than the other, by dir, topic,
+// partition, and size.
+func (p DescribedLogDirPartition) Less(other DescribedLogDirPartition) bool {
+ if p.Broker < other.Broker {
+ return true
+ }
+ if p.Broker > other.Broker {
+ return false
+ }
+ if p.Dir < other.Dir {
+ return true
+ }
+ if p.Dir > other.Dir {
+ return false
+ }
+ if p.Topic < other.Topic {
+ return true
+ }
+ if p.Topic > other.Topic {
+ return false
+ }
+ if p.Partition < other.Partition {
+ return true
+ }
+ if p.Partition > other.Partition {
+ return false
+ }
+ return p.Size < other.Size
+}
+
+// LessBySize returns if one dir partition is less than the other by size,
+// otherwise by normal Less semantics.
+func (p DescribedLogDirPartition) LessBySize(other DescribedLogDirPartition) bool {
+ if p.Size < other.Size {
+ return true
+ }
+ return p.Less(other)
+}
+
+func newDescribeLogDirsResp(node int32, resp *kmsg.DescribeLogDirsResponse) DescribedLogDirs {
+ ds := make(DescribedLogDirs)
+ for _, rd := range resp.Dirs {
+ d := DescribedLogDir{
+ Broker: node,
+ Dir: rd.Dir,
+ Topics: make(DescribedLogDirTopics),
+ Err: kerr.ErrorForCode(rd.ErrorCode),
+ }
+ for _, rt := range rd.Topics {
+ t := make(map[int32]DescribedLogDirPartition)
+ d.Topics[rt.Topic] = t
+ for _, rp := range rt.Partitions {
+ t[rp.Partition] = DescribedLogDirPartition{
+ Broker: node,
+ Dir: rd.Dir,
+ Topic: rt.Topic,
+ Partition: rp.Partition,
+ Size: rp.Size,
+ OffsetLag: rp.OffsetLag,
+ IsFuture: rp.IsFuture,
+ }
+ }
+ }
+ ds[rd.Dir] = d
+ }
+ return ds
+}
+
+// DescribeAllLogDirs describes the log directores for every input topic
+// partition on every broker. If the input set is nil, this describes all log
+// directories.
+//
+// This may return *ShardErrors.
+func (cl *Client) DescribeAllLogDirs(ctx context.Context, s TopicsSet) (DescribedAllLogDirs, error) {
+ req := describeLogDirsReq(s)
+ shards := cl.cl.RequestSharded(ctx, req)
+ resps := make(DescribedAllLogDirs)
+ return resps, shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.DescribeLogDirsResponse)
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return err
+ }
+ resps[b.NodeID] = newDescribeLogDirsResp(b.NodeID, resp) // one node ID, no need to unique-check
+ return nil
+ })
+}
+
+// DescribeBrokerLogDirs describes the log directories for the input topic
+// partitions on the given broker. If the input set is nil, this describes all
+// log directories.
+func (cl *Client) DescribeBrokerLogDirs(ctx context.Context, broker int32, s TopicsSet) (DescribedLogDirs, error) {
+ req := describeLogDirsReq(s)
+ b := cl.cl.Broker(int(broker))
+ kresp, err := b.RetriableRequest(ctx, req)
+ if err != nil {
+ return nil, err
+ }
+ resp := kresp.(*kmsg.DescribeLogDirsResponse)
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ return newDescribeLogDirsResp(broker, resp), nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/metadata.go b/vendor/github.com/twmb/franz-go/pkg/kadm/metadata.go
new file mode 100644
index 0000000000000..f2797186ee621
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/metadata.go
@@ -0,0 +1,518 @@
+package kadm
+
+import (
+ "bytes"
+ "context"
+ "encoding/base64"
+ "fmt"
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kgo"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TopicID is the 16 byte underlying topic ID.
+type TopicID [16]byte
+
+// String returns the topic ID encoded as base64.
+func (t TopicID) String() string { return base64.StdEncoding.EncodeToString(t[:]) }
+
+// MarshalJSON returns the topic ID encoded as quoted base64.
+func (t TopicID) MarshalJSON() ([]byte, error) { return []byte(`"` + t.String() + `"`), nil }
+
+// Less returns if this ID is less than the other, byte by byte.
+func (t TopicID) Less(other TopicID) bool {
+ return bytes.Compare(t[:], other[:]) == -1
+}
+
+// PartitionDetail is the detail of a partition as returned by a metadata
+// response. If the partition fails to load / has an error, then only the
+// partition number itself and the Err fields will be set.
+type PartitionDetail struct {
+ Topic string // Topic is the topic this partition belongs to.
+ Partition int32 // Partition is the partition number these details are for.
+
+ Leader int32 // Leader is the broker leader, if there is one, otherwise -1.
+ LeaderEpoch int32 // LeaderEpoch is the leader's current epoch.
+ Replicas []int32 // Replicas is the list of replicas.
+ ISR []int32 // ISR is the list of in sync replicas.
+ OfflineReplicas []int32 // OfflineReplicas is the list of offline replicas.
+
+ Err error // Err is non-nil if the partition currently has a load error.
+}
+
+// PartitionDetails contains details for partitions as returned by a metadata
+// response.
+type PartitionDetails map[int32]PartitionDetail
+
+// Sorted returns the partitions in sorted order.
+func (ds PartitionDetails) Sorted() []PartitionDetail {
+ s := make([]PartitionDetail, 0, len(ds))
+ for _, d := range ds {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].Partition < s[j].Partition })
+ return s
+}
+
+// Numbers returns a sorted list of all partition numbers.
+func (ds PartitionDetails) Numbers() []int32 {
+ all := make([]int32, 0, len(ds))
+ for p := range ds {
+ all = append(all, p)
+ }
+ return int32s(all)
+}
+
+// NumReplicas returns the number of replicas for these partitions
+//
+// It is assumed that all partitions have the same number of replicas, so this
+// simply returns the number of replicas in the first encountered partition.
+func (ds PartitionDetails) NumReplicas() int {
+ for _, p := range ds {
+ return len(p.Replicas)
+ }
+ return 0
+}
+
+// TopicDetail is the detail of a topic as returned by a metadata response. If
+// the topic fails to load / has an error, then there will be no partitions.
+type TopicDetail struct {
+ Topic string // Topic is the topic these details are for.
+
+ ID TopicID // TopicID is the topic's ID, or all 0 if the broker does not support IDs.
+ IsInternal bool // IsInternal is whether the topic is an internal topic.
+ Partitions PartitionDetails // Partitions contains details about the topic's partitions.
+
+ Err error // Err is non-nil if the topic could not be loaded.
+}
+
+// TopicDetails contains details for topics as returned by a metadata response.
+type TopicDetails map[string]TopicDetail
+
+// Topics returns a sorted list of all topic names.
+func (ds TopicDetails) Names() []string {
+ all := make([]string, 0, len(ds))
+ for t := range ds {
+ all = append(all, t)
+ }
+ sort.Strings(all)
+ return all
+}
+
+// Sorted returns all topics in sorted order.
+func (ds TopicDetails) Sorted() []TopicDetail {
+ s := make([]TopicDetail, 0, len(ds))
+ for _, d := range ds {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool {
+ if s[i].Topic == "" {
+ if s[j].Topic == "" {
+ return bytes.Compare(s[i].ID[:], s[j].ID[:]) == -1
+ }
+ return true
+ }
+ if s[j].Topic == "" {
+ return false
+ }
+ return s[i].Topic < s[j].Topic
+ })
+ return s
+}
+
+// Has returns whether the topic details has the given topic and, if so, that
+// the topic's load error is not an unknown topic error.
+func (ds TopicDetails) Has(topic string) bool {
+ d, ok := ds[topic]
+ return ok && d.Err != kerr.UnknownTopicOrPartition
+}
+
+// FilterInternal deletes any internal topics from this set of topic details.
+func (ds TopicDetails) FilterInternal() {
+ for t, d := range ds {
+ if d.IsInternal {
+ delete(ds, t)
+ }
+ }
+}
+
+// EachPartition calls fn for every partition in all topics.
+func (ds TopicDetails) EachPartition(fn func(PartitionDetail)) {
+ for _, td := range ds {
+ for _, d := range td.Partitions {
+ fn(d)
+ }
+ }
+}
+
+// EachError calls fn for each topic that could not be loaded.
+func (ds TopicDetails) EachError(fn func(TopicDetail)) {
+ for _, td := range ds {
+ if td.Err != nil {
+ fn(td)
+ }
+ }
+}
+
+// Error iterates over all topic details and returns the first error
+// encountered, if any.
+func (ds TopicDetails) Error() error {
+ for _, t := range ds {
+ if t.Err != nil {
+ return t.Err
+ }
+ }
+ return nil
+}
+
+// TopicsSet returns the topics and partitions as a set.
+func (ds TopicDetails) TopicsSet() TopicsSet {
+ var s TopicsSet
+ ds.EachPartition(func(d PartitionDetail) {
+ s.Add(d.Topic, d.Partition)
+ })
+ return s
+}
+
+// TopicsList returns the topics and partitions as a list.
+func (ds TopicDetails) TopicsList() TopicsList {
+ return ds.TopicsSet().Sorted()
+}
+
+// Metadata is the data from a metadata response.
+type Metadata struct {
+ Cluster string // Cluster is the cluster name, if any.
+ Controller int32 // Controller is the node ID of the controller broker, if available, otherwise -1.
+ Brokers BrokerDetails // Brokers contains broker details, sorted by default.
+ Topics TopicDetails // Topics contains topic details.
+}
+
+func int32s(is []int32) []int32 {
+ sort.Slice(is, func(i, j int) bool { return is[i] < is[j] })
+ return is
+}
+
+// ListBrokers issues a metadata request and returns BrokerDetails. This
+// returns an error if the request fails to be issued, or an *AuthError.
+func (cl *Client) ListBrokers(ctx context.Context) (BrokerDetails, error) {
+ m, err := cl.Metadata(ctx)
+ if err != nil {
+ return nil, err
+ }
+ return m.Brokers, nil
+}
+
+// BrokerMetadata issues a metadata request and returns it, and does not ask
+// for any topics.
+//
+// This returns an error if the request fails to be issued, or an *AuthErr.
+func (cl *Client) BrokerMetadata(ctx context.Context) (Metadata, error) {
+ return cl.metadata(ctx, true, nil)
+}
+
+// Metadata issues a metadata request and returns it. Specific topics to
+// describe can be passed as additional arguments. If no topics are specified,
+// all topics are requested.
+//
+// This returns an error if the request fails to be issued, or an *AuthErr.
+func (cl *Client) Metadata(
+ ctx context.Context,
+ topics ...string,
+) (Metadata, error) {
+ return cl.metadata(ctx, false, topics)
+}
+
+func (cl *Client) metadata(ctx context.Context, noTopics bool, topics []string) (Metadata, error) {
+ req := kmsg.NewPtrMetadataRequest()
+ for _, t := range topics {
+ rt := kmsg.NewMetadataRequestTopic()
+ rt.Topic = kmsg.StringPtr(t)
+ req.Topics = append(req.Topics, rt)
+ }
+ if noTopics {
+ req.Topics = []kmsg.MetadataRequestTopic{}
+ }
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return Metadata{}, err
+ }
+
+ tds := make(map[string]TopicDetail, len(resp.Topics))
+ for _, t := range resp.Topics {
+ if err := maybeAuthErr(t.ErrorCode); err != nil {
+ return Metadata{}, err
+ }
+ td := TopicDetail{
+ ID: t.TopicID,
+ Partitions: make(map[int32]PartitionDetail),
+ IsInternal: t.IsInternal,
+ Err: kerr.ErrorForCode(t.ErrorCode),
+ }
+ if t.Topic != nil {
+ td.Topic = *t.Topic
+ }
+ for _, p := range t.Partitions {
+ td.Partitions[p.Partition] = PartitionDetail{
+ Topic: td.Topic,
+ Partition: p.Partition,
+
+ Leader: p.Leader,
+ LeaderEpoch: p.LeaderEpoch,
+ Replicas: p.Replicas,
+ ISR: p.ISR,
+ OfflineReplicas: p.OfflineReplicas,
+
+ Err: kerr.ErrorForCode(p.ErrorCode),
+ }
+ }
+ tds[*t.Topic] = td
+ }
+
+ m := Metadata{
+ Controller: resp.ControllerID,
+ Topics: tds,
+ }
+ if resp.ClusterID != nil {
+ m.Cluster = *resp.ClusterID
+ }
+
+ for _, b := range resp.Brokers {
+ m.Brokers = append(m.Brokers, kgo.BrokerMetadata{
+ NodeID: b.NodeID,
+ Host: b.Host,
+ Port: b.Port,
+ Rack: b.Rack,
+ })
+ }
+ sort.Slice(m.Brokers, func(i, j int) bool { return m.Brokers[i].NodeID < m.Brokers[j].NodeID })
+
+ if len(topics) > 0 && len(m.Topics) != len(topics) {
+ return Metadata{}, fmt.Errorf("metadata returned only %d topics of %d requested", len(m.Topics), len(topics))
+ }
+
+ return m, nil
+}
+
+// ListedOffset contains record offset information.
+type ListedOffset struct {
+ Topic string // Topic is the topic this offset is for.
+ Partition int32 // Partition is the partition this offset is for.
+
+ Timestamp int64 // Timestamp is the millisecond of the offset if listing after a time, otherwise -1.
+ Offset int64 // Offset is the record offset, or -1 if one could not be found.
+ LeaderEpoch int32 // LeaderEpoch is the leader epoch at this offset, if any, otherwise -1.
+
+ Err error // Err is non-nil if the partition has a load error.
+}
+
+// ListedOffsets contains per-partition record offset information that is
+// returned from any of the List.*Offsets functions.
+type ListedOffsets map[string]map[int32]ListedOffset
+
+// Lookup returns the offset at t and p and whether it exists.
+func (l ListedOffsets) Lookup(t string, p int32) (ListedOffset, bool) {
+ if len(l) == 0 {
+ return ListedOffset{}, false
+ }
+ ps := l[t]
+ if len(ps) == 0 {
+ return ListedOffset{}, false
+ }
+ o, exists := ps[p]
+ return o, exists
+}
+
+// Each calls fn for each listed offset.
+func (l ListedOffsets) Each(fn func(ListedOffset)) {
+ for _, ps := range l {
+ for _, o := range ps {
+ fn(o)
+ }
+ }
+}
+
+// Error iterates over all offsets and returns the first error encountered, if
+// any. This can be to check if a listing was entirely successful or not.
+//
+// Note that offset listing can be partially successful. For example, some
+// offsets could succeed to be listed, while other could fail (maybe one
+// partition is offline). If this is something you need to worry about, you may
+// need to check all offsets manually.
+func (l ListedOffsets) Error() error {
+ for _, ps := range l {
+ for _, o := range ps {
+ if o.Err != nil {
+ return o.Err
+ }
+ }
+ }
+ return nil
+}
+
+// Offsets returns these listed offsets as offsets.
+func (l ListedOffsets) Offsets() Offsets {
+ o := make(Offsets)
+ l.Each(func(l ListedOffset) {
+ o.Add(Offset{
+ Topic: l.Topic,
+ Partition: l.Partition,
+ At: l.Offset,
+ LeaderEpoch: l.LeaderEpoch,
+ })
+ })
+ return o
+}
+
+// KOffsets returns these listed offsets as a kgo offset map.
+func (l ListedOffsets) KOffsets() map[string]map[int32]kgo.Offset {
+ return l.Offsets().KOffsets()
+}
+
+// ListStartOffsets returns the start (oldest) offsets for each partition in
+// each requested topic. In Kafka terms, this returns the log start offset. If
+// no topics are specified, all topics are listed. If a requested topic does
+// not exist, no offsets for it are listed and it is not present in the
+// response.
+//
+// If any topics being listed do not exist, a special -1 partition is added
+// to the response with the expected error code kerr.UnknownTopicOrPartition.
+//
+// This may return *ShardErrors.
+func (cl *Client) ListStartOffsets(ctx context.Context, topics ...string) (ListedOffsets, error) {
+ return cl.listOffsets(ctx, 0, -2, topics)
+}
+
+// ListEndOffsets returns the end (newest) offsets for each partition in each
+// requested topic. In Kafka terms, this returns high watermarks. If no topics
+// are specified, all topics are listed. If a requested topic does not exist,
+// no offsets for it are listed and it is not present in the response.
+//
+// If any topics being listed do not exist, a special -1 partition is added
+// to the response with the expected error code kerr.UnknownTopicOrPartition.
+//
+// This may return *ShardErrors.
+func (cl *Client) ListEndOffsets(ctx context.Context, topics ...string) (ListedOffsets, error) {
+ return cl.listOffsets(ctx, 0, -1, topics)
+}
+
+// ListCommittedOffsets returns newest committed offsets for each partition in
+// each requested topic. A committed offset may be slightly less than the
+// latest offset. In Kafka terms, committed means the last stable offset, and
+// newest means the high watermark. Record offsets in active, uncommitted
+// transactions will not be returned. If no topics are specified, all topics
+// are listed. If a requested topic does not exist, no offsets for it are
+// listed and it is not present in the response.
+//
+// If any topics being listed do not exist, a special -1 partition is added
+// to the response with the expected error code kerr.UnknownTopicOrPartition.
+//
+// This may return *ShardErrors.
+func (cl *Client) ListCommittedOffsets(ctx context.Context, topics ...string) (ListedOffsets, error) {
+ return cl.listOffsets(ctx, 1, -1, topics)
+}
+
+// ListOffsetsAfterMilli returns the first offsets after the requested
+// millisecond timestamp. Unlike listing start/end/committed offsets, offsets
+// returned from this function also include the timestamp of the offset. If no
+// topics are specified, all topics are listed. If a partition has no offsets
+// after the requested millisecond, the offset will be the current end offset.
+// If a requested topic does not exist, no offsets for it are listed and it is
+// not present in the response.
+//
+// If any topics being listed do not exist, a special -1 partition is added
+// to the response with the expected error code kerr.UnknownTopicOrPartition.
+//
+// This may return *ShardErrors.
+func (cl *Client) ListOffsetsAfterMilli(ctx context.Context, millisecond int64, topics ...string) (ListedOffsets, error) {
+ return cl.listOffsets(ctx, 0, millisecond, topics)
+}
+
+func (cl *Client) listOffsets(ctx context.Context, isolation int8, timestamp int64, topics []string) (ListedOffsets, error) {
+ tds, err := cl.ListTopics(ctx, topics...)
+ if err != nil {
+ return nil, err
+ }
+
+ // If we request with timestamps, we may request twice: once for after
+ // timestamps, and once for any -1 (and no error) offsets where the
+ // timestamp is in the future.
+ list := make(ListedOffsets)
+
+ for _, td := range tds {
+ if td.Err != nil {
+ list[td.Topic] = map[int32]ListedOffset{
+ -1: {
+ Topic: td.Topic,
+ Partition: -1,
+ Err: td.Err,
+ },
+ }
+ }
+ }
+ rerequest := make(map[string][]int32)
+ shardfn := func(kr kmsg.Response) error {
+ resp := kr.(*kmsg.ListOffsetsResponse)
+ for _, t := range resp.Topics {
+ lt, ok := list[t.Topic]
+ if !ok {
+ lt = make(map[int32]ListedOffset)
+ list[t.Topic] = lt
+ }
+ for _, p := range t.Partitions {
+ if err := maybeAuthErr(p.ErrorCode); err != nil {
+ return err
+ }
+ lt[p.Partition] = ListedOffset{
+ Topic: t.Topic,
+ Partition: p.Partition,
+ Timestamp: p.Timestamp,
+ Offset: p.Offset,
+ LeaderEpoch: p.LeaderEpoch,
+ Err: kerr.ErrorForCode(p.ErrorCode),
+ }
+ if timestamp != -1 && p.Offset == -1 && p.ErrorCode == 0 {
+ rerequest[t.Topic] = append(rerequest[t.Topic], p.Partition)
+ }
+ }
+ }
+ return nil
+ }
+
+ req := kmsg.NewPtrListOffsetsRequest()
+ req.IsolationLevel = isolation
+ for t, td := range tds {
+ rt := kmsg.NewListOffsetsRequestTopic()
+ if td.Err != nil {
+ continue
+ }
+ rt.Topic = t
+ for p := range td.Partitions {
+ rp := kmsg.NewListOffsetsRequestTopicPartition()
+ rp.Partition = p
+ rp.Timestamp = timestamp
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+ shards := cl.cl.RequestSharded(ctx, req)
+ err = shardErrEach(req, shards, shardfn)
+ if len(rerequest) > 0 {
+ req.Topics = req.Topics[:0]
+ for t, ps := range rerequest {
+ rt := kmsg.NewListOffsetsRequestTopic()
+ rt.Topic = t
+ for _, p := range ps {
+ rp := kmsg.NewListOffsetsRequestTopicPartition()
+ rp.Partition = p
+ rp.Timestamp = -1 // we always list end offsets when rerequesting
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+ shards = cl.cl.RequestSharded(ctx, req)
+ err = mergeShardErrs(err, shardErrEach(req, shards, shardfn))
+ }
+ return list, err
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/misc.go b/vendor/github.com/twmb/franz-go/pkg/kadm/misc.go
new file mode 100644
index 0000000000000..05add54153515
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/misc.go
@@ -0,0 +1,959 @@
+package kadm
+
+import (
+ "context"
+ "crypto/rand"
+ "crypto/sha256"
+ "crypto/sha512"
+ "errors"
+ "fmt"
+ "sort"
+ "strings"
+ "sync"
+
+ "golang.org/x/crypto/pbkdf2"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+ "github.com/twmb/franz-go/pkg/kversion"
+)
+
+// ErrAndMessage is returned as the error from requests that were successfully
+// responded to, but the response indicates failure with a message.
+type ErrAndMessage struct {
+ Err error // Err is the response ErrorCode.
+ ErrMessage string // Message is the response ErrorMessage.
+}
+
+func (e *ErrAndMessage) Error() string {
+ var ke *kerr.Error
+ if errors.As(e.Err, &ke) && e.ErrMessage != "" {
+ return ke.Message + ": " + e.ErrMessage
+ }
+ return e.Err.Error()
+}
+
+func (e *ErrAndMessage) Unwrap() error {
+ return e.Err
+}
+
+// FindCoordinatorResponse contains information for the coordinator for a group
+// or transactional ID.
+type FindCoordinatorResponse struct {
+ Name string // Name is the coordinator key this response is for.
+ NodeID int32 // NodeID is the node ID of the coordinator for this key.
+ Host string // Host is the host of the coordinator for this key.
+ Port int32 // Port is the port of the coordinator for this key.
+ Err error // Err is any error encountered when requesting the coordinator.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// FindCoordinatorResponses contains responses to finding coordinators for
+// groups or transactions.
+type FindCoordinatorResponses map[string]FindCoordinatorResponse
+
+// AllFailed returns whether all responses are errored.
+func (rs FindCoordinatorResponses) AllFailed() bool {
+ var n int
+ rs.EachError(func(FindCoordinatorResponse) { n++ })
+ return len(rs) > 0 && n == len(rs)
+}
+
+// Sorted returns all coordinator responses sorted by name.
+func (rs FindCoordinatorResponses) Sorted() []FindCoordinatorResponse {
+ s := make([]FindCoordinatorResponse, 0, len(rs))
+ for _, r := range rs {
+ s = append(s, r)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].Name < s[j].Name })
+ return s
+}
+
+// EachError calls fn for every response that has a non-nil error.
+func (rs FindCoordinatorResponses) EachError(fn func(FindCoordinatorResponse)) {
+ for _, r := range rs {
+ if r.Err != nil {
+ fn(r)
+ }
+ }
+}
+
+// Each calls fn for every response.
+func (rs FindCoordinatorResponses) Each(fn func(FindCoordinatorResponse)) {
+ for _, r := range rs {
+ fn(r)
+ }
+}
+
+// Error iterates over all responses and returns the first error encountered,
+// if any.
+func (rs FindCoordinatorResponses) Error() error {
+ for _, r := range rs {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ return nil
+}
+
+// Ok returns true if there are no errors. This is a shortcut for rs.Error() ==
+// nil.
+func (rs FindCoordinatorResponses) Ok() bool {
+ return rs.Error() == nil
+}
+
+// FindGroupCoordinators returns the coordinator for all requested group names.
+//
+// This may return *ShardErrors or *AuthError.
+func (cl *Client) FindGroupCoordinators(ctx context.Context, groups ...string) FindCoordinatorResponses {
+ return cl.findCoordinators(ctx, 0, groups...)
+}
+
+// FindTxnCoordinators returns the coordinator for all requested transactional
+// IDs.
+//
+// This may return *ShardErrors or *AuthError.
+func (cl *Client) FindTxnCoordinators(ctx context.Context, txnIDs ...string) FindCoordinatorResponses {
+ return cl.findCoordinators(ctx, 1, txnIDs...)
+}
+
+func (cl *Client) findCoordinators(ctx context.Context, kind int8, names ...string) FindCoordinatorResponses {
+ resps := make(FindCoordinatorResponses)
+ if len(names) == 0 {
+ return resps
+ }
+
+ req := kmsg.NewPtrFindCoordinatorRequest()
+ req.CoordinatorType = kind
+ req.CoordinatorKeys = names
+
+ keyErr := func(k string, err error) {
+ resps[k] = FindCoordinatorResponse{
+ Name: k,
+ Err: err,
+ }
+ }
+ allKeysErr := func(req *kmsg.FindCoordinatorRequest, err error) {
+ for _, k := range req.CoordinatorKeys {
+ keyErr(k, err)
+ }
+ }
+
+ shards := cl.cl.RequestSharded(ctx, req)
+ for _, shard := range shards {
+ req := shard.Req.(*kmsg.FindCoordinatorRequest)
+ if shard.Err != nil {
+ allKeysErr(req, shard.Err)
+ continue
+ }
+ resp := shard.Resp.(*kmsg.FindCoordinatorResponse)
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ allKeysErr(req, err)
+ continue
+ }
+ for _, c := range resp.Coordinators {
+ if err := maybeAuthErr(c.ErrorCode); err != nil {
+ keyErr(c.Key, err)
+ continue
+ }
+ resps[c.Key] = FindCoordinatorResponse{ // key is always on one broker, no need to check existence
+ Name: c.Key,
+ NodeID: c.NodeID,
+ Host: c.Host,
+ Port: c.Port,
+ Err: kerr.ErrorForCode(c.ErrorCode),
+ ErrMessage: unptrStr(c.ErrorMessage),
+ }
+ }
+ }
+ return resps
+}
+
+type minmax struct {
+ min, max int16
+}
+
+// BrokerApiVersions contains the API versions for a single broker.
+type BrokerApiVersions struct {
+ NodeID int32 // NodeID is the node this API versions response is for.
+
+ raw *kmsg.ApiVersionsResponse
+ keyVersions map[int16]minmax
+
+ Err error // Err is non-nil if the API versions request failed.
+}
+
+// Raw returns the raw API versions response.
+func (v *BrokerApiVersions) Raw() *kmsg.ApiVersionsResponse {
+ return v.raw
+}
+
+// KeyVersions returns the broker's min and max version for an API key and
+// whether this broker supports the request.
+func (v *BrokerApiVersions) KeyVersions(key int16) (min, max int16, exists bool) {
+ vs, exists := v.keyVersions[key]
+ return vs.min, vs.max, exists
+}
+
+// KeyVersions returns the broker's min version for an API key and whether this
+// broker supports the request.
+func (v *BrokerApiVersions) KeyMinVersion(key int16) (min int16, exists bool) {
+ min, _, exists = v.KeyVersions(key)
+ return min, exists
+}
+
+// KeyVersions returns the broker's max version for an API key and whether this
+// broker supports the request.
+func (v *BrokerApiVersions) KeyMaxVersion(key int16) (max int16, exists bool) {
+ _, max, exists = v.KeyVersions(key)
+ return max, exists
+}
+
+// EachKeySorted calls fn for every API key in the broker response, from the
+// smallest API key to the largest.
+func (v *BrokerApiVersions) EachKeySorted(fn func(key, min, max int16)) {
+ type kmm struct {
+ k, min, max int16
+ }
+ kmms := make([]kmm, 0, len(v.keyVersions))
+ for key, minmax := range v.keyVersions {
+ kmms = append(kmms, kmm{key, minmax.min, minmax.max})
+ }
+ sort.Slice(kmms, func(i, j int) bool { return kmms[i].k < kmms[j].k })
+ for _, kmm := range kmms {
+ fn(kmm.k, kmm.min, kmm.max)
+ }
+}
+
+// VersionGuess returns the best guess of Kafka that this broker is. This is a
+// shorcut for:
+//
+// kversion.FromApiVersionsResponse(v.Raw()).VersionGuess(opt...)
+//
+// Check the kversion.VersionGuess API docs for more details.
+func (v *BrokerApiVersions) VersionGuess(opt ...kversion.VersionGuessOpt) string {
+ return kversion.FromApiVersionsResponse(v.raw).VersionGuess(opt...)
+}
+
+// BrokerApiVersions contains API versions for all brokers that are reachable
+// from a metadata response.
+type BrokersApiVersions map[int32]BrokerApiVersions
+
+// Sorted returns all broker responses sorted by node ID.
+func (vs BrokersApiVersions) Sorted() []BrokerApiVersions {
+ s := make([]BrokerApiVersions, 0, len(vs))
+ for _, v := range vs {
+ s = append(s, v)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].NodeID < s[j].NodeID })
+ return s
+}
+
+// Each calls fn for every broker response.
+func (vs BrokersApiVersions) Each(fn func(BrokerApiVersions)) {
+ for _, v := range vs {
+ fn(v)
+ }
+}
+
+// ApiVersions queries every broker in a metadata response for their API
+// versions. This returns an error only if the metadata request fails.
+func (cl *Client) ApiVersions(ctx context.Context) (BrokersApiVersions, error) {
+ m, err := cl.BrokerMetadata(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ var mu sync.Mutex
+ var wg sync.WaitGroup
+ vs := make(BrokersApiVersions, len(m.Brokers))
+ for _, n := range m.Brokers.NodeIDs() {
+ n := n
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ req := kmsg.NewPtrApiVersionsRequest()
+ req.ClientSoftwareName = "kadm"
+ req.ClientSoftwareVersion = softwareVersion()
+ v := BrokerApiVersions{NodeID: n, keyVersions: make(map[int16]minmax)}
+ v.raw, v.Err = req.RequestWith(ctx, cl.cl.Broker(int(n)))
+
+ mu.Lock()
+ defer mu.Unlock()
+ defer func() { vs[n] = v }()
+ if v.Err != nil {
+ return
+ }
+
+ v.Err = kerr.ErrorForCode(v.raw.ErrorCode)
+ for _, k := range v.raw.ApiKeys {
+ v.keyVersions[k.ApiKey] = minmax{
+ min: k.MinVersion,
+ max: k.MaxVersion,
+ }
+ }
+ }()
+ }
+ wg.Wait()
+
+ return vs, nil
+}
+
+// ClientQuotaEntityComponent is a quota entity component.
+type ClientQuotaEntityComponent struct {
+ Type string // Type is the entity type ("user", "client-id", "ip").
+ Name *string // Name is the entity name, or null if the default.
+}
+
+// String returns key=value, or key= if value is nil.
+func (d ClientQuotaEntityComponent) String() string {
+ if d.Name == nil {
+ return d.Type + "="
+ }
+ return fmt.Sprintf("%s=%s", d.Type, *d.Name)
+}
+
+// ClientQuotaEntity contains the components that make up a single entity.
+type ClientQuotaEntity []ClientQuotaEntityComponent
+
+// String returns {key=value, key=value}, joining all entities with a ", " and
+// wrapping in braces.
+func (ds ClientQuotaEntity) String() string {
+ var ss []string
+ for _, d := range ds {
+ ss = append(ss, d.String())
+ }
+ return "{" + strings.Join(ss, ", ") + "}"
+}
+
+// ClientQuotaValue is a quota name and value.
+type ClientQuotaValue struct {
+ Key string // Key is the quota configuration key.
+ Value float64 // Value is the quota configuration value.
+}
+
+// String returns key=value.
+func (d ClientQuotaValue) String() string {
+ return fmt.Sprintf("%s=%f", d.Key, d.Value)
+}
+
+// ClientQuotaValues contains all client quota values.
+type ClientQuotaValues []ClientQuotaValue
+
+// QuotasMatchType specifies how to match a described client quota entity.
+//
+// 0 means to match the name exactly: user=foo will only match components of
+// entity type "user" and entity name "foo".
+//
+// 1 means to match the default of the name: entity type "user" with a default
+// match will return the default quotas for user entities.
+//
+// 2 means to match any name: entity type "user" with any matching will return
+// both names and defaults.
+type QuotasMatchType = kmsg.QuotasMatchType
+
+// DescribeClientQuotaComponent is an input entity component to describing
+// client quotas: we define the type of quota ("client-id", "user"), how to
+// match, and the match name if needed.
+type DescribeClientQuotaComponent struct {
+ Type string // Type is the type of entity component to describe ("user", "client-id", "ip").
+ MatchName *string // MatchName is the name to match again; this is only needed when MatchType is 0 (exact).
+ MatchType QuotasMatchType // MatchType is how to match an entity.
+}
+
+// DescribedClientQuota contains a described quota. A single quota is made up
+// of multiple entities and multiple values, for example, "user=foo" is one
+// component of the entity, and "client-id=bar" is another.
+type DescribedClientQuota struct {
+ Entity ClientQuotaEntity // Entity is the entity of this described client quota.
+ Values ClientQuotaValues // Values contains the quota valies for this entity.
+}
+
+// DescribedClientQuota contains client quotas that were described.
+type DescribedClientQuotas []DescribedClientQuota
+
+// DescribeClientQuotas describes client quotas. If strict is true, the
+// response includes only the requested components.
+func (cl *Client) DescribeClientQuotas(ctx context.Context, strict bool, entityComponents []DescribeClientQuotaComponent) (DescribedClientQuotas, error) {
+ req := kmsg.NewPtrDescribeClientQuotasRequest()
+ req.Strict = strict
+ for _, entity := range entityComponents {
+ rc := kmsg.NewDescribeClientQuotasRequestComponent()
+ rc.EntityType = entity.Type
+ rc.Match = entity.MatchName
+ rc.MatchType = entity.MatchType
+ req.Components = append(req.Components, rc)
+ }
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return nil, &ErrAndMessage{err, unptrStr(resp.ErrorMessage)}
+ }
+ var qs DescribedClientQuotas
+ for _, entry := range resp.Entries {
+ var q DescribedClientQuota
+ for _, e := range entry.Entity {
+ q.Entity = append(q.Entity, ClientQuotaEntityComponent{
+ Type: e.Type,
+ Name: e.Name,
+ })
+ }
+ for _, v := range entry.Values {
+ q.Values = append(q.Values, ClientQuotaValue{
+ Key: v.Key,
+ Value: v.Value,
+ })
+ }
+ qs = append(qs, q)
+ }
+ return qs, nil
+}
+
+// AlterClientQuotaOp sets or remove a client quota.
+type AlterClientQuotaOp struct {
+ Key string // Key is the quota configuration key to set or remove.
+ Value float64 // Value is the quota configuration value to set or remove.
+ Remove bool // Remove, if true, removes this quota rather than sets it.
+}
+
+// AlterClientQuotaEntry pairs an entity with quotas to set or remove.
+type AlterClientQuotaEntry struct {
+ Entity ClientQuotaEntity // Entity is the entity to alter quotas for.
+ Ops []AlterClientQuotaOp // Ops are quotas to set or remove.
+}
+
+// AlteredClientQuota is the result for a single entity that was altered.
+type AlteredClientQuota struct {
+ Entity ClientQuotaEntity // Entity is the entity this result is for.
+ Err error // Err is non-nil if the alter operation on this entity failed.
+ ErrMessage string // ErrMessage is an optional additional message on error.
+}
+
+// AlteredClientQuotas contains results for all altered entities.
+type AlteredClientQuotas []AlteredClientQuota
+
+// AlterClientQuotas alters quotas for the input entries. You may consider
+// checking ValidateAlterClientQuotas before using this method.
+func (cl *Client) AlterClientQuotas(ctx context.Context, entries []AlterClientQuotaEntry) (AlteredClientQuotas, error) {
+ return cl.alterClientQuotas(ctx, false, entries)
+}
+
+// ValidateAlterClientQuotas validates an alter client quota request. This
+// returns exactly what AlterClientQuotas returns, but does not actually alter
+// quotas.
+func (cl *Client) ValidateAlterClientQuotas(ctx context.Context, entries []AlterClientQuotaEntry) (AlteredClientQuotas, error) {
+ return cl.alterClientQuotas(ctx, true, entries)
+}
+
+func (cl *Client) alterClientQuotas(ctx context.Context, validate bool, entries []AlterClientQuotaEntry) (AlteredClientQuotas, error) {
+ req := kmsg.NewPtrAlterClientQuotasRequest()
+ req.ValidateOnly = validate
+ for _, entry := range entries {
+ re := kmsg.NewAlterClientQuotasRequestEntry()
+ for _, c := range entry.Entity {
+ rec := kmsg.NewAlterClientQuotasRequestEntryEntity()
+ rec.Type = c.Type
+ rec.Name = c.Name
+ re.Entity = append(re.Entity, rec)
+ }
+ for _, op := range entry.Ops {
+ reo := kmsg.NewAlterClientQuotasRequestEntryOp()
+ reo.Key = op.Key
+ reo.Value = op.Value
+ reo.Remove = op.Remove
+ re.Ops = append(re.Ops, reo)
+ }
+ req.Entries = append(req.Entries, re)
+ }
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ var as AlteredClientQuotas
+ for _, entry := range resp.Entries {
+ var e ClientQuotaEntity
+ for _, c := range entry.Entity {
+ e = append(e, ClientQuotaEntityComponent{
+ Type: c.Type,
+ Name: c.Name,
+ })
+ }
+ a := AlteredClientQuota{
+ Entity: e,
+ Err: kerr.ErrorForCode(entry.ErrorCode),
+ ErrMessage: unptrStr(entry.ErrorMessage),
+ }
+ as = append(as, a)
+ }
+ return as, nil
+}
+
+// ScramMechanism is a SCRAM mechanism.
+type ScramMechanism int8
+
+const (
+ // ScramSha256 represents the SCRAM-SHA-256 mechanism.
+ ScramSha256 ScramMechanism = 1
+ // ScramSha512 represents the SCRAM-SHA-512 mechanism.
+ ScramSha512 ScramMechanism = 2
+)
+
+// String returns either SCRAM-SHA-256, SCRAM-SHA-512, or UNKNOWN.
+func (s ScramMechanism) String() string {
+ switch s {
+ case ScramSha256:
+ return "SCRAM-SHA-256"
+ case ScramSha512:
+ return "SCRAM-SHA-512"
+ default:
+ return "UNKNOWN"
+ }
+}
+
+// CredInfo contains the SCRAM mechanism and iterations for a password.
+type CredInfo struct {
+ // Mechanism is the SCRAM mechanism a password exists for. This is 0
+ // for UNKNOWN, 1 for SCRAM-SHA-256, and 2 for SCRAM-SHA-512.
+ Mechanism ScramMechanism
+ // Iterations is the number of SCRAM iterations for this password.
+ Iterations int32
+}
+
+// String returns MECHANISM=iterations={c.Iterations}.
+func (c CredInfo) String() string {
+ return fmt.Sprintf("%s=iterations=%d", c.Mechanism, c.Iterations)
+}
+
+// DescribedUserSCRAM contains a user, the SCRAM mechanisms that the user has
+// passwords for, and if describing the user SCRAM credentials errored.
+type DescribedUserSCRAM struct {
+ User string // User is the user this described user credential is for.
+ CredInfos []CredInfo // CredInfos contains SCRAM mechanisms the user has passwords for.
+ Err error // Err is any error encountered when describing the user.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// DescribedUserSCRAMs contains described user SCRAM credentials keyed by user.
+type DescribedUserSCRAMs map[string]DescribedUserSCRAM
+
+// Sorted returns the described user credentials ordered by user.
+func (ds DescribedUserSCRAMs) Sorted() []DescribedUserSCRAM {
+ s := make([]DescribedUserSCRAM, 0, len(ds))
+ for _, d := range ds {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].User < s[j].User })
+ return s
+}
+
+// AllFailed returns whether all described user credentials are errored.
+func (ds DescribedUserSCRAMs) AllFailed() bool {
+ var n int
+ ds.EachError(func(DescribedUserSCRAM) { n++ })
+ return len(ds) > 0 && n == len(ds)
+}
+
+// EachError calls fn for every described user that has a non-nil error.
+func (ds DescribedUserSCRAMs) EachError(fn func(DescribedUserSCRAM)) {
+ for _, d := range ds {
+ if d.Err != nil {
+ fn(d)
+ }
+ }
+}
+
+// Each calls fn for every described user.
+func (ds DescribedUserSCRAMs) Each(fn func(DescribedUserSCRAM)) {
+ for _, d := range ds {
+ fn(d)
+ }
+}
+
+// Error iterates over all described users and returns the first error
+// encountered, if any.
+func (ds DescribedUserSCRAMs) Error() error {
+ for _, d := range ds {
+ if d.Err != nil {
+ return d.Err
+ }
+ }
+ return nil
+}
+
+// Ok returns true if there are no errors. This is a shortcut for rs.Error() ==
+// nil.
+func (ds DescribedUserSCRAMs) Ok() bool {
+ return ds.Error() == nil
+}
+
+// DescribeUserSCRAMs returns a small bit of information about all users in the
+// input request that have SCRAM passwords configured. No users requests all
+// users.
+func (cl *Client) DescribeUserSCRAMs(ctx context.Context, users ...string) (DescribedUserSCRAMs, error) {
+ req := kmsg.NewPtrDescribeUserSCRAMCredentialsRequest()
+ for _, u := range users {
+ ru := kmsg.NewDescribeUserSCRAMCredentialsRequestUser()
+ ru.Name = u
+ req.Users = append(req.Users, ru)
+ }
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ rs := make(DescribedUserSCRAMs)
+ for _, res := range resp.Results {
+ r := DescribedUserSCRAM{
+ User: res.User,
+ Err: kerr.ErrorForCode(res.ErrorCode),
+ ErrMessage: unptrStr(res.ErrorMessage),
+ }
+ for _, i := range res.CredentialInfos {
+ r.CredInfos = append(r.CredInfos, CredInfo{
+ Mechanism: ScramMechanism(i.Mechanism),
+ Iterations: i.Iterations,
+ })
+ }
+ rs[r.User] = r
+ }
+ return rs, nil
+}
+
+// DeleteSCRAM deletes a password with the given mechanism for the user.
+type DeleteSCRAM struct {
+ User string // User is the username to match for deletion.
+ Mechanism ScramMechanism // Mechanism is the mechanism to match to delete a password for.
+}
+
+// UpsertSCRAM either updates or creates (inserts) a new password for a user.
+// There are two ways to specify a password: either with the Password field
+// directly, or by specifying both Salt and SaltedPassword. If you specify just
+// a password, this package generates a 24 byte salt and uses pbkdf2 to create
+// the salted password.
+type UpsertSCRAM struct {
+ User string // User is the username to use.
+ Mechanism ScramMechanism // Mechanism is the mechanism to use.
+ Iterations int32 // Iterations is the SCRAM iterations to use; must be between 4096 and 16384.
+ Password string // Password is the password to salt and convert to a salted password. Requires Salt and SaltedPassword to be empty.
+ Salt []byte // Salt must be paired with SaltedPassword and requires Password to be empty.
+ SaltedPassword []byte // SaltedPassword must be paired with Salt and requires Password to be empty.
+}
+
+// AlteredUserSCRAM is the result of an alter operation.
+type AlteredUserSCRAM struct {
+ User string // User is the username that was altered.
+ Err error // Err is any error encountered when altering the user.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// AlteredUserSCRAMs contains altered user SCRAM credentials keyed by user.
+type AlteredUserSCRAMs map[string]AlteredUserSCRAM
+
+// Sorted returns the altered user credentials ordered by user.
+func (as AlteredUserSCRAMs) Sorted() []AlteredUserSCRAM {
+ s := make([]AlteredUserSCRAM, 0, len(as))
+ for _, a := range as {
+ s = append(s, a)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].User < s[j].User })
+ return s
+}
+
+// AllFailed returns whether all altered user credentials are errored.
+func (as AlteredUserSCRAMs) AllFailed() bool {
+ var n int
+ as.EachError(func(AlteredUserSCRAM) { n++ })
+ return len(as) > 0 && n == len(as)
+}
+
+// EachError calls fn for every altered user that has a non-nil error.
+func (as AlteredUserSCRAMs) EachError(fn func(AlteredUserSCRAM)) {
+ for _, a := range as {
+ if a.Err != nil {
+ fn(a)
+ }
+ }
+}
+
+// Each calls fn for every altered user.
+func (as AlteredUserSCRAMs) Each(fn func(AlteredUserSCRAM)) {
+ for _, a := range as {
+ fn(a)
+ }
+}
+
+// Error iterates over all altered users and returns the first error
+// encountered, if any.
+func (as AlteredUserSCRAMs) Error() error {
+ for _, a := range as {
+ if a.Err != nil {
+ return a.Err
+ }
+ }
+ return nil
+}
+
+// Ok returns true if there are no errors. This is a shortcut for rs.Error() ==
+// nil.
+func (as AlteredUserSCRAMs) Ok() bool {
+ return as.Error() == nil
+}
+
+// AlterUserSCRAMs deletes, updates, or creates (inserts) user SCRAM
+// credentials. Note that a username can only appear once across both upserts
+// and deletes. This modifies elements of the upsert slice that need to have a
+// salted password generated.
+func (cl *Client) AlterUserSCRAMs(ctx context.Context, del []DeleteSCRAM, upsert []UpsertSCRAM) (AlteredUserSCRAMs, error) {
+ for i, u := range upsert {
+ if u.Password != "" {
+ if len(u.Salt) > 0 || len(u.SaltedPassword) > 0 {
+ return nil, fmt.Errorf("user %s: cannot specify both a password and a salt / salted password", u.User)
+ }
+ u.Salt = make([]byte, 24)
+ if _, err := rand.Read(u.Salt); err != nil {
+ return nil, fmt.Errorf("user %s: unable to generate salt: %v", u.User, err)
+ }
+ switch u.Mechanism {
+ case ScramSha256:
+ u.SaltedPassword = pbkdf2.Key([]byte(u.Password), u.Salt, int(u.Iterations), sha256.Size, sha256.New)
+ case ScramSha512:
+ u.SaltedPassword = pbkdf2.Key([]byte(u.Password), u.Salt, int(u.Iterations), sha512.Size, sha512.New)
+ default:
+ return nil, fmt.Errorf("user %s: unknown mechanism, unable to generate password", u.User)
+ }
+ upsert[i] = u
+ } else {
+ if len(u.Salt) == 0 || len(u.SaltedPassword) == 0 {
+ return nil, fmt.Errorf("user %s: must specify either a password or a salt and salted password", u.User)
+ }
+ }
+ }
+
+ req := kmsg.NewPtrAlterUserSCRAMCredentialsRequest()
+ for _, d := range del {
+ rd := kmsg.NewAlterUserSCRAMCredentialsRequestDeletion()
+ rd.Name = d.User
+ rd.Mechanism = int8(d.Mechanism)
+ req.Deletions = append(req.Deletions, rd)
+ }
+ for _, u := range upsert {
+ ru := kmsg.NewAlterUserSCRAMCredentialsRequestUpsertion()
+ ru.Name = u.User
+ ru.Mechanism = int8(u.Mechanism)
+ ru.Iterations = u.Iterations
+ ru.Salt = u.Salt
+ ru.SaltedPassword = u.SaltedPassword
+ req.Upsertions = append(req.Upsertions, ru)
+ }
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ rs := make(AlteredUserSCRAMs)
+ for _, res := range resp.Results {
+ if err := maybeAuthErr(res.ErrorCode); err != nil {
+ return nil, err
+ }
+ r := AlteredUserSCRAM{
+ User: res.User,
+ Err: kerr.ErrorForCode(res.ErrorCode),
+ ErrMessage: unptrStr(res.ErrorMessage),
+ }
+ rs[r.User] = r
+ }
+ return rs, nil
+}
+
+// ElectLeadersHow is how partition leaders should be elected.
+type ElectLeadersHow int8
+
+const (
+ // ElectPreferredReplica elects the preferred replica for a partition.
+ ElectPreferredReplica ElectLeadersHow = 0
+ // ElectLiveReplica elects the first life replica if there are no
+ // in-sync replicas (i.e., this is unclean leader election).
+ ElectLiveReplica ElectLeadersHow = 1
+)
+
+// ElectLeadersResult is the result for a single partition in an elect leaders
+// request.
+type ElectLeadersResult struct {
+ Topic string // Topic is the topic this result is for.
+ Partition int32 // Partition is the partition this result is for.
+ How ElectLeadersHow // How is the type of election that was performed.
+ Err error // Err is non-nil if electing this partition's leader failed, such as the partition not existing or the preferred leader is not available and you used ElectPreferredReplica.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// ElectLeadersResults contains per-topic, per-partition results for an elect
+// leaders request.
+type ElectLeadersResults map[string]map[int32]ElectLeadersResult
+
+// ElectLeaders elects leaders for partitions. This request was added in Kafka
+// 2.2 to replace the previously-ZooKeeper-only option of triggering leader
+// elections. See KIP-183 for more details.
+//
+// Kafka 2.4 introduced the ability to use unclean leader election. If you use
+// unclean leader election on a Kafka 2.2 or 2.3 cluster, the client will
+// instead fall back to preferred replica (clean) leader election. You can
+// check the result's How function (or field) to see.
+//
+// If s is nil, this will elect leaders for all partitions.
+//
+// This will return *AuthError if you do not have ALTER on CLUSTER for
+// kafka-cluster.
+func (cl *Client) ElectLeaders(ctx context.Context, how ElectLeadersHow, s TopicsSet) (ElectLeadersResults, error) {
+ req := kmsg.NewPtrElectLeadersRequest()
+ req.ElectionType = int8(how)
+ for _, t := range s.IntoList() {
+ rt := kmsg.NewElectLeadersRequestTopic()
+ rt.Topic = t.Topic
+ rt.Partitions = t.Partitions
+ req.Topics = append(req.Topics, rt)
+ }
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return nil, err
+ }
+ if resp.Version == 0 { // v0 does not have the election type field
+ how = ElectPreferredReplica
+ }
+ rs := make(ElectLeadersResults)
+ for _, t := range resp.Topics {
+ rt := make(map[int32]ElectLeadersResult)
+ rs[t.Topic] = rt
+ for _, p := range t.Partitions {
+ if err := maybeAuthErr(p.ErrorCode); err != nil {
+ return nil, err // v0 has no top-level err
+ }
+ rt[p.Partition] = ElectLeadersResult{
+ Topic: t.Topic,
+ Partition: p.Partition,
+ How: how,
+ Err: kerr.ErrorForCode(p.ErrorCode),
+ ErrMessage: unptrStr(p.ErrorMessage),
+ }
+ }
+ }
+ return rs, nil
+}
+
+// OffsetForLeaderEpochRequest contains topics, partitions, and leader epochs
+// to request offsets for in an OffsetForLeaderEpoch.
+type OffsetForLeaderEpochRequest map[string]map[int32]int32
+
+// Add adds a topic, partition, and leader epoch to the request.
+func (l *OffsetForLeaderEpochRequest) Add(topic string, partition, leaderEpoch int32) {
+ if *l == nil {
+ *l = make(map[string]map[int32]int32)
+ }
+ t := (*l)[topic]
+ if t == nil {
+ t = make(map[int32]int32)
+ (*l)[topic] = t
+ }
+ t[partition] = leaderEpoch
+}
+
+// OffsetForLeaderEpoch contains a response for a single partition in an
+// OffsetForLeaderEpoch request.
+type OffsetForLeaderEpoch struct {
+ NodeID int32 // NodeID is the node that is the leader of this topic / partition.
+ Topic string // Topic is the topic this leader epoch response is for.
+ Partition int32 // Partition is the partition this leader epoch response is for.
+
+ // LeaderEpoch is either
+ //
+ // 1) -1, if the requested LeaderEpoch is unknown.
+ //
+ // 2) Less than the requested LeaderEpoch, if the requested LeaderEpoch
+ // exists but has no records in it. For example, epoch 1 had end offset
+ // 37, then epoch 2 and 3 had no records: if you request LeaderEpoch 3,
+ // this will return LeaderEpoch 1 with EndOffset 37.
+ //
+ // 3) Equal to the requested LeaderEpoch, if the requested LeaderEpoch
+ // is equal to or less than the current epoch for the partition.
+ LeaderEpoch int32
+
+ // EndOffset is either
+ //
+ // 1) The LogEndOffset, if the broker has the same LeaderEpoch as the
+ // request.
+ //
+ // 2) the beginning offset of the next LeaderEpoch, if the broker has a
+ // higher LeaderEpoch.
+ //
+ // The second option allows the user to detect data loss: if the
+ // consumer consumed past the EndOffset that is returned, then the
+ // consumer should reset to the returned offset and the consumer knows
+ // that everything from the returned offset to the requested offset was
+ // lost.
+ EndOffset int64
+
+ // Err is non-nil if this partition had a response error.
+ Err error
+}
+
+// OffsetsForLeaderEpochs contains responses for partitions in a
+// OffsetForLeaderEpochRequest.
+type OffsetsForLeaderEpochs map[string]map[int32]OffsetForLeaderEpoch
+
+// OffsetForLeaderEpoch requests end offsets for the requested leader epoch in
+// partitions in the request. This is a relatively advanced and client internal
+// request, for more details, see the doc comments on the OffsetForLeaderEpoch
+// type.
+//
+// This may return *ShardErrors or *AuthError.
+func (cl *Client) OffetForLeaderEpoch(ctx context.Context, r OffsetForLeaderEpochRequest) (OffsetsForLeaderEpochs, error) {
+ req := kmsg.NewPtrOffsetForLeaderEpochRequest()
+ for t, ps := range r {
+ rt := kmsg.NewOffsetForLeaderEpochRequestTopic()
+ rt.Topic = t
+ for p, e := range ps {
+ rp := kmsg.NewOffsetForLeaderEpochRequestTopicPartition()
+ rp.Partition = p
+ rp.LeaderEpoch = e
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+ shards := cl.cl.RequestSharded(ctx, req)
+ ls := make(OffsetsForLeaderEpochs)
+ return ls, shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.OffsetForLeaderEpochResponse)
+ for _, rt := range resp.Topics {
+ lps, exists := ls[rt.Topic]
+ if !exists { // topic partitions could be spread around brokers, need to check existence
+ lps = make(map[int32]OffsetForLeaderEpoch)
+ ls[rt.Topic] = lps
+ }
+ for _, rp := range rt.Partitions {
+ if err := maybeAuthErr(rp.ErrorCode); err != nil {
+ return err
+ }
+ lps[rp.Partition] = OffsetForLeaderEpoch{ // one partition globally, no need to exist check
+ NodeID: b.NodeID,
+ Topic: rt.Topic,
+ Partition: rp.Partition,
+ LeaderEpoch: rp.LeaderEpoch,
+ EndOffset: rp.EndOffset,
+ Err: kerr.ErrorForCode(rp.ErrorCode),
+ }
+ }
+ }
+ return nil
+ })
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/partas.go b/vendor/github.com/twmb/franz-go/pkg/kadm/partas.go
new file mode 100644
index 0000000000000..84b67790d8849
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/partas.go
@@ -0,0 +1,208 @@
+package kadm
+
+import (
+ "context"
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// AlterPartitionAssignmentsReq is the input for a request to alter partition
+// assignments. The keys are topics and partitions, and the final slice
+// corresponds to brokers that replicas will be assigneed to. If the brokers
+// for a given partition are null, the request will *cancel* any active
+// reassignment for that partition.
+type AlterPartitionAssignmentsReq map[string]map[int32][]int32
+
+// Assign specifies brokers that a partition should be placed on. Using null
+// for the brokers cancels a pending reassignment of the parititon.
+func (r *AlterPartitionAssignmentsReq) Assign(t string, p int32, brokers []int32) {
+ if *r == nil {
+ *r = make(map[string]map[int32][]int32)
+ }
+ ps := (*r)[t]
+ if ps == nil {
+ ps = make(map[int32][]int32)
+ (*r)[t] = ps
+ }
+ ps[p] = brokers
+}
+
+// CancelAssign cancels a reassignment of the given partition.
+func (r *AlterPartitionAssignmentsReq) CancelAssign(t string, p int32) {
+ r.Assign(t, p, nil)
+}
+
+// AlterPartitionAssignmentsResponse contains a response for an individual
+// partition that was assigned.
+type AlterPartitionAssignmentsResponse struct {
+ Topic string // Topic is the topic that was assigned.
+ Partition int32 // Partition is the partition that was assigned.
+ Err error // Err is non-nil if this assignment errored.
+ ErrMessage string // ErrMessage is an optional additional message on error.
+}
+
+// AlterPartitionAssignmentsResponses contains responses to all partitions in an
+// alter assignment request.
+type AlterPartitionAssignmentsResponses map[string]map[int32]AlterPartitionAssignmentsResponse
+
+// Sorted returns the responses sorted by topic and partition.
+func (rs AlterPartitionAssignmentsResponses) Sorted() []AlterPartitionAssignmentsResponse {
+ var all []AlterPartitionAssignmentsResponse
+ rs.Each(func(r AlterPartitionAssignmentsResponse) {
+ all = append(all, r)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && l.Partition < r.Partition
+ })
+ return all
+}
+
+// Each calls fn for every response.
+func (rs AlterPartitionAssignmentsResponses) Each(fn func(AlterPartitionAssignmentsResponse)) {
+ for _, ps := range rs {
+ for _, r := range ps {
+ fn(r)
+ }
+ }
+}
+
+// Error returns the first error in the responses, if any.
+func (rs AlterPartitionAssignmentsResponses) Error() error {
+ for _, ps := range rs {
+ for _, r := range ps {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ }
+ return nil
+}
+
+// AlterPartitionAssignments alters partition assignments for the requested
+// partitions, returning an error if the response could not be issued or if
+// you do not have permissions.
+func (cl *Client) AlterPartitionAssignments(ctx context.Context, req AlterPartitionAssignmentsReq) (AlterPartitionAssignmentsResponses, error) {
+ if len(req) == 0 {
+ return make(AlterPartitionAssignmentsResponses), nil
+ }
+
+ kreq := kmsg.NewPtrAlterPartitionAssignmentsRequest()
+ kreq.TimeoutMillis = cl.timeoutMillis
+ for t, ps := range req {
+ rt := kmsg.NewAlterPartitionAssignmentsRequestTopic()
+ rt.Topic = t
+ for p, bs := range ps {
+ rp := kmsg.NewAlterPartitionAssignmentsRequestTopicPartition()
+ rp.Partition = p
+ rp.Replicas = bs
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ kreq.Topics = append(kreq.Topics, rt)
+ }
+
+ kresp, err := kreq.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err = kerr.ErrorForCode(kresp.ErrorCode); err != nil {
+ return nil, &ErrAndMessage{err, unptrStr(kresp.ErrorMessage)}
+ }
+
+ a := make(AlterPartitionAssignmentsResponses)
+ for _, kt := range kresp.Topics {
+ ps := make(map[int32]AlterPartitionAssignmentsResponse)
+ a[kt.Topic] = ps
+ for _, kp := range kt.Partitions {
+ ps[kp.Partition] = AlterPartitionAssignmentsResponse{
+ Topic: kt.Topic,
+ Partition: kp.Partition,
+ Err: kerr.ErrorForCode(kp.ErrorCode),
+ ErrMessage: unptrStr(kp.ErrorMessage),
+ }
+ }
+ }
+ return a, nil
+}
+
+// ListPartitionReassignmentsResponse contains a response for an individual
+// partition that was listed.
+type ListPartitionReassignmentsResponse struct {
+ Topic string // Topic is the topic that was listed.
+ Partition int32 // Partition is the partition that was listed.
+ Replicas []int32 // Replicas are the partition's current replicas.
+ AddingReplicas []int32 // AddingReplicas are replicas currently being added to the partition.
+ RemovingReplicas []int32 // RemovingReplicas are replicas currently being removed from the partition.
+}
+
+// ListPartitionReassignmentsResponses contains responses to all partitions in
+// a list reassignment request.
+type ListPartitionReassignmentsResponses map[string]map[int32]ListPartitionReassignmentsResponse
+
+// Sorted returns the responses sorted by topic and partition.
+func (rs ListPartitionReassignmentsResponses) Sorted() []ListPartitionReassignmentsResponse {
+ var all []ListPartitionReassignmentsResponse
+ rs.Each(func(r ListPartitionReassignmentsResponse) {
+ all = append(all, r)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && l.Partition < r.Partition
+ })
+ return all
+}
+
+// Each calls fn for every response.
+func (rs ListPartitionReassignmentsResponses) Each(fn func(ListPartitionReassignmentsResponse)) {
+ for _, ps := range rs {
+ for _, r := range ps {
+ fn(r)
+ }
+ }
+}
+
+// ListPartitionReassignments lists the state of any active reassignments for
+// all requested partitions, returning an error if the response could not be
+// issued or if you do not have permissions.
+func (cl *Client) ListPartitionReassignments(ctx context.Context, s TopicsSet) (ListPartitionReassignmentsResponses, error) {
+ if len(s) == 0 {
+ return make(ListPartitionReassignmentsResponses), nil
+ }
+
+ kreq := kmsg.NewPtrListPartitionReassignmentsRequest()
+ kreq.TimeoutMillis = cl.timeoutMillis
+ for t, ps := range s {
+ rt := kmsg.NewListPartitionReassignmentsRequestTopic()
+ rt.Topic = t
+ for p := range ps {
+ rt.Partitions = append(rt.Partitions, p)
+ }
+ kreq.Topics = append(kreq.Topics, rt)
+ }
+
+ kresp, err := kreq.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+ if err = kerr.ErrorForCode(kresp.ErrorCode); err != nil {
+ return nil, &ErrAndMessage{err, unptrStr(kresp.ErrorMessage)}
+ }
+
+ a := make(ListPartitionReassignmentsResponses)
+ for _, kt := range kresp.Topics {
+ ps := make(map[int32]ListPartitionReassignmentsResponse)
+ a[kt.Topic] = ps
+ for _, kp := range kt.Partitions {
+ ps[kp.Partition] = ListPartitionReassignmentsResponse{
+ Topic: kt.Topic,
+ Partition: kp.Partition,
+ Replicas: kp.Replicas,
+ AddingReplicas: kp.AddingReplicas,
+ RemovingReplicas: kp.RemovingReplicas,
+ }
+ }
+ }
+ return a, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/topics.go b/vendor/github.com/twmb/franz-go/pkg/kadm/topics.go
new file mode 100644
index 0000000000000..408506e5df4fc
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/topics.go
@@ -0,0 +1,645 @@
+package kadm
+
+import (
+ "context"
+ "errors"
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// ListTopics issues a metadata request and returns TopicDetails. Specific
+// topics to describe can be passed as additional arguments. If no topics are
+// specified, all topics are requested. Internal topics are not returned unless
+// specifically requested. To see all topics including internal topics, use
+// ListTopicsWithInternal.
+//
+// This returns an error if the request fails to be issued, or an *AuthError.
+func (cl *Client) ListTopics(
+ ctx context.Context,
+ topics ...string,
+) (TopicDetails, error) {
+ t, err := cl.ListTopicsWithInternal(ctx, topics...)
+ if err != nil {
+ return nil, err
+ }
+ t.FilterInternal()
+ return t, nil
+}
+
+// ListTopicsWithInternal is the same as ListTopics, but does not filter
+// internal topics before returning.
+func (cl *Client) ListTopicsWithInternal(
+ ctx context.Context,
+ topics ...string,
+) (TopicDetails, error) {
+ m, err := cl.Metadata(ctx, topics...)
+ if err != nil {
+ return nil, err
+ }
+ return m.Topics, nil
+}
+
+// CreateTopicResponse contains the response for an individual created topic.
+type CreateTopicResponse struct {
+ Topic string // Topic is the topic that was created.
+ ID TopicID // ID is the topic ID for this topic, if talking to Kafka v2.8+.
+ Err error // Err is any error preventing this topic from being created.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+ NumPartitions int32 // NumPartitions is the number of partitions in the response, if talking to Kafka v2.4+.
+ ReplicationFactor int16 // ReplicationFactor is how many replicas every partition has for this topic, if talking to Kafka 2.4+.
+ Configs map[string]Config // Configs contains the topic configuration (minus config synonyms), if talking to Kafka 2.4+.
+}
+
+// CreateTopicRepsonses contains per-topic responses for created topics.
+type CreateTopicResponses map[string]CreateTopicResponse
+
+// Sorted returns all create topic responses sorted first by topic ID, then by
+// topic name.
+func (rs CreateTopicResponses) Sorted() []CreateTopicResponse {
+ s := make([]CreateTopicResponse, 0, len(rs))
+ for _, d := range rs {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool {
+ l, r := s[i], s[j]
+ if l.ID.Less(r.ID) {
+ return true
+ }
+ return l.Topic < r.Topic
+ })
+ return s
+}
+
+// On calls fn for the response topic if it exists, returning the response and
+// the error returned from fn. If fn is nil, this simply returns the response.
+//
+// The fn is given a copy of the response. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the topic does not exist, this returns kerr.UnknownTopicOrPartition.
+func (rs CreateTopicResponses) On(topic string, fn func(*CreateTopicResponse) error) (CreateTopicResponse, error) {
+ if len(rs) > 0 {
+ r, ok := rs[topic]
+ if ok {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return CreateTopicResponse{}, kerr.UnknownTopicOrPartition
+}
+
+// Error iterates over all responses and returns the first error
+// encountered, if any.
+func (rs CreateTopicResponses) Error() error {
+ for _, r := range rs {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ return nil
+}
+
+// CreateTopic issues a create topics request with the given partitions,
+// replication factor, and (optional) configs for the given topic name.
+// This is similar to CreateTopics, but returns the kerr.ErrorForCode(response.ErrorCode)
+// if the request/response is successful.
+func (cl *Client) CreateTopic(
+ ctx context.Context,
+ partitions int32,
+ replicationFactor int16,
+ configs map[string]*string,
+ topic string,
+) (CreateTopicResponse, error) {
+ createTopicResponse, err := cl.CreateTopics(
+ ctx,
+ partitions,
+ replicationFactor,
+ configs,
+ topic,
+ )
+ if err != nil {
+ return CreateTopicResponse{}, err
+ }
+
+ response, exists := createTopicResponse[topic]
+ if !exists {
+ return CreateTopicResponse{}, errors.New("requested topic was not part of create topic response")
+ }
+
+ return response, response.Err
+}
+
+// CreateTopics issues a create topics request with the given partitions,
+// replication factor, and (optional) configs for every topic. Under the hood,
+// this uses the default 15s request timeout and lets Kafka choose where to
+// place partitions.
+//
+// Version 4 of the underlying create topic request was introduced in Kafka 2.4
+// and brought client support for creation defaults. If talking to a 2.4+
+// cluster, you can use -1 for partitions and replicationFactor to use broker
+// defaults.
+//
+// This package includes a StringPtr function to aid in building config values.
+//
+// This does not return an error on authorization failures, instead,
+// authorization failures are included in the responses. This only returns an
+// error if the request fails to be issued. You may consider checking
+// ValidateCreateTopics before using this method.
+func (cl *Client) CreateTopics(
+ ctx context.Context,
+ partitions int32,
+ replicationFactor int16,
+ configs map[string]*string,
+ topics ...string,
+) (CreateTopicResponses, error) {
+ return cl.createTopics(ctx, false, partitions, replicationFactor, configs, topics)
+}
+
+// ValidateCreateTopics validates a create topics request with the given
+// partitions, replication factor, and (optional) configs for every topic.
+//
+// This package includes a StringPtr function to aid in building config values.
+//
+// This uses the same logic as CreateTopics, but with the request's
+// ValidateOnly field set to true. The response is the same response you would
+// receive from CreateTopics, but no topics are actually created.
+func (cl *Client) ValidateCreateTopics(
+ ctx context.Context,
+ partitions int32,
+ replicationFactor int16,
+ configs map[string]*string,
+ topics ...string,
+) (CreateTopicResponses, error) {
+ return cl.createTopics(ctx, true, partitions, replicationFactor, configs, topics)
+}
+
+func (cl *Client) createTopics(ctx context.Context, dry bool, p int32, rf int16, configs map[string]*string, topics []string) (CreateTopicResponses, error) {
+ if len(topics) == 0 {
+ return make(CreateTopicResponses), nil
+ }
+
+ req := kmsg.NewCreateTopicsRequest()
+ req.TimeoutMillis = cl.timeoutMillis
+ req.ValidateOnly = dry
+ for _, t := range topics {
+ rt := kmsg.NewCreateTopicsRequestTopic()
+ rt.Topic = t
+ rt.NumPartitions = p
+ rt.ReplicationFactor = rf
+ for k, v := range configs {
+ rc := kmsg.NewCreateTopicsRequestTopicConfig()
+ rc.Name = k
+ rc.Value = v
+ rt.Configs = append(rt.Configs, rc)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+
+ rs := make(CreateTopicResponses)
+ for _, t := range resp.Topics {
+ rt := CreateTopicResponse{
+ Topic: t.Topic,
+ ID: t.TopicID,
+ Err: kerr.ErrorForCode(t.ErrorCode),
+ ErrMessage: unptrStr(t.ErrorMessage),
+ NumPartitions: t.NumPartitions,
+ ReplicationFactor: t.ReplicationFactor,
+ Configs: make(map[string]Config),
+ }
+ for _, c := range t.Configs {
+ rt.Configs[c.Name] = Config{
+ Key: c.Name,
+ Value: c.Value,
+ Source: kmsg.ConfigSource(c.Source),
+ Sensitive: c.IsSensitive,
+ }
+ }
+ rs[t.Topic] = rt
+ }
+ return rs, nil
+}
+
+// DeleteTopicResponse contains the response for an individual deleted topic.
+type DeleteTopicResponse struct {
+ Topic string // Topic is the topic that was deleted, if not using topic IDs.
+ ID TopicID // ID is the topic ID for this topic, if talking to Kafka v2.8+ and using topic IDs.
+ Err error // Err is any error preventing this topic from being deleted.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// DeleteTopicResponses contains per-topic responses for deleted topics.
+type DeleteTopicResponses map[string]DeleteTopicResponse
+
+// Sorted returns all delete topic responses sorted first by topic ID, then by
+// topic name.
+func (rs DeleteTopicResponses) Sorted() []DeleteTopicResponse {
+ s := make([]DeleteTopicResponse, 0, len(rs))
+ for _, d := range rs {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool {
+ l, r := s[i], s[j]
+ if l.ID.Less(r.ID) {
+ return true
+ }
+ return l.Topic < r.Topic
+ })
+ return s
+}
+
+// On calls fn for the response topic if it exists, returning the response and
+// the error returned from fn. If fn is nil, this simply returns the response.
+//
+// The fn is given a copy of the response. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the topic does not exist, this returns kerr.UnknownTopicOrPartition.
+func (rs DeleteTopicResponses) On(topic string, fn func(*DeleteTopicResponse) error) (DeleteTopicResponse, error) {
+ if len(rs) > 0 {
+ r, ok := rs[topic]
+ if ok {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return DeleteTopicResponse{}, kerr.UnknownTopicOrPartition
+}
+
+// Error iterates over all responses and returns the first error
+// encountered, if any.
+func (rs DeleteTopicResponses) Error() error {
+ for _, r := range rs {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ return nil
+}
+
+// DeleteTopic issues a delete topic request for the given topic name with a
+// (by default) 15s timeout. This is similar to DeleteTopics, but returns the
+// kerr.ErrorForCode(response.ErrorCode) if the request/response is successful.
+func (cl *Client) DeleteTopic(ctx context.Context, topic string) (DeleteTopicResponse, error) {
+ rs, err := cl.DeleteTopics(ctx, topic)
+ if err != nil {
+ return DeleteTopicResponse{}, err
+ }
+ r, exists := rs[topic]
+ if !exists {
+ return DeleteTopicResponse{}, errors.New("requested topic was not part of delete topic response")
+ }
+ return r, r.Err
+}
+
+// DeleteTopics issues a delete topics request for the given topic names with a
+// (by default) 15s timeout.
+//
+// This does not return an error on authorization failures, instead,
+// authorization failures are included in the responses. This only returns an
+// error if the request fails to be issued.
+func (cl *Client) DeleteTopics(ctx context.Context, topics ...string) (DeleteTopicResponses, error) {
+ if len(topics) == 0 {
+ return make(DeleteTopicResponses), nil
+ }
+
+ req := kmsg.NewDeleteTopicsRequest()
+ req.TimeoutMillis = cl.timeoutMillis
+ req.TopicNames = topics
+ for _, t := range topics {
+ rt := kmsg.NewDeleteTopicsRequestTopic()
+ rt.Topic = kmsg.StringPtr(t)
+ req.Topics = append(req.Topics, rt)
+ }
+
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+
+ rs := make(DeleteTopicResponses)
+ for _, t := range resp.Topics {
+ // A valid Kafka will return non-nil topics here, because we
+ // are deleting by topic name, not ID. We still check to be
+ // sure, but multiple invalid (nil) topics will collide.
+ var topic string
+ if t.Topic != nil {
+ topic = *t.Topic
+ }
+ rs[topic] = DeleteTopicResponse{
+ Topic: topic,
+ ID: t.TopicID,
+ Err: kerr.ErrorForCode(t.ErrorCode),
+ ErrMessage: unptrStr(t.ErrorMessage),
+ }
+ }
+ return rs, nil
+}
+
+// DeleteRecordsResponse contains the response for an individual partition from
+// a delete records request.
+type DeleteRecordsResponse struct {
+ Topic string // Topic is the topic this response is for.
+ Partition int32 // Partition is the partition this response is for.
+ LowWatermark int64 // LowWatermark is the new earliest / start offset for this partition if the request was successful.
+ Err error // Err is any error preventing the delete records request from being successful for this partition.
+}
+
+// DeleteRecordsResponses contains per-partition responses to a delete records request.
+type DeleteRecordsResponses map[string]map[int32]DeleteRecordsResponse
+
+// Lookup returns the response at t and p and whether it exists.
+func (ds DeleteRecordsResponses) Lookup(t string, p int32) (DeleteRecordsResponse, bool) {
+ if len(ds) == 0 {
+ return DeleteRecordsResponse{}, false
+ }
+ ps := ds[t]
+ if len(ps) == 0 {
+ return DeleteRecordsResponse{}, false
+ }
+ r, exists := ps[p]
+ return r, exists
+}
+
+// Each calls fn for every delete records response.
+func (ds DeleteRecordsResponses) Each(fn func(DeleteRecordsResponse)) {
+ for _, ps := range ds {
+ for _, d := range ps {
+ fn(d)
+ }
+ }
+}
+
+// Sorted returns all delete records responses sorted first by topic, then by
+// partition.
+func (rs DeleteRecordsResponses) Sorted() []DeleteRecordsResponse {
+ var s []DeleteRecordsResponse
+ for _, ps := range rs {
+ for _, d := range ps {
+ s = append(s, d)
+ }
+ }
+ sort.Slice(s, func(i, j int) bool {
+ l, r := s[i], s[j]
+ if l.Topic < r.Topic {
+ return true
+ }
+ if l.Topic > r.Topic {
+ return false
+ }
+ return l.Partition < r.Partition
+ })
+ return s
+}
+
+// On calls fn for the response topic/partition if it exists, returning the
+// response and the error returned from fn. If fn is nil, this simply returns
+// the response.
+//
+// The fn is given a copy of the response. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the topic or partition does not exist, this returns
+// kerr.UnknownTopicOrPartition.
+func (rs DeleteRecordsResponses) On(topic string, partition int32, fn func(*DeleteRecordsResponse) error) (DeleteRecordsResponse, error) {
+ if len(rs) > 0 {
+ t, ok := rs[topic]
+ if ok {
+ p, ok := t[partition]
+ if ok {
+ if fn == nil {
+ return p, nil
+ }
+ return p, fn(&p)
+ }
+ }
+ }
+ return DeleteRecordsResponse{}, kerr.UnknownTopicOrPartition
+}
+
+// Error iterates over all responses and returns the first error
+// encountered, if any.
+func (rs DeleteRecordsResponses) Error() error {
+ for _, ps := range rs {
+ for _, r := range ps {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ }
+ return nil
+}
+
+// DeleteRecords issues a delete records request for the given offsets. Per
+// offset, only the Offset field needs to be set.
+//
+// To delete records, Kafka sets the LogStartOffset for partitions to the
+// requested offset. All segments whose max partition is before the requested
+// offset are deleted, and any records within the segment before the requested
+// offset can no longer be read.
+//
+// This does not return an error on authorization failures, instead,
+// authorization failures are included in the responses.
+//
+// This may return *ShardErrors.
+func (cl *Client) DeleteRecords(ctx context.Context, os Offsets) (DeleteRecordsResponses, error) {
+ if len(os) == 0 {
+ return make(DeleteRecordsResponses), nil
+ }
+
+ req := kmsg.NewPtrDeleteRecordsRequest()
+ req.TimeoutMillis = cl.timeoutMillis
+ for t, ps := range os {
+ rt := kmsg.NewDeleteRecordsRequestTopic()
+ rt.Topic = t
+ for p, o := range ps {
+ rp := kmsg.NewDeleteRecordsRequestTopicPartition()
+ rp.Partition = p
+ rp.Offset = o.At
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+
+ shards := cl.cl.RequestSharded(ctx, req)
+ rs := make(DeleteRecordsResponses)
+ return rs, shardErrEach(req, shards, func(kr kmsg.Response) error {
+ resp := kr.(*kmsg.DeleteRecordsResponse)
+ for _, t := range resp.Topics {
+ rt, exists := rs[t.Topic]
+ if !exists { // topic could be spread around brokers, we need to check existence
+ rt = make(map[int32]DeleteRecordsResponse)
+ rs[t.Topic] = rt
+ }
+ for _, p := range t.Partitions {
+ rt[p.Partition] = DeleteRecordsResponse{
+ Topic: t.Topic,
+ Partition: p.Partition,
+ LowWatermark: p.LowWatermark,
+ Err: kerr.ErrorForCode(p.ErrorCode),
+ }
+ }
+ }
+ return nil
+ })
+}
+
+// CreatePartitionsResponse contains the response for an individual topic from
+// a create partitions request.
+type CreatePartitionsResponse struct {
+ Topic string // Topic is the topic this response is for.
+ Err error // Err is non-nil if partitions were unable to be added to this topic.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// CreatePartitionsResponses contains per-topic responses for a create
+// partitions request.
+type CreatePartitionsResponses map[string]CreatePartitionsResponse
+
+// Sorted returns all create partitions responses sorted by topic.
+func (rs CreatePartitionsResponses) Sorted() []CreatePartitionsResponse {
+ var s []CreatePartitionsResponse
+ for _, r := range rs {
+ s = append(s, r)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].Topic < s[j].Topic })
+ return s
+}
+
+// On calls fn for the response topic if it exists, returning the response and
+// the error returned from fn. If fn is nil, this simply returns the response.
+//
+// The fn is given a copy of the response. This function returns the copy as
+// well; any modifications within fn are modifications on the returned copy.
+//
+// If the topic does not exist, this returns kerr.UnknownTopicOrPartition.
+func (rs CreatePartitionsResponses) On(topic string, fn func(*CreatePartitionsResponse) error) (CreatePartitionsResponse, error) {
+ if len(rs) > 0 {
+ r, ok := rs[topic]
+ if ok {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return CreatePartitionsResponse{}, kerr.UnknownTopicOrPartition
+}
+
+// Error iterates over all responses and returns the first error
+// encountered, if any.
+func (rs CreatePartitionsResponses) Error() error {
+ for _, r := range rs {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ return nil
+}
+
+// CreatePartitions issues a create partitions request for the given topics,
+// adding "add" partitions to each topic. This request lets Kafka choose where
+// the new partitions should be.
+//
+// This does not return an error on authorization failures for the create
+// partitions request itself, instead, authorization failures are included in
+// the responses. Before adding partitions, this request must issue a metadata
+// request to learn the current count of partitions. If that fails, this
+// returns the metadata request error. If you already know the final amount of
+// partitions you want, you can use UpdatePartitions to set the count directly
+// (rather than adding to the current count). You may consider checking
+// ValidateCreatePartitions before using this method.
+func (cl *Client) CreatePartitions(ctx context.Context, add int, topics ...string) (CreatePartitionsResponses, error) {
+ return cl.createPartitions(ctx, false, add, -1, topics)
+}
+
+// UpdatePartitions issues a create partitions request for the given topics,
+// setting the final partition count to "set" for each topic. This request lets
+// Kafka choose where the new partitions should be.
+//
+// This does not return an error on authorization failures for the create
+// partitions request itself, instead, authorization failures are included in
+// the responses. Unlike CreatePartitions, this request uses your "set" value
+// to set the new final count of partitions. "set" must be equal to or larger
+// than the current count of partitions in the topic. All topics will have the
+// same final count of partitions (unlike CreatePartitions, which allows you to
+// add a specific count of partitions to topics that have a different amount of
+// current partitions). You may consider checking ValidateUpdatePartitions
+// before using this method.
+func (cl *Client) UpdatePartitions(ctx context.Context, set int, topics ...string) (CreatePartitionsResponses, error) {
+ return cl.createPartitions(ctx, false, -1, set, topics)
+}
+
+// ValidateCreatePartitions validates a create partitions request for adding
+// "add" partitions to the given topics.
+//
+// This uses the same logic as CreatePartitions, but with the request's
+// ValidateOnly field set to true. The response is the same response you would
+// receive from CreatePartitions, but no partitions are actually added.
+func (cl *Client) ValidateCreatePartitions(ctx context.Context, add int, topics ...string) (CreatePartitionsResponses, error) {
+ return cl.createPartitions(ctx, true, add, -1, topics)
+}
+
+// ValidateUpdatePartitions validates a create partitions request for setting
+// the partition count on the given topics to "set".
+//
+// This uses the same logic as UpdatePartitions, but with the request's
+// ValidateOnly field set to true. The response is the same response you would
+// receive from UpdatePartitions, but no partitions are actually added.
+func (cl *Client) ValidateUpdatePartitions(ctx context.Context, set int, topics ...string) (CreatePartitionsResponses, error) {
+ return cl.createPartitions(ctx, true, -1, set, topics)
+}
+
+func (cl *Client) createPartitions(ctx context.Context, dry bool, add, set int, topics []string) (CreatePartitionsResponses, error) {
+ if len(topics) == 0 {
+ return make(CreatePartitionsResponses), nil
+ }
+
+ var td TopicDetails
+ var err error
+ if add != -1 {
+ td, err = cl.ListTopics(ctx, topics...)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ req := kmsg.NewCreatePartitionsRequest()
+ req.TimeoutMillis = cl.timeoutMillis
+ req.ValidateOnly = dry
+ for _, t := range topics {
+ rt := kmsg.NewCreatePartitionsRequestTopic()
+ rt.Topic = t
+ if add == -1 {
+ rt.Count = int32(set)
+ } else {
+ rt.Count = int32(len(td[t].Partitions) + add)
+ }
+ req.Topics = append(req.Topics, rt)
+ }
+
+ resp, err := req.RequestWith(ctx, cl.cl)
+ if err != nil {
+ return nil, err
+ }
+
+ rs := make(CreatePartitionsResponses)
+ for _, t := range resp.Topics {
+ rs[t.Topic] = CreatePartitionsResponse{
+ Topic: t.Topic,
+ Err: kerr.ErrorForCode(t.ErrorCode),
+ ErrMessage: unptrStr(t.ErrorMessage),
+ }
+ }
+ return rs, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/txn.go b/vendor/github.com/twmb/franz-go/pkg/kadm/txn.go
new file mode 100644
index 0000000000000..2b8ccbe2e7a15
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/txn.go
@@ -0,0 +1,761 @@
+package kadm
+
+import (
+ "context"
+ "errors"
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// DescribedProducer contains the state of a transactional producer's last
+// produce.
+type DescribedProducer struct {
+ Leader int32 // Leader is the leader broker for this topic / partition.
+ Topic string // Topic is the topic being produced to.
+ Partition int32 // Partition is the partition being produced to.
+ ProducerID int64 // ProducerID is the producer ID that produced.
+ ProducerEpoch int16 // ProducerEpoch is the epoch that produced.
+ LastSequence int32 // LastSequence is the last sequence number the producer produced.
+ LastTimestamp int64 // LastTimestamp is the last time this producer produced.
+ CoordinatorEpoch int32 // CoordinatorEpoch is the epoch of the transactional coordinator for the last produce.
+ CurrentTxnStartOffset int64 // CurrentTxnStartOffset is the first offset in the transaction.
+}
+
+// Less returns whether the left described producer is less than the right,
+// in order of:
+//
+// - Topic
+// - Partition
+// - ProducerID
+// - ProducerEpoch
+// - LastTimestamp
+// - LastSequence
+func (l *DescribedProducer) Less(r *DescribedProducer) bool {
+ if l.Topic < r.Topic {
+ return true
+ }
+ if l.Topic > r.Topic {
+ return false
+ }
+ if l.Partition < r.Partition {
+ return true
+ }
+ if l.Partition > r.Partition {
+ return false
+ }
+ if l.ProducerID < r.ProducerID {
+ return true
+ }
+ if l.ProducerID > r.ProducerID {
+ return false
+ }
+ if l.ProducerEpoch < r.ProducerEpoch {
+ return true
+ }
+ if l.ProducerEpoch > r.ProducerEpoch {
+ return false
+ }
+ if l.LastTimestamp < r.LastTimestamp {
+ return true
+ }
+ if l.LastTimestamp > r.LastTimestamp {
+ return false
+ }
+ return l.LastSequence < r.LastSequence
+}
+
+// DescribedProducers maps producer IDs to the full described producer.
+type DescribedProducers map[int64]DescribedProducer
+
+// Sorted returns the described producers sorted by topic, partition, and
+// producer ID.
+func (ds DescribedProducers) Sorted() []DescribedProducer {
+ var all []DescribedProducer
+ for _, d := range ds {
+ all = append(all, d)
+ }
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && (l.Partition < r.Partition || l.Partition == r.Partition && l.ProducerID < r.ProducerID)
+ })
+ return all
+}
+
+// Each calls fn for each described producer.
+func (ds DescribedProducers) Each(fn func(DescribedProducer)) {
+ for _, d := range ds {
+ fn(d)
+ }
+}
+
+// DescribedProducersPartition is a partition whose producer's were described.
+type DescribedProducersPartition struct {
+ Leader int32 // Leader is the leader broker for this topic / partition.
+ Topic string // Topic is the topic whose producer's were described.
+ Partition int32 // Partition is the partition whose producer's were described.
+ ActiveProducers DescribedProducers // ActiveProducers are producer's actively transactionally producing to this partition.
+ Err error // Err is non-nil if describing this partition failed.
+ ErrMessage string // ErrMessage a potential extra message describing any error.
+}
+
+// DescribedProducersPartitions contains partitions whose producer's were described.
+type DescribedProducersPartitions map[int32]DescribedProducersPartition
+
+// Sorted returns the described partitions sorted by topic and partition.
+func (ds DescribedProducersPartitions) Sorted() []DescribedProducersPartition {
+ var all []DescribedProducersPartition
+ for _, d := range ds {
+ all = append(all, d)
+ }
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && l.Partition < r.Partition
+ })
+ return all
+}
+
+// SortedProducer returns all producers sorted first by partition, then by producer ID.
+func (ds DescribedProducersPartitions) SortedProducers() []DescribedProducer {
+ var all []DescribedProducer
+ ds.EachProducer(func(d DescribedProducer) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && (l.Partition < r.Partition || l.Partition == r.Partition && l.ProducerID < r.ProducerID)
+ })
+ return all
+}
+
+// Each calls fn for each partition.
+func (ds DescribedProducersPartitions) Each(fn func(DescribedProducersPartition)) {
+ for _, d := range ds {
+ fn(d)
+ }
+}
+
+// EachProducer calls fn for each producer in all partitions.
+func (ds DescribedProducersPartitions) EachProducer(fn func(DescribedProducer)) {
+ for _, d := range ds {
+ for _, p := range d.ActiveProducers {
+ fn(p)
+ }
+ }
+}
+
+// DescribedProducersTopic contains topic partitions whose producer's were described.
+type DescribedProducersTopic struct {
+ Topic string // Topic is the topic whose producer's were described.
+ Partitions DescribedProducersPartitions // Partitions are partitions whose producer's were described.
+}
+
+// DescribedProducersTopics contains topics whose producer's were described.
+type DescribedProducersTopics map[string]DescribedProducersTopic
+
+// Sorted returns the described topics sorted by topic.
+func (ds DescribedProducersTopics) Sorted() []DescribedProducersTopic {
+ var all []DescribedProducersTopic
+ ds.Each(func(d DescribedProducersTopic) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic
+ })
+ return all
+}
+
+// Sorted returns the described partitions sorted by topic and partition.
+func (ds DescribedProducersTopics) SortedPartitions() []DescribedProducersPartition {
+ var all []DescribedProducersPartition
+ ds.EachPartition(func(d DescribedProducersPartition) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && l.Partition < r.Partition
+ })
+ return all
+}
+
+// SortedProducer returns all producers sorted first by partition, then by producer ID.
+func (ds DescribedProducersTopics) SortedProducers() []DescribedProducer {
+ var all []DescribedProducer
+ ds.EachProducer(func(d DescribedProducer) {
+ all = append(all, d)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && (l.Partition < r.Partition || l.Partition == r.Partition && l.ProducerID < r.ProducerID)
+ })
+ return all
+}
+
+// Each calls fn for every topic.
+func (ds DescribedProducersTopics) Each(fn func(DescribedProducersTopic)) {
+ for _, d := range ds {
+ fn(d)
+ }
+}
+
+// EachPartitions calls fn for all topic partitions.
+func (ds DescribedProducersTopics) EachPartition(fn func(DescribedProducersPartition)) {
+ for _, d := range ds {
+ for _, p := range d.Partitions {
+ fn(p)
+ }
+ }
+}
+
+// EachProducer calls fn for each producer in all topics and partitions.
+func (ds DescribedProducersTopics) EachProducer(fn func(DescribedProducer)) {
+ for _, d := range ds {
+ for _, p := range d.Partitions {
+ for _, b := range p.ActiveProducers {
+ fn(b)
+ }
+ }
+ }
+}
+
+// DescribeProducers describes all producers that are transactionally producing
+// to the requested topic set. This request can be used to detect hanging
+// transactions or other transaction related problems. If the input set is
+// empty, this requests data for all partitions.
+//
+// This may return *ShardErrors or *AuthError.
+func (cl *Client) DescribeProducers(ctx context.Context, s TopicsSet) (DescribedProducersTopics, error) {
+ if len(s) == 0 {
+ m, err := cl.Metadata(ctx)
+ if err != nil {
+ return nil, err
+ }
+ s = m.Topics.TopicsSet()
+ } else if e := s.EmptyTopics(); len(e) > 0 {
+ m, err := cl.Metadata(ctx, e...)
+ if err != nil {
+ return nil, err
+ }
+ for t, ps := range m.Topics.TopicsSet() {
+ s[t] = ps
+ }
+ }
+
+ req := kmsg.NewPtrDescribeProducersRequest()
+ for _, t := range s.IntoList() {
+ rt := kmsg.NewDescribeProducersRequestTopic()
+ rt.Topic = t.Topic
+ rt.Partitions = t.Partitions
+ req.Topics = append(req.Topics, rt)
+ }
+ shards := cl.cl.RequestSharded(ctx, req)
+ dts := make(DescribedProducersTopics)
+ return dts, shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.DescribeProducersResponse)
+ for _, rt := range resp.Topics {
+ dt, exists := dts[rt.Topic]
+ if !exists { // topic could be spread around brokers, we need to check existence
+ dt = DescribedProducersTopic{
+ Topic: rt.Topic,
+ Partitions: make(DescribedProducersPartitions),
+ }
+ dts[rt.Topic] = dt
+ }
+ dps := dt.Partitions
+ for _, rp := range rt.Partitions {
+ if err := maybeAuthErr(rp.ErrorCode); err != nil {
+ return err
+ }
+ drs := make(DescribedProducers)
+ dp := DescribedProducersPartition{
+ Leader: b.NodeID,
+ Topic: rt.Topic,
+ Partition: rp.Partition,
+ ActiveProducers: drs,
+ Err: kerr.ErrorForCode(rp.ErrorCode),
+ ErrMessage: unptrStr(rp.ErrorMessage),
+ }
+ dps[rp.Partition] = dp // one partition globally, no need to exist-check
+ for _, rr := range rp.ActiveProducers {
+ dr := DescribedProducer{
+ Leader: b.NodeID,
+ Topic: rt.Topic,
+ Partition: rp.Partition,
+ ProducerID: rr.ProducerID,
+ ProducerEpoch: int16(rr.ProducerEpoch),
+ LastSequence: rr.LastSequence,
+ LastTimestamp: rr.LastTimestamp,
+ CoordinatorEpoch: rr.CoordinatorEpoch,
+ CurrentTxnStartOffset: rr.CurrentTxnStartOffset,
+ }
+ drs[dr.ProducerID] = dr
+ }
+ }
+ }
+ return nil
+ })
+}
+
+// DescribedTransaction contains data from a describe transactions response for
+// a single transactional ID.
+type DescribedTransaction struct {
+ Coordinator int32 // Coordinator is the coordinator broker for this transactional ID.
+ TxnID string // TxnID is the name of this transactional ID.
+ State string // State is the state this transaction is in (Empty, Ongoing, PrepareCommit, PrepareAbort, CompleteCommit, CompleteAbort, Dead, PrepareEpochFence).
+ TimeoutMillis int32 // TimeoutMillis is the timeout of this transaction in milliseconds.
+ StartTimestamp int64 // StartTimestamp is millisecond when this transaction started.
+ ProducerID int64 // ProducerID is the ID in use by the transactional ID.
+ ProducerEpoch int16 // ProducerEpoch is the epoch associated with the produce rID.
+
+ // Topics is the set of partitions in the transaction, if active. When
+ // preparing to commit or abort, this includes only partitions which do
+ // not have markers. This does not include topics the user is not
+ // authorized to describe.
+ Topics TopicsSet
+
+ Err error // Err is non-nil if the transaction could not be described.
+}
+
+// DescribedTransactions contains information from a describe transactions
+// response.
+type DescribedTransactions map[string]DescribedTransaction
+
+// Sorted returns all described transactions sorted by transactional ID.
+func (ds DescribedTransactions) Sorted() []DescribedTransaction {
+ s := make([]DescribedTransaction, 0, len(ds))
+ for _, d := range ds {
+ s = append(s, d)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].TxnID < s[j].TxnID })
+ return s
+}
+
+// Each calls fn for each described transaction.
+func (ds DescribedTransactions) Each(fn func(DescribedTransaction)) {
+ for _, d := range ds {
+ fn(d)
+ }
+}
+
+// On calls fn for the transactional ID if it exists, returning the transaction
+// and the error returned from fn. If fn is nil, this simply returns the
+// transaction.
+//
+// The fn is given a shallow copy of the transaction. This function returns the
+// copy as well; any modifications within fn are modifications on the returned
+// copy. Modifications on a described transaction's inner fields are persisted
+// to the original map (because slices are pointers).
+//
+// If the transaction does not exist, this returns
+// kerr.TransactionalIDNotFound.
+func (rs DescribedTransactions) On(txnID string, fn func(*DescribedTransaction) error) (DescribedTransaction, error) {
+ if len(rs) > 0 {
+ r, ok := rs[txnID]
+ if ok {
+ if fn == nil {
+ return r, nil
+ }
+ return r, fn(&r)
+ }
+ }
+ return DescribedTransaction{}, kerr.TransactionalIDNotFound
+}
+
+// TransactionalIDs returns a sorted list of all transactional IDs.
+func (ds DescribedTransactions) TransactionalIDs() []string {
+ all := make([]string, 0, len(ds))
+ for t := range ds {
+ all = append(all, t)
+ }
+ sort.Strings(all)
+ return all
+}
+
+// DescribeTransactions describes either all transactional IDs specified, or
+// all transactional IDs in the cluster if none are specified.
+//
+// This may return *ShardErrors or *AuthError.
+//
+// If no transactional IDs are specified and this method first lists
+// transactional IDs, and listing IDs returns a *ShardErrors, this function
+// describes all successfully listed IDs and appends the list shard errors to
+// any describe shard errors.
+//
+// If only one ID is described, there will be at most one request issued and
+// there is no need to deeply inspect the error.
+func (cl *Client) DescribeTransactions(ctx context.Context, txnIDs ...string) (DescribedTransactions, error) {
+ var seList *ShardErrors
+ if len(txnIDs) == 0 {
+ listed, err := cl.ListTransactions(ctx, nil, nil)
+ switch {
+ case err == nil:
+ case errors.As(err, &seList):
+ default:
+ return nil, err
+ }
+ txnIDs = listed.TransactionalIDs()
+ if len(txnIDs) == 0 {
+ return nil, err
+ }
+ }
+
+ req := kmsg.NewPtrDescribeTransactionsRequest()
+ req.TransactionalIDs = txnIDs
+
+ shards := cl.cl.RequestSharded(ctx, req)
+ described := make(DescribedTransactions)
+ err := shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.DescribeTransactionsResponse)
+ for _, rt := range resp.TransactionStates {
+ if err := maybeAuthErr(rt.ErrorCode); err != nil {
+ return err
+ }
+ t := DescribedTransaction{
+ Coordinator: b.NodeID,
+ TxnID: rt.TransactionalID,
+ State: rt.State,
+ TimeoutMillis: rt.TimeoutMillis,
+ StartTimestamp: rt.StartTimestamp,
+ ProducerID: rt.ProducerID,
+ ProducerEpoch: rt.ProducerEpoch,
+ Err: kerr.ErrorForCode(rt.ErrorCode),
+ }
+ for _, rtt := range rt.Topics {
+ t.Topics.Add(rtt.Topic, rtt.Partitions...)
+ }
+ described[t.TxnID] = t // txnID lives on one coordinator, no need to exist-check
+ }
+ return nil
+ })
+
+ var seDesc *ShardErrors
+ switch {
+ case err == nil:
+ return described, seList.into()
+ case errors.As(err, &seDesc):
+ if seList != nil {
+ seDesc.Errs = append(seList.Errs, seDesc.Errs...)
+ }
+ return described, seDesc.into()
+ default:
+ return nil, err
+ }
+}
+
+// ListedTransaction contains data from a list transactions response for a
+// single transactional ID.
+type ListedTransaction struct {
+ Coordinator int32 // Coordinator the coordinator broker for this transactional ID.
+ TxnID string // TxnID is the name of this transactional ID.
+ ProducerID int64 // ProducerID is the producer ID for this transaction.
+ State string // State is the state this transaction is in (Empty, Ongoing, PrepareCommit, PrepareAbort, CompleteCommit, CompleteAbort, Dead, PrepareEpochFence).
+}
+
+// ListedTransactions contains information from a list transactions response.
+type ListedTransactions map[string]ListedTransaction
+
+// Sorted returns all transactions sorted by transactional ID.
+func (ls ListedTransactions) Sorted() []ListedTransaction {
+ s := make([]ListedTransaction, 0, len(ls))
+ for _, l := range ls {
+ s = append(s, l)
+ }
+ sort.Slice(s, func(i, j int) bool { return s[i].TxnID < s[j].TxnID })
+ return s
+}
+
+// Each calls fn for each listed transaction.
+func (ls ListedTransactions) Each(fn func(ListedTransaction)) {
+ for _, l := range ls {
+ fn(l)
+ }
+}
+
+// TransactionalIDs returns a sorted list of all transactional IDs.
+func (ls ListedTransactions) TransactionalIDs() []string {
+ all := make([]string, 0, len(ls))
+ for t := range ls {
+ all = append(all, t)
+ }
+ sort.Strings(all)
+ return all
+}
+
+// ListTransactions returns all transactions and their states in the cluster.
+// Filter states can be used to return transactions only in the requested
+// states. By default, this returns all transactions you have DESCRIBE access
+// to. Producer IDs can be specified to filter for transactions from the given
+// producer.
+//
+// This may return *ShardErrors or *AuthError.
+func (cl *Client) ListTransactions(ctx context.Context, producerIDs []int64, filterStates []string) (ListedTransactions, error) {
+ req := kmsg.NewPtrListTransactionsRequest()
+ req.ProducerIDFilters = producerIDs
+ req.StateFilters = filterStates
+ shards := cl.cl.RequestSharded(ctx, req)
+ list := make(ListedTransactions)
+ return list, shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.ListTransactionsResponse)
+ if err := maybeAuthErr(resp.ErrorCode); err != nil {
+ return err
+ }
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return err
+ }
+ for _, t := range resp.TransactionStates {
+ list[t.TransactionalID] = ListedTransaction{ // txnID lives on one coordinator, no need to exist-check
+ Coordinator: b.NodeID,
+ TxnID: t.TransactionalID,
+ ProducerID: t.ProducerID,
+ State: t.TransactionState,
+ }
+ }
+ return nil
+ })
+}
+
+// TxnMarkers marks the end of a partition: the producer ID / epoch doing the
+// writing, whether this is a commit, the coordinator epoch of the broker we
+// are writing to (for fencing), and the topics and partitions that we are
+// writing this abort or commit for.
+//
+// This is a very low level admin request and should likely be built from data
+// in a DescribeProducers response. See KIP-664 if you are trying to use this.
+type TxnMarkers struct {
+ ProducerID int64 // ProducerID is the ID to write markers for.
+ ProducerEpoch int16 // ProducerEpoch is the epoch to write markers for.
+ Commit bool // Commit is true if we are committing, false if we are aborting.
+ CoordinatorEpoch int32 // CoordinatorEpoch is the epoch of the transactional coordinator we are writing to; this is used for fencing.
+ Topics TopicsSet // Topics are topics and partitions to write markers for.
+}
+
+// TxnMarkersPartitionResponse is a response to a topic's partition within a
+// single marker written.
+type TxnMarkersPartitionResponse struct {
+ NodeID int32 // NodeID is the node that this marker was written to.
+ ProducerID int64 // ProducerID corresponds to the PID in the write marker request.
+ Topic string // Topic is the topic being responded to.
+ Partition int32 // Partition is the partition being responded to.
+ Err error // Err is non-nil if the WriteTxnMarkers request for this pid/topic/partition failed.
+}
+
+// TxnMarkersPartitionResponses contains per-partition responses to a
+// WriteTxnMarkers request.
+type TxnMarkersPartitionResponses map[int32]TxnMarkersPartitionResponse
+
+// Sorted returns all partitions sorted by partition.
+func (ps TxnMarkersPartitionResponses) Sorted() []TxnMarkersPartitionResponse {
+ var all []TxnMarkersPartitionResponse
+ ps.Each(func(p TxnMarkersPartitionResponse) {
+ all = append(all, p)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Partition < r.Partition
+ })
+ return all
+}
+
+// Each calls fn for each partition.
+func (ps TxnMarkersPartitionResponses) Each(fn func(TxnMarkersPartitionResponse)) {
+ for _, p := range ps {
+ fn(p)
+ }
+}
+
+// TxnMarkersTopicResponse is a response to a topic within a single marker
+// written.
+type TxnMarkersTopicResponse struct {
+ ProducerID int64 // ProducerID corresponds to the PID in the write marker request.
+ Topic string // Topic is the topic being responded to.
+ Partitions TxnMarkersPartitionResponses // Partitions are the responses for partitions in this marker.
+}
+
+// TxnMarkersTopicResponses contains per-topic responses to a WriteTxnMarkers
+// request.
+type TxnMarkersTopicResponses map[string]TxnMarkersTopicResponse
+
+// Sorted returns all topics sorted by topic.
+func (ts TxnMarkersTopicResponses) Sorted() []TxnMarkersTopicResponse {
+ var all []TxnMarkersTopicResponse
+ ts.Each(func(t TxnMarkersTopicResponse) {
+ all = append(all, t)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic
+ })
+ return all
+}
+
+// SortedPartitions returns all topics sorted by topic then partition.
+func (ts TxnMarkersTopicResponses) SortedPartitions() []TxnMarkersPartitionResponse {
+ var all []TxnMarkersPartitionResponse
+ ts.EachPartition(func(p TxnMarkersPartitionResponse) {
+ all = append(all, p)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.Topic < r.Topic || l.Topic == r.Topic && l.Partition < r.Partition
+ })
+ return all
+}
+
+// Each calls fn for each topic.
+func (ts TxnMarkersTopicResponses) Each(fn func(TxnMarkersTopicResponse)) {
+ for _, t := range ts {
+ fn(t)
+ }
+}
+
+// EachPartition calls fn for every partition in all topics.
+func (ts TxnMarkersTopicResponses) EachPartition(fn func(TxnMarkersPartitionResponse)) {
+ for _, t := range ts {
+ for _, p := range t.Partitions {
+ fn(p)
+ }
+ }
+}
+
+// TxnMarkersResponse is a response for a single marker written.
+type TxnMarkersResponse struct {
+ ProducerID int64 // ProducerID corresponds to the PID in the write marker request.
+ Topics TxnMarkersTopicResponses // Topics contains the topics that markers were written for, for this ProducerID.
+}
+
+// TxnMarkersResponse contains per-partition-ID responses to a WriteTxnMarkers
+// request.
+type TxnMarkersResponses map[int64]TxnMarkersResponse
+
+// Sorted returns all markers sorted by producer ID.
+func (ms TxnMarkersResponses) Sorted() []TxnMarkersResponse {
+ var all []TxnMarkersResponse
+ ms.Each(func(m TxnMarkersResponse) {
+ all = append(all, m)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.ProducerID < r.ProducerID
+ })
+ return all
+}
+
+// SortedTopics returns all marker topics sorted by producer ID then topic.
+func (ms TxnMarkersResponses) SortedTopics() []TxnMarkersTopicResponse {
+ var all []TxnMarkersTopicResponse
+ ms.EachTopic(func(t TxnMarkersTopicResponse) {
+ all = append(all, t)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.ProducerID < r.ProducerID || l.ProducerID == r.ProducerID && l.Topic < r.Topic
+ })
+ return all
+}
+
+// SortedPartitions returns all marker topic partitions sorted by producer ID
+// then topic then partition.
+func (ms TxnMarkersResponses) SortedPartitions() []TxnMarkersPartitionResponse {
+ var all []TxnMarkersPartitionResponse
+ ms.EachPartition(func(p TxnMarkersPartitionResponse) {
+ all = append(all, p)
+ })
+ sort.Slice(all, func(i, j int) bool {
+ l, r := all[i], all[j]
+ return l.ProducerID < r.ProducerID || l.ProducerID == r.ProducerID && l.Topic < r.Topic || l.Topic == r.Topic && l.Partition < r.Partition
+ })
+ return all
+}
+
+// Each calls fn for each marker response.
+func (ms TxnMarkersResponses) Each(fn func(TxnMarkersResponse)) {
+ for _, m := range ms {
+ fn(m)
+ }
+}
+
+// EachTopic calls fn for every topic in all marker responses.
+func (ms TxnMarkersResponses) EachTopic(fn func(TxnMarkersTopicResponse)) {
+ for _, m := range ms {
+ for _, t := range m.Topics {
+ fn(t)
+ }
+ }
+}
+
+// EachPartition calls fn for every partition in all topics in all marker
+// responses.
+func (ms TxnMarkersResponses) EachPartition(fn func(TxnMarkersPartitionResponse)) {
+ for _, m := range ms {
+ for _, t := range m.Topics {
+ for _, p := range t.Partitions {
+ fn(p)
+ }
+ }
+ }
+}
+
+// WriteTxnMarkers writes transaction markers to brokers. This is an advanced
+// admin way to close out open transactions. See KIP-664 for more details.
+//
+// This may return *ShardErrors or *AuthError.
+func (cl *Client) WriteTxnMarkers(ctx context.Context, markers ...TxnMarkers) (TxnMarkersResponses, error) {
+ req := kmsg.NewPtrWriteTxnMarkersRequest()
+ for _, m := range markers {
+ rm := kmsg.NewWriteTxnMarkersRequestMarker()
+ rm.ProducerID = m.ProducerID
+ rm.ProducerEpoch = m.ProducerEpoch
+ rm.Committed = m.Commit
+ rm.CoordinatorEpoch = m.CoordinatorEpoch
+ for t, ps := range m.Topics {
+ rt := kmsg.NewWriteTxnMarkersRequestMarkerTopic()
+ rt.Topic = t
+ for p := range ps {
+ rt.Partitions = append(rt.Partitions, p)
+ }
+ rm.Topics = append(rm.Topics, rt)
+ }
+ req.Markers = append(req.Markers, rm)
+ }
+ shards := cl.cl.RequestSharded(ctx, req)
+ rs := make(TxnMarkersResponses)
+ return rs, shardErrEachBroker(req, shards, func(b BrokerDetail, kr kmsg.Response) error {
+ resp := kr.(*kmsg.WriteTxnMarkersResponse)
+ for _, rm := range resp.Markers {
+ m, exists := rs[rm.ProducerID] // partitions are spread around, our marker could be split: we need to check existence
+ if !exists {
+ m = TxnMarkersResponse{
+ ProducerID: rm.ProducerID,
+ Topics: make(TxnMarkersTopicResponses),
+ }
+ rs[rm.ProducerID] = m
+ }
+ for _, rt := range rm.Topics {
+ t, exists := m.Topics[rt.Topic]
+ if !exists { // same thought
+ t = TxnMarkersTopicResponse{
+ ProducerID: rm.ProducerID,
+ Topic: rt.Topic,
+ Partitions: make(TxnMarkersPartitionResponses),
+ }
+ m.Topics[rt.Topic] = t
+ }
+ for _, rp := range rt.Partitions {
+ if err := maybeAuthErr(rp.ErrorCode); err != nil {
+ return err
+ }
+ t.Partitions[rp.Partition] = TxnMarkersPartitionResponse{ // one partition globally, no need to exist-check
+ NodeID: b.NodeID,
+ ProducerID: rm.ProducerID,
+ Topic: rt.Topic,
+ Partition: rp.Partition,
+ Err: kerr.ErrorForCode(rp.ErrorCode),
+ }
+ }
+ }
+ }
+ return nil
+ })
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kbin/primitives.go b/vendor/github.com/twmb/franz-go/pkg/kbin/primitives.go
new file mode 100644
index 0000000000000..487e7f6c2a3ba
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kbin/primitives.go
@@ -0,0 +1,856 @@
+// Package kbin contains Kafka primitive reading and writing functions.
+package kbin
+
+import (
+ "encoding/binary"
+ "errors"
+ "math"
+ "math/bits"
+ "reflect"
+ "unsafe"
+)
+
+// This file contains primitive type encoding and decoding.
+//
+// The Reader helper can be used even when content runs out
+// or an error is hit; all other number requests will return
+// zero so a decode will basically no-op.
+
+// ErrNotEnoughData is returned when a type could not fully decode
+// from a slice because the slice did not have enough data.
+var ErrNotEnoughData = errors.New("response did not contain enough data to be valid")
+
+// AppendBool appends 1 for true or 0 for false to dst.
+func AppendBool(dst []byte, v bool) []byte {
+ if v {
+ return append(dst, 1)
+ }
+ return append(dst, 0)
+}
+
+// AppendInt8 appends an int8 to dst.
+func AppendInt8(dst []byte, i int8) []byte {
+ return append(dst, byte(i))
+}
+
+// AppendInt16 appends a big endian int16 to dst.
+func AppendInt16(dst []byte, i int16) []byte {
+ return AppendUint16(dst, uint16(i))
+}
+
+// AppendUint16 appends a big endian uint16 to dst.
+func AppendUint16(dst []byte, u uint16) []byte {
+ return append(dst, byte(u>>8), byte(u))
+}
+
+// AppendInt32 appends a big endian int32 to dst.
+func AppendInt32(dst []byte, i int32) []byte {
+ return AppendUint32(dst, uint32(i))
+}
+
+// AppendInt64 appends a big endian int64 to dst.
+func AppendInt64(dst []byte, i int64) []byte {
+ return appendUint64(dst, uint64(i))
+}
+
+// AppendFloat64 appends a big endian float64 to dst.
+func AppendFloat64(dst []byte, f float64) []byte {
+ return appendUint64(dst, math.Float64bits(f))
+}
+
+// AppendUuid appends the 16 uuid bytes to dst.
+func AppendUuid(dst []byte, uuid [16]byte) []byte {
+ return append(dst, uuid[:]...)
+}
+
+func appendUint64(dst []byte, u uint64) []byte {
+ return append(dst, byte(u>>56), byte(u>>48), byte(u>>40), byte(u>>32),
+ byte(u>>24), byte(u>>16), byte(u>>8), byte(u))
+}
+
+// AppendUint32 appends a big endian uint32 to dst.
+func AppendUint32(dst []byte, u uint32) []byte {
+ return append(dst, byte(u>>24), byte(u>>16), byte(u>>8), byte(u))
+}
+
+// uvarintLens could only be length 65, but using 256 allows bounds check
+// elimination on lookup.
+const uvarintLens = "\x01\x01\x01\x01\x01\x01\x01\x01\x02\x02\x02\x02\x02\x02\x02\x03\x03\x03\x03\x03\x03\x03\x04\x04\x04\x04\x04\x04\x04\x05\x05\x05\x05\x05\x05\x05\x06\x06\x06\x06\x06\x06\x06\x07\x07\x07\x07\x07\x07\x07\x08\x08\x08\x08\x08\x08\x08\x09\x09\x09\x09\x09\x09\x09\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
+
+// VarintLen returns how long i would be if it were varint encoded.
+func VarintLen(i int32) int {
+ u := uint32(i)<<1 ^ uint32(i>>31)
+ return UvarintLen(u)
+}
+
+// UvarintLen returns how long u would be if it were uvarint encoded.
+func UvarintLen(u uint32) int {
+ return int(uvarintLens[byte(bits.Len32(u))])
+}
+
+// VarlongLen returns how long i would be if it were varlong encoded.
+func VarlongLen(i int64) int {
+ u := uint64(i)<<1 ^ uint64(i>>63)
+ return uvarlongLen(u)
+}
+
+func uvarlongLen(u uint64) int {
+ return int(uvarintLens[byte(bits.Len64(u))])
+}
+
+// Varint is a loop unrolled 32 bit varint decoder. The return semantics
+// are the same as binary.Varint, with the added benefit that overflows
+// in 5 byte encodings are handled rather than left to the user.
+func Varint(in []byte) (int32, int) {
+ x, n := Uvarint(in)
+ return int32((x >> 1) ^ -(x & 1)), n
+}
+
+// Uvarint is a loop unrolled 32 bit uvarint decoder. The return semantics
+// are the same as binary.Uvarint, with the added benefit that overflows
+// in 5 byte encodings are handled rather than left to the user.
+func Uvarint(in []byte) (uint32, int) {
+ var x uint32
+ var overflow int
+
+ if len(in) < 1 {
+ goto fail
+ }
+
+ x = uint32(in[0] & 0x7f)
+ if in[0]&0x80 == 0 {
+ return x, 1
+ } else if len(in) < 2 {
+ goto fail
+ }
+
+ x |= uint32(in[1]&0x7f) << 7
+ if in[1]&0x80 == 0 {
+ return x, 2
+ } else if len(in) < 3 {
+ goto fail
+ }
+
+ x |= uint32(in[2]&0x7f) << 14
+ if in[2]&0x80 == 0 {
+ return x, 3
+ } else if len(in) < 4 {
+ goto fail
+ }
+
+ x |= uint32(in[3]&0x7f) << 21
+ if in[3]&0x80 == 0 {
+ return x, 4
+ } else if len(in) < 5 {
+ goto fail
+ }
+
+ x |= uint32(in[4]) << 28
+ if in[4] <= 0x0f {
+ return x, 5
+ }
+
+ overflow = -5
+
+fail:
+ return 0, overflow
+}
+
+// Varlong is a loop unrolled 64 bit varint decoder. The return semantics
+// are the same as binary.Varint, with the added benefit that overflows
+// in 10 byte encodings are handled rather than left to the user.
+func Varlong(in []byte) (int64, int) {
+ x, n := uvarlong(in)
+ return int64((x >> 1) ^ -(x & 1)), n
+}
+
+func uvarlong(in []byte) (uint64, int) {
+ var x uint64
+ var overflow int
+
+ if len(in) < 1 {
+ goto fail
+ }
+
+ x = uint64(in[0] & 0x7f)
+ if in[0]&0x80 == 0 {
+ return x, 1
+ } else if len(in) < 2 {
+ goto fail
+ }
+
+ x |= uint64(in[1]&0x7f) << 7
+ if in[1]&0x80 == 0 {
+ return x, 2
+ } else if len(in) < 3 {
+ goto fail
+ }
+
+ x |= uint64(in[2]&0x7f) << 14
+ if in[2]&0x80 == 0 {
+ return x, 3
+ } else if len(in) < 4 {
+ goto fail
+ }
+
+ x |= uint64(in[3]&0x7f) << 21
+ if in[3]&0x80 == 0 {
+ return x, 4
+ } else if len(in) < 5 {
+ goto fail
+ }
+
+ x |= uint64(in[4]&0x7f) << 28
+ if in[4]&0x80 == 0 {
+ return x, 5
+ } else if len(in) < 6 {
+ goto fail
+ }
+
+ x |= uint64(in[5]&0x7f) << 35
+ if in[5]&0x80 == 0 {
+ return x, 6
+ } else if len(in) < 7 {
+ goto fail
+ }
+
+ x |= uint64(in[6]&0x7f) << 42
+ if in[6]&0x80 == 0 {
+ return x, 7
+ } else if len(in) < 8 {
+ goto fail
+ }
+
+ x |= uint64(in[7]&0x7f) << 49
+ if in[7]&0x80 == 0 {
+ return x, 8
+ } else if len(in) < 9 {
+ goto fail
+ }
+
+ x |= uint64(in[8]&0x7f) << 56
+ if in[8]&0x80 == 0 {
+ return x, 9
+ } else if len(in) < 10 {
+ goto fail
+ }
+
+ x |= uint64(in[9]) << 63
+ if in[9] <= 0x01 {
+ return x, 10
+ }
+
+ overflow = -10
+
+fail:
+ return 0, overflow
+}
+
+// AppendVarint appends a varint encoded i to dst.
+func AppendVarint(dst []byte, i int32) []byte {
+ return AppendUvarint(dst, uint32(i)<<1^uint32(i>>31))
+}
+
+// AppendUvarint appends a uvarint encoded u to dst.
+func AppendUvarint(dst []byte, u uint32) []byte {
+ switch UvarintLen(u) {
+ case 5:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte(u>>28))
+ case 4:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte(u>>21))
+ case 3:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte(u>>14))
+ case 2:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte(u>>7))
+ case 1:
+ return append(dst, byte(u))
+ }
+ return dst
+}
+
+// AppendVarlong appends a varint encoded i to dst.
+func AppendVarlong(dst []byte, i int64) []byte {
+ return appendUvarlong(dst, uint64(i)<<1^uint64(i>>63))
+}
+
+func appendUvarlong(dst []byte, u uint64) []byte {
+ switch uvarlongLen(u) {
+ case 10:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte((u>>42)&0x7f|0x80),
+ byte((u>>49)&0x7f|0x80),
+ byte((u>>56)&0x7f|0x80),
+ byte(u>>63))
+ case 9:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte((u>>42)&0x7f|0x80),
+ byte((u>>49)&0x7f|0x80),
+ byte(u>>56))
+ case 8:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte((u>>42)&0x7f|0x80),
+ byte(u>>49))
+ case 7:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte(u>>42))
+ case 6:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte(u>>35))
+ case 5:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte(u>>28))
+ case 4:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte(u>>21))
+ case 3:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte(u>>14))
+ case 2:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte(u>>7))
+ case 1:
+ return append(dst, byte(u))
+ }
+ return dst
+}
+
+// AppendString appends a string to dst prefixed with its int16 length.
+func AppendString(dst []byte, s string) []byte {
+ dst = AppendInt16(dst, int16(len(s)))
+ return append(dst, s...)
+}
+
+// AppendCompactString appends a string to dst prefixed with its uvarint length
+// starting at 1; 0 is reserved for null, which compact strings are not
+// (nullable compact ones are!). Thus, the length is the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactString(dst []byte, s string) []byte {
+ dst = AppendUvarint(dst, 1+uint32(len(s)))
+ return append(dst, s...)
+}
+
+// AppendNullableString appends potentially nil string to dst prefixed with its
+// int16 length or int16(-1) if nil.
+func AppendNullableString(dst []byte, s *string) []byte {
+ if s == nil {
+ return AppendInt16(dst, -1)
+ }
+ return AppendString(dst, *s)
+}
+
+// AppendCompactNullableString appends a potentially nil string to dst with its
+// uvarint length starting at 1, with 0 indicating null. Thus, the length is
+// the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactNullableString(dst []byte, s *string) []byte {
+ if s == nil {
+ return AppendUvarint(dst, 0)
+ }
+ return AppendCompactString(dst, *s)
+}
+
+// AppendBytes appends bytes to dst prefixed with its int32 length.
+func AppendBytes(dst, b []byte) []byte {
+ dst = AppendInt32(dst, int32(len(b)))
+ return append(dst, b...)
+}
+
+// AppendCompactBytes appends bytes to dst prefixed with a its uvarint length
+// starting at 1; 0 is reserved for null, which compact bytes are not (nullable
+// compact ones are!). Thus, the length is the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactBytes(dst, b []byte) []byte {
+ dst = AppendUvarint(dst, 1+uint32(len(b)))
+ return append(dst, b...)
+}
+
+// AppendNullableBytes appends a potentially nil slice to dst prefixed with its
+// int32 length or int32(-1) if nil.
+func AppendNullableBytes(dst, b []byte) []byte {
+ if b == nil {
+ return AppendInt32(dst, -1)
+ }
+ return AppendBytes(dst, b)
+}
+
+// AppendCompactNullableBytes appends a potentially nil slice to dst with its
+// uvarint length starting at 1, with 0 indicating null. Thus, the length is
+// the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactNullableBytes(dst, b []byte) []byte {
+ if b == nil {
+ return AppendUvarint(dst, 0)
+ }
+ return AppendCompactBytes(dst, b)
+}
+
+// AppendVarintString appends a string to dst prefixed with its length encoded
+// as a varint.
+func AppendVarintString(dst []byte, s string) []byte {
+ dst = AppendVarint(dst, int32(len(s)))
+ return append(dst, s...)
+}
+
+// AppendVarintBytes appends a slice to dst prefixed with its length encoded as
+// a varint.
+func AppendVarintBytes(dst, b []byte) []byte {
+ if b == nil {
+ return AppendVarint(dst, -1)
+ }
+ dst = AppendVarint(dst, int32(len(b)))
+ return append(dst, b...)
+}
+
+// AppendArrayLen appends the length of an array as an int32 to dst.
+func AppendArrayLen(dst []byte, l int) []byte {
+ return AppendInt32(dst, int32(l))
+}
+
+// AppendCompactArrayLen appends the length of an array as a uvarint to dst
+// as the length + 1.
+//
+// For KIP-482.
+func AppendCompactArrayLen(dst []byte, l int) []byte {
+ return AppendUvarint(dst, 1+uint32(l))
+}
+
+// AppendNullableArrayLen appends the length of an array as an int32 to dst,
+// or -1 if isNil is true.
+func AppendNullableArrayLen(dst []byte, l int, isNil bool) []byte {
+ if isNil {
+ return AppendInt32(dst, -1)
+ }
+ return AppendInt32(dst, int32(l))
+}
+
+// AppendCompactNullableArrayLen appends the length of an array as a uvarint to
+// dst as the length + 1; if isNil is true, this appends 0 as a uvarint.
+//
+// For KIP-482.
+func AppendCompactNullableArrayLen(dst []byte, l int, isNil bool) []byte {
+ if isNil {
+ return AppendUvarint(dst, 0)
+ }
+ return AppendUvarint(dst, 1+uint32(l))
+}
+
+// Reader is used to decode Kafka messages.
+//
+// For all functions on Reader, if the reader has been invalidated, functions
+// return defaults (false, 0, nil, ""). Use Complete to detect if the reader
+// was invalidated or if the reader has remaining data.
+type Reader struct {
+ Src []byte
+ bad bool
+}
+
+// Bool returns a bool from the reader.
+func (b *Reader) Bool() bool {
+ if len(b.Src) < 1 {
+ b.bad = true
+ b.Src = nil
+ return false
+ }
+ t := b.Src[0] != 0 // if '0', false
+ b.Src = b.Src[1:]
+ return t
+}
+
+// Int8 returns an int8 from the reader.
+func (b *Reader) Int8() int8 {
+ if len(b.Src) < 1 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := b.Src[0]
+ b.Src = b.Src[1:]
+ return int8(r)
+}
+
+// Int16 returns an int16 from the reader.
+func (b *Reader) Int16() int16 {
+ if len(b.Src) < 2 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := int16(binary.BigEndian.Uint16(b.Src))
+ b.Src = b.Src[2:]
+ return r
+}
+
+// Uint16 returns an uint16 from the reader.
+func (b *Reader) Uint16() uint16 {
+ if len(b.Src) < 2 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := binary.BigEndian.Uint16(b.Src)
+ b.Src = b.Src[2:]
+ return r
+}
+
+// Int32 returns an int32 from the reader.
+func (b *Reader) Int32() int32 {
+ if len(b.Src) < 4 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := int32(binary.BigEndian.Uint32(b.Src))
+ b.Src = b.Src[4:]
+ return r
+}
+
+// Int64 returns an int64 from the reader.
+func (b *Reader) Int64() int64 {
+ return int64(b.readUint64())
+}
+
+// Uuid returns a uuid from the reader.
+func (b *Reader) Uuid() [16]byte {
+ var r [16]byte
+ copy(r[:], b.Span(16))
+ return r
+}
+
+// Float64 returns a float64 from the reader.
+func (b *Reader) Float64() float64 {
+ return math.Float64frombits(b.readUint64())
+}
+
+func (b *Reader) readUint64() uint64 {
+ if len(b.Src) < 8 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := binary.BigEndian.Uint64(b.Src)
+ b.Src = b.Src[8:]
+ return r
+}
+
+// Uint32 returns a uint32 from the reader.
+func (b *Reader) Uint32() uint32 {
+ if len(b.Src) < 4 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := binary.BigEndian.Uint32(b.Src)
+ b.Src = b.Src[4:]
+ return r
+}
+
+// Varint returns a varint int32 from the reader.
+func (b *Reader) Varint() int32 {
+ val, n := Varint(b.Src)
+ if n <= 0 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ b.Src = b.Src[n:]
+ return val
+}
+
+// Varlong returns a varlong int64 from the reader.
+func (b *Reader) Varlong() int64 {
+ val, n := Varlong(b.Src)
+ if n <= 0 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ b.Src = b.Src[n:]
+ return val
+}
+
+// Uvarint returns a uvarint encoded uint32 from the reader.
+func (b *Reader) Uvarint() uint32 {
+ val, n := Uvarint(b.Src)
+ if n <= 0 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ b.Src = b.Src[n:]
+ return val
+}
+
+// Span returns l bytes from the reader.
+func (b *Reader) Span(l int) []byte {
+ if len(b.Src) < l || l < 0 {
+ b.bad = true
+ b.Src = nil
+ return nil
+ }
+ r := b.Src[:l:l]
+ b.Src = b.Src[l:]
+ return r
+}
+
+// UnsafeString returns a Kafka string from the reader without allocating using
+// the unsafe package. This must be used with care; note the string holds a
+// reference to the original slice.
+func (b *Reader) UnsafeString() string {
+ l := b.Int16()
+ return UnsafeString(b.Span(int(l)))
+}
+
+// String returns a Kafka string from the reader.
+func (b *Reader) String() string {
+ l := b.Int16()
+ return string(b.Span(int(l)))
+}
+
+// UnsafeCompactString returns a Kafka compact string from the reader without
+// allocating using the unsafe package. This must be used with care; note the
+// string holds a reference to the original slice.
+func (b *Reader) UnsafeCompactString() string {
+ l := int(b.Uvarint()) - 1
+ return UnsafeString(b.Span(l))
+}
+
+// CompactString returns a Kafka compact string from the reader.
+func (b *Reader) CompactString() string {
+ l := int(b.Uvarint()) - 1
+ return string(b.Span(l))
+}
+
+// UnsafeNullableString returns a Kafka nullable string from the reader without
+// allocating using the unsafe package. This must be used with care; note the
+// string holds a reference to the original slice.
+func (b *Reader) UnsafeNullableString() *string {
+ l := b.Int16()
+ if l < 0 {
+ return nil
+ }
+ s := UnsafeString(b.Span(int(l)))
+ return &s
+}
+
+// NullableString returns a Kafka nullable string from the reader.
+func (b *Reader) NullableString() *string {
+ l := b.Int16()
+ if l < 0 {
+ return nil
+ }
+ s := string(b.Span(int(l)))
+ return &s
+}
+
+// UnsafeCompactNullableString returns a Kafka compact nullable string from the
+// reader without allocating using the unsafe package. This must be used with
+// care; note the string holds a reference to the original slice.
+func (b *Reader) UnsafeCompactNullableString() *string {
+ l := int(b.Uvarint()) - 1
+ if l < 0 {
+ return nil
+ }
+ s := UnsafeString(b.Span(l))
+ return &s
+}
+
+// CompactNullableString returns a Kafka compact nullable string from the
+// reader.
+func (b *Reader) CompactNullableString() *string {
+ l := int(b.Uvarint()) - 1
+ if l < 0 {
+ return nil
+ }
+ s := string(b.Span(l))
+ return &s
+}
+
+// Bytes returns a Kafka byte array from the reader.
+//
+// This never returns nil.
+func (b *Reader) Bytes() []byte {
+ l := b.Int32()
+ // This is not to spec, but it is not clearly documented and Microsoft
+ // EventHubs fails here. -1 means null, which should throw an
+ // exception. EventHubs uses -1 to mean "does not exist" on some
+ // non-nullable fields.
+ //
+ // Until EventHubs is fixed, we return an empty byte slice for null.
+ if l == -1 {
+ return []byte{}
+ }
+ return b.Span(int(l))
+}
+
+// CompactBytes returns a Kafka compact byte array from the reader.
+//
+// This never returns nil.
+func (b *Reader) CompactBytes() []byte {
+ l := int(b.Uvarint()) - 1
+ if l == -1 { // same as above: -1 should not be allowed here
+ return []byte{}
+ }
+ return b.Span(l)
+}
+
+// NullableBytes returns a Kafka nullable byte array from the reader, returning
+// nil as appropriate.
+func (b *Reader) NullableBytes() []byte {
+ l := b.Int32()
+ if l < 0 {
+ return nil
+ }
+ r := b.Span(int(l))
+ return r
+}
+
+// CompactNullableBytes returns a Kafka compact nullable byte array from the
+// reader, returning nil as appropriate.
+func (b *Reader) CompactNullableBytes() []byte {
+ l := int(b.Uvarint()) - 1
+ if l < 0 {
+ return nil
+ }
+ r := b.Span(l)
+ return r
+}
+
+// ArrayLen returns a Kafka array length from the reader.
+func (b *Reader) ArrayLen() int32 {
+ r := b.Int32()
+ // The min size of a Kafka type is a byte, so if we do not have
+ // at least the array length of bytes left, it is bad.
+ if len(b.Src) < int(r) {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ return r
+}
+
+// VarintArrayLen returns a Kafka array length from the reader.
+func (b *Reader) VarintArrayLen() int32 {
+ r := b.Varint()
+ // The min size of a Kafka type is a byte, so if we do not have
+ // at least the array length of bytes left, it is bad.
+ if len(b.Src) < int(r) {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ return r
+}
+
+// CompactArrayLen returns a Kafka compact array length from the reader.
+func (b *Reader) CompactArrayLen() int32 {
+ r := int32(b.Uvarint()) - 1
+ // The min size of a Kafka type is a byte, so if we do not have
+ // at least the array length of bytes left, it is bad.
+ if len(b.Src) < int(r) {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ return r
+}
+
+// VarintBytes returns a Kafka encoded varint array from the reader, returning
+// nil as appropriate.
+func (b *Reader) VarintBytes() []byte {
+ l := b.Varint()
+ if l < 0 {
+ return nil
+ }
+ return b.Span(int(l))
+}
+
+// UnsafeVarintString returns a Kafka encoded varint string from the reader
+// without allocating using the unsafe package. This must be used with care;
+// note the string holds a reference to the original slice.
+func (b *Reader) UnsafeVarintString() string {
+ return UnsafeString(b.VarintBytes())
+}
+
+// VarintString returns a Kafka encoded varint string from the reader.
+func (b *Reader) VarintString() string {
+ return string(b.VarintBytes())
+}
+
+// Complete returns ErrNotEnoughData if the source ran out while decoding.
+func (b *Reader) Complete() error {
+ if b.bad {
+ return ErrNotEnoughData
+ }
+ return nil
+}
+
+// Ok returns true if the reader is still ok.
+func (b *Reader) Ok() bool {
+ return !b.bad
+}
+
+// UnsafeString returns the slice as a string using unsafe rule (6).
+func UnsafeString(slice []byte) string {
+ var str string
+ strhdr := (*reflect.StringHeader)(unsafe.Pointer(&str)) //nolint:gosec // known way to convert slice to string
+ strhdr.Data = ((*reflect.SliceHeader)(unsafe.Pointer(&slice))).Data //nolint:gosec // known way to convert slice to string
+ strhdr.Len = len(slice)
+ return str
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kerr/kerr.go b/vendor/github.com/twmb/franz-go/pkg/kerr/kerr.go
new file mode 100644
index 0000000000000..731a23a1975ac
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kerr/kerr.go
@@ -0,0 +1,315 @@
+// Package kerr contains Kafka errors.
+//
+// The errors are undocumented to avoid duplicating the official descriptions
+// that can be found at https://kafka.apache.org/protocol.html#protocol_error_codes (although,
+// this code does duplicate the descriptions into the errors themselves, so the
+// descriptions can be seen as the documentation).
+//
+// Since this package is dedicated to errors and the package is named "kerr",
+// all errors elide the standard "Err" prefix.
+package kerr
+
+import (
+ "errors"
+ "fmt"
+)
+
+// Error is a Kafka error.
+type Error struct {
+ // Message is the string form of a Kafka error code
+ // (UNKNOWN_SERVER_ERROR, etc).
+ Message string
+ // Code is a Kafka error code.
+ Code int16
+ // Retriable is whether the error is considered retriable by Kafka.
+ Retriable bool
+ // Description is a succinct description of what this error means.
+ Description string
+}
+
+func (e *Error) Error() string {
+ return fmt.Sprintf("%s: %s", e.Message, e.Description)
+}
+
+// ErrorForCode returns the error corresponding to the given error code.
+//
+// If the code is unknown, this returns UnknownServerError.
+// If the code is 0, this returns nil.
+func ErrorForCode(code int16) error {
+ err, exists := code2err[code]
+ if !exists {
+ return UnknownServerError
+ }
+ return err
+}
+
+// TypedErrorForCode returns the kerr.Error corresponding to the given error
+// code.
+//
+// If the code is unknown, this returns UnknownServerError.
+// If the code is 0, this returns nil.
+//
+// Note that this function is provided as a simplicity function for code that
+// needs to work with the *Error only, but this function comes with caveats.
+// Because this can return a typed nil, passing the return of this to a
+// function that accepts an error (the Go error interface), the return from
+// this will never be considered a nil error. Instead, it will be an error with
+// a nil internal value.
+func TypedErrorForCode(code int16) *Error {
+ err, exists := code2err[code]
+ if !exists {
+ return UnknownServerError
+ }
+ if err == nil {
+ return nil
+ }
+ return err.(*Error)
+}
+
+// IsRetriable returns whether a Kafka error is considered retriable.
+func IsRetriable(err error) bool {
+ var kerr *Error
+ return errors.As(err, &kerr) && kerr.Retriable
+}
+
+var (
+ UnknownServerError = &Error{"UNKNOWN_SERVER_ERROR", -1, false, "The server experienced an unexpected error when processing the request."}
+ OffsetOutOfRange = &Error{"OFFSET_OUT_OF_RANGE", 1, false, "The requested offset is not within the range of offsets maintained by the server."}
+ CorruptMessage = &Error{"CORRUPT_MESSAGE", 2, true, "This message has failed its CRC checksum, exceeds the valid size, has a null key for a compacted topic, or is otherwise corrupt."}
+ UnknownTopicOrPartition = &Error{"UNKNOWN_TOPIC_OR_PARTITION", 3, true, "This server does not host this topic-partition."}
+ InvalidFetchSize = &Error{"INVALID_FETCH_SIZE", 4, false, "The requested fetch size is invalid."}
+ LeaderNotAvailable = &Error{"LEADER_NOT_AVAILABLE", 5, true, "There is no leader for this topic-partition as we are in the middle of a leadership election."}
+ NotLeaderForPartition = &Error{"NOT_LEADER_FOR_PARTITION", 6, true, "This server is not the leader for that topic-partition."}
+ RequestTimedOut = &Error{"REQUEST_TIMED_OUT", 7, true, "The request timed out."}
+ BrokerNotAvailable = &Error{"BROKER_NOT_AVAILABLE", 8, true, "The broker is not available."}
+ ReplicaNotAvailable = &Error{"REPLICA_NOT_AVAILABLE", 9, true, "The replica is not available for the requested topic-partition."}
+ MessageTooLarge = &Error{"MESSAGE_TOO_LARGE", 10, false, "The request included a message larger than the max message size the server will accept."}
+ StaleControllerEpoch = &Error{"STALE_CONTROLLER_EPOCH", 11, false, "The controller moved to another broker."}
+ OffsetMetadataTooLarge = &Error{"OFFSET_METADATA_TOO_LARGE", 12, false, "The metadata field of the offset request was too large."}
+ NetworkException = &Error{"NETWORK_EXCEPTION", 13, true, "The server disconnected before a response was received."}
+ CoordinatorLoadInProgress = &Error{"COORDINATOR_LOAD_IN_PROGRESS", 14, true, "The coordinator is loading and hence can't process requests."}
+ CoordinatorNotAvailable = &Error{"COORDINATOR_NOT_AVAILABLE", 15, true, "The coordinator is not available."}
+ NotCoordinator = &Error{"NOT_COORDINATOR", 16, true, "This is not the correct coordinator."}
+ InvalidTopicException = &Error{"INVALID_TOPIC_EXCEPTION", 17, false, "The request attempted to perform an operation on an invalid topic."}
+ RecordListTooLarge = &Error{"RECORD_LIST_TOO_LARGE", 18, false, "The request included message batch larger than the configured segment size on the server."}
+ NotEnoughReplicas = &Error{"NOT_ENOUGH_REPLICAS", 19, true, "Messages are rejected since there are fewer in-sync replicas than required."}
+ NotEnoughReplicasAfterAppend = &Error{"NOT_ENOUGH_REPLICAS_AFTER_APPEND", 20, true, "Messages are written to the log, but to fewer in-sync replicas than required."}
+ InvalidRequiredAcks = &Error{"INVALID_REQUIRED_ACKS", 21, false, "Produce request specified an invalid value for required acks."}
+ IllegalGeneration = &Error{"ILLEGAL_GENERATION", 22, false, "Specified group generation id is not valid."}
+ InconsistentGroupProtocol = &Error{"INCONSISTENT_GROUP_PROTOCOL", 23, false, "The group member's supported protocols are incompatible with those of existing members or first group member tried to join with empty protocol type or empty protocol list."}
+ InvalidGroupID = &Error{"INVALID_GROUP_ID", 24, false, "The configured groupID is invalid."}
+ UnknownMemberID = &Error{"UNKNOWN_MEMBER_ID", 25, false, "The coordinator is not aware of this member."}
+ InvalidSessionTimeout = &Error{"INVALID_SESSION_TIMEOUT", 26, false, "The session timeout is not within the range allowed by the broker (as configured by group.min.session.timeout.ms and group.max.session.timeout.ms)."}
+ RebalanceInProgress = &Error{"REBALANCE_IN_PROGRESS", 27, false, "The group is rebalancing, so a rejoin is needed."}
+ InvalidCommitOffsetSize = &Error{"INVALID_COMMIT_OFFSET_SIZE", 28, false, "The committing offset data size is not valid."}
+ TopicAuthorizationFailed = &Error{"TOPIC_AUTHORIZATION_FAILED", 29, false, "Not authorized to access topics: [Topic authorization failed.]"}
+ GroupAuthorizationFailed = &Error{"GROUP_AUTHORIZATION_FAILED", 30, false, "Not authorized to access group: Group authorization failed."}
+ ClusterAuthorizationFailed = &Error{"CLUSTER_AUTHORIZATION_FAILED", 31, false, "Cluster authorization failed."}
+ InvalidTimestamp = &Error{"INVALID_TIMESTAMP", 32, false, "The timestamp of the message is out of acceptable range."}
+ UnsupportedSaslMechanism = &Error{"UNSUPPORTED_SASL_MECHANISM", 33, false, "The broker does not support the requested SASL mechanism."}
+ IllegalSaslState = &Error{"ILLEGAL_SASL_STATE", 34, false, "Request is not valid given the current SASL state."}
+ UnsupportedVersion = &Error{"UNSUPPORTED_VERSION", 35, false, "The version of API is not supported."}
+ TopicAlreadyExists = &Error{"TOPIC_ALREADY_EXISTS", 36, false, "Topic with this name already exists."}
+ InvalidPartitions = &Error{"INVALID_PARTITIONS", 37, false, "Number of partitions is below 1."}
+ InvalidReplicationFactor = &Error{"INVALID_REPLICATION_FACTOR", 38, false, "Replication factor is below 1 or larger than the number of available brokers."}
+ InvalidReplicaAssignment = &Error{"INVALID_REPLICA_ASSIGNMENT", 39, false, "Replica assignment is invalid."}
+ InvalidConfig = &Error{"INVALID_CONFIG", 40, false, "Configuration is invalid."}
+ NotController = &Error{"NOT_CONTROLLER", 41, true, "This is not the correct controller for this cluster."}
+ InvalidRequest = &Error{"INVALID_REQUEST", 42, false, "This most likely occurs because of a request being malformed by the client library or the message was sent to an incompatible broker. See the broker logs for more details."}
+ UnsupportedForMessageFormat = &Error{"UNSUPPORTED_FOR_MESSAGE_FORMAT", 43, false, "The message format version on the broker does not support the request."}
+ PolicyViolation = &Error{"POLICY_VIOLATION", 44, false, "Request parameters do not satisfy the configured policy."}
+ OutOfOrderSequenceNumber = &Error{"OUT_OF_ORDER_SEQUENCE_NUMBER", 45, false, "The broker received an out of order sequence number."}
+ DuplicateSequenceNumber = &Error{"DUPLICATE_SEQUENCE_NUMBER", 46, false, "The broker received a duplicate sequence number."}
+ InvalidProducerEpoch = &Error{"INVALID_PRODUCER_EPOCH", 47, false, "Producer attempted an operation with an old epoch."}
+ InvalidTxnState = &Error{"INVALID_TXN_STATE", 48, false, "The producer attempted a transactional operation in an invalid state."}
+ InvalidProducerIDMapping = &Error{"INVALID_PRODUCER_ID_MAPPING", 49, false, "The producer attempted to use a producer id which is not currently assigned to its transactional id."}
+ InvalidTransactionTimeout = &Error{"INVALID_TRANSACTION_TIMEOUT", 50, false, "The transaction timeout is larger than the maximum value allowed by the broker (as configured by transaction.max.timeout.ms)."}
+ ConcurrentTransactions = &Error{"CONCURRENT_TRANSACTIONS", 51, false, "The producer attempted to update a transaction while another concurrent operation on the same transaction was ongoing."}
+ TransactionCoordinatorFenced = &Error{"TRANSACTION_COORDINATOR_FENCED", 52, false, "Indicates that the transaction coordinator sending a WriteTxnMarker is no longer the current coordinator for a given producer."}
+ TransactionalIDAuthorizationFailed = &Error{"TRANSACTIONAL_ID_AUTHORIZATION_FAILED", 53, false, "Transactional ID authorization failed."}
+ SecurityDisabled = &Error{"SECURITY_DISABLED", 54, false, "Security features are disabled."}
+ OperationNotAttempted = &Error{"OPERATION_NOT_ATTEMPTED", 55, false, "The broker did not attempt to execute this operation. This may happen for batched RPCs where some operations in the batch failed, causing the broker to respond without trying the rest."}
+ KafkaStorageError = &Error{"KAFKA_STORAGE_ERROR", 56, true, "Disk error when trying to access log file on the disk."}
+ LogDirNotFound = &Error{"LOG_DIR_NOT_FOUND", 57, false, "The user-specified log directory is not found in the broker config."}
+ SaslAuthenticationFailed = &Error{"SASL_AUTHENTICATION_FAILED", 58, false, "SASL Authentication failed."}
+ UnknownProducerID = &Error{"UNKNOWN_PRODUCER_ID", 59, false, "This exception is raised by the broker if it could not locate the producer metadata associated with the producerID in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerID are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception."}
+ ReassignmentInProgress = &Error{"REASSIGNMENT_IN_PROGRESS", 60, false, "A partition reassignment is in progress."}
+ DelegationTokenAuthDisabled = &Error{"DELEGATION_TOKEN_AUTH_DISABLED", 61, false, "Delegation Token feature is not enabled."}
+ DelegationTokenNotFound = &Error{"DELEGATION_TOKEN_NOT_FOUND", 62, false, "Delegation Token is not found on server."}
+ DelegationTokenOwnerMismatch = &Error{"DELEGATION_TOKEN_OWNER_MISMATCH", 63, false, "Specified Principal is not valid Owner/Renewer."}
+ DelegationTokenRequestNotAllowed = &Error{"DELEGATION_TOKEN_REQUEST_NOT_ALLOWED", 64, false, "Delegation Token requests are not allowed on PLAINTEXT/1-way SSL channels and on delegation token authenticated channels."}
+ DelegationTokenAuthorizationFailed = &Error{"DELEGATION_TOKEN_AUTHORIZATION_FAILED", 65, false, "Delegation Token authorization failed."}
+ DelegationTokenExpired = &Error{"DELEGATION_TOKEN_EXPIRED", 66, false, "Delegation Token is expired."}
+ InvalidPrincipalType = &Error{"INVALID_PRINCIPAL_TYPE", 67, false, "Supplied principalType is not supported."}
+ NonEmptyGroup = &Error{"NON_EMPTY_GROUP", 68, false, "The group is not empty."}
+ GroupIDNotFound = &Error{"GROUP_ID_NOT_FOUND", 69, false, "The group id does not exist."}
+ FetchSessionIDNotFound = &Error{"FETCH_SESSION_ID_NOT_FOUND", 70, true, "The fetch session ID was not found."}
+ InvalidFetchSessionEpoch = &Error{"INVALID_FETCH_SESSION_EPOCH", 71, true, "The fetch session epoch is invalid."}
+ ListenerNotFound = &Error{"LISTENER_NOT_FOUND", 72, true, "There is no listener on the leader broker that matches the listener on which metadata request was processed."}
+ TopicDeletionDisabled = &Error{"TOPIC_DELETION_DISABLED", 73, false, "Topic deletion is disabled."}
+ FencedLeaderEpoch = &Error{"FENCED_LEADER_EPOCH", 74, true, "The leader epoch in the request is older than the epoch on the broker"}
+ UnknownLeaderEpoch = &Error{"UNKNOWN_LEADER_EPOCH", 75, true, "The leader epoch in the request is newer than the epoch on the broker"}
+ UnsupportedCompressionType = &Error{"UNSUPPORTED_COMPRESSION_TYPE", 76, false, "The requesting client does not support the compression type of given partition."}
+ StaleBrokerEpoch = &Error{"STALE_BROKER_EPOCH", 77, false, "Broker epoch has changed"}
+ OffsetNotAvailable = &Error{"OFFSET_NOT_AVAILABLE", 78, true, "The leader high watermark has not caught up from a recent leader election so the offsets cannot be guaranteed to be monotonically increasing"}
+ MemberIDRequired = &Error{"MEMBER_ID_REQUIRED", 79, false, "The group member needs to have a valid member id before actually entering a consumer group"}
+ PreferredLeaderNotAvailable = &Error{"PREFERRED_LEADER_NOT_AVAILABLE", 80, true, "The preferred leader was not available"}
+ GroupMaxSizeReached = &Error{"GROUP_MAX_SIZE_REACHED", 81, false, "The consumer group has reached its max size"}
+ FencedInstanceID = &Error{"FENCED_INSTANCE_ID", 82, false, "The broker rejected this static consumer since another consumer with the same group.instance.id has registered with a different member.id."}
+ EligibleLeadersNotAvailable = &Error{"ELIGIBLE_LEADERS_NOT_AVAILABLE", 83, true, "Eligible topic partition leaders are not available"}
+ ElectionNotNeeded = &Error{"ELECTION_NOT_NEEDED", 84, true, "Leader election not needed for topic partition"}
+ NoReassignmentInProgress = &Error{"NO_REASSIGNMENT_IN_PROGRESS", 85, false, "No partition reassignment is in progress."}
+ GroupSubscribedToTopic = &Error{"GROUP_SUBSCRIBED_TO_TOPIC", 86, false, "Deleting offsets of a topic is forbidden while the consumer group is actively subscribed to it."}
+ InvalidRecord = &Error{"INVALID_RECORD", 87, false, "This record has failed the validation on broker and hence be rejected."}
+ UnstableOffsetCommit = &Error{"UNSTABLE_OFFSET_COMMIT", 88, true, "There are unstable offsets that need to be cleared."}
+ ThrottlingQuotaExceeded = &Error{"THROTTLING_QUOTA_EXCEEDED", 89, true, "The throttling quota has been exceeded."}
+ ProducerFenced = &Error{"PRODUCER_FENCED", 90, false, "There is a newer producer with the same transactionalId which fences the current one."}
+ ResourceNotFound = &Error{"RESOURCE_NOT_FOUND", 91, false, "A request illegally referred to a resource that does not exist."}
+ DuplicateResource = &Error{"DUPLICATE_RESOURCE", 92, false, "A request illegally referred to the same resource twice."}
+ UnacceptableCredential = &Error{"UNACCEPTABLE_CREDENTIAL", 93, false, "Requested credential would not meet criteria for acceptability."}
+ InconsistentVoterSet = &Error{"INCONSISTENT_VOTER_SET", 94, false, "Indicates that either the sender or recipient of a voter-only request is not one of the expected voters."}
+ InvalidUpdateVersion = &Error{"INVALID_UPDATE_VERSION", 95, false, "The given update version was invalid."}
+ FeatureUpdateFailed = &Error{"FEATURE_UPDATE_FAILED", 96, false, "Unable to update finalized features due to an unexpected server error."}
+ PrincipalDeserializationFailure = &Error{"PRINCIPAL_DESERIALIZATION_FAILURE", 97, false, "Request principal deserialization failed during forwarding. This indicates an internal error on the broker cluster security setup."}
+ SnapshotNotFound = &Error{"SNAPSHOT_NOT_FOUND", 98, false, "Requested snapshot was not found."}
+ PositionOutOfRange = &Error{"POSITION_OUT_OF_RANGE", 99, false, "Requested position is not greater than or equal to zero, and less than the size of the snapshot."}
+ UnknownTopicID = &Error{"UNKNOWN_TOPIC_ID", 100, true, "This server does not host this topic ID."}
+ DuplicateBrokerRegistration = &Error{"DUPLICATE_BROKER_REGISTRATION", 101, false, "This broker ID is already in use."}
+ BrokerIDNotRegistered = &Error{"BROKER_ID_NOT_REGISTERED", 102, false, "The given broker ID was not registered."}
+ InconsistentTopicID = &Error{"INCONSISTENT_TOPIC_ID", 103, true, "The log's topic ID did not match the topic ID in the request."}
+ InconsistentClusterID = &Error{"INCONSISTENT_CLUSTER_ID", 104, false, "The clusterId in the request does not match that found on the server."}
+ TransactionalIDNotFound = &Error{"TRANSACTIONAL_ID_NOT_FOUND", 105, false, "The transactionalId could not be found."}
+ FetchSessionTopicIDError = &Error{"FETCH_SESSION_TOPIC_ID_ERROR", 106, true, "The fetch session encountered inconsistent topic ID usage."}
+ IneligibleReplica = &Error{"INELIGIBLE_REPLICA", 107, false, "The new ISR contains at least one ineligible replica."}
+ NewLeaderElected = &Error{"NEW_LEADER_ELECTED", 108, false, "The AlterPartition request successfully updated the partition state but the leader has changed."}
+ OffsetMovedToTieredStorage = &Error{"OFFSET_MOVED_TO_TIERED_STORAGE", 109, false, "The requested offset is moved to tiered storage."}
+ FencedMemberEpoch = &Error{"FENCED_MEMBER_EPOCH", 110, false, "The member epoch is fenced by the group coordinator. The member must abandon all its partitions and rejoin."}
+ UnreleasedInstanceID = &Error{"UNRELEASED_INSTANCE_ID", 111, false, "The instance ID is still used by another member in the consumer group. That member must leave first."}
+ UnsupportedAssignor = &Error{"UNSUPPORTED_ASSIGNOR", 112, false, "The assignor or its version range is not supported by the consumer group."}
+ StaleMemberEpoch = &Error{"STALE_MEMBER_EPOCH", 113, false, "The member epoch is stale. The member must retry after receiving its updated member epoch via the ConsumerGroupHeartbeat API."}
+ MismatchedEndpointType = &Error{"MISMATCHED_ENDPOINT_TYPE", 114, false, "The request was sent to an endpoint of the wrong type."}
+ UnsupportedEndpointType = &Error{"UNSUPPORTED_ENDPOINT_TYPE", 115, false, "This endpoint type is not supported yet."}
+ UnknownControllerID = &Error{"UNKNOWN_CONTROLLER_ID", 116, false, "This controller ID is not known"}
+)
+
+var code2err = map[int16]error{
+ -1: UnknownServerError,
+ 0: nil,
+ 1: OffsetOutOfRange,
+ 2: CorruptMessage,
+ 3: UnknownTopicOrPartition,
+ 4: InvalidFetchSize,
+ 5: LeaderNotAvailable,
+ 6: NotLeaderForPartition,
+ 7: RequestTimedOut,
+ 8: BrokerNotAvailable,
+ 9: ReplicaNotAvailable,
+ 10: MessageTooLarge,
+ 11: StaleControllerEpoch,
+ 12: OffsetMetadataTooLarge,
+ 13: NetworkException,
+ 14: CoordinatorLoadInProgress,
+ 15: CoordinatorNotAvailable,
+ 16: NotCoordinator,
+ 17: InvalidTopicException,
+ 18: RecordListTooLarge,
+ 19: NotEnoughReplicas,
+ 20: NotEnoughReplicasAfterAppend,
+ 21: InvalidRequiredAcks,
+ 22: IllegalGeneration,
+ 23: InconsistentGroupProtocol,
+ 24: InvalidGroupID,
+ 25: UnknownMemberID,
+ 26: InvalidSessionTimeout,
+ 27: RebalanceInProgress,
+ 28: InvalidCommitOffsetSize,
+ 29: TopicAuthorizationFailed,
+ 30: GroupAuthorizationFailed,
+ 31: ClusterAuthorizationFailed,
+ 32: InvalidTimestamp,
+ 33: UnsupportedSaslMechanism,
+ 34: IllegalSaslState,
+ 35: UnsupportedVersion,
+ 36: TopicAlreadyExists,
+ 37: InvalidPartitions,
+ 38: InvalidReplicationFactor,
+ 39: InvalidReplicaAssignment,
+ 40: InvalidConfig,
+ 41: NotController,
+ 42: InvalidRequest,
+ 43: UnsupportedForMessageFormat,
+ 44: PolicyViolation,
+ 45: OutOfOrderSequenceNumber,
+ 46: DuplicateSequenceNumber,
+ 47: InvalidProducerEpoch,
+ 48: InvalidTxnState,
+ 49: InvalidProducerIDMapping,
+ 50: InvalidTransactionTimeout,
+ 51: ConcurrentTransactions,
+ 52: TransactionCoordinatorFenced,
+ 53: TransactionalIDAuthorizationFailed,
+ 54: SecurityDisabled,
+ 55: OperationNotAttempted,
+ 56: KafkaStorageError,
+ 57: LogDirNotFound,
+ 58: SaslAuthenticationFailed,
+ 59: UnknownProducerID,
+ 60: ReassignmentInProgress,
+ 61: DelegationTokenAuthDisabled,
+ 62: DelegationTokenNotFound,
+ 63: DelegationTokenOwnerMismatch,
+ 64: DelegationTokenRequestNotAllowed,
+ 65: DelegationTokenAuthorizationFailed,
+ 66: DelegationTokenExpired,
+ 67: InvalidPrincipalType,
+ 68: NonEmptyGroup,
+ 69: GroupIDNotFound,
+ 70: FetchSessionIDNotFound,
+ 71: InvalidFetchSessionEpoch,
+ 72: ListenerNotFound,
+ 73: TopicDeletionDisabled,
+ 74: FencedLeaderEpoch,
+ 75: UnknownLeaderEpoch,
+ 76: UnsupportedCompressionType,
+ 77: StaleBrokerEpoch,
+ 78: OffsetNotAvailable,
+ 79: MemberIDRequired,
+ 80: PreferredLeaderNotAvailable,
+ 81: GroupMaxSizeReached,
+ 82: FencedInstanceID,
+ 83: EligibleLeadersNotAvailable,
+ 84: ElectionNotNeeded,
+ 85: NoReassignmentInProgress,
+ 86: GroupSubscribedToTopic,
+ 87: InvalidRecord,
+ 88: UnstableOffsetCommit,
+ 89: ThrottlingQuotaExceeded,
+ 90: ProducerFenced,
+ 91: ResourceNotFound,
+ 92: DuplicateResource,
+ 93: UnacceptableCredential,
+ 94: InconsistentVoterSet,
+ 95: InvalidUpdateVersion,
+ 96: FeatureUpdateFailed,
+ 97: PrincipalDeserializationFailure,
+ 98: SnapshotNotFound,
+ 99: PositionOutOfRange,
+ 100: UnknownTopicID,
+ 101: DuplicateBrokerRegistration,
+ 102: BrokerIDNotRegistered,
+ 103: InconsistentTopicID,
+ 104: InconsistentClusterID,
+ 105: TransactionalIDNotFound,
+ 106: FetchSessionTopicIDError,
+ 107: IneligibleReplica,
+ 108: NewLeaderElected,
+ 109: OffsetMovedToTieredStorage, // KIP-405, v3.5
+ 110: FencedMemberEpoch, // KIP-848, released unstable in v3.6, stable in 3.7
+ 111: UnreleasedInstanceID, // ""
+ 112: UnsupportedAssignor, // ""
+ 113: StaleMemberEpoch, // ""
+ 114: MismatchedEndpointType, // KIP-919, v3.7
+ 115: UnsupportedEndpointType, // ""
+ 116: UnknownControllerID, // ""
+
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/00_produce.go b/vendor/github.com/twmb/franz-go/pkg/kfake/00_produce.go
new file mode 100644
index 0000000000000..9aeb7668130ad
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/00_produce.go
@@ -0,0 +1,180 @@
+package kfake
+
+import (
+ "hash/crc32"
+ "net"
+ "strconv"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TODO
+// * Leaders
+// * Support txns
+// * Multiple batches in one produce
+// * Compact
+
+func init() { regKey(0, 3, 10) }
+
+func (c *Cluster) handleProduce(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ var (
+ req = kreq.(*kmsg.ProduceRequest)
+ resp = req.ResponseKind().(*kmsg.ProduceResponse)
+ tdone = make(map[string][]kmsg.ProduceResponseTopicPartition)
+ )
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ donep := func(t string, p kmsg.ProduceRequestTopicPartition, errCode int16) *kmsg.ProduceResponseTopicPartition {
+ sp := kmsg.NewProduceResponseTopicPartition()
+ sp.Partition = p.Partition
+ sp.ErrorCode = errCode
+ ps := tdone[t]
+ ps = append(ps, sp)
+ tdone[t] = ps
+ return &ps[len(ps)-1]
+ }
+ donet := func(t kmsg.ProduceRequestTopic, errCode int16) {
+ for _, p := range t.Partitions {
+ donep(t.Topic, p, errCode)
+ }
+ }
+ donets := func(errCode int16) {
+ for _, t := range req.Topics {
+ donet(t, errCode)
+ }
+ }
+ var includeBrokers bool
+ toresp := func() kmsg.Response {
+ for topic, partitions := range tdone {
+ st := kmsg.NewProduceResponseTopic()
+ st.Topic = topic
+ st.Partitions = partitions
+ resp.Topics = append(resp.Topics, st)
+ }
+ if includeBrokers {
+ for _, b := range c.bs {
+ sb := kmsg.NewProduceResponseBroker()
+ h, p, _ := net.SplitHostPort(b.ln.Addr().String())
+ p32, _ := strconv.Atoi(p)
+ sb.NodeID = b.node
+ sb.Host = h
+ sb.Port = int32(p32)
+ resp.Brokers = append(resp.Brokers, sb)
+ }
+ }
+ return resp
+ }
+
+ if req.TransactionID != nil {
+ donets(kerr.TransactionalIDAuthorizationFailed.Code)
+ return toresp(), nil
+ }
+ switch req.Acks {
+ case -1, 0, 1:
+ default:
+ donets(kerr.InvalidRequiredAcks.Code)
+ return toresp(), nil
+ }
+
+ now := time.Now().UnixMilli()
+ for _, rt := range req.Topics {
+ for _, rp := range rt.Partitions {
+ pd, ok := c.data.tps.getp(rt.Topic, rp.Partition)
+ if !ok {
+ donep(rt.Topic, rp, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ if pd.leader != b {
+ p := donep(rt.Topic, rp, kerr.NotLeaderForPartition.Code)
+ p.CurrentLeader.LeaderID = pd.leader.node
+ p.CurrentLeader.LeaderEpoch = pd.epoch
+ includeBrokers = true
+ continue
+ }
+
+ var b kmsg.RecordBatch
+ if err := b.ReadFrom(rp.Records); err != nil {
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ if b.FirstOffset != 0 {
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ if int(b.Length) != len(rp.Records)-12 {
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ if b.PartitionLeaderEpoch != -1 {
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ if b.Magic != 2 {
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ if b.CRC != int32(crc32.Checksum(rp.Records[21:], crc32c)) { // crc starts at byte 21
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ attrs := uint16(b.Attributes)
+ if attrs&0x0007 > 4 {
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ logAppendTime := int64(-1)
+ if attrs&0x0008 > 0 {
+ b.FirstTimestamp = now
+ b.MaxTimestamp = now
+ logAppendTime = now
+ }
+ if attrs&0xfff0 != 0 { // TODO txn bit
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+ if b.LastOffsetDelta != b.NumRecords-1 {
+ donep(rt.Topic, rp, kerr.CorruptMessage.Code)
+ continue
+ }
+
+ seqs, epoch := c.pids.get(b.ProducerID, b.ProducerEpoch, rt.Topic, rp.Partition)
+ if be := b.ProducerEpoch; be != -1 {
+ if be < epoch {
+ donep(rt.Topic, rp, kerr.FencedLeaderEpoch.Code)
+ continue
+ } else if be > epoch {
+ donep(rt.Topic, rp, kerr.UnknownLeaderEpoch.Code)
+ continue
+ }
+ }
+ ok, dup := seqs.pushAndValidate(b.FirstSequence, b.NumRecords)
+ if !ok {
+ donep(rt.Topic, rp, kerr.OutOfOrderSequenceNumber.Code)
+ continue
+ }
+ if dup {
+ donep(rt.Topic, rp, 0)
+ continue
+ }
+ baseOffset := pd.highWatermark
+ lso := pd.logStartOffset
+ pd.pushBatch(len(rp.Records), b)
+ sp := donep(rt.Topic, rp, 0)
+ sp.BaseOffset = baseOffset
+ sp.LogAppendTime = logAppendTime
+ sp.LogStartOffset = lso
+ }
+ }
+
+ if req.Acks == 0 {
+ return nil, nil
+ }
+ return toresp(), nil
+}
+
+var crc32c = crc32.MakeTable(crc32.Castagnoli)
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/01_fetch.go b/vendor/github.com/twmb/franz-go/pkg/kfake/01_fetch.go
new file mode 100644
index 0000000000000..53f9a8e6ee960
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/01_fetch.go
@@ -0,0 +1,236 @@
+package kfake
+
+import (
+ "net"
+ "strconv"
+ "sync"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// Behavior:
+//
+// * If topic does not exist, we hang
+// * Topic created while waiting is not returned in final response
+// * If any partition is on a different broker, we return immediately
+// * Out of range fetch causes early return
+// * Raw bytes of batch counts against wait bytes
+
+func init() { regKey(1, 4, 16) }
+
+func (c *Cluster) handleFetch(creq *clientReq, w *watchFetch) (kmsg.Response, error) {
+ var (
+ req = creq.kreq.(*kmsg.FetchRequest)
+ resp = req.ResponseKind().(*kmsg.FetchResponse)
+ )
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ var (
+ nbytes int
+ returnEarly bool
+ needp tps[int]
+ )
+ if w == nil {
+ out:
+ for i, rt := range req.Topics {
+ if req.Version >= 13 {
+ rt.Topic = c.data.id2t[rt.TopicID]
+ req.Topics[i].Topic = rt.Topic
+ }
+ t, ok := c.data.tps.gett(rt.Topic)
+ if !ok {
+ continue
+ }
+ for _, rp := range rt.Partitions {
+ pd, ok := t[rp.Partition]
+ if !ok || pd.createdAt.After(creq.at) {
+ continue
+ }
+ if pd.leader != creq.cc.b {
+ returnEarly = true // NotLeaderForPartition
+ break out
+ }
+ i, ok, atEnd := pd.searchOffset(rp.FetchOffset)
+ if atEnd {
+ continue
+ }
+ if !ok {
+ returnEarly = true // OffsetOutOfRange
+ break out
+ }
+ pbytes := 0
+ for _, b := range pd.batches[i:] {
+ nbytes += b.nbytes
+ pbytes += b.nbytes
+ if pbytes >= int(rp.PartitionMaxBytes) {
+ returnEarly = true
+ break out
+ }
+ }
+ needp.set(rt.Topic, rp.Partition, int(rp.PartitionMaxBytes)-pbytes)
+ }
+ }
+ }
+
+ wait := time.Duration(req.MaxWaitMillis) * time.Millisecond
+ deadline := creq.at.Add(wait)
+ if w == nil && !returnEarly && nbytes < int(req.MinBytes) && time.Now().Before(deadline) {
+ w := &watchFetch{
+ need: int(req.MinBytes) - nbytes,
+ needp: needp,
+ deadline: deadline,
+ creq: creq,
+ }
+ w.cb = func() {
+ select {
+ case c.watchFetchCh <- w:
+ case <-c.die:
+ }
+ }
+ for _, rt := range req.Topics {
+ t, ok := c.data.tps.gett(rt.Topic)
+ if !ok {
+ continue
+ }
+ for _, rp := range rt.Partitions {
+ pd, ok := t[rp.Partition]
+ if !ok || pd.createdAt.After(creq.at) {
+ continue
+ }
+ pd.watch[w] = struct{}{}
+ w.in = append(w.in, pd)
+ }
+ }
+ w.t = time.AfterFunc(wait, w.cb)
+ return nil, nil
+ }
+
+ id2t := make(map[uuid]string)
+ tidx := make(map[string]int)
+
+ donet := func(t string, id uuid, errCode int16) *kmsg.FetchResponseTopic {
+ if i, ok := tidx[t]; ok {
+ return &resp.Topics[i]
+ }
+ id2t[id] = t
+ tidx[t] = len(resp.Topics)
+ st := kmsg.NewFetchResponseTopic()
+ st.Topic = t
+ st.TopicID = id
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donep := func(t string, id uuid, p int32, errCode int16) *kmsg.FetchResponseTopicPartition {
+ sp := kmsg.NewFetchResponseTopicPartition()
+ sp.Partition = p
+ sp.ErrorCode = errCode
+ st := donet(t, id, 0)
+ st.Partitions = append(st.Partitions, sp)
+ return &st.Partitions[len(st.Partitions)-1]
+ }
+
+ var includeBrokers bool
+ defer func() {
+ if includeBrokers {
+ for _, b := range c.bs {
+ sb := kmsg.NewFetchResponseBroker()
+ h, p, _ := net.SplitHostPort(b.ln.Addr().String())
+ p32, _ := strconv.Atoi(p)
+ sb.NodeID = b.node
+ sb.Host = h
+ sb.Port = int32(p32)
+ resp.Brokers = append(resp.Brokers, sb)
+ }
+ }
+ }()
+
+ var batchesAdded int
+full:
+ for _, rt := range req.Topics {
+ for _, rp := range rt.Partitions {
+ pd, ok := c.data.tps.getp(rt.Topic, rp.Partition)
+ if !ok {
+ if req.Version >= 13 {
+ donep(rt.Topic, rt.TopicID, rp.Partition, kerr.UnknownTopicID.Code)
+ } else {
+ donep(rt.Topic, rt.TopicID, rp.Partition, kerr.UnknownTopicOrPartition.Code)
+ }
+ continue
+ }
+ if pd.leader != creq.cc.b {
+ p := donep(rt.Topic, rt.TopicID, rp.Partition, kerr.NotLeaderForPartition.Code)
+ p.CurrentLeader.LeaderID = pd.leader.node
+ p.CurrentLeader.LeaderEpoch = pd.epoch
+ includeBrokers = true
+ continue
+ }
+ sp := donep(rt.Topic, rt.TopicID, rp.Partition, 0)
+ sp.HighWatermark = pd.highWatermark
+ sp.LastStableOffset = pd.lastStableOffset
+ sp.LogStartOffset = pd.logStartOffset
+ i, ok, atEnd := pd.searchOffset(rp.FetchOffset)
+ if atEnd {
+ continue
+ }
+ if !ok {
+ sp.ErrorCode = kerr.OffsetOutOfRange.Code
+ continue
+ }
+ var pbytes int
+ for _, b := range pd.batches[i:] {
+ if nbytes = nbytes + b.nbytes; nbytes > int(req.MaxBytes) && batchesAdded > 1 {
+ break full
+ }
+ if pbytes = pbytes + b.nbytes; pbytes > int(rp.PartitionMaxBytes) && batchesAdded > 1 {
+ break
+ }
+ batchesAdded++
+ sp.RecordBatches = b.AppendTo(sp.RecordBatches)
+ }
+ }
+ }
+
+ return resp, nil
+}
+
+type watchFetch struct {
+ need int
+ needp tps[int]
+ deadline time.Time
+ creq *clientReq
+
+ in []*partData
+ cb func()
+ t *time.Timer
+
+ once sync.Once
+ cleaned bool
+}
+
+func (w *watchFetch) push(nbytes int) {
+ w.need -= nbytes
+ if w.need <= 0 {
+ w.once.Do(func() {
+ go w.cb()
+ })
+ }
+}
+
+func (w *watchFetch) deleted() {
+ w.once.Do(func() {
+ go w.cb()
+ })
+}
+
+func (w *watchFetch) cleanup(c *Cluster) {
+ w.cleaned = true
+ for _, in := range w.in {
+ delete(in.watch, w)
+ }
+ w.t.Stop()
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/02_list_offsets.go b/vendor/github.com/twmb/franz-go/pkg/kfake/02_list_offsets.go
new file mode 100644
index 0000000000000..587c41a6c777d
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/02_list_offsets.go
@@ -0,0 +1,99 @@
+package kfake
+
+import (
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(2, 0, 7) }
+
+func (c *Cluster) handleListOffsets(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.ListOffsetsRequest)
+ resp := req.ResponseKind().(*kmsg.ListOffsetsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ tidx := make(map[string]int)
+ donet := func(t string, errCode int16) *kmsg.ListOffsetsResponseTopic {
+ if i, ok := tidx[t]; ok {
+ return &resp.Topics[i]
+ }
+ tidx[t] = len(resp.Topics)
+ st := kmsg.NewListOffsetsResponseTopic()
+ st.Topic = t
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donep := func(t string, p int32, errCode int16) *kmsg.ListOffsetsResponseTopicPartition {
+ sp := kmsg.NewListOffsetsResponseTopicPartition()
+ sp.Partition = p
+ sp.ErrorCode = errCode
+ st := donet(t, 0)
+ st.Partitions = append(st.Partitions, sp)
+ return &st.Partitions[len(st.Partitions)-1]
+ }
+
+ for _, rt := range req.Topics {
+ ps, ok := c.data.tps.gett(rt.Topic)
+ for _, rp := range rt.Partitions {
+ if !ok {
+ donep(rt.Topic, rp.Partition, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ pd, ok := ps[rp.Partition]
+ if !ok {
+ donep(rt.Topic, rp.Partition, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ if pd.leader != b {
+ donep(rt.Topic, rp.Partition, kerr.NotLeaderForPartition.Code)
+ continue
+ }
+ if le := rp.CurrentLeaderEpoch; le != -1 {
+ if le < pd.epoch {
+ donep(rt.Topic, rp.Partition, kerr.FencedLeaderEpoch.Code)
+ continue
+ } else if le > pd.epoch {
+ donep(rt.Topic, rp.Partition, kerr.UnknownLeaderEpoch.Code)
+ continue
+ }
+ }
+
+ sp := donep(rt.Topic, rp.Partition, 0)
+ sp.LeaderEpoch = pd.epoch
+ switch rp.Timestamp {
+ case -2:
+ sp.Offset = pd.logStartOffset
+ case -1:
+ if req.IsolationLevel == 1 {
+ sp.Offset = pd.lastStableOffset
+ } else {
+ sp.Offset = pd.highWatermark
+ }
+ default:
+ // returns the index of the first batch _after_ the requested timestamp
+ idx, _ := sort.Find(len(pd.batches), func(idx int) int {
+ maxEarlier := pd.batches[idx].maxEarlierTimestamp
+ switch {
+ case maxEarlier > rp.Timestamp:
+ return -1
+ case maxEarlier == rp.Timestamp:
+ return 0
+ default:
+ return 1
+ }
+ })
+ if idx == len(pd.batches) {
+ sp.Offset = -1
+ } else {
+ sp.Offset = pd.batches[idx].FirstOffset
+ }
+ }
+ }
+ }
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/03_metadata.go b/vendor/github.com/twmb/franz-go/pkg/kfake/03_metadata.go
new file mode 100644
index 0000000000000..a4f81276725fa
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/03_metadata.go
@@ -0,0 +1,120 @@
+package kfake
+
+import (
+ "net"
+ "strconv"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(3, 0, 12) }
+
+func (c *Cluster) handleMetadata(kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.MetadataRequest)
+ resp := req.ResponseKind().(*kmsg.MetadataResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ for _, b := range c.bs {
+ sb := kmsg.NewMetadataResponseBroker()
+ h, p, _ := net.SplitHostPort(b.ln.Addr().String())
+ p32, _ := strconv.Atoi(p)
+ sb.NodeID = b.node
+ sb.Host = h
+ sb.Port = int32(p32)
+ resp.Brokers = append(resp.Brokers, sb)
+ }
+
+ resp.ClusterID = &c.cfg.clusterID
+ resp.ControllerID = c.controller.node
+
+ id2t := make(map[uuid]string)
+ tidx := make(map[string]int)
+
+ donet := func(t string, id uuid, errCode int16) *kmsg.MetadataResponseTopic {
+ if i, ok := tidx[t]; ok {
+ return &resp.Topics[i]
+ }
+ id2t[id] = t
+ tidx[t] = len(resp.Topics)
+ st := kmsg.NewMetadataResponseTopic()
+ if t != "" {
+ st.Topic = kmsg.StringPtr(t)
+ }
+ st.TopicID = id
+ st.ErrorCode = errCode
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donep := func(t string, id uuid, p int32, errCode int16) *kmsg.MetadataResponseTopicPartition {
+ sp := kmsg.NewMetadataResponseTopicPartition()
+ sp.Partition = p
+ sp.ErrorCode = errCode
+ st := donet(t, id, 0)
+ st.Partitions = append(st.Partitions, sp)
+ return &st.Partitions[len(st.Partitions)-1]
+ }
+ okp := func(t string, id uuid, p int32, pd *partData) {
+ nreplicas := c.data.treplicas[t]
+ if nreplicas > len(c.bs) {
+ nreplicas = len(c.bs)
+ }
+
+ sp := donep(t, id, p, 0)
+ sp.Leader = pd.leader.node
+ sp.LeaderEpoch = pd.epoch
+
+ for i := 0; i < nreplicas; i++ {
+ idx := (pd.leader.bsIdx + i) % len(c.bs)
+ sp.Replicas = append(sp.Replicas, c.bs[idx].node)
+ }
+ sp.ISR = sp.Replicas
+ }
+
+ allowAuto := req.AllowAutoTopicCreation && c.cfg.allowAutoTopic
+ for _, rt := range req.Topics {
+ var topic string
+ var ok bool
+ // If topic ID is present, we ignore any provided topic.
+ // Duplicate topics are merged into one response topic.
+ // Topics with no topic and no ID are ignored.
+ if rt.TopicID != noID {
+ if topic, ok = c.data.id2t[rt.TopicID]; !ok {
+ donet("", rt.TopicID, kerr.UnknownTopicID.Code)
+ continue
+ }
+ } else if rt.Topic == nil {
+ continue
+ } else {
+ topic = *rt.Topic
+ }
+
+ ps, ok := c.data.tps.gett(topic)
+ if !ok {
+ if !allowAuto {
+ donet(topic, rt.TopicID, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ c.data.mkt(topic, -1, -1, nil)
+ ps, _ = c.data.tps.gett(topic)
+ }
+
+ id := c.data.t2id[topic]
+ for p, pd := range ps {
+ okp(topic, id, p, pd)
+ }
+ }
+ if req.Topics == nil && c.data.tps != nil {
+ for topic, ps := range c.data.tps {
+ id := c.data.t2id[topic]
+ for p, pd := range ps {
+ okp(topic, id, p, pd)
+ }
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/08_offset_commit.go b/vendor/github.com/twmb/franz-go/pkg/kfake/08_offset_commit.go
new file mode 100644
index 0000000000000..b82400e0f4fa1
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/08_offset_commit.go
@@ -0,0 +1,18 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(8, 0, 8) }
+
+func (c *Cluster) handleOffsetCommit(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.OffsetCommitRequest)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ c.groups.handleOffsetCommit(creq)
+ return nil, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/09_offset_fetch.go b/vendor/github.com/twmb/franz-go/pkg/kfake/09_offset_fetch.go
new file mode 100644
index 0000000000000..204339ecff767
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/09_offset_fetch.go
@@ -0,0 +1,17 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(9, 0, 8) }
+
+func (c *Cluster) handleOffsetFetch(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.OffsetFetchRequest)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ return c.groups.handleOffsetFetch(creq), nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/10_find_coordinator.go b/vendor/github.com/twmb/franz-go/pkg/kfake/10_find_coordinator.go
new file mode 100644
index 0000000000000..2873a3320bc17
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/10_find_coordinator.go
@@ -0,0 +1,61 @@
+package kfake
+
+import (
+ "net"
+ "strconv"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(10, 0, 4) }
+
+func (c *Cluster) handleFindCoordinator(kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.FindCoordinatorRequest)
+ resp := req.ResponseKind().(*kmsg.FindCoordinatorResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ var unknown bool
+ if req.CoordinatorType != 0 && req.CoordinatorType != 1 {
+ unknown = true
+ }
+
+ if req.Version <= 3 {
+ req.CoordinatorKeys = append(req.CoordinatorKeys, req.CoordinatorKey)
+ defer func() {
+ resp.ErrorCode = resp.Coordinators[0].ErrorCode
+ resp.ErrorMessage = resp.Coordinators[0].ErrorMessage
+ resp.NodeID = resp.Coordinators[0].NodeID
+ resp.Host = resp.Coordinators[0].Host
+ resp.Port = resp.Coordinators[0].Port
+ }()
+ }
+
+ addc := func(key string) *kmsg.FindCoordinatorResponseCoordinator {
+ sc := kmsg.NewFindCoordinatorResponseCoordinator()
+ sc.Key = key
+ resp.Coordinators = append(resp.Coordinators, sc)
+ return &resp.Coordinators[len(resp.Coordinators)-1]
+ }
+
+ for _, key := range req.CoordinatorKeys {
+ sc := addc(key)
+ if unknown {
+ sc.ErrorCode = kerr.InvalidRequest.Code
+ continue
+ }
+
+ b := c.coordinator(key)
+ host, port, _ := net.SplitHostPort(b.ln.Addr().String())
+ iport, _ := strconv.Atoi(port)
+
+ sc.NodeID = b.node
+ sc.Host = host
+ sc.Port = int32(iport)
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/11_join_group.go b/vendor/github.com/twmb/franz-go/pkg/kfake/11_join_group.go
new file mode 100644
index 0000000000000..70ed1d896e2cb
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/11_join_group.go
@@ -0,0 +1,18 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(11, 0, 9) }
+
+func (c *Cluster) handleJoinGroup(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.JoinGroupRequest)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ c.groups.handleJoin(creq)
+ return nil, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/12_heartbeat.go b/vendor/github.com/twmb/franz-go/pkg/kfake/12_heartbeat.go
new file mode 100644
index 0000000000000..59f12712cc0af
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/12_heartbeat.go
@@ -0,0 +1,23 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(12, 0, 4) }
+
+func (c *Cluster) handleHeartbeat(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.HeartbeatRequest)
+ resp := req.ResponseKind().(*kmsg.HeartbeatResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ if c.groups.handleHeartbeat(creq) {
+ return nil, nil
+ }
+ resp.ErrorCode = kerr.GroupIDNotFound.Code
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/13_leave_group.go b/vendor/github.com/twmb/franz-go/pkg/kfake/13_leave_group.go
new file mode 100644
index 0000000000000..e941f19ede25c
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/13_leave_group.go
@@ -0,0 +1,23 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(13, 0, 5) }
+
+func (c *Cluster) handleLeaveGroup(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.LeaveGroupRequest)
+ resp := req.ResponseKind().(*kmsg.LeaveGroupResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ if c.groups.handleLeave(creq) {
+ return nil, nil
+ }
+ resp.ErrorCode = kerr.GroupIDNotFound.Code
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/14_sync_group.go b/vendor/github.com/twmb/franz-go/pkg/kfake/14_sync_group.go
new file mode 100644
index 0000000000000..9944b11204adc
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/14_sync_group.go
@@ -0,0 +1,23 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(14, 0, 5) }
+
+func (c *Cluster) handleSyncGroup(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.SyncGroupRequest)
+ resp := req.ResponseKind().(*kmsg.SyncGroupResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ if c.groups.handleSync(creq) {
+ return nil, nil
+ }
+ resp.ErrorCode = kerr.GroupIDNotFound.Code
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/15_describe_groups.go b/vendor/github.com/twmb/franz-go/pkg/kfake/15_describe_groups.go
new file mode 100644
index 0000000000000..8791759becf79
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/15_describe_groups.go
@@ -0,0 +1,17 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(15, 0, 5) }
+
+func (c *Cluster) handleDescribeGroups(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.DescribeGroupsRequest)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ return c.groups.handleDescribe(creq), nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/16_list_groups.go b/vendor/github.com/twmb/franz-go/pkg/kfake/16_list_groups.go
new file mode 100644
index 0000000000000..6d0189c4b892b
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/16_list_groups.go
@@ -0,0 +1,17 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(16, 0, 4) }
+
+func (c *Cluster) handleListGroups(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.ListGroupsRequest)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ return c.groups.handleList(creq), nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/17_sasl_handshake.go b/vendor/github.com/twmb/franz-go/pkg/kfake/17_sasl_handshake.go
new file mode 100644
index 0000000000000..8a80cbf1c8240
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/17_sasl_handshake.go
@@ -0,0 +1,35 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(17, 1, 1) }
+
+func (c *Cluster) handleSASLHandshake(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.SASLHandshakeRequest)
+ resp := req.ResponseKind().(*kmsg.SASLHandshakeResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ if creq.cc.saslStage != saslStageBegin {
+ resp.ErrorCode = kerr.IllegalSaslState.Code
+ return resp, nil
+ }
+
+ switch req.Mechanism {
+ case saslPlain:
+ creq.cc.saslStage = saslStageAuthPlain
+ case saslScram256:
+ creq.cc.saslStage = saslStageAuthScram0_256
+ case saslScram512:
+ creq.cc.saslStage = saslStageAuthScram0_512
+ default:
+ resp.ErrorCode = kerr.UnsupportedSaslMechanism.Code
+ resp.SupportedMechanisms = []string{saslPlain, saslScram256, saslScram512}
+ }
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/18_api_versions.go b/vendor/github.com/twmb/franz-go/pkg/kfake/18_api_versions.go
new file mode 100644
index 0000000000000..c4378da4b8d03
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/18_api_versions.go
@@ -0,0 +1,81 @@
+package kfake
+
+import (
+ "fmt"
+ "sort"
+ "sync"
+
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(18, 0, 3) }
+
+func (c *Cluster) handleApiVersions(kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.ApiVersionsRequest)
+ resp := req.ResponseKind().(*kmsg.ApiVersionsResponse)
+
+ if resp.Version > 3 {
+ resp.Version = 0 // downgrades to 0 if the version is unknown
+ }
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ // If we are handling ApiVersions, our package is initialized and we
+ // build our response once.
+ apiVersionsOnce.Do(func() {
+ for _, v := range apiVersionsKeys {
+ apiVersionsSorted = append(apiVersionsSorted, v)
+ }
+ sort.Slice(apiVersionsSorted, func(i, j int) bool {
+ return apiVersionsSorted[i].ApiKey < apiVersionsSorted[j].ApiKey
+ })
+ })
+ resp.ApiKeys = apiVersionsSorted
+
+ return resp, nil
+}
+
+// Called at the beginning of every request, this validates that the client
+// is sending requests within version ranges we can handle.
+func checkReqVersion(key, version int16) error {
+ v, exists := apiVersionsKeys[key]
+ if !exists {
+ return fmt.Errorf("unsupported request key %d", key)
+ }
+ if version < v.MinVersion {
+ return fmt.Errorf("%s version %d below min supported version %d", kmsg.NameForKey(key), version, v.MinVersion)
+ }
+ if version > v.MaxVersion {
+ return fmt.Errorf("%s version %d above max supported version %d", kmsg.NameForKey(key), version, v.MaxVersion)
+ }
+ return nil
+}
+
+var (
+ apiVersionsMu sync.Mutex
+ apiVersionsKeys = make(map[int16]kmsg.ApiVersionsResponseApiKey)
+
+ apiVersionsOnce sync.Once
+ apiVersionsSorted []kmsg.ApiVersionsResponseApiKey
+)
+
+// Every request we implement calls regKey in an init function, allowing us to
+// fully correctly build our ApiVersions response.
+func regKey(key, min, max int16) {
+ apiVersionsMu.Lock()
+ defer apiVersionsMu.Unlock()
+
+ if key < 0 || min < 0 || max < 0 || max < min {
+ panic(fmt.Sprintf("invalid registration, key: %d, min: %d, max: %d", key, min, max))
+ }
+ if _, exists := apiVersionsKeys[key]; exists {
+ panic(fmt.Sprintf("doubly registered key %d", key))
+ }
+ apiVersionsKeys[key] = kmsg.ApiVersionsResponseApiKey{
+ ApiKey: key,
+ MinVersion: min,
+ MaxVersion: max,
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/19_create_topics.go b/vendor/github.com/twmb/franz-go/pkg/kfake/19_create_topics.go
new file mode 100644
index 0000000000000..1f9d00e620bfa
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/19_create_topics.go
@@ -0,0 +1,85 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TODO
+//
+// * Return InvalidTopicException when names collide
+
+func init() { regKey(19, 0, 7) }
+
+func (c *Cluster) handleCreateTopics(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.CreateTopicsRequest)
+ resp := req.ResponseKind().(*kmsg.CreateTopicsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ donet := func(t string, errCode int16) *kmsg.CreateTopicsResponseTopic {
+ st := kmsg.NewCreateTopicsResponseTopic()
+ st.Topic = t
+ st.ErrorCode = errCode
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donets := func(errCode int16) {
+ for _, rt := range req.Topics {
+ donet(rt.Topic, errCode)
+ }
+ }
+
+ if b != c.controller {
+ donets(kerr.NotController.Code)
+ return resp, nil
+ }
+
+ uniq := make(map[string]struct{})
+ for _, rt := range req.Topics {
+ if _, ok := uniq[rt.Topic]; ok {
+ donets(kerr.InvalidRequest.Code)
+ return resp, nil
+ }
+ uniq[rt.Topic] = struct{}{}
+ }
+
+ for _, rt := range req.Topics {
+ if _, ok := c.data.tps.gett(rt.Topic); ok {
+ donet(rt.Topic, kerr.TopicAlreadyExists.Code)
+ continue
+ }
+ if len(rt.ReplicaAssignment) > 0 {
+ donet(rt.Topic, kerr.InvalidReplicaAssignment.Code)
+ continue
+ }
+ if int(rt.ReplicationFactor) > len(c.bs) {
+ donet(rt.Topic, kerr.InvalidReplicationFactor.Code)
+ continue
+ }
+ if rt.NumPartitions == 0 {
+ donet(rt.Topic, kerr.InvalidPartitions.Code)
+ continue
+ }
+ configs := make(map[string]*string)
+ for _, c := range rt.Configs {
+ configs[c.Name] = c.Value
+ }
+ c.data.mkt(rt.Topic, int(rt.NumPartitions), int(rt.ReplicationFactor), configs)
+ st := donet(rt.Topic, 0)
+ st.TopicID = c.data.t2id[rt.Topic]
+ st.NumPartitions = int32(len(c.data.tps[rt.Topic]))
+ st.ReplicationFactor = int16(c.data.treplicas[rt.Topic])
+ for k, v := range configs {
+ c := kmsg.NewCreateTopicsResponseTopicConfig()
+ c.Name = k
+ c.Value = v
+ // Source?
+ st.Configs = append(st.Configs, c)
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/20_delete_topics.go b/vendor/github.com/twmb/franz-go/pkg/kfake/20_delete_topics.go
new file mode 100644
index 0000000000000..eab80235e6db1
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/20_delete_topics.go
@@ -0,0 +1,94 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(20, 0, 6) }
+
+func (c *Cluster) handleDeleteTopics(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.DeleteTopicsRequest)
+ resp := req.ResponseKind().(*kmsg.DeleteTopicsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ donet := func(t *string, id uuid, errCode int16) *kmsg.DeleteTopicsResponseTopic {
+ st := kmsg.NewDeleteTopicsResponseTopic()
+ st.Topic = t
+ st.TopicID = id
+ st.ErrorCode = errCode
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donets := func(errCode int16) {
+ for _, rt := range req.Topics {
+ donet(rt.Topic, rt.TopicID, errCode)
+ }
+ }
+
+ if req.Version <= 5 {
+ for _, topic := range req.TopicNames {
+ rt := kmsg.NewDeleteTopicsRequestTopic()
+ rt.Topic = kmsg.StringPtr(topic)
+ req.Topics = append(req.Topics, rt)
+ }
+ }
+
+ if b != c.controller {
+ donets(kerr.NotController.Code)
+ return resp, nil
+ }
+ for _, rt := range req.Topics {
+ if rt.TopicID != noID && rt.Topic != nil {
+ donets(kerr.InvalidRequest.Code)
+ return resp, nil
+ }
+ }
+
+ type toDelete struct {
+ topic string
+ id uuid
+ }
+ var toDeletes []toDelete
+ defer func() {
+ for _, td := range toDeletes {
+ delete(c.data.tps, td.topic)
+ delete(c.data.id2t, td.id)
+ delete(c.data.t2id, td.topic)
+
+ }
+ }()
+ for _, rt := range req.Topics {
+ var topic string
+ var id uuid
+ if rt.Topic != nil {
+ topic = *rt.Topic
+ id = c.data.t2id[topic]
+ } else {
+ topic = c.data.id2t[rt.TopicID]
+ id = rt.TopicID
+ }
+ t, ok := c.data.tps.gett(topic)
+ if !ok {
+ if rt.Topic != nil {
+ donet(&topic, id, kerr.UnknownTopicOrPartition.Code)
+ } else {
+ donet(&topic, id, kerr.UnknownTopicID.Code)
+ }
+ continue
+ }
+
+ donet(&topic, id, 0)
+ toDeletes = append(toDeletes, toDelete{topic, id})
+ for _, pd := range t {
+ for watch := range pd.watch {
+ watch.deleted()
+ }
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/21_delete_records.go b/vendor/github.com/twmb/franz-go/pkg/kfake/21_delete_records.go
new file mode 100644
index 0000000000000..e175045e4cfea
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/21_delete_records.go
@@ -0,0 +1,74 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TODO
+//
+// * Return InvalidTopicException when names collide
+
+func init() { regKey(21, 0, 2) }
+
+func (c *Cluster) handleDeleteRecords(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.DeleteRecordsRequest)
+ resp := req.ResponseKind().(*kmsg.DeleteRecordsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ tidx := make(map[string]int)
+ donet := func(t string, errCode int16) *kmsg.DeleteRecordsResponseTopic {
+ if i, ok := tidx[t]; ok {
+ return &resp.Topics[i]
+ }
+ tidx[t] = len(resp.Topics)
+ st := kmsg.NewDeleteRecordsResponseTopic()
+ st.Topic = t
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donep := func(t string, p int32, errCode int16) *kmsg.DeleteRecordsResponseTopicPartition {
+ sp := kmsg.NewDeleteRecordsResponseTopicPartition()
+ sp.Partition = p
+ sp.ErrorCode = errCode
+ st := donet(t, 0)
+ st.Partitions = append(st.Partitions, sp)
+ return &st.Partitions[len(st.Partitions)-1]
+ }
+
+ for _, rt := range req.Topics {
+ ps, ok := c.data.tps.gett(rt.Topic)
+ for _, rp := range rt.Partitions {
+ if !ok {
+ donep(rt.Topic, rp.Partition, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ pd, ok := ps[rp.Partition]
+ if !ok {
+ donep(rt.Topic, rp.Partition, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ if pd.leader != b {
+ donep(rt.Topic, rp.Partition, kerr.NotLeaderForPartition.Code)
+ continue
+ }
+ to := rp.Offset
+ if to == -1 {
+ to = pd.highWatermark
+ }
+ if to < pd.logStartOffset || to > pd.highWatermark {
+ donep(rt.Topic, rp.Partition, kerr.OffsetOutOfRange.Code)
+ continue
+ }
+ pd.logStartOffset = to
+ pd.trimLeft()
+ sp := donep(rt.Topic, rp.Partition, 0)
+ sp.LowWatermark = to
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/22_init_producer_id.go b/vendor/github.com/twmb/franz-go/pkg/kfake/22_init_producer_id.go
new file mode 100644
index 0000000000000..a0485905a42e7
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/22_init_producer_id.go
@@ -0,0 +1,34 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TODO
+//
+// * Transactional IDs
+// * v3+
+
+func init() { regKey(22, 0, 4) }
+
+func (c *Cluster) handleInitProducerID(kreq kmsg.Request) (kmsg.Response, error) {
+ var (
+ req = kreq.(*kmsg.InitProducerIDRequest)
+ resp = req.ResponseKind().(*kmsg.InitProducerIDResponse)
+ )
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ if req.TransactionalID != nil {
+ resp.ErrorCode = kerr.UnknownServerError.Code
+ return resp, nil
+ }
+
+ pid := c.pids.create(nil)
+ resp.ProducerID = pid.id
+ resp.ProducerEpoch = pid.epoch
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/23_offset_for_leader_epoch.go b/vendor/github.com/twmb/franz-go/pkg/kfake/23_offset_for_leader_epoch.go
new file mode 100644
index 0000000000000..f531ecf89cfab
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/23_offset_for_leader_epoch.go
@@ -0,0 +1,121 @@
+package kfake
+
+import (
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(23, 3, 4) }
+
+func (c *Cluster) handleOffsetForLeaderEpoch(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.OffsetForLeaderEpochRequest)
+ resp := req.ResponseKind().(*kmsg.OffsetForLeaderEpochResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ tidx := make(map[string]int)
+ donet := func(t string, errCode int16) *kmsg.OffsetForLeaderEpochResponseTopic {
+ if i, ok := tidx[t]; ok {
+ return &resp.Topics[i]
+ }
+ tidx[t] = len(resp.Topics)
+ st := kmsg.NewOffsetForLeaderEpochResponseTopic()
+ st.Topic = t
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donep := func(t string, p int32, errCode int16) *kmsg.OffsetForLeaderEpochResponseTopicPartition {
+ sp := kmsg.NewOffsetForLeaderEpochResponseTopicPartition()
+ sp.Partition = p
+ sp.ErrorCode = errCode
+ st := donet(t, 0)
+ st.Partitions = append(st.Partitions, sp)
+ return &st.Partitions[len(st.Partitions)-1]
+ }
+
+ for _, rt := range req.Topics {
+ ps, ok := c.data.tps.gett(rt.Topic)
+ for _, rp := range rt.Partitions {
+ if req.ReplicaID != -1 {
+ donep(rt.Topic, rp.Partition, kerr.UnknownServerError.Code)
+ continue
+ }
+ if !ok {
+ donep(rt.Topic, rp.Partition, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ pd, ok := ps[rp.Partition]
+ if !ok {
+ donep(rt.Topic, rp.Partition, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ if pd.leader != b {
+ donep(rt.Topic, rp.Partition, kerr.NotLeaderForPartition.Code)
+ continue
+ }
+ if rp.CurrentLeaderEpoch < pd.epoch {
+ donep(rt.Topic, rp.Partition, kerr.FencedLeaderEpoch.Code)
+ continue
+ } else if rp.CurrentLeaderEpoch > pd.epoch {
+ donep(rt.Topic, rp.Partition, kerr.UnknownLeaderEpoch.Code)
+ continue
+ }
+
+ sp := donep(rt.Topic, rp.Partition, 0)
+
+ // If the user is requesting our current epoch, we return the HWM.
+ if rp.LeaderEpoch == pd.epoch {
+ sp.LeaderEpoch = pd.epoch
+ sp.EndOffset = pd.highWatermark
+ continue
+ }
+
+ // If our epoch was bumped before anything was
+ // produced, return the epoch and a start offset of 0.
+ if len(pd.batches) == 0 {
+ sp.LeaderEpoch = pd.epoch
+ sp.EndOffset = 0
+ if rp.LeaderEpoch > pd.epoch {
+ sp.LeaderEpoch = -1
+ sp.EndOffset = -1
+ }
+ continue
+ }
+
+ // What is the largest epoch after the requested epoch?
+ nextEpoch := rp.LeaderEpoch + 1
+ idx, _ := sort.Find(len(pd.batches), func(idx int) int {
+ batchEpoch := pd.batches[idx].epoch
+ switch {
+ case nextEpoch <= batchEpoch:
+ return -1
+ default:
+ return 1
+ }
+ })
+
+ // Requested epoch is not yet known: keep -1 returns.
+ if idx == len(pd.batches) {
+ sp.LeaderEpoch = -1
+ sp.EndOffset = -1
+ continue
+ }
+
+ // Next epoch is actually the first epoch: return the
+ // requested epoch and the LSO.
+ if idx == 0 {
+ sp.LeaderEpoch = rp.LeaderEpoch
+ sp.EndOffset = pd.logStartOffset
+ continue
+ }
+
+ sp.LeaderEpoch = pd.batches[idx-1].epoch
+ sp.EndOffset = pd.batches[idx].FirstOffset
+ }
+ }
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/32_describe_configs.go b/vendor/github.com/twmb/franz-go/pkg/kfake/32_describe_configs.go
new file mode 100644
index 0000000000000..02662cc5c858a
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/32_describe_configs.go
@@ -0,0 +1,108 @@
+package kfake
+
+import (
+ "strconv"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(32, 0, 4) }
+
+func (c *Cluster) handleDescribeConfigs(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.DescribeConfigsRequest)
+ resp := req.ResponseKind().(*kmsg.DescribeConfigsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ doner := func(n string, t kmsg.ConfigResourceType, errCode int16) *kmsg.DescribeConfigsResponseResource {
+ st := kmsg.NewDescribeConfigsResponseResource()
+ st.ResourceName = n
+ st.ResourceType = t
+ st.ErrorCode = errCode
+ resp.Resources = append(resp.Resources, st)
+ return &resp.Resources[len(resp.Resources)-1]
+ }
+
+ rfn := func(r *kmsg.DescribeConfigsResponseResource) func(k string, v *string, src kmsg.ConfigSource, sensitive bool) {
+ nameIdxs := make(map[string]int)
+ return func(k string, v *string, src kmsg.ConfigSource, sensitive bool) {
+ rc := kmsg.NewDescribeConfigsResponseResourceConfig()
+ rc.Name = k
+ rc.Value = v
+ rc.Source = src
+ rc.ReadOnly = rc.Source == kmsg.ConfigSourceStaticBrokerConfig
+ rc.IsDefault = rc.Source == kmsg.ConfigSourceDefaultConfig || rc.Source == kmsg.ConfigSourceStaticBrokerConfig
+ rc.IsSensitive = sensitive
+
+ // We walk configs from static to default to dynamic,
+ // if this config already exists previously, we move
+ // the previous config to a synonym and update the
+ // previous config.
+ if idx, ok := nameIdxs[k]; ok {
+ prior := r.Configs[idx]
+ syn := kmsg.NewDescribeConfigsResponseResourceConfigConfigSynonym()
+ syn.Name = prior.Name
+ syn.Value = prior.Value
+ syn.Source = prior.Source
+ rc.ConfigSynonyms = append([]kmsg.DescribeConfigsResponseResourceConfigConfigSynonym{syn}, prior.ConfigSynonyms...)
+ r.Configs[idx] = rc
+ return
+ }
+ nameIdxs[k] = len(r.Configs)
+ r.Configs = append(r.Configs, rc)
+ }
+ }
+ filter := func(rr *kmsg.DescribeConfigsRequestResource, r *kmsg.DescribeConfigsResponseResource) {
+ if rr.ConfigNames == nil {
+ return
+ }
+ names := make(map[string]struct{})
+ for _, name := range rr.ConfigNames {
+ names[name] = struct{}{}
+ }
+ keep := r.Configs[:0]
+ for _, rc := range r.Configs {
+ if _, ok := names[rc.Name]; ok {
+ keep = append(keep, rc)
+ }
+ }
+ r.Configs = keep
+ }
+
+outer:
+ for i := range req.Resources {
+ rr := &req.Resources[i]
+ switch rr.ResourceType {
+ case kmsg.ConfigResourceTypeBroker:
+ id := int32(-1)
+ if rr.ResourceName != "" {
+ iid, err := strconv.Atoi(rr.ResourceName)
+ id = int32(iid)
+ if err != nil || id != b.node {
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ continue outer
+ }
+ }
+ r := doner(rr.ResourceName, rr.ResourceType, 0)
+ c.brokerConfigs(id, rfn(r))
+ filter(rr, r)
+
+ case kmsg.ConfigResourceTypeTopic:
+ if _, ok := c.data.tps.gett(rr.ResourceName); !ok {
+ doner(rr.ResourceName, rr.ResourceType, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ r := doner(rr.ResourceName, rr.ResourceType, 0)
+ c.data.configs(rr.ResourceName, rfn(r))
+ filter(rr, r)
+
+ default:
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/33_alter_configs.go b/vendor/github.com/twmb/franz-go/pkg/kfake/33_alter_configs.go
new file mode 100644
index 0000000000000..76e1e5fb576a0
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/33_alter_configs.go
@@ -0,0 +1,92 @@
+package kfake
+
+import (
+ "strconv"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(33, 0, 2) }
+
+func (c *Cluster) handleAlterConfigs(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.AlterConfigsRequest)
+ resp := req.ResponseKind().(*kmsg.AlterConfigsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ doner := func(n string, t kmsg.ConfigResourceType, errCode int16) *kmsg.AlterConfigsResponseResource {
+ st := kmsg.NewAlterConfigsResponseResource()
+ st.ResourceName = n
+ st.ResourceType = t
+ st.ErrorCode = errCode
+ resp.Resources = append(resp.Resources, st)
+ return &resp.Resources[len(resp.Resources)-1]
+ }
+
+outer:
+ for i := range req.Resources {
+ rr := &req.Resources[i]
+ switch rr.ResourceType {
+ case kmsg.ConfigResourceTypeBroker:
+ id := int32(-1)
+ if rr.ResourceName != "" {
+ iid, err := strconv.Atoi(rr.ResourceName)
+ id = int32(iid)
+ if err != nil || id != b.node {
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ continue outer
+ }
+ }
+ var invalid bool
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ invalid = invalid || !c.setBrokerConfig(rc.Name, rc.Value, true)
+ }
+ if invalid {
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ continue
+ }
+ doner(rr.ResourceName, rr.ResourceType, 0)
+ if req.ValidateOnly {
+ continue
+ }
+ c.bcfgs = make(map[string]*string)
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ c.setBrokerConfig(rc.Name, rc.Value, false)
+ }
+
+ case kmsg.ConfigResourceTypeTopic:
+ if _, ok := c.data.tps.gett(rr.ResourceName); !ok {
+ doner(rr.ResourceName, rr.ResourceType, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ var invalid bool
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ invalid = invalid || !c.data.setTopicConfig(rr.ResourceName, rc.Name, rc.Value, true)
+ }
+ if invalid {
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ continue
+ }
+ doner(rr.ResourceName, rr.ResourceType, 0)
+ if req.ValidateOnly {
+ continue
+ }
+ delete(c.data.tcfgs, rr.ResourceName)
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ c.data.setTopicConfig(rr.ResourceName, rc.Name, rc.Value, false)
+ }
+
+ default:
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/34_alter_replica_log_dirs.go b/vendor/github.com/twmb/franz-go/pkg/kfake/34_alter_replica_log_dirs.go
new file mode 100644
index 0000000000000..8031447c3270b
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/34_alter_replica_log_dirs.go
@@ -0,0 +1,53 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(34, 0, 2) }
+
+func (c *Cluster) handleAlterReplicaLogDirs(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.AlterReplicaLogDirsRequest)
+ resp := req.ResponseKind().(*kmsg.AlterReplicaLogDirsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ tidx := make(map[string]int)
+ donet := func(t string, errCode int16) *kmsg.AlterReplicaLogDirsResponseTopic {
+ if i, ok := tidx[t]; ok {
+ return &resp.Topics[i]
+ }
+ tidx[t] = len(resp.Topics)
+ st := kmsg.NewAlterReplicaLogDirsResponseTopic()
+ st.Topic = t
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donep := func(t string, p int32, errCode int16) *kmsg.AlterReplicaLogDirsResponseTopicPartition {
+ sp := kmsg.NewAlterReplicaLogDirsResponseTopicPartition()
+ sp.Partition = p
+ sp.ErrorCode = errCode
+ st := donet(t, 0)
+ st.Partitions = append(st.Partitions, sp)
+ return &st.Partitions[len(st.Partitions)-1]
+ }
+
+ for _, rd := range req.Dirs {
+ for _, t := range rd.Topics {
+ for _, p := range t.Partitions {
+ d, ok := c.data.tps.getp(t.Topic, p)
+ if !ok {
+ donep(t.Topic, p, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ d.dir = rd.Dir
+ donep(t.Topic, p, 0)
+ }
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/35_describe_log_dirs.go b/vendor/github.com/twmb/franz-go/pkg/kfake/35_describe_log_dirs.go
new file mode 100644
index 0000000000000..e1ea9b8a9f08e
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/35_describe_log_dirs.go
@@ -0,0 +1,70 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(35, 0, 4) }
+
+func (c *Cluster) handleDescribeLogDirs(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.DescribeLogDirsRequest)
+ resp := req.ResponseKind().(*kmsg.DescribeLogDirsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ totalSpace := make(map[string]int64)
+ individual := make(map[string]map[string]map[int32]int64)
+
+ add := func(d string, t string, p int32, s int64) {
+ totalSpace[d] += s
+ ts, ok := individual[d]
+ if !ok {
+ ts = make(map[string]map[int32]int64)
+ individual[d] = ts
+ }
+ ps, ok := ts[t]
+ if !ok {
+ ps = make(map[int32]int64)
+ ts[t] = ps
+ }
+ ps[p] += s
+ }
+
+ if req.Topics == nil {
+ c.data.tps.each(func(t string, p int32, d *partData) {
+ add(d.dir, t, p, d.nbytes)
+ })
+ } else {
+ for _, t := range req.Topics {
+ for _, p := range t.Partitions {
+ d, ok := c.data.tps.getp(t.Topic, p)
+ if ok {
+ add(d.dir, t.Topic, p, d.nbytes)
+ }
+ }
+ }
+ }
+
+ for dir, ts := range individual {
+ rd := kmsg.NewDescribeLogDirsResponseDir()
+ rd.Dir = dir
+ rd.TotalBytes = totalSpace[dir]
+ rd.UsableBytes = 32 << 30
+ for t, ps := range ts {
+ rt := kmsg.NewDescribeLogDirsResponseDirTopic()
+ rt.Topic = t
+ for p, s := range ps {
+ rp := kmsg.NewDescribeLogDirsResponseDirTopicPartition()
+ rp.Partition = p
+ rp.Size = s
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ rd.Topics = append(rd.Topics, rt)
+ }
+ resp.Dirs = append(resp.Dirs, rd)
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/36_sasl_authenticate.go b/vendor/github.com/twmb/franz-go/pkg/kfake/36_sasl_authenticate.go
new file mode 100644
index 0000000000000..b94d2f0118fe9
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/36_sasl_authenticate.go
@@ -0,0 +1,83 @@
+package kfake
+
+import (
+ "errors"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(36, 0, 2) }
+
+func (c *Cluster) handleSASLAuthenticate(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.SASLAuthenticateRequest)
+ resp := req.ResponseKind().(*kmsg.SASLAuthenticateResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ switch creq.cc.saslStage {
+ default:
+ resp.ErrorCode = kerr.IllegalSaslState.Code
+ return resp, nil
+
+ case saslStageAuthPlain:
+ u, p, err := saslSplitPlain(req.SASLAuthBytes)
+ if err != nil {
+ return nil, err
+ }
+ if c.sasls.plain == nil {
+ return nil, errors.New("invalid sasl")
+ }
+ if p != c.sasls.plain[u] {
+ return nil, errors.New("invalid sasl")
+ }
+ creq.cc.saslStage = saslStageComplete
+
+ case saslStageAuthScram0_256:
+ c0, err := scramParseClient0(req.SASLAuthBytes)
+ if err != nil {
+ return nil, err
+ }
+ if c.sasls.scram256 == nil {
+ return nil, errors.New("invalid sasl")
+ }
+ a, ok := c.sasls.scram256[c0.user]
+ if !ok {
+ return nil, errors.New("invalid sasl")
+ }
+ s0, serverFirst := scramServerFirst(c0, a)
+ resp.SASLAuthBytes = serverFirst
+ creq.cc.saslStage = saslStageAuthScram1
+ creq.cc.s0 = &s0
+
+ case saslStageAuthScram0_512:
+ c0, err := scramParseClient0(req.SASLAuthBytes)
+ if err != nil {
+ return nil, err
+ }
+ if c.sasls.scram512 == nil {
+ return nil, errors.New("invalid sasl")
+ }
+ a, ok := c.sasls.scram512[c0.user]
+ if !ok {
+ return nil, errors.New("invalid sasl")
+ }
+ s0, serverFirst := scramServerFirst(c0, a)
+ resp.SASLAuthBytes = serverFirst
+ creq.cc.saslStage = saslStageAuthScram1
+ creq.cc.s0 = &s0
+
+ case saslStageAuthScram1:
+ serverFinal, err := creq.cc.s0.serverFinal(req.SASLAuthBytes)
+ if err != nil {
+ return nil, err
+ }
+ resp.SASLAuthBytes = serverFinal
+ creq.cc.saslStage = saslStageComplete
+ creq.cc.s0 = nil
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/37_create_partitions.go b/vendor/github.com/twmb/franz-go/pkg/kfake/37_create_partitions.go
new file mode 100644
index 0000000000000..bd4954ef58726
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/37_create_partitions.go
@@ -0,0 +1,66 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(37, 0, 3) }
+
+func (c *Cluster) handleCreatePartitions(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.CreatePartitionsRequest)
+ resp := req.ResponseKind().(*kmsg.CreatePartitionsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ donet := func(t string, errCode int16) *kmsg.CreatePartitionsResponseTopic {
+ st := kmsg.NewCreatePartitionsResponseTopic()
+ st.Topic = t
+ st.ErrorCode = errCode
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donets := func(errCode int16) {
+ for _, rt := range req.Topics {
+ donet(rt.Topic, errCode)
+ }
+ }
+
+ if b != c.controller {
+ donets(kerr.NotController.Code)
+ return resp, nil
+ }
+
+ uniq := make(map[string]struct{})
+ for _, rt := range req.Topics {
+ if _, ok := uniq[rt.Topic]; ok {
+ donets(kerr.InvalidRequest.Code)
+ return resp, nil
+ }
+ uniq[rt.Topic] = struct{}{}
+ }
+
+ for _, rt := range req.Topics {
+ t, ok := c.data.tps.gett(rt.Topic)
+ if !ok {
+ donet(rt.Topic, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ if len(rt.Assignment) > 0 {
+ donet(rt.Topic, kerr.InvalidReplicaAssignment.Code)
+ continue
+ }
+ if rt.Count < int32(len(t)) {
+ donet(rt.Topic, kerr.InvalidPartitions.Code)
+ continue
+ }
+ for i := int32(len(t)); i < rt.Count; i++ {
+ c.data.tps.mkp(rt.Topic, i, c.newPartData)
+ }
+ donet(rt.Topic, 0)
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/42_delete_groups.go b/vendor/github.com/twmb/franz-go/pkg/kfake/42_delete_groups.go
new file mode 100644
index 0000000000000..682574157d6dd
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/42_delete_groups.go
@@ -0,0 +1,17 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(42, 0, 2) }
+
+func (c *Cluster) handleDeleteGroups(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.DeleteGroupsRequest)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ return c.groups.handleDelete(creq), nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/44_incremental_alter_configs.go b/vendor/github.com/twmb/franz-go/pkg/kfake/44_incremental_alter_configs.go
new file mode 100644
index 0000000000000..a8096bcfc9ff2
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/44_incremental_alter_configs.go
@@ -0,0 +1,112 @@
+package kfake
+
+import (
+ "strconv"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(44, 0, 1) }
+
+func (c *Cluster) handleIncrementalAlterConfigs(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ req := kreq.(*kmsg.IncrementalAlterConfigsRequest)
+ resp := req.ResponseKind().(*kmsg.IncrementalAlterConfigsResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ doner := func(n string, t kmsg.ConfigResourceType, errCode int16) *kmsg.IncrementalAlterConfigsResponseResource {
+ st := kmsg.NewIncrementalAlterConfigsResponseResource()
+ st.ResourceName = n
+ st.ResourceType = t
+ st.ErrorCode = errCode
+ resp.Resources = append(resp.Resources, st)
+ return &resp.Resources[len(resp.Resources)-1]
+ }
+
+outer:
+ for i := range req.Resources {
+ rr := &req.Resources[i]
+ switch rr.ResourceType {
+ case kmsg.ConfigResourceTypeBroker:
+ id := int32(-1)
+ if rr.ResourceName != "" {
+ iid, err := strconv.Atoi(rr.ResourceName)
+ id = int32(iid)
+ if err != nil || id != b.node {
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ continue outer
+ }
+ }
+ var invalid bool
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ switch rc.Op {
+ case kmsg.IncrementalAlterConfigOpSet:
+ invalid = invalid || !c.setBrokerConfig(rr.Configs[i].Name, rr.Configs[i].Value, true)
+ case kmsg.IncrementalAlterConfigOpDelete:
+ default:
+ invalid = true
+ }
+ }
+ if invalid {
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ continue
+ }
+ doner(rr.ResourceName, rr.ResourceType, 0)
+ if req.ValidateOnly {
+ continue
+ }
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ switch rc.Op {
+ case kmsg.IncrementalAlterConfigOpSet:
+ c.setBrokerConfig(rr.Configs[i].Name, rr.Configs[i].Value, false)
+ case kmsg.IncrementalAlterConfigOpDelete:
+ delete(c.bcfgs, rc.Name)
+ }
+ }
+
+ case kmsg.ConfigResourceTypeTopic:
+ if _, ok := c.data.tps.gett(rr.ResourceName); !ok {
+ doner(rr.ResourceName, rr.ResourceType, kerr.UnknownTopicOrPartition.Code)
+ continue
+ }
+ var invalid bool
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ switch rc.Op {
+ case kmsg.IncrementalAlterConfigOpSet:
+ invalid = invalid || !c.data.setTopicConfig(rr.ResourceName, rc.Name, rc.Value, true)
+ case kmsg.IncrementalAlterConfigOpDelete:
+ default:
+ invalid = true
+ }
+ }
+ if invalid {
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ continue
+ }
+ doner(rr.ResourceName, rr.ResourceType, 0)
+ if req.ValidateOnly {
+ continue
+ }
+ for i := range rr.Configs {
+ rc := &rr.Configs[i]
+ switch rc.Op {
+ case kmsg.IncrementalAlterConfigOpSet:
+ c.data.setTopicConfig(rr.ResourceName, rc.Name, rc.Value, false)
+ case kmsg.IncrementalAlterConfigOpDelete:
+ delete(c.data.tcfgs[rr.ResourceName], rc.Name)
+ }
+ }
+
+ default:
+ doner(rr.ResourceName, rr.ResourceType, kerr.InvalidRequest.Code)
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/47_offset_delete.go b/vendor/github.com/twmb/franz-go/pkg/kfake/47_offset_delete.go
new file mode 100644
index 0000000000000..878e83b45a0e4
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/47_offset_delete.go
@@ -0,0 +1,23 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(47, 0, 0) }
+
+func (c *Cluster) handleOffsetDelete(creq *clientReq) (kmsg.Response, error) {
+ req := creq.kreq.(*kmsg.OffsetDeleteRequest)
+ resp := req.ResponseKind().(*kmsg.OffsetDeleteResponse)
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ if c.groups.handleOffsetDelete(creq) {
+ return nil, nil
+ }
+ resp.ErrorCode = kerr.GroupIDNotFound.Code
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/50_describe_user_scram_credentials.go b/vendor/github.com/twmb/franz-go/pkg/kfake/50_describe_user_scram_credentials.go
new file mode 100644
index 0000000000000..0cb107d21037e
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/50_describe_user_scram_credentials.go
@@ -0,0 +1,68 @@
+package kfake
+
+import (
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(50, 0, 0) }
+
+func (c *Cluster) handleDescribeUserSCRAMCredentials(kreq kmsg.Request) (kmsg.Response, error) {
+ var (
+ req = kreq.(*kmsg.DescribeUserSCRAMCredentialsRequest)
+ resp = req.ResponseKind().(*kmsg.DescribeUserSCRAMCredentialsResponse)
+ )
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ describe := make(map[string]bool) // if false, user was duplicated
+ for _, u := range req.Users {
+ if _, ok := describe[u.Name]; ok {
+ describe[u.Name] = true
+ } else {
+ describe[u.Name] = false
+ }
+ }
+ if req.Users == nil { // null returns all
+ for u := range c.sasls.scram256 {
+ describe[u] = false
+ }
+ for u := range c.sasls.scram512 {
+ describe[u] = false
+ }
+ }
+
+ addr := func(u string) *kmsg.DescribeUserSCRAMCredentialsResponseResult {
+ sr := kmsg.NewDescribeUserSCRAMCredentialsResponseResult()
+ sr.User = u
+ resp.Results = append(resp.Results, sr)
+ return &resp.Results[len(resp.Results)-1]
+ }
+
+ for u, duplicated := range describe {
+ sr := addr(u)
+ if duplicated {
+ sr.ErrorCode = kerr.DuplicateResource.Code
+ continue
+ }
+ if a, ok := c.sasls.scram256[u]; ok {
+ ci := kmsg.NewDescribeUserSCRAMCredentialsResponseResultCredentialInfo()
+ ci.Mechanism = 1
+ ci.Iterations = int32(a.iterations)
+ sr.CredentialInfos = append(sr.CredentialInfos, ci)
+ }
+ if a, ok := c.sasls.scram512[u]; ok {
+ ci := kmsg.NewDescribeUserSCRAMCredentialsResponseResultCredentialInfo()
+ ci.Mechanism = 2
+ ci.Iterations = int32(a.iterations)
+ sr.CredentialInfos = append(sr.CredentialInfos, ci)
+ }
+ if len(sr.CredentialInfos) == 0 {
+ sr.ErrorCode = kerr.ResourceNotFound.Code
+ }
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/51_alter_user_scram_credentials.go b/vendor/github.com/twmb/franz-go/pkg/kfake/51_alter_user_scram_credentials.go
new file mode 100644
index 0000000000000..7f853b5618f77
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/51_alter_user_scram_credentials.go
@@ -0,0 +1,124 @@
+package kfake
+
+import (
+ "bytes"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+func init() { regKey(51, 0, 0) }
+
+func (c *Cluster) handleAlterUserSCRAMCredentials(b *broker, kreq kmsg.Request) (kmsg.Response, error) {
+ var (
+ req = kreq.(*kmsg.AlterUserSCRAMCredentialsRequest)
+ resp = req.ResponseKind().(*kmsg.AlterUserSCRAMCredentialsResponse)
+ )
+
+ if err := checkReqVersion(req.Key(), req.Version); err != nil {
+ return nil, err
+ }
+
+ addr := func(u string) *kmsg.AlterUserSCRAMCredentialsResponseResult {
+ sr := kmsg.NewAlterUserSCRAMCredentialsResponseResult()
+ sr.User = u
+ resp.Results = append(resp.Results, sr)
+ return &resp.Results[len(resp.Results)-1]
+ }
+ doneu := func(u string, code int16) *kmsg.AlterUserSCRAMCredentialsResponseResult {
+ sr := addr(u)
+ sr.ErrorCode = code
+ return sr
+ }
+
+ users := make(map[string]int16)
+
+ // Validate everything up front, keeping track of all (and duplicate)
+ // users. If we are not controller, we fail with our users map.
+ for _, d := range req.Deletions {
+ if d.Name == "" {
+ users[d.Name] = kerr.UnacceptableCredential.Code
+ continue
+ }
+ if d.Mechanism != 1 && d.Mechanism != 2 {
+ users[d.Name] = kerr.UnsupportedSaslMechanism.Code
+ continue
+ }
+ users[d.Name] = 0
+ }
+ for _, u := range req.Upsertions {
+ if u.Name == "" || u.Iterations < 4096 || u.Iterations > 16384 { // Kafka min/max
+ users[u.Name] = kerr.UnacceptableCredential.Code
+ continue
+ }
+ if u.Mechanism != 1 && u.Mechanism != 2 {
+ users[u.Name] = kerr.UnsupportedSaslMechanism.Code
+ continue
+ }
+ if code, deleting := users[u.Name]; deleting && code == 0 {
+ users[u.Name] = kerr.DuplicateResource.Code
+ continue
+ }
+ users[u.Name] = 0
+ }
+
+ if b != c.controller {
+ for u := range users {
+ doneu(u, kerr.NotController.Code)
+ }
+ return resp, nil
+ }
+
+ // Add anything that failed validation.
+ for u, code := range users {
+ if code != 0 {
+ doneu(u, code)
+ }
+ }
+
+ // Process all deletions, adding ResourceNotFound as necessary.
+ for _, d := range req.Deletions {
+ if users[d.Name] != 0 {
+ continue
+ }
+ m := c.sasls.scram256
+ if d.Mechanism == 2 {
+ m = c.sasls.scram512
+ }
+ if m == nil {
+ doneu(d.Name, kerr.ResourceNotFound.Code)
+ continue
+ }
+ if _, ok := m[d.Name]; !ok {
+ doneu(d.Name, kerr.ResourceNotFound.Code)
+ continue
+ }
+ delete(m, d.Name)
+ doneu(d.Name, 0)
+ }
+
+ // Process all upsertions.
+ for _, u := range req.Upsertions {
+ if users[u.Name] != 0 {
+ continue
+ }
+ m := &c.sasls.scram256
+ mech := saslScram256
+ if u.Mechanism == 2 {
+ m = &c.sasls.scram512
+ mech = saslScram512
+ }
+ if *m == nil {
+ *m = make(map[string]scramAuth)
+ }
+ (*m)[u.Name] = scramAuth{
+ mechanism: mech,
+ iterations: int(u.Iterations),
+ saltedPass: bytes.Clone(u.SaltedPassword),
+ salt: bytes.Clone(u.Salt),
+ }
+ doneu(u.Name, 0)
+ }
+
+ return resp, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/LICENSE b/vendor/github.com/twmb/franz-go/pkg/kfake/LICENSE
new file mode 100644
index 0000000000000..36e18034325d5
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/LICENSE
@@ -0,0 +1,24 @@
+Copyright 2020, Travis Bischel.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the library nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/NOTES b/vendor/github.com/twmb/franz-go/pkg/kfake/NOTES
new file mode 100644
index 0000000000000..fd7583da67ec4
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/NOTES
@@ -0,0 +1,62 @@
+ORDER
+
+BASIC
+x Produce
+x Metadata
+x CreateTopics
+x InitProducerID
+x ListOffsets
+x Fetch
+x DeleteTopics
+x CreatePartitions
+
+GROUPS
+x OffsetCommit
+x OffsetFetch
+x FindCoordinator
+x JoinGroup
+x Heartbeat
+x LeaveGroup
+x SyncGroup
+x DescribeGroups
+x ListGroups
+x DeleteGroups
+
+MISC
+x OffsetForLeaderEpoch
+
+SASL
+x SaslHandshake
+x SaslAuthenticate
+x DescribeUserScramCredentials
+x AlterUserScramCredentials
+
+LOW-PRIO
+x DeleteRecords
+x DescribeConfigs
+x AlterConfigs
+x IncrementalAlterConfigs
+x OffsetDelete
+x AlterReplicaLogDirs
+x DescribeLogDirs
+
+TXNS
+* AddPartitionsToTxn
+* AddOffsetsToTxn
+* EndTxn
+* TxnOffsetCommit
+
+ACLS
+* DescribeACLs
+* CreateACLs
+* DeleteACLs
+
+LOWER-PRIO
+* DescribeProducers
+* DescribeTransactions
+* ListTransactions
+* AlterPartitionAssignments
+* ListPartitionReassignments
+* DescribeClientQuotas
+* AlterClientQuotas
+DTOKEN: ignore
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/client_conn.go b/vendor/github.com/twmb/franz-go/pkg/kfake/client_conn.go
new file mode 100644
index 0000000000000..a78b574f3f6f7
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/client_conn.go
@@ -0,0 +1,182 @@
+package kfake
+
+import (
+ "encoding/binary"
+ "io"
+ "net"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kbin"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+type (
+ clientConn struct {
+ c *Cluster
+ b *broker
+ conn net.Conn
+ respCh chan clientResp
+
+ saslStage saslStage
+ s0 *scramServer0
+ }
+
+ clientReq struct {
+ cc *clientConn
+ kreq kmsg.Request
+ at time.Time
+ cid string
+ corr int32
+ seq uint32
+ }
+
+ clientResp struct {
+ kresp kmsg.Response
+ corr int32
+ err error
+ seq uint32
+ }
+)
+
+func (creq *clientReq) empty() bool { return creq == nil || creq.cc == nil || creq.kreq == nil }
+
+func (cc *clientConn) read() {
+ defer cc.conn.Close()
+
+ type read struct {
+ body []byte
+ err error
+ }
+ var (
+ who = cc.conn.RemoteAddr()
+ size = make([]byte, 4)
+ readCh = make(chan read, 1)
+ seq uint32
+ )
+ for {
+ go func() {
+ if _, err := io.ReadFull(cc.conn, size); err != nil {
+ readCh <- read{err: err}
+ return
+ }
+ body := make([]byte, binary.BigEndian.Uint32(size))
+ _, err := io.ReadFull(cc.conn, body)
+ readCh <- read{body: body, err: err}
+ }()
+
+ var read read
+ select {
+ case <-cc.c.die:
+ return
+ case read = <-readCh:
+ }
+
+ if err := read.err; err != nil {
+ cc.c.cfg.logger.Logf(LogLevelDebug, "client %s disconnected from read: %v", who, err)
+ return
+ }
+
+ var (
+ body = read.body
+ reader = kbin.Reader{Src: body}
+ key = reader.Int16()
+ version = reader.Int16()
+ corr = reader.Int32()
+ clientID = reader.NullableString()
+ kreq = kmsg.RequestForKey(key)
+ )
+ kreq.SetVersion(version)
+ if kreq.IsFlexible() {
+ kmsg.SkipTags(&reader)
+ }
+ if err := kreq.ReadFrom(reader.Src); err != nil {
+ cc.c.cfg.logger.Logf(LogLevelDebug, "client %s unable to parse request: %v", who, err)
+ return
+ }
+
+ // Within Kafka, a null client ID is treated as an empty string.
+ var cid string
+ if clientID != nil {
+ cid = *clientID
+ }
+
+ select {
+ case cc.c.reqCh <- &clientReq{cc, kreq, time.Now(), cid, corr, seq}:
+ seq++
+ case <-cc.c.die:
+ return
+ }
+ }
+}
+
+func (cc *clientConn) write() {
+ defer cc.conn.Close()
+
+ var (
+ who = cc.conn.RemoteAddr()
+ writeCh = make(chan error, 1)
+ buf []byte
+ seq uint32
+
+ // If a request is by necessity slow (join&sync), and the
+ // client sends another request down the same conn, we can
+ // actually handle them out of order because group state is
+ // managed independently in its own loop. To ensure
+ // serialization, we capture out of order responses and only
+ // send them once the prior requests are replied to.
+ //
+ // (this is also why there is a seq in the clientReq)
+ oooresp = make(map[uint32]clientResp)
+ )
+ for {
+ resp, ok := oooresp[seq]
+ if !ok {
+ select {
+ case resp = <-cc.respCh:
+ if resp.seq != seq {
+ oooresp[resp.seq] = resp
+ continue
+ }
+ seq = resp.seq + 1
+ case <-cc.c.die:
+ return
+ }
+ } else {
+ delete(oooresp, seq)
+ seq++
+ }
+ if err := resp.err; err != nil {
+ cc.c.cfg.logger.Logf(LogLevelInfo, "client %s request unable to be handled: %v", who, err)
+ return
+ }
+
+ // Size, corr, and empty tag section if flexible: 9 bytes max.
+ buf = append(buf[:0], 0, 0, 0, 0, 0, 0, 0, 0, 0)
+ buf = resp.kresp.AppendTo(buf)
+
+ start := 0
+ l := len(buf) - 4
+ if !resp.kresp.IsFlexible() || resp.kresp.Key() == 18 {
+ l--
+ start++
+ }
+ binary.BigEndian.PutUint32(buf[start:], uint32(l))
+ binary.BigEndian.PutUint32(buf[start+4:], uint32(resp.corr))
+
+ go func() {
+ _, err := cc.conn.Write(buf[start:])
+ writeCh <- err
+ }()
+
+ var err error
+ select {
+ case <-cc.c.die:
+ return
+ case err = <-writeCh:
+ }
+ if err != nil {
+ cc.c.cfg.logger.Logf(LogLevelDebug, "client %s disconnected from write: %v", who, err)
+ return
+ }
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/cluster.go b/vendor/github.com/twmb/franz-go/pkg/kfake/cluster.go
new file mode 100644
index 0000000000000..275b14402addf
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/cluster.go
@@ -0,0 +1,1084 @@
+package kfake
+
+import (
+ "crypto/tls"
+ "errors"
+ "fmt"
+ "math/rand"
+ "net"
+ "strconv"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TODO
+//
+// * Add raft and make the brokers independent
+//
+// * Support multiple replicas -- we just pass this through
+
+type (
+
+ // Cluster is a mock Kafka broker cluster.
+ Cluster struct {
+ cfg cfg
+
+ controller *broker
+ bs []*broker
+
+ coordinatorGen atomic.Uint64
+
+ adminCh chan func()
+ reqCh chan *clientReq
+ wakeCh chan *slept
+ watchFetchCh chan *watchFetch
+
+ controlMu sync.Mutex
+ control map[int16]map[*controlCtx]struct{}
+ currentBroker *broker
+ currentControl *controlCtx
+ sleeping map[*clientConn]*bsleep
+ controlSleep chan sleepChs
+
+ data data
+ pids pids
+ groups groups
+ sasls sasls
+ bcfgs map[string]*string
+
+ die chan struct{}
+ dead atomic.Bool
+ }
+
+ broker struct {
+ c *Cluster
+ ln net.Listener
+ node int32
+ bsIdx int
+ }
+
+ controlFn func(kmsg.Request) (kmsg.Response, error, bool)
+
+ controlCtx struct {
+ key int16
+ fn controlFn
+ keep bool
+ drop bool
+ lastReq map[*clientConn]*clientReq // used to not re-run requests that slept, see doc comments below
+ }
+
+ controlResp struct {
+ kresp kmsg.Response
+ err error
+ handled bool
+ }
+)
+
+// MustCluster is like NewCluster, but panics on error.
+func MustCluster(opts ...Opt) *Cluster {
+ c, err := NewCluster(opts...)
+ if err != nil {
+ panic(err)
+ }
+ return c
+}
+
+// NewCluster returns a new mocked Kafka cluster.
+func NewCluster(opts ...Opt) (*Cluster, error) {
+ cfg := cfg{
+ nbrokers: 3,
+ logger: new(nopLogger),
+ clusterID: "kfake",
+ defaultNumParts: 10,
+
+ minSessionTimeout: 6 * time.Second,
+ maxSessionTimeout: 5 * time.Minute,
+
+ sasls: make(map[struct{ m, u string }]string),
+ }
+ for _, opt := range opts {
+ opt.apply(&cfg)
+ }
+ if len(cfg.ports) > 0 {
+ cfg.nbrokers = len(cfg.ports)
+ }
+
+ c := &Cluster{
+ cfg: cfg,
+
+ adminCh: make(chan func()),
+ reqCh: make(chan *clientReq, 20),
+ wakeCh: make(chan *slept, 10),
+ watchFetchCh: make(chan *watchFetch, 20),
+ control: make(map[int16]map[*controlCtx]struct{}),
+ controlSleep: make(chan sleepChs, 1),
+
+ sleeping: make(map[*clientConn]*bsleep),
+
+ data: data{
+ id2t: make(map[uuid]string),
+ t2id: make(map[string]uuid),
+ treplicas: make(map[string]int),
+ tcfgs: make(map[string]map[string]*string),
+ },
+ bcfgs: make(map[string]*string),
+
+ die: make(chan struct{}),
+ }
+ c.data.c = c
+ c.groups.c = c
+ var err error
+ defer func() {
+ if err != nil {
+ c.Close()
+ }
+ }()
+
+ for mu, p := range cfg.sasls {
+ switch mu.m {
+ case saslPlain:
+ if c.sasls.plain == nil {
+ c.sasls.plain = make(map[string]string)
+ }
+ c.sasls.plain[mu.u] = p
+ case saslScram256:
+ if c.sasls.scram256 == nil {
+ c.sasls.scram256 = make(map[string]scramAuth)
+ }
+ c.sasls.scram256[mu.u] = newScramAuth(saslScram256, p)
+ case saslScram512:
+ if c.sasls.scram512 == nil {
+ c.sasls.scram512 = make(map[string]scramAuth)
+ }
+ c.sasls.scram512[mu.u] = newScramAuth(saslScram512, p)
+ default:
+ return nil, fmt.Errorf("unknown SASL mechanism %v", mu.m)
+ }
+ }
+ cfg.sasls = nil
+
+ if cfg.enableSASL && c.sasls.empty() {
+ c.sasls.scram256 = map[string]scramAuth{
+ "admin": newScramAuth(saslScram256, "admin"),
+ }
+ }
+
+ for i := 0; i < cfg.nbrokers; i++ {
+ var port int
+ if len(cfg.ports) > 0 {
+ port = cfg.ports[i]
+ }
+ var ln net.Listener
+ ln, err = newListener(port, c.cfg.tls)
+ if err != nil {
+ return nil, err
+ }
+ b := &broker{
+ c: c,
+ ln: ln,
+ node: int32(i),
+ bsIdx: len(c.bs),
+ }
+ c.bs = append(c.bs, b)
+ go b.listen()
+ }
+ c.controller = c.bs[len(c.bs)-1]
+ go c.run()
+
+ seedTopics := make(map[string]int32)
+ for _, sts := range cfg.seedTopics {
+ p := sts.p
+ if p < 1 {
+ p = int32(cfg.defaultNumParts)
+ }
+ for _, t := range sts.ts {
+ seedTopics[t] = p
+ }
+ }
+ for t, p := range seedTopics {
+ c.data.mkt(t, int(p), -1, nil)
+ }
+ return c, nil
+}
+
+// ListenAddrs returns the hostports that the cluster is listening on.
+func (c *Cluster) ListenAddrs() []string {
+ var addrs []string
+ c.admin(func() {
+ for _, b := range c.bs {
+ addrs = append(addrs, b.ln.Addr().String())
+ }
+ })
+ return addrs
+}
+
+// Close shuts down the cluster.
+func (c *Cluster) Close() {
+ if c.dead.Swap(true) {
+ return
+ }
+ close(c.die)
+ for _, b := range c.bs {
+ b.ln.Close()
+ }
+}
+
+func newListener(port int, tc *tls.Config) (net.Listener, error) {
+ l, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
+ if err != nil {
+ return nil, err
+ }
+ if tc != nil {
+ l = tls.NewListener(l, tc)
+ }
+ return l, nil
+}
+
+func (b *broker) listen() {
+ defer b.ln.Close()
+ for {
+ conn, err := b.ln.Accept()
+ if err != nil {
+ return
+ }
+
+ cc := &clientConn{
+ c: b.c,
+ b: b,
+ conn: conn,
+ respCh: make(chan clientResp, 2),
+ }
+ go cc.read()
+ go cc.write()
+ }
+}
+
+func (c *Cluster) run() {
+outer:
+ for {
+ var (
+ creq *clientReq
+ w *watchFetch
+ s *slept
+ kreq kmsg.Request
+ kresp kmsg.Response
+ err error
+ handled bool
+ )
+
+ select {
+ case <-c.die:
+ return
+
+ case admin := <-c.adminCh:
+ admin()
+ continue
+
+ case creq = <-c.reqCh:
+ if c.cfg.sleepOutOfOrder {
+ break
+ }
+ // If we have any sleeping request on this node,
+ // we enqueue the new live request to the end and
+ // wait for the sleeping request to finish.
+ bs := c.sleeping[creq.cc]
+ if bs.enqueue(&slept{
+ creq: creq,
+ waiting: true,
+ }) {
+ continue
+ }
+
+ case s = <-c.wakeCh:
+ // On wakeup, we know we are handling a control
+ // function that was slept, or a request that was
+ // waiting for a control function to finish sleeping.
+ creq = s.creq
+ if s.waiting {
+ break
+ }
+
+ // We continue a previously sleeping request, and
+ // handle results similar to tryControl.
+ //
+ // Control flow is weird here, but is described more
+ // fully in the finish/resleep/etc methods.
+ c.continueSleptControl(s)
+ inner:
+ for {
+ select {
+ case admin := <-c.adminCh:
+ admin()
+ continue inner
+ case res := <-s.res:
+ c.finishSleptControl(s)
+ cctx := s.cctx
+ s = nil
+ kresp, err, handled = res.kresp, res.err, res.handled
+ c.maybePopControl(handled, cctx)
+ if handled {
+ goto afterControl
+ }
+ break inner
+ case sleepChs := <-c.controlSleep:
+ c.resleepSleptControl(s, sleepChs)
+ continue outer
+ }
+ }
+
+ case w = <-c.watchFetchCh:
+ if w.cleaned {
+ continue // already cleaned up, this is an extraneous timer fire
+ }
+ w.cleanup(c)
+ creq = w.creq
+ }
+
+ kresp, err, handled = c.tryControl(creq)
+ if handled {
+ goto afterControl
+ }
+
+ if c.cfg.enableSASL {
+ if allow := c.handleSASL(creq); !allow {
+ err = errors.New("not allowed given SASL state")
+ goto afterControl
+ }
+ }
+
+ kreq = creq.kreq
+ switch k := kmsg.Key(kreq.Key()); k {
+ case kmsg.Produce:
+ kresp, err = c.handleProduce(creq.cc.b, kreq)
+ case kmsg.Fetch:
+ kresp, err = c.handleFetch(creq, w)
+ case kmsg.ListOffsets:
+ kresp, err = c.handleListOffsets(creq.cc.b, kreq)
+ case kmsg.Metadata:
+ kresp, err = c.handleMetadata(kreq)
+ case kmsg.OffsetCommit:
+ kresp, err = c.handleOffsetCommit(creq)
+ case kmsg.OffsetFetch:
+ kresp, err = c.handleOffsetFetch(creq)
+ case kmsg.FindCoordinator:
+ kresp, err = c.handleFindCoordinator(kreq)
+ case kmsg.JoinGroup:
+ kresp, err = c.handleJoinGroup(creq)
+ case kmsg.Heartbeat:
+ kresp, err = c.handleHeartbeat(creq)
+ case kmsg.LeaveGroup:
+ kresp, err = c.handleLeaveGroup(creq)
+ case kmsg.SyncGroup:
+ kresp, err = c.handleSyncGroup(creq)
+ case kmsg.DescribeGroups:
+ kresp, err = c.handleDescribeGroups(creq)
+ case kmsg.ListGroups:
+ kresp, err = c.handleListGroups(creq)
+ case kmsg.SASLHandshake:
+ kresp, err = c.handleSASLHandshake(creq)
+ case kmsg.ApiVersions:
+ kresp, err = c.handleApiVersions(kreq)
+ case kmsg.CreateTopics:
+ kresp, err = c.handleCreateTopics(creq.cc.b, kreq)
+ case kmsg.DeleteTopics:
+ kresp, err = c.handleDeleteTopics(creq.cc.b, kreq)
+ case kmsg.DeleteRecords:
+ kresp, err = c.handleDeleteRecords(creq.cc.b, kreq)
+ case kmsg.InitProducerID:
+ kresp, err = c.handleInitProducerID(kreq)
+ case kmsg.OffsetForLeaderEpoch:
+ kresp, err = c.handleOffsetForLeaderEpoch(creq.cc.b, kreq)
+ case kmsg.DescribeConfigs:
+ kresp, err = c.handleDescribeConfigs(creq.cc.b, kreq)
+ case kmsg.AlterConfigs:
+ kresp, err = c.handleAlterConfigs(creq.cc.b, kreq)
+ case kmsg.AlterReplicaLogDirs:
+ kresp, err = c.handleAlterReplicaLogDirs(creq.cc.b, kreq)
+ case kmsg.DescribeLogDirs:
+ kresp, err = c.handleDescribeLogDirs(creq.cc.b, kreq)
+ case kmsg.SASLAuthenticate:
+ kresp, err = c.handleSASLAuthenticate(creq)
+ case kmsg.CreatePartitions:
+ kresp, err = c.handleCreatePartitions(creq.cc.b, kreq)
+ case kmsg.DeleteGroups:
+ kresp, err = c.handleDeleteGroups(creq)
+ case kmsg.IncrementalAlterConfigs:
+ kresp, err = c.handleIncrementalAlterConfigs(creq.cc.b, kreq)
+ case kmsg.OffsetDelete:
+ kresp, err = c.handleOffsetDelete(creq)
+ case kmsg.DescribeUserSCRAMCredentials:
+ kresp, err = c.handleDescribeUserSCRAMCredentials(kreq)
+ case kmsg.AlterUserSCRAMCredentials:
+ kresp, err = c.handleAlterUserSCRAMCredentials(creq.cc.b, kreq)
+ default:
+ err = fmt.Errorf("unhandled key %v", k)
+ }
+
+ afterControl:
+ // If s is non-nil, this is either a previously slept control
+ // that finished but was not handled, or a previously slept
+ // waiting request. In either case, we need to signal to the
+ // sleep dequeue loop to continue.
+ if s != nil {
+ s.continueDequeue <- struct{}{}
+ }
+ if kresp == nil && err == nil { // produce request with no acks, or otherwise hijacked request (group, sleep)
+ continue
+ }
+
+ select {
+ case creq.cc.respCh <- clientResp{kresp: kresp, corr: creq.corr, err: err, seq: creq.seq}:
+ case <-c.die:
+ return
+ }
+ }
+}
+
+// Control is a function to call on any client request the cluster handles.
+//
+// If the control function returns true, then either the response is written
+// back to the client or, if there the control function returns an error, the
+// client connection is closed. If both returns are nil, then the cluster will
+// loop continuing to read from the client and the client will likely have a
+// read timeout at some point.
+//
+// Controlling a request drops the control function from the cluster, meaning
+// that a control function can only control *one* request. To keep the control
+// function handling more requests, you can call KeepControl within your
+// control function. Alternatively, if you want to just run some logic in your
+// control function but then have the cluster handle the request as normal,
+// you can call DropControl to drop a control function that was not handled.
+//
+// It is safe to add new control functions within a control function.
+//
+// Control functions are run serially unless you use SleepControl, multiple
+// control functions are "in progress", and you run Cluster.Close. Closing a
+// Cluster awakens all sleeping control functions.
+func (c *Cluster) Control(fn func(kmsg.Request) (kmsg.Response, error, bool)) {
+ c.ControlKey(-1, fn)
+}
+
+// Control is a function to call on a specific request key that the cluster
+// handles.
+//
+// If the control function returns true, then either the response is written
+// back to the client or, if there the control function returns an error, the
+// client connection is closed. If both returns are nil, then the cluster will
+// loop continuing to read from the client and the client will likely have a
+// read timeout at some point.
+//
+// Controlling a request drops the control function from the cluster, meaning
+// that a control function can only control *one* request. To keep the control
+// function handling more requests, you can call KeepControl within your
+// control function. Alternatively, if you want to just run some logic in your
+// control function but then have the cluster handle the request as normal,
+// you can call DropControl to drop a control function that was not handled.
+//
+// It is safe to add new control functions within a control function.
+//
+// Control functions are run serially unless you use SleepControl, multiple
+// control functions are "in progress", and you run Cluster.Close. Closing a
+// Cluster awakens all sleeping control functions.
+func (c *Cluster) ControlKey(key int16, fn func(kmsg.Request) (kmsg.Response, error, bool)) {
+ c.controlMu.Lock()
+ defer c.controlMu.Unlock()
+ m := c.control[key]
+ if m == nil {
+ m = make(map[*controlCtx]struct{})
+ c.control[key] = m
+ }
+ m[&controlCtx{
+ key: key,
+ fn: fn,
+ lastReq: make(map[*clientConn]*clientReq),
+ }] = struct{}{}
+}
+
+// KeepControl marks the currently running control function to be kept even if
+// you handle the request and return true. This can be used to continuously
+// control requests without needing to re-add control functions manually.
+func (c *Cluster) KeepControl() {
+ c.controlMu.Lock()
+ defer c.controlMu.Unlock()
+ if c.currentControl != nil {
+ c.currentControl.keep = true
+ }
+}
+
+// DropControl allows you to drop the current control function. This takes
+// precedence over KeepControl. The use of this function is you can run custom
+// control logic *once*, drop the control function, and return that the
+// function was not handled -- thus allowing other control functions to run, or
+// allowing the kfake cluster to process the request as normal.
+func (c *Cluster) DropControl() {
+ c.controlMu.Lock()
+ defer c.controlMu.Unlock()
+ if c.currentControl != nil {
+ c.currentControl.drop = true
+ }
+}
+
+// SleepControl sleeps the current control function until wakeup returns. This
+// yields to run any other connection.
+//
+// Note that per protocol, requests on the same connection must be replied to
+// in order. Many clients write multiple requests to the same connection, so
+// if you sleep until a different request runs, you may sleep forever -- you
+// must know the semantics of your client to know whether requests run on
+// different connections (or, ensure you are writing to different brokers).
+//
+// For example, franz-go uses a dedicated connection for:
+// - produce requests
+// - fetch requests
+// - join&sync requests
+// - requests with a Timeout field
+// - all other request
+//
+// So, for franz-go, there are up to five separate connections depending
+// on what you are doing.
+//
+// You can run SleepControl multiple times in the same control function. If you
+// sleep a request you are controlling, and another request of the same key
+// comes in, it will run the same control function and may also sleep (i.e.,
+// you must have logic if you want to avoid sleeping on the same request).
+func (c *Cluster) SleepControl(wakeup func()) {
+ c.controlMu.Lock()
+ if c.currentControl == nil {
+ c.controlMu.Unlock()
+ return
+ }
+ c.controlMu.Unlock()
+
+ sleepChs := sleepChs{
+ clientWait: make(chan struct{}, 1),
+ clientCont: make(chan struct{}, 1),
+ }
+ go func() {
+ wakeup()
+ sleepChs.clientWait <- struct{}{}
+ }()
+
+ c.controlSleep <- sleepChs
+ select {
+ case <-sleepChs.clientCont:
+ case <-c.die:
+ }
+}
+
+// CurrentNode is solely valid from within a control function; it returns
+// the broker id that the request was received by.
+// If there's no request currently inflight, this returns -1.
+func (c *Cluster) CurrentNode() int32 {
+ c.controlMu.Lock()
+ defer c.controlMu.Unlock()
+ if b := c.currentBroker; b != nil {
+ return b.node
+ }
+ return -1
+}
+
+func (c *Cluster) tryControl(creq *clientReq) (kresp kmsg.Response, err error, handled bool) {
+ c.controlMu.Lock()
+ defer c.controlMu.Unlock()
+ if len(c.control) == 0 {
+ return nil, nil, false
+ }
+ kresp, err, handled = c.tryControlKey(creq.kreq.Key(), creq)
+ if !handled {
+ kresp, err, handled = c.tryControlKey(-1, creq)
+ }
+ return kresp, err, handled
+}
+
+func (c *Cluster) tryControlKey(key int16, creq *clientReq) (kmsg.Response, error, bool) {
+ for cctx := range c.control[key] {
+ if cctx.lastReq[creq.cc] == creq {
+ continue
+ }
+ cctx.lastReq[creq.cc] = creq
+ res := c.runControl(cctx, creq)
+ for {
+ select {
+ case admin := <-c.adminCh:
+ admin()
+ continue
+ case res := <-res:
+ c.maybePopControl(res.handled, cctx)
+ return res.kresp, res.err, res.handled
+ case sleepChs := <-c.controlSleep:
+ c.beginSleptControl(&slept{
+ cctx: cctx,
+ sleepChs: sleepChs,
+ res: res,
+ creq: creq,
+ })
+ return nil, nil, true
+ }
+ }
+ }
+ return nil, nil, false
+}
+
+func (c *Cluster) runControl(cctx *controlCtx, creq *clientReq) chan controlResp {
+ res := make(chan controlResp, 1)
+ c.currentBroker = creq.cc.b
+ c.currentControl = cctx
+ // We unlock before entering a control function so that the control
+ // function can modify / add more control. We re-lock when exiting the
+ // control function. This does pose some weird control flow issues
+ // w.r.t. sleeping requests. Here, we have to re-lock before sending
+ // down res, otherwise we risk unlocking an unlocked mu in
+ // finishSleepControl.
+ c.controlMu.Unlock()
+ go func() {
+ kresp, err, handled := cctx.fn(creq.kreq)
+ c.controlMu.Lock()
+ c.currentControl = nil
+ c.currentBroker = nil
+ res <- controlResp{kresp, err, handled}
+ }()
+ return res
+}
+
+func (c *Cluster) beginSleptControl(s *slept) {
+ // Control flow gets really weird here. We unlocked when entering the
+ // control function, so we have to re-lock now so that tryControl can
+ // unlock us safely.
+ bs := c.sleeping[s.creq.cc]
+ if bs == nil {
+ bs = &bsleep{
+ c: c,
+ set: make(map[*slept]struct{}),
+ setWake: make(chan *slept, 1),
+ }
+ c.sleeping[s.creq.cc] = bs
+ }
+ bs.enqueue(s)
+ c.controlMu.Lock()
+ c.currentControl = nil
+ c.currentBroker = nil
+}
+
+func (c *Cluster) continueSleptControl(s *slept) {
+ // When continuing a slept control, we are in the main run loop and are
+ // not currently under the control mu. We need to re-set the current
+ // broker and current control before resuming.
+ c.controlMu.Lock()
+ c.currentBroker = s.creq.cc.b
+ c.currentControl = s.cctx
+ c.controlMu.Unlock()
+ s.sleepChs.clientCont <- struct{}{}
+}
+
+func (c *Cluster) finishSleptControl(s *slept) {
+ // When finishing a slept control, the control function exited and
+ // grabbed the control mu. We clear the control, unlock, and allow the
+ // slept control to be dequeued.
+ c.currentControl = nil
+ c.currentBroker = nil
+ c.controlMu.Unlock()
+ s.continueDequeue <- struct{}{}
+}
+
+func (c *Cluster) resleepSleptControl(s *slept, sleepChs sleepChs) {
+ // A control function previously slept and is now again sleeping. We
+ // need to clear the control broker / etc, update the sleep channels,
+ // and allow the sleep dequeueing to continue. The control function
+ // will not be deqeueued in the loop because we updated sleepChs with
+ // a non-nil clientWait.
+ c.controlMu.Lock()
+ c.currentBroker = nil
+ c.currentControl = nil
+ c.controlMu.Unlock()
+ s.sleepChs = sleepChs
+ s.continueDequeue <- struct{}{}
+ // For OOO requests, we need to manually trigger a goroutine to
+ // watch for the sleep to end.
+ s.bs.maybeWaitOOOWake(s)
+}
+
+func (c *Cluster) maybePopControl(handled bool, cctx *controlCtx) {
+ if handled && !cctx.keep || cctx.drop {
+ delete(c.control[cctx.key], cctx)
+ }
+}
+
+// bsleep manages sleeping requests on a connection to a broker, or
+// non-sleeping requests that are waiting for sleeping requests to finish.
+type bsleep struct {
+ c *Cluster
+ mu sync.Mutex
+ queue []*slept
+ set map[*slept]struct{}
+ setWake chan *slept
+}
+
+type slept struct {
+ bs *bsleep
+ cctx *controlCtx
+ sleepChs sleepChs
+ res <-chan controlResp
+ creq *clientReq
+ waiting bool
+
+ continueDequeue chan struct{}
+}
+
+type sleepChs struct {
+ clientWait chan struct{}
+ clientCont chan struct{}
+}
+
+// enqueue has a few potential behaviors.
+//
+// (1) If s is waiting, this is a new request enqueueing to the back of an
+// existing queue, where we are waiting for the head request to finish
+// sleeping. Easy case.
+//
+// (2) If s is not waiting, this is a sleeping request. If the queue is empty,
+// this is the first sleeping request on a node. We enqueue and start our wait
+// goroutine. Easy.
+//
+// (3) If s is not waiting, but our queue is non-empty, this must be from a
+// convoluted scenario:
+//
+// (a) the user has SleepOutOfOrder configured,
+// (b) or, there was a request in front of us that slept, we were waiting,
+// and now we ourselves are sleeping
+// (c) or, we are sleeping for the second time in a single control
+func (bs *bsleep) enqueue(s *slept) bool {
+ if bs == nil {
+ return false // Do not enqueue, nothing sleeping
+ }
+ s.continueDequeue = make(chan struct{}, 1)
+ s.bs = bs
+ bs.mu.Lock()
+ defer bs.mu.Unlock()
+ if s.waiting {
+ if bs.c.cfg.sleepOutOfOrder {
+ panic("enqueueing a waiting request even though we are sleeping out of order")
+ }
+ if !bs.empty() {
+ bs.keep(s) // Case (1)
+ return true
+ }
+ return false // We do not enqueue, do not wait: nothing sleeping ahead of us
+ }
+ if bs.empty() {
+ bs.keep(s)
+ go bs.wait() // Case (2)
+ return true
+ }
+ var q0 *slept
+ if !bs.c.cfg.sleepOutOfOrder {
+ q0 = bs.queue[0] // Case (3b) or (3c) -- just update values below
+ } else {
+ // Case (3a), out of order sleep: we need to check the entire
+ // queue to see if this request was already sleeping and, if
+ // so, update the values. If it was not already sleeping, we
+ // "keep" the new sleeping item.
+ bs.keep(s)
+ return true
+ }
+ if q0.creq != s.creq {
+ panic("internal error: sleeping request not head request")
+ }
+ // We do not update continueDequeue because it is actively being read,
+ // we just reuse the old value.
+ q0.cctx = s.cctx
+ q0.sleepChs = s.sleepChs
+ q0.res = s.res
+ q0.waiting = s.waiting
+ return true
+}
+
+// keep stores a sleeping request to be managed. For out of order control, the
+// log is a bit more complicated and we need to watch for the control sleep
+// finishing here, and forward the "I'm done sleeping" notification to waitSet.
+func (bs *bsleep) keep(s *slept) {
+ if !bs.c.cfg.sleepOutOfOrder {
+ bs.queue = append(bs.queue, s)
+ return
+ }
+ bs.set[s] = struct{}{}
+ bs.maybeWaitOOOWake(s)
+}
+
+func (bs *bsleep) maybeWaitOOOWake(s *slept) {
+ if !bs.c.cfg.sleepOutOfOrder {
+ return
+ }
+ go func() {
+ select {
+ case <-bs.c.die:
+ case <-s.sleepChs.clientWait:
+ select {
+ case <-bs.c.die:
+ case bs.setWake <- s:
+ }
+ }
+ }()
+}
+
+func (bs *bsleep) empty() bool {
+ return len(bs.queue) == 0 && len(bs.set) == 0
+}
+
+func (bs *bsleep) wait() {
+ if bs.c.cfg.sleepOutOfOrder {
+ bs.waitSet()
+ } else {
+ bs.waitQueue()
+ }
+}
+
+// For out of order control, all control functions run concurrently, serially.
+// Whenever they wake up, they send themselves down setWake. waitSet manages
+// handling the wake up and interacting with the serial manage goroutine to
+// run everything properly.
+func (bs *bsleep) waitSet() {
+ for {
+ bs.mu.Lock()
+ if len(bs.set) == 0 {
+ bs.mu.Unlock()
+ return
+ }
+ bs.mu.Unlock()
+
+ // Wait for a control function to awaken.
+ var q *slept
+ select {
+ case <-bs.c.die:
+ return
+ case q = <-bs.setWake:
+ q.sleepChs.clientWait = nil
+ }
+
+ // Now, schedule ourselves with the run loop.
+ select {
+ case <-bs.c.die:
+ return
+ case bs.c.wakeCh <- q:
+ }
+
+ // Wait for this control function to finish its loop in the run
+ // function. Once it does, if clientWait is non-nil, the
+ // control function went back to sleep. If it is nil, the
+ // control function is done and we remove this from tracking.
+ select {
+ case <-bs.c.die:
+ return
+ case <-q.continueDequeue:
+ }
+ if q.sleepChs.clientWait == nil {
+ bs.mu.Lock()
+ delete(bs.set, q)
+ bs.mu.Unlock()
+ }
+ }
+}
+
+// For in-order control functions, the concept is slightly simpler but the
+// logic flow is the same. We wait for the head control function to wake up,
+// try to run it, and then wait for it to finish. The logic of this function is
+// the same as waitSet, minus the middle part where we wait for something to
+// wake up.
+func (bs *bsleep) waitQueue() {
+ for {
+ bs.mu.Lock()
+ if len(bs.queue) == 0 {
+ bs.mu.Unlock()
+ return
+ }
+ q0 := bs.queue[0]
+ bs.mu.Unlock()
+
+ if q0.sleepChs.clientWait != nil {
+ select {
+ case <-bs.c.die:
+ return
+ case <-q0.sleepChs.clientWait:
+ q0.sleepChs.clientWait = nil
+ }
+ }
+
+ select {
+ case <-bs.c.die:
+ return
+ case bs.c.wakeCh <- q0:
+ }
+
+ select {
+ case <-bs.c.die:
+ return
+ case <-q0.continueDequeue:
+ }
+ if q0.sleepChs.clientWait == nil {
+ bs.mu.Lock()
+ bs.queue = bs.queue[1:]
+ bs.mu.Unlock()
+ }
+ }
+}
+
+// Various administrative requests can be passed into the cluster to simulate
+// real-world operations. These are performed synchronously in the goroutine
+// that handles client requests.
+
+func (c *Cluster) admin(fn func()) {
+ ofn := fn
+ wait := make(chan struct{})
+ fn = func() { ofn(); close(wait) }
+ c.adminCh <- fn
+ <-wait
+}
+
+// MoveTopicPartition simulates the rebalancing of a partition to an alternative
+// broker. This returns an error if the topic, partition, or node does not exit.
+func (c *Cluster) MoveTopicPartition(topic string, partition int32, nodeID int32) error {
+ var err error
+ c.admin(func() {
+ var br *broker
+ for _, b := range c.bs {
+ if b.node == nodeID {
+ br = b
+ break
+ }
+ }
+ if br == nil {
+ err = fmt.Errorf("node %d not found", nodeID)
+ return
+ }
+ pd, ok := c.data.tps.getp(topic, partition)
+ if !ok {
+ err = errors.New("topic/partition not found")
+ return
+ }
+ pd.leader = br
+ pd.epoch++
+ })
+ return err
+}
+
+// CoordinatorFor returns the node ID of the group or transaction coordinator
+// for the given key.
+func (c *Cluster) CoordinatorFor(key string) int32 {
+ var n int32
+ c.admin(func() {
+ l := len(c.bs)
+ if l == 0 {
+ n = -1
+ return
+ }
+ n = c.coordinator(key).node
+ })
+ return n
+}
+
+// RehashCoordinators simulates group and transacational ID coordinators moving
+// around. All group and transactional IDs are rekeyed. This forces clients to
+// reload coordinators.
+func (c *Cluster) RehashCoordinators() {
+ c.coordinatorGen.Add(1)
+}
+
+// AddNode adds a node to the cluster. If nodeID is -1, the next node ID is
+// used. If port is 0 or negative, a random port is chosen. This returns the
+// added node ID and the port used, or an error if the node already exists or
+// the port cannot be listened to.
+func (c *Cluster) AddNode(nodeID int32, port int) (int32, int, error) {
+ var err error
+ c.admin(func() {
+ if nodeID >= 0 {
+ for _, b := range c.bs {
+ if b.node == nodeID {
+ err = fmt.Errorf("node %d already exists", nodeID)
+ return
+ }
+ }
+ } else if len(c.bs) > 0 {
+ // We go one higher than the max current node ID. We
+ // need to search all nodes because a person may have
+ // added and removed a bunch, with manual ID overrides.
+ nodeID = c.bs[0].node
+ for _, b := range c.bs[1:] {
+ if b.node > nodeID {
+ nodeID = b.node
+ }
+ }
+ nodeID++
+ } else {
+ nodeID = 0
+ }
+ if port < 0 {
+ port = 0
+ }
+ var ln net.Listener
+ if ln, err = newListener(port, c.cfg.tls); err != nil {
+ return
+ }
+ _, strPort, _ := net.SplitHostPort(ln.Addr().String())
+ port, _ = strconv.Atoi(strPort)
+ b := &broker{
+ c: c,
+ ln: ln,
+ node: nodeID,
+ bsIdx: len(c.bs),
+ }
+ c.bs = append(c.bs, b)
+ c.cfg.nbrokers++
+ c.shufflePartitionsLocked()
+ go b.listen()
+ })
+ return nodeID, port, err
+}
+
+// RemoveNode removes a ndoe from the cluster. This returns an error if the
+// node does not exist.
+func (c *Cluster) RemoveNode(nodeID int32) error {
+ var err error
+ c.admin(func() {
+ for i, b := range c.bs {
+ if b.node == nodeID {
+ if len(c.bs) == 1 {
+ err = errors.New("cannot remove all brokers")
+ return
+ }
+ b.ln.Close()
+ c.cfg.nbrokers--
+ c.bs[i] = c.bs[len(c.bs)-1]
+ c.bs[i].bsIdx = i
+ c.bs = c.bs[:len(c.bs)-1]
+ c.shufflePartitionsLocked()
+ return
+ }
+ }
+ err = fmt.Errorf("node %d not found", nodeID)
+ })
+ return err
+}
+
+// ShufflePartitionLeaders simulates a leader election for all partitions: all
+// partitions have a randomly selected new leader and their internal epochs are
+// bumped.
+func (c *Cluster) ShufflePartitionLeaders() {
+ c.admin(func() {
+ c.shufflePartitionsLocked()
+ })
+}
+
+func (c *Cluster) shufflePartitionsLocked() {
+ c.data.tps.each(func(_ string, _ int32, p *partData) {
+ var leader *broker
+ if len(c.bs) == 0 {
+ leader = c.noLeader()
+ } else {
+ leader = c.bs[rand.Intn(len(c.bs))]
+ }
+ p.leader = leader
+ p.epoch++
+ })
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/config.go b/vendor/github.com/twmb/franz-go/pkg/kfake/config.go
new file mode 100644
index 0000000000000..75b34fb21f660
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/config.go
@@ -0,0 +1,126 @@
+package kfake
+
+import (
+ "crypto/tls"
+ "time"
+)
+
+// Opt is an option to configure a client.
+type Opt interface {
+ apply(*cfg)
+}
+
+type opt struct{ fn func(*cfg) }
+
+func (opt opt) apply(cfg *cfg) { opt.fn(cfg) }
+
+type seedTopics struct {
+ p int32
+ ts []string
+}
+
+type cfg struct {
+ nbrokers int
+ ports []int
+ logger Logger
+ clusterID string
+ allowAutoTopic bool
+ defaultNumParts int
+ seedTopics []seedTopics
+
+ minSessionTimeout time.Duration
+ maxSessionTimeout time.Duration
+
+ enableSASL bool
+ sasls map[struct{ m, u string }]string // cleared after client initialization
+ tls *tls.Config
+
+ sleepOutOfOrder bool
+}
+
+// NumBrokers sets the number of brokers to start in the fake cluster.
+func NumBrokers(n int) Opt {
+ return opt{func(cfg *cfg) { cfg.nbrokers = n }}
+}
+
+// Ports sets the ports to listen on, overriding randomly choosing NumBrokers
+// amount of ports.
+func Ports(ports ...int) Opt {
+ return opt{func(cfg *cfg) { cfg.ports = ports }}
+}
+
+// WithLogger sets the logger to use.
+func WithLogger(logger Logger) Opt {
+ return opt{func(cfg *cfg) { cfg.logger = logger }}
+}
+
+// ClusterID sets the cluster ID to return in metadata responses.
+func ClusterID(clusterID string) Opt {
+ return opt{func(cfg *cfg) { cfg.clusterID = clusterID }}
+}
+
+// AllowAutoTopicCreation allows metadata requests to create topics if the
+// metadata request has its AllowAutoTopicCreation field set to true.
+func AllowAutoTopicCreation() Opt {
+ return opt{func(cfg *cfg) { cfg.allowAutoTopic = true }}
+}
+
+// DefaultNumPartitions sets the number of partitions to create by default for
+// auto created topics / CreateTopics with -1 partitions, overriding the
+// default of 10.
+func DefaultNumPartitions(n int) Opt {
+ return opt{func(cfg *cfg) { cfg.defaultNumParts = n }}
+}
+
+// GroupMinSessionTimeout sets the cluster's minimum session timeout allowed
+// for groups, overriding the default 6 seconds.
+func GroupMinSessionTimeout(d time.Duration) Opt {
+ return opt{func(cfg *cfg) { cfg.minSessionTimeout = d }}
+}
+
+// GroupMaxSessionTimeout sets the cluster's maximum session timeout allowed
+// for groups, overriding the default 5 minutes.
+func GroupMaxSessionTimeout(d time.Duration) Opt {
+ return opt{func(cfg *cfg) { cfg.maxSessionTimeout = d }}
+}
+
+// EnableSASL enables SASL authentication for the cluster. If you do not
+// configure a bootstrap user / pass, the default superuser is "admin" /
+// "admin" with the SCRAM-SHA-256 SASL mechanisms.
+func EnableSASL() Opt {
+ return opt{func(cfg *cfg) { cfg.enableSASL = true }}
+}
+
+// Superuser seeds the cluster with a superuser. The method must be either
+// PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512.
+// Note that PLAIN superusers cannot be deleted.
+// SCRAM superusers can be modified with AlterUserScramCredentials.
+// If you delete all SASL users, the kfake cluster will be unusable.
+func Superuser(method, user, pass string) Opt {
+ return opt{func(cfg *cfg) { cfg.sasls[struct{ m, u string }{method, user}] = pass }}
+}
+
+// TLS enables TLS for the cluster, using the provided TLS config for
+// listening.
+func TLS(c *tls.Config) Opt {
+ return opt{func(cfg *cfg) { cfg.tls = c }}
+}
+
+// SeedTopics provides topics to create by default in the cluster. Each topic
+// will use the given partitions and use the default internal replication
+// factor. If you use a non-positive number for partitions, [DefaultNumPartitions]
+// is used. This option can be provided multiple times if you want to seed
+// topics with different partition counts. If a topic is provided in multiple
+// options, the last specification wins.
+func SeedTopics(partitions int32, ts ...string) Opt {
+ return opt{func(cfg *cfg) { cfg.seedTopics = append(cfg.seedTopics, seedTopics{partitions, ts}) }}
+}
+
+// SleepOutOfOrder allows functions to be handled out of order when control
+// functions are sleeping. The functions are be handled internally out of
+// order, but responses still wait for the sleeping requests to finish. This
+// can be used to set up complicated chains of control where functions only
+// advance when you know another request is actively being handled.
+func SleepOutOfOrder() Opt {
+ return opt{func(cfg *cfg) { cfg.sleepOutOfOrder = true }}
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/data.go b/vendor/github.com/twmb/franz-go/pkg/kfake/data.go
new file mode 100644
index 0000000000000..9f5d46c6b8687
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/data.go
@@ -0,0 +1,343 @@
+package kfake
+
+import (
+ "crypto/sha256"
+ "math/rand"
+ "sort"
+ "strconv"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TODO
+//
+// * Write to disk, if configured.
+// * When transactional, wait to send out data until txn committed or aborted.
+
+var noID uuid
+
+type (
+ uuid [16]byte
+
+ data struct {
+ c *Cluster
+ tps tps[partData]
+
+ id2t map[uuid]string // topic IDs => topic name
+ t2id map[string]uuid // topic name => topic IDs
+ treplicas map[string]int // topic name => # replicas
+ tcfgs map[string]map[string]*string // topic name => config name => config value
+ }
+
+ partData struct {
+ batches []partBatch
+ dir string
+
+ highWatermark int64
+ lastStableOffset int64
+ logStartOffset int64
+ epoch int32 // current epoch
+ maxTimestamp int64 // current max timestamp in all batches
+ nbytes int64
+
+ // abortedTxns
+ rf int8
+ leader *broker
+
+ watch map[*watchFetch]struct{}
+
+ createdAt time.Time
+ }
+
+ partBatch struct {
+ kmsg.RecordBatch
+ nbytes int
+ epoch int32 // epoch when appended
+
+ // For list offsets, we may need to return the first offset
+ // after a given requested timestamp. Client provided
+ // timestamps gan go forwards and backwards. We answer list
+ // offsets with a binary search: even if this batch has a small
+ // timestamp, this is produced _after_ a potentially higher
+ // timestamp, so it is after it in the list offset response.
+ //
+ // When we drop the earlier timestamp, we update all following
+ // firstMaxTimestamps that match the dropped timestamp.
+ maxEarlierTimestamp int64
+ }
+)
+
+func (d *data) mkt(t string, nparts int, nreplicas int, configs map[string]*string) {
+ if d.tps != nil {
+ if _, exists := d.tps[t]; exists {
+ panic("should have checked existence already")
+ }
+ }
+ var id uuid
+ for {
+ sha := sha256.Sum256([]byte(strconv.Itoa(int(time.Now().UnixNano()))))
+ copy(id[:], sha[:])
+ if _, exists := d.id2t[id]; !exists {
+ break
+ }
+ }
+
+ if nparts < 0 {
+ nparts = d.c.cfg.defaultNumParts
+ }
+ if nreplicas < 0 {
+ nreplicas = 3 // cluster default
+ }
+ d.id2t[id] = t
+ d.t2id[t] = id
+ d.treplicas[t] = nreplicas
+ d.tcfgs[t] = configs
+ for i := 0; i < nparts; i++ {
+ d.tps.mkp(t, int32(i), d.c.newPartData)
+ }
+}
+
+func (c *Cluster) noLeader() *broker {
+ return &broker{
+ c: c,
+ node: -1,
+ }
+}
+
+func (c *Cluster) newPartData() *partData {
+ return &partData{
+ dir: defLogDir,
+ leader: c.bs[rand.Intn(len(c.bs))],
+ watch: make(map[*watchFetch]struct{}),
+ createdAt: time.Now(),
+ }
+}
+
+func (pd *partData) pushBatch(nbytes int, b kmsg.RecordBatch) {
+ maxEarlierTimestamp := b.FirstTimestamp
+ if maxEarlierTimestamp < pd.maxTimestamp {
+ maxEarlierTimestamp = pd.maxTimestamp
+ } else {
+ pd.maxTimestamp = maxEarlierTimestamp
+ }
+ b.FirstOffset = pd.highWatermark
+ b.PartitionLeaderEpoch = pd.epoch
+ pd.batches = append(pd.batches, partBatch{b, nbytes, pd.epoch, maxEarlierTimestamp})
+ pd.highWatermark += int64(b.NumRecords)
+ pd.lastStableOffset += int64(b.NumRecords) // TODO
+ pd.nbytes += int64(nbytes)
+ for w := range pd.watch {
+ w.push(nbytes)
+ }
+}
+
+func (pd *partData) searchOffset(o int64) (index int, found bool, atEnd bool) {
+ if o < pd.logStartOffset || o > pd.highWatermark {
+ return 0, false, false
+ }
+ if len(pd.batches) == 0 {
+ if o == 0 {
+ return 0, false, true
+ }
+ } else {
+ lastBatch := pd.batches[len(pd.batches)-1]
+ if end := lastBatch.FirstOffset + int64(lastBatch.LastOffsetDelta) + 1; end == o {
+ return 0, false, true
+ }
+ }
+
+ index, found = sort.Find(len(pd.batches), func(idx int) int {
+ b := &pd.batches[idx]
+ if o < b.FirstOffset {
+ return -1
+ }
+ if o >= b.FirstOffset+int64(b.LastOffsetDelta)+1 {
+ return 1
+ }
+ return 0
+ })
+ return index, found, false
+}
+
+func (pd *partData) trimLeft() {
+ for len(pd.batches) > 0 {
+ b0 := pd.batches[0]
+ finRec := b0.FirstOffset + int64(b0.LastOffsetDelta)
+ if finRec >= pd.logStartOffset {
+ return
+ }
+ pd.batches = pd.batches[1:]
+ pd.nbytes -= int64(b0.nbytes)
+ }
+}
+
+/////////////
+// CONFIGS //
+/////////////
+
+// TODO support modifying config values changing cluster behavior
+
+// brokerConfigs calls fn for all:
+// - static broker configs (read only)
+// - default configs
+// - dynamic broker configs
+func (c *Cluster) brokerConfigs(node int32, fn func(k string, v *string, src kmsg.ConfigSource, sensitive bool)) {
+ if node >= 0 {
+ for _, b := range c.bs {
+ if b.node == node {
+ id := strconv.Itoa(int(node))
+ fn("broker.id", &id, kmsg.ConfigSourceStaticBrokerConfig, false)
+ break
+ }
+ }
+ }
+ for _, c := range []struct {
+ k string
+ v string
+ sens bool
+ }{
+ {k: "broker.rack", v: "krack"},
+ {k: "sasl.enabled.mechanisms", v: "PLAIN,SCRAM-SHA-256,SCRAM-SHA-512"},
+ {k: "super.users", sens: true},
+ } {
+ v := c.v
+ fn(c.k, &v, kmsg.ConfigSourceStaticBrokerConfig, c.sens)
+ }
+
+ for k, v := range configDefaults {
+ if _, ok := validBrokerConfigs[k]; ok {
+ v := v
+ fn(k, &v, kmsg.ConfigSourceDefaultConfig, false)
+ }
+ }
+
+ for k, v := range c.bcfgs {
+ fn(k, v, kmsg.ConfigSourceDynamicBrokerConfig, false)
+ }
+}
+
+// configs calls fn for all
+// - static broker configs (read only)
+// - default configs
+// - dynamic broker configs
+// - dynamic topic configs
+//
+// This differs from brokerConfigs by also including dynamic topic configs.
+func (d *data) configs(t string, fn func(k string, v *string, src kmsg.ConfigSource, sensitive bool)) {
+ for k, v := range configDefaults {
+ if _, ok := validTopicConfigs[k]; ok {
+ v := v
+ fn(k, &v, kmsg.ConfigSourceDefaultConfig, false)
+ }
+ }
+ for k, v := range d.c.bcfgs {
+ if topicEquiv, ok := validBrokerConfigs[k]; ok && topicEquiv != "" {
+ fn(k, v, kmsg.ConfigSourceDynamicBrokerConfig, false)
+ }
+ }
+ for k, v := range d.tcfgs[t] {
+ fn(k, v, kmsg.ConfigSourceDynamicTopicConfig, false)
+ }
+}
+
+// Unlike Kafka, we validate the value before allowing it to be set.
+func (c *Cluster) setBrokerConfig(k string, v *string, dry bool) bool {
+ if dry {
+ return true
+ }
+ c.bcfgs[k] = v
+ return true
+}
+
+func (d *data) setTopicConfig(t string, k string, v *string, dry bool) bool {
+ if dry {
+ return true
+ }
+ if _, ok := d.tcfgs[t]; !ok {
+ d.tcfgs[t] = make(map[string]*string)
+ }
+ d.tcfgs[t][k] = v
+ return true
+}
+
+// All valid topic configs we support, as well as the equivalent broker
+// config if there is one.
+var validTopicConfigs = map[string]string{
+ "cleanup.policy": "",
+ "compression.type": "compression.type",
+ "max.message.bytes": "log.message.max.bytes",
+ "message.timestamp.type": "log.message.timestamp.type",
+ "min.insync.replicas": "min.insync.replicas",
+ "retention.bytes": "log.retention.bytes",
+ "retention.ms": "log.retention.ms",
+}
+
+// All valid broker configs we support, as well as their equivalent
+// topic config if there is one.
+var validBrokerConfigs = map[string]string{
+ "broker.id": "",
+ "broker.rack": "",
+ "compression.type": "compression.type",
+ "default.replication.factor": "",
+ "fetch.max.bytes": "",
+ "log.dir": "",
+ "log.message.timestamp.type": "message.timestamp.type",
+ "log.retention.bytes": "retention.bytes",
+ "log.retention.ms": "retention.ms",
+ "message.max.bytes": "max.message.bytes",
+ "min.insync.replicas": "min.insync.replicas",
+ "sasl.enabled.mechanisms": "",
+ "super.users": "",
+}
+
+// Default topic and broker configs.
+var configDefaults = map[string]string{
+ "cleanup.policy": "delete",
+ "compression.type": "producer",
+ "max.message.bytes": "1048588",
+ "message.timestamp.type": "CreateTime",
+ "min.insync.replicas": "1",
+ "retention.bytes": "-1",
+ "retention.ms": "604800000",
+
+ "default.replication.factor": "3",
+ "fetch.max.bytes": "57671680",
+ "log.dir": defLogDir,
+ "log.message.timestamp.type": "CreateTime",
+ "log.retention.bytes": "-1",
+ "log.retention.ms": "604800000",
+ "message.max.bytes": "1048588",
+}
+
+const defLogDir = "/mem/kfake"
+
+func staticConfig(s ...string) func(*string) bool {
+ return func(v *string) bool {
+ if v == nil {
+ return false
+ }
+ for _, ok := range s {
+ if *v == ok {
+ return true
+ }
+ }
+ return false
+ }
+}
+
+func numberConfig(min int, hasMin bool, max int, hasMax bool) func(*string) bool {
+ return func(v *string) bool {
+ if v == nil {
+ return false
+ }
+ i, err := strconv.Atoi(*v)
+ if err != nil {
+ return false
+ }
+ if hasMin && i < min || hasMax && i > max {
+ return false
+ }
+ return true
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/groups.go b/vendor/github.com/twmb/franz-go/pkg/kfake/groups.go
new file mode 100644
index 0000000000000..bc6934d6dcb19
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/groups.go
@@ -0,0 +1,1195 @@
+package kfake
+
+import (
+ "bytes"
+ "fmt"
+ "sync"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// TODO instance IDs
+// TODO persisting groups so commits can happen to client-managed groups
+// we need lastCommit, and need to better prune empty groups
+
+type (
+ groups struct {
+ c *Cluster
+ gs map[string]*group
+ }
+
+ group struct {
+ c *Cluster
+ gs *groups
+ name string
+
+ state groupState
+
+ leader string
+ members map[string]*groupMember
+ pending map[string]*groupMember
+
+ commits tps[offsetCommit]
+
+ generation int32
+ protocolType string
+ protocols map[string]int
+ protocol string
+
+ reqCh chan *clientReq
+ controlCh chan func()
+
+ nJoining int
+
+ tRebalance *time.Timer
+
+ quit sync.Once
+ quitCh chan struct{}
+ }
+
+ groupMember struct {
+ memberID string
+ clientID string
+ clientHost string
+
+ join *kmsg.JoinGroupRequest // the latest join request
+
+ // waitingReply is non-nil if a client is waiting for a reply
+ // from us for a JoinGroupRequest or a SyncGroupRequest.
+ waitingReply *clientReq
+
+ assignment []byte
+
+ t *time.Timer
+ last time.Time
+ }
+
+ offsetCommit struct {
+ offset int64
+ leaderEpoch int32
+ metadata *string
+ }
+
+ groupState int8
+)
+
+const (
+ groupEmpty groupState = iota
+ groupStable
+ groupPreparingRebalance
+ groupCompletingRebalance
+ groupDead
+)
+
+func (gs groupState) String() string {
+ switch gs {
+ case groupEmpty:
+ return "Empty"
+ case groupStable:
+ return "Stable"
+ case groupPreparingRebalance:
+ return "PreparingRebalance"
+ case groupCompletingRebalance:
+ return "CompletingRebalance"
+ case groupDead:
+ return "Dead"
+ default:
+ return "Unknown"
+ }
+}
+
+func (c *Cluster) coordinator(id string) *broker {
+ gen := c.coordinatorGen.Load()
+ n := hashString(fmt.Sprintf("%d", gen)+"\x00\x00"+id) % uint64(len(c.bs))
+ return c.bs[n]
+}
+
+func (c *Cluster) validateGroup(creq *clientReq, group string) *kerr.Error {
+ switch key := kmsg.Key(creq.kreq.Key()); key {
+ case kmsg.OffsetCommit, kmsg.OffsetFetch, kmsg.DescribeGroups, kmsg.DeleteGroups:
+ default:
+ if group == "" {
+ return kerr.InvalidGroupID
+ }
+ }
+ coordinator := c.coordinator(group).node
+ if coordinator != creq.cc.b.node {
+ return kerr.NotCoordinator
+ }
+ return nil
+}
+
+func generateMemberID(clientID string, instanceID *string) string {
+ if instanceID == nil {
+ return clientID + "-" + randStrUUID()
+ }
+ return *instanceID + "-" + randStrUUID()
+}
+
+////////////
+// GROUPS //
+////////////
+
+func (gs *groups) newGroup(name string) *group {
+ return &group{
+ c: gs.c,
+ gs: gs,
+ name: name,
+ members: make(map[string]*groupMember),
+ pending: make(map[string]*groupMember),
+ protocols: make(map[string]int),
+ reqCh: make(chan *clientReq),
+ controlCh: make(chan func()),
+ quitCh: make(chan struct{}),
+ }
+}
+
+// handleJoin completely hijacks the incoming request.
+func (gs *groups) handleJoin(creq *clientReq) {
+ if gs.gs == nil {
+ gs.gs = make(map[string]*group)
+ }
+ req := creq.kreq.(*kmsg.JoinGroupRequest)
+start:
+ g := gs.gs[req.Group]
+ if g == nil {
+ g = gs.newGroup(req.Group)
+ waitJoin := make(chan struct{})
+ gs.gs[req.Group] = g
+ go g.manage(func() { close(waitJoin) })
+ defer func() { <-waitJoin }()
+ }
+ select {
+ case g.reqCh <- creq:
+ case <-g.quitCh:
+ goto start
+ }
+}
+
+// Returns true if the request is hijacked and handled, otherwise false if the
+// group does not exist.
+func (gs *groups) handleHijack(group string, creq *clientReq) bool {
+ if gs.gs == nil {
+ return false
+ }
+ g := gs.gs[group]
+ if g == nil {
+ return false
+ }
+ select {
+ case g.reqCh <- creq:
+ return true
+ case <-g.quitCh:
+ return false
+ }
+}
+
+func (gs *groups) handleSync(creq *clientReq) bool {
+ return gs.handleHijack(creq.kreq.(*kmsg.SyncGroupRequest).Group, creq)
+}
+
+func (gs *groups) handleHeartbeat(creq *clientReq) bool {
+ return gs.handleHijack(creq.kreq.(*kmsg.HeartbeatRequest).Group, creq)
+}
+
+func (gs *groups) handleLeave(creq *clientReq) bool {
+ return gs.handleHijack(creq.kreq.(*kmsg.LeaveGroupRequest).Group, creq)
+}
+
+func (gs *groups) handleOffsetCommit(creq *clientReq) {
+ if gs.gs == nil {
+ gs.gs = make(map[string]*group)
+ }
+ req := creq.kreq.(*kmsg.OffsetCommitRequest)
+start:
+ g := gs.gs[req.Group]
+ if g == nil {
+ g = gs.newGroup(req.Group)
+ waitCommit := make(chan struct{})
+ gs.gs[req.Group] = g
+ go g.manage(func() { close(waitCommit) })
+ defer func() { <-waitCommit }()
+ }
+ select {
+ case g.reqCh <- creq:
+ case <-g.quitCh:
+ goto start
+ }
+}
+
+func (gs *groups) handleOffsetDelete(creq *clientReq) bool {
+ return gs.handleHijack(creq.kreq.(*kmsg.OffsetDeleteRequest).Group, creq)
+}
+
+func (gs *groups) handleList(creq *clientReq) *kmsg.ListGroupsResponse {
+ req := creq.kreq.(*kmsg.ListGroupsRequest)
+ resp := req.ResponseKind().(*kmsg.ListGroupsResponse)
+
+ var states map[string]struct{}
+ if len(req.StatesFilter) > 0 {
+ states = make(map[string]struct{})
+ for _, state := range req.StatesFilter {
+ states[state] = struct{}{}
+ }
+ }
+
+ for _, g := range gs.gs {
+ if g.c.coordinator(g.name).node != creq.cc.b.node {
+ continue
+ }
+ g.waitControl(func() {
+ if states != nil {
+ if _, ok := states[g.state.String()]; !ok {
+ return
+ }
+ }
+ sg := kmsg.NewListGroupsResponseGroup()
+ sg.Group = g.name
+ sg.ProtocolType = g.protocolType
+ sg.GroupState = g.state.String()
+ resp.Groups = append(resp.Groups, sg)
+ })
+ }
+ return resp
+}
+
+func (gs *groups) handleDescribe(creq *clientReq) *kmsg.DescribeGroupsResponse {
+ req := creq.kreq.(*kmsg.DescribeGroupsRequest)
+ resp := req.ResponseKind().(*kmsg.DescribeGroupsResponse)
+
+ doneg := func(name string) *kmsg.DescribeGroupsResponseGroup {
+ sg := kmsg.NewDescribeGroupsResponseGroup()
+ sg.Group = name
+ resp.Groups = append(resp.Groups, sg)
+ return &resp.Groups[len(resp.Groups)-1]
+ }
+
+ for _, rg := range req.Groups {
+ sg := doneg(rg)
+ if kerr := gs.c.validateGroup(creq, rg); kerr != nil {
+ sg.ErrorCode = kerr.Code
+ continue
+ }
+ g, ok := gs.gs[rg]
+ if !ok {
+ sg.State = groupDead.String()
+ continue
+ }
+ if !g.waitControl(func() {
+ sg.State = g.state.String()
+ sg.ProtocolType = g.protocolType
+ if g.state == groupStable {
+ sg.Protocol = g.protocol
+ }
+ for _, m := range g.members {
+ sm := kmsg.NewDescribeGroupsResponseGroupMember()
+ sm.MemberID = m.memberID
+ sm.ClientID = m.clientID
+ sm.ClientHost = m.clientHost
+ if g.state == groupStable {
+ for _, p := range m.join.Protocols {
+ if p.Name == g.protocol {
+ sm.ProtocolMetadata = p.Metadata
+ break
+ }
+ }
+ sm.MemberAssignment = m.assignment
+ }
+ sg.Members = append(sg.Members, sm)
+
+ }
+ }) {
+ sg.State = groupDead.String()
+ }
+ }
+ return resp
+}
+
+func (gs *groups) handleDelete(creq *clientReq) *kmsg.DeleteGroupsResponse {
+ req := creq.kreq.(*kmsg.DeleteGroupsRequest)
+ resp := req.ResponseKind().(*kmsg.DeleteGroupsResponse)
+
+ doneg := func(name string) *kmsg.DeleteGroupsResponseGroup {
+ sg := kmsg.NewDeleteGroupsResponseGroup()
+ sg.Group = name
+ resp.Groups = append(resp.Groups, sg)
+ return &resp.Groups[len(resp.Groups)-1]
+ }
+
+ for _, rg := range req.Groups {
+ sg := doneg(rg)
+ if kerr := gs.c.validateGroup(creq, rg); kerr != nil {
+ sg.ErrorCode = kerr.Code
+ continue
+ }
+ g, ok := gs.gs[rg]
+ if !ok {
+ sg.ErrorCode = kerr.GroupIDNotFound.Code
+ continue
+ }
+ if !g.waitControl(func() {
+ switch g.state {
+ case groupDead:
+ sg.ErrorCode = kerr.GroupIDNotFound.Code
+ case groupEmpty:
+ g.quitOnce()
+ delete(gs.gs, rg)
+ case groupPreparingRebalance, groupCompletingRebalance, groupStable:
+ sg.ErrorCode = kerr.NonEmptyGroup.Code
+ }
+ }) {
+ sg.ErrorCode = kerr.GroupIDNotFound.Code
+ }
+ }
+ return resp
+}
+
+func (gs *groups) handleOffsetFetch(creq *clientReq) *kmsg.OffsetFetchResponse {
+ req := creq.kreq.(*kmsg.OffsetFetchRequest)
+ resp := req.ResponseKind().(*kmsg.OffsetFetchResponse)
+
+ if req.Version <= 7 {
+ rg := kmsg.NewOffsetFetchRequestGroup()
+ rg.Group = req.Group
+ if req.Topics != nil {
+ rg.Topics = make([]kmsg.OffsetFetchRequestGroupTopic, len(req.Topics))
+ }
+ for _, t := range req.Topics {
+ rt := kmsg.NewOffsetFetchRequestGroupTopic()
+ rt.Topic = t.Topic
+ rt.Partitions = t.Partitions
+ rg.Topics = append(rg.Topics, rt)
+ }
+ req.Groups = append(req.Groups, rg)
+
+ defer func() {
+ g0 := resp.Groups[0]
+ resp.ErrorCode = g0.ErrorCode
+ for _, t := range g0.Topics {
+ st := kmsg.NewOffsetFetchResponseTopic()
+ st.Topic = t.Topic
+ for _, p := range t.Partitions {
+ sp := kmsg.NewOffsetFetchResponseTopicPartition()
+ sp.Partition = p.Partition
+ sp.Offset = p.Offset
+ sp.LeaderEpoch = p.LeaderEpoch
+ sp.Metadata = p.Metadata
+ sp.ErrorCode = p.ErrorCode
+ st.Partitions = append(st.Partitions, sp)
+ }
+ resp.Topics = append(resp.Topics, st)
+ }
+ }()
+ }
+
+ doneg := func(name string) *kmsg.OffsetFetchResponseGroup {
+ sg := kmsg.NewOffsetFetchResponseGroup()
+ sg.Group = name
+ resp.Groups = append(resp.Groups, sg)
+ return &resp.Groups[len(resp.Groups)-1]
+ }
+
+ for _, rg := range req.Groups {
+ sg := doneg(rg.Group)
+ if kerr := gs.c.validateGroup(creq, rg.Group); kerr != nil {
+ sg.ErrorCode = kerr.Code
+ continue
+ }
+ g, ok := gs.gs[rg.Group]
+ if !ok {
+ sg.ErrorCode = kerr.GroupIDNotFound.Code
+ continue
+ }
+ if !g.waitControl(func() {
+ if rg.Topics == nil {
+ for t, ps := range g.commits {
+ st := kmsg.NewOffsetFetchResponseGroupTopic()
+ st.Topic = t
+ for p, c := range ps {
+ sp := kmsg.NewOffsetFetchResponseGroupTopicPartition()
+ sp.Partition = p
+ sp.Offset = c.offset
+ sp.LeaderEpoch = c.leaderEpoch
+ sp.Metadata = c.metadata
+ st.Partitions = append(st.Partitions, sp)
+ }
+ sg.Topics = append(sg.Topics, st)
+ }
+ } else {
+ for _, t := range rg.Topics {
+ st := kmsg.NewOffsetFetchResponseGroupTopic()
+ st.Topic = t.Topic
+ for _, p := range t.Partitions {
+ sp := kmsg.NewOffsetFetchResponseGroupTopicPartition()
+ sp.Partition = p
+ c, ok := g.commits.getp(t.Topic, p)
+ if !ok {
+ sp.Offset = -1
+ sp.LeaderEpoch = -1
+ } else {
+ sp.Offset = c.offset
+ sp.LeaderEpoch = c.leaderEpoch
+ sp.Metadata = c.metadata
+ }
+ st.Partitions = append(st.Partitions, sp)
+ }
+ sg.Topics = append(sg.Topics, st)
+ }
+ }
+ }) {
+ sg.ErrorCode = kerr.GroupIDNotFound.Code
+ }
+ }
+ return resp
+}
+
+func (g *group) handleOffsetDelete(creq *clientReq) *kmsg.OffsetDeleteResponse {
+ req := creq.kreq.(*kmsg.OffsetDeleteRequest)
+ resp := req.ResponseKind().(*kmsg.OffsetDeleteResponse)
+
+ if kerr := g.c.validateGroup(creq, req.Group); kerr != nil {
+ resp.ErrorCode = kerr.Code
+ return resp
+ }
+
+ tidx := make(map[string]int)
+ donet := func(t string, errCode int16) *kmsg.OffsetDeleteResponseTopic {
+ if i, ok := tidx[t]; ok {
+ return &resp.Topics[i]
+ }
+ tidx[t] = len(resp.Topics)
+ st := kmsg.NewOffsetDeleteResponseTopic()
+ st.Topic = t
+ resp.Topics = append(resp.Topics, st)
+ return &resp.Topics[len(resp.Topics)-1]
+ }
+ donep := func(t string, p int32, errCode int16) *kmsg.OffsetDeleteResponseTopicPartition {
+ sp := kmsg.NewOffsetDeleteResponseTopicPartition()
+ sp.Partition = p
+ sp.ErrorCode = errCode
+ st := donet(t, 0)
+ st.Partitions = append(st.Partitions, sp)
+ return &st.Partitions[len(st.Partitions)-1]
+ }
+
+ // empty: delete everything in request
+ // preparingRebalance, completingRebalance, stable:
+ // * if consumer, delete everything not subscribed to
+ // * if not consumer, delete nothing, error with non_empty_group
+ subTopics := make(map[string]struct{})
+ switch g.state {
+ default:
+ resp.ErrorCode = kerr.GroupIDNotFound.Code
+ return resp
+ case groupEmpty:
+ case groupPreparingRebalance, groupCompletingRebalance, groupStable:
+ if g.protocolType != "consumer" {
+ resp.ErrorCode = kerr.NonEmptyGroup.Code
+ return resp
+ }
+ for _, m := range []map[string]*groupMember{
+ g.members,
+ g.pending,
+ } {
+ for _, m := range m {
+ if m.join == nil {
+ continue
+ }
+ for _, proto := range m.join.Protocols {
+ var m kmsg.ConsumerMemberMetadata
+ if err := m.ReadFrom(proto.Metadata); err == nil {
+ for _, topic := range m.Topics {
+ subTopics[topic] = struct{}{}
+ }
+ }
+ }
+ }
+ }
+ }
+
+ for _, t := range req.Topics {
+ for _, p := range t.Partitions {
+ if _, ok := subTopics[t.Topic]; ok {
+ donep(t.Topic, p.Partition, kerr.GroupSubscribedToTopic.Code)
+ continue
+ }
+ g.commits.delp(t.Topic, p.Partition)
+ donep(t.Topic, p.Partition, 0)
+ }
+ }
+
+ return resp
+}
+
+////////////////////
+// GROUP HANDLING //
+////////////////////
+
+func (g *group) manage(detachNew func()) {
+ // On the first join only, we want to ensure that if the join is
+ // invalid, we clean the group up before we detach from the cluster
+ // serialization loop that is initializing us.
+ var firstJoin func(bool)
+ firstJoin = func(ok bool) {
+ firstJoin = func(bool) {}
+ if !ok {
+ delete(g.gs.gs, g.name)
+ g.quitOnce()
+ }
+ detachNew()
+ }
+
+ defer func() {
+ for _, m := range g.members {
+ if m.t != nil {
+ m.t.Stop()
+ }
+ }
+ for _, m := range g.pending {
+ if m.t != nil {
+ m.t.Stop()
+ }
+ }
+ }()
+
+ for {
+ select {
+ case <-g.quitCh:
+ return
+ case creq := <-g.reqCh:
+ var kresp kmsg.Response
+ switch creq.kreq.(type) {
+ case *kmsg.JoinGroupRequest:
+ var ok bool
+ kresp, ok = g.handleJoin(creq)
+ firstJoin(ok)
+ case *kmsg.SyncGroupRequest:
+ kresp = g.handleSync(creq)
+ case *kmsg.HeartbeatRequest:
+ kresp = g.handleHeartbeat(creq)
+ case *kmsg.LeaveGroupRequest:
+ kresp = g.handleLeave(creq)
+ case *kmsg.OffsetCommitRequest:
+ var ok bool
+ kresp, ok = g.handleOffsetCommit(creq)
+ firstJoin(ok)
+ case *kmsg.OffsetDeleteRequest:
+ kresp = g.handleOffsetDelete(creq)
+ }
+ if kresp != nil {
+ g.reply(creq, kresp, nil)
+ }
+
+ case fn := <-g.controlCh:
+ fn()
+ }
+ }
+}
+
+func (g *group) waitControl(fn func()) bool {
+ wait := make(chan struct{})
+ wfn := func() { fn(); close(wait) }
+ select {
+ case <-g.quitCh:
+ return false
+ case g.controlCh <- wfn:
+ <-wait
+ return true
+ }
+}
+
+// Called in the manage loop.
+func (g *group) quitOnce() {
+ g.quit.Do(func() {
+ g.state = groupDead
+ close(g.quitCh)
+ })
+}
+
+// Handles a join. We do not do the delayed join aspects in Kafka, we just punt
+// to the client to immediately rejoin if a new client enters the group.
+//
+// If this returns nil, the request will be replied to later.
+func (g *group) handleJoin(creq *clientReq) (kmsg.Response, bool) {
+ req := creq.kreq.(*kmsg.JoinGroupRequest)
+ resp := req.ResponseKind().(*kmsg.JoinGroupResponse)
+
+ if kerr := g.c.validateGroup(creq, req.Group); kerr != nil {
+ resp.ErrorCode = kerr.Code
+ return resp, false
+ }
+ if req.InstanceID != nil {
+ resp.ErrorCode = kerr.InvalidGroupID.Code
+ return resp, false
+ }
+ if st := int64(req.SessionTimeoutMillis); st < g.c.cfg.minSessionTimeout.Milliseconds() || st > g.c.cfg.maxSessionTimeout.Milliseconds() {
+ resp.ErrorCode = kerr.InvalidSessionTimeout.Code
+ return resp, false
+ }
+ if !g.protocolsMatch(req.ProtocolType, req.Protocols) {
+ resp.ErrorCode = kerr.InconsistentGroupProtocol.Code
+ return resp, false
+ }
+
+ // Clients first join with no member ID. For join v4+, we generate
+ // the member ID and add the member to pending. For v3 and below,
+ // we immediately enter rebalance.
+ if req.MemberID == "" {
+ memberID := generateMemberID(creq.cid, req.InstanceID)
+ resp.MemberID = memberID
+ m := &groupMember{
+ memberID: memberID,
+ clientID: creq.cid,
+ clientHost: creq.cc.conn.RemoteAddr().String(),
+ join: req,
+ }
+ if req.Version >= 4 {
+ g.addPendingRebalance(m)
+ resp.ErrorCode = kerr.MemberIDRequired.Code
+ return resp, true
+ }
+ g.addMemberAndRebalance(m, creq, req)
+ return nil, true
+ }
+
+ // Pending members rejoining immediately enters rebalance.
+ if m, ok := g.pending[req.MemberID]; ok {
+ g.addMemberAndRebalance(m, creq, req)
+ return nil, true
+ }
+ m, ok := g.members[req.MemberID]
+ if !ok {
+ resp.ErrorCode = kerr.UnknownMemberID.Code
+ return resp, false
+ }
+
+ switch g.state {
+ default:
+ resp.ErrorCode = kerr.UnknownMemberID.Code
+ return resp, false
+ case groupPreparingRebalance:
+ g.updateMemberAndRebalance(m, creq, req)
+ case groupCompletingRebalance:
+ if m.sameJoin(req) {
+ g.fillJoinResp(req, resp)
+ return resp, true
+ }
+ g.updateMemberAndRebalance(m, creq, req)
+ case groupStable:
+ if g.leader != req.MemberID || m.sameJoin(req) {
+ g.fillJoinResp(req, resp)
+ return resp, true
+ }
+ g.updateMemberAndRebalance(m, creq, req)
+ }
+ return nil, true
+}
+
+// Handles a sync, which can transition us to stable.
+func (g *group) handleSync(creq *clientReq) kmsg.Response {
+ req := creq.kreq.(*kmsg.SyncGroupRequest)
+ resp := req.ResponseKind().(*kmsg.SyncGroupResponse)
+
+ if kerr := g.c.validateGroup(creq, req.Group); kerr != nil {
+ resp.ErrorCode = kerr.Code
+ return resp
+ }
+ if req.InstanceID != nil {
+ resp.ErrorCode = kerr.InvalidGroupID.Code
+ return resp
+ }
+ m, ok := g.members[req.MemberID]
+ if !ok {
+ resp.ErrorCode = kerr.UnknownMemberID.Code
+ return resp
+ }
+ if req.Generation != g.generation {
+ resp.ErrorCode = kerr.IllegalGeneration.Code
+ return resp
+ }
+ if req.ProtocolType != nil && *req.ProtocolType != g.protocolType {
+ resp.ErrorCode = kerr.InconsistentGroupProtocol.Code
+ return resp
+ }
+ if req.Protocol != nil && *req.Protocol != g.protocol {
+ resp.ErrorCode = kerr.InconsistentGroupProtocol.Code
+ return resp
+ }
+
+ switch g.state {
+ default:
+ resp.ErrorCode = kerr.UnknownMemberID.Code
+ case groupPreparingRebalance:
+ resp.ErrorCode = kerr.RebalanceInProgress.Code
+ case groupCompletingRebalance:
+ m.waitingReply = creq
+ if req.MemberID == g.leader {
+ g.completeLeaderSync(req)
+ }
+ return nil
+ case groupStable: // member saw join and is now finally calling sync
+ resp.ProtocolType = kmsg.StringPtr(g.protocolType)
+ resp.Protocol = kmsg.StringPtr(g.protocol)
+ resp.MemberAssignment = m.assignment
+ }
+ return resp
+}
+
+// Handles a heartbeat, a relatively simple request that just delays our
+// session timeout timer.
+func (g *group) handleHeartbeat(creq *clientReq) kmsg.Response {
+ req := creq.kreq.(*kmsg.HeartbeatRequest)
+ resp := req.ResponseKind().(*kmsg.HeartbeatResponse)
+
+ if kerr := g.c.validateGroup(creq, req.Group); kerr != nil {
+ resp.ErrorCode = kerr.Code
+ return resp
+ }
+ if req.InstanceID != nil {
+ resp.ErrorCode = kerr.InvalidGroupID.Code
+ return resp
+ }
+ m, ok := g.members[req.MemberID]
+ if !ok {
+ resp.ErrorCode = kerr.UnknownMemberID.Code
+ return resp
+ }
+ if req.Generation != g.generation {
+ resp.ErrorCode = kerr.IllegalGeneration.Code
+ return resp
+ }
+
+ switch g.state {
+ default:
+ resp.ErrorCode = kerr.UnknownMemberID.Code
+ case groupPreparingRebalance:
+ resp.ErrorCode = kerr.RebalanceInProgress.Code
+ g.updateHeartbeat(m)
+ case groupCompletingRebalance, groupStable:
+ g.updateHeartbeat(m)
+ }
+ return resp
+}
+
+// Handles a leave. We trigger a rebalance for every member leaving in a batch
+// request, but that's fine because of our manage serialization.
+func (g *group) handleLeave(creq *clientReq) kmsg.Response {
+ req := creq.kreq.(*kmsg.LeaveGroupRequest)
+ resp := req.ResponseKind().(*kmsg.LeaveGroupResponse)
+
+ if kerr := g.c.validateGroup(creq, req.Group); kerr != nil {
+ resp.ErrorCode = kerr.Code
+ return resp
+ }
+ if req.Version < 3 {
+ req.Members = append(req.Members, kmsg.LeaveGroupRequestMember{
+ MemberID: req.MemberID,
+ })
+ defer func() { resp.ErrorCode = resp.Members[0].ErrorCode }()
+ }
+
+ for _, rm := range req.Members {
+ mresp := kmsg.NewLeaveGroupResponseMember()
+ mresp.MemberID = rm.MemberID
+ mresp.InstanceID = rm.InstanceID
+ resp.Members = append(resp.Members, mresp)
+
+ r := &resp.Members[len(resp.Members)-1]
+ if rm.InstanceID != nil {
+ r.ErrorCode = kerr.UnknownMemberID.Code
+ continue
+ }
+ if m, ok := g.members[rm.MemberID]; !ok {
+ if p, ok := g.pending[rm.MemberID]; !ok {
+ r.ErrorCode = kerr.UnknownMemberID.Code
+ } else {
+ g.stopPending(p)
+ }
+ } else {
+ g.updateMemberAndRebalance(m, nil, nil)
+ }
+ }
+
+ return resp
+}
+
+func fillOffsetCommit(req *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, code int16) {
+ for _, t := range req.Topics {
+ st := kmsg.NewOffsetCommitResponseTopic()
+ st.Topic = t.Topic
+ for _, p := range t.Partitions {
+ sp := kmsg.NewOffsetCommitResponseTopicPartition()
+ sp.Partition = p.Partition
+ sp.ErrorCode = code
+ st.Partitions = append(st.Partitions, sp)
+ }
+ resp.Topics = append(resp.Topics, st)
+ }
+}
+
+// Handles a commit.
+func (g *group) handleOffsetCommit(creq *clientReq) (*kmsg.OffsetCommitResponse, bool) {
+ req := creq.kreq.(*kmsg.OffsetCommitRequest)
+ resp := req.ResponseKind().(*kmsg.OffsetCommitResponse)
+
+ if kerr := g.c.validateGroup(creq, req.Group); kerr != nil {
+ fillOffsetCommit(req, resp, kerr.Code)
+ return resp, false
+ }
+ if req.InstanceID != nil {
+ fillOffsetCommit(req, resp, kerr.InvalidGroupID.Code)
+ return resp, false
+ }
+
+ var m *groupMember
+ if len(g.members) > 0 {
+ var ok bool
+ m, ok = g.members[req.MemberID]
+ if !ok {
+ fillOffsetCommit(req, resp, kerr.UnknownMemberID.Code)
+ return resp, false
+ }
+ if req.Generation != g.generation {
+ fillOffsetCommit(req, resp, kerr.IllegalGeneration.Code)
+ return resp, false
+ }
+ } else {
+ if req.MemberID != "" {
+ fillOffsetCommit(req, resp, kerr.UnknownMemberID.Code)
+ return resp, false
+ }
+ if req.Generation != -1 {
+ fillOffsetCommit(req, resp, kerr.IllegalGeneration.Code)
+ return resp, false
+ }
+ if g.state != groupEmpty {
+ panic("invalid state: no members, but group not empty")
+ }
+ }
+
+ switch g.state {
+ default:
+ fillOffsetCommit(req, resp, kerr.GroupIDNotFound.Code)
+ return resp, true
+ case groupEmpty:
+ for _, t := range req.Topics {
+ for _, p := range t.Partitions {
+ g.commits.set(t.Topic, p.Partition, offsetCommit{
+ offset: p.Offset,
+ leaderEpoch: p.LeaderEpoch,
+ metadata: p.Metadata,
+ })
+ }
+ }
+ fillOffsetCommit(req, resp, 0)
+ case groupPreparingRebalance, groupStable:
+ for _, t := range req.Topics {
+ for _, p := range t.Partitions {
+ g.commits.set(t.Topic, p.Partition, offsetCommit{
+ offset: p.Offset,
+ leaderEpoch: p.LeaderEpoch,
+ metadata: p.Metadata,
+ })
+ }
+ }
+ fillOffsetCommit(req, resp, 0)
+ g.updateHeartbeat(m)
+ case groupCompletingRebalance:
+ fillOffsetCommit(req, resp, kerr.RebalanceInProgress.Code)
+ g.updateHeartbeat(m)
+ }
+ return resp, true
+}
+
+// Transitions the group to the preparing rebalance state. We first need to
+// clear any member that is currently sitting in sync. If enough members have
+// entered join, we immediately proceed to completeRebalance, otherwise we
+// begin a wait timer.
+func (g *group) rebalance() {
+ if g.state == groupCompletingRebalance {
+ for _, m := range g.members {
+ m.assignment = nil
+ if m.waitingReply.empty() {
+ continue
+ }
+ sync, ok := m.waitingReply.kreq.(*kmsg.SyncGroupRequest)
+ if !ok {
+ continue
+ }
+ resp := sync.ResponseKind().(*kmsg.SyncGroupResponse)
+ resp.ErrorCode = kerr.RebalanceInProgress.Code
+ g.reply(m.waitingReply, resp, m)
+ }
+ }
+
+ g.state = groupPreparingRebalance
+
+ if g.nJoining >= len(g.members) {
+ g.completeRebalance()
+ return
+ }
+
+ var rebalanceTimeoutMs int32
+ for _, m := range g.members {
+ if m.join.RebalanceTimeoutMillis > rebalanceTimeoutMs {
+ rebalanceTimeoutMs = m.join.RebalanceTimeoutMillis
+ }
+ }
+ if g.tRebalance == nil {
+ g.tRebalance = time.AfterFunc(time.Duration(rebalanceTimeoutMs)*time.Millisecond, func() {
+ select {
+ case <-g.quitCh:
+ case g.controlCh <- func() {
+ g.completeRebalance()
+ }:
+ }
+ })
+ }
+}
+
+// Transitions the group to either dead or stable, depending on if any members
+// remain by the time we clear those that are not waiting in join.
+func (g *group) completeRebalance() {
+ if g.tRebalance != nil {
+ g.tRebalance.Stop()
+ g.tRebalance = nil
+ }
+ g.nJoining = 0
+
+ var foundLeader bool
+ for _, m := range g.members {
+ if m.waitingReply.empty() {
+ for _, p := range m.join.Protocols {
+ g.protocols[p.Name]--
+ }
+ delete(g.members, m.memberID)
+ if m.t != nil {
+ m.t.Stop()
+ }
+ continue
+ }
+ if m.memberID == g.leader {
+ foundLeader = true
+ }
+ }
+
+ g.generation++
+ if g.generation < 0 {
+ g.generation = 1
+ }
+ if len(g.members) == 0 {
+ g.state = groupEmpty
+ return
+ }
+ g.state = groupCompletingRebalance
+
+ var foundProto bool
+ for proto, nsupport := range g.protocols {
+ if nsupport == len(g.members) {
+ g.protocol = proto
+ foundProto = true
+ break
+ }
+ }
+ if !foundProto {
+ panic(fmt.Sprint("unable to find commonly supported protocol!", g.protocols, len(g.members)))
+ }
+
+ for _, m := range g.members {
+ if !foundLeader {
+ g.leader = m.memberID
+ }
+ req := m.join
+ resp := req.ResponseKind().(*kmsg.JoinGroupResponse)
+ g.fillJoinResp(req, resp)
+ g.reply(m.waitingReply, resp, m)
+ }
+}
+
+// Transitions the group to stable, the final step of a rebalance.
+func (g *group) completeLeaderSync(req *kmsg.SyncGroupRequest) {
+ for _, m := range g.members {
+ m.assignment = nil
+ }
+ for _, a := range req.GroupAssignment {
+ m, ok := g.members[a.MemberID]
+ if !ok {
+ continue
+ }
+ m.assignment = a.MemberAssignment
+ }
+ for _, m := range g.members {
+ if m.waitingReply.empty() {
+ continue // this member saw join but has not yet called sync
+ }
+ resp := m.waitingReply.kreq.ResponseKind().(*kmsg.SyncGroupResponse)
+ resp.ProtocolType = kmsg.StringPtr(g.protocolType)
+ resp.Protocol = kmsg.StringPtr(g.protocol)
+ resp.MemberAssignment = m.assignment
+ g.reply(m.waitingReply, resp, m)
+ }
+ g.state = groupStable
+}
+
+func (g *group) updateHeartbeat(m *groupMember) {
+ g.atSessionTimeout(m, func() {
+ g.updateMemberAndRebalance(m, nil, nil)
+ })
+}
+
+func (g *group) addPendingRebalance(m *groupMember) {
+ g.pending[m.memberID] = m
+ g.atSessionTimeout(m, func() {
+ delete(g.pending, m.memberID)
+ })
+}
+
+func (g *group) stopPending(m *groupMember) {
+ delete(g.pending, m.memberID)
+ if m.t != nil {
+ m.t.Stop()
+ }
+}
+
+func (g *group) atSessionTimeout(m *groupMember, fn func()) {
+ if m.t != nil {
+ m.t.Stop()
+ }
+ timeout := time.Millisecond * time.Duration(m.join.SessionTimeoutMillis)
+ m.last = time.Now()
+ tfn := func() {
+ select {
+ case <-g.quitCh:
+ case g.controlCh <- func() {
+ if time.Since(m.last) >= timeout {
+ fn()
+ }
+ }:
+ }
+ }
+ m.t = time.AfterFunc(timeout, tfn)
+}
+
+// This is used to update a member from a new join request, or to clear a
+// member from failed heartbeats.
+func (g *group) updateMemberAndRebalance(m *groupMember, waitingReply *clientReq, newJoin *kmsg.JoinGroupRequest) {
+ for _, p := range m.join.Protocols {
+ g.protocols[p.Name]--
+ }
+ m.join = newJoin
+ if m.join != nil {
+ for _, p := range m.join.Protocols {
+ g.protocols[p.Name]++
+ }
+ if m.waitingReply.empty() && !waitingReply.empty() {
+ g.nJoining++
+ }
+ m.waitingReply = waitingReply
+ } else {
+ delete(g.members, m.memberID)
+ if m.t != nil {
+ m.t.Stop()
+ }
+ if !m.waitingReply.empty() {
+ g.nJoining--
+ }
+ }
+ g.rebalance()
+}
+
+// Adds a new member to the group and rebalances.
+func (g *group) addMemberAndRebalance(m *groupMember, waitingReply *clientReq, join *kmsg.JoinGroupRequest) {
+ g.stopPending(m)
+ m.join = join
+ for _, p := range m.join.Protocols {
+ g.protocols[p.Name]++
+ }
+ g.members[m.memberID] = m
+ g.nJoining++
+ m.waitingReply = waitingReply
+ g.rebalance()
+}
+
+// Returns if a new join can even join the group based on the join's supported
+// protocols.
+func (g *group) protocolsMatch(protocolType string, protocols []kmsg.JoinGroupRequestProtocol) bool {
+ if g.protocolType == "" {
+ if protocolType == "" || len(protocols) == 0 {
+ return false
+ }
+ g.protocolType = protocolType
+ return true
+ }
+ if protocolType != g.protocolType {
+ return false
+ }
+ if len(g.protocols) == 0 {
+ return true
+ }
+ for _, p := range protocols {
+ if _, ok := g.protocols[p.Name]; ok {
+ return true
+ }
+ }
+ return false
+}
+
+// Returns if a new join request is the same as an old request; if so, for
+// non-leaders, we just return the old join response.
+func (m *groupMember) sameJoin(req *kmsg.JoinGroupRequest) bool {
+ if len(m.join.Protocols) != len(req.Protocols) {
+ return false
+ }
+ for i := range m.join.Protocols {
+ if m.join.Protocols[i].Name != req.Protocols[i].Name {
+ return false
+ }
+ if !bytes.Equal(m.join.Protocols[i].Metadata, req.Protocols[i].Metadata) {
+ return false
+ }
+ }
+ return true
+}
+
+func (g *group) fillJoinResp(req *kmsg.JoinGroupRequest, resp *kmsg.JoinGroupResponse) {
+ resp.Generation = g.generation
+ resp.ProtocolType = kmsg.StringPtr(g.protocolType)
+ resp.Protocol = kmsg.StringPtr(g.protocol)
+ resp.LeaderID = g.leader
+ resp.MemberID = req.MemberID
+ if g.leader == req.MemberID {
+ resp.Members = g.joinResponseMetadata()
+ }
+}
+
+func (g *group) joinResponseMetadata() []kmsg.JoinGroupResponseMember {
+ metadata := make([]kmsg.JoinGroupResponseMember, 0, len(g.members))
+members:
+ for _, m := range g.members {
+ for _, p := range m.join.Protocols {
+ if p.Name == g.protocol {
+ metadata = append(metadata, kmsg.JoinGroupResponseMember{
+ MemberID: m.memberID,
+ ProtocolMetadata: p.Metadata,
+ })
+ continue members
+ }
+ }
+ panic("inconsistent group protocol within saved members")
+ }
+ return metadata
+}
+
+func (g *group) reply(creq *clientReq, kresp kmsg.Response, m *groupMember) {
+ select {
+ case creq.cc.respCh <- clientResp{kresp: kresp, corr: creq.corr, seq: creq.seq}:
+ case <-g.c.die:
+ return
+ }
+ if m != nil {
+ m.waitingReply = nil
+ g.updateHeartbeat(m)
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/logger.go b/vendor/github.com/twmb/franz-go/pkg/kfake/logger.go
new file mode 100644
index 0000000000000..cc674bc8b7c3e
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/logger.go
@@ -0,0 +1,65 @@
+package kfake
+
+import (
+ "fmt"
+ "io"
+)
+
+// LogLevel designates which level the logger should log at.
+type LogLevel int8
+
+const (
+ // LogLevelNone disables logging.
+ LogLevelNone LogLevel = iota
+ // LogLevelError logs all errors. Generally, these should not happen.
+ LogLevelError
+ // LogLevelWarn logs all warnings, such as request failures.
+ LogLevelWarn
+ // LogLevelInfo logs informational messages, such as requests. This is
+ // usually the default log level.
+ LogLevelInfo
+ // LogLevelDebug logs verbose information, and is usually not used in
+ // production.
+ LogLevelDebug
+)
+
+func (l LogLevel) String() string {
+ switch l {
+ case LogLevelError:
+ return "ERR"
+ case LogLevelWarn:
+ return "WRN"
+ case LogLevelInfo:
+ return "INF"
+ case LogLevelDebug:
+ return "DBG"
+ default:
+ return "NON"
+ }
+}
+
+// Logger can be provided to hook into the fake cluster's logs.
+type Logger interface {
+ Logf(LogLevel, string, ...any)
+}
+
+type nopLogger struct{}
+
+func (*nopLogger) Logf(LogLevel, string, ...any) {}
+
+// BasicLogger returns a logger that writes newline delimited messages to dst.
+func BasicLogger(dst io.Writer, level LogLevel) Logger {
+ return &basicLogger{dst, level}
+}
+
+type basicLogger struct {
+ dst io.Writer
+ level LogLevel
+}
+
+func (b *basicLogger) Logf(level LogLevel, msg string, args ...any) {
+ if b.level < level {
+ return
+ }
+ fmt.Fprintf(b.dst, "[%s] "+msg+"\n", append([]any{level}, args...)...)
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/main.go b/vendor/github.com/twmb/franz-go/pkg/kfake/main.go
new file mode 100644
index 0000000000000..6d6596c9d5516
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/main.go
@@ -0,0 +1,31 @@
+//go:build none
+
+package main
+
+import (
+ "fmt"
+ "os"
+ "os/signal"
+
+ "github.com/twmb/franz-go/pkg/kfake"
+)
+
+func main() {
+ c, err := kfake.NewCluster(
+ kfake.Ports(9092, 9093, 9094),
+ kfake.SeedTopics(-1, "foo"),
+ )
+ if err != nil {
+ panic(err)
+ }
+ defer c.Close()
+
+ addrs := c.ListenAddrs()
+ for _, addr := range addrs {
+ fmt.Println(addr)
+ }
+
+ sigs := make(chan os.Signal, 2)
+ signal.Notify(sigs, os.Interrupt)
+ <-sigs
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/misc.go b/vendor/github.com/twmb/franz-go/pkg/kfake/misc.go
new file mode 100644
index 0000000000000..75f2f24499476
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/misc.go
@@ -0,0 +1,63 @@
+package kfake
+
+import (
+ "crypto/rand"
+ "crypto/sha256"
+ "encoding/binary"
+ "fmt"
+ "io"
+ "sync"
+)
+
+func randFill(slice []byte) {
+ randPoolFill(slice)
+}
+
+func randBytes(n int) []byte {
+ r := make([]byte, n)
+ randPoolFill(r)
+ return r
+}
+
+func randUUID() [16]byte {
+ var uuid [16]byte
+ randPoolFill(uuid[:])
+ return uuid
+}
+
+func randStrUUID() string {
+ uuid := randUUID()
+ return fmt.Sprintf("%x", uuid[:])
+}
+
+func hashString(s string) uint64 {
+ sum := sha256.Sum256([]byte(s))
+ var n uint64
+ for i := 0; i < 4; i++ {
+ v := binary.BigEndian.Uint64(sum[i*8:])
+ n ^= v
+ }
+ return n
+}
+
+var (
+ mu sync.Mutex
+ randPool = make([]byte, 4<<10)
+ randPoolAt = len(randPool)
+)
+
+func randPoolFill(into []byte) {
+ mu.Lock()
+ defer mu.Unlock()
+ for len(into) != 0 {
+ n := copy(into, randPool[randPoolAt:])
+ into = into[n:]
+ randPoolAt += n
+ if randPoolAt == cap(randPool) {
+ if _, err := io.ReadFull(rand.Reader, randPool); err != nil {
+ panic(fmt.Sprintf("unable to read %d bytes from crypto/rand: %v", len(randPool), err))
+ }
+ randPoolAt = 0
+ }
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/pid.go b/vendor/github.com/twmb/franz-go/pkg/kfake/pid.go
new file mode 100644
index 0000000000000..eacf2cf87bd1f
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/pid.go
@@ -0,0 +1,96 @@
+package kfake
+
+import (
+ "hash/fnv"
+ "math"
+ "math/rand"
+)
+
+// TODO
+//
+// * Convert pids to struct, add heap of last use, add index to pidseqs, and
+// remove pidseqs as they exhaust max # of pids configured.
+//
+// * Wrap epochs
+
+type (
+ pids map[int64]*pidMap
+
+ pidMap struct {
+ id int64
+ epoch int16
+ tps tps[pidseqs]
+ }
+
+ pid struct {
+ id int64
+ epoch int16
+ }
+
+ pidseqs struct {
+ seqs [5]int32
+ at uint8
+ }
+)
+
+func (pids *pids) get(id int64, epoch int16, t string, p int32) (*pidseqs, int16) {
+ if *pids == nil {
+ return nil, 0
+ }
+ pm := (*pids)[id]
+ if pm == nil {
+ return nil, 0
+ }
+ return pm.tps.mkpDefault(t, p), pm.epoch
+}
+
+func (pids *pids) create(txnalID *string) pid {
+ if *pids == nil {
+ *pids = make(map[int64]*pidMap)
+ }
+ var id int64
+ if txnalID != nil {
+ hasher := fnv.New64()
+ hasher.Write([]byte(*txnalID))
+ id = int64(hasher.Sum64()) & math.MaxInt64
+ } else {
+ for {
+ id = int64(rand.Uint64()) & math.MaxInt64
+ if _, exists := (*pids)[id]; !exists {
+ break
+ }
+ }
+ }
+ pm, exists := (*pids)[id]
+ if exists {
+ pm.epoch++
+ return pid{id, pm.epoch}
+ }
+ pm = &pidMap{id: id}
+ (*pids)[id] = pm
+ return pid{id, 0}
+}
+
+func (seqs *pidseqs) pushAndValidate(firstSeq, numRecs int32) (ok, dup bool) {
+ // If there is no pid, we do not do duplicate detection.
+ if seqs == nil {
+ return true, false
+ }
+ var (
+ seq = firstSeq
+ seq64 = int64(seq)
+ next64 = (seq64 + int64(numRecs)) % math.MaxInt32
+ next = int32(next64)
+ )
+ for i := 0; i < 5; i++ {
+ if seqs.seqs[i] == seq && seqs.seqs[(i+1)%5] == next {
+ return true, true
+ }
+ }
+ if seqs.seqs[seqs.at] != seq {
+ return false, false
+ }
+ seqs.at = (seqs.at + 1) % 5
+ seqs.seqs[seqs.at] = next
+ return true, false
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/sasl.go b/vendor/github.com/twmb/franz-go/pkg/kfake/sasl.go
new file mode 100644
index 0000000000000..413bb3bd1c771
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/sasl.go
@@ -0,0 +1,296 @@
+package kfake
+
+import (
+ "bytes"
+ "crypto/hmac"
+ "crypto/sha256"
+ "crypto/sha512"
+ "encoding/base64"
+ "errors"
+ "fmt"
+ "regexp"
+ "strings"
+
+ "github.com/twmb/franz-go/pkg/kmsg"
+ "golang.org/x/crypto/pbkdf2"
+)
+
+// TODO server-error-value in serverFinal
+
+const (
+ saslPlain = "PLAIN"
+ saslScram256 = "SCRAM-SHA-256"
+ saslScram512 = "SCRAM-SHA-512"
+ scramIterations = 4096
+)
+
+type (
+ sasls struct {
+ plain map[string]string // user => pass
+ scram256 map[string]scramAuth // user => scram auth
+ scram512 map[string]scramAuth // user => scram auth
+ }
+
+ saslStage uint8
+)
+
+func (s sasls) empty() bool {
+ return len(s.plain) == 0 && len(s.scram256) == 0 && len(s.scram512) == 0
+}
+
+const (
+ saslStageBegin saslStage = iota
+ saslStageAuthPlain
+ saslStageAuthScram0_256
+ saslStageAuthScram0_512
+ saslStageAuthScram1
+ saslStageComplete
+)
+
+func (c *Cluster) handleSASL(creq *clientReq) (allow bool) {
+ switch creq.cc.saslStage {
+ case saslStageBegin:
+ switch creq.kreq.(type) {
+ case *kmsg.ApiVersionsRequest,
+ *kmsg.SASLHandshakeRequest:
+ return true
+ default:
+ return false
+ }
+ case saslStageAuthPlain,
+ saslStageAuthScram0_256,
+ saslStageAuthScram0_512,
+ saslStageAuthScram1:
+ switch creq.kreq.(type) {
+ case *kmsg.ApiVersionsRequest,
+ *kmsg.SASLAuthenticateRequest:
+ return true
+ default:
+ return false
+ }
+ case saslStageComplete:
+ return true
+ default:
+ panic("unreachable")
+ }
+}
+
+///////////
+// PLAIN //
+///////////
+
+func saslSplitPlain(auth []byte) (user, pass string, err error) {
+ parts := strings.SplitN(string(auth), "\x00", 3)
+ if len(parts) != 3 {
+ return "", "", errors.New("invalid plain auth")
+ }
+ if len(parts[0]) != 0 && parts[0] != parts[1] {
+ return "", "", errors.New("authzid is not equal to username") // see below
+ }
+ return parts[1], parts[2], nil
+}
+
+///////////
+// SCRAM //
+///////////
+
+func newScramAuth(mechanism, pass string) scramAuth {
+ var saltedPass []byte
+ salt := randBytes(10)
+ switch mechanism {
+ case saslScram256:
+ saltedPass = pbkdf2.Key([]byte(pass), salt, scramIterations, sha256.Size, sha256.New)
+ case saslScram512:
+ saltedPass = pbkdf2.Key([]byte(pass), salt, scramIterations, sha512.Size, sha512.New)
+ default:
+ panic("unreachable")
+ }
+ return scramAuth{
+ mechanism: mechanism,
+ iterations: scramIterations,
+ saltedPass: saltedPass,
+ salt: salt,
+ }
+}
+
+type scramAuth struct {
+ mechanism string // scram 256 or 512
+ iterations int
+ saltedPass []byte
+ salt []byte
+}
+
+// client-first-message
+type scramClient0 struct {
+ user string
+ bare []byte // client-first-message-bare
+ nonce []byte // nonce in client0
+}
+
+var scramUnescaper = strings.NewReplacer("=3D", "=", "=2C", ",")
+
+func scramParseClient0(client0 []byte) (scramClient0, error) {
+ m := reClient0.FindSubmatch(client0)
+ if len(m) == 0 {
+ return scramClient0{}, errors.New("invalid client0")
+ }
+ var (
+ zid = string(m[1])
+ bare = bytes.Clone(m[2])
+ user = string(m[3])
+ nonce = bytes.Clone(m[4])
+ ext = string(m[5])
+ )
+ if len(ext) != 0 {
+ return scramClient0{}, errors.New("invalid extensions")
+ }
+ if zid != "" && zid != user {
+ return scramClient0{}, errors.New("authzid is not equal to username") // Kafka & Redpanda enforce that a present zid == username
+ }
+ return scramClient0{
+ user: scramUnescaper.Replace(user),
+ bare: bare,
+ nonce: nonce,
+ }, nil
+}
+
+func scramServerFirst(client0 scramClient0, auth scramAuth) (scramServer0, []byte) {
+ nonce := append(client0.nonce, base64.RawStdEncoding.EncodeToString(randBytes(16))...)
+ serverFirst := []byte(fmt.Sprintf("r=%s,s=%s,i=%d",
+ nonce,
+ base64.StdEncoding.EncodeToString(auth.salt),
+ scramIterations,
+ ))
+ return scramServer0{
+ a: auth,
+ c0bare: client0.bare,
+ s0: serverFirst,
+ }, serverFirst
+}
+
+// server-first-message
+type scramServer0 struct {
+ a scramAuth
+ c0bare []byte
+ s0 []byte
+}
+
+// validates client-final-message and replies with server-final-message
+func (s *scramServer0) serverFinal(clientFinal []byte) ([]byte, error) {
+ m := reClientFinal.FindSubmatch(clientFinal)
+ if len(m) == 0 {
+ return nil, errors.New("invalid client-final-message")
+ }
+ var (
+ finalWithoutProof = m[1]
+ channel = m[2]
+ clientProof64 = m[3]
+ h = sha256.New
+ )
+ if s.a.mechanism == saslScram512 {
+ h = sha512.New
+ }
+ if !bytes.Equal(channel, []byte("biws")) { // "biws" == base64("n,,")
+ return nil, errors.New("invalid channel binding")
+ }
+ clientProof, err := base64.StdEncoding.DecodeString(string(clientProof64))
+ if err != nil {
+ return nil, errors.New("client proof is not std-base64")
+ }
+ if len(clientProof) != h().Size() {
+ return nil, fmt.Errorf("len(client proof) %d != expected %d", len(clientProof), h().Size())
+ }
+
+ var clientKey []byte // := HMAC(SaltedPass, "Client Key")
+ {
+ mac := hmac.New(h, s.a.saltedPass)
+ mac.Write([]byte("Client Key"))
+ clientKey = mac.Sum(nil)
+ }
+
+ var storedKey []byte // := H(ClientKey)
+ {
+ h := h()
+ h.Write(clientKey)
+ storedKey = h.Sum(nil)
+ }
+
+ var authMessage []byte // := client-first-bare-message + "," + server-first-message + "," + client-final-message-without-proof
+ {
+ authMessage = append(s.c0bare, ',')
+ authMessage = append(authMessage, s.s0...)
+ authMessage = append(authMessage, ',')
+ authMessage = append(authMessage, finalWithoutProof...)
+ }
+
+ var clientSignature []byte // := HMAC(StoredKey, AuthMessage)
+ {
+ mac := hmac.New(h, storedKey)
+ mac.Write(authMessage)
+ clientSignature = mac.Sum(nil)
+ }
+
+ usedKey := clientProof // := ClientKey XOR ClientSignature
+ {
+ for i, b := range clientSignature {
+ usedKey[i] ^= b
+ }
+ h := h()
+ h.Write(usedKey)
+ usedKey = h.Sum(nil)
+ }
+ if !bytes.Equal(usedKey, storedKey) {
+ return nil, errors.New("invalid password")
+ }
+
+ var serverKey []byte // := HMAC(SaltedPass, "Server Key")
+ {
+ mac := hmac.New(h, s.a.saltedPass)
+ mac.Write([]byte("Server Key"))
+ serverKey = mac.Sum(nil)
+ }
+ var serverSignature []byte // := HMAC(ServerKey, AuthMessage)
+ {
+ mac := hmac.New(h, serverKey)
+ mac.Write(authMessage)
+ serverSignature = mac.Sum(nil)
+ }
+
+ serverFinal := []byte(fmt.Sprintf("v=%s", base64.StdEncoding.EncodeToString(serverSignature)))
+ return serverFinal, nil
+}
+
+var reClient0, reClientFinal *regexp.Regexp
+
+func init() {
+ // https://datatracker.ietf.org/doc/html/rfc5802#section-7
+ const (
+ valueSafe = "[\x01-\x2b\x2d-\x3c\x3e-\x7f]+" // all except \0 - ,
+ value = "[\x01-\x2b\x2d-\x7f]+" // all except \0 ,
+ printable = "[\x21-\x2b\x2d-\x7e]+" // all except , (and DEL, unnoted)
+ saslName = "(?:[\x01-\x2b\x2d-\x3c\x3e-\x7f]|=2C|=3D)+" // valueSafe | others; kafka is lazy here
+ b64 = `[a-zA-Z0-9/+]+={0,3}` // we are lazy here matching up to 3 =
+ ext = "(?:,[a-zA-Z]+=[\x01-\x2b\x2d-\x7f]+)*"
+ )
+
+ // 0: entire match
+ // 1: authzid
+ // 2: client-first-message-bare
+ // 3: username
+ // 4: nonce
+ // 5: ext
+ client0 := fmt.Sprintf("^n,(?:a=(%s))?,((?:m=%s,)?n=(%s),r=(%s)(%s))$", saslName, value, saslName, printable, ext)
+
+ // We reject extensions in client0. Kafka does not validate the nonce
+ // and some clients may generate it incorrectly (i.e. old franz-go), so
+ // we do not validate it.
+ //
+ // 0: entire match
+ // 1: channel-final-message-without-proof
+ // 2: channel binding
+ // 3: proof
+ clientFinal := fmt.Sprintf("^(c=(%s),r=%s),p=(%s)$", b64, printable, b64)
+
+ reClient0 = regexp.MustCompile(client0)
+ reClientFinal = regexp.MustCompile(clientFinal)
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kfake/topic_partition.go b/vendor/github.com/twmb/franz-go/pkg/kfake/topic_partition.go
new file mode 100644
index 0000000000000..ac409c53e7d61
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kfake/topic_partition.go
@@ -0,0 +1,75 @@
+package kfake
+
+type tps[V any] map[string]map[int32]*V
+
+func (tps *tps[V]) getp(t string, p int32) (*V, bool) {
+ if *tps == nil {
+ return nil, false
+ }
+ ps := (*tps)[t]
+ if ps == nil {
+ return nil, false
+ }
+ v, ok := ps[p]
+ return v, ok
+}
+
+func (tps *tps[V]) gett(t string) (map[int32]*V, bool) {
+ if tps == nil {
+ return nil, false
+ }
+ ps, ok := (*tps)[t]
+ return ps, ok
+}
+
+func (tps *tps[V]) mkt(t string) map[int32]*V {
+ if *tps == nil {
+ *tps = make(map[string]map[int32]*V)
+ }
+ ps := (*tps)[t]
+ if ps == nil {
+ ps = make(map[int32]*V)
+ (*tps)[t] = ps
+ }
+ return ps
+}
+
+func (tps *tps[V]) mkp(t string, p int32, newFn func() *V) *V {
+ ps := tps.mkt(t)
+ v, ok := ps[p]
+ if !ok {
+ v = newFn()
+ ps[p] = v
+ }
+ return v
+}
+
+func (tps *tps[V]) mkpDefault(t string, p int32) *V {
+ return tps.mkp(t, p, func() *V { return new(V) })
+}
+
+func (tps *tps[V]) set(t string, p int32, v V) {
+ *tps.mkpDefault(t, p) = v
+}
+
+func (tps *tps[V]) each(fn func(t string, p int32, v *V)) {
+ for t, ps := range *tps {
+ for p, v := range ps {
+ fn(t, p, v)
+ }
+ }
+}
+
+func (tps *tps[V]) delp(t string, p int32) {
+ if *tps == nil {
+ return
+ }
+ ps := (*tps)[t]
+ if ps == nil {
+ return
+ }
+ delete(ps, p)
+ if len(ps) == 0 {
+ delete(*tps, t)
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/atomic_maybe_work.go b/vendor/github.com/twmb/franz-go/pkg/kgo/atomic_maybe_work.go
new file mode 100644
index 0000000000000..bfdd3c1deb714
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/atomic_maybe_work.go
@@ -0,0 +1,76 @@
+package kgo
+
+import "sync/atomic"
+
+const (
+ stateUnstarted = iota
+ stateWorking
+ stateContinueWorking
+)
+
+type workLoop struct{ state atomicU32 }
+
+// maybeBegin returns whether a work loop should begin.
+func (l *workLoop) maybeBegin() bool {
+ var state uint32
+ var done bool
+ for !done {
+ switch state = l.state.Load(); state {
+ case stateUnstarted:
+ done = l.state.CompareAndSwap(state, stateWorking)
+ state = stateWorking
+ case stateWorking:
+ done = l.state.CompareAndSwap(state, stateContinueWorking)
+ state = stateContinueWorking
+ case stateContinueWorking:
+ done = true
+ }
+ }
+
+ return state == stateWorking
+}
+
+// maybeFinish demotes loop's internal state and returns whether work should
+// keep going. This function should be called before looping to continue
+// work.
+//
+// If again is true, this will avoid demoting from working to not
+// working. Again would be true if the loop knows it should continue working;
+// calling this function is necessary even in this case to update loop's
+// internal state.
+//
+// This function is a no-op if the loop is already finished, but generally,
+// since the loop itself calls MaybeFinish after it has been started, this
+// should never be called if the loop is unstarted.
+func (l *workLoop) maybeFinish(again bool) bool {
+ switch state := l.state.Load(); state {
+ // Working:
+ // If again, we know we should continue; keep our state.
+ // If not again, we try to downgrade state and stop.
+ // If we cannot, then something slipped in to say keep going.
+ case stateWorking:
+ if !again {
+ again = !l.state.CompareAndSwap(state, stateUnstarted)
+ }
+ // Continue: demote ourself and run again no matter what.
+ case stateContinueWorking:
+ l.state.Store(stateWorking)
+ again = true
+ }
+
+ return again
+}
+
+func (l *workLoop) hardFinish() {
+ l.state.Store(stateUnstarted)
+}
+
+// lazyI32 is used in a few places where we want atomics _sometimes_. Some
+// uses do not need to be atomic (notably, setup), and we do not want the
+// noCopy guard.
+//
+// Specifically, this is used for a few int32 settings in the config.
+type lazyI32 int32
+
+func (v *lazyI32) store(s int32) { atomic.StoreInt32((*int32)(v), s) }
+func (v *lazyI32) load() int32 { return atomic.LoadInt32((*int32)(v)) }
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/broker.go b/vendor/github.com/twmb/franz-go/pkg/kgo/broker.go
new file mode 100644
index 0000000000000..c3d5a9a750857
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/broker.go
@@ -0,0 +1,1507 @@
+package kgo
+
+import (
+ "context"
+ "crypto/tls"
+ "encoding/binary"
+ "errors"
+ "fmt"
+ "io"
+ "math"
+ "math/rand"
+ "net"
+ "os"
+ "strconv"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kbin"
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+ "github.com/twmb/franz-go/pkg/sasl"
+)
+
+type pinReq struct {
+ kmsg.Request
+ min int16
+ max int16
+ pinMin bool
+ pinMax bool
+}
+
+func (p *pinReq) SetVersion(v int16) {
+ if p.pinMin && v < p.min {
+ v = p.min
+ }
+ if p.pinMax && v > p.max {
+ v = p.max
+ }
+ p.Request.SetVersion(v)
+}
+
+type promisedReq struct {
+ ctx context.Context
+ req kmsg.Request
+ promise func(kmsg.Response, error)
+ enqueue time.Time // used to calculate writeWait
+}
+
+type promisedResp struct {
+ ctx context.Context
+
+ corrID int32
+ // With flexible headers, we skip tags at the end of the response
+ // header for now because they're currently unused. However, the
+ // ApiVersions response uses v0 response header (no tags) even if the
+ // response body has flexible versions. This is done in support of the
+ // v0 fallback logic that allows for indexing into an exact offset.
+ // Thus, for ApiVersions specifically, this is false even if the
+ // request is flexible.
+ //
+ // As a side note, this note was not mentioned in KIP-482 which
+ // introduced flexible versions, and was mentioned in passing in
+ // KIP-511 which made ApiVersion flexible, so discovering what was
+ // wrong was not too fun ("Note that ApiVersionsResponse is flexible
+ // version but the response header is not flexible" is *it* in the
+ // entire KIP.)
+ //
+ // To see the version pinning, look at the code generator function
+ // generateHeaderVersion in
+ // generator/src/main/java/org/apache/kafka/message/ApiMessageTypeGenerator.java
+ flexibleHeader bool
+
+ resp kmsg.Response
+ promise func(kmsg.Response, error)
+ readTimeout time.Duration
+
+ // The following block is used for the read / e2e hooks.
+ bytesWritten int
+ writeWait time.Duration
+ timeToWrite time.Duration
+ readEnqueue time.Time
+}
+
+// NodeName returns the name of a node, given the kgo internal node ID.
+//
+// Internally, seed brokers are stored with very negative node IDs, and these
+// node IDs are visible in the BrokerMetadata struct. You can use NodeName to
+// convert the negative node ID into "seed_#". Brokers discovered through
+// metadata responses have standard non-negative numbers and this function just
+// returns the number as a string.
+func NodeName(nodeID int32) string {
+ return logID(nodeID)
+}
+
+func logID(id int32) string {
+ if id >= -10 {
+ return strconv.FormatInt(int64(id), 10)
+ }
+ return "seed_" + strconv.FormatInt(int64(id)-math.MinInt32, 10)
+}
+
+// BrokerMetadata is metadata for a broker.
+//
+// This struct mirrors kmsg.MetadataResponseBroker.
+type BrokerMetadata struct {
+ // NodeID is the broker node ID.
+ //
+ // Seed brokers will have very negative IDs; kgo does not try to map
+ // seed brokers to loaded brokers. You can use NodeName to convert
+ // the seed node ID into a formatted string.
+ NodeID int32
+
+ // Port is the port of the broker.
+ Port int32
+
+ // Host is the hostname of the broker.
+ Host string
+
+ // Rack is an optional rack of the broker. It is invalid to modify this
+ // field.
+ //
+ // Seed brokers will not have a rack.
+ Rack *string
+
+ _ struct{} // allow us to add fields later
+}
+
+func (me BrokerMetadata) equals(other kmsg.MetadataResponseBroker) bool {
+ return me.NodeID == other.NodeID &&
+ me.Port == other.Port &&
+ me.Host == other.Host &&
+ (me.Rack == nil && other.Rack == nil ||
+ me.Rack != nil && other.Rack != nil && *me.Rack == *other.Rack)
+}
+
+// broker manages the concept how a client would interact with a broker.
+type broker struct {
+ cl *Client
+
+ addr string // net.JoinHostPort(meta.Host, meta.Port)
+ meta BrokerMetadata
+
+ // versions tracks the first load of an ApiVersions. We store this
+ // after the first connect, which helps speed things up on future
+ // reconnects (across any of the three broker connections) because we
+ // will never look up API versions for this broker again.
+ versions atomic.Value // *brokerVersions
+
+ // The cxn fields each manage a single tcp connection to one broker.
+ // Each field is managed serially in handleReqs. This means that only
+ // one write can happen at a time, regardless of which connection the
+ // write goes to, but the write is expected to be fast whereas the wait
+ // for the response is expected to be slow.
+ //
+ // Produce requests go to cxnProduce, fetch to cxnFetch, join/sync go
+ // to cxnGroup, anything with TimeoutMillis goes to cxnSlow, and
+ // everything else goes to cxnNormal.
+ cxnNormal *brokerCxn
+ cxnProduce *brokerCxn
+ cxnFetch *brokerCxn
+ cxnGroup *brokerCxn
+ cxnSlow *brokerCxn
+
+ reapMu sync.Mutex // held when modifying a brokerCxn
+
+ // reqs manages incoming message requests.
+ reqs ringReq
+ // dead is an atomic so a backed up reqs cannot block broker stoppage.
+ dead atomicBool
+}
+
+// brokerVersions is loaded once (and potentially a few times concurrently if
+// multiple connections are opening at once) and then forever stored for a
+// broker.
+type brokerVersions struct {
+ versions [kmsg.MaxKey + 1]int16
+}
+
+func newBrokerVersions() *brokerVersions {
+ var v brokerVersions
+ for i := range &v.versions {
+ v.versions[i] = -1
+ }
+ return &v
+}
+
+func (*brokerVersions) len() int { return kmsg.MaxKey + 1 }
+
+func (b *broker) loadVersions() *brokerVersions {
+ loaded := b.versions.Load()
+ if loaded == nil {
+ return nil
+ }
+ return loaded.(*brokerVersions)
+}
+
+func (b *broker) storeVersions(v *brokerVersions) { b.versions.Store(v) }
+
+const unknownControllerID = -1
+
+var unknownBrokerMetadata = BrokerMetadata{
+ NodeID: -1,
+}
+
+// broker IDs are all positive, but Kafka uses -1 to signify unknown
+// controllers. To avoid issues where a client broker ID map knows of
+// a -1 ID controller, we start unknown seeds at MinInt32.
+func unknownSeedID(seedNum int) int32 {
+ return int32(math.MinInt32 + seedNum)
+}
+
+func (cl *Client) newBroker(nodeID int32, host string, port int32, rack *string) *broker {
+ return &broker{
+ cl: cl,
+
+ addr: net.JoinHostPort(host, strconv.Itoa(int(port))),
+ meta: BrokerMetadata{
+ NodeID: nodeID,
+ Host: host,
+ Port: port,
+ Rack: rack,
+ },
+ }
+}
+
+// stopForever permanently disables this broker.
+func (b *broker) stopForever() {
+ if b.dead.Swap(true) {
+ return
+ }
+
+ b.reqs.die() // no more pushing
+
+ b.reapMu.Lock()
+ defer b.reapMu.Unlock()
+
+ b.cxnNormal.die()
+ b.cxnProduce.die()
+ b.cxnFetch.die()
+ b.cxnGroup.die()
+ b.cxnSlow.die()
+}
+
+// do issues a request to the broker, eventually calling the response
+// once a the request either fails or is responded to (with failure or not).
+//
+// The promise will block broker processing.
+func (b *broker) do(
+ ctx context.Context,
+ req kmsg.Request,
+ promise func(kmsg.Response, error),
+) {
+ pr := promisedReq{ctx, req, promise, time.Now()}
+
+ first, dead := b.reqs.push(pr)
+
+ if first {
+ go b.handleReqs(pr)
+ } else if dead {
+ promise(nil, errChosenBrokerDead)
+ }
+}
+
+// waitResp runs a req, waits for the resp and returns the resp and err.
+func (b *broker) waitResp(ctx context.Context, req kmsg.Request) (kmsg.Response, error) {
+ var resp kmsg.Response
+ var err error
+ done := make(chan struct{})
+ wait := func(kresp kmsg.Response, kerr error) {
+ resp, err = kresp, kerr
+ close(done)
+ }
+ b.do(ctx, req, wait)
+ <-done
+ return resp, err
+}
+
+func (b *broker) handleReqs(pr promisedReq) {
+ var more, dead bool
+start:
+ if dead {
+ pr.promise(nil, errChosenBrokerDead)
+ } else {
+ b.handleReq(pr)
+ }
+
+ pr, more, dead = b.reqs.dropPeek()
+ if more {
+ goto start
+ }
+}
+
+func (b *broker) handleReq(pr promisedReq) {
+ req := pr.req
+ var cxn *brokerCxn
+ var retriedOnNewConnection bool
+start:
+ {
+ var err error
+ if cxn, err = b.loadConnection(pr.ctx, req); err != nil {
+ // It is rare, but it is possible that the broker has
+ // an immediate issue on a new connection. We retry
+ // once.
+ if isRetryableBrokerErr(err) && !retriedOnNewConnection {
+ retriedOnNewConnection = true
+ goto start
+ }
+ pr.promise(nil, err)
+ return
+ }
+ }
+
+ v := b.loadVersions()
+
+ if int(req.Key()) > v.len() || b.cl.cfg.maxVersions != nil && !b.cl.cfg.maxVersions.HasKey(req.Key()) {
+ pr.promise(nil, errUnknownRequestKey)
+ return
+ }
+
+ // If v.versions[0] is non-negative, then we loaded API
+ // versions. If the version for this request is negative, we
+ // know the broker cannot handle this request.
+ if v.versions[0] >= 0 && v.versions[req.Key()] < 0 {
+ pr.promise(nil, errBrokerTooOld)
+ return
+ }
+
+ ourMax := req.MaxVersion()
+ if b.cl.cfg.maxVersions != nil {
+ userMax, _ := b.cl.cfg.maxVersions.LookupMaxKeyVersion(req.Key()) // we validated HasKey above
+ if userMax < ourMax {
+ ourMax = userMax
+ }
+ }
+
+ // If brokerMax is negative at this point, we have no api
+ // versions because the client is pinned pre 0.10.0 and we
+ // stick with our max.
+ version := ourMax
+ if brokerMax := v.versions[req.Key()]; brokerMax >= 0 && brokerMax < ourMax {
+ version = brokerMax
+ }
+
+ minVersion := int16(-1)
+
+ // If the version now (after potential broker downgrading) is
+ // lower than we desire, we fail the request for the broker is
+ // too old.
+ if b.cl.cfg.minVersions != nil {
+ minVersion, _ = b.cl.cfg.minVersions.LookupMaxKeyVersion(req.Key())
+ if minVersion > -1 && version < minVersion {
+ pr.promise(nil, errBrokerTooOld)
+ return
+ }
+ }
+
+ req.SetVersion(version) // always go for highest version
+ setVersion := req.GetVersion()
+ if minVersion > -1 && setVersion < minVersion {
+ pr.promise(nil, fmt.Errorf("request key %d version returned %d below the user defined min of %d", req.Key(), setVersion, minVersion))
+ return
+ }
+ if version < setVersion {
+ // If we want to set an old version, but the request is pinned
+ // high, we need to fail with errBrokerTooOld. The broker wants
+ // an old version, we want a high version. We rely on this
+ // error in backcompat request sharding.
+ pr.promise(nil, errBrokerTooOld)
+ return
+ }
+
+ if !cxn.expiry.IsZero() && time.Now().After(cxn.expiry) {
+ // If we are after the reauth time, try to reauth. We
+ // can only have an expiry if we went the authenticate
+ // flow, so we know we are authenticating again.
+ //
+ // Some implementations (AWS) occasionally fail for
+ // unclear reasons (principals change, somehow). If
+ // we receive SASL_AUTHENTICATION_FAILED, we retry
+ // once on a new connection. See #249.
+ //
+ // For KIP-368.
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "sasl expiry limit reached, reauthenticating", "broker", logID(cxn.b.meta.NodeID))
+ if err := cxn.sasl(); err != nil {
+ cxn.die()
+ if errors.Is(err, kerr.SaslAuthenticationFailed) && !retriedOnNewConnection {
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "sasl reauth failed, retrying once on new connection", "broker", logID(cxn.b.meta.NodeID), "err", err)
+ retriedOnNewConnection = true
+ goto start
+ }
+ pr.promise(nil, err)
+ return
+ }
+ }
+
+ // Juuuust before we issue the request, we check if it was
+ // canceled. We could have previously tried this request, which
+ // then failed and retried.
+ //
+ // Checking the context was canceled here ensures we do not
+ // loop. We could be more precise with error tracking, though.
+ select {
+ case <-pr.ctx.Done():
+ pr.promise(nil, pr.ctx.Err())
+ return
+ default:
+ }
+
+ // Produce requests (and only produce requests) can be written
+ // without receiving a reply. If we see required acks is 0,
+ // then we immediately call the promise with no response.
+ //
+ // We provide a non-nil *kmsg.ProduceResponse for
+ // *kmsg.ProduceRequest just to ensure we do not return with no
+ // error and no kmsg.Response, per the client contract.
+ //
+ // As documented on the client's Request function, if this is a
+ // *kmsg.ProduceRequest, we rewrite the acks to match the
+ // client configured acks, and we rewrite the timeout millis if
+ // acks is 0. We do this to ensure that our discard goroutine
+ // is used correctly, and so that we do not write a request
+ // with 0 acks and then send it to handleResps where it will
+ // not get a response.
+ var isNoResp bool
+ var noResp *kmsg.ProduceResponse
+ switch r := req.(type) {
+ case *produceRequest:
+ isNoResp = r.acks == 0
+ case *kmsg.ProduceRequest:
+ r.Acks = b.cl.cfg.acks.val
+ if r.Acks == 0 {
+ isNoResp = true
+ r.TimeoutMillis = int32(b.cl.cfg.produceTimeout.Milliseconds())
+ }
+ noResp = kmsg.NewPtrProduceResponse()
+ noResp.Version = req.GetVersion()
+ }
+
+ corrID, bytesWritten, writeWait, timeToWrite, readEnqueue, writeErr := cxn.writeRequest(pr.ctx, pr.enqueue, req)
+
+ if writeErr != nil {
+ pr.promise(nil, writeErr)
+ cxn.die()
+ cxn.hookWriteE2E(req.Key(), bytesWritten, writeWait, timeToWrite, writeErr)
+ return
+ }
+
+ if isNoResp {
+ pr.promise(noResp, nil)
+ cxn.hookWriteE2E(req.Key(), bytesWritten, writeWait, timeToWrite, writeErr)
+ return
+ }
+
+ rt, _ := cxn.cl.connTimeouter.timeouts(req)
+
+ cxn.waitResp(promisedResp{
+ pr.ctx,
+ corrID,
+ req.IsFlexible() && req.Key() != 18, // response header not flexible if ApiVersions; see promisedResp doc
+ req.ResponseKind(),
+ pr.promise,
+ rt,
+ bytesWritten,
+ writeWait,
+ timeToWrite,
+ readEnqueue,
+ })
+}
+
+func (cxn *brokerCxn) hookWriteE2E(key int16, bytesWritten int, writeWait, timeToWrite time.Duration, writeErr error) {
+ cxn.cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookBrokerE2E); ok {
+ h.OnBrokerE2E(cxn.b.meta, key, BrokerE2E{
+ BytesWritten: bytesWritten,
+ WriteWait: writeWait,
+ TimeToWrite: timeToWrite,
+ WriteErr: writeErr,
+ })
+ }
+ })
+}
+
+// bufPool is used to reuse issued-request buffers across writes to brokers.
+type bufPool struct{ p *sync.Pool }
+
+func newBufPool() bufPool {
+ return bufPool{
+ p: &sync.Pool{New: func() any { r := make([]byte, 1<<10); return &r }},
+ }
+}
+
+func (p bufPool) get() []byte { return (*p.p.Get().(*[]byte))[:0] }
+func (p bufPool) put(b []byte) { p.p.Put(&b) }
+
+// loadConection returns the broker's connection, creating it if necessary
+// and returning an error of if that fails.
+func (b *broker) loadConnection(ctx context.Context, req kmsg.Request) (*brokerCxn, error) {
+ var (
+ pcxn = &b.cxnNormal
+ isProduceCxn bool // see docs on brokerCxn.discard for why we do this
+ reqKey = req.Key()
+ _, isTimeout = req.(kmsg.TimeoutRequest)
+ )
+ switch {
+ case reqKey == 0:
+ pcxn = &b.cxnProduce
+ isProduceCxn = true
+ case reqKey == 1:
+ pcxn = &b.cxnFetch
+ case reqKey == 11 || reqKey == 14: // join || sync
+ pcxn = &b.cxnGroup
+ case isTimeout:
+ pcxn = &b.cxnSlow
+ }
+
+ if *pcxn != nil && !(*pcxn).dead.Load() {
+ return *pcxn, nil
+ }
+
+ conn, err := b.connect(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ cxn := &brokerCxn{
+ cl: b.cl,
+ b: b,
+
+ addr: b.addr,
+ conn: conn,
+ deadCh: make(chan struct{}),
+ }
+ if err = cxn.init(isProduceCxn); err != nil {
+ b.cl.cfg.logger.Log(LogLevelDebug, "connection initialization failed", "addr", b.addr, "broker", logID(b.meta.NodeID), "err", err)
+ cxn.closeConn()
+ return nil, err
+ }
+ b.cl.cfg.logger.Log(LogLevelDebug, "connection initialized successfully", "addr", b.addr, "broker", logID(b.meta.NodeID))
+
+ b.reapMu.Lock()
+ defer b.reapMu.Unlock()
+ *pcxn = cxn
+ return cxn, nil
+}
+
+func (cl *Client) reapConnectionsLoop() {
+ idleTimeout := cl.cfg.connIdleTimeout
+ if idleTimeout < 0 { // impossible due to cfg.validate, but just in case
+ return
+ }
+
+ ticker := time.NewTicker(idleTimeout)
+ defer ticker.Stop()
+ last := time.Now()
+ for {
+ select {
+ case <-cl.ctx.Done():
+ return
+ case tick := <-ticker.C:
+ start := time.Now()
+ reaped := cl.reapConnections(idleTimeout)
+ dur := time.Since(start)
+ if reaped > 0 {
+ cl.cfg.logger.Log(LogLevelDebug, "reaped connections", "time_since_last_reap", tick.Sub(last), "reap_dur", dur, "num_reaped", reaped)
+ }
+ last = tick
+ }
+ }
+}
+
+func (cl *Client) reapConnections(idleTimeout time.Duration) (total int) {
+ cl.brokersMu.Lock()
+ seeds := cl.loadSeeds()
+ brokers := make([]*broker, 0, len(cl.brokers)+len(seeds))
+ brokers = append(brokers, cl.brokers...)
+ brokers = append(brokers, seeds...)
+ cl.brokersMu.Unlock()
+
+ for _, broker := range brokers {
+ total += broker.reapConnections(idleTimeout)
+ }
+ return total
+}
+
+func (b *broker) reapConnections(idleTimeout time.Duration) (total int) {
+ b.reapMu.Lock()
+ defer b.reapMu.Unlock()
+
+ for _, cxn := range []*brokerCxn{
+ b.cxnNormal,
+ b.cxnProduce,
+ b.cxnFetch,
+ b.cxnGroup,
+ b.cxnSlow,
+ } {
+ if cxn == nil || cxn.dead.Load() {
+ continue
+ }
+
+ // If we have not written nor read in a long time, the
+ // connection can be reaped. If only one is idle, the other may
+ // be busy (or may not happen):
+ //
+ // - produce can write but never read
+ // - fetch can hang for a while reading (infrequent writes)
+
+ lastWrite := time.Unix(0, cxn.lastWrite.Load())
+ lastRead := time.Unix(0, cxn.lastRead.Load())
+
+ writeIdle := time.Since(lastWrite) > idleTimeout && !cxn.writing.Load()
+ readIdle := time.Since(lastRead) > idleTimeout && !cxn.reading.Load()
+
+ if writeIdle && readIdle {
+ cxn.die()
+ total++
+ }
+ }
+ return total
+}
+
+// connect connects to the broker's addr, returning the new connection.
+func (b *broker) connect(ctx context.Context) (net.Conn, error) {
+ b.cl.cfg.logger.Log(LogLevelDebug, "opening connection to broker", "addr", b.addr, "broker", logID(b.meta.NodeID))
+ start := time.Now()
+ conn, err := b.cl.cfg.dialFn(ctx, "tcp", b.addr)
+ since := time.Since(start)
+ b.cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookBrokerConnect); ok {
+ h.OnBrokerConnect(b.meta, since, conn, err)
+ }
+ })
+ if err != nil {
+ if !errors.Is(err, ErrClientClosed) && !errors.Is(err, context.Canceled) && !strings.Contains(err.Error(), "operation was canceled") {
+ if errors.Is(err, io.EOF) {
+ b.cl.cfg.logger.Log(LogLevelWarn, "unable to open connection to broker due to an immediate EOF, which often means the client is using TLS when the broker is not expecting it (is TLS misconfigured?)", "addr", b.addr, "broker", logID(b.meta.NodeID), "err", err)
+ return nil, &ErrFirstReadEOF{kind: firstReadTLS, err: err}
+ }
+ b.cl.cfg.logger.Log(LogLevelWarn, "unable to open connection to broker", "addr", b.addr, "broker", logID(b.meta.NodeID), "err", err)
+ }
+ return nil, fmt.Errorf("unable to dial: %w", err)
+ }
+ b.cl.cfg.logger.Log(LogLevelDebug, "connection opened to broker", "addr", b.addr, "broker", logID(b.meta.NodeID))
+ return conn, nil
+}
+
+// brokerCxn manages an actual connection to a Kafka broker. This is separate
+// the broker struct to allow lazy connection (re)creation.
+type brokerCxn struct {
+ throttleUntil atomicI64 // atomic nanosec
+
+ conn net.Conn
+
+ cl *Client
+ b *broker
+
+ addr string
+
+ mechanism sasl.Mechanism
+ expiry time.Time
+
+ corrID int32
+
+ // The following four fields are used for connection reaping.
+ // Write is only updated in one location; read is updated in three
+ // due to readConn, readConnAsync, and discard.
+ lastWrite atomicI64
+ lastRead atomicI64
+ writing atomicBool
+ reading atomicBool
+
+ successes uint64
+
+ // resps manages reading kafka responses.
+ resps ringResp
+ // dead is an atomic so that a backed up resps cannot block cxn death.
+ dead atomicBool
+ // closed in cloneConn; allows throttle waiting to quit
+ deadCh chan struct{}
+}
+
+func (cxn *brokerCxn) init(isProduceCxn bool) error {
+ hasVersions := cxn.b.loadVersions() != nil
+ if !hasVersions {
+ if cxn.b.cl.cfg.maxVersions == nil || cxn.b.cl.cfg.maxVersions.HasKey(18) {
+ if err := cxn.requestAPIVersions(); err != nil {
+ if !errors.Is(err, ErrClientClosed) && !isRetryableBrokerErr(err) {
+ cxn.cl.cfg.logger.Log(LogLevelError, "unable to request api versions", "broker", logID(cxn.b.meta.NodeID), "err", err)
+ }
+ return err
+ }
+ } else {
+ // We have a max versions, and it indicates no support
+ // for ApiVersions. We just store a default -1 set.
+ cxn.b.storeVersions(newBrokerVersions())
+ }
+ }
+
+ if err := cxn.sasl(); err != nil {
+ if !errors.Is(err, ErrClientClosed) && !isRetryableBrokerErr(err) {
+ cxn.cl.cfg.logger.Log(LogLevelError, "unable to initialize sasl", "broker", logID(cxn.b.meta.NodeID), "err", err)
+ }
+ return err
+ }
+
+ if isProduceCxn && cxn.cl.cfg.acks.val == 0 {
+ go cxn.discard() // see docs on discard for why we do this
+ }
+ return nil
+}
+
+func (cxn *brokerCxn) requestAPIVersions() error {
+ maxVersion := int16(3)
+
+ // If the user configured a max versions, we check that the key exists
+ // before entering this function. Thus, we expect exists to be true,
+ // but we still doubly check it for sanity (as well as userMax, which
+ // can only be non-negative based off of LookupMaxKeyVersion's API).
+ if cxn.cl.cfg.maxVersions != nil {
+ userMax, exists := cxn.cl.cfg.maxVersions.LookupMaxKeyVersion(18) // 18 == api versions
+ if exists && userMax >= 0 {
+ maxVersion = userMax
+ }
+ }
+
+start:
+ req := kmsg.NewPtrApiVersionsRequest()
+ req.Version = maxVersion
+ req.ClientSoftwareName = cxn.cl.cfg.softwareName
+ req.ClientSoftwareVersion = cxn.cl.cfg.softwareVersion
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "issuing api versions request", "broker", logID(cxn.b.meta.NodeID), "version", maxVersion)
+ corrID, bytesWritten, writeWait, timeToWrite, readEnqueue, writeErr := cxn.writeRequest(nil, time.Now(), req)
+ if writeErr != nil {
+ cxn.hookWriteE2E(req.Key(), bytesWritten, writeWait, timeToWrite, writeErr)
+ return writeErr
+ }
+
+ rt, _ := cxn.cl.connTimeouter.timeouts(req)
+ // api versions does *not* use flexible response headers; see comment in promisedResp
+ rawResp, err := cxn.readResponse(nil, req.Key(), req.GetVersion(), corrID, false, rt, bytesWritten, writeWait, timeToWrite, readEnqueue)
+ if err != nil {
+ return err
+ }
+ if len(rawResp) < 2 {
+ return fmt.Errorf("invalid length %d short response from ApiVersions request", len(rawResp))
+ }
+
+ resp := req.ResponseKind().(*kmsg.ApiVersionsResponse)
+
+ // If we used a version larger than Kafka supports, Kafka replies with
+ // Version 0 and an UNSUPPORTED_VERSION error.
+ //
+ // Pre Kafka 2.4, we have to retry the request with version 0.
+ // Post, Kafka replies with all versions.
+ if rawResp[1] == 35 {
+ if maxVersion == 0 {
+ return errors.New("broker replied with UNSUPPORTED_VERSION to an ApiVersions request of version 0")
+ }
+ srawResp := string(rawResp)
+ if srawResp == "\x00\x23\x00\x00\x00\x00" ||
+ // EventHubs erroneously replies with v1, so we check
+ // for that as well.
+ srawResp == "\x00\x23\x00\x00\x00\x00\x00\x00\x00\x00" {
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "broker does not know our ApiVersions version, downgrading to version 0 and retrying", "broker", logID(cxn.b.meta.NodeID))
+ maxVersion = 0
+ goto start
+ }
+ resp.Version = 0
+ }
+
+ if err = resp.ReadFrom(rawResp); err != nil {
+ return fmt.Errorf("unable to read ApiVersions response: %w", err)
+ }
+ if len(resp.ApiKeys) == 0 {
+ return errors.New("ApiVersions response invalidly contained no ApiKeys")
+ }
+
+ v := newBrokerVersions()
+ for _, key := range resp.ApiKeys {
+ if key.ApiKey > kmsg.MaxKey || key.ApiKey < 0 {
+ continue
+ }
+ v.versions[key.ApiKey] = key.MaxVersion
+ }
+ cxn.b.storeVersions(v)
+ return nil
+}
+
+func (cxn *brokerCxn) sasl() error {
+ if len(cxn.cl.cfg.sasls) == 0 {
+ return nil
+ }
+ mechanism := cxn.cl.cfg.sasls[0]
+ retried := false
+ authenticate := false
+
+ v := cxn.b.loadVersions()
+ req := kmsg.NewPtrSASLHandshakeRequest()
+
+start:
+ if mechanism.Name() != "GSSAPI" && v.versions[req.Key()] >= 0 {
+ req.Mechanism = mechanism.Name()
+ req.Version = v.versions[req.Key()]
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "issuing SASLHandshakeRequest", "broker", logID(cxn.b.meta.NodeID))
+ corrID, bytesWritten, writeWait, timeToWrite, readEnqueue, writeErr := cxn.writeRequest(nil, time.Now(), req)
+ if writeErr != nil {
+ cxn.hookWriteE2E(req.Key(), bytesWritten, writeWait, timeToWrite, writeErr)
+ return writeErr
+ }
+
+ rt, _ := cxn.cl.connTimeouter.timeouts(req)
+ rawResp, err := cxn.readResponse(nil, req.Key(), req.GetVersion(), corrID, req.IsFlexible(), rt, bytesWritten, writeWait, timeToWrite, readEnqueue)
+ if err != nil {
+ return err
+ }
+ resp := req.ResponseKind().(*kmsg.SASLHandshakeResponse)
+ if err = resp.ReadFrom(rawResp); err != nil {
+ return err
+ }
+
+ err = kerr.ErrorForCode(resp.ErrorCode)
+ if err != nil {
+ if !retried && err == kerr.UnsupportedSaslMechanism {
+ for _, ours := range cxn.cl.cfg.sasls[1:] {
+ for _, supported := range resp.SupportedMechanisms {
+ if supported == ours.Name() {
+ mechanism = ours
+ retried = true
+ goto start
+ }
+ }
+ }
+ }
+ return err
+ }
+ authenticate = req.Version == 1
+ }
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "beginning sasl authentication", "broker", logID(cxn.b.meta.NodeID), "addr", cxn.addr, "mechanism", mechanism.Name(), "authenticate", authenticate)
+ cxn.mechanism = mechanism
+ return cxn.doSasl(authenticate)
+}
+
+func (cxn *brokerCxn) doSasl(authenticate bool) error {
+ session, clientWrite, err := cxn.mechanism.Authenticate(cxn.cl.ctx, cxn.addr)
+ if err != nil {
+ return err
+ }
+ if len(clientWrite) == 0 {
+ return fmt.Errorf("unexpected server-write sasl with mechanism %s", cxn.mechanism.Name())
+ }
+
+ prereq := time.Now() // used below for sasl lifetime calculation
+ var lifetimeMillis int64
+
+ // Even if we do not wrap our reads/writes in SASLAuthenticate, we
+ // still use the SASLAuthenticate timeouts.
+ rt, wt := cxn.cl.connTimeouter.timeouts(kmsg.NewPtrSASLAuthenticateRequest())
+
+ // We continue writing until both the challenging is done AND the
+ // responses are done. We can have an additional response once we
+ // are done with challenges.
+ step := -1
+ for done := false; !done || len(clientWrite) > 0; {
+ step++
+ var challenge []byte
+
+ if !authenticate {
+ buf := cxn.cl.bufPool.get()
+
+ buf = append(buf[:0], 0, 0, 0, 0)
+ binary.BigEndian.PutUint32(buf, uint32(len(clientWrite)))
+ buf = append(buf, clientWrite...)
+
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "issuing raw sasl authenticate", "broker", logID(cxn.b.meta.NodeID), "addr", cxn.addr, "step", step)
+ _, _, _, _, err = cxn.writeConn(context.Background(), buf, wt, time.Now())
+
+ cxn.cl.bufPool.put(buf)
+
+ if err != nil {
+ return err
+ }
+ if !done {
+ if _, challenge, _, _, err = cxn.readConn(context.Background(), rt, time.Now()); err != nil {
+ return err
+ }
+ }
+ } else {
+ req := kmsg.NewPtrSASLAuthenticateRequest()
+ req.SASLAuthBytes = clientWrite
+ req.Version = cxn.b.loadVersions().versions[req.Key()]
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "issuing SASLAuthenticate", "broker", logID(cxn.b.meta.NodeID), "version", req.Version, "step", step)
+
+ // Lifetime: we take the timestamp before we write our
+ // request; see usage below for why.
+ prereq = time.Now()
+ corrID, bytesWritten, writeWait, timeToWrite, readEnqueue, writeErr := cxn.writeRequest(nil, time.Now(), req)
+
+ // As mentioned above, we could have one final write
+ // without reading a response back (kerberos). If this
+ // is the case, we need to e2e.
+ if writeErr != nil || done {
+ cxn.hookWriteE2E(req.Key(), bytesWritten, writeWait, timeToWrite, writeErr)
+ if writeErr != nil {
+ return writeErr
+ }
+ }
+ if !done {
+ rawResp, err := cxn.readResponse(nil, req.Key(), req.GetVersion(), corrID, req.IsFlexible(), rt, bytesWritten, writeWait, timeToWrite, readEnqueue)
+ if err != nil {
+ return err
+ }
+ resp := req.ResponseKind().(*kmsg.SASLAuthenticateResponse)
+ if err = resp.ReadFrom(rawResp); err != nil {
+ return err
+ }
+
+ if err = kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ if resp.ErrorMessage != nil {
+ return fmt.Errorf("%s: %w", *resp.ErrorMessage, err)
+ }
+ return err
+ }
+ challenge = resp.SASLAuthBytes
+ lifetimeMillis = resp.SessionLifetimeMillis
+ }
+ }
+
+ clientWrite = nil
+
+ if !done {
+ if done, clientWrite, err = session.Challenge(challenge); err != nil {
+ return err
+ }
+ }
+ }
+
+ if lifetimeMillis > 0 {
+ // Lifetime is problematic. We need to be a bit pessimistic.
+ //
+ // We want a lowerbound: we use 1s (arbitrary), but if 1.1x our
+ // e2e sasl latency is more than 1s, we use the latency.
+ //
+ // We do not want to reauthenticate too close to the lifetime
+ // especially for larger lifetimes due to clock issues (#205).
+ // We take 95% to 98% of the lifetime.
+ minPessimismMillis := float64(time.Second.Milliseconds())
+ latencyMillis := 1.1 * float64(time.Since(prereq).Milliseconds())
+ if latencyMillis > minPessimismMillis {
+ minPessimismMillis = latencyMillis
+ }
+ var random float64
+ cxn.b.cl.rng(func(r *rand.Rand) { random = r.Float64() })
+ maxPessimismMillis := float64(lifetimeMillis) * (0.05 - 0.03*random) // 95 to 98% of lifetime (pessimism 2% to 5%)
+
+ // Our minimum lifetime is always 1s (or latency, if larger).
+ // When our max pessimism becomes more than min pessimism,
+ // every second after, we add between 0.05s or 0.08s to our
+ // backoff. At 12hr, we reauth ~24 to 28min before the
+ // lifetime.
+ usePessimismMillis := maxPessimismMillis
+ if minPessimismMillis > maxPessimismMillis {
+ usePessimismMillis = minPessimismMillis
+ }
+ useLifetimeMillis := lifetimeMillis - int64(usePessimismMillis)
+
+ // Subtracting our min pessimism may result in our connection
+ // immediately expiring. We always accept this one reauth to
+ // issue our one request, and our next request will again
+ // reauth. Brokers should give us longer lifetimes, but that
+ // may not always happen (see #136, #249).
+ now := time.Now()
+ cxn.expiry = now.Add(time.Duration(useLifetimeMillis) * time.Millisecond)
+ cxn.cl.cfg.logger.Log(LogLevelDebug, "sasl has a limited lifetime",
+ "broker", logID(cxn.b.meta.NodeID),
+ "session_lifetime", time.Duration(lifetimeMillis)*time.Millisecond,
+ "lifetime_pessimism", time.Duration(usePessimismMillis)*time.Millisecond,
+ "reauthenticate_in", cxn.expiry.Sub(now),
+ )
+ }
+ return nil
+}
+
+// Some internal requests use the client context to issue requests, so if the
+// client is closed, this select case can be selected. We want to return the
+// proper error.
+//
+// This function is used in this file anywhere the client context can cause
+// ErrClientClosed.
+func maybeUpdateCtxErr(clientCtx, reqCtx context.Context, err *error) {
+ if clientCtx == reqCtx {
+ *err = ErrClientClosed
+ }
+}
+
+// writeRequest writes a message request to the broker connection, bumping the
+// connection's correlation ID as appropriate for the next write.
+func (cxn *brokerCxn) writeRequest(ctx context.Context, enqueuedForWritingAt time.Time, req kmsg.Request) (corrID int32, bytesWritten int, writeWait, timeToWrite time.Duration, readEnqueue time.Time, writeErr error) {
+ // A nil ctx means we cannot be throttled.
+ if ctx != nil {
+ throttleUntil := time.Unix(0, cxn.throttleUntil.Load())
+ if sleep := time.Until(throttleUntil); sleep > 0 {
+ after := time.NewTimer(sleep)
+ select {
+ case <-after.C:
+ case <-ctx.Done():
+ writeErr = ctx.Err()
+ maybeUpdateCtxErr(cxn.cl.ctx, ctx, &writeErr)
+ case <-cxn.cl.ctx.Done():
+ writeErr = ErrClientClosed
+ case <-cxn.deadCh:
+ writeErr = errChosenBrokerDead
+ }
+ if writeErr != nil {
+ after.Stop()
+ writeWait = time.Since(enqueuedForWritingAt)
+ return
+ }
+ }
+ }
+
+ buf := cxn.cl.reqFormatter.AppendRequest(
+ cxn.cl.bufPool.get()[:0],
+ req,
+ cxn.corrID,
+ )
+
+ _, wt := cxn.cl.connTimeouter.timeouts(req)
+ bytesWritten, writeWait, timeToWrite, readEnqueue, writeErr = cxn.writeConn(ctx, buf, wt, enqueuedForWritingAt)
+
+ cxn.cl.bufPool.put(buf)
+
+ cxn.cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookBrokerWrite); ok {
+ h.OnBrokerWrite(cxn.b.meta, req.Key(), bytesWritten, writeWait, timeToWrite, writeErr)
+ }
+ })
+ if logger := cxn.cl.cfg.logger; logger.Level() >= LogLevelDebug {
+ logger.Log(LogLevelDebug, fmt.Sprintf("wrote %s v%d", kmsg.NameForKey(req.Key()), req.GetVersion()), "broker", logID(cxn.b.meta.NodeID), "bytes_written", bytesWritten, "write_wait", writeWait, "time_to_write", timeToWrite, "err", writeErr)
+ }
+
+ if writeErr != nil {
+ return
+ }
+ corrID = cxn.corrID
+ cxn.corrID++
+ if cxn.corrID < 0 {
+ cxn.corrID = 0
+ }
+ return
+}
+
+func (cxn *brokerCxn) writeConn(
+ ctx context.Context,
+ buf []byte,
+ timeout time.Duration,
+ enqueuedForWritingAt time.Time,
+) (bytesWritten int, writeWait, timeToWrite time.Duration, readEnqueue time.Time, writeErr error) {
+ cxn.writing.Store(true)
+ defer func() {
+ cxn.lastWrite.Store(time.Now().UnixNano())
+ cxn.writing.Store(false)
+ }()
+
+ if ctx == nil {
+ ctx = context.Background()
+ }
+ if timeout > 0 {
+ cxn.conn.SetWriteDeadline(time.Now().Add(timeout))
+ }
+ defer cxn.conn.SetWriteDeadline(time.Time{})
+ writeDone := make(chan struct{})
+ go func() {
+ defer close(writeDone)
+ writeStart := time.Now()
+ bytesWritten, writeErr = cxn.conn.Write(buf)
+ // As soon as we are done writing, we track that we have now
+ // enqueued this request for reading.
+ readEnqueue = time.Now()
+ writeWait = writeStart.Sub(enqueuedForWritingAt)
+ timeToWrite = readEnqueue.Sub(writeStart)
+ }()
+ select {
+ case <-writeDone:
+ case <-cxn.cl.ctx.Done():
+ cxn.conn.SetWriteDeadline(time.Now())
+ <-writeDone
+ if writeErr != nil {
+ writeErr = ErrClientClosed
+ }
+ case <-ctx.Done():
+ cxn.conn.SetWriteDeadline(time.Now())
+ <-writeDone
+ if writeErr != nil && ctx.Err() != nil {
+ writeErr = ctx.Err()
+ maybeUpdateCtxErr(cxn.cl.ctx, ctx, &writeErr)
+ }
+ }
+ return
+}
+
+func (cxn *brokerCxn) readConn(
+ ctx context.Context,
+ timeout time.Duration,
+ enqueuedForReadingAt time.Time,
+) (nread int, buf []byte, readWait, timeToRead time.Duration, err error) {
+ cxn.reading.Store(true)
+ defer func() {
+ cxn.lastRead.Store(time.Now().UnixNano())
+ cxn.reading.Store(false)
+ }()
+
+ if ctx == nil {
+ ctx = context.Background()
+ }
+ if timeout > 0 {
+ cxn.conn.SetReadDeadline(time.Now().Add(timeout))
+ }
+ defer cxn.conn.SetReadDeadline(time.Time{})
+ readDone := make(chan struct{})
+ go func() {
+ defer close(readDone)
+ sizeBuf := make([]byte, 4)
+ readStart := time.Now()
+ defer func() {
+ timeToRead = time.Since(readStart)
+ readWait = readStart.Sub(enqueuedForReadingAt)
+ }()
+ if nread, err = io.ReadFull(cxn.conn, sizeBuf); err != nil {
+ return
+ }
+ var size int32
+ if size, err = cxn.parseReadSize(sizeBuf); err != nil {
+ return
+ }
+ buf = make([]byte, size)
+ var nread2 int
+ nread2, err = io.ReadFull(cxn.conn, buf)
+ nread += nread2
+ buf = buf[:nread2]
+ if err != nil {
+ return
+ }
+ }()
+ select {
+ case <-readDone:
+ case <-cxn.cl.ctx.Done():
+ cxn.conn.SetReadDeadline(time.Now())
+ <-readDone
+ if err != nil {
+ err = ErrClientClosed
+ }
+ case <-ctx.Done():
+ cxn.conn.SetReadDeadline(time.Now())
+ <-readDone
+ if err != nil && ctx.Err() != nil {
+ err = ctx.Err()
+ maybeUpdateCtxErr(cxn.cl.ctx, ctx, &err)
+ }
+ }
+ return
+}
+
+// Parses a length 4 slice and enforces the min / max read size based off the
+// client configuration.
+func (cxn *brokerCxn) parseReadSize(sizeBuf []byte) (int32, error) {
+ size := int32(binary.BigEndian.Uint32(sizeBuf))
+ if size < 0 {
+ return 0, fmt.Errorf("invalid negative response size %d", size)
+ }
+ if maxSize := cxn.b.cl.cfg.maxBrokerReadBytes; size > maxSize {
+ if size == 0x48545450 { // "HTTP"
+ return 0, fmt.Errorf("invalid large response size %d > limit %d; the four size bytes are 'HTTP' in ascii, the beginning of an HTTP response; is your broker port correct?", size, maxSize)
+ }
+ // A TLS alert is 21, and a TLS alert has the version
+ // following, where all major versions are 03xx. We
+ // look for an alert and major version byte to suspect
+ // if this we received a TLS alert.
+ tlsVersion := uint16(sizeBuf[1])<<8 | uint16(sizeBuf[2])
+ if sizeBuf[0] == 21 && tlsVersion&0x0300 != 0 {
+ versionGuess := fmt.Sprintf("unknown TLS version (hex %x)", tlsVersion)
+ for _, guess := range []struct {
+ num uint16
+ text string
+ }{
+ {tls.VersionSSL30, "SSL v3"},
+ {tls.VersionTLS10, "TLS v1.0"},
+ {tls.VersionTLS11, "TLS v1.1"},
+ {tls.VersionTLS12, "TLS v1.2"},
+ {tls.VersionTLS13, "TLS v1.3"},
+ } {
+ if tlsVersion == guess.num {
+ versionGuess = guess.text
+ }
+ }
+ return 0, fmt.Errorf("invalid large response size %d > limit %d; the first three bytes received appear to be a tls alert record for %s; is this a plaintext connection speaking to a tls endpoint?", size, maxSize, versionGuess)
+ }
+ return 0, fmt.Errorf("invalid large response size %d > limit %d", size, maxSize)
+ }
+ return size, nil
+}
+
+// readResponse reads a response from conn, ensures the correlation ID is
+// correct, and returns a newly allocated slice on success.
+//
+// This takes a bunch of extra arguments in support of HookBrokerE2E, overall
+// this function takes 11 bytes in arguments.
+func (cxn *brokerCxn) readResponse(
+ ctx context.Context,
+ key int16,
+ version int16,
+ corrID int32,
+ flexibleHeader bool,
+ timeout time.Duration,
+ bytesWritten int,
+ writeWait time.Duration,
+ timeToWrite time.Duration,
+ readEnqueue time.Time,
+) ([]byte, error) {
+ bytesRead, buf, readWait, timeToRead, readErr := cxn.readConn(ctx, timeout, readEnqueue)
+
+ cxn.cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookBrokerRead); ok {
+ h.OnBrokerRead(cxn.b.meta, key, bytesRead, readWait, timeToRead, readErr)
+ }
+ if h, ok := h.(HookBrokerE2E); ok {
+ h.OnBrokerE2E(cxn.b.meta, key, BrokerE2E{
+ BytesWritten: bytesWritten,
+ BytesRead: bytesRead,
+ WriteWait: writeWait,
+ TimeToWrite: timeToWrite,
+ ReadWait: readWait,
+ TimeToRead: timeToRead,
+ ReadErr: readErr,
+ })
+ }
+ })
+ if logger := cxn.cl.cfg.logger; logger.Level() >= LogLevelDebug {
+ logger.Log(LogLevelDebug, fmt.Sprintf("read %s v%d", kmsg.NameForKey(key), version), "broker", logID(cxn.b.meta.NodeID), "bytes_read", bytesRead, "read_wait", readWait, "time_to_read", timeToRead, "err", readErr)
+ }
+
+ if readErr != nil {
+ return nil, readErr
+ }
+ if len(buf) < 4 {
+ return nil, kbin.ErrNotEnoughData
+ }
+ gotID := int32(binary.BigEndian.Uint32(buf))
+ if gotID != corrID {
+ return nil, errCorrelationIDMismatch
+ }
+ // If the response header is flexible, we skip the tags at the end of
+ // it. They are currently unused.
+ if flexibleHeader {
+ b := kbin.Reader{Src: buf[4:]}
+ kmsg.SkipTags(&b)
+ return b.Src, b.Complete()
+ }
+ return buf[4:], nil
+}
+
+// closeConn is the one place we close broker connections. This is always done
+// in either die, which is called when handleResps returns, or if init fails,
+// which means we did not succeed enough to start handleResps.
+func (cxn *brokerCxn) closeConn() {
+ cxn.cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookBrokerDisconnect); ok {
+ h.OnBrokerDisconnect(cxn.b.meta, cxn.conn)
+ }
+ })
+ cxn.conn.Close()
+ close(cxn.deadCh)
+}
+
+// die kills a broker connection (which could be dead already) and replies to
+// all requests awaiting responses appropriately.
+func (cxn *brokerCxn) die() {
+ if cxn == nil || cxn.dead.Swap(true) {
+ return
+ }
+ cxn.closeConn()
+ cxn.resps.die()
+}
+
+// waitResp, called serially by a broker's handleReqs, manages handling a
+// message requests's response.
+func (cxn *brokerCxn) waitResp(pr promisedResp) {
+ first, dead := cxn.resps.push(pr)
+ if first {
+ go cxn.handleResps(pr)
+ } else if dead {
+ pr.promise(nil, errChosenBrokerDead)
+ cxn.hookWriteE2E(pr.resp.Key(), pr.bytesWritten, pr.writeWait, pr.timeToWrite, errChosenBrokerDead)
+ }
+}
+
+// If acks are zero, then a real Kafka installation never replies to produce
+// requests. Unfortunately, Microsoft EventHubs rolled their own implementation
+// and _does_ reply to ack-0 produce requests. We need to process these
+// responses, because otherwise kernel buffers will fill up, Microsoft will be
+// unable to reply, and then they will stop taking our produce requests.
+//
+// Thus, we just simply discard everything.
+//
+// Since we still want to support hooks, we still read the size of a response
+// and then read that entire size before calling a hook. There are a few
+// differences:
+//
+// (1) we do not know what version we produced, so we cannot validate the read,
+// we just have to trust that the size is valid (and the data follows
+// correctly).
+//
+// (2) rather than creating a slice for the response, we discard the entire
+// response into a reusable small slice. The small size is because produce
+// responses are relatively small to begin with, so we expect only a few reads
+// per response.
+//
+// (3) we have no time for when the read was enqueued, so we miss that in the
+// hook.
+//
+// (4) we start the time-to-read duration *after* the size bytes are read,
+// since we have no idea when a read actually should start, since we should not
+// receive responses to begin with.
+//
+// (5) we set a read deadline *after* the size bytes are read, and only if the
+// client has not yet closed.
+func (cxn *brokerCxn) discard() {
+ var firstTimeout bool
+ defer func() {
+ if !firstTimeout { // see below
+ cxn.die()
+ } else {
+ cxn.b.cl.cfg.logger.Log(LogLevelDebug, "produce acks==0 discard goroutine exiting; this broker looks to correctly not reply to ack==0 produce requests", "addr", cxn.b.addr, "broker", logID(cxn.b.meta.NodeID))
+ }
+ }()
+
+ discardBuf := make([]byte, 256)
+ for i := 0; ; i++ {
+ var (
+ nread int
+ err error
+ timeToRead time.Duration
+
+ deadlineMu sync.Mutex
+ deadlineSet bool
+
+ readDone = make(chan struct{})
+ )
+
+ // On all but the first request, we use no deadline. We could
+ // be hanging reading while we wait for more produce requests.
+ // We know we are talking to azure when i > 0 and we should not
+ // quit this goroutine.
+ //
+ // However, on the *first* produce request, we know that we are
+ // writing *right now*. We can deadline our read side with
+ // ample overhead, and if this first read hits the deadline,
+ // then we can quit this discard / read goroutine with no
+ // problems.
+ //
+ // We choose 3x our timeouts:
+ // - first we cover the write, connTimeoutOverhead + produceTimeout
+ // - then we cover the read, connTimeoutOverhead
+ // - then we throw in another connTimeoutOverhead just to be sure
+ //
+ deadline := time.Time{}
+ if i == 0 {
+ deadline = time.Now().Add(3*cxn.cl.cfg.requestTimeoutOverhead + cxn.cl.cfg.produceTimeout)
+ }
+ cxn.conn.SetReadDeadline(deadline)
+
+ go func() {
+ defer close(readDone)
+ if nread, err = io.ReadFull(cxn.conn, discardBuf[:4]); err != nil {
+ if i == 0 && errors.Is(err, os.ErrDeadlineExceeded) {
+ firstTimeout = true
+ }
+ return
+ }
+ deadlineMu.Lock()
+ if !deadlineSet {
+ cxn.conn.SetReadDeadline(time.Now().Add(cxn.cl.cfg.produceTimeout))
+ }
+ deadlineMu.Unlock()
+
+ cxn.reading.Store(true)
+ defer func() {
+ cxn.lastRead.Store(time.Now().UnixNano())
+ cxn.reading.Store(false)
+ }()
+
+ readStart := time.Now()
+ defer func() { timeToRead = time.Since(readStart) }()
+ var size int32
+ if size, err = cxn.parseReadSize(discardBuf[:4]); err != nil {
+ return
+ }
+
+ var nread2 int
+ for size > 0 && err == nil {
+ discard := discardBuf
+ if int(size) < len(discard) {
+ discard = discard[:size]
+ }
+ nread2, err = cxn.conn.Read(discard)
+ nread += nread2
+ size -= int32(nread2) // nread2 max is 128
+ }
+ }()
+
+ select {
+ case <-readDone:
+ case <-cxn.cl.ctx.Done():
+ deadlineMu.Lock()
+ deadlineSet = true
+ deadlineMu.Unlock()
+ cxn.conn.SetReadDeadline(time.Now())
+ <-readDone
+ return
+ }
+
+ cxn.cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookBrokerRead); ok {
+ h.OnBrokerRead(cxn.b.meta, 0, nread, 0, timeToRead, err)
+ }
+ })
+ if err != nil {
+ return
+ }
+ }
+}
+
+// handleResps serially handles all broker responses for an single connection.
+func (cxn *brokerCxn) handleResps(pr promisedResp) {
+ var more, dead bool
+start:
+ if dead {
+ pr.promise(nil, errChosenBrokerDead)
+ cxn.hookWriteE2E(pr.resp.Key(), pr.bytesWritten, pr.writeWait, pr.timeToWrite, errChosenBrokerDead)
+ } else {
+ cxn.handleResp(pr)
+ }
+
+ pr, more, dead = cxn.resps.dropPeek()
+ if more {
+ goto start
+ }
+}
+
+func (cxn *brokerCxn) handleResp(pr promisedResp) {
+ rawResp, err := cxn.readResponse(
+ pr.ctx,
+ pr.resp.Key(),
+ pr.resp.GetVersion(),
+ pr.corrID,
+ pr.flexibleHeader,
+ pr.readTimeout,
+ pr.bytesWritten,
+ pr.writeWait,
+ pr.timeToWrite,
+ pr.readEnqueue,
+ )
+ if err != nil {
+ if !errors.Is(err, ErrClientClosed) && !errors.Is(err, context.Canceled) {
+ if cxn.successes > 0 || len(cxn.b.cl.cfg.sasls) > 0 {
+ cxn.b.cl.cfg.logger.Log(LogLevelDebug, "read from broker errored, killing connection", "req", kmsg.Key(pr.resp.Key()).Name(), "addr", cxn.b.addr, "broker", logID(cxn.b.meta.NodeID), "successful_reads", cxn.successes, "err", err)
+ } else {
+ cxn.b.cl.cfg.logger.Log(LogLevelWarn, "read from broker errored, killing connection after 0 successful responses (is SASL missing?)", "req", kmsg.Key(pr.resp.Key()).Name(), "addr", cxn.b.addr, "broker", logID(cxn.b.meta.NodeID), "err", err)
+ if err == io.EOF { // specifically avoid checking errors.Is to ensure this is not already wrapped
+ err = &ErrFirstReadEOF{kind: firstReadSASL, err: err}
+ }
+ }
+ }
+ pr.promise(nil, err)
+ cxn.die()
+ return
+ }
+
+ cxn.successes++
+ readErr := pr.resp.ReadFrom(rawResp)
+
+ // If we had no error, we read the response successfully.
+ //
+ // Any response that can cause throttling satisfies the
+ // kmsg.ThrottleResponse interface. We check that here.
+ if readErr == nil {
+ if throttleResponse, ok := pr.resp.(kmsg.ThrottleResponse); ok {
+ millis, throttlesAfterResp := throttleResponse.Throttle()
+ if millis > 0 {
+ cxn.b.cl.cfg.logger.Log(LogLevelInfo, "broker is throttling us in response", "broker", logID(cxn.b.meta.NodeID), "req", kmsg.Key(pr.resp.Key()).Name(), "throttle_millis", millis, "throttles_after_resp", throttlesAfterResp)
+ if throttlesAfterResp {
+ throttleUntil := time.Now().Add(time.Millisecond * time.Duration(millis)).UnixNano()
+ if throttleUntil > cxn.throttleUntil.Load() {
+ cxn.throttleUntil.Store(throttleUntil)
+ }
+ }
+ cxn.cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookBrokerThrottle); ok {
+ h.OnBrokerThrottle(cxn.b.meta, time.Duration(millis)*time.Millisecond, throttlesAfterResp)
+ }
+ })
+ }
+ }
+ }
+
+ pr.promise(pr.resp, readErr)
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/client.go b/vendor/github.com/twmb/franz-go/pkg/kgo/client.go
new file mode 100644
index 0000000000000..775a22e6ee215
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/client.go
@@ -0,0 +1,4553 @@
+// Package kgo provides a pure Go efficient Kafka client for Kafka 0.8+ with
+// support for transactions, regex topic consuming, the latest partition
+// strategies, and more. This client supports all client related KIPs.
+//
+// This client aims to be simple to use while still interacting with Kafka in a
+// near ideal way. For more overview of the entire client itself, please see
+// the README on the project's Github page.
+package kgo
+
+import (
+ "context"
+ "crypto/tls"
+ "errors"
+ "fmt"
+ "hash/crc32"
+ "math/rand"
+ "net"
+ "reflect"
+ "runtime"
+ "sort"
+ "strconv"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+ "github.com/twmb/franz-go/pkg/sasl"
+)
+
+var crc32c = crc32.MakeTable(crc32.Castagnoli) // record crc's use Castagnoli table; for consuming/producing
+
+// Client issues requests and handles responses to a Kafka cluster.
+type Client struct {
+ cfg cfg
+ opts []Opt
+
+ ctx context.Context
+ ctxCancel func()
+
+ rng func(func(*rand.Rand))
+
+ brokersMu sync.RWMutex
+ brokers []*broker // ordered by broker ID
+ seeds atomic.Value // []*broker, seed brokers, also ordered by ID
+ anyBrokerOrd []int32 // shuffled brokers, for random ordering
+ anySeedIdx int32
+ stopBrokers bool // set to true on close to stop updateBrokers
+
+ // A sink and a source is created once per node ID and persists
+ // forever. We expect the list to be small.
+ //
+ // The mutex only exists to allow consumer session stopping to read
+ // sources to notify when starting a session; all writes happen in the
+ // metadata loop.
+ sinksAndSourcesMu sync.Mutex
+ sinksAndSources map[int32]sinkAndSource
+
+ reqFormatter *kmsg.RequestFormatter
+ connTimeouter connTimeouter
+
+ bufPool bufPool // for to brokers to share underlying reusable request buffers
+ prsPool prsPool // for sinks to reuse []promisedNumberedRecord
+
+ controllerIDMu sync.Mutex
+ controllerID int32
+
+ // The following two ensure that we only have one fetchBrokerMetadata
+ // at once. This avoids unnecessary broker metadata requests and
+ // metadata trampling.
+ fetchingBrokersMu sync.Mutex
+ fetchingBrokers *struct {
+ done chan struct{}
+ err error
+ }
+
+ producer producer
+ consumer consumer
+
+ compressor *compressor
+ decompressor *decompressor
+
+ coordinatorsMu sync.Mutex
+ coordinators map[coordinatorKey]*coordinatorLoad
+
+ updateMetadataCh chan string
+ updateMetadataNowCh chan string // like above, but with high priority
+ blockingMetadataFnCh chan func()
+ metawait metawait
+ metadone chan struct{}
+
+ mappedMetaMu sync.Mutex
+ mappedMeta map[string]mappedMetadataTopic
+}
+
+func (cl *Client) idempotent() bool { return !cl.cfg.disableIdempotency }
+
+type sinkAndSource struct {
+ sink *sink
+ source *source
+}
+
+func (cl *Client) allSinksAndSources(fn func(sns sinkAndSource)) {
+ cl.sinksAndSourcesMu.Lock()
+ defer cl.sinksAndSourcesMu.Unlock()
+
+ for _, sns := range cl.sinksAndSources {
+ fn(sns)
+ }
+}
+
+type hostport struct {
+ host string
+ port int32
+}
+
+// ValidateOpts returns an error if the options are invalid.
+func ValidateOpts(opts ...Opt) error {
+ _, _, _, err := validateCfg(opts...)
+ return err
+}
+
+func parseSeeds(addrs []string) ([]hostport, error) {
+ seeds := make([]hostport, 0, len(addrs))
+ for _, seedBroker := range addrs {
+ hp, err := parseBrokerAddr(seedBroker)
+ if err != nil {
+ return nil, err
+ }
+ seeds = append(seeds, hp)
+ }
+ return seeds, nil
+}
+
+// This function validates the configuration and returns a few things that we
+// initialize while validating. The difference between this and NewClient
+// initialization is all NewClient initialization is infallible.
+func validateCfg(opts ...Opt) (cfg, []hostport, *compressor, error) {
+ cfg := defaultCfg()
+ for _, opt := range opts {
+ opt.apply(&cfg)
+ }
+ if err := cfg.validate(); err != nil {
+ return cfg, nil, nil, err
+ }
+ seeds, err := parseSeeds(cfg.seedBrokers)
+ if err != nil {
+ return cfg, nil, nil, err
+ }
+ compressor, err := newCompressor(cfg.compression...)
+ if err != nil {
+ return cfg, nil, nil, err
+ }
+ return cfg, seeds, compressor, nil
+}
+
+func namefn(fn any) string {
+ v := reflect.ValueOf(fn)
+ if v.Type().Kind() != reflect.Func {
+ return ""
+ }
+ name := runtime.FuncForPC(v.Pointer()).Name()
+ dot := strings.LastIndexByte(name, '.')
+ if dot >= 0 {
+ return name[dot+1:]
+ }
+ return name
+}
+
+// OptValue returns the value for the given configuration option. If the
+// given option does not exist, this returns nil. This function takes either a
+// raw Opt, or an Opt function name.
+//
+// If a configuration option has multiple inputs, this function returns only
+// the first input. If the function is a boolean function (such as
+// BlockRebalanceOnPoll), this function returns the value of the internal bool.
+// Variadic option inputs are returned as a single slice. Options that are
+// internally stored as a pointer (ClientID, TransactionalID, and InstanceID)
+// are returned as their string input; you can see if the option is internally
+// nil by looking at the second value returned from OptValues.
+//
+// var (
+// cl, _ := NewClient(
+// InstanceID("foo"),
+// ConsumeTopics("foo", "bar"),
+// )
+// iid = cl.OptValue(InstanceID) // iid is "foo"
+// gid = cl.OptValue(ConsumerGroup) // gid is "" since groups are not used
+// topics = cl.OptValue("ConsumeTopics") // topics is []string{"foo", "bar"}; string lookup for the option works
+// bpoll = cl.OptValue(BlockRebalanceOnPoll) // bpoll is false
+// t = cl.OptValue(SessionTimeout) // t is 45s, the internal default
+// td = t.(time.Duration) // safe conversion since SessionTimeout's input is a time.Duration
+// unk = cl.OptValue("Unknown"), // unk is nil
+// )
+func (cl *Client) OptValue(opt any) any {
+ vs := cl.OptValues(opt)
+ if len(vs) > 0 {
+ return vs[0]
+ }
+ return nil
+}
+
+// OptValues returns all values for options. This method is useful for
+// options that have multiple inputs (notably, SoftwareNameAndVersion). This is
+// also useful for options that are internally stored as a pointer (ClientID,
+// TransactionalID, and InstanceID) -- this function will return the string
+// value of the option but also whether the option is non-nil. Boolean options
+// are returned as a single-element slice with the bool value. Variadic inputs
+// are returned as a signle slice. If the input option does not exist, this
+// returns nil.
+//
+// var (
+// cl, _ = NewClient(
+// InstanceID("foo"),
+// ConsumeTopics("foo", "bar"),
+// )
+// idValues = cl.OptValues(InstanceID) // idValues is []any{"foo", true}
+// tValues = cl.OptValues(SessionTimeout) // tValues is []any{45 * time.Second}
+// topics = cl.OptValues(ConsumeTopics) // topics is []any{[]string{"foo", "bar"}
+// bpoll = cl.OptValues(BlockRebalanceOnPoll) // bpoll is []any{false}
+// unknown = cl.OptValues("Unknown") // unknown is nil
+// )
+func (cl *Client) OptValues(opt any) []any {
+ name := namefn(opt)
+ if s, ok := opt.(string); ok {
+ name = s
+ }
+ cfg := &cl.cfg
+
+ switch name {
+ case namefn(ClientID):
+ if cfg.id != nil {
+ return []any{*cfg.id, true}
+ }
+ return []any{"", false}
+ case namefn(SoftwareNameAndVersion):
+ return []any{cfg.softwareName, cfg.softwareVersion}
+ case namefn(WithLogger):
+ if _, wrapped := cfg.logger.(*wrappedLogger); wrapped {
+ return []any{cfg.logger.(*wrappedLogger).inner}
+ }
+ return []any{nil}
+ case namefn(RequestTimeoutOverhead):
+ return []any{cfg.requestTimeoutOverhead}
+ case namefn(ConnIdleTimeout):
+ return []any{cfg.connIdleTimeout}
+ case namefn(Dialer):
+ return []any{cfg.dialFn}
+ case namefn(DialTLSConfig):
+ return []any{cfg.dialTLS}
+ case namefn(DialTLS):
+ return []any{cfg.dialTLS != nil}
+ case namefn(SeedBrokers):
+ return []any{cfg.seedBrokers}
+ case namefn(MaxVersions):
+ return []any{cfg.maxVersions}
+ case namefn(MinVersions):
+ return []any{cfg.minVersions}
+ case namefn(RetryBackoffFn):
+ return []any{cfg.retryBackoff}
+ case namefn(RequestRetries):
+ return []any{cfg.retries}
+ case namefn(RetryTimeout):
+ return []any{cfg.retryTimeout(0)}
+ case namefn(RetryTimeoutFn):
+ return []any{cfg.retryTimeout}
+ case namefn(AllowAutoTopicCreation):
+ return []any{cfg.allowAutoTopicCreation}
+ case namefn(BrokerMaxWriteBytes):
+ return []any{cfg.maxBrokerWriteBytes}
+ case namefn(BrokerMaxReadBytes):
+ return []any{cfg.maxBrokerReadBytes}
+ case namefn(MetadataMaxAge):
+ return []any{cfg.metadataMaxAge}
+ case namefn(MetadataMinAge):
+ return []any{cfg.metadataMinAge}
+ case namefn(SASL):
+ return []any{cfg.sasls}
+ case namefn(WithHooks):
+ return []any{cfg.hooks}
+ case namefn(ConcurrentTransactionsBackoff):
+ return []any{cfg.txnBackoff}
+ case namefn(ConsiderMissingTopicDeletedAfter):
+ return []any{cfg.missingTopicDelete}
+
+ case namefn(DefaultProduceTopic):
+ return []any{cfg.defaultProduceTopic}
+ case namefn(RequiredAcks):
+ return []any{cfg.acks}
+ case namefn(DisableIdempotentWrite):
+ return []any{cfg.disableIdempotency}
+ case namefn(MaxProduceRequestsInflightPerBroker):
+ return []any{cfg.maxProduceInflight}
+ case namefn(ProducerBatchCompression):
+ return []any{cfg.compression}
+ case namefn(ProducerBatchMaxBytes):
+ return []any{cfg.maxRecordBatchBytes}
+ case namefn(MaxBufferedRecords):
+ return []any{cfg.maxBufferedRecords}
+ case namefn(MaxBufferedBytes):
+ return []any{cfg.maxBufferedBytes}
+ case namefn(RecordPartitioner):
+ return []any{cfg.partitioner}
+ case namefn(ProduceRequestTimeout):
+ return []any{cfg.produceTimeout}
+ case namefn(RecordRetries):
+ return []any{cfg.recordRetries}
+ case namefn(UnknownTopicRetries):
+ return []any{cfg.maxUnknownFailures}
+ case namefn(StopProducerOnDataLossDetected):
+ return []any{cfg.stopOnDataLoss}
+ case namefn(ProducerOnDataLossDetected):
+ return []any{cfg.onDataLoss}
+ case namefn(ProducerLinger):
+ return []any{cfg.linger}
+ case namefn(ManualFlushing):
+ return []any{cfg.manualFlushing}
+ case namefn(RecordDeliveryTimeout):
+ return []any{cfg.recordTimeout}
+ case namefn(TransactionalID):
+ if cfg.txnID != nil {
+ return []any{cfg.txnID, true}
+ }
+ return []any{"", false}
+ case namefn(TransactionTimeout):
+ return []any{cfg.txnTimeout}
+
+ case namefn(ConsumePartitions):
+ return []any{cfg.partitions}
+ case namefn(ConsumePreferringLagFn):
+ return []any{cfg.preferLagFn}
+ case namefn(ConsumeRegex):
+ return []any{cfg.regex}
+ case namefn(ConsumeResetOffset):
+ return []any{cfg.resetOffset}
+ case namefn(ConsumeTopics):
+ return []any{cfg.topics}
+ case namefn(DisableFetchSessions):
+ return []any{cfg.disableFetchSessions}
+ case namefn(FetchIsolationLevel):
+ return []any{cfg.isolationLevel}
+ case namefn(FetchMaxBytes):
+ return []any{int32(cfg.maxBytes)}
+ case namefn(FetchMaxPartitionBytes):
+ return []any{int32(cfg.maxPartBytes)}
+ case namefn(FetchMaxWait):
+ return []any{time.Duration(cfg.maxWait) * time.Millisecond}
+ case namefn(FetchMinBytes):
+ return []any{cfg.minBytes}
+ case namefn(KeepControlRecords):
+ return []any{cfg.keepControl}
+ case namefn(MaxConcurrentFetches):
+ return []any{cfg.maxConcurrentFetches}
+ case namefn(Rack):
+ return []any{cfg.rack}
+ case namefn(KeepRetryableFetchErrors):
+ return []any{cfg.keepRetryableFetchErrors}
+
+ case namefn(AdjustFetchOffsetsFn):
+ return []any{cfg.adjustOffsetsBeforeAssign}
+ case namefn(AutoCommitCallback):
+ return []any{cfg.commitCallback}
+ case namefn(AutoCommitInterval):
+ return []any{cfg.autocommitInterval}
+ case namefn(AutoCommitMarks):
+ return []any{cfg.autocommitMarks}
+ case namefn(Balancers):
+ return []any{cfg.balancers}
+ case namefn(BlockRebalanceOnPoll):
+ return []any{cfg.blockRebalanceOnPoll}
+ case namefn(ConsumerGroup):
+ return []any{cfg.group}
+ case namefn(DisableAutoCommit):
+ return []any{cfg.autocommitDisable}
+ case namefn(GreedyAutoCommit):
+ return []any{cfg.autocommitGreedy}
+ case namefn(GroupProtocol):
+ return []any{cfg.protocol}
+ case namefn(HeartbeatInterval):
+ return []any{cfg.heartbeatInterval}
+ case namefn(InstanceID):
+ if cfg.instanceID != nil {
+ return []any{*cfg.instanceID, true}
+ }
+ return []any{"", false}
+ case namefn(OnOffsetsFetched):
+ return []any{cfg.onFetched}
+ case namefn(OnPartitionsAssigned):
+ return []any{cfg.onAssigned}
+ case namefn(OnPartitionsLost):
+ return []any{cfg.onLost}
+ case namefn(OnPartitionsRevoked):
+ return []any{cfg.onRevoked}
+ case namefn(RebalanceTimeout):
+ return []any{cfg.rebalanceTimeout}
+ case namefn(RequireStableFetchOffsets):
+ return []any{cfg.requireStable}
+ case namefn(SessionTimeout):
+ return []any{cfg.sessionTimeout}
+ default:
+ return nil
+ }
+}
+
+// NewClient returns a new Kafka client with the given options or an error if
+// the options are invalid. Connections to brokers are lazily created only when
+// requests are written to them.
+//
+// By default, the client uses the latest stable request versions when talking
+// to Kafka. If you use a broker older than 0.10.0, then you need to manually
+// set a MaxVersions option. Otherwise, there is usually no harm in defaulting
+// to the latest API versions, although occasionally Kafka introduces new
+// required parameters that do not have zero value defaults.
+//
+// NewClient also launches a goroutine which periodically updates the cached
+// topic metadata.
+func NewClient(opts ...Opt) (*Client, error) {
+ cfg, seeds, compressor, err := validateCfg(opts...)
+ if err != nil {
+ return nil, err
+ }
+
+ if cfg.retryTimeout == nil {
+ cfg.retryTimeout = func(key int16) time.Duration {
+ switch key {
+ case ((*kmsg.JoinGroupRequest)(nil)).Key(),
+ ((*kmsg.SyncGroupRequest)(nil)).Key(),
+ ((*kmsg.HeartbeatRequest)(nil)).Key():
+ return cfg.sessionTimeout
+ }
+ return 30 * time.Second
+ }
+ }
+
+ if cfg.dialFn == nil {
+ dialer := &net.Dialer{Timeout: cfg.dialTimeout}
+ cfg.dialFn = dialer.DialContext
+ if cfg.dialTLS != nil {
+ cfg.dialFn = func(ctx context.Context, network, host string) (net.Conn, error) {
+ c := cfg.dialTLS.Clone()
+ if c.ServerName == "" {
+ server, _, err := net.SplitHostPort(host)
+ if err != nil {
+ return nil, fmt.Errorf("unable to split host:port for dialing: %w", err)
+ }
+ c.ServerName = server
+ }
+ return (&tls.Dialer{
+ NetDialer: dialer,
+ Config: c,
+ }).DialContext(ctx, network, host)
+ }
+ }
+ }
+
+ ctx, cancel := context.WithCancel(context.Background())
+
+ cl := &Client{
+ cfg: cfg,
+ opts: opts,
+ ctx: ctx,
+ ctxCancel: cancel,
+
+ rng: func() func(func(*rand.Rand)) {
+ var mu sync.Mutex
+ rng := rand.New(rand.NewSource(time.Now().UnixNano()))
+ return func(fn func(*rand.Rand)) {
+ mu.Lock()
+ defer mu.Unlock()
+ fn(rng)
+ }
+ }(),
+
+ controllerID: unknownControllerID,
+
+ sinksAndSources: make(map[int32]sinkAndSource),
+
+ reqFormatter: kmsg.NewRequestFormatter(),
+ connTimeouter: connTimeouter{def: cfg.requestTimeoutOverhead},
+
+ bufPool: newBufPool(),
+ prsPool: newPrsPool(),
+
+ compressor: compressor,
+ decompressor: newDecompressor(),
+
+ coordinators: make(map[coordinatorKey]*coordinatorLoad),
+
+ updateMetadataCh: make(chan string, 1),
+ updateMetadataNowCh: make(chan string, 1),
+ blockingMetadataFnCh: make(chan func()),
+ metadone: make(chan struct{}),
+ }
+
+ // Before we start any goroutines below, we must notify any interested
+ // hooks of our existence.
+ cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookNewClient); ok {
+ h.OnNewClient(cl)
+ }
+ })
+
+ cl.producer.init(cl)
+ cl.consumer.init(cl)
+ cl.metawait.init()
+
+ if cfg.id != nil {
+ cl.reqFormatter = kmsg.NewRequestFormatter(kmsg.FormatterClientID(*cfg.id))
+ }
+
+ seedBrokers := make([]*broker, 0, len(seeds))
+ for i, seed := range seeds {
+ b := cl.newBroker(unknownSeedID(i), seed.host, seed.port, nil)
+ seedBrokers = append(seedBrokers, b)
+ }
+ cl.seeds.Store(seedBrokers)
+ go cl.updateMetadataLoop()
+ go cl.reapConnectionsLoop()
+
+ return cl, nil
+}
+
+// Opts returns the options that were used to create this client. This can be
+// as a base to generate a new client, where you can add override options to
+// the end of the original input list. If you want to know a specific option
+// value, you can use OptValue or OptValues.
+func (cl *Client) Opts() []Opt {
+ return cl.opts
+}
+
+func (cl *Client) loadSeeds() []*broker {
+ return cl.seeds.Load().([]*broker)
+}
+
+// Ping returns whether any broker is reachable, iterating over any discovered
+// broker or seed broker until one returns a successful response to an
+// ApiVersions request. No discovered broker nor seed broker is attempted more
+// than once. If all requests fail, this returns final error.
+func (cl *Client) Ping(ctx context.Context) error {
+ req := kmsg.NewPtrApiVersionsRequest()
+ req.ClientSoftwareName = cl.cfg.softwareName
+ req.ClientSoftwareVersion = cl.cfg.softwareVersion
+
+ cl.brokersMu.RLock()
+ brokers := append([]*broker(nil), cl.brokers...)
+ cl.brokersMu.RUnlock()
+
+ var lastErr error
+ for _, brs := range [2][]*broker{
+ brokers,
+ cl.loadSeeds(),
+ } {
+ for _, br := range brs {
+ _, err := br.waitResp(ctx, req)
+ if lastErr = err; lastErr == nil {
+ return nil
+ }
+ }
+ }
+ return lastErr
+}
+
+// PurgeTopicsFromClient internally removes all internal information about the
+// input topics. If you you want to purge information for only consuming or
+// only producing, see the related functions [PurgeTopicsFromConsuming] and
+// [PurgeTopicsFromProducing].
+//
+// For producing, this clears all knowledge that these topics have ever been
+// produced to. Producing to the topic again may result in out of order
+// sequence number errors, or, if idempotency is disabled and the sequence
+// numbers align, may result in invisibly discarded records at the broker.
+// Purging a topic that was previously produced to may be useful to free up
+// resources if you are producing to many disparate and short lived topic in
+// the lifetime of this client and you do not plan to produce to the topic
+// anymore. You may want to flush buffered records before purging if records
+// for a topic you are purging are currently in flight.
+//
+// For consuming, this removes all concept of the topic from being consumed.
+// This is different from PauseFetchTopics, which literally pauses the fetching
+// of topics but keeps the topic information around for resuming fetching
+// later. Purging a topic that was being consumed can be useful if you know the
+// topic no longer exists, or if you are consuming via regex and know that some
+// previously consumed topics no longer exist, or if you simply do not want to
+// ever consume from a topic again. If you are group consuming, this function
+// will likely cause a rebalance.
+//
+// For admin requests, this deletes the topic from the cached metadata map for
+// sharded requests. Metadata for sharded admin requests is only cached for
+// MetadataMinAge anyway, but the map is not cleaned up one the metadata
+// expires. This function ensures the map is purged.
+func (cl *Client) PurgeTopicsFromClient(topics ...string) {
+ if len(topics) == 0 {
+ return
+ }
+ sort.Strings(topics) // for logging in the functions
+ cl.blockingMetadataFn(func() { // make reasoning about concurrency easier
+ var wg sync.WaitGroup
+ wg.Add(2)
+ go func() {
+ defer wg.Done()
+ cl.producer.purgeTopics(topics)
+ }()
+ go func() {
+ defer wg.Done()
+ cl.consumer.purgeTopics(topics)
+ }()
+ wg.Wait()
+ })
+ cl.mappedMetaMu.Lock()
+ for _, t := range topics {
+ delete(cl.mappedMeta, t)
+ }
+ cl.mappedMetaMu.Unlock()
+}
+
+// PurgeTopicsFromProducing internally removes all internal information for
+// producing about the input topics. This runs the producer bit of logic that
+// is documented in [PurgeTopicsFromClient]; see that function for more
+// details.
+func (cl *Client) PurgeTopicsFromProducing(topics ...string) {
+ if len(topics) == 0 {
+ return
+ }
+ sort.Strings(topics)
+ cl.blockingMetadataFn(func() {
+ cl.producer.purgeTopics(topics)
+ })
+}
+
+// PurgeTopicsFromConsuming internally removes all internal information for
+// consuming about the input topics. This runs the consumer bit of logic that
+// is documented in [PurgeTopicsFromClient]; see that function for more
+// details.
+func (cl *Client) PurgeTopicsFromConsuming(topics ...string) {
+ if len(topics) == 0 {
+ return
+ }
+ sort.Strings(topics)
+ cl.blockingMetadataFn(func() {
+ cl.consumer.purgeTopics(topics)
+ })
+}
+
+// Parse broker IP/host and port from a string, using the default Kafka port if
+// unspecified. Supported address formats:
+//
+// - IPv4 host/IP without port: "127.0.0.1", "localhost"
+// - IPv4 host/IP with port: "127.0.0.1:1234", "localhost:1234"
+// - IPv6 IP without port: "[2001:1000:2000::1]", "::1"
+// - IPv6 IP with port: "[2001:1000:2000::1]:1234"
+func parseBrokerAddr(addr string) (hostport, error) {
+ const defaultKafkaPort = 9092
+
+ // Bracketed IPv6
+ if strings.IndexByte(addr, '[') == 0 {
+ parts := strings.Split(addr[1:], "]")
+ if len(parts) != 2 {
+ return hostport{}, fmt.Errorf("invalid addr: %s", addr)
+ }
+ // No port specified -> use default
+ if len(parts[1]) == 0 {
+ return hostport{parts[0], defaultKafkaPort}, nil
+ }
+ port, err := strconv.ParseInt(parts[1][1:], 10, 32)
+ if err != nil {
+ return hostport{}, fmt.Errorf("unable to parse port from addr: %w", err)
+ }
+ return hostport{parts[0], int32(port)}, nil
+ }
+
+ // IPv4 with no port
+ if strings.IndexByte(addr, ':') == -1 {
+ return hostport{addr, defaultKafkaPort}, nil
+ }
+
+ // Either a IPv6 literal ("::1"), IP:port or host:port
+ // Try to parse as IP:port or host:port
+ h, p, err := net.SplitHostPort(addr)
+ if err != nil {
+ return hostport{addr, defaultKafkaPort}, nil //nolint:nilerr // ipv6 literal -- use default kafka port
+ }
+ port, err := strconv.ParseInt(p, 10, 32)
+ if err != nil {
+ return hostport{}, fmt.Errorf("unable to parse port from addr: %w", err)
+ }
+ return hostport{h, int32(port)}, nil
+}
+
+type connTimeouter struct {
+ def time.Duration
+ joinMu sync.Mutex
+ lastRebalanceTimeout time.Duration
+}
+
+func (c *connTimeouter) timeouts(req kmsg.Request) (r, w time.Duration) {
+ def := c.def
+ millis := func(m int32) time.Duration { return time.Duration(m) * time.Millisecond }
+ switch t := req.(type) {
+ default:
+ if timeoutRequest, ok := req.(kmsg.TimeoutRequest); ok {
+ timeoutMillis := timeoutRequest.Timeout()
+ return def + millis(timeoutMillis), def
+ }
+ return def, def
+
+ case *produceRequest:
+ return def + millis(t.timeout), def
+ case *fetchRequest:
+ return def + millis(t.maxWait), def
+ case *kmsg.FetchRequest:
+ return def + millis(t.MaxWaitMillis), def
+
+ // Join and sync can take a long time. Sync has no notion of
+ // timeouts, but since the flow of requests should be first
+ // join, then sync, we can stash the timeout from the join.
+
+ case *kmsg.JoinGroupRequest:
+ c.joinMu.Lock()
+ c.lastRebalanceTimeout = millis(t.RebalanceTimeoutMillis)
+ c.joinMu.Unlock()
+
+ return def + millis(t.RebalanceTimeoutMillis), def
+ case *kmsg.SyncGroupRequest:
+ read := def
+ c.joinMu.Lock()
+ if c.lastRebalanceTimeout != 0 {
+ read = c.lastRebalanceTimeout
+ }
+ c.joinMu.Unlock()
+
+ return read, def
+ }
+}
+
+func (cl *Client) reinitAnyBrokerOrd() {
+ cl.anyBrokerOrd = append(cl.anyBrokerOrd[:0], make([]int32, len(cl.brokers))...)
+ for i := range cl.anyBrokerOrd {
+ cl.anyBrokerOrd[i] = int32(i)
+ }
+ cl.rng(func(r *rand.Rand) {
+ r.Shuffle(len(cl.anyBrokerOrd), func(i, j int) {
+ cl.anyBrokerOrd[i], cl.anyBrokerOrd[j] = cl.anyBrokerOrd[j], cl.anyBrokerOrd[i]
+ })
+ })
+}
+
+// broker returns a random broker from all brokers ever known.
+func (cl *Client) broker() *broker {
+ cl.brokersMu.Lock()
+ defer cl.brokersMu.Unlock()
+
+ // Every time we loop through all discovered brokers, we issue one
+ // request to the next seed. This ensures that if all discovered
+ // brokers are down, we will *eventually* loop through seeds and
+ // hopefully have a reachable seed.
+ var b *broker
+
+ if len(cl.anyBrokerOrd) > 0 {
+ b = cl.brokers[cl.anyBrokerOrd[0]]
+ cl.anyBrokerOrd = cl.anyBrokerOrd[1:]
+ return b
+ }
+
+ seeds := cl.loadSeeds()
+ cl.anySeedIdx %= int32(len(seeds))
+ b = seeds[cl.anySeedIdx]
+ cl.anySeedIdx++
+
+ // If we have brokers, we ranged past discovered brokers.
+ // We now reset the anyBrokerOrd to begin ranging through
+ // discovered brokers again. If there are still no brokers,
+ // this reinit will do nothing and we will keep looping seeds.
+ cl.reinitAnyBrokerOrd()
+ return b
+}
+
+func (cl *Client) waitTries(ctx context.Context, backoff time.Duration) bool {
+ after := time.NewTimer(backoff)
+ defer after.Stop()
+ select {
+ case <-ctx.Done():
+ return false
+ case <-cl.ctx.Done():
+ return false
+ case <-after.C:
+ return true
+ }
+}
+
+// A broker may sometimes indicate it supports offset for leader epoch v2+ when
+// it does not. We need to catch that and avoid issuing offset for leader
+// epoch, because we will just loop continuously failing. We do not catch every
+// case, such as when a person explicitly assigns offsets with epochs, but we
+// catch a few areas that would be returned from a broker itself.
+//
+// This function is always used *after* at least one request has been issued.
+//
+// NOTE: This is a weak check; we check if any broker in the cluster supports
+// the request. We use this function in three locations:
+//
+// 1. When using the LeaderEpoch returned in a metadata response. This guards
+// against buggy brokers that return 0 rather than -1 even if they do not
+// support OffsetForLeaderEpoch. If any support, the cluster is in the
+// middle of an upgrade and we can start using the epoch.
+// 2. When deciding whether to keep LeaderEpoch from fetched offsets.
+// Realistically, clients should only commit epochs if the cluster supports
+// them.
+// 3. When receiving OffsetOutOfRange when follower fetching and we fetched
+// past the end.
+//
+// In any of these cases, if we OffsetForLeaderEpoch against a broker that does
+// not support (even though one in the cluster does), we will loop fail until
+// the rest of the cluster is upgraded and supports the request.
+func (cl *Client) supportsOffsetForLeaderEpoch() bool {
+ return cl.supportsKeyVersion(int16(kmsg.OffsetForLeaderEpoch), 2)
+}
+
+// A broker may not support some requests we want to make. This function checks
+// support. This should only be used *after* at least one successful response.
+func (cl *Client) supportsKeyVersion(key, version int16) bool {
+ cl.brokersMu.RLock()
+ defer cl.brokersMu.RUnlock()
+
+ for _, brokers := range [][]*broker{
+ cl.brokers,
+ cl.loadSeeds(),
+ } {
+ for _, b := range brokers {
+ if v := b.loadVersions(); v != nil && v.versions[key] >= version {
+ return true
+ }
+ }
+ }
+ return false
+}
+
+// fetchBrokerMetadata issues a metadata request solely for broker information.
+func (cl *Client) fetchBrokerMetadata(ctx context.Context) error {
+ cl.fetchingBrokersMu.Lock()
+ wait := cl.fetchingBrokers
+ if wait != nil {
+ cl.fetchingBrokersMu.Unlock()
+ <-wait.done
+ return wait.err
+ }
+ wait = &struct {
+ done chan struct{}
+ err error
+ }{done: make(chan struct{})}
+ cl.fetchingBrokers = wait
+ cl.fetchingBrokersMu.Unlock()
+
+ defer func() {
+ cl.fetchingBrokersMu.Lock()
+ defer cl.fetchingBrokersMu.Unlock()
+ cl.fetchingBrokers = nil
+ close(wait.done)
+ }()
+
+ _, _, wait.err = cl.fetchMetadata(ctx, kmsg.NewPtrMetadataRequest(), true)
+ return wait.err
+}
+
+func (cl *Client) fetchMetadataForTopics(ctx context.Context, all bool, topics []string) (*broker, *kmsg.MetadataResponse, error) {
+ req := kmsg.NewPtrMetadataRequest()
+ req.AllowAutoTopicCreation = cl.cfg.allowAutoTopicCreation
+ if all {
+ req.Topics = nil
+ } else if len(topics) == 0 {
+ req.Topics = []kmsg.MetadataRequestTopic{}
+ } else {
+ for _, topic := range topics {
+ reqTopic := kmsg.NewMetadataRequestTopic()
+ reqTopic.Topic = kmsg.StringPtr(topic)
+ req.Topics = append(req.Topics, reqTopic)
+ }
+ }
+ return cl.fetchMetadata(ctx, req, true)
+}
+
+func (cl *Client) fetchMetadata(ctx context.Context, req *kmsg.MetadataRequest, limitRetries bool) (*broker, *kmsg.MetadataResponse, error) {
+ r := cl.retryable()
+
+ // We limit retries for internal metadata refreshes, because these do
+ // not need to retry forever and are usually blocking *other* requests.
+ // e.g., producing bumps load errors when metadata returns, so 3
+ // failures here will correspond to 1 bumped error count. To make the
+ // number more accurate, we should *never* retry here, but this is
+ // pretty intolerant of immediately-temporary network issues. Rather,
+ // we use a small count of 3 retries, which with the default backoff,
+ // will be <2s of retrying. This is still intolerant of temporary
+ // failures, but it does allow recovery from a dns issue / bad path.
+ if limitRetries {
+ r.limitRetries = 3
+ }
+
+ meta, err := req.RequestWith(ctx, r)
+ if err == nil {
+ if meta.ControllerID >= 0 {
+ cl.controllerIDMu.Lock()
+ cl.controllerID = meta.ControllerID
+ cl.controllerIDMu.Unlock()
+ }
+ cl.updateBrokers(meta.Brokers)
+ }
+ return r.last, meta, err
+}
+
+// updateBrokers is called with the broker portion of every metadata response.
+// All metadata responses contain all known live brokers, so we can always
+// use the response.
+func (cl *Client) updateBrokers(brokers []kmsg.MetadataResponseBroker) {
+ sort.Slice(brokers, func(i, j int) bool { return brokers[i].NodeID < brokers[j].NodeID })
+ newBrokers := make([]*broker, 0, len(brokers))
+
+ cl.brokersMu.Lock()
+ defer cl.brokersMu.Unlock()
+
+ if cl.stopBrokers {
+ return
+ }
+
+ for len(brokers) > 0 && len(cl.brokers) > 0 {
+ ob := cl.brokers[0]
+ nb := brokers[0]
+
+ switch {
+ case ob.meta.NodeID < nb.NodeID:
+ ob.stopForever()
+ cl.brokers = cl.brokers[1:]
+
+ case ob.meta.NodeID == nb.NodeID:
+ if !ob.meta.equals(nb) {
+ ob.stopForever()
+ ob = cl.newBroker(nb.NodeID, nb.Host, nb.Port, nb.Rack)
+ }
+ newBrokers = append(newBrokers, ob)
+ cl.brokers = cl.brokers[1:]
+ brokers = brokers[1:]
+
+ case ob.meta.NodeID > nb.NodeID:
+ newBrokers = append(newBrokers, cl.newBroker(nb.NodeID, nb.Host, nb.Port, nb.Rack))
+ brokers = brokers[1:]
+ }
+ }
+
+ for len(cl.brokers) > 0 {
+ ob := cl.brokers[0]
+ ob.stopForever()
+ cl.brokers = cl.brokers[1:]
+ }
+
+ for len(brokers) > 0 {
+ nb := brokers[0]
+ newBrokers = append(newBrokers, cl.newBroker(nb.NodeID, nb.Host, nb.Port, nb.Rack))
+ brokers = brokers[1:]
+ }
+
+ cl.brokers = newBrokers
+ cl.reinitAnyBrokerOrd()
+}
+
+// CloseAllowingRebalance allows rebalances, leaves any group, and closes all
+// connections and goroutines. This function is only useful if you are using
+// the BlockRebalanceOnPoll option. Close itself does not allow rebalances and
+// will hang if you polled, did not allow rebalances, and want to close. Close
+// does not automatically allow rebalances because leaving a group causes a
+// revoke, and the client does not assume that the final revoke is concurrency
+// safe. The CloseAllowingRebalance function exists a a shortcut to opt into
+// allowing rebalance while closing.
+func (cl *Client) CloseAllowingRebalance() {
+ cl.AllowRebalance()
+ cl.Close()
+}
+
+// Close leaves any group and closes all connections and goroutines. This
+// function waits for the group to be left. If you want to force leave a group
+// immediately and ensure a speedy shutdown you can use LeaveGroupContext first
+// (and then Close will be immediate).
+//
+// If you are group consuming and have overridden the default
+// OnPartitionsRevoked, you must manually commit offsets before closing the
+// client.
+//
+// If you are using the BlockRebalanceOnPoll option and have polled, this
+// function does not automatically allow rebalancing. You must AllowRebalance
+// before calling this function. Internally, this function leaves the group,
+// and leaving a group causes a rebalance so that you can get one final
+// notification of revoked partitions. If you want to automatically allow
+// rebalancing, use CloseAllowingRebalance.
+func (cl *Client) Close() {
+ cl.close(cl.ctx)
+}
+
+func (cl *Client) close(ctx context.Context) (rerr error) {
+ defer cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookClientClosed); ok {
+ h.OnClientClosed(cl)
+ }
+ })
+
+ c := &cl.consumer
+ c.kill.Store(true)
+ if c.g != nil {
+ rerr = cl.LeaveGroupContext(ctx)
+ } else if c.d != nil {
+ c.mu.Lock() // lock for assign
+ c.assignPartitions(nil, assignInvalidateAll, nil, "") // we do not use a log message when not in a group
+ c.mu.Unlock()
+ }
+
+ // After the above, consumers cannot consume anymore. LeaveGroup
+ // internally assigns nil, which uses noConsumerSession, which prevents
+ // loopFetch from starting. Assigning also waits for the prior session
+ // to be complete, meaning loopFetch cannot be running.
+
+ sessCloseCtx, sessCloseCancel := context.WithTimeout(ctx, time.Second)
+ var wg sync.WaitGroup
+ cl.allSinksAndSources(func(sns sinkAndSource) {
+ if sns.source.session.id != 0 {
+ sns := sns
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ sns.source.killSessionOnClose(sessCloseCtx)
+ }()
+ }
+ })
+ wg.Wait()
+ sessCloseCancel()
+
+ // Now we kill the client context and all brokers, ensuring all
+ // requests fail. This will finish all producer callbacks and
+ // stop the metadata loop.
+ cl.ctxCancel()
+ cl.brokersMu.Lock()
+ cl.stopBrokers = true
+ for _, broker := range cl.brokers {
+ broker.stopForever()
+ }
+ cl.brokersMu.Unlock()
+ for _, broker := range cl.loadSeeds() {
+ broker.stopForever()
+ }
+
+ // Wait for metadata to quit so we know no more erroring topic
+ // partitions will be created. After metadata has quit, we can
+ // safely stop sinks and sources, as no more will be made.
+ <-cl.metadone
+
+ for _, sns := range cl.sinksAndSources {
+ sns.sink.maybeDrain() // awaken anything in backoff
+ sns.source.maybeConsume() // same
+ }
+
+ cl.failBufferedRecords(ErrClientClosed)
+
+ // We need one final poll: if any sources buffered a fetch, then the
+ // manageFetchConcurrency loop only exits when all fetches have been
+ // drained, because draining a fetch is what decrements an "active"
+ // fetch. PollFetches with `nil` is instant.
+ cl.PollFetches(nil)
+
+ for _, s := range cl.cfg.sasls {
+ if closing, ok := s.(sasl.ClosingMechanism); ok {
+ closing.Close()
+ }
+ }
+
+ return rerr
+}
+
+// Request issues a request to Kafka, waiting for and returning the response.
+// If a retryable network error occurs, or if a retryable group / transaction
+// coordinator error occurs, the request is retried. All other errors are
+// returned.
+//
+// If the request is an admin request, this will issue it to the Kafka
+// controller. If the controller ID is unknown, this will attempt to fetch it.
+// If the fetch errors, this will return an unknown controller error.
+//
+// If the request is a group or transaction coordinator request, this will
+// issue the request to the appropriate group or transaction coordinator.
+//
+// For transaction requests, the request is issued to the transaction
+// coordinator. However, if the request is an init producer ID request and the
+// request has no transactional ID, the request goes to any broker.
+//
+// Some requests need to be split and sent to many brokers. For these requests,
+// it is *highly* recommended to use RequestSharded. Not all responses from
+// many brokers can be cleanly merged. However, for the requests that are
+// split, this does attempt to merge them in a sane way.
+//
+// The following requests are split:
+//
+// ListOffsets
+// OffsetFetch (if using v8+ for Kafka 3.0+)
+// FindCoordinator (if using v4+ for Kafka 3.0+)
+// DescribeGroups
+// ListGroups
+// DeleteRecords
+// OffsetForLeaderEpoch
+// DescribeConfigs
+// AlterConfigs
+// AlterReplicaLogDirs
+// DescribeLogDirs
+// DeleteGroups
+// IncrementalAlterConfigs
+// DescribeProducers
+// DescribeTransactions
+// ListTransactions
+//
+// Kafka 3.0 introduced batch OffsetFetch and batch FindCoordinator requests.
+// This function is forward and backward compatible: old requests will be
+// batched as necessary, and batched requests will be split as necessary. It is
+// recommended to always use batch requests for simplicity.
+//
+// In short, this method tries to do the correct thing depending on what type
+// of request is being issued.
+//
+// The passed context can be used to cancel a request and return early. Note
+// that if the request was written to Kafka but the context canceled before a
+// response is received, Kafka may still operate on the received request.
+//
+// If using this function to issue kmsg.ProduceRequest's, you must configure
+// the client with the same RequiredAcks option that you use in the request.
+// If you are issuing produce requests with 0 acks, you must configure the
+// client with the same timeout you use in the request. The client will
+// internally rewrite the incoming request's acks to match the client's
+// configuration, and it will rewrite the timeout millis if the acks is 0. It
+// is strongly recommended to not issue raw kmsg.ProduceRequest's.
+func (cl *Client) Request(ctx context.Context, req kmsg.Request) (kmsg.Response, error) {
+ resps, merge := cl.shardedRequest(ctx, req)
+ // If there is no merge function, only one request was issued directly
+ // to a broker. Return the resp and err directly.
+ if merge == nil {
+ return resps[0].Resp, resps[0].Err
+ }
+ return merge(resps)
+}
+
+func (cl *Client) retryable() *retryable {
+ return cl.retryableBrokerFn(func() (*broker, error) { return cl.broker(), nil })
+}
+
+func (cl *Client) retryableBrokerFn(fn func() (*broker, error)) *retryable {
+ return &retryable{cl: cl, br: fn}
+}
+
+func (cl *Client) shouldRetry(tries int, err error) bool {
+ return (kerr.IsRetriable(err) || isRetryableBrokerErr(err)) && int64(tries) < cl.cfg.retries
+}
+
+func (cl *Client) shouldRetryNext(tries int, err error) bool {
+ return isSkippableBrokerErr(err) && int64(tries) < cl.cfg.retries
+}
+
+type retryable struct {
+ cl *Client
+ br func() (*broker, error)
+ last *broker
+
+ // If non-zero, limitRetries may specify a smaller # of retries than
+ // the client RequestRetries number. This is used for internal requests
+ // that can fail / do not need to retry forever.
+ limitRetries int
+
+ // parseRetryErr, if non-nil, can delete stale cached brokers. We do
+ // *not* return the error from this function to the caller, but we do
+ // use it to potentially retry. It is not necessary, but also not
+ // harmful, to return the input error.
+ parseRetryErr func(kmsg.Response, error) error
+}
+
+type failDial struct{ fails int8 }
+
+// The controller and group/txn coordinators are cached. If dialing the broker
+// repeatedly fails, we need to forget our cache to force a re-load: the broker
+// may have completely died.
+func (d *failDial) isRepeatedDialFail(err error) bool {
+ if isAnyDialErr(err) {
+ d.fails++
+ if d.fails == 3 {
+ d.fails = 0
+ return true
+ }
+ }
+ return false
+}
+
+func (r *retryable) Request(ctx context.Context, req kmsg.Request) (kmsg.Response, error) {
+ tries := 0
+ tryStart := time.Now()
+ retryTimeout := r.cl.cfg.retryTimeout(req.Key())
+
+ next, nextErr := r.br()
+start:
+ tries++
+ br, err := next, nextErr
+ r.last = br
+ var resp kmsg.Response
+ var retryErr error
+ if err == nil {
+ resp, err = r.last.waitResp(ctx, req)
+ if r.parseRetryErr != nil {
+ retryErr = r.parseRetryErr(resp, err)
+ }
+ }
+
+ if err != nil || retryErr != nil {
+ if r.limitRetries == 0 || tries < r.limitRetries {
+ backoff := r.cl.cfg.retryBackoff(tries)
+ if retryTimeout == 0 || time.Now().Add(backoff).Sub(tryStart) <= retryTimeout {
+ // If this broker / request had a retryable error, we can
+ // just retry now. If the error is *not* retryable but
+ // is a broker-specific network error, and the next
+ // broker is different than the current, we also retry.
+ if r.cl.shouldRetry(tries, err) || r.cl.shouldRetry(tries, retryErr) {
+ r.cl.cfg.logger.Log(LogLevelDebug, "retrying request",
+ "tries", tries,
+ "backoff", backoff,
+ "request_error", err,
+ "response_error", retryErr,
+ )
+ if r.cl.waitTries(ctx, backoff) {
+ next, nextErr = r.br()
+ goto start
+ }
+ } else if r.cl.shouldRetryNext(tries, err) {
+ next, nextErr = r.br()
+ if next != br && r.cl.waitTries(ctx, backoff) {
+ goto start
+ }
+ }
+ }
+ }
+ }
+ return resp, err
+}
+
+// ResponseShard ties together a request with either the response it received
+// or an error that prevented a response from being received.
+type ResponseShard struct {
+ // Meta contains the broker that this request was issued to, or an
+ // unknown (node ID -1) metadata if the request could not be issued.
+ //
+ // Requests can fail to even be issued if an appropriate broker cannot
+ // be loaded of if the client cannot understand the request.
+ Meta BrokerMetadata
+
+ // Req is the request that was issued to this broker.
+ Req kmsg.Request
+
+ // Resp is the response received from the broker, if any.
+ Resp kmsg.Response
+
+ // Err, if non-nil, is the error that prevented a response from being
+ // received or the request from being issued.
+ Err error
+}
+
+// RequestSharded performs the same logic as Request, but returns all responses
+// from any broker that the request was split to. This always returns at least
+// one shard. If the request does not need to be issued (describing no groups),
+// this issues the request to a random broker just to ensure that one shard
+// exists.
+//
+// There are only a few requests that are strongly recommended to explicitly
+// use RequestSharded; the rest can by default use Request. These few requests
+// are mentioned in the documentation for Request.
+//
+// If, in the process of splitting a request, some topics or partitions are
+// found to not exist, or Kafka replies that a request should go to a broker
+// that does not exist, all those non-existent pieces are grouped into one
+// request to the first seed broker. This will show up as a seed broker node ID
+// (min int32) and the response will likely contain purely errors.
+//
+// The response shards are ordered by broker metadata.
+func (cl *Client) RequestSharded(ctx context.Context, req kmsg.Request) []ResponseShard {
+ resps, _ := cl.shardedRequest(ctx, req)
+ sort.Slice(resps, func(i, j int) bool {
+ l := &resps[i].Meta
+ r := &resps[j].Meta
+
+ if l.NodeID < r.NodeID {
+ return true
+ }
+ if r.NodeID < l.NodeID {
+ return false
+ }
+ if l.Host < r.Host {
+ return true
+ }
+ if r.Host < l.Host {
+ return false
+ }
+ if l.Port < r.Port {
+ return true
+ }
+ if r.Port < l.Port {
+ return false
+ }
+ if l.Rack == nil {
+ return true
+ }
+ if r.Rack == nil {
+ return false
+ }
+ return *l.Rack < *r.Rack
+ })
+ return resps
+}
+
+type shardMerge func([]ResponseShard) (kmsg.Response, error)
+
+func (cl *Client) shardedRequest(ctx context.Context, req kmsg.Request) ([]ResponseShard, shardMerge) {
+ ctx, cancel := context.WithCancel(ctx)
+ done := make(chan struct{})
+ defer close(done)
+ go func() {
+ defer cancel()
+ select {
+ case <-done:
+ case <-ctx.Done():
+ case <-cl.ctx.Done():
+ }
+ }()
+
+ // First, handle any sharded request. This comes before the conditional
+ // below because this handles two group requests, which we do not want
+ // to fall into the handleCoordinatorReq logic.
+ switch t := req.(type) {
+ case *kmsg.ListOffsetsRequest, // key 2
+ *kmsg.OffsetFetchRequest, // key 9
+ *kmsg.FindCoordinatorRequest, // key 10
+ *kmsg.DescribeGroupsRequest, // key 15
+ *kmsg.ListGroupsRequest, // key 16
+ *kmsg.DeleteRecordsRequest, // key 21
+ *kmsg.OffsetForLeaderEpochRequest, // key 23
+ *kmsg.AddPartitionsToTxnRequest, // key 24
+ *kmsg.WriteTxnMarkersRequest, // key 27
+ *kmsg.DescribeConfigsRequest, // key 32
+ *kmsg.AlterConfigsRequest, // key 33
+ *kmsg.AlterReplicaLogDirsRequest, // key 34
+ *kmsg.DescribeLogDirsRequest, // key 35
+ *kmsg.DeleteGroupsRequest, // key 42
+ *kmsg.IncrementalAlterConfigsRequest, // key 44
+ *kmsg.DescribeProducersRequest, // key 61
+ *kmsg.DescribeTransactionsRequest, // key 65
+ *kmsg.ListTransactionsRequest: // key 66
+ return cl.handleShardedReq(ctx, req)
+
+ case *kmsg.MetadataRequest:
+ // We hijack any metadata request so as to populate our
+ // own brokers and controller ID.
+ br, resp, err := cl.fetchMetadata(ctx, t, false)
+ return shards(shard(br, req, resp, err)), nil
+
+ case kmsg.AdminRequest:
+ return shards(cl.handleAdminReq(ctx, t)), nil
+
+ case kmsg.GroupCoordinatorRequest,
+ kmsg.TxnCoordinatorRequest:
+ return shards(cl.handleCoordinatorReq(ctx, t)), nil
+
+ case *kmsg.ApiVersionsRequest:
+ // As of v3, software name and version are required.
+ // If they are missing, we use the config options.
+ if t.ClientSoftwareName == "" && t.ClientSoftwareVersion == "" {
+ dup := *t
+ dup.ClientSoftwareName = cl.cfg.softwareName
+ dup.ClientSoftwareVersion = cl.cfg.softwareVersion
+ req = &dup
+ }
+ }
+
+ // All other requests not handled above can be issued to any broker
+ // with the default retryable logic.
+ r := cl.retryable()
+ resp, err := r.Request(ctx, req)
+ return shards(shard(r.last, req, resp, err)), nil
+}
+
+func shard(br *broker, req kmsg.Request, resp kmsg.Response, err error) ResponseShard {
+ if br == nil { // the broker could be nil if loading the broker failed.
+ return ResponseShard{unknownBrokerMetadata, req, resp, err}
+ }
+ return ResponseShard{br.meta, req, resp, err}
+}
+
+func shards(shard ...ResponseShard) []ResponseShard {
+ return shard
+}
+
+func findBroker(candidates []*broker, node int32) *broker {
+ n := sort.Search(len(candidates), func(n int) bool { return candidates[n].meta.NodeID >= node })
+ var b *broker
+ if n < len(candidates) {
+ c := candidates[n]
+ if c.meta.NodeID == node {
+ b = c
+ }
+ }
+ return b
+}
+
+// brokerOrErr returns the broker for ID or the error if the broker does not
+// exist.
+//
+// If tryLoad is true and the broker does not exist, this attempts a broker
+// metadata load once before failing. If the metadata load fails, this returns
+// that error.
+func (cl *Client) brokerOrErr(ctx context.Context, id int32, err error) (*broker, error) {
+ if id < 0 {
+ return nil, err
+ }
+
+ tryLoad := ctx != nil
+ tries := 0
+start:
+ var broker *broker
+ if id < 0 {
+ broker = findBroker(cl.loadSeeds(), id)
+ } else {
+ cl.brokersMu.RLock()
+ broker = findBroker(cl.brokers, id)
+ cl.brokersMu.RUnlock()
+ }
+
+ if broker == nil {
+ if tryLoad {
+ if loadErr := cl.fetchBrokerMetadata(ctx); loadErr != nil {
+ return nil, loadErr
+ }
+ // We will retry loading up to two times, if we load broker
+ // metadata twice successfully but neither load has the broker
+ // we are looking for, then we say our broker does not exist.
+ tries++
+ if tries < 2 {
+ goto start
+ }
+ }
+ return nil, err
+ }
+ return broker, nil
+}
+
+// controller returns the controller broker, forcing a broker load if
+// necessary.
+func (cl *Client) controller(ctx context.Context) (b *broker, err error) {
+ get := func() int32 {
+ cl.controllerIDMu.Lock()
+ defer cl.controllerIDMu.Unlock()
+ return cl.controllerID
+ }
+
+ defer func() {
+ if ec := (*errUnknownController)(nil); errors.As(err, &ec) {
+ cl.forgetControllerID(ec.id)
+ }
+ }()
+
+ var id int32
+ if id = get(); id < 0 {
+ if err := cl.fetchBrokerMetadata(ctx); err != nil {
+ return nil, err
+ }
+ if id = get(); id < 0 {
+ return nil, &errUnknownController{id}
+ }
+ }
+
+ return cl.brokerOrErr(nil, id, &errUnknownController{id})
+}
+
+// forgetControllerID is called once an admin requests sees NOT_CONTROLLER.
+func (cl *Client) forgetControllerID(id int32) {
+ cl.controllerIDMu.Lock()
+ defer cl.controllerIDMu.Unlock()
+ if cl.controllerID == id {
+ cl.controllerID = unknownControllerID
+ }
+}
+
+const (
+ coordinatorTypeGroup int8 = 0
+ coordinatorTypeTxn int8 = 1
+)
+
+type coordinatorKey struct {
+ name string
+ typ int8
+}
+
+type coordinatorLoad struct {
+ loadWait chan struct{}
+ node int32
+ err error
+}
+
+func (cl *Client) loadCoordinator(ctx context.Context, typ int8, key string) (*broker, error) {
+ berr := cl.loadCoordinators(ctx, typ, key)[key]
+ return berr.b, berr.err
+}
+
+func (cl *Client) loadCoordinators(ctx context.Context, typ int8, keys ...string) map[string]brokerOrErr {
+ mch := make(chan map[string]brokerOrErr, 1)
+ go func() { mch <- cl.doLoadCoordinators(ctx, typ, keys...) }()
+ select {
+ case m := <-mch:
+ return m
+ case <-ctx.Done():
+ m := make(map[string]brokerOrErr, len(keys))
+ for _, k := range keys {
+ m[k] = brokerOrErr{nil, ctx.Err()}
+ }
+ return m
+ }
+}
+
+// doLoadCoordinators uses the caller context to cancel loading metadata
+// (brokerOrErr), but we use the client context to actually issue the request.
+// There should be only one direct call to doLoadCoordinators, just above in
+// loadCoordinator. It is possible for two requests to be loading the same
+// coordinator (in fact, that's the point of this function -- collapse these
+// requests). We do not want the first request canceling it's context to cause
+// errors for the second request.
+//
+// It is ok to leave FindCoordinator running even if the caller quits. Worst
+// case, we just cache things for some time in the future; yay.
+func (cl *Client) doLoadCoordinators(ctx context.Context, typ int8, keys ...string) map[string]brokerOrErr {
+ m := make(map[string]brokerOrErr, len(keys))
+ if len(keys) == 0 {
+ return m
+ }
+
+ toRequest := make(map[string]bool, len(keys)) // true == bypass the cache
+ for _, key := range keys {
+ toRequest[key] = false
+ }
+
+ // For each of these keys, we have two cases:
+ //
+ // 1) The key is cached. It is either loading or loaded. We do not
+ // request the key ourselves; we wait for the load to finish.
+ //
+ // 2) The key is not cached, and we request it.
+ //
+ // If a key is cached but the coordinator no longer exists for us, we
+ // re-request to refresh the coordinator by setting toRequest[key] to
+ // true (bypass cache).
+ //
+ // If we ever request a key ourselves, we do not request it again. We
+ // ensure this by deleting from toRequest. We also delete if the key
+ // was cached with no error.
+ //
+ // We could have some keys cached and some that need to be requested.
+ // We issue a request but do not request what is cached.
+ //
+ // Lastly, we only ever trigger one metadata update, which happens if
+ // we have an unknown coordinator after we load coordinators.
+ var hasLoadedBrokers bool
+ for len(toRequest) > 0 {
+ var loadWait chan struct{}
+ load2key := make(map[*coordinatorLoad][]string)
+
+ cl.coordinatorsMu.Lock()
+ for key, bypassCache := range toRequest {
+ c, ok := cl.coordinators[coordinatorKey{key, typ}]
+ if !ok || bypassCache {
+ if loadWait == nil {
+ loadWait = make(chan struct{})
+ }
+ c = &coordinatorLoad{
+ loadWait: loadWait,
+ err: errors.New("coordinator was not returned in broker response"),
+ }
+ cl.coordinators[coordinatorKey{key, typ}] = c
+ }
+ load2key[c] = append(load2key[c], key)
+ }
+ cl.coordinatorsMu.Unlock()
+
+ if loadWait == nil { // all coordinators were cached
+ hasLoadedBrokers = cl.waitCoordinatorLoad(ctx, typ, load2key, !hasLoadedBrokers, toRequest, m)
+ continue
+ }
+
+ key2load := make(map[string]*coordinatorLoad)
+ req := kmsg.NewPtrFindCoordinatorRequest()
+ req.CoordinatorType = typ
+ for c, keys := range load2key {
+ if c.loadWait == loadWait { // if this is our wait, this is ours to request
+ req.CoordinatorKeys = append(req.CoordinatorKeys, keys...)
+ for _, key := range keys {
+ key2load[key] = c
+ delete(toRequest, key)
+ }
+ }
+ }
+
+ cl.cfg.logger.Log(LogLevelDebug, "prepared to issue find coordinator request",
+ "coordinator_type", typ,
+ "coordinator_keys", req.CoordinatorKeys,
+ )
+
+ shards := cl.RequestSharded(cl.ctx, req)
+
+ for _, shard := range shards {
+ if shard.Err != nil {
+ req := shard.Req.(*kmsg.FindCoordinatorRequest)
+ for _, key := range req.CoordinatorKeys {
+ c, ok := key2load[key]
+ if ok {
+ c.err = shard.Err
+ }
+ }
+ } else {
+ resp := shard.Resp.(*kmsg.FindCoordinatorResponse)
+ for _, rc := range resp.Coordinators {
+ c, ok := key2load[rc.Key]
+ if ok {
+ c.err = kerr.ErrorForCode(rc.ErrorCode)
+ c.node = rc.NodeID
+ }
+ }
+ }
+ }
+
+ // For anything we loaded, if it has a load failure (including
+ // not being replied to), we remove the key from the cache. We
+ // do not want to cache erroring values.
+ //
+ // We range key2load, which contains only coordinators we are
+ // responsible for loading.
+ cl.coordinatorsMu.Lock()
+ for key, c := range key2load {
+ if c.err != nil {
+ ck := coordinatorKey{key, typ}
+ if loading, ok := cl.coordinators[ck]; ok && loading == c {
+ delete(cl.coordinators, ck)
+ }
+ }
+ }
+ cl.coordinatorsMu.Unlock()
+
+ close(loadWait)
+ hasLoadedBrokers = cl.waitCoordinatorLoad(ctx, typ, load2key, !hasLoadedBrokers, toRequest, m)
+ }
+ return m
+}
+
+// After some prep work, we wait for coordinators to load. We update toRequest
+// values with true if the caller should bypass cache and re-load these
+// coordinators.
+//
+// This returns if we load brokers, and populates m with results.
+func (cl *Client) waitCoordinatorLoad(ctx context.Context, typ int8, load2key map[*coordinatorLoad][]string, shouldLoadBrokers bool, toRequest map[string]bool, m map[string]brokerOrErr) bool {
+ var loadedBrokers bool
+ for c, keys := range load2key {
+ <-c.loadWait
+ for _, key := range keys {
+ if c.err != nil {
+ delete(toRequest, key)
+ m[key] = brokerOrErr{nil, c.err}
+ continue
+ }
+
+ var brokerCtx context.Context
+ if shouldLoadBrokers && !loadedBrokers {
+ brokerCtx = ctx
+ loadedBrokers = true
+ }
+
+ b, err := cl.brokerOrErr(brokerCtx, c.node, &errUnknownCoordinator{c.node, coordinatorKey{key, typ}})
+ if err != nil {
+ if _, exists := toRequest[key]; exists {
+ toRequest[key] = true
+ continue
+ }
+ // If the key does not exist, we just loaded this
+ // coordinator and also the brokers. We do not
+ // re-request.
+ }
+ delete(toRequest, key)
+ m[key] = brokerOrErr{b, err}
+ }
+ }
+ return loadedBrokers
+}
+
+func (cl *Client) maybeDeleteStaleCoordinator(name string, typ int8, err error) bool {
+ switch {
+ case errors.Is(err, kerr.CoordinatorNotAvailable),
+ errors.Is(err, kerr.CoordinatorLoadInProgress),
+ errors.Is(err, kerr.NotCoordinator):
+ cl.deleteStaleCoordinator(name, typ)
+ return true
+ }
+ return false
+}
+
+func (cl *Client) deleteStaleCoordinator(name string, typ int8) {
+ cl.coordinatorsMu.Lock()
+ defer cl.coordinatorsMu.Unlock()
+ k := coordinatorKey{name, typ}
+ v := cl.coordinators[k]
+ if v == nil {
+ return
+ }
+ select {
+ case <-v.loadWait:
+ delete(cl.coordinators, k)
+ default:
+ // We are actively reloading this coordinator.
+ }
+}
+
+type brokerOrErr struct {
+ b *broker
+ err error
+}
+
+func (cl *Client) handleAdminReq(ctx context.Context, req kmsg.Request) ResponseShard {
+ // Loading a controller can perform some wait; we accept that and do
+ // not account for the retries or the time to load the controller as
+ // part of the retries / time to issue the req.
+ r := cl.retryableBrokerFn(func() (*broker, error) {
+ return cl.controller(ctx)
+ })
+
+ // The only request that can break mapped metadata is CreatePartitions,
+ // because our mapping will still be "valid" but behind the scenes,
+ // more partitions exist. If CreatePartitions is going through this
+ // client, we preemptively delete any mapping for these topics.
+ if t, ok := req.(*kmsg.CreatePartitionsRequest); ok {
+ var topics []string
+ for i := range t.Topics {
+ topics = append(topics, t.Topics[i].Topic)
+ }
+ cl.maybeDeleteMappedMetadata(false, topics...)
+ }
+
+ var d failDial
+ r.parseRetryErr = func(resp kmsg.Response, err error) error {
+ if err != nil {
+ if d.isRepeatedDialFail(err) {
+ cl.forgetControllerID(r.last.meta.NodeID)
+ }
+ return err
+ }
+ var code int16
+ switch t := resp.(type) {
+ case *kmsg.CreateTopicsResponse:
+ if len(t.Topics) > 0 {
+ code = t.Topics[0].ErrorCode
+ }
+ case *kmsg.DeleteTopicsResponse:
+ if len(t.Topics) > 0 {
+ code = t.Topics[0].ErrorCode
+ }
+ case *kmsg.CreatePartitionsResponse:
+ if len(t.Topics) > 0 {
+ code = t.Topics[0].ErrorCode
+ }
+ case *kmsg.ElectLeadersResponse:
+ if len(t.Topics) > 0 && len(t.Topics[0].Partitions) > 0 {
+ code = t.Topics[0].Partitions[0].ErrorCode
+ }
+ case *kmsg.AlterPartitionAssignmentsResponse:
+ code = t.ErrorCode
+ case *kmsg.ListPartitionReassignmentsResponse:
+ code = t.ErrorCode
+ case *kmsg.AlterUserSCRAMCredentialsResponse:
+ if len(t.Results) > 0 {
+ code = t.Results[0].ErrorCode
+ }
+ case *kmsg.VoteResponse:
+ code = t.ErrorCode
+ case *kmsg.BeginQuorumEpochResponse:
+ code = t.ErrorCode
+ case *kmsg.EndQuorumEpochResponse:
+ code = t.ErrorCode
+ case *kmsg.DescribeQuorumResponse:
+ code = t.ErrorCode
+ case *kmsg.AlterPartitionResponse:
+ code = t.ErrorCode
+ case *kmsg.UpdateFeaturesResponse:
+ code = t.ErrorCode
+ case *kmsg.EnvelopeResponse:
+ code = t.ErrorCode
+ }
+ if err := kerr.ErrorForCode(code); errors.Is(err, kerr.NotController) {
+ // There must be a last broker if we were able to issue
+ // the request and get a response.
+ cl.forgetControllerID(r.last.meta.NodeID)
+ return err
+ }
+ return nil
+ }
+
+ resp, err := r.Request(ctx, req)
+ return shard(r.last, req, resp, err)
+}
+
+// handleCoordinatorReq issues simple (non-shardable) group or txn requests.
+func (cl *Client) handleCoordinatorReq(ctx context.Context, req kmsg.Request) ResponseShard {
+ switch t := req.(type) {
+ default:
+ // All group requests should be listed below, so if it isn't,
+ // then we do not know what this request is.
+ return shard(nil, req, nil, errors.New("client is too old; this client does not know what to do with this request"))
+
+ /////////
+ // TXN // -- all txn reqs are simple
+ /////////
+
+ case *kmsg.InitProducerIDRequest:
+ if t.TransactionalID != nil {
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeTxn, *t.TransactionalID, req)
+ }
+ // InitProducerID can go to any broker if the transactional ID
+ // is nil. By using handleReqWithCoordinator, we get the
+ // retryable-error parsing, even though we are not actually
+ // using a defined txn coordinator. This is fine; by passing no
+ // names, we delete no coordinator.
+ coordinator, resp, err := cl.handleReqWithCoordinator(ctx, func() (*broker, error) { return cl.broker(), nil }, coordinatorTypeTxn, "", req)
+ return shard(coordinator, req, resp, err)
+ case *kmsg.AddOffsetsToTxnRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeTxn, t.TransactionalID, req)
+ case *kmsg.EndTxnRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeTxn, t.TransactionalID, req)
+
+ ///////////
+ // GROUP // -- most group reqs are simple
+ ///////////
+
+ case *kmsg.OffsetCommitRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeGroup, t.Group, req)
+ case *kmsg.TxnOffsetCommitRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeGroup, t.Group, req)
+ case *kmsg.JoinGroupRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeGroup, t.Group, req)
+ case *kmsg.HeartbeatRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeGroup, t.Group, req)
+ case *kmsg.LeaveGroupRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeGroup, t.Group, req)
+ case *kmsg.SyncGroupRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeGroup, t.Group, req)
+ case *kmsg.OffsetDeleteRequest:
+ return cl.handleCoordinatorReqSimple(ctx, coordinatorTypeGroup, t.Group, req)
+ }
+}
+
+// handleCoordinatorReqSimple issues a request that contains a single group or
+// txn to its coordinator.
+//
+// The error is inspected to see if it is a retryable error and, if so, the
+// coordinator is deleted.
+func (cl *Client) handleCoordinatorReqSimple(ctx context.Context, typ int8, name string, req kmsg.Request) ResponseShard {
+ coordinator, resp, err := cl.handleReqWithCoordinator(ctx, func() (*broker, error) {
+ return cl.loadCoordinator(ctx, typ, name)
+ }, typ, name, req)
+ return shard(coordinator, req, resp, err)
+}
+
+// handleReqWithCoordinator actually issues a request to a coordinator and
+// does retry handling.
+//
+// This avoids retries on the two group requests that need to be sharded.
+func (cl *Client) handleReqWithCoordinator(
+ ctx context.Context,
+ coordinator func() (*broker, error),
+ typ int8,
+ name string, // group ID or the transactional id
+ req kmsg.Request,
+) (*broker, kmsg.Response, error) {
+ r := cl.retryableBrokerFn(coordinator)
+ var d failDial
+ r.parseRetryErr = func(resp kmsg.Response, err error) error {
+ if err != nil {
+ if d.isRepeatedDialFail(err) {
+ cl.deleteStaleCoordinator(name, typ)
+ }
+ return err
+ }
+ var code int16
+ switch t := resp.(type) {
+ // TXN
+ case *kmsg.InitProducerIDResponse:
+ code = t.ErrorCode
+ case *kmsg.AddOffsetsToTxnResponse:
+ code = t.ErrorCode
+ case *kmsg.EndTxnResponse:
+ code = t.ErrorCode
+
+ // GROUP
+ case *kmsg.OffsetCommitResponse:
+ if len(t.Topics) > 0 && len(t.Topics[0].Partitions) > 0 {
+ code = t.Topics[0].Partitions[0].ErrorCode
+ }
+ case *kmsg.TxnOffsetCommitResponse:
+ if len(t.Topics) > 0 && len(t.Topics[0].Partitions) > 0 {
+ code = t.Topics[0].Partitions[0].ErrorCode
+ }
+ case *kmsg.JoinGroupResponse:
+ code = t.ErrorCode
+ case *kmsg.HeartbeatResponse:
+ code = t.ErrorCode
+ case *kmsg.LeaveGroupResponse:
+ code = t.ErrorCode
+ case *kmsg.SyncGroupResponse:
+ code = t.ErrorCode
+ }
+
+ // ListGroups, OffsetFetch, DeleteGroups, DescribeGroups, and
+ // DescribeTransactions handled in sharding.
+
+ if err := kerr.ErrorForCode(code); cl.maybeDeleteStaleCoordinator(name, typ, err) {
+ return err
+ }
+ return nil
+ }
+
+ resp, err := r.Request(ctx, req)
+ return r.last, resp, err
+}
+
+// Broker returns a handle to a specific broker to directly issue requests to.
+// Note that there is no guarantee that this broker exists; if it does not,
+// requests will fail with with an unknown broker error.
+func (cl *Client) Broker(id int) *Broker {
+ return &Broker{
+ id: int32(id),
+ cl: cl,
+ }
+}
+
+// DiscoveredBrokers returns all brokers that were discovered from prior
+// metadata responses. This does not actually issue a metadata request to load
+// brokers; if you wish to ensure this returns all brokers, be sure to manually
+// issue a metadata request before this. This also does not include seed
+// brokers, which are internally saved under special internal broker IDs (but,
+// it does include those brokers under their normal IDs as returned from a
+// metadata response).
+func (cl *Client) DiscoveredBrokers() []*Broker {
+ cl.brokersMu.RLock()
+ defer cl.brokersMu.RUnlock()
+
+ var bs []*Broker
+ for _, broker := range cl.brokers {
+ bs = append(bs, &Broker{id: broker.meta.NodeID, cl: cl})
+ }
+ return bs
+}
+
+// SeedBrokers returns the all seed brokers.
+func (cl *Client) SeedBrokers() []*Broker {
+ var bs []*Broker
+ for _, broker := range cl.loadSeeds() {
+ bs = append(bs, &Broker{id: broker.meta.NodeID, cl: cl})
+ }
+ return bs
+}
+
+// UpdateSeedBrokers updates the client's list of seed brokers. Over the course
+// of a long period of time, your might replace all brokers that you originally
+// specified as seeds. This command allows you to replace the client's list of
+// seeds.
+//
+// This returns an error if any of the input addrs is not a host:port. If the
+// input list is empty, the function returns without replacing the seeds.
+func (cl *Client) UpdateSeedBrokers(addrs ...string) error {
+ if len(addrs) == 0 {
+ return nil
+ }
+ seeds, err := parseSeeds(addrs)
+ if err != nil {
+ return err
+ }
+
+ seedBrokers := make([]*broker, 0, len(seeds))
+ for i, seed := range seeds {
+ b := cl.newBroker(unknownSeedID(i), seed.host, seed.port, nil)
+ seedBrokers = append(seedBrokers, b)
+ }
+
+ // We lock to guard against concurrently updating seeds; we do not need
+ // the lock for what this usually guards.
+ cl.brokersMu.Lock()
+ old := cl.loadSeeds()
+ cl.seeds.Store(seedBrokers)
+ cl.brokersMu.Unlock()
+
+ for _, b := range old {
+ b.stopForever()
+ }
+
+ return nil
+}
+
+// Broker pairs a broker ID with a client to directly issue requests to a
+// specific broker.
+type Broker struct {
+ id int32
+ cl *Client
+}
+
+// Request issues a request to a broker. If the broker does not exist in the
+// client, this returns an unknown broker error. Requests are not retried.
+//
+// The passed context can be used to cancel a request and return early.
+// Note that if the request is not canceled before it is written to Kafka,
+// you may just end up canceling and not receiving the response to what Kafka
+// inevitably does.
+//
+// It is more beneficial to always use RetriableRequest.
+func (b *Broker) Request(ctx context.Context, req kmsg.Request) (kmsg.Response, error) {
+ return b.request(ctx, false, req)
+}
+
+// RetriableRequest issues a request to a broker the same as Broker, but
+// retries in the face of retryable broker connection errors. This does not
+// retry on response internal errors.
+func (b *Broker) RetriableRequest(ctx context.Context, req kmsg.Request) (kmsg.Response, error) {
+ return b.request(ctx, true, req)
+}
+
+func (b *Broker) request(ctx context.Context, retry bool, req kmsg.Request) (kmsg.Response, error) {
+ ctx, cancel := context.WithCancel(ctx)
+ defer cancel()
+ var resp kmsg.Response
+ var err error
+ done := make(chan struct{})
+
+ go func() {
+ defer close(done)
+
+ if !retry {
+ var br *broker
+ br, err = b.cl.brokerOrErr(ctx, b.id, errUnknownBroker)
+ if err == nil {
+ resp, err = br.waitResp(ctx, req)
+ }
+ } else {
+ resp, err = b.cl.retryableBrokerFn(func() (*broker, error) {
+ return b.cl.brokerOrErr(ctx, b.id, errUnknownBroker)
+ }).Request(ctx, req)
+ }
+ }()
+
+ select {
+ case <-done:
+ return resp, err
+ case <-ctx.Done():
+ return nil, ctx.Err()
+ case <-b.cl.ctx.Done():
+ return nil, b.cl.ctx.Err()
+ }
+}
+
+//////////////////////
+// REQUEST SHARDING //
+//////////////////////
+
+// Below here lies all logic to handle requests that need to be split and sent
+// to many brokers. A lot of the logic for each sharding function is very
+// similar, but each sharding function uses slightly different types.
+
+// issueShard is a request that has been split and is ready to be sent to the
+// given broker ID.
+type issueShard struct {
+ req kmsg.Request
+ broker int32
+ any bool
+
+ // if non-nil, we could not map this request shard to any broker, and
+ // this error is the reason.
+ err error
+}
+
+// sharder splits a request.
+type sharder interface {
+ // shard splits a request and returns the requests to issue tied to the
+ // brokers to issue the requests to. This can return an error if there
+ // is some pre-loading that needs to happen. If an error is returned,
+ // the request that was intended for splitting is failed wholesale.
+ //
+ // Due to sharded requests not being retryable if a response is
+ // received, to avoid stale coordinator errors, this function should
+ // not use any previously cached metadata.
+ //
+ // This takes the last error if the request is being retried, which is
+ // currently only useful for errBrokerTooOld.
+ shard(context.Context, kmsg.Request, error) ([]issueShard, bool, error)
+
+ // onResp is called on a successful response to investigate the
+ // response and potentially perform cleanup, and potentially returns an
+ // error signifying to retry. See onShardRespErr below for more
+ // details.
+ onResp(kmsg.Request, kmsg.Response) error
+
+ // merge is a function that can be used to merge sharded responses into
+ // one response. This is used by the client.Request method.
+ merge([]ResponseShard) (kmsg.Response, error)
+}
+
+// handleShardedReq splits and issues requests to brokers, recursively
+// splitting as necessary if requests fail and need remapping.
+func (cl *Client) handleShardedReq(ctx context.Context, req kmsg.Request) ([]ResponseShard, shardMerge) {
+ // First, determine our sharder.
+ var sharder sharder
+ switch req.(type) {
+ case *kmsg.ListOffsetsRequest:
+ sharder = &listOffsetsSharder{cl}
+ case *kmsg.OffsetFetchRequest:
+ sharder = &offsetFetchSharder{cl}
+ case *kmsg.FindCoordinatorRequest:
+ sharder = &findCoordinatorSharder{cl}
+ case *kmsg.DescribeGroupsRequest:
+ sharder = &describeGroupsSharder{cl}
+ case *kmsg.ListGroupsRequest:
+ sharder = &listGroupsSharder{cl}
+ case *kmsg.DeleteRecordsRequest:
+ sharder = &deleteRecordsSharder{cl}
+ case *kmsg.OffsetForLeaderEpochRequest:
+ sharder = &offsetForLeaderEpochSharder{cl}
+ case *kmsg.AddPartitionsToTxnRequest:
+ sharder = &addPartitionsToTxnSharder{cl}
+ case *kmsg.WriteTxnMarkersRequest:
+ sharder = &writeTxnMarkersSharder{cl}
+ case *kmsg.DescribeConfigsRequest:
+ sharder = &describeConfigsSharder{cl}
+ case *kmsg.AlterConfigsRequest:
+ sharder = &alterConfigsSharder{cl}
+ case *kmsg.AlterReplicaLogDirsRequest:
+ sharder = &alterReplicaLogDirsSharder{cl}
+ case *kmsg.DescribeLogDirsRequest:
+ sharder = &describeLogDirsSharder{cl}
+ case *kmsg.DeleteGroupsRequest:
+ sharder = &deleteGroupsSharder{cl}
+ case *kmsg.IncrementalAlterConfigsRequest:
+ sharder = &incrementalAlterConfigsSharder{cl}
+ case *kmsg.DescribeProducersRequest:
+ sharder = &describeProducersSharder{cl}
+ case *kmsg.DescribeTransactionsRequest:
+ sharder = &describeTransactionsSharder{cl}
+ case *kmsg.ListTransactionsRequest:
+ sharder = &listTransactionsSharder{cl}
+ }
+
+ // If a request fails, we re-shard it (in case it needs to be split
+ // again). reqTry tracks how many total tries a request piece has had;
+ // we quit at either the max configured tries or max configured time.
+ type reqTry struct {
+ tries int
+ req kmsg.Request
+ lastErr error
+ }
+
+ var (
+ shardsMu sync.Mutex
+ shards []ResponseShard
+
+ addShard = func(shard ResponseShard) {
+ shardsMu.Lock()
+ defer shardsMu.Unlock()
+ shards = append(shards, shard)
+ }
+
+ start = time.Now()
+ retryTimeout = cl.cfg.retryTimeout(req.Key())
+
+ wg sync.WaitGroup
+ issue func(reqTry)
+ )
+
+ l := cl.cfg.logger
+ debug := l.Level() >= LogLevelDebug
+
+ // issue is called to progressively split and issue requests.
+ //
+ // This recursively calls itself if a request fails and can be retried.
+ // We avoid stack problems because this calls itself in a goroutine.
+ issue = func(try reqTry) {
+ issues, reshardable, err := sharder.shard(ctx, try.req, try.lastErr)
+ if err != nil {
+ l.Log(LogLevelDebug, "unable to shard request", "req", kmsg.Key(try.req.Key()).Name(), "previous_tries", try.tries, "err", err)
+ addShard(shard(nil, try.req, nil, err)) // failure to shard means data loading failed; this request is failed
+ return
+ }
+
+ // If the request actually does not need to be issued, we issue
+ // it to a random broker. There is no benefit to this, but at
+ // least we will return one shard.
+ if len(issues) == 0 {
+ issues = []issueShard{{
+ req: try.req,
+ any: true,
+ }}
+ reshardable = true
+ }
+
+ if debug {
+ var key int16
+ var brokerAnys []string
+ for _, issue := range issues {
+ key = issue.req.Key()
+ if issue.err != nil {
+ brokerAnys = append(brokerAnys, "err")
+ } else if issue.any {
+ brokerAnys = append(brokerAnys, "any")
+ } else {
+ brokerAnys = append(brokerAnys, fmt.Sprintf("%d", issue.broker))
+ }
+ }
+ l.Log(LogLevelDebug, "sharded request", "req", kmsg.Key(key).Name(), "destinations", brokerAnys)
+ }
+
+ for i := range issues {
+ myIssue := issues[i]
+ myUnderlyingReq := myIssue.req
+ var isPinned bool
+ if pinned, ok := myIssue.req.(*pinReq); ok {
+ myUnderlyingReq = pinned.Request
+ isPinned = true
+ }
+
+ if myIssue.err != nil {
+ addShard(shard(nil, myUnderlyingReq, nil, myIssue.err))
+ continue
+ }
+
+ tries := try.tries
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ start:
+ tries++
+
+ broker := cl.broker()
+ var err error
+ if !myIssue.any {
+ broker, err = cl.brokerOrErr(ctx, myIssue.broker, errUnknownBroker)
+ }
+ if err != nil {
+ addShard(shard(nil, myUnderlyingReq, nil, err)) // failure to load a broker is a failure to issue a request
+ return
+ }
+
+ resp, err := broker.waitResp(ctx, myIssue.req)
+ var errIsFromResp bool
+ if err == nil {
+ err = sharder.onResp(myUnderlyingReq, resp) // perform some potential cleanup, and potentially receive an error to retry
+ if ke := (*kerr.Error)(nil); errors.As(err, &ke) {
+ errIsFromResp = true
+ }
+ }
+
+ // If we failed to issue the request, we *maybe* will retry.
+ // We could have failed to even issue the request or receive
+ // a response, which is retryable.
+ //
+ // If a pinned req fails with errBrokerTooOld, we always retry
+ // immediately. The request was not even issued. However, as a
+ // safety, we only do this 3 times to avoid some super weird
+ // pathological spin loop.
+ backoff := cl.cfg.retryBackoff(tries)
+ if err != nil &&
+ (reshardable && isPinned && errors.Is(err, errBrokerTooOld) && tries <= 3) ||
+ (retryTimeout == 0 || time.Now().Add(backoff).Sub(start) <= retryTimeout) && cl.shouldRetry(tries, err) && cl.waitTries(ctx, backoff) {
+ // Non-reshardable re-requests just jump back to the
+ // top where the broker is loaded. This is the case on
+ // requests where the original request is split to
+ // dedicated brokers; we do not want to re-shard that.
+ if !reshardable {
+ l.Log(LogLevelDebug, "sharded request failed, reissuing without resharding", "req", kmsg.Key(myIssue.req.Key()).Name(), "time_since_start", time.Since(start), "tries", try.tries, "err", err)
+ goto start
+ }
+ l.Log(LogLevelDebug, "sharded request failed, resharding and reissuing", "req", kmsg.Key(myIssue.req.Key()).Name(), "time_since_start", time.Since(start), "tries", try.tries, "err", err)
+ issue(reqTry{tries, myUnderlyingReq, err})
+ return
+ }
+
+ // If we pulled an error out of the response body in an attempt
+ // to possibly retry, the request was NOT an error that we want
+ // to bubble as a shard error. The request was successful, we
+ // have a response. Before we add the shard, strip the error.
+ // The end user can parse the response ErrorCode.
+ if errIsFromResp {
+ err = nil
+ }
+ addShard(shard(broker, myUnderlyingReq, resp, err)) // the error was not retryable
+ }()
+ }
+ }
+
+ issue(reqTry{0, req, nil})
+ wg.Wait()
+
+ return shards, sharder.merge
+}
+
+// For sharded errors, we prefer to keep retryable errors rather than
+// non-retryable errors. We keep the non-retryable if everything is
+// non-retryable.
+//
+// We favor retryable because retryable means we used a stale cache value; we
+// clear the stale entries on failure and the retry uses fresh data. The
+// request will be split and remapped, and the non-retryable errors will be
+// encountered again.
+func onRespShardErr(err *error, newKerr error) {
+ if newKerr == nil || *err != nil && kerr.IsRetriable(*err) {
+ return
+ }
+ *err = newKerr
+}
+
+// a convenience function for when a request needs to be issued identically to
+// all brokers.
+func (cl *Client) allBrokersShardedReq(ctx context.Context, fn func() kmsg.Request) ([]issueShard, bool, error) {
+ if err := cl.fetchBrokerMetadata(ctx); err != nil {
+ return nil, false, err
+ }
+
+ var issues []issueShard
+ cl.brokersMu.RLock()
+ for _, broker := range cl.brokers {
+ issues = append(issues, issueShard{
+ req: fn(),
+ broker: broker.meta.NodeID,
+ })
+ }
+ cl.brokersMu.RUnlock()
+
+ return issues, false, nil // we do NOT re-shard these requests request
+}
+
+// a convenience function for saving the first ResponseShard error.
+func firstErrMerger(sresps []ResponseShard, merge func(kresp kmsg.Response)) error {
+ var firstErr error
+ for _, sresp := range sresps {
+ if sresp.Err != nil {
+ if firstErr == nil {
+ firstErr = sresp.Err
+ }
+ continue
+ }
+ merge(sresp.Resp)
+ }
+ return firstErr
+}
+
+type mappedMetadataTopic struct {
+ t kmsg.MetadataResponseTopic
+ ps map[int32]kmsg.MetadataResponseTopicPartition
+ when time.Time
+}
+
+// For NOT_LEADER_FOR_PARTITION:
+// We always delete stale metadata. It's possible that a leader rebalance
+// happened immediately after we requested metadata; we should not pin to
+// the stale metadata for 1s.
+//
+// For UNKNOWN_TOPIC_OR_PARTITION:
+// We only delete stale metadata if it is older than the min age or 1s,
+// whichever is smaller. We use 1s even if min age is larger, because we want
+// to encourage larger min age for caching purposes. More obvious would be to
+// *always* evict the cache here, but if we *just* requested metadata, then
+// evicting the cache would cause churn for a topic that genuinely does not
+// exist.
+func (cl *Client) maybeDeleteMappedMetadata(unknownTopic bool, ts ...string) (shouldRetry bool) {
+ if len(ts) == 0 {
+ return
+ }
+
+ var min time.Duration
+ if unknownTopic {
+ min = time.Second
+ if cl.cfg.metadataMinAge < min {
+ min = cl.cfg.metadataMinAge
+ }
+ }
+
+ cl.mappedMetaMu.Lock()
+ defer cl.mappedMetaMu.Unlock()
+ for _, t := range ts {
+ tcached, exists := cl.mappedMeta[t]
+ if exists && (min == 0 || time.Since(tcached.when) > min) {
+ shouldRetry = true
+ delete(cl.mappedMeta, t)
+ }
+ }
+ return shouldRetry
+}
+
+// We only cache for metadata min age. We could theoretically cache forever,
+// but an out of band CreatePartitions can result in our metadata being stale
+// and us never knowing. So, we choose metadata min age. There are only a few
+// requests that are sharded and use metadata, and the one this benefits most
+// is ListOffsets. Likely, ListOffsets for the same topic will be issued back
+// to back, so not caching for so long is ok.
+func (cl *Client) fetchCachedMappedMetadata(ts ...string) (map[string]mappedMetadataTopic, []string) {
+ cl.mappedMetaMu.Lock()
+ defer cl.mappedMetaMu.Unlock()
+ if cl.mappedMeta == nil {
+ return nil, ts
+ }
+ cached := make(map[string]mappedMetadataTopic)
+ needed := ts[:0]
+
+ for _, t := range ts {
+ tcached, exists := cl.mappedMeta[t]
+ if exists && time.Since(tcached.when) < cl.cfg.metadataMinAge {
+ cached[t] = tcached
+ } else {
+ needed = append(needed, t)
+ delete(cl.mappedMeta, t)
+ }
+ }
+ return cached, needed
+}
+
+// fetchMappedMetadata provides a convenience type of working with metadata;
+// this is garbage heavy, so it is only used in one off requests in this
+// package.
+func (cl *Client) fetchMappedMetadata(ctx context.Context, topics []string, useCache bool) (map[string]mappedMetadataTopic, error) {
+ var r map[string]mappedMetadataTopic
+ needed := topics
+ if useCache {
+ r, needed = cl.fetchCachedMappedMetadata(topics...)
+ if len(needed) == 0 {
+ return r, nil
+ }
+ }
+ if r == nil {
+ r = make(map[string]mappedMetadataTopic)
+ }
+
+ _, meta, err := cl.fetchMetadataForTopics(ctx, false, needed)
+ if err != nil {
+ return nil, err
+ }
+
+ // Cache the mapped metadata, and also store each topic in the results.
+ cl.storeCachedMappedMetadata(meta, func(entry mappedMetadataTopic) {
+ r[*entry.t.Topic] = entry
+ })
+
+ return r, nil
+}
+
+// storeCachedMappedMetadata caches the fetched metadata in the Client, and calls the onEachTopic callback
+// function for each topic in the MetadataResponse.
+func (cl *Client) storeCachedMappedMetadata(meta *kmsg.MetadataResponse, onEachTopic func(_ mappedMetadataTopic)) {
+ cl.mappedMetaMu.Lock()
+ defer cl.mappedMetaMu.Unlock()
+ if cl.mappedMeta == nil {
+ cl.mappedMeta = make(map[string]mappedMetadataTopic)
+ }
+ when := time.Now()
+ for _, topic := range meta.Topics {
+ if topic.Topic == nil {
+ // We do not request with topic IDs, so we should not
+ // receive topic IDs in the response.
+ continue
+ }
+ t := mappedMetadataTopic{
+ t: topic,
+ ps: make(map[int32]kmsg.MetadataResponseTopicPartition),
+ when: when,
+ }
+ cl.mappedMeta[*topic.Topic] = t
+ for _, partition := range topic.Partitions {
+ t.ps[partition.Partition] = partition
+ }
+
+ if onEachTopic != nil {
+ onEachTopic(t)
+ }
+ }
+ if len(meta.Topics) != len(cl.mappedMeta) {
+ for topic, mapped := range cl.mappedMeta {
+ if mapped.when.Equal(when) {
+ continue
+ }
+ if time.Since(mapped.when) > cl.cfg.metadataMinAge {
+ delete(cl.mappedMeta, topic)
+ }
+ }
+ }
+}
+
+func unknownOrCode(exists bool, code int16) error {
+ if !exists {
+ return kerr.UnknownTopicOrPartition
+ }
+ return kerr.ErrorForCode(code)
+}
+
+func noLeader(l int32) error {
+ if l < 0 {
+ return kerr.LeaderNotAvailable
+ }
+ return nil
+}
+
+// This is a helper for the sharded requests below; if mapping metadata fails
+// to load topics or partitions, we group the failures by error.
+//
+// We use a lot of reflect magic to make the actual usage much nicer.
+type unknownErrShards struct {
+ // load err => topic => mystery slice type
+ //
+ // The mystery type is basically just []Partition, where Partition can
+ // be any kmsg type.
+ mapped map[error]map[string]reflect.Value
+}
+
+// err stores a new failing partition with its failing error.
+//
+// partition's type is equal to the arg1 type of l.fn.
+func (l *unknownErrShards) err(err error, topic string, partition any) {
+ if l.mapped == nil {
+ l.mapped = make(map[error]map[string]reflect.Value)
+ }
+ t := l.mapped[err]
+ if t == nil {
+ t = make(map[string]reflect.Value)
+ l.mapped[err] = t
+ }
+ slice, ok := t[topic]
+ if !ok {
+ // We make a slice of the input partition type.
+ slice = reflect.MakeSlice(reflect.SliceOf(reflect.TypeOf(partition)), 0, 1)
+ }
+
+ t[topic] = reflect.Append(slice, reflect.ValueOf(partition))
+}
+
+// errs takes an input slice of partitions and stores each with its failing
+// error.
+//
+// partitions is a slice where each element has type of arg1 of l.fn.
+func (l *unknownErrShards) errs(err error, topic string, partitions any) {
+ v := reflect.ValueOf(partitions)
+ for i := 0; i < v.Len(); i++ {
+ l.err(err, topic, v.Index(i).Interface())
+ }
+}
+
+// Returns issueShards for each error stored in l.
+//
+// This takes a factory function: the first return is a new kmsg.Request, the
+// second is a function that adds a topic and its partitions to that request.
+//
+// Thus, fn is of type func() (kmsg.Request, func(string, []P))
+func (l *unknownErrShards) collect(mkreq, mergeParts any) []issueShard {
+ if len(l.mapped) == 0 {
+ return nil
+ }
+
+ var shards []issueShard
+
+ factory := reflect.ValueOf(mkreq)
+ perTopic := reflect.ValueOf(mergeParts)
+ for err, topics := range l.mapped {
+ req := factory.Call(nil)[0]
+
+ var ntopics, npartitions int
+ for topic, partitions := range topics {
+ ntopics++
+ npartitions += partitions.Len()
+ perTopic.Call([]reflect.Value{req, reflect.ValueOf(topic), partitions})
+ }
+
+ shards = append(shards, issueShard{
+ req: req.Interface().(kmsg.Request),
+ err: err,
+ })
+ }
+
+ return shards
+}
+
+// handles sharding ListOffsetsRequest
+type listOffsetsSharder struct{ *Client }
+
+func (cl *listOffsetsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.ListOffsetsRequest)
+
+ // For listing offsets, we need the broker leader for each partition we
+ // are listing. Thus, we first load metadata for the topics.
+ //
+ // Metadata loading performs retries; if we fail here, the we do not
+ // issue sharded requests.
+ var need []string
+ for _, topic := range req.Topics {
+ need = append(need, topic.Topic)
+ }
+ mapping, err := cl.fetchMappedMetadata(ctx, need, true)
+ if err != nil {
+ return nil, false, err
+ }
+
+ brokerReqs := make(map[int32]map[string][]kmsg.ListOffsetsRequestTopicPartition)
+ var unknowns unknownErrShards
+
+ // For any topic or partition that had an error load, we blindly issue
+ // a load to the first seed broker. We expect the list to fail, but it
+ // is the best we could do.
+ for _, topic := range req.Topics {
+ t := topic.Topic
+ tmapping, exists := mapping[t]
+ if err := unknownOrCode(exists, tmapping.t.ErrorCode); err != nil {
+ unknowns.errs(err, t, topic.Partitions)
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ p, exists := tmapping.ps[partition.Partition]
+ if err := unknownOrCode(exists, p.ErrorCode); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+ if err := noLeader(p.Leader); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+
+ brokerReq := brokerReqs[p.Leader]
+ if brokerReq == nil {
+ brokerReq = make(map[string][]kmsg.ListOffsetsRequestTopicPartition)
+ brokerReqs[p.Leader] = brokerReq
+ }
+ brokerReq[t] = append(brokerReq[t], partition)
+ }
+ }
+
+ mkreq := func() *kmsg.ListOffsetsRequest {
+ r := kmsg.NewPtrListOffsetsRequest()
+ r.ReplicaID = req.ReplicaID
+ r.IsolationLevel = req.IsolationLevel
+ return r
+ }
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ req := mkreq()
+ for topic, parts := range brokerReq {
+ reqTopic := kmsg.NewListOffsetsRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ req.Topics = append(req.Topics, reqTopic)
+ }
+ issues = append(issues, issueShard{
+ req: req,
+ broker: brokerID,
+ })
+ }
+
+ return append(issues, unknowns.collect(mkreq, func(r *kmsg.ListOffsetsRequest, topic string, parts []kmsg.ListOffsetsRequestTopicPartition) {
+ reqTopic := kmsg.NewListOffsetsRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ r.Topics = append(r.Topics, reqTopic)
+ })...), true, nil // this is reshardable
+}
+
+func (cl *listOffsetsSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ var (
+ resp = kresp.(*kmsg.ListOffsetsResponse)
+ del []string
+ retErr error
+ unknownTopic bool
+ )
+
+ for i := range resp.Topics {
+ t := &resp.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ err := kerr.ErrorForCode(p.ErrorCode)
+ if err == kerr.UnknownTopicOrPartition || err == kerr.NotLeaderForPartition {
+ del = append(del, t.Topic)
+ unknownTopic = unknownTopic || err == kerr.UnknownTopicOrPartition
+ }
+ onRespShardErr(&retErr, err)
+ }
+ }
+ if cl.maybeDeleteMappedMetadata(unknownTopic, del...) {
+ return retErr
+ }
+ return nil
+}
+
+func (*listOffsetsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrListOffsetsResponse()
+ topics := make(map[string][]kmsg.ListOffsetsResponseTopicPartition)
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.ListOffsetsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+
+ for _, topic := range resp.Topics {
+ topics[topic.Topic] = append(topics[topic.Topic], topic.Partitions...)
+ }
+ })
+ for topic, partitions := range topics {
+ respTopic := kmsg.NewListOffsetsResponseTopic()
+ respTopic.Topic = topic
+ respTopic.Partitions = partitions
+ merged.Topics = append(merged.Topics, respTopic)
+ }
+ return merged, firstErr
+}
+
+// handles sharding OffsetFetchRequest
+type offsetFetchSharder struct{ *Client }
+
+func offsetFetchReqToGroup(req *kmsg.OffsetFetchRequest) kmsg.OffsetFetchRequestGroup {
+ g := kmsg.NewOffsetFetchRequestGroup()
+ g.Group = req.Group
+ for _, topic := range req.Topics {
+ reqTopic := kmsg.NewOffsetFetchRequestGroupTopic()
+ reqTopic.Topic = topic.Topic
+ reqTopic.Partitions = topic.Partitions
+ g.Topics = append(g.Topics, reqTopic)
+ }
+ return g
+}
+
+func offsetFetchGroupToReq(requireStable bool, group kmsg.OffsetFetchRequestGroup) *kmsg.OffsetFetchRequest {
+ req := kmsg.NewPtrOffsetFetchRequest()
+ req.RequireStable = requireStable
+ req.Group = group.Group
+ for _, topic := range group.Topics {
+ reqTopic := kmsg.NewOffsetFetchRequestTopic()
+ reqTopic.Topic = topic.Topic
+ reqTopic.Partitions = topic.Partitions
+ req.Topics = append(req.Topics, reqTopic)
+ }
+ return req
+}
+
+func offsetFetchRespToGroup(req *kmsg.OffsetFetchRequest, resp *kmsg.OffsetFetchResponse) kmsg.OffsetFetchResponseGroup {
+ g := kmsg.NewOffsetFetchResponseGroup()
+ g.Group = req.Group
+ g.ErrorCode = resp.ErrorCode
+ for _, topic := range resp.Topics {
+ t := kmsg.NewOffsetFetchResponseGroupTopic()
+ t.Topic = topic.Topic
+ for _, partition := range topic.Partitions {
+ p := kmsg.NewOffsetFetchResponseGroupTopicPartition()
+ p.Partition = partition.Partition
+ p.Offset = partition.Offset
+ p.LeaderEpoch = partition.LeaderEpoch
+ p.Metadata = partition.Metadata
+ p.ErrorCode = partition.ErrorCode
+ t.Partitions = append(t.Partitions, p)
+ }
+ g.Topics = append(g.Topics, t)
+ }
+ return g
+}
+
+func offsetFetchRespGroupIntoResp(g kmsg.OffsetFetchResponseGroup, into *kmsg.OffsetFetchResponse) {
+ into.ErrorCode = g.ErrorCode
+ into.Topics = into.Topics[:0]
+ for _, topic := range g.Topics {
+ t := kmsg.NewOffsetFetchResponseTopic()
+ t.Topic = topic.Topic
+ for _, partition := range topic.Partitions {
+ p := kmsg.NewOffsetFetchResponseTopicPartition()
+ p.Partition = partition.Partition
+ p.Offset = partition.Offset
+ p.LeaderEpoch = partition.LeaderEpoch
+ p.Metadata = partition.Metadata
+ p.ErrorCode = partition.ErrorCode
+ t.Partitions = append(t.Partitions, p)
+ }
+ into.Topics = append(into.Topics, t)
+ }
+}
+
+func (cl *offsetFetchSharder) shard(ctx context.Context, kreq kmsg.Request, lastErr error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.OffsetFetchRequest)
+
+ // We always try batching and only split at the end if lastErr
+ // indicates too old. We convert to batching immediately.
+ dup := *req
+ req = &dup
+
+ if len(req.Groups) == 0 {
+ req.Groups = append(req.Groups, offsetFetchReqToGroup(req))
+ }
+ groups := make([]string, 0, len(req.Groups))
+ for i := range req.Groups {
+ groups = append(groups, req.Groups[i].Group)
+ }
+
+ coordinators := cl.loadCoordinators(ctx, coordinatorTypeGroup, groups...)
+
+ // Loading coordinators can have each group fail with its unique error,
+ // or with a kerr.Error that can be merged. Unique errors get their own
+ // failure shard, while kerr.Error's get merged.
+ type unkerr struct {
+ err error
+ group kmsg.OffsetFetchRequestGroup
+ }
+ var (
+ brokerReqs = make(map[int32]*kmsg.OffsetFetchRequest)
+ kerrs = make(map[*kerr.Error][]kmsg.OffsetFetchRequestGroup)
+ unkerrs []unkerr
+ )
+
+ newReq := func(groups ...kmsg.OffsetFetchRequestGroup) *kmsg.OffsetFetchRequest {
+ newReq := kmsg.NewPtrOffsetFetchRequest()
+ newReq.RequireStable = req.RequireStable
+ newReq.Groups = groups
+ return newReq
+ }
+
+ for _, group := range req.Groups {
+ berr := coordinators[group.Group]
+ var ke *kerr.Error
+ switch {
+ case berr.err == nil:
+ brokerReq := brokerReqs[berr.b.meta.NodeID]
+ if brokerReq == nil {
+ brokerReq = newReq()
+ brokerReqs[berr.b.meta.NodeID] = brokerReq
+ }
+ brokerReq.Groups = append(brokerReq.Groups, group)
+ case errors.As(berr.err, &ke):
+ kerrs[ke] = append(kerrs[ke], group)
+ default:
+ unkerrs = append(unkerrs, unkerr{berr.err, group})
+ }
+ }
+
+ splitReq := errors.Is(lastErr, errBrokerTooOld)
+
+ var issues []issueShard
+ for id, req := range brokerReqs {
+ if splitReq {
+ for _, group := range req.Groups {
+ req := offsetFetchGroupToReq(req.RequireStable, group)
+ issues = append(issues, issueShard{
+ req: &pinReq{Request: req, pinMax: true, max: 7},
+ broker: id,
+ })
+ }
+ } else if len(req.Groups) == 1 {
+ single := offsetFetchGroupToReq(req.RequireStable, req.Groups[0])
+ single.Groups = req.Groups
+ issues = append(issues, issueShard{
+ req: single,
+ broker: id,
+ })
+ } else {
+ issues = append(issues, issueShard{
+ req: &pinReq{Request: req, pinMin: len(req.Groups) > 1, min: 8},
+ broker: id,
+ })
+ }
+ }
+ for _, unkerr := range unkerrs {
+ issues = append(issues, issueShard{
+ req: newReq(unkerr.group),
+ err: unkerr.err,
+ })
+ }
+ for kerr, groups := range kerrs {
+ issues = append(issues, issueShard{
+ req: newReq(groups...),
+ err: kerr,
+ })
+ }
+
+ return issues, true, nil // reshardable to load correct coordinators
+}
+
+func (cl *offsetFetchSharder) onResp(kreq kmsg.Request, kresp kmsg.Response) error {
+ req := kreq.(*kmsg.OffsetFetchRequest)
+ resp := kresp.(*kmsg.OffsetFetchResponse)
+
+ switch len(resp.Groups) {
+ case 0:
+ // Requested no groups: move top level into batch for v0-v7 to
+ // v8 forward compat.
+ resp.Groups = append(resp.Groups, offsetFetchRespToGroup(req, resp))
+ case 1:
+ // Requested 1 group v8+: set top level for v0-v7 back-compat.
+ offsetFetchRespGroupIntoResp(resp.Groups[0], resp)
+ default:
+ }
+
+ var retErr error
+ for i := range resp.Groups {
+ group := &resp.Groups[i]
+ err := kerr.ErrorForCode(group.ErrorCode)
+ cl.maybeDeleteStaleCoordinator(group.Group, coordinatorTypeGroup, err)
+ onRespShardErr(&retErr, err)
+ }
+
+ // For a final bit of extra fun, v0 and v1 do not have a top level
+ // error code but instead a per-partition error code. If the
+ // coordinator is loading &c, then all per-partition error codes are
+ // the same so we only need to look at the first partition.
+ if resp.Version < 2 && len(resp.Topics) > 0 && len(resp.Topics[0].Partitions) > 0 {
+ code := resp.Topics[0].Partitions[0].ErrorCode
+ err := kerr.ErrorForCode(code)
+ cl.maybeDeleteStaleCoordinator(req.Group, coordinatorTypeGroup, err)
+ onRespShardErr(&retErr, err)
+ }
+
+ return retErr
+}
+
+func (*offsetFetchSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrOffsetFetchResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.OffsetFetchResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.Groups = append(merged.Groups, resp.Groups...)
+
+ // Old requests only support one group; *either* the commit
+ // used multiple groups and they are expecting the batch
+ // response, *or* the commit used one group and we always merge
+ // that one group into the old format.
+ if len(resp.Groups) == 1 {
+ offsetFetchRespGroupIntoResp(resp.Groups[0], merged)
+ }
+ })
+}
+
+// handles sharding FindCoordinatorRequest
+type findCoordinatorSharder struct{ *Client }
+
+func findCoordinatorRespCoordinatorIntoResp(c kmsg.FindCoordinatorResponseCoordinator, into *kmsg.FindCoordinatorResponse) {
+ into.NodeID = c.NodeID
+ into.Host = c.Host
+ into.Port = c.Port
+ into.ErrorCode = c.ErrorCode
+ into.ErrorMessage = c.ErrorMessage
+}
+
+func (*findCoordinatorSharder) shard(_ context.Context, kreq kmsg.Request, lastErr error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.FindCoordinatorRequest)
+
+ // We always try batching and only split at the end if lastErr
+ // indicates too old. We convert to batching immediately.
+ dup := *req
+ req = &dup
+
+ uniq := make(map[string]struct{}, len(req.CoordinatorKeys))
+ if len(req.CoordinatorKeys) == 0 {
+ uniq[req.CoordinatorKey] = struct{}{}
+ } else {
+ for _, key := range req.CoordinatorKeys {
+ uniq[key] = struct{}{}
+ }
+ }
+ req.CoordinatorKeys = req.CoordinatorKeys[:0]
+ for key := range uniq {
+ req.CoordinatorKeys = append(req.CoordinatorKeys, key)
+ }
+ if len(req.CoordinatorKeys) == 1 {
+ req.CoordinatorKey = req.CoordinatorKeys[0]
+ }
+
+ splitReq := errors.Is(lastErr, errBrokerTooOld)
+ if !splitReq {
+ // With only one key, we do not need to split nor pin this.
+ if len(req.CoordinatorKeys) <= 1 {
+ return []issueShard{{req: req, any: true}}, false, nil
+ }
+ return []issueShard{{
+ req: &pinReq{Request: req, pinMin: true, min: 4},
+ any: true,
+ }}, true, nil // this is "reshardable", in that we will split the request next
+ }
+
+ var issues []issueShard
+ for _, key := range req.CoordinatorKeys {
+ sreq := kmsg.NewPtrFindCoordinatorRequest()
+ sreq.CoordinatorType = req.CoordinatorType
+ sreq.CoordinatorKey = key
+ issues = append(issues, issueShard{
+ req: &pinReq{Request: sreq, pinMax: true, max: 3},
+ any: true,
+ })
+ }
+ return issues, false, nil // not reshardable
+}
+
+func (*findCoordinatorSharder) onResp(kreq kmsg.Request, kresp kmsg.Response) error {
+ req := kreq.(*kmsg.FindCoordinatorRequest)
+ resp := kresp.(*kmsg.FindCoordinatorResponse)
+
+ switch len(resp.Coordinators) {
+ case 0:
+ // Convert v3 and prior to v4+
+ rc := kmsg.NewFindCoordinatorResponseCoordinator()
+ rc.Key = req.CoordinatorKey
+ rc.NodeID = resp.NodeID
+ rc.Host = resp.Host
+ rc.Port = resp.Port
+ rc.ErrorCode = resp.ErrorCode
+ rc.ErrorMessage = resp.ErrorMessage
+ resp.Coordinators = append(resp.Coordinators, rc)
+ case 1:
+ // Convert v4 to v3 and prior
+ findCoordinatorRespCoordinatorIntoResp(resp.Coordinators[0], resp)
+ }
+
+ return nil
+}
+
+func (*findCoordinatorSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrFindCoordinatorResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.FindCoordinatorResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.Coordinators = append(merged.Coordinators, resp.Coordinators...)
+
+ if len(resp.Coordinators) == 1 {
+ findCoordinatorRespCoordinatorIntoResp(resp.Coordinators[0], merged)
+ }
+ })
+}
+
+// handles sharding DescribeGroupsRequest
+type describeGroupsSharder struct{ *Client }
+
+func (cl *describeGroupsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.DescribeGroupsRequest)
+
+ coordinators := cl.loadCoordinators(ctx, coordinatorTypeGroup, req.Groups...)
+ type unkerr struct {
+ err error
+ group string
+ }
+ var (
+ brokerReqs = make(map[int32]*kmsg.DescribeGroupsRequest)
+ kerrs = make(map[*kerr.Error][]string)
+ unkerrs []unkerr
+ )
+
+ newReq := func(groups ...string) *kmsg.DescribeGroupsRequest {
+ newReq := kmsg.NewPtrDescribeGroupsRequest()
+ newReq.IncludeAuthorizedOperations = req.IncludeAuthorizedOperations
+ newReq.Groups = groups
+ return newReq
+ }
+
+ for _, group := range req.Groups {
+ berr := coordinators[group]
+ var ke *kerr.Error
+ switch {
+ case berr.err == nil:
+ brokerReq := brokerReqs[berr.b.meta.NodeID]
+ if brokerReq == nil {
+ brokerReq = newReq()
+ brokerReqs[berr.b.meta.NodeID] = brokerReq
+ }
+ brokerReq.Groups = append(brokerReq.Groups, group)
+ case errors.As(berr.err, &ke):
+ kerrs[ke] = append(kerrs[ke], group)
+ default:
+ unkerrs = append(unkerrs, unkerr{berr.err, group})
+ }
+ }
+
+ var issues []issueShard
+ for id, req := range brokerReqs {
+ issues = append(issues, issueShard{
+ req: req,
+ broker: id,
+ })
+ }
+ for _, unkerr := range unkerrs {
+ issues = append(issues, issueShard{
+ req: newReq(unkerr.group),
+ err: unkerr.err,
+ })
+ }
+ for kerr, groups := range kerrs {
+ issues = append(issues, issueShard{
+ req: newReq(groups...),
+ err: kerr,
+ })
+ }
+
+ return issues, true, nil // reshardable to load correct coordinators
+}
+
+func (cl *describeGroupsSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error { // cleanup any stale groups
+ resp := kresp.(*kmsg.DescribeGroupsResponse)
+ var retErr error
+ for i := range resp.Groups {
+ group := &resp.Groups[i]
+ err := kerr.ErrorForCode(group.ErrorCode)
+ cl.maybeDeleteStaleCoordinator(group.Group, coordinatorTypeGroup, err)
+ onRespShardErr(&retErr, err)
+ }
+ return retErr
+}
+
+func (*describeGroupsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrDescribeGroupsResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.DescribeGroupsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.Groups = append(merged.Groups, resp.Groups...)
+ })
+}
+
+// handles sharding ListGroupsRequest
+type listGroupsSharder struct{ *Client }
+
+func (cl *listGroupsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.ListGroupsRequest)
+ return cl.allBrokersShardedReq(ctx, func() kmsg.Request {
+ dup := *req
+ return &dup
+ })
+}
+
+func (*listGroupsSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ resp := kresp.(*kmsg.ListGroupsResponse)
+ return kerr.ErrorForCode(resp.ErrorCode)
+}
+
+func (*listGroupsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrListGroupsResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.ListGroupsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ if merged.ErrorCode == 0 {
+ merged.ErrorCode = resp.ErrorCode
+ }
+ merged.Groups = append(merged.Groups, resp.Groups...)
+ })
+}
+
+// handle sharding DeleteRecordsRequest
+type deleteRecordsSharder struct{ *Client }
+
+func (cl *deleteRecordsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.DeleteRecordsRequest)
+
+ var need []string
+ for _, topic := range req.Topics {
+ need = append(need, topic.Topic)
+ }
+ mapping, err := cl.fetchMappedMetadata(ctx, need, true)
+ if err != nil {
+ return nil, false, err
+ }
+
+ brokerReqs := make(map[int32]map[string][]kmsg.DeleteRecordsRequestTopicPartition)
+ var unknowns unknownErrShards
+
+ for _, topic := range req.Topics {
+ t := topic.Topic
+ tmapping, exists := mapping[t]
+ if err := unknownOrCode(exists, tmapping.t.ErrorCode); err != nil {
+ unknowns.errs(err, t, topic.Partitions)
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ p, exists := tmapping.ps[partition.Partition]
+ if err := unknownOrCode(exists, p.ErrorCode); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+ if err := noLeader(p.Leader); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+
+ brokerReq := brokerReqs[p.Leader]
+ if brokerReq == nil {
+ brokerReq = make(map[string][]kmsg.DeleteRecordsRequestTopicPartition)
+ brokerReqs[p.Leader] = brokerReq
+ }
+ brokerReq[t] = append(brokerReq[t], partition)
+ }
+ }
+
+ mkreq := func() *kmsg.DeleteRecordsRequest {
+ r := kmsg.NewPtrDeleteRecordsRequest()
+ r.TimeoutMillis = req.TimeoutMillis
+ return r
+ }
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ req := mkreq()
+ for topic, parts := range brokerReq {
+ reqTopic := kmsg.NewDeleteRecordsRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ req.Topics = append(req.Topics, reqTopic)
+ }
+ issues = append(issues, issueShard{
+ req: req,
+ broker: brokerID,
+ })
+ }
+
+ return append(issues, unknowns.collect(mkreq, func(r *kmsg.DeleteRecordsRequest, topic string, parts []kmsg.DeleteRecordsRequestTopicPartition) {
+ reqTopic := kmsg.NewDeleteRecordsRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ r.Topics = append(r.Topics, reqTopic)
+ })...), true, nil // this is reshardable
+}
+
+func (cl *deleteRecordsSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ var (
+ resp = kresp.(*kmsg.DeleteRecordsResponse)
+ del []string
+ retErr error
+ unknownTopic bool
+ )
+ for i := range resp.Topics {
+ t := &resp.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ err := kerr.ErrorForCode(p.ErrorCode)
+ if err == kerr.UnknownTopicOrPartition || err == kerr.NotLeaderForPartition {
+ del = append(del, t.Topic)
+ unknownTopic = unknownTopic || err == kerr.UnknownTopicOrPartition
+ }
+ onRespShardErr(&retErr, err)
+ }
+ }
+ if cl.maybeDeleteMappedMetadata(unknownTopic, del...) {
+ return retErr
+ }
+ return nil
+}
+
+func (*deleteRecordsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrDeleteRecordsResponse()
+ topics := make(map[string][]kmsg.DeleteRecordsResponseTopicPartition)
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.DeleteRecordsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+
+ for _, topic := range resp.Topics {
+ topics[topic.Topic] = append(topics[topic.Topic], topic.Partitions...)
+ }
+ })
+ for topic, partitions := range topics {
+ respTopic := kmsg.NewDeleteRecordsResponseTopic()
+ respTopic.Topic = topic
+ respTopic.Partitions = partitions
+ merged.Topics = append(merged.Topics, respTopic)
+ }
+ return merged, firstErr
+}
+
+// handle sharding OffsetForLeaderEpochRequest
+type offsetForLeaderEpochSharder struct{ *Client }
+
+func (cl *offsetForLeaderEpochSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.OffsetForLeaderEpochRequest)
+
+ var need []string
+ for _, topic := range req.Topics {
+ need = append(need, topic.Topic)
+ }
+ mapping, err := cl.fetchMappedMetadata(ctx, need, true)
+ if err != nil {
+ return nil, false, err
+ }
+
+ brokerReqs := make(map[int32]map[string][]kmsg.OffsetForLeaderEpochRequestTopicPartition)
+ var unknowns unknownErrShards
+
+ for _, topic := range req.Topics {
+ t := topic.Topic
+ tmapping, exists := mapping[t]
+ if err := unknownOrCode(exists, tmapping.t.ErrorCode); err != nil {
+ unknowns.errs(err, t, topic.Partitions)
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ p, exists := tmapping.ps[partition.Partition]
+ if err := unknownOrCode(exists, p.ErrorCode); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+ if err := noLeader(p.Leader); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+
+ brokerReq := brokerReqs[p.Leader]
+ if brokerReq == nil {
+ brokerReq = make(map[string][]kmsg.OffsetForLeaderEpochRequestTopicPartition)
+ brokerReqs[p.Leader] = brokerReq
+ }
+ brokerReq[topic.Topic] = append(brokerReq[topic.Topic], partition)
+ }
+ }
+
+ mkreq := func() *kmsg.OffsetForLeaderEpochRequest {
+ r := kmsg.NewPtrOffsetForLeaderEpochRequest()
+ r.ReplicaID = req.ReplicaID
+ return r
+ }
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ req := mkreq()
+ for topic, parts := range brokerReq {
+ reqTopic := kmsg.NewOffsetForLeaderEpochRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ req.Topics = append(req.Topics, reqTopic)
+ }
+ issues = append(issues, issueShard{
+ req: req,
+ broker: brokerID,
+ })
+ }
+
+ return append(issues, unknowns.collect(mkreq, func(r *kmsg.OffsetForLeaderEpochRequest, topic string, parts []kmsg.OffsetForLeaderEpochRequestTopicPartition) {
+ reqTopic := kmsg.NewOffsetForLeaderEpochRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ r.Topics = append(r.Topics, reqTopic)
+ })...), true, nil // this is reshardable
+}
+
+func (cl *offsetForLeaderEpochSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ var (
+ resp = kresp.(*kmsg.OffsetForLeaderEpochResponse)
+ del []string
+ retErr error
+ unknownTopic bool
+ )
+ for i := range resp.Topics {
+ t := &resp.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ err := kerr.ErrorForCode(p.ErrorCode)
+ if err == kerr.UnknownTopicOrPartition || err == kerr.NotLeaderForPartition {
+ del = append(del, t.Topic)
+ unknownTopic = unknownTopic || err == kerr.UnknownTopicOrPartition
+ }
+ onRespShardErr(&retErr, err)
+ }
+ }
+ if cl.maybeDeleteMappedMetadata(unknownTopic, del...) {
+ return retErr
+ }
+ return nil
+}
+
+func (*offsetForLeaderEpochSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrOffsetForLeaderEpochResponse()
+ topics := make(map[string][]kmsg.OffsetForLeaderEpochResponseTopicPartition)
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.OffsetForLeaderEpochResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+
+ for _, topic := range resp.Topics {
+ topics[topic.Topic] = append(topics[topic.Topic], topic.Partitions...)
+ }
+ })
+ for topic, partitions := range topics {
+ respTopic := kmsg.NewOffsetForLeaderEpochResponseTopic()
+ respTopic.Topic = topic
+ respTopic.Partitions = partitions
+ merged.Topics = append(merged.Topics, respTopic)
+ }
+ return merged, firstErr
+}
+
+// handle sharding AddPartitionsToTXn, where v4+ switched to batch requests
+type addPartitionsToTxnSharder struct{ *Client }
+
+func addPartitionsReqToTxn(req *kmsg.AddPartitionsToTxnRequest) {
+ t := kmsg.NewAddPartitionsToTxnRequestTransaction()
+ t.TransactionalID = req.TransactionalID
+ t.ProducerID = req.ProducerID
+ t.ProducerEpoch = req.ProducerEpoch
+ for i := range req.Topics {
+ rt := &req.Topics[i]
+ tt := kmsg.NewAddPartitionsToTxnRequestTransactionTopic()
+ tt.Topic = rt.Topic
+ tt.Partitions = rt.Partitions
+ t.Topics = append(t.Topics, tt)
+ }
+ req.Transactions = append(req.Transactions, t)
+}
+
+func addPartitionsTxnToReq(req *kmsg.AddPartitionsToTxnRequest) {
+ if len(req.Transactions) != 1 {
+ return
+ }
+ t0 := &req.Transactions[0]
+ req.TransactionalID = t0.TransactionalID
+ req.ProducerID = t0.ProducerID
+ req.ProducerEpoch = t0.ProducerEpoch
+ for _, tt := range t0.Topics {
+ rt := kmsg.NewAddPartitionsToTxnRequestTopic()
+ rt.Topic = tt.Topic
+ rt.Partitions = tt.Partitions
+ req.Topics = append(req.Topics, rt)
+ }
+}
+
+func addPartitionsTxnToResp(resp *kmsg.AddPartitionsToTxnResponse) {
+ if len(resp.Transactions) == 0 {
+ return
+ }
+ t0 := &resp.Transactions[0]
+ for _, tt := range t0.Topics {
+ rt := kmsg.NewAddPartitionsToTxnResponseTopic()
+ rt.Topic = tt.Topic
+ for _, tp := range tt.Partitions {
+ rp := kmsg.NewAddPartitionsToTxnResponseTopicPartition()
+ rp.Partition = tp.Partition
+ rp.ErrorCode = tp.ErrorCode
+ rt.Partitions = append(rt.Partitions, rp)
+ }
+ resp.Topics = append(resp.Topics, rt)
+ }
+}
+
+func (cl *addPartitionsToTxnSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.AddPartitionsToTxnRequest)
+
+ if len(req.Transactions) == 0 {
+ addPartitionsReqToTxn(req)
+ }
+ txnIDs := make([]string, 0, len(req.Transactions))
+ for i := range req.Transactions {
+ txnIDs = append(txnIDs, req.Transactions[i].TransactionalID)
+ }
+ coordinators := cl.loadCoordinators(ctx, coordinatorTypeTxn, txnIDs...)
+
+ type unkerr struct {
+ err error
+ txn kmsg.AddPartitionsToTxnRequestTransaction
+ }
+ var (
+ brokerReqs = make(map[int32]*kmsg.AddPartitionsToTxnRequest)
+ kerrs = make(map[*kerr.Error][]kmsg.AddPartitionsToTxnRequestTransaction)
+ unkerrs []unkerr
+ )
+
+ newReq := func(txns ...kmsg.AddPartitionsToTxnRequestTransaction) *kmsg.AddPartitionsToTxnRequest {
+ req := kmsg.NewPtrAddPartitionsToTxnRequest()
+ req.Transactions = txns
+ addPartitionsTxnToReq(req)
+ return req
+ }
+
+ for _, txn := range req.Transactions {
+ berr := coordinators[txn.TransactionalID]
+ var ke *kerr.Error
+ switch {
+ case berr.err == nil:
+ brokerReq := brokerReqs[berr.b.meta.NodeID]
+ if brokerReq == nil {
+ brokerReq = newReq(txn)
+ brokerReqs[berr.b.meta.NodeID] = brokerReq
+ } else {
+ brokerReq.Transactions = append(brokerReq.Transactions, txn)
+ }
+ case errors.As(berr.err, &ke):
+ kerrs[ke] = append(kerrs[ke], txn)
+ default:
+ unkerrs = append(unkerrs, unkerr{berr.err, txn})
+ }
+ }
+
+ var issues []issueShard
+ for id, req := range brokerReqs {
+ if len(req.Transactions) <= 1 || len(req.Transactions) == 1 && !req.Transactions[0].VerifyOnly {
+ issues = append(issues, issueShard{
+ req: &pinReq{Request: req, pinMax: true, max: 3},
+ broker: id,
+ })
+ } else {
+ issues = append(issues, issueShard{
+ req: req,
+ broker: id,
+ })
+ }
+ }
+ for _, unkerr := range unkerrs {
+ issues = append(issues, issueShard{
+ req: newReq(unkerr.txn),
+ err: unkerr.err,
+ })
+ }
+ for kerr, txns := range kerrs {
+ issues = append(issues, issueShard{
+ req: newReq(txns...),
+ err: kerr,
+ })
+ }
+
+ return issues, true, nil // reshardable to load correct coordinators
+}
+
+func (cl *addPartitionsToTxnSharder) onResp(kreq kmsg.Request, kresp kmsg.Response) error {
+ req := kreq.(*kmsg.AddPartitionsToTxnRequest)
+ resp := kresp.(*kmsg.AddPartitionsToTxnResponse)
+
+ // We default to the top level error, which is used in v4+. For v3
+ // (case 0), we use the per-partition error, which is the same for
+ // every partition on not_coordinator errors.
+ code := resp.ErrorCode
+ if code == 0 && len(resp.Transactions) == 0 {
+ // Convert v3 and prior to v4+
+ resptxn := kmsg.NewAddPartitionsToTxnResponseTransaction()
+ resptxn.TransactionalID = req.TransactionalID
+ for _, rt := range resp.Topics {
+ respt := kmsg.NewAddPartitionsToTxnResponseTransactionTopic()
+ respt.Topic = rt.Topic
+ for _, rp := range rt.Partitions {
+ respp := kmsg.NewAddPartitionsToTxnResponseTransactionTopicPartition()
+ respp.Partition = rp.Partition
+ respp.ErrorCode = rp.ErrorCode
+ code = rp.ErrorCode // v3 and prior has per-partition errors, not top level
+ respt.Partitions = append(respt.Partitions, respp)
+ }
+ resptxn.Topics = append(resptxn.Topics, respt)
+ }
+ resp.Transactions = append(resp.Transactions, resptxn)
+ } else {
+ // Convert v4 to v3 and prior: either we have a top level error
+ // code or we have at least one transaction.
+ //
+ // If the code is non-zero, we convert it to per-partition error
+ // codes; v3 does not have a top level err.
+ addPartitionsTxnToResp(resp)
+ if code != 0 {
+ for _, reqt := range req.Topics {
+ respt := kmsg.NewAddPartitionsToTxnResponseTopic()
+ respt.Topic = reqt.Topic
+ for _, reqp := range reqt.Partitions {
+ respp := kmsg.NewAddPartitionsToTxnResponseTopicPartition()
+ respp.Partition = reqp
+ respp.ErrorCode = resp.ErrorCode
+ respt.Partitions = append(respt.Partitions, respp)
+ }
+ resp.Topics = append(resp.Topics, respt)
+ }
+ }
+ }
+ if err := kerr.ErrorForCode(code); cl.maybeDeleteStaleCoordinator(req.TransactionalID, coordinatorTypeTxn, err) {
+ return err
+ }
+ return nil
+}
+
+func (*addPartitionsToTxnSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrAddPartitionsToTxnResponse()
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.AddPartitionsToTxnResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.ErrorCode = resp.ErrorCode
+ merged.Transactions = append(merged.Transactions, resp.Transactions...)
+ })
+ addPartitionsTxnToResp(merged)
+ return merged, firstErr
+}
+
+// handle sharding WriteTxnMarkersRequest
+type writeTxnMarkersSharder struct{ *Client }
+
+func (cl *writeTxnMarkersSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.WriteTxnMarkersRequest)
+
+ var need []string
+ for _, marker := range req.Markers {
+ for _, topic := range marker.Topics {
+ need = append(need, topic.Topic)
+ }
+ }
+ mapping, err := cl.fetchMappedMetadata(ctx, need, true)
+ if err != nil {
+ return nil, false, err
+ }
+
+ type pidEpochCommit struct {
+ pid int64
+ epoch int16
+ commit bool
+ }
+
+ brokerReqs := make(map[int32]map[pidEpochCommit]map[string][]int32)
+ unknown := make(map[error]map[pidEpochCommit]map[string][]int32) // err => pec => topic => partitions
+
+ addreq := func(b int32, pec pidEpochCommit, t string, p int32) {
+ pecs := brokerReqs[b]
+ if pecs == nil {
+ pecs = make(map[pidEpochCommit]map[string][]int32)
+ brokerReqs[b] = pecs
+ }
+ ts := pecs[pec]
+ if ts == nil {
+ ts = make(map[string][]int32)
+ pecs[pec] = ts
+ }
+ ts[t] = append(ts[t], p)
+ }
+ addunk := func(err error, pec pidEpochCommit, t string, p int32) {
+ pecs := unknown[err]
+ if pecs == nil {
+ pecs = make(map[pidEpochCommit]map[string][]int32)
+ unknown[err] = pecs
+ }
+ ts := pecs[pec]
+ if ts == nil {
+ ts = make(map[string][]int32)
+ pecs[pec] = ts
+ }
+ ts[t] = append(ts[t], p)
+ }
+
+ for _, marker := range req.Markers {
+ pec := pidEpochCommit{
+ marker.ProducerID,
+ marker.ProducerEpoch,
+ marker.Committed,
+ }
+ for _, topic := range marker.Topics {
+ t := topic.Topic
+ tmapping, exists := mapping[t]
+ if err := unknownOrCode(exists, tmapping.t.ErrorCode); err != nil {
+ for _, partition := range topic.Partitions {
+ addunk(err, pec, t, partition)
+ }
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ p, exists := tmapping.ps[partition]
+ if err := unknownOrCode(exists, p.ErrorCode); err != nil {
+ addunk(err, pec, t, partition)
+ continue
+ }
+ if err := noLeader(p.Leader); err != nil {
+ addunk(err, pec, t, partition)
+ continue
+ }
+ addreq(p.Leader, pec, t, partition)
+ }
+ }
+ }
+
+ mkreq := kmsg.NewPtrWriteTxnMarkersRequest
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ req := mkreq()
+ for pec, topics := range brokerReq {
+ rm := kmsg.NewWriteTxnMarkersRequestMarker()
+ rm.ProducerID = pec.pid
+ rm.ProducerEpoch = pec.epoch
+ rm.Committed = pec.commit
+ for topic, parts := range topics {
+ rt := kmsg.NewWriteTxnMarkersRequestMarkerTopic()
+ rt.Topic = topic
+ rt.Partitions = parts
+ rm.Topics = append(rm.Topics, rt)
+ }
+ req.Markers = append(req.Markers, rm)
+ }
+ issues = append(issues, issueShard{
+ req: req,
+ broker: brokerID,
+ })
+ }
+
+ for err, errReq := range unknown {
+ req := mkreq()
+ for pec, topics := range errReq {
+ rm := kmsg.NewWriteTxnMarkersRequestMarker()
+ rm.ProducerID = pec.pid
+ rm.ProducerEpoch = pec.epoch
+ rm.Committed = pec.commit
+ for topic, parts := range topics {
+ rt := kmsg.NewWriteTxnMarkersRequestMarkerTopic()
+ rt.Topic = topic
+ rt.Partitions = parts
+ rm.Topics = append(rm.Topics, rt)
+ }
+ req.Markers = append(req.Markers, rm)
+ }
+ issues = append(issues, issueShard{
+ req: req,
+ err: err,
+ })
+ }
+ return issues, true, nil // this is reshardable
+}
+
+func (cl *writeTxnMarkersSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ var (
+ resp = kresp.(*kmsg.WriteTxnMarkersResponse)
+ del []string
+ retErr error
+ unknownTopic bool
+ )
+ for i := range resp.Markers {
+ m := &resp.Markers[i]
+ for j := range m.Topics {
+ t := &m.Topics[j]
+ for k := range t.Partitions {
+ p := &t.Partitions[k]
+ err := kerr.ErrorForCode(p.ErrorCode)
+ if err == kerr.UnknownTopicOrPartition || err == kerr.NotLeaderForPartition {
+ del = append(del, t.Topic)
+ unknownTopic = unknownTopic || err == kerr.UnknownTopicOrPartition
+ }
+ onRespShardErr(&retErr, err)
+ }
+ }
+ }
+ if cl.maybeDeleteMappedMetadata(unknownTopic, del...) {
+ return retErr
+ }
+ return nil
+}
+
+func (*writeTxnMarkersSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrWriteTxnMarkersResponse()
+ markers := make(map[int64]map[string][]kmsg.WriteTxnMarkersResponseMarkerTopicPartition)
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.WriteTxnMarkersResponse)
+ merged.Version = resp.Version
+ for _, marker := range resp.Markers {
+ topics := markers[marker.ProducerID]
+ if topics == nil {
+ topics = make(map[string][]kmsg.WriteTxnMarkersResponseMarkerTopicPartition)
+ markers[marker.ProducerID] = topics
+ }
+ for _, topic := range marker.Topics {
+ topics[topic.Topic] = append(topics[topic.Topic], topic.Partitions...)
+ }
+ }
+ })
+ for pid, topics := range markers {
+ respMarker := kmsg.NewWriteTxnMarkersResponseMarker()
+ respMarker.ProducerID = pid
+ for topic, partitions := range topics {
+ respTopic := kmsg.NewWriteTxnMarkersResponseMarkerTopic()
+ respTopic.Topic = topic
+ respTopic.Partitions = append(respTopic.Partitions, partitions...)
+ respMarker.Topics = append(respMarker.Topics, respTopic)
+ }
+ merged.Markers = append(merged.Markers, respMarker)
+ }
+ return merged, firstErr
+}
+
+// handle sharding DescribeConfigsRequest
+type describeConfigsSharder struct{ *Client }
+
+func (*describeConfigsSharder) shard(_ context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.DescribeConfigsRequest)
+
+ brokerReqs := make(map[int32][]kmsg.DescribeConfigsRequestResource)
+ var any []kmsg.DescribeConfigsRequestResource
+
+ for i := range req.Resources {
+ resource := req.Resources[i]
+ switch resource.ResourceType {
+ case kmsg.ConfigResourceTypeBroker:
+ case kmsg.ConfigResourceTypeBrokerLogger:
+ default:
+ any = append(any, resource)
+ continue
+ }
+ id, err := strconv.ParseInt(resource.ResourceName, 10, 32)
+ if err != nil || id < 0 {
+ any = append(any, resource)
+ continue
+ }
+ brokerReqs[int32(id)] = append(brokerReqs[int32(id)], resource)
+ }
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ newReq := kmsg.NewPtrDescribeConfigsRequest()
+ newReq.Resources = brokerReq
+ newReq.IncludeSynonyms = req.IncludeSynonyms
+ newReq.IncludeDocumentation = req.IncludeDocumentation
+
+ issues = append(issues, issueShard{
+ req: newReq,
+ broker: brokerID,
+ })
+ }
+
+ if len(any) > 0 {
+ newReq := kmsg.NewPtrDescribeConfigsRequest()
+ newReq.Resources = any
+ newReq.IncludeSynonyms = req.IncludeSynonyms
+ newReq.IncludeDocumentation = req.IncludeDocumentation
+ issues = append(issues, issueShard{
+ req: newReq,
+ any: true,
+ })
+ }
+
+ return issues, false, nil // this is not reshardable, but the any block can go anywhere
+}
+
+func (*describeConfigsSharder) onResp(kmsg.Request, kmsg.Response) error { return nil } // configs: topics not mapped, nothing retryable
+
+func (*describeConfigsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrDescribeConfigsResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.DescribeConfigsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.Resources = append(merged.Resources, resp.Resources...)
+ })
+}
+
+// handle sharding AlterConfigsRequest
+type alterConfigsSharder struct{ *Client }
+
+func (*alterConfigsSharder) shard(_ context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.AlterConfigsRequest)
+
+ brokerReqs := make(map[int32][]kmsg.AlterConfigsRequestResource)
+ var any []kmsg.AlterConfigsRequestResource
+
+ for i := range req.Resources {
+ resource := req.Resources[i]
+ switch resource.ResourceType {
+ case kmsg.ConfigResourceTypeBroker:
+ case kmsg.ConfigResourceTypeBrokerLogger:
+ default:
+ any = append(any, resource)
+ continue
+ }
+ id, err := strconv.ParseInt(resource.ResourceName, 10, 32)
+ if err != nil || id < 0 {
+ any = append(any, resource)
+ continue
+ }
+ brokerReqs[int32(id)] = append(brokerReqs[int32(id)], resource)
+ }
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ newReq := kmsg.NewPtrAlterConfigsRequest()
+ newReq.Resources = brokerReq
+ newReq.ValidateOnly = req.ValidateOnly
+
+ issues = append(issues, issueShard{
+ req: newReq,
+ broker: brokerID,
+ })
+ }
+
+ if len(any) > 0 {
+ newReq := kmsg.NewPtrAlterConfigsRequest()
+ newReq.Resources = any
+ newReq.ValidateOnly = req.ValidateOnly
+ issues = append(issues, issueShard{
+ req: newReq,
+ any: true,
+ })
+ }
+
+ return issues, false, nil // this is not reshardable, but the any block can go anywhere
+}
+
+func (*alterConfigsSharder) onResp(kmsg.Request, kmsg.Response) error { return nil } // configs: topics not mapped, nothing retryable
+
+func (*alterConfigsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrAlterConfigsResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.AlterConfigsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.Resources = append(merged.Resources, resp.Resources...)
+ })
+}
+
+// handles sharding AlterReplicaLogDirsRequest
+type alterReplicaLogDirsSharder struct{ *Client }
+
+func (cl *alterReplicaLogDirsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.AlterReplicaLogDirsRequest)
+
+ needMap := make(map[string]struct{})
+ for _, dir := range req.Dirs {
+ for _, topic := range dir.Topics {
+ needMap[topic.Topic] = struct{}{}
+ }
+ }
+ var need []string
+ for topic := range needMap {
+ need = append(need, topic)
+ }
+ mapping, err := cl.fetchMappedMetadata(ctx, need, false) // bypass cache, tricky to manage response
+ if err != nil {
+ return nil, false, err
+ }
+
+ brokerReqs := make(map[int32]map[string]map[string][]int32) // broker => dir => topic => partitions
+ unknowns := make(map[error]map[string]map[string][]int32) // err => dir => topic => partitions
+
+ addBroker := func(broker int32, dir, topic string, partition int32) {
+ brokerDirs := brokerReqs[broker]
+ if brokerDirs == nil {
+ brokerDirs = make(map[string]map[string][]int32)
+ brokerReqs[broker] = brokerDirs
+ }
+ dirTopics := brokerDirs[dir]
+ if dirTopics == nil {
+ dirTopics = make(map[string][]int32)
+ brokerDirs[dir] = dirTopics
+ }
+ dirTopics[topic] = append(dirTopics[topic], partition)
+ }
+
+ addUnknown := func(err error, dir, topic string, partition int32) {
+ dirs := unknowns[err]
+ if dirs == nil {
+ dirs = make(map[string]map[string][]int32)
+ unknowns[err] = dirs
+ }
+ dirTopics := dirs[dir]
+ if dirTopics == nil {
+ dirTopics = make(map[string][]int32)
+ dirs[dir] = dirTopics
+ }
+ dirTopics[topic] = append(dirTopics[topic], partition)
+ }
+
+ for _, dir := range req.Dirs {
+ for _, topic := range dir.Topics {
+ t := topic.Topic
+ tmapping, exists := mapping[t]
+ if err := unknownOrCode(exists, tmapping.t.ErrorCode); err != nil {
+ for _, partition := range topic.Partitions {
+ addUnknown(err, dir.Dir, t, partition)
+ }
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ p, exists := tmapping.ps[partition]
+ if err := unknownOrCode(exists, p.ErrorCode); err != nil {
+ addUnknown(err, dir.Dir, t, partition)
+ continue
+ }
+
+ for _, replica := range p.Replicas {
+ addBroker(replica, dir.Dir, t, partition)
+ }
+ }
+ }
+ }
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ req := kmsg.NewPtrAlterReplicaLogDirsRequest()
+ for dir, topics := range brokerReq {
+ rd := kmsg.NewAlterReplicaLogDirsRequestDir()
+ rd.Dir = dir
+ for topic, partitions := range topics {
+ rdTopic := kmsg.NewAlterReplicaLogDirsRequestDirTopic()
+ rdTopic.Topic = topic
+ rdTopic.Partitions = partitions
+ rd.Topics = append(rd.Topics, rdTopic)
+ }
+ req.Dirs = append(req.Dirs, rd)
+ }
+
+ issues = append(issues, issueShard{
+ req: req,
+ broker: brokerID,
+ })
+ }
+
+ for err, dirs := range unknowns {
+ req := kmsg.NewPtrAlterReplicaLogDirsRequest()
+ for dir, topics := range dirs {
+ rd := kmsg.NewAlterReplicaLogDirsRequestDir()
+ rd.Dir = dir
+ for topic, partitions := range topics {
+ rdTopic := kmsg.NewAlterReplicaLogDirsRequestDirTopic()
+ rdTopic.Topic = topic
+ rdTopic.Partitions = partitions
+ rd.Topics = append(rd.Topics, rdTopic)
+ }
+ req.Dirs = append(req.Dirs, rd)
+ }
+
+ issues = append(issues, issueShard{
+ req: req,
+ err: err,
+ })
+ }
+
+ return issues, true, nil // this is reshardable
+}
+
+func (*alterReplicaLogDirsSharder) onResp(kmsg.Request, kmsg.Response) error { return nil } // topic / partitions: not retried
+
+// merge does not make sense for this function, but we provide a one anyway.
+func (*alterReplicaLogDirsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrAlterReplicaLogDirsResponse()
+ topics := make(map[string][]kmsg.AlterReplicaLogDirsResponseTopicPartition)
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.AlterReplicaLogDirsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+
+ for _, topic := range resp.Topics {
+ topics[topic.Topic] = append(topics[topic.Topic], topic.Partitions...)
+ }
+ })
+ for topic, partitions := range topics {
+ respTopic := kmsg.NewAlterReplicaLogDirsResponseTopic()
+ respTopic.Topic = topic
+ respTopic.Partitions = partitions
+ merged.Topics = append(merged.Topics, respTopic)
+ }
+ return merged, firstErr
+}
+
+// handles sharding DescribeLogDirsRequest
+type describeLogDirsSharder struct{ *Client }
+
+func (cl *describeLogDirsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.DescribeLogDirsRequest)
+
+ // If req.Topics is nil, the request is to describe all logdirs. Thus,
+ // we will issue the request to all brokers (similar to ListGroups).
+ if req.Topics == nil {
+ return cl.allBrokersShardedReq(ctx, func() kmsg.Request {
+ dup := *req
+ return &dup
+ })
+ }
+
+ var need []string
+ for _, topic := range req.Topics {
+ need = append(need, topic.Topic)
+ }
+ mapping, err := cl.fetchMappedMetadata(ctx, need, false) // bypass cache, tricky to manage response
+ if err != nil {
+ return nil, false, err
+ }
+
+ brokerReqs := make(map[int32]map[string][]int32)
+ var unknowns unknownErrShards
+
+ for _, topic := range req.Topics {
+ t := topic.Topic
+ tmapping, exists := mapping[t]
+ if err := unknownOrCode(exists, tmapping.t.ErrorCode); err != nil {
+ unknowns.errs(err, t, topic.Partitions)
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ p, exists := tmapping.ps[partition]
+ if err := unknownOrCode(exists, p.ErrorCode); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+
+ for _, replica := range p.Replicas {
+ brokerReq := brokerReqs[replica]
+ if brokerReq == nil {
+ brokerReq = make(map[string][]int32)
+ brokerReqs[replica] = brokerReq
+ }
+ brokerReq[topic.Topic] = append(brokerReq[topic.Topic], partition)
+ }
+ }
+ }
+
+ mkreq := kmsg.NewPtrDescribeLogDirsRequest
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ req := mkreq()
+ for topic, parts := range brokerReq {
+ reqTopic := kmsg.NewDescribeLogDirsRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ req.Topics = append(req.Topics, reqTopic)
+ }
+ issues = append(issues, issueShard{
+ req: req,
+ broker: brokerID,
+ })
+ }
+
+ return append(issues, unknowns.collect(mkreq, func(r *kmsg.DescribeLogDirsRequest, topic string, parts []int32) {
+ reqTopic := kmsg.NewDescribeLogDirsRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ r.Topics = append(r.Topics, reqTopic)
+ })...), true, nil // this is reshardable
+}
+
+func (*describeLogDirsSharder) onResp(kmsg.Request, kmsg.Response) error { return nil } // topic / configs: not retried
+
+// merge does not make sense for this function, but we provide one anyway.
+// We lose the error code for directories.
+func (*describeLogDirsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrDescribeLogDirsResponse()
+ dirs := make(map[string]map[string][]kmsg.DescribeLogDirsResponseDirTopicPartition)
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.DescribeLogDirsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+
+ for _, dir := range resp.Dirs {
+ mergeDir := dirs[dir.Dir]
+ if mergeDir == nil {
+ mergeDir = make(map[string][]kmsg.DescribeLogDirsResponseDirTopicPartition)
+ dirs[dir.Dir] = mergeDir
+ }
+ for _, topic := range dir.Topics {
+ mergeDir[topic.Topic] = append(mergeDir[topic.Topic], topic.Partitions...)
+ }
+ }
+ })
+ for dir, topics := range dirs {
+ md := kmsg.NewDescribeLogDirsResponseDir()
+ md.Dir = dir
+ for topic, partitions := range topics {
+ mdTopic := kmsg.NewDescribeLogDirsResponseDirTopic()
+ mdTopic.Topic = topic
+ mdTopic.Partitions = partitions
+ md.Topics = append(md.Topics, mdTopic)
+ }
+ merged.Dirs = append(merged.Dirs, md)
+ }
+ return merged, firstErr
+}
+
+// handles sharding DeleteGroupsRequest
+type deleteGroupsSharder struct{ *Client }
+
+func (cl *deleteGroupsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.DeleteGroupsRequest)
+
+ coordinators := cl.loadCoordinators(ctx, coordinatorTypeGroup, req.Groups...)
+ type unkerr struct {
+ err error
+ group string
+ }
+ var (
+ brokerReqs = make(map[int32]*kmsg.DeleteGroupsRequest)
+ kerrs = make(map[*kerr.Error][]string)
+ unkerrs []unkerr
+ )
+
+ newReq := func(groups ...string) *kmsg.DeleteGroupsRequest {
+ newReq := kmsg.NewPtrDeleteGroupsRequest()
+ newReq.Groups = groups
+ return newReq
+ }
+
+ for _, group := range req.Groups {
+ berr := coordinators[group]
+ var ke *kerr.Error
+ switch {
+ case berr.err == nil:
+ brokerReq := brokerReqs[berr.b.meta.NodeID]
+ if brokerReq == nil {
+ brokerReq = newReq()
+ brokerReqs[berr.b.meta.NodeID] = brokerReq
+ }
+ brokerReq.Groups = append(brokerReq.Groups, group)
+ case errors.As(berr.err, &ke):
+ kerrs[ke] = append(kerrs[ke], group)
+ default:
+ unkerrs = append(unkerrs, unkerr{berr.err, group})
+ }
+ }
+
+ var issues []issueShard
+ for id, req := range brokerReqs {
+ issues = append(issues, issueShard{
+ req: req,
+ broker: id,
+ })
+ }
+ for _, unkerr := range unkerrs {
+ issues = append(issues, issueShard{
+ req: newReq(unkerr.group),
+ err: unkerr.err,
+ })
+ }
+ for kerr, groups := range kerrs {
+ issues = append(issues, issueShard{
+ req: newReq(groups...),
+ err: kerr,
+ })
+ }
+
+ return issues, true, nil // reshardable to load correct coordinators
+}
+
+func (cl *deleteGroupsSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ resp := kresp.(*kmsg.DeleteGroupsResponse)
+ var retErr error
+ for i := range resp.Groups {
+ group := &resp.Groups[i]
+ err := kerr.ErrorForCode(group.ErrorCode)
+ cl.maybeDeleteStaleCoordinator(group.Group, coordinatorTypeGroup, err)
+ onRespShardErr(&retErr, err)
+ }
+ return retErr
+}
+
+func (*deleteGroupsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrDeleteGroupsResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.DeleteGroupsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.Groups = append(merged.Groups, resp.Groups...)
+ })
+}
+
+// handle sharding IncrementalAlterConfigsRequest
+type incrementalAlterConfigsSharder struct{ *Client }
+
+func (*incrementalAlterConfigsSharder) shard(_ context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.IncrementalAlterConfigsRequest)
+
+ brokerReqs := make(map[int32][]kmsg.IncrementalAlterConfigsRequestResource)
+ var any []kmsg.IncrementalAlterConfigsRequestResource
+
+ for i := range req.Resources {
+ resource := req.Resources[i]
+ switch resource.ResourceType {
+ case kmsg.ConfigResourceTypeBroker:
+ case kmsg.ConfigResourceTypeBrokerLogger:
+ default:
+ any = append(any, resource)
+ continue
+ }
+ id, err := strconv.ParseInt(resource.ResourceName, 10, 32)
+ if err != nil || id < 0 {
+ any = append(any, resource)
+ continue
+ }
+ brokerReqs[int32(id)] = append(brokerReqs[int32(id)], resource)
+ }
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ newReq := kmsg.NewPtrIncrementalAlterConfigsRequest()
+ newReq.Resources = brokerReq
+ newReq.ValidateOnly = req.ValidateOnly
+
+ issues = append(issues, issueShard{
+ req: newReq,
+ broker: brokerID,
+ })
+ }
+
+ if len(any) > 0 {
+ newReq := kmsg.NewPtrIncrementalAlterConfigsRequest()
+ newReq.Resources = any
+ newReq.ValidateOnly = req.ValidateOnly
+ issues = append(issues, issueShard{
+ req: newReq,
+ any: true,
+ })
+ }
+
+ return issues, false, nil // this is not reshardable, but the any block can go anywhere
+}
+
+func (*incrementalAlterConfigsSharder) onResp(kmsg.Request, kmsg.Response) error { return nil } // configs: topics not mapped, nothing retryable
+
+func (*incrementalAlterConfigsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrIncrementalAlterConfigsResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.IncrementalAlterConfigsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.Resources = append(merged.Resources, resp.Resources...)
+ })
+}
+
+// handle sharding DescribeProducersRequest
+type describeProducersSharder struct{ *Client }
+
+func (cl *describeProducersSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.DescribeProducersRequest)
+
+ var need []string
+ for _, topic := range req.Topics {
+ need = append(need, topic.Topic)
+ }
+ mapping, err := cl.fetchMappedMetadata(ctx, need, true)
+ if err != nil {
+ return nil, false, err
+ }
+
+ brokerReqs := make(map[int32]map[string][]int32) // broker => topic => partitions
+ var unknowns unknownErrShards
+
+ for _, topic := range req.Topics {
+ t := topic.Topic
+ tmapping, exists := mapping[t]
+ if err := unknownOrCode(exists, tmapping.t.ErrorCode); err != nil {
+ unknowns.errs(err, t, topic.Partitions)
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ p, exists := tmapping.ps[partition]
+ if err := unknownOrCode(exists, p.ErrorCode); err != nil {
+ unknowns.err(err, t, partition)
+ continue
+ }
+
+ brokerReq := brokerReqs[p.Leader]
+ if brokerReq == nil {
+ brokerReq = make(map[string][]int32)
+ brokerReqs[p.Leader] = brokerReq
+ }
+ brokerReq[topic.Topic] = append(brokerReq[topic.Topic], partition)
+ }
+ }
+
+ mkreq := kmsg.NewPtrDescribeProducersRequest
+
+ var issues []issueShard
+ for brokerID, brokerReq := range brokerReqs {
+ req := mkreq()
+ for topic, parts := range brokerReq {
+ reqTopic := kmsg.NewDescribeProducersRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ req.Topics = append(req.Topics, reqTopic)
+ }
+ issues = append(issues, issueShard{
+ req: req,
+ broker: brokerID,
+ })
+ }
+
+ return append(issues, unknowns.collect(mkreq, func(r *kmsg.DescribeProducersRequest, topic string, parts []int32) {
+ reqTopic := kmsg.NewDescribeProducersRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = parts
+ r.Topics = append(r.Topics, reqTopic)
+ })...), true, nil // this is reshardable
+}
+
+func (cl *describeProducersSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ var (
+ resp = kresp.(*kmsg.DescribeProducersResponse)
+ del []string
+ retErr error
+ unknownTopic bool
+ )
+ for i := range resp.Topics {
+ t := &resp.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ err := kerr.ErrorForCode(p.ErrorCode)
+ if err == kerr.UnknownTopicOrPartition || err == kerr.NotLeaderForPartition {
+ del = append(del, t.Topic)
+ unknownTopic = unknownTopic || err == kerr.UnknownTopicOrPartition
+ }
+ onRespShardErr(&retErr, err)
+ }
+ }
+ if cl.maybeDeleteMappedMetadata(unknownTopic, del...) {
+ return retErr
+ }
+ return nil
+}
+
+func (*describeProducersSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrDescribeProducersResponse()
+ topics := make(map[string][]kmsg.DescribeProducersResponseTopicPartition)
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.DescribeProducersResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+
+ for _, topic := range resp.Topics {
+ topics[topic.Topic] = append(topics[topic.Topic], topic.Partitions...)
+ }
+ })
+ for topic, partitions := range topics {
+ respTopic := kmsg.NewDescribeProducersResponseTopic()
+ respTopic.Topic = topic
+ respTopic.Partitions = partitions
+ merged.Topics = append(merged.Topics, respTopic)
+ }
+ return merged, firstErr
+}
+
+// handles sharding DescribeTransactionsRequest
+type describeTransactionsSharder struct{ *Client }
+
+func (cl *describeTransactionsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.DescribeTransactionsRequest)
+
+ coordinators := cl.loadCoordinators(ctx, coordinatorTypeTxn, req.TransactionalIDs...)
+ type unkerr struct {
+ err error
+ txnID string
+ }
+ var (
+ brokerReqs = make(map[int32]*kmsg.DescribeTransactionsRequest)
+ kerrs = make(map[*kerr.Error][]string)
+ unkerrs []unkerr
+ )
+
+ newReq := func(txnIDs ...string) *kmsg.DescribeTransactionsRequest {
+ r := kmsg.NewPtrDescribeTransactionsRequest()
+ r.TransactionalIDs = txnIDs
+ return r
+ }
+
+ for _, txnID := range req.TransactionalIDs {
+ berr := coordinators[txnID]
+ var ke *kerr.Error
+ switch {
+ case berr.err == nil:
+ brokerReq := brokerReqs[berr.b.meta.NodeID]
+ if brokerReq == nil {
+ brokerReq = newReq()
+ brokerReqs[berr.b.meta.NodeID] = brokerReq
+ }
+ brokerReq.TransactionalIDs = append(brokerReq.TransactionalIDs, txnID)
+ case errors.As(berr.err, &ke):
+ kerrs[ke] = append(kerrs[ke], txnID)
+ default:
+ unkerrs = append(unkerrs, unkerr{berr.err, txnID})
+ }
+ }
+
+ var issues []issueShard
+ for id, req := range brokerReqs {
+ issues = append(issues, issueShard{
+ req: req,
+ broker: id,
+ })
+ }
+ for _, unkerr := range unkerrs {
+ issues = append(issues, issueShard{
+ req: newReq(unkerr.txnID),
+ err: unkerr.err,
+ })
+ }
+ for kerr, txnIDs := range kerrs {
+ issues = append(issues, issueShard{
+ req: newReq(txnIDs...),
+ err: kerr,
+ })
+ }
+
+ return issues, true, nil // reshardable to load correct coordinators
+}
+
+func (cl *describeTransactionsSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error { // cleanup any stale coordinators
+ resp := kresp.(*kmsg.DescribeTransactionsResponse)
+ var retErr error
+ for i := range resp.TransactionStates {
+ txnState := &resp.TransactionStates[i]
+ err := kerr.ErrorForCode(txnState.ErrorCode)
+ cl.maybeDeleteStaleCoordinator(txnState.TransactionalID, coordinatorTypeTxn, err)
+ onRespShardErr(&retErr, err)
+ }
+ return retErr
+}
+
+func (*describeTransactionsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrDescribeTransactionsResponse()
+ return merged, firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.DescribeTransactionsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ merged.TransactionStates = append(merged.TransactionStates, resp.TransactionStates...)
+ })
+}
+
+// handles sharding ListTransactionsRequest
+type listTransactionsSharder struct{ *Client }
+
+func (cl *listTransactionsSharder) shard(ctx context.Context, kreq kmsg.Request, _ error) ([]issueShard, bool, error) {
+ req := kreq.(*kmsg.ListTransactionsRequest)
+ return cl.allBrokersShardedReq(ctx, func() kmsg.Request {
+ dup := *req
+ return &dup
+ })
+}
+
+func (*listTransactionsSharder) onResp(_ kmsg.Request, kresp kmsg.Response) error {
+ resp := kresp.(*kmsg.ListTransactionsResponse)
+ return kerr.ErrorForCode(resp.ErrorCode)
+}
+
+func (*listTransactionsSharder) merge(sresps []ResponseShard) (kmsg.Response, error) {
+ merged := kmsg.NewPtrListTransactionsResponse()
+
+ unknownStates := make(map[string]struct{})
+
+ firstErr := firstErrMerger(sresps, func(kresp kmsg.Response) {
+ resp := kresp.(*kmsg.ListTransactionsResponse)
+ merged.Version = resp.Version
+ merged.ThrottleMillis = resp.ThrottleMillis
+ if merged.ErrorCode == 0 {
+ merged.ErrorCode = resp.ErrorCode
+ }
+ for _, state := range resp.UnknownStateFilters {
+ unknownStates[state] = struct{}{}
+ }
+ merged.TransactionStates = append(merged.TransactionStates, resp.TransactionStates...)
+ })
+ for unknownState := range unknownStates {
+ merged.UnknownStateFilters = append(merged.UnknownStateFilters, unknownState)
+ }
+
+ return merged, firstErr
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/compression.go b/vendor/github.com/twmb/franz-go/pkg/kgo/compression.go
new file mode 100644
index 0000000000000..fe8ad645bbda9
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/compression.go
@@ -0,0 +1,346 @@
+package kgo
+
+import (
+ "bytes"
+ "compress/gzip"
+ "encoding/binary"
+ "errors"
+ "io"
+ "runtime"
+ "sync"
+
+ "github.com/klauspost/compress/s2"
+ "github.com/klauspost/compress/zstd"
+ "github.com/pierrec/lz4/v4"
+)
+
+var byteBuffers = sync.Pool{New: func() any { return bytes.NewBuffer(make([]byte, 8<<10)) }}
+
+type codecType int8
+
+const (
+ codecNone codecType = iota
+ codecGzip
+ codecSnappy
+ codecLZ4
+ codecZstd
+)
+
+// CompressionCodec configures how records are compressed before being sent.
+//
+// Records are compressed within individual topics and partitions, inside of a
+// RecordBatch. All records in a RecordBatch are compressed into one record
+// for that batch.
+type CompressionCodec struct {
+ codec codecType
+ level int
+}
+
+// NoCompression is a compression option that avoids compression. This can
+// always be used as a fallback compression.
+func NoCompression() CompressionCodec { return CompressionCodec{codecNone, 0} }
+
+// GzipCompression enables gzip compression with the default compression level.
+func GzipCompression() CompressionCodec { return CompressionCodec{codecGzip, gzip.DefaultCompression} }
+
+// SnappyCompression enables snappy compression.
+func SnappyCompression() CompressionCodec { return CompressionCodec{codecSnappy, 0} }
+
+// Lz4Compression enables lz4 compression with the fastest compression level.
+func Lz4Compression() CompressionCodec { return CompressionCodec{codecLZ4, 0} }
+
+// ZstdCompression enables zstd compression with the default compression level.
+func ZstdCompression() CompressionCodec { return CompressionCodec{codecZstd, 0} }
+
+// WithLevel changes the compression codec's "level", effectively allowing for
+// higher or lower compression ratios at the expense of CPU speed.
+//
+// For the zstd package, the level is a typed int; simply convert the type back
+// to an int for this function.
+//
+// If the level is invalid, compressors just use a default level.
+func (c CompressionCodec) WithLevel(level int) CompressionCodec {
+ c.level = level
+ return c
+}
+
+type compressor struct {
+ options []codecType
+ gzPool sync.Pool
+ lz4Pool sync.Pool
+ zstdPool sync.Pool
+}
+
+func newCompressor(codecs ...CompressionCodec) (*compressor, error) {
+ if len(codecs) == 0 {
+ return nil, nil
+ }
+
+ used := make(map[codecType]bool) // we keep one type of codec per CompressionCodec
+ var keepIdx int
+ for _, codec := range codecs {
+ if _, exists := used[codec.codec]; exists {
+ continue
+ }
+ used[codec.codec] = true
+ codecs[keepIdx] = codec
+ keepIdx++
+ }
+ codecs = codecs[:keepIdx]
+
+ for _, codec := range codecs {
+ if codec.codec < 0 || codec.codec > 4 {
+ return nil, errors.New("unknown compression codec")
+ }
+ }
+
+ c := new(compressor)
+
+out:
+ for _, codec := range codecs {
+ c.options = append(c.options, codec.codec)
+ switch codec.codec {
+ case codecNone:
+ break out
+ case codecGzip:
+ level := gzip.DefaultCompression
+ if codec.level != 0 {
+ if _, err := gzip.NewWriterLevel(nil, codec.level); err != nil {
+ level = codec.level
+ }
+ }
+ c.gzPool = sync.Pool{New: func() any { c, _ := gzip.NewWriterLevel(nil, level); return c }}
+ case codecSnappy: // (no pool needed for snappy)
+ case codecLZ4:
+ level := codec.level
+ if level < 0 {
+ level = 0 // 0 == lz4.Fast
+ }
+ fn := func() any { return lz4.NewWriter(new(bytes.Buffer)) }
+ w := lz4.NewWriter(new(bytes.Buffer))
+ if err := w.Apply(lz4.CompressionLevelOption(lz4.CompressionLevel(level))); err == nil {
+ fn = func() any {
+ w := lz4.NewWriter(new(bytes.Buffer))
+ w.Apply(lz4.CompressionLevelOption(lz4.CompressionLevel(level)))
+ return w
+ }
+ }
+ w.Close()
+ c.lz4Pool = sync.Pool{New: fn}
+ case codecZstd:
+ opts := []zstd.EOption{
+ zstd.WithWindowSize(64 << 10),
+ zstd.WithEncoderConcurrency(1),
+ zstd.WithZeroFrames(true),
+ }
+ fn := func() any {
+ zstdEnc, _ := zstd.NewWriter(nil, opts...)
+ r := &zstdEncoder{zstdEnc}
+ runtime.SetFinalizer(r, func(r *zstdEncoder) { r.inner.Close() })
+ return r
+ }
+ zstdEnc, err := zstd.NewWriter(nil, append(opts, zstd.WithEncoderLevel(zstd.EncoderLevel(codec.level)))...)
+ if err == nil {
+ zstdEnc.Close()
+ opts = append(opts, zstd.WithEncoderLevel(zstd.EncoderLevel(codec.level)))
+ }
+ c.zstdPool = sync.Pool{New: fn}
+ }
+ }
+
+ if c.options[0] == codecNone {
+ return nil, nil // first codec was passthrough
+ }
+
+ return c, nil
+}
+
+type zstdEncoder struct {
+ inner *zstd.Encoder
+}
+
+// Compress compresses src to buf, returning buf's inner slice once done or nil
+// if an error is encountered.
+//
+// The writer should be put back to its pool after the returned slice is done
+// being used.
+func (c *compressor) compress(dst *bytes.Buffer, src []byte, produceRequestVersion int16) ([]byte, codecType) {
+ var use codecType
+ for _, option := range c.options {
+ if option == codecZstd && produceRequestVersion < 7 {
+ continue
+ }
+ use = option
+ break
+ }
+
+ var out []byte
+ switch use {
+ case codecNone:
+ return src, 0
+ case codecGzip:
+ gz := c.gzPool.Get().(*gzip.Writer)
+ defer c.gzPool.Put(gz)
+ gz.Reset(dst)
+ if _, err := gz.Write(src); err != nil {
+ return nil, -1
+ }
+ if err := gz.Close(); err != nil {
+ return nil, -1
+ }
+ out = dst.Bytes()
+ case codecLZ4:
+ lz := c.lz4Pool.Get().(*lz4.Writer)
+ defer c.lz4Pool.Put(lz)
+ lz.Reset(dst)
+ if _, err := lz.Write(src); err != nil {
+ return nil, -1
+ }
+ if err := lz.Close(); err != nil {
+ return nil, -1
+ }
+ out = dst.Bytes()
+ case codecSnappy:
+ // Because the Snappy and Zstd codecs do not accept an io.Writer interface
+ // and directly take a []byte slice, here, the underlying []byte slice (`dst`)
+ // obtained from the bytes.Buffer{} from the pool is passed.
+ // As the `Write()` method on the buffer isn't used, its internal
+ // book-keeping goes out of sync, making the buffer unusable for further
+ // reading and writing via it's (eg: accessing via `Byte()`). For subsequent
+ // reads, the underlying slice has to be used directly.
+ //
+ // In this particular context, it is acceptable as there there are no subsequent
+ // operations performed on the buffer and it is immediately returned to the
+ // pool and `Reset()` the next time it is obtained and used where `compress()`
+ // is called.
+ if l := s2.MaxEncodedLen(len(src)); l > dst.Cap() {
+ dst.Grow(l)
+ }
+ out = s2.EncodeSnappy(dst.Bytes(), src)
+ case codecZstd:
+ zstdEnc := c.zstdPool.Get().(*zstdEncoder)
+ defer c.zstdPool.Put(zstdEnc)
+ if l := zstdEnc.inner.MaxEncodedSize(len(src)); l > dst.Cap() {
+ dst.Grow(l)
+ }
+ out = zstdEnc.inner.EncodeAll(src, dst.Bytes())
+ }
+
+ return out, use
+}
+
+type decompressor struct {
+ ungzPool sync.Pool
+ unlz4Pool sync.Pool
+ unzstdPool sync.Pool
+}
+
+func newDecompressor() *decompressor {
+ d := &decompressor{
+ ungzPool: sync.Pool{
+ New: func() any { return new(gzip.Reader) },
+ },
+ unlz4Pool: sync.Pool{
+ New: func() any { return lz4.NewReader(nil) },
+ },
+ unzstdPool: sync.Pool{
+ New: func() any {
+ zstdDec, _ := zstd.NewReader(nil,
+ zstd.WithDecoderLowmem(true),
+ zstd.WithDecoderConcurrency(1),
+ )
+ r := &zstdDecoder{zstdDec}
+ runtime.SetFinalizer(r, func(r *zstdDecoder) {
+ r.inner.Close()
+ })
+ return r
+ },
+ },
+ }
+ return d
+}
+
+type zstdDecoder struct {
+ inner *zstd.Decoder
+}
+
+func (d *decompressor) decompress(src []byte, codec byte) ([]byte, error) {
+ // Early return in case there is no compression
+ compCodec := codecType(codec)
+ if compCodec == codecNone {
+ return src, nil
+ }
+ out := byteBuffers.Get().(*bytes.Buffer)
+ out.Reset()
+ defer byteBuffers.Put(out)
+
+ switch compCodec {
+ case codecGzip:
+ ungz := d.ungzPool.Get().(*gzip.Reader)
+ defer d.ungzPool.Put(ungz)
+ if err := ungz.Reset(bytes.NewReader(src)); err != nil {
+ return nil, err
+ }
+ if _, err := io.Copy(out, ungz); err != nil {
+ return nil, err
+ }
+ return append([]byte(nil), out.Bytes()...), nil
+ case codecSnappy:
+ if len(src) > 16 && bytes.HasPrefix(src, xerialPfx) {
+ return xerialDecode(src)
+ }
+ decoded, err := s2.Decode(out.Bytes(), src)
+ if err != nil {
+ return nil, err
+ }
+ return append([]byte(nil), decoded...), nil
+ case codecLZ4:
+ unlz4 := d.unlz4Pool.Get().(*lz4.Reader)
+ defer d.unlz4Pool.Put(unlz4)
+ unlz4.Reset(bytes.NewReader(src))
+ if _, err := io.Copy(out, unlz4); err != nil {
+ return nil, err
+ }
+ return append([]byte(nil), out.Bytes()...), nil
+ case codecZstd:
+ unzstd := d.unzstdPool.Get().(*zstdDecoder)
+ defer d.unzstdPool.Put(unzstd)
+ decoded, err := unzstd.inner.DecodeAll(src, out.Bytes())
+ if err != nil {
+ return nil, err
+ }
+ return append([]byte(nil), decoded...), nil
+ default:
+ return nil, errors.New("unknown compression codec")
+ }
+}
+
+var xerialPfx = []byte{130, 83, 78, 65, 80, 80, 89, 0}
+
+var errMalformedXerial = errors.New("malformed xerial framing")
+
+func xerialDecode(src []byte) ([]byte, error) {
+ // bytes 0-8: xerial header
+ // bytes 8-16: xerial version
+ // everything after: uint32 chunk size, snappy chunk
+ // we come into this function knowing src is at least 16
+ src = src[16:]
+ var dst, chunk []byte
+ var err error
+ for len(src) > 0 {
+ if len(src) < 4 {
+ return nil, errMalformedXerial
+ }
+ size := int32(binary.BigEndian.Uint32(src))
+ src = src[4:]
+ if size < 0 || len(src) < int(size) {
+ return nil, errMalformedXerial
+ }
+ if chunk, err = s2.Decode(chunk[:cap(chunk)], src[:size]); err != nil {
+ return nil, err
+ }
+ src = src[size:]
+ dst = append(dst, chunk...)
+ }
+ return dst, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/config.go b/vendor/github.com/twmb/franz-go/pkg/kgo/config.go
new file mode 100644
index 0000000000000..92ebeaa39055c
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/config.go
@@ -0,0 +1,1758 @@
+package kgo
+
+import (
+ "context"
+ "crypto/tls"
+ "errors"
+ "fmt"
+ "math"
+ "math/rand"
+ "net"
+ "regexp"
+ "runtime/debug"
+ "sync"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kmsg"
+ "github.com/twmb/franz-go/pkg/kversion"
+ "github.com/twmb/franz-go/pkg/sasl"
+)
+
+// Opt is an option to configure a client.
+type Opt interface {
+ apply(*cfg)
+}
+
+// ProducerOpt is a producer specific option to configure a client.
+// This is simply a namespaced Opt.
+type ProducerOpt interface {
+ Opt
+ producerOpt()
+}
+
+// ConsumerOpt is a consumer specific option to configure a client.
+// This is simply a namespaced Opt.
+type ConsumerOpt interface {
+ Opt
+ consumerOpt()
+}
+
+// GroupOpt is a consumer group specific option to configure a client.
+// This is simply a namespaced Opt.
+type GroupOpt interface {
+ Opt
+ groupOpt()
+}
+
+type (
+ clientOpt struct{ fn func(*cfg) }
+ producerOpt struct{ fn func(*cfg) }
+ consumerOpt struct{ fn func(*cfg) }
+ groupOpt struct{ fn func(*cfg) }
+)
+
+func (opt clientOpt) apply(cfg *cfg) { opt.fn(cfg) }
+func (opt producerOpt) apply(cfg *cfg) { opt.fn(cfg) }
+func (opt consumerOpt) apply(cfg *cfg) { opt.fn(cfg) }
+func (opt groupOpt) apply(cfg *cfg) { opt.fn(cfg) }
+func (producerOpt) producerOpt() {}
+func (consumerOpt) consumerOpt() {}
+func (groupOpt) groupOpt() {}
+
+// A cfg can be written to while initializing a client, and after that it is
+// (mostly) only ever read from. Some areas can continue to be modified --
+// particularly reconfiguring what to consume from -- but most areas are
+// static.
+type cfg struct {
+ /////////////////////
+ // GENERAL SECTION //
+ /////////////////////
+
+ id *string // client ID
+ dialFn func(context.Context, string, string) (net.Conn, error)
+ dialTimeout time.Duration
+ dialTLS *tls.Config
+ requestTimeoutOverhead time.Duration
+ connIdleTimeout time.Duration
+
+ softwareName string // KIP-511
+ softwareVersion string // KIP-511
+
+ logger Logger
+
+ seedBrokers []string
+ maxVersions *kversion.Versions
+ minVersions *kversion.Versions
+
+ retryBackoff func(int) time.Duration
+ retries int64
+ retryTimeout func(int16) time.Duration
+
+ maxBrokerWriteBytes int32
+ maxBrokerReadBytes int32
+
+ allowAutoTopicCreation bool
+
+ metadataMaxAge time.Duration
+ metadataMinAge time.Duration
+
+ sasls []sasl.Mechanism
+
+ hooks hooks
+
+ //////////////////////
+ // PRODUCER SECTION //
+ //////////////////////
+
+ txnID *string
+ txnTimeout time.Duration
+ acks Acks
+ disableIdempotency bool
+ maxProduceInflight int // if idempotency is disabled, we allow a configurable max inflight
+ compression []CompressionCodec // order of preference
+
+ defaultProduceTopic string
+ maxRecordBatchBytes int32
+ maxBufferedRecords int64
+ maxBufferedBytes int64
+ produceTimeout time.Duration
+ recordRetries int64
+ maxUnknownFailures int64
+ linger time.Duration
+ recordTimeout time.Duration
+ manualFlushing bool
+ txnBackoff time.Duration
+ missingTopicDelete time.Duration
+
+ partitioner Partitioner
+
+ stopOnDataLoss bool
+ onDataLoss func(string, int32)
+
+ //////////////////////
+ // CONSUMER SECTION //
+ //////////////////////
+
+ maxWait int32
+ minBytes int32
+ maxBytes lazyI32
+ maxPartBytes lazyI32
+ resetOffset Offset
+ isolationLevel int8
+ keepControl bool
+ rack string
+ preferLagFn PreferLagFn
+
+ maxConcurrentFetches int
+ disableFetchSessions bool
+ keepRetryableFetchErrors bool
+
+ topics map[string]*regexp.Regexp // topics to consume; if regex is true, values are compiled regular expressions
+ partitions map[string]map[int32]Offset // partitions to directly consume from
+ regex bool
+
+ ////////////////////////////
+ // CONSUMER GROUP SECTION //
+ ////////////////////////////
+
+ group string // group we are in
+ instanceID *string // optional group instance ID
+ balancers []GroupBalancer // balancers we can use
+ protocol string // "consumer" by default, expected to never be overridden
+
+ sessionTimeout time.Duration
+ rebalanceTimeout time.Duration
+ heartbeatInterval time.Duration
+ requireStable bool
+
+ onAssigned func(context.Context, *Client, map[string][]int32)
+ onRevoked func(context.Context, *Client, map[string][]int32)
+ onLost func(context.Context, *Client, map[string][]int32)
+ onFetched func(context.Context, *Client, *kmsg.OffsetFetchResponse) error
+
+ adjustOffsetsBeforeAssign func(ctx context.Context, offsets map[string]map[int32]Offset) (map[string]map[int32]Offset, error)
+
+ blockRebalanceOnPoll bool
+
+ setAssigned bool
+ setRevoked bool
+ setLost bool
+ setCommitCallback bool
+
+ autocommitDisable bool // true if autocommit was disabled or we are transactional
+ autocommitGreedy bool
+ autocommitMarks bool
+ autocommitInterval time.Duration
+ commitCallback func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error)
+}
+
+func (cfg *cfg) validate() error {
+ if len(cfg.seedBrokers) == 0 {
+ return errors.New("config erroneously has no seed brokers")
+ }
+
+ // We clamp maxPartBytes to maxBytes because some fake Kafka endpoints
+ // (Oracle) cannot handle the mismatch correctly.
+ if cfg.maxPartBytes > cfg.maxBytes {
+ cfg.maxPartBytes = cfg.maxBytes
+ }
+
+ if cfg.disableIdempotency {
+ if cfg.txnID != nil {
+ return errors.New("cannot both disable idempotent writes and use transactional IDs")
+ }
+ if cfg.maxProduceInflight <= 0 {
+ return fmt.Errorf("invalid max produce inflight %d with idempotency disabled", cfg.maxProduceInflight)
+ }
+ } else {
+ if cfg.acks.val != -1 {
+ return errors.New("idempotency requires acks=all")
+ }
+ if cfg.maxProduceInflight != 1 {
+ return fmt.Errorf("invalid usage of MaxProduceRequestsInflightPerBroker with idempotency enabled")
+ }
+ }
+
+ for _, limit := range []struct {
+ name string
+ sp **string // if field is a *string, we take addr to it
+ s string
+ allowed int
+ }{
+ // A 256 byte ID / software name & version is good enough and
+ // fits with our max broker write byte min of 1K.
+ {name: "client id", sp: &cfg.id, allowed: 256},
+ {name: "software name", s: cfg.softwareName, allowed: 256},
+ {name: "software version", s: cfg.softwareVersion, allowed: 256},
+
+ // The following is the limit transitioning from two byte
+ // prefix for flexible stuff to three bytes; as with above, it
+ // is more than reasonable.
+ {name: "transactional id", sp: &cfg.txnID, allowed: 16382},
+
+ {name: "rack", s: cfg.rack, allowed: 512},
+ } {
+ s := limit.s
+ if limit.sp != nil && *limit.sp != nil {
+ s = **limit.sp
+ }
+ if len(s) > limit.allowed {
+ return fmt.Errorf("%s length %d is larger than max allowed %d", limit.name, len(s), limit.allowed)
+ }
+ }
+
+ i64lt := func(l, r int64) (bool, string) { return l < r, "less" }
+ i64gt := func(l, r int64) (bool, string) { return l > r, "larger" }
+ for _, limit := range []struct {
+ name string
+ v int64
+ allowed int64
+ badcmp func(int64, int64) (bool, string)
+
+ fmt string
+ durs bool
+ }{
+ // Min write of 1K and max of 1G is reasonable.
+ {name: "max broker write bytes", v: int64(cfg.maxBrokerWriteBytes), allowed: 1 << 10, badcmp: i64lt},
+ {name: "max broker write bytes", v: int64(cfg.maxBrokerWriteBytes), allowed: 1 << 30, badcmp: i64gt},
+
+ // Same for read bytes.
+ {name: "max broker read bytes", v: int64(cfg.maxBrokerReadBytes), allowed: 1 << 10, badcmp: i64lt},
+ {name: "max broker read bytes", v: int64(cfg.maxBrokerReadBytes), allowed: 1 << 30, badcmp: i64gt},
+
+ // For batches, we want at least 512 (reasonable), and the
+ // upper limit is the max num when a uvarint transitions from 4
+ // to 5 bytes. The upper limit is also more than reasonable
+ // (256MiB).
+ {name: "max record batch bytes", v: int64(cfg.maxRecordBatchBytes), allowed: 512, badcmp: i64lt},
+ {name: "max record batch bytes", v: int64(cfg.maxRecordBatchBytes), allowed: 256 << 20, badcmp: i64gt},
+
+ // We do not want the broker write bytes to be less than the
+ // record batch bytes, nor the read bytes to be less than what
+ // we indicate to fetch.
+ //
+ // We cannot enforce if a single batch is larger than the max
+ // fetch bytes limit, but hopefully we do not run into that.
+ {v: int64(cfg.maxBrokerWriteBytes), allowed: int64(cfg.maxRecordBatchBytes), badcmp: i64lt, fmt: "max broker write bytes %v is erroneously less than max record batch bytes %v"},
+ {v: int64(cfg.maxBrokerReadBytes), allowed: int64(cfg.maxBytes), badcmp: i64lt, fmt: "max broker read bytes %v is erroneously less than max fetch bytes %v"},
+
+ // 0 <= allowed concurrency
+ {name: "max concurrent fetches", v: int64(cfg.maxConcurrentFetches), allowed: 0, badcmp: i64lt},
+
+ // 1s <= request timeout overhead <= 15m
+ {name: "request timeout max overhead", v: int64(cfg.requestTimeoutOverhead), allowed: int64(15 * time.Minute), badcmp: i64gt, durs: true},
+ {name: "request timeout min overhead", v: int64(cfg.requestTimeoutOverhead), allowed: int64(time.Second), badcmp: i64lt, durs: true},
+
+ // 1s <= conn idle <= 15m
+ {name: "conn min idle timeout", v: int64(cfg.connIdleTimeout), allowed: int64(time.Second), badcmp: i64lt, durs: true},
+ {name: "conn max idle timeout", v: int64(cfg.connIdleTimeout), allowed: int64(15 * time.Minute), badcmp: i64gt, durs: true},
+
+ // 10ms <= metadata <= 1hr
+ {name: "metadata max age", v: int64(cfg.metadataMaxAge), allowed: int64(time.Hour), badcmp: i64gt, durs: true},
+ {name: "metadata min age", v: int64(cfg.metadataMinAge), allowed: int64(10 * time.Millisecond), badcmp: i64lt, durs: true},
+ {v: int64(cfg.metadataMaxAge), allowed: int64(cfg.metadataMinAge), badcmp: i64lt, fmt: "metadata max age %v is erroneously less than metadata min age %v", durs: true},
+
+ // Some random producer settings.
+ {name: "max buffered records", v: cfg.maxBufferedRecords, allowed: 1, badcmp: i64lt},
+ {name: "max buffered bytes", v: cfg.maxBufferedBytes, allowed: 0, badcmp: i64lt},
+ {name: "linger", v: int64(cfg.linger), allowed: int64(time.Minute), badcmp: i64gt, durs: true},
+ {name: "produce timeout", v: int64(cfg.produceTimeout), allowed: int64(100 * time.Millisecond), badcmp: i64lt, durs: true},
+ {name: "record timeout", v: int64(cfg.recordTimeout), allowed: int64(time.Second), badcmp: func(l, r int64) (bool, string) {
+ if l == 0 {
+ return false, "" // we print nothing when things are good
+ }
+ return l < r, "less"
+ }, durs: true},
+
+ // Consumer settings. maxWait is stored as int32 milliseconds,
+ // but we want the error message to be in the nice
+ // time.Duration string format.
+ {name: "max fetch wait", v: int64(cfg.maxWait) * int64(time.Millisecond), allowed: int64(10 * time.Millisecond), badcmp: i64lt, durs: true},
+
+ // Group settings.
+ {name: "number of balancers", v: int64(len(cfg.balancers)), allowed: 1, badcmp: i64lt},
+ {name: "consumer protocol length", v: int64(len(cfg.protocol)), allowed: 1, badcmp: i64lt},
+
+ {name: "session timeout", v: int64(cfg.sessionTimeout), allowed: int64(100 * time.Millisecond), badcmp: i64lt, durs: true},
+ {name: "rebalance timeout", v: int64(cfg.rebalanceTimeout), allowed: int64(100 * time.Millisecond), badcmp: i64lt, durs: true},
+ {name: "autocommit interval", v: int64(cfg.autocommitInterval), allowed: int64(100 * time.Millisecond), badcmp: i64lt, durs: true},
+
+ {v: int64(cfg.heartbeatInterval), allowed: int64(cfg.rebalanceTimeout) * int64(time.Millisecond), badcmp: i64gt, durs: true, fmt: "heartbeat interval %v is erroneously larger than the session timeout %v"},
+ } {
+ bad, cmp := limit.badcmp(limit.v, limit.allowed)
+ if bad {
+ if limit.fmt != "" {
+ if limit.durs {
+ return fmt.Errorf(limit.fmt, time.Duration(limit.v), time.Duration(limit.allowed))
+ }
+ return fmt.Errorf(limit.fmt, limit.v, limit.allowed)
+ }
+ if limit.durs {
+ return fmt.Errorf("%s %v is %s than allowed %v", limit.name, time.Duration(limit.v), cmp, time.Duration(limit.allowed))
+ }
+ return fmt.Errorf("%s %v is %s than allowed %v", limit.name, limit.v, cmp, limit.allowed)
+ }
+ }
+
+ if cfg.dialFn != nil {
+ if cfg.dialTLS != nil {
+ return errors.New("cannot set both Dialer and DialTLSConfig")
+ }
+ }
+
+ if len(cfg.group) > 0 {
+ if len(cfg.partitions) != 0 {
+ return errors.New("invalid direct-partition consuming option when consuming as a group")
+ }
+ }
+
+ if cfg.regex {
+ if len(cfg.partitions) != 0 {
+ return errors.New("invalid direct-partition consuming option when consuming as regex")
+ }
+ for re := range cfg.topics {
+ compiled, err := regexp.Compile(re)
+ if err != nil {
+ return fmt.Errorf("invalid regular expression %q", re)
+ }
+ cfg.topics[re] = compiled
+ }
+ }
+
+ if cfg.topics != nil && cfg.partitions != nil {
+ for topic := range cfg.partitions {
+ if _, exists := cfg.topics[topic]; exists {
+ return fmt.Errorf("topic %q seen in both ConsumePartitions and ConsumeTopics; these options are a union, it is invalid to specify specific partitions for a topic while also consuming the entire topic", topic)
+ }
+ }
+ }
+
+ if cfg.autocommitDisable && cfg.autocommitGreedy {
+ return errors.New("cannot both disable autocommitting and enable greedy autocommitting")
+ }
+ if cfg.autocommitDisable && cfg.autocommitMarks {
+ return errors.New("cannot both disable autocommitting and enable marked autocommitting")
+ }
+ if cfg.autocommitGreedy && cfg.autocommitMarks {
+ return errors.New("cannot enable both greedy autocommitting and marked autocommitting")
+ }
+ if (cfg.autocommitGreedy || cfg.autocommitDisable || cfg.autocommitMarks || cfg.setCommitCallback) && len(cfg.group) == 0 {
+ return errors.New("invalid autocommit options specified when a group was not specified")
+ }
+ if (cfg.setLost || cfg.setRevoked || cfg.setAssigned) && len(cfg.group) == 0 {
+ return errors.New("invalid group partition assigned/revoked/lost functions set when a group was not specified")
+ }
+
+ processedHooks, err := processHooks(cfg.hooks)
+ if err != nil {
+ return err
+ }
+ cfg.hooks = processedHooks
+
+ return nil
+}
+
+// processHooks will inspect and recursively unpack slices of hooks stopping
+// if the instance implements any hook interface. It will return an error on
+// the first instance that implements no hook interface
+func processHooks(hooks []Hook) ([]Hook, error) {
+ var processedHooks []Hook
+ for _, hook := range hooks {
+ if implementsAnyHook(hook) {
+ processedHooks = append(processedHooks, hook)
+ } else if moreHooks, ok := hook.([]Hook); ok {
+ more, err := processHooks(moreHooks)
+ if err != nil {
+ return nil, err
+ }
+ processedHooks = append(processedHooks, more...)
+ } else {
+ return nil, errors.New("found an argument that implements no hook interfaces")
+ }
+ }
+ return processedHooks, nil
+}
+
+var reVersion = regexp.MustCompile(`^[a-zA-Z0-9](?:[a-zA-Z0-9.-]*[a-zA-Z0-9])?$`)
+
+func softwareVersion() string {
+ info, ok := debug.ReadBuildInfo()
+ if ok {
+ for _, dep := range info.Deps {
+ if dep.Path == "github.com/twmb/franz-go" {
+ if reVersion.MatchString(dep.Version) {
+ return dep.Version
+ }
+ }
+ }
+ }
+ return "unknown"
+}
+
+func defaultCfg() cfg {
+ defaultID := "kgo"
+ return cfg{
+ /////////////
+ // general //
+ /////////////
+ id: &defaultID,
+
+ dialTimeout: 10 * time.Second,
+ requestTimeoutOverhead: 10 * time.Second,
+ connIdleTimeout: 20 * time.Second,
+
+ softwareName: "kgo",
+ softwareVersion: softwareVersion(),
+
+ logger: new(nopLogger),
+
+ seedBrokers: []string{"127.0.0.1"},
+ maxVersions: kversion.Stable(),
+
+ retryBackoff: func() func(int) time.Duration {
+ var rngMu sync.Mutex
+ rng := rand.New(rand.NewSource(time.Now().UnixNano()))
+ return func(fails int) time.Duration {
+ const (
+ min = 250 * time.Millisecond
+ max = 5 * time.Second / 2
+ )
+ if fails <= 0 {
+ return min
+ }
+ if fails > 10 {
+ return max
+ }
+
+ backoff := min * time.Duration(1<<(fails-1))
+
+ rngMu.Lock()
+ jitter := 0.8 + 0.4*rng.Float64()
+ rngMu.Unlock()
+
+ backoff = time.Duration(float64(backoff) * jitter)
+
+ if backoff > max {
+ return max
+ }
+ return backoff
+ }
+ }(),
+ retries: 20,
+
+ maxBrokerWriteBytes: 100 << 20, // Kafka socket.request.max.bytes default is 100<<20
+ maxBrokerReadBytes: 100 << 20,
+
+ metadataMaxAge: 5 * time.Minute,
+ metadataMinAge: 5 * time.Second,
+ missingTopicDelete: 15 * time.Second,
+
+ //////////////
+ // producer //
+ //////////////
+
+ txnTimeout: 40 * time.Second,
+ acks: AllISRAcks(),
+ maxProduceInflight: 1,
+ compression: []CompressionCodec{SnappyCompression(), NoCompression()},
+ maxRecordBatchBytes: 1000012, // Kafka max.message.bytes default is 1000012
+ maxBufferedRecords: 10000,
+ produceTimeout: 10 * time.Second,
+ recordRetries: math.MaxInt64, // effectively unbounded
+ maxUnknownFailures: 4,
+ partitioner: UniformBytesPartitioner(64<<10, true, true, nil),
+ txnBackoff: 20 * time.Millisecond,
+
+ //////////////
+ // consumer //
+ //////////////
+
+ maxWait: 5000,
+ minBytes: 1,
+ maxBytes: 50 << 20,
+ maxPartBytes: 1 << 20,
+ resetOffset: NewOffset().AtStart(),
+ isolationLevel: 0,
+
+ maxConcurrentFetches: 0, // unbounded default
+
+ ///////////
+ // group //
+ ///////////
+
+ balancers: []GroupBalancer{
+ CooperativeStickyBalancer(),
+ },
+ protocol: "consumer",
+
+ sessionTimeout: 45000 * time.Millisecond,
+ rebalanceTimeout: 60000 * time.Millisecond,
+ heartbeatInterval: 3000 * time.Millisecond,
+
+ autocommitInterval: 5 * time.Second,
+ }
+}
+
+//////////////////////////
+// CLIENT CONFIGURATION //
+//////////////////////////
+
+// ClientID uses id for all requests sent to Kafka brokers, overriding the
+// default "kgo".
+func ClientID(id string) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.id = &id }}
+}
+
+// SoftwareNameAndVersion sets the client software name and version that will
+// be sent to Kafka as part of the ApiVersions request as of Kafka 2.4,
+// overriding the default "kgo" and internal version number.
+//
+// Kafka exposes this through metrics to help operators understand the impact
+// of clients.
+//
+// It is generally not recommended to set this. As well, if you do, the name
+// and version must match the following regular expression:
+//
+// [a-zA-Z0-9](?:[a-zA-Z0-9\.-]*[a-zA-Z0-9])?
+//
+// Note this means neither the name nor version can be empty.
+func SoftwareNameAndVersion(name, version string) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.softwareName = name; cfg.softwareVersion = version }}
+}
+
+// WithLogger sets the client to use the given logger, overriding the default
+// to not use a logger.
+//
+// It is invalid to use a nil logger; doing so will cause panics.
+func WithLogger(l Logger) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.logger = &wrappedLogger{l} }}
+}
+
+// RequestTimeoutOverhead uses the given time as overhead while deadlining
+// requests, overriding the default overhead of 10s.
+//
+// For most requests, the timeout is set to the overhead. However, for
+// any request with a TimeoutMillis field, the overhead is added on top of the
+// request's TimeoutMillis. This ensures that we give Kafka enough time to
+// actually process the request given the timeout, while still having a
+// deadline on the connection as a whole to ensure it does not hang.
+//
+// For writes, the timeout is always the overhead. We buffer writes in our
+// client before one quick flush, so we always expect the write to be fast.
+//
+// Note that hitting the timeout kills a connection, which will fail any other
+// active writes or reads on the connection.
+//
+// This option is roughly equivalent to request.timeout.ms, but grants
+// additional time to requests that have timeout fields.
+func RequestTimeoutOverhead(overhead time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.requestTimeoutOverhead = overhead }}
+}
+
+// ConnIdleTimeout is a rough amount of time to allow connections to idle
+// before they are closed, overriding the default 20.
+//
+// In the worst case, a connection can be allowed to idle for up to 2x this
+// time, while the average is expected to be 1.5x (essentially, a uniform
+// distribution from this interval to 2x the interval).
+//
+// It is possible that a connection can be reaped just as it is about to be
+// written to, but the client internally retries in these cases.
+//
+// Connections are not reaped if they are actively being written to or read
+// from; thus, a request can take a really long time itself and not be reaped
+// (however, this may lead to the RequestTimeoutOverhead).
+func ConnIdleTimeout(timeout time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.connIdleTimeout = timeout }}
+}
+
+// Dialer uses fn to dial addresses, overriding the default dialer that uses a
+// 10s dial timeout and no TLS.
+//
+// The context passed to the dial function is the context used in the request
+// that caused the dial. If the request is a client-internal request, the
+// context is the context on the client itself (which is canceled when the
+// client is closed).
+//
+// This function has the same signature as net.Dialer's DialContext and
+// tls.Dialer's DialContext, meaning you can use this function like so:
+//
+// kgo.Dialer((&net.Dialer{Timeout: 10*time.Second}).DialContext)
+//
+// or
+//
+// kgo.Dialer((&tls.Dialer{...}).DialContext)
+func Dialer(fn func(ctx context.Context, network, host string) (net.Conn, error)) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.dialFn = fn }}
+}
+
+// DialTimeout sets the dial timeout, overriding the default of 10s. This
+// option is useful if you do not want to set a custom dialer, and is useful in
+// tandem with DialTLSConfig.
+func DialTimeout(timeout time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.dialTimeout = timeout }}
+}
+
+// DialTLSConfig opts into dialing brokers with the given TLS config with a
+// 10s dial timeout. This is a shortcut for manually specifying a tls dialer
+// using the Dialer option. You can also change the default 10s timeout with
+// DialTimeout.
+//
+// Every dial, the input config is cloned. If the config's ServerName is not
+// specified, this function uses net.SplitHostPort to extract the host from the
+// broker being dialed and sets the ServerName. In short, it is not necessary
+// to set the ServerName.
+func DialTLSConfig(c *tls.Config) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.dialTLS = c }}
+}
+
+// DialTLS opts into dialing brokers with TLS. This is a shortcut for
+// DialTLSConfig with an empty config. See DialTLSConfig for more details.
+func DialTLS() Opt {
+ return DialTLSConfig(new(tls.Config))
+}
+
+// SeedBrokers sets the seed brokers for the client to use, overriding the
+// default 127.0.0.1:9092.
+//
+// Any seeds that are missing a port use the default Kafka port 9092.
+func SeedBrokers(seeds ...string) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.seedBrokers = append(cfg.seedBrokers[:0], seeds...) }}
+}
+
+// MaxVersions sets the maximum Kafka version to try, overriding the
+// internal unbounded (latest stable) versions.
+//
+// Note that specific max version pinning is required if trying to interact
+// with versions pre 0.10.0. Otherwise, unless using more complicated requests
+// that this client itself does not natively use, it is generally safe to opt
+// for the latest version. If using the kmsg package directly to issue
+// requests, it is recommended to pin versions so that new fields on requests
+// do not get invalid default zero values before you update your usage.
+func MaxVersions(versions *kversion.Versions) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.maxVersions = versions }}
+}
+
+// MinVersions sets the minimum Kafka version a request can be downgraded to,
+// overriding the default of the lowest version.
+//
+// This option is useful if you are issuing requests that you absolutely do not
+// want to be downgraded; that is, if you are relying on features in newer
+// requests, and you are not sure if your brokers can handle those features.
+// By setting a min version, if the client detects it needs to downgrade past
+// the version, it will instead avoid issuing the request.
+//
+// Unlike MaxVersions, if a request is issued that is unknown to the min
+// versions, the request is allowed. It is assumed that there is no lower bound
+// for that request.
+func MinVersions(versions *kversion.Versions) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.minVersions = versions }}
+}
+
+// RetryBackoffFn sets the backoff strategy for how long to backoff for a given
+// amount of retries, overriding the default jittery exponential backoff that
+// ranges from 250ms min to 2.5s max.
+//
+// This (roughly) corresponds to Kafka's retry.backoff.ms setting and
+// retry.backoff.max.ms (which is being introduced with KIP-500).
+func RetryBackoffFn(backoff func(int) time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.retryBackoff = backoff }}
+}
+
+// RequestRetries sets the number of tries that retryable requests are allowed,
+// overriding the default of 20s.
+//
+// This option does not apply to produce requests; to limit produce request
+// retries / record retries, see RecordRetries.
+func RequestRetries(n int) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.retries = int64(n) }}
+}
+
+// RetryTimeout sets the upper limit on how long we allow a request to be
+// issued and then reissued on failure. That is, this control the total
+// end-to-end maximum time we allow for trying a request, This overrides the
+// default of:
+//
+// JoinGroup: cfg.SessionTimeout (default 45s)
+// SyncGroup: cfg.SessionTimeout (default 45s)
+// Heartbeat: cfg.SessionTimeout (default 45s)
+// others: 30s
+//
+// This timeout applies to any request issued through a client's Request
+// function. It does not apply to fetches nor produces.
+//
+// A value of zero indicates no request timeout.
+//
+// The timeout is evaluated after a request errors. If the time since the start
+// of the first request plus any backoff for the latest failure is less than
+// the retry timeout, the request will be issued again.
+func RetryTimeout(t time.Duration) Opt {
+ return RetryTimeoutFn(func(int16) time.Duration { return t })
+}
+
+// RetryTimeoutFn sets the upper limit on how long we allow a request to be
+// issued and then reissued on failure. That is, this control the total
+// end-to-end maximum time we allow for trying a request, This overrides the
+// default of:
+//
+// JoinGroup: cfg.SessionTimeout (default 45s)
+// SyncGroup: cfg.SessionTimeout (default 45s)
+// Heartbeat: cfg.SessionTimeout (default 45s)
+// others: 30s
+//
+// This timeout applies to any request issued through a client's Request
+// function. It does not apply to fetches nor produces.
+//
+// The function is called with the request key that is being retried. While it
+// is not expected that the request key will be used, including it gives users
+// the opportinuty to have different retry timeouts for different keys.
+//
+// If the function returns zero, there is no retry timeout.
+//
+// The timeout is evaluated after a request errors. If the time since the start
+// of the first request plus any backoff for the latest failure is less than
+// the retry timeout, the request will be issued again.
+func RetryTimeoutFn(t func(int16) time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.retryTimeout = t }}
+}
+
+// AllowAutoTopicCreation enables topics to be auto created if they do
+// not exist when fetching their metadata.
+func AllowAutoTopicCreation() Opt {
+ return clientOpt{func(cfg *cfg) { cfg.allowAutoTopicCreation = true }}
+}
+
+// BrokerMaxWriteBytes upper bounds the number of bytes written to a broker
+// connection in a single write, overriding the default 100MiB.
+//
+// This number corresponds to the a broker's socket.request.max.bytes, which
+// defaults to 100MiB.
+//
+// The only Kafka request that could come reasonable close to hitting this
+// limit should be produce requests, and thus this limit is only enforced for
+// produce requests.
+func BrokerMaxWriteBytes(v int32) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.maxBrokerWriteBytes = v }}
+}
+
+// BrokerMaxReadBytes sets the maximum response size that can be read from
+// Kafka, overriding the default 100MiB.
+//
+// This is a safety measure to avoid OOMing on invalid responses. This is
+// slightly double FetchMaxBytes; if bumping that, consider bump this. No other
+// response should run the risk of hitting this limit.
+func BrokerMaxReadBytes(v int32) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.maxBrokerReadBytes = v }}
+}
+
+// MetadataMaxAge sets the maximum age for the client's cached metadata,
+// overriding the default 5m, to allow detection of new topics, partitions,
+// etc.
+//
+// This corresponds to Kafka's metadata.max.age.ms.
+func MetadataMaxAge(age time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.metadataMaxAge = age }}
+}
+
+// MetadataMinAge sets the minimum time between metadata queries, overriding
+// the default 5s. You may want to raise or lower this to reduce the number of
+// metadata queries the client will make. Notably, if metadata detects an error
+// in any topic or partition, it triggers itself to update as soon as allowed.
+func MetadataMinAge(age time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.metadataMinAge = age }}
+}
+
+// SASL appends sasl authentication options to use for all connections.
+//
+// SASL is tried in order; if the broker supports the first mechanism, all
+// connections will use that mechanism. If the first mechanism fails, the
+// client will pick the first supported mechanism. If the broker does not
+// support any client mechanisms, connections will fail.
+func SASL(sasls ...sasl.Mechanism) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.sasls = append(cfg.sasls, sasls...) }}
+}
+
+// WithHooks sets hooks to call whenever relevant.
+//
+// Hooks can be used to layer in metrics (such as Prometheus hooks) or anything
+// else. The client will call all hooks in order. See the Hooks interface for
+// more information, as well as any interface that contains "Hook" in the name
+// to know the available hooks. A single hook can implement zero or all hook
+// interfaces, and only the hooks that it implements will be called.
+func WithHooks(hooks ...Hook) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.hooks = append(cfg.hooks, hooks...) }}
+}
+
+// ConcurrentTransactionsBackoff sets the backoff interval to use during
+// transactional requests in case we encounter CONCURRENT_TRANSACTIONS error,
+// overriding the default 20ms.
+//
+// Sometimes, when a client begins a transaction quickly enough after finishing
+// a previous one, Kafka will return a CONCURRENT_TRANSACTIONS error. Clients
+// are expected to backoff slightly and retry the operation. Lower backoffs may
+// increase load on the brokers, while higher backoffs may increase transaction
+// latency in clients.
+//
+// Note that if brokers are hanging in this concurrent transactions state for
+// too long, the client progressively increases the backoff.
+func ConcurrentTransactionsBackoff(backoff time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.txnBackoff = backoff }}
+}
+
+// ConsiderMissingTopicDeletedAfter sets the amount of time a topic can be
+// missing from metadata responses _after_ loading it at least once before it
+// is considered deleted, overriding the default of 15s. Note that for newer
+// versions of Kafka, it may take a bit of time (~15s) for the cluster to fully
+// recognize a newly created topic. If this option is set too low, there is
+// some risk that the client will internally purge and re-see a topic a few
+// times until the cluster fully broadcasts the topic creation.
+func ConsiderMissingTopicDeletedAfter(t time.Duration) Opt {
+ return clientOpt{func(cfg *cfg) { cfg.missingTopicDelete = t }}
+}
+
+////////////////////////////
+// PRODUCER CONFIGURATION //
+////////////////////////////
+
+// DefaultProduceTopic sets the default topic to produce to if the topic field
+// is empty in a Record.
+//
+// If this option is not used, if a record has an empty topic, the record
+// cannot be produced and will be failed immediately.
+func DefaultProduceTopic(t string) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.defaultProduceTopic = t }}
+}
+
+// Acks represents the number of acks a broker leader must have before
+// a produce request is considered complete.
+//
+// This controls the durability of written records and corresponds to "acks" in
+// Kafka's Producer Configuration documentation.
+//
+// The default is LeaderAck.
+type Acks struct {
+ val int16
+}
+
+// NoAck considers records sent as soon as they are written on the wire.
+// The leader does not reply to records.
+func NoAck() Acks { return Acks{0} }
+
+// LeaderAck causes Kafka to reply that a record is written after only
+// the leader has written a message. The leader does not wait for in-sync
+// replica replies.
+func LeaderAck() Acks { return Acks{1} }
+
+// AllISRAcks ensures that all in-sync replicas have acknowledged they
+// wrote a record before the leader replies success.
+func AllISRAcks() Acks { return Acks{-1} }
+
+// RequiredAcks sets the required acks for produced records,
+// overriding the default RequireAllISRAcks.
+func RequiredAcks(acks Acks) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.acks = acks }}
+}
+
+// DisableIdempotentWrite disables idempotent produce requests, opting out of
+// Kafka server-side deduplication in the face of reissued requests due to
+// transient network problems. Disabling idempotent write by default
+// upper-bounds the number of in-flight produce requests per broker to 1, vs.
+// the default of 5 when using idempotency.
+//
+// Idempotent production is strictly a win, but does require the
+// IDEMPOTENT_WRITE permission on CLUSTER (pre Kafka 3.0), and not all clients
+// can have that permission.
+//
+// This option is incompatible with specifying a transactional id.
+func DisableIdempotentWrite() ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.disableIdempotency = true }}
+}
+
+// MaxProduceRequestsInflightPerBroker changes the number of allowed produce
+// requests in flight per broker if you disable idempotency, overriding the
+// default of 1. If using idempotency, this option has no effect: the maximum
+// in flight for Kafka v0.11 is 1, and from v1 onward is 5.
+//
+// Using more than 1 may result in out of order records and may result in
+// duplicates if there are connection issues.
+func MaxProduceRequestsInflightPerBroker(n int) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.maxProduceInflight = n }}
+}
+
+// ProducerBatchCompression sets the compression codec to use for producing
+// records.
+//
+// Compression is chosen in the order preferred based on broker support. For
+// example, zstd compression was introduced in Kafka 2.1, so the preference
+// can be first zstd, fallback snappy, fallback none.
+//
+// The default preference is [snappy, none], which should be fine for all old
+// consumers since snappy compression has existed since Kafka 0.8.0. To use
+// zstd, your brokers must be at least 2.1 and all consumers must be upgraded
+// to support decoding zstd records.
+func ProducerBatchCompression(preference ...CompressionCodec) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.compression = preference }}
+}
+
+// ProducerBatchMaxBytes upper bounds the size of a record batch, overriding
+// the default 1,000,012 bytes. This mirrors Kafka's max.message.bytes.
+//
+// Record batches are independent of a ProduceRequest: a record batch is
+// specific to a topic and partition, whereas the produce request can contain
+// many record batches for many topics.
+//
+// If a single record encodes larger than this number (before compression), it
+// will will not be written and a callback will have the appropriate error.
+//
+// Note that this is the maximum size of a record batch before compression. If
+// a batch compresses poorly and actually grows the batch, the uncompressed
+// form will be used.
+func ProducerBatchMaxBytes(v int32) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.maxRecordBatchBytes = v }}
+}
+
+// MaxBufferedRecords sets the max amount of records the client will buffer,
+// blocking produces until records are finished if this limit is reached.
+// This overrides the default of 10,000.
+func MaxBufferedRecords(n int) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.maxBufferedRecords = int64(n) }}
+}
+
+// MaxBufferedBytes sets the max amount of bytes that the client will buffer
+// while producing, blocking produces until records are finished if this limit
+// is reached. This overrides the unlimited default.
+//
+// Note that this option does _not_ apply for consuming: the client cannot
+// limit bytes buffered for consuming because of decompression. You can roughly
+// control consuming memory by using [MaxConcurrentFetches], [FetchMaxBytes],
+// and [FetchMaxPartitionBytes].
+//
+// If you produce a record that is larger than n, the record is immediately
+// failed with kerr.MessageTooLarge.
+//
+// Note that this limit applies after [MaxBufferedRecords].
+func MaxBufferedBytes(n int) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.maxBufferedBytes = int64(n) }}
+}
+
+// RecordPartitioner uses the given partitioner to partition records, overriding
+// the default UniformBytesPartitioner(64KiB, true, true, nil).
+func RecordPartitioner(partitioner Partitioner) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.partitioner = partitioner }}
+}
+
+// ProduceRequestTimeout sets how long Kafka broker's are allowed to respond to
+// produce requests, overriding the default 10s. If a broker exceeds this
+// duration, it will reply with a request timeout error.
+//
+// This somewhat corresponds to Kafka's request.timeout.ms setting, but only
+// applies to produce requests. This settings sets the TimeoutMillis field in
+// the produce request itself. The RequestTimeoutOverhead is applied as a write
+// limit and read limit in addition to this.
+func ProduceRequestTimeout(limit time.Duration) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.produceTimeout = limit }}
+}
+
+// RecordRetries sets the number of tries for producing records, overriding the
+// unlimited default.
+//
+// If idempotency is enabled (as it is by default), this option is only
+// enforced if it is safe to do so without creating invalid sequence numbers.
+// It is safe to enforce if a record was never issued in a request to Kafka, or
+// if it was requested and received a response.
+//
+// If a record fails due to retries, all records buffered in the same partition
+// are failed as well. This ensures gapless ordering: the client will not fail
+// one record only to produce a later one successfully. This also allows for
+// easier sequence number ordering internally.
+//
+// If a topic repeatedly fails to load with UNKNOWN_TOPIC_OR_PARTITION, it has
+// a different limit (the UnknownTopicRetries option). All records for a topic
+// that repeatedly cannot be loaded are failed when that limit is hit.
+//
+// This option is different from RequestRetries to allow finer grained control
+// of when to fail when producing records.
+func RecordRetries(n int) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.recordRetries = int64(n) }}
+}
+
+// UnknownTopicRetries sets the number of times a record can fail with
+// UNKNOWN_TOPIC_OR_PARTITION, overriding the default 4.
+//
+// This is a separate limit from RecordRetries because unknown topic or
+// partition errors should only happen if the topic does not exist. It is
+// pointless for the client to continue producing to a topic that does not
+// exist, and if we repeatedly see that the topic does not exist across
+// multiple metadata queries (which are going to different brokers), then we
+// may as well stop trying and fail the records.
+//
+// If this is -1, the client never fails records with this error.
+func UnknownTopicRetries(n int) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.maxUnknownFailures = int64(n) }}
+}
+
+// StopProducerOnDataLossDetected sets the client to stop producing if data
+// loss is detected, overriding the default false.
+//
+// Note that if using this option, it is strongly recommended to not have a
+// retry limit. Doing so may lead to errors where the client fails a batch on a
+// recoverable error, which internally bumps the idempotent sequence number
+// used for producing, which may then later cause an inadvertent out of order
+// sequence number and false "data loss" detection.
+func StopProducerOnDataLossDetected() ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.stopOnDataLoss = true }}
+}
+
+// ProducerOnDataLossDetected sets a function to call if data loss is detected
+// when producing records if the client is configured to continue on data loss.
+// Thus, this option is mutually exclusive with StopProducerOnDataLossDetected.
+//
+// The passed function will be called with the topic and partition that data
+// loss was detected on.
+func ProducerOnDataLossDetected(fn func(string, int32)) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.onDataLoss = fn }}
+}
+
+// ProducerLinger sets how long individual topic partitions will linger waiting
+// for more records before triggering a request to be built.
+//
+// Note that this option should only be used in low volume producers. The only
+// benefit of lingering is to potentially build a larger batch to reduce cpu
+// usage on the brokers if you have many producers all producing small amounts.
+//
+// If a produce request is triggered by any topic partition, all partitions
+// with a possible batch to be sent are used and all lingers are reset.
+//
+// As mentioned, the linger is specific to topic partition. A high volume
+// producer will likely be producing to many partitions; it is both unnecessary
+// to linger in this case and inefficient because the client will have many
+// timers running (and stopping and restarting) unnecessarily.
+func ProducerLinger(linger time.Duration) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.linger = linger }}
+}
+
+// ManualFlushing disables auto-flushing when producing. While you can still
+// set lingering, it would be useless to do so.
+//
+// With manual flushing, producing while MaxBufferedRecords or MaxBufferedBytes
+// have already been produced and not flushed will return ErrMaxBuffered.
+func ManualFlushing() ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.manualFlushing = true }}
+}
+
+// RecordDeliveryTimeout sets a rough time of how long a record can sit around
+// in a batch before timing out, overriding the unlimited default.
+//
+// If idempotency is enabled (as it is by default), this option is only
+// enforced if it is safe to do so without creating invalid sequence numbers.
+// It is safe to enforce if a record was never issued in a request to Kafka, or
+// if it was requested and received a response.
+//
+// The timeout for all records in a batch inherit the timeout of the first
+// record in that batch. That is, once the first record's timeout expires, all
+// records in the batch are expired. This generally is a non-issue unless using
+// this option with lingering. In that case, simply add the linger to the
+// record timeout to avoid problems.
+//
+// If a record times out, all records buffered in the same partition are failed
+// as well. This ensures gapless ordering: the client will not fail one record
+// only to produce a later one successfully. This also allows for easier
+// sequence number ordering internally.
+//
+// The timeout is only evaluated evaluated before writing a request or after a
+// produce response. Thus, a sink backoff may delay record timeout slightly.
+//
+// This option is roughly equivalent to delivery.timeout.ms.
+func RecordDeliveryTimeout(timeout time.Duration) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.recordTimeout = timeout }}
+}
+
+// TransactionalID sets a transactional ID for the client, ensuring that
+// records are produced transactionally under this ID (exactly once semantics).
+//
+// For Kafka-to-Kafka transactions, the transactional ID is only one half of
+// the equation. You must also assign a group to consume from.
+//
+// To produce transactionally, you first BeginTransaction, then produce records
+// consumed from a group, then you EndTransaction. All records produced outside
+// of a transaction will fail immediately with an error.
+//
+// After producing a batch, you must commit what you consumed. Auto committing
+// offsets is disabled during transactional consuming / producing.
+//
+// Note that unless using Kafka 2.5, a consumer group rebalance may be
+// problematic. Production should finish and be committed before the client
+// rejoins the group. It may be safer to use an eager group balancer and just
+// abort the transaction. Alternatively, any time a partition is revoked, you
+// could abort the transaction and reset offsets being consumed.
+//
+// If the client detects an unrecoverable error, all records produced
+// thereafter will fail.
+//
+// Lastly, the default read level is READ_UNCOMMITTED. Be sure to use the
+// ReadIsolationLevel option if you want to only read committed.
+func TransactionalID(id string) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.txnID = &id }}
+}
+
+// TransactionTimeout sets the allowed for a transaction, overriding the
+// default 40s. It is a good idea to keep this less than a group's session
+// timeout, so that a group member will always be alive for the duration of a
+// transaction even if connectivity dies. This helps prevent a transaction
+// finishing after a rebalance, which is problematic pre-Kafka 2.5. If you
+// are on Kafka 2.5+, then you can use the RequireStableFetchOffsets option
+// when assigning the group, and you can set this to whatever you would like.
+//
+// Transaction timeouts begin when the first record is produced within a
+// transaction, not when a transaction begins.
+func TransactionTimeout(timeout time.Duration) ProducerOpt {
+ return producerOpt{func(cfg *cfg) { cfg.txnTimeout = timeout }}
+}
+
+////////////////////////////
+// CONSUMER CONFIGURATION //
+////////////////////////////
+
+// FetchMaxWait sets the maximum amount of time a broker will wait for a
+// fetch response to hit the minimum number of required bytes before returning,
+// overriding the default 5s.
+//
+// This corresponds to the Java fetch.max.wait.ms setting.
+func FetchMaxWait(wait time.Duration) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.maxWait = int32(wait.Milliseconds()) }}
+}
+
+// FetchMaxBytes sets the maximum amount of bytes a broker will try to send
+// during a fetch, overriding the default 50MiB. Note that brokers may not obey
+// this limit if it has records larger than this limit. Also note that this
+// client sends a fetch to each broker concurrently, meaning the client will
+// buffer up to worth of memory.
+//
+// This corresponds to the Java fetch.max.bytes setting.
+//
+// If bumping this, consider bumping BrokerMaxReadBytes.
+//
+// If what you are consuming is compressed, and compressed well, it is strongly
+// recommended to set this option so that decompression does not eat all of
+// your RAM.
+func FetchMaxBytes(b int32) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.maxBytes = lazyI32(b) }}
+}
+
+// FetchMinBytes sets the minimum amount of bytes a broker will try to send
+// during a fetch, overriding the default 1 byte.
+//
+// With the default of 1, data is sent as soon as it is available. By bumping
+// this, the broker will try to wait for more data, which may improve server
+// throughput at the expense of added latency.
+//
+// This corresponds to the Java fetch.min.bytes setting.
+func FetchMinBytes(b int32) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.minBytes = b }}
+}
+
+// FetchMaxPartitionBytes sets the maximum amount of bytes that will be
+// consumed for a single partition in a fetch request, overriding the default
+// 1MiB. Note that if a single batch is larger than this number, that batch
+// will still be returned so the client can make progress.
+//
+// This corresponds to the Java max.partition.fetch.bytes setting.
+func FetchMaxPartitionBytes(b int32) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.maxPartBytes = lazyI32(b) }}
+}
+
+// MaxConcurrentFetches sets the maximum number of fetch requests to allow in
+// flight or buffered at once, overriding the unbounded (i.e. number of
+// brokers) default.
+//
+// This setting, paired with FetchMaxBytes, can upper bound the maximum amount
+// of memory that the client can use for consuming.
+//
+// Requests are issued to brokers in a FIFO order: once the client is ready to
+// issue a request to a broker, it registers that request and issues it in
+// order with other registrations.
+//
+// If Kafka replies with any data, the client does not track the fetch as
+// completed until the user has polled the buffered fetch. Thus, a concurrent
+// fetch is not considered complete until all data from it is done being
+// processed and out of the client itself.
+//
+// Note that brokers are allowed to hang for up to FetchMaxWait before replying
+// to a request, so if this option is too constrained and you are consuming a
+// low throughput topic, the client may take a long time before requesting a
+// broker that has new data. For high throughput topics, or if the allowed
+// concurrent fetches is large enough, this should not be a concern.
+//
+// A value of 0 implies the allowed concurrency is unbounded and will be
+// limited only by the number of brokers in the cluster.
+func MaxConcurrentFetches(n int) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.maxConcurrentFetches = n }}
+}
+
+// ConsumeResetOffset sets the offset to start consuming from, or if
+// OffsetOutOfRange is seen while fetching, to restart consuming from. The
+// default is NewOffset().AtStart(), i.e., the earliest offset.
+//
+// For direct consumers, this is the offset that partitions begin to consume
+// from. For group consumers, this is the offset that partitions begin to
+// consume from if a partition has no commits. If partitions have commits, the
+// commit offset is used. While fetching, if OffsetOutOfRange is encountered,
+// the partition resets to ConsumeResetOffset. Using [NoResetOffset] stops
+// consuming a partition if the client encounters OffsetOutOfRange. Using
+// [Offset.AtCommitted] prevents consuming a partition in a group if the
+// partition has no prior commits.
+//
+// If you use an exact offset or relative offsets and the offset ends up out of
+// range, the client chooses the nearest of either the log start offset or the
+// high watermark: using At(3) when the partition starts at 8 results in the
+// partition being consumed from offset 8.
+//
+// In short form, the following determines the offset for when a partition is
+// seen for the first time, or reset while fetching:
+//
+// reset at start? => log start offset
+// reset at end? => high watermark
+// reset at exact? => this exact offset (3 means offset 3)
+// reset relative? => the above, + / - the relative amount
+// reset exact or relative out of bounds? => nearest boundary (start or end)
+// reset after millisec? => high watermark, or first offset after millisec if one exists
+//
+// To match Kafka's auto.offset.reset,
+//
+// NewOffset().AtStart() == auto.offset.reset "earliest"
+// NewOffset().AtEnd() == auto.offset.reset "latest"
+// NewOffset().AtCommitted() == auto.offset.reset "none"
+//
+// With the above, make sure to use NoResetOffset() if you want to stop
+// consuming when you encounter OffsetOutOfRange. It is highly recommended
+// to read the docs for all Offset methods to see a few other alternatives.
+func ConsumeResetOffset(offset Offset) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.resetOffset = offset }}
+}
+
+// Rack specifies where the client is physically located and changes fetch
+// requests to consume from the closest replica as opposed to the leader
+// replica.
+//
+// Consuming from a preferred replica can increase latency but can decrease
+// cross datacenter costs. See KIP-392 for more information.
+func Rack(rack string) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.rack = rack }}
+}
+
+// IsolationLevel controls whether uncommitted or only committed records are
+// returned from fetch requests.
+type IsolationLevel struct {
+ level int8
+}
+
+// ReadUncommitted (the default) is an isolation level that returns the latest
+// produced records, be they committed or not.
+func ReadUncommitted() IsolationLevel { return IsolationLevel{0} }
+
+// ReadCommitted is an isolation level to only fetch committed records.
+func ReadCommitted() IsolationLevel { return IsolationLevel{1} }
+
+// FetchIsolationLevel sets the "isolation level" used for fetching
+// records, overriding the default ReadUncommitted.
+func FetchIsolationLevel(level IsolationLevel) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.isolationLevel = level.level }}
+}
+
+// KeepControlRecords sets the client to keep control messages and return
+// them with fetches, overriding the default that discards them.
+//
+// Generally, control messages are not useful.
+func KeepControlRecords() ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.keepControl = true }}
+}
+
+// ConsumeTopics adds topics to use for consuming.
+//
+// By default, consuming will start at the beginning of partitions. To change
+// this, use the ConsumeResetOffset option.
+func ConsumeTopics(topics ...string) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) {
+ cfg.topics = make(map[string]*regexp.Regexp, len(topics))
+ for _, topic := range topics {
+ cfg.topics[topic] = nil
+ }
+ }}
+}
+
+// ConsumePartitions sets partitions to consume from directly and the offsets
+// to start consuming those partitions from.
+//
+// This option is basically a way to explicitly consume from subsets of
+// partitions in topics, or to consume at exact offsets. Offsets from this
+// option have higher precedence than the ConsumeResetOffset.
+//
+// This option is not compatible with group consuming and regex consuming. If
+// you want to assign partitions directly, but still use Kafka to commit
+// offsets, check out the kadm package's FetchOffsets and CommitOffsets
+// methods. These will allow you to commit as a group outside the context of a
+// Kafka group.
+func ConsumePartitions(partitions map[string]map[int32]Offset) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.partitions = partitions }}
+}
+
+// ConsumeRegex sets the client to parse all topics passed to ConsumeTopics as
+// regular expressions.
+//
+// When consuming via regex, every metadata request loads *all* topics, so that
+// all topics can be passed to any regular expressions. Every topic is
+// evaluated only once ever across all regular expressions; either it
+// permanently is known to match, or is permanently known to not match.
+func ConsumeRegex() ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.regex = true }}
+}
+
+// DisableFetchSessions sets the client to not use fetch sessions (Kafka 1.0+).
+//
+// A "fetch session" is is a way to reduce bandwidth for fetch requests &
+// responses, and to potentially reduce the amount of work that brokers have to
+// do to handle fetch requests. A fetch session opts into the broker tracking
+// some state of what the client is interested in. For example, say that you
+// are interested in thousands of topics, and most of these topics are
+// receiving data only rarely. A fetch session allows the client to register
+// that it is interested in those thousands of topics on the first request. On
+// future requests, if the offsets for these topics have not changed, those
+// topics will be elided from the request. The broker knows to reply with the
+// extra topics if any new data is available, otherwise the topics are also
+// elided from the response. This massively reduces the amount of information
+// that needs to be included in requests or responses.
+//
+// Using fetch sessions means more state is stored on brokers. Maintaining this
+// state eats some memory. If you have thousands of consumers, you may not want
+// fetch sessions to be used for everything. Brokers intelligently handle this
+// by not creating sessions if they are at their configured limit, but you may
+// consider disabling sessions if they are generally not useful to you. Brokers
+// have metrics for the number of fetch sessions active, so you can monitor
+// that to determine whether enabling or disabling sessions is beneficial or
+// not.
+//
+// For more details on fetch sessions, see KIP-227.
+func DisableFetchSessions() ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.disableFetchSessions = true }}
+}
+
+// ConsumePreferringLagFn allows you to re-order partitions before they are
+// fetched, given each partition's current lag.
+//
+// By default, the client rotates partitions fetched by one after every fetch
+// request. Kafka answers fetch requests in the order that partitions are
+// requested, filling the fetch response until FetchMaxBytes and
+// FetchMaxPartitionBytes are hit. All partitions eventually rotate to the
+// front, ensuring no partition is starved.
+//
+// With this option, you can return topic order and per-topic partition
+// ordering. These orders will sort to the front (first by topic, then by
+// partition). Any topic or partitions that you do not return are added to the
+// end, preserving their original ordering.
+//
+// For a simple lag preference that sorts the laggiest topics and partitions
+// first, use `kgo.ConsumePreferringLagFn(kgo.PreferLagAt(50))` (or some other
+// similar lag number).
+func ConsumePreferringLagFn(fn PreferLagFn) ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.preferLagFn = fn }}
+}
+
+// KeepRetryableFetchErrors switches the client to always return any retryable
+// broker error when fetching, rather than stripping them. By default, the
+// client strips retryable errors from fetch responses; these are usually
+// signals that a client needs to update its metadata to learn of where a
+// partition has moved to (from one broker to another), or they are signals
+// that one broker is temporarily unhealthy (broker not available). You can opt
+// into keeping these errors if you want to specifically react to certain
+// events. For example, if you want to react to you yourself deleting a topic,
+// you can watch for either UNKNOWN_TOPIC_OR_PARTITION or UNKNOWN_TOPIC_ID
+// errors being returned in fetches (and ignore the other errors).
+func KeepRetryableFetchErrors() ConsumerOpt {
+ return consumerOpt{func(cfg *cfg) { cfg.keepRetryableFetchErrors = true }}
+}
+
+//////////////////////////////////
+// CONSUMER GROUP CONFIGURATION //
+//////////////////////////////////
+
+// ConsumerGroup sets the consumer group for the client to join and consume in.
+// This option is required if using any other group options.
+//
+// Note that when group consuming, the default is to autocommit every 5s. To be
+// safe, autocommitting only commits what is *previously* polled. If you poll
+// once, nothing will be committed. If you poll again, the first poll is
+// available to be committed. This ensures at-least-once processing, but does
+// mean there is likely some duplicate processing during rebalances. When your
+// client shuts down, you should issue one final synchronous commit before
+// leaving the group (because you will not be polling again, and you are not
+// waiting for an autocommit).
+func ConsumerGroup(group string) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.group = group }}
+}
+
+// Balancers sets the group balancers to use for dividing topic partitions
+// among group members, overriding the current default [cooperative-sticky].
+// This option is equivalent to Kafka's partition.assignment.strategies option.
+//
+// For balancing, Kafka chooses the first protocol that all group members agree
+// to support.
+//
+// Note that if you opt into cooperative-sticky rebalancing, cooperative group
+// balancing is incompatible with eager (classical) rebalancing and requires a
+// careful rollout strategy (see KIP-429).
+func Balancers(balancers ...GroupBalancer) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.balancers = balancers }}
+}
+
+// SessionTimeout sets how long a member in the group can go between
+// heartbeats, overriding the default 45,000ms. If a member does not heartbeat
+// in this timeout, the broker will remove the member from the group and
+// initiate a rebalance.
+//
+// If you are using a GroupTransactSession for EOS, wish to lower this, and are
+// talking to a Kafka cluster pre 2.5, consider lowering the
+// TransactionTimeout. If you do not, you risk a transaction finishing after a
+// group has rebalanced, which could lead to duplicate processing. If you are
+// talking to a Kafka 2.5+ cluster, you can safely use the
+// RequireStableFetchOffsets group option and prevent any problems.
+//
+// This option corresponds to Kafka's session.timeout.ms setting and must be
+// within the broker's group.min.session.timeout.ms and
+// group.max.session.timeout.ms.
+func SessionTimeout(timeout time.Duration) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.sessionTimeout = timeout }}
+}
+
+// RebalanceTimeout sets how long group members are allowed to take when a a
+// rebalance has begun, overriding the default 60,000ms. This timeout is how
+// long all members are allowed to complete work and commit offsets, minus the
+// time it took to detect the rebalance (from a heartbeat).
+//
+// Kafka uses the largest rebalance timeout of all members in the group. If a
+// member does not rejoin within this timeout, Kafka will kick that member from
+// the group.
+//
+// This corresponds to Kafka's rebalance.timeout.ms.
+func RebalanceTimeout(timeout time.Duration) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.rebalanceTimeout = timeout }}
+}
+
+// HeartbeatInterval sets how long a group member goes between heartbeats to
+// Kafka, overriding the default 3,000ms.
+//
+// Kafka uses heartbeats to ensure that a group member's session stays active.
+// This value can be any value lower than the session timeout, but should be no
+// higher than 1/3rd the session timeout.
+//
+// This corresponds to Kafka's heartbeat.interval.ms.
+func HeartbeatInterval(interval time.Duration) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.heartbeatInterval = interval }}
+}
+
+// RequireStableFetchOffsets sets the group consumer to require "stable" fetch
+// offsets before consuming from the group. Proposed in KIP-447 and introduced
+// in Kafka 2.5, stable offsets are important when consuming from partitions
+// that a transactional producer could be committing to.
+//
+// With this option, Kafka will block group consumers from fetching offsets for
+// partitions that are in an active transaction. This option is **strongly**
+// recommended to help prevent duplication problems. See this repo's KIP-447
+// doc to learn more.
+//
+// Because this can block consumption, it is strongly recommended to set
+// transactional timeouts to a small value (10s) rather than the default 60s.
+// Lowering the transactional timeout will reduce the chance that consumers are
+// entirely blocked.
+func RequireStableFetchOffsets() GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.requireStable = true }}
+}
+
+// BlockRebalanceOnPoll switches the client to block rebalances whenever you
+// poll until you explicitly call AllowRebalance. This option also ensures that
+// any OnPartitions{Assigned,Revoked,Lost} callbacks are only called when you
+// allow rebalances; they cannot be called if you have polled and are
+// processing records.
+//
+// By default, a consumer group is managed completely independently of
+// consuming. A rebalance may occur at any moment. If you poll records, and
+// then a rebalance happens, and then you commit, you may be committing to
+// partitions you no longer own. This will result in duplicates. In the worst
+// case, you could rewind commits that a different member has already made
+// (risking duplicates if another rebalance were to happen before that other
+// member commits again).
+//
+// By blocking rebalancing after you poll until you call AllowRebalances, you
+// can be sure that you commit records that your member currently owns.
+// However, the big tradeoff is that by blocking rebalances, you put your group
+// member at risk of waiting so long that the group member is kicked from the
+// group because it exceeded the rebalance timeout. To compare clients, Sarama
+// takes the default choice of blocking rebalancing; this option makes kgo more
+// similar to Sarama.
+//
+// If you use this option, you should ensure that you always process records
+// quickly, and that your OnPartitions{Assigned,Revoked,Lost} callbacks are
+// fast. It is recommended you also use PollRecords rather than PollFetches so
+// that you can bound how many records you process at once. You must always
+// AllowRebalances when you are done processing the records you received. Only
+// rebalances that lose partitions are blocked; rebalances that are strictly
+// net additions or non-modifications do not block (the On callbacks are always
+// blocked so that you can ensure their serialization).
+//
+// This function can largely replace any commit logic you may want to do in
+// OnPartitionsRevoked.
+//
+// Lastly, note that this actually blocks any rebalance from calling
+// OnPartitions{Assigned,Revoked,Lost}. If you are using a cooperative
+// rebalancer such as CooperativeSticky, a rebalance can begin right before you
+// poll, and you will still receive records because no partitions are lost yet.
+// The in-progress rebalance only blocks if you are assigned new partitions or
+// if any of your partitions are revoked.
+func BlockRebalanceOnPoll() GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.blockRebalanceOnPoll = true }}
+}
+
+// AdjustFetchOffsetsFn sets the function to be called when a group is joined
+// after offsets are fetched so that a user can adjust offsets before
+// consumption begins.
+//
+// This function should not exceed the rebalance interval. It is possible
+// for the group, immediately after finishing a balance, to re-enter a new balancing
+// session. This function is passed a context that is canceled if the current group
+// session finishes (i.e., after revoking).
+//
+// If you are resetting the position of the offset, you may want to clear any existing
+// "epoch" with WithEpoch(-1). If the epoch is non-negative, the client performs
+// data loss detection, which may result in errors and unexpected behavior.
+//
+// This function is called after OnPartitionsAssigned and may be called before
+// or after OnPartitionsRevoked.
+func AdjustFetchOffsetsFn(adjustOffsetsBeforeAssign func(context.Context, map[string]map[int32]Offset) (map[string]map[int32]Offset, error)) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.adjustOffsetsBeforeAssign = adjustOffsetsBeforeAssign }}
+}
+
+// OnPartitionsAssigned sets the function to be called when a group is joined
+// after partitions are assigned before fetches for those partitions begin.
+//
+// This function, combined with OnPartitionsRevoked, should not exceed the
+// rebalance interval. It is possible for the group to re-enter a new balancing
+// session immediately after finishing a balance.
+//
+// This function is passed the client's context, which is only canceled if the
+// client is closed.
+//
+// This function is not called concurrent with any other OnPartitions callback,
+// and this function is given a new map that the user is free to modify.
+//
+// This function can be called at any time you are polling or processing
+// records. If you want to ensure this function is called serially with
+// processing, consider the BlockRebalanceOnPoll option.
+func OnPartitionsAssigned(onAssigned func(context.Context, *Client, map[string][]int32)) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.onAssigned, cfg.setAssigned = onAssigned, true }}
+}
+
+// OnPartitionsRevoked sets the function to be called once this group member
+// has partitions revoked.
+//
+// This function, combined with [OnPartitionsAssigned], should not exceed the
+// rebalance interval. It is possible for the group to re-enter a new balancing
+// session immediately after finishing a balance.
+//
+// If autocommit is enabled, the default OnPartitionsRevoked is a blocking
+// commit of all non-dirty offsets (where "dirty" is the most recent poll).
+//
+// The OnPartitionsRevoked function is passed the client's context, which is
+// only canceled if the client is closed. OnPartitionsRevoked function is
+// called at the end of a group session even if there are no partitions being
+// revoked. If you are committing offsets manually (have disabled
+// autocommitting), it is highly recommended to do a proper blocking commit in
+// OnPartitionsRevoked.
+//
+// This function is not called concurrent with any other OnPartitions callback,
+// and this function is given a new map that the user is free to modify.
+//
+// This function can be called at any time you are polling or processing
+// records. If you want to ensure this function is called serially with
+// processing, consider the BlockRebalanceOnPoll option.
+//
+// This function is called if a "fatal" group error is encountered and you have
+// not set [OnPartitionsLost]. See OnPartitionsLost for more details.
+func OnPartitionsRevoked(onRevoked func(context.Context, *Client, map[string][]int32)) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.onRevoked, cfg.setRevoked = onRevoked, true }}
+}
+
+// OnPartitionsLost sets the function to be called on "fatal" group errors,
+// such as IllegalGeneration, UnknownMemberID, and authentication failures.
+// This function differs from [OnPartitionsRevoked] in that it is unlikely that
+// commits will succeed when partitions are outright lost, whereas commits
+// likely will succeed when revoking partitions.
+//
+// Because this function is called on any fatal group error, it is possible for
+// this function to be called without the group ever being joined.
+//
+// This function is not called concurrent with any other OnPartitions callback,
+// and this function is given a new map that the user is free to modify.
+//
+// This function can be called at any time you are polling or processing
+// records. If you want to ensure this function is called serially with
+// processing, consider the BlockRebalanceOnPoll option.
+func OnPartitionsLost(onLost func(context.Context, *Client, map[string][]int32)) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.onLost, cfg.setLost = onLost, true }}
+}
+
+// OnOffsetsFetched sets a function to be called after offsets have been
+// fetched after a group has been balanced. This function is meant to allow
+// users to inspect offset commit metadata. An error can be returned to exit
+// this group session and exit back to join group.
+//
+// This function should not exceed the rebalance interval. It is possible for
+// the group, immediately after finishing a balance, to re-enter a new
+// balancing session. This function is passed a context that is canceled if the
+// current group session finishes (i.e., after revoking).
+//
+// This function is called after OnPartitionsAssigned and may be called before
+// or after OnPartitionsRevoked.
+func OnOffsetsFetched(onFetched func(context.Context, *Client, *kmsg.OffsetFetchResponse) error) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.onFetched = onFetched }}
+}
+
+// DisableAutoCommit disable auto committing.
+//
+// If you disable autocommitting, you may want to use a custom
+// OnPartitionsRevoked, otherwise you may end up doubly processing records
+// (which is fine, just leads to duplicate processing). Consider the scenario:
+// you, member A, are processing partition 0, and previously committed offset 4
+// and have now locally processed through offset 30. A rebalance happens, and
+// partition 0 moves to member B. If you use OnPartitionsRevoked, you can
+// detect that you are losing this partition and commit your work through
+// offset 30, so that member B can start processing at offset 30. If you do not
+// commit (i.e. you do not use a custom OnPartitionsRevoked), the other member
+// will start processing at offset 4. It may process through offset 50, leading
+// to double processing of offsets 4 through 29. Worse, you, member A, can
+// rewind member B's commit, because member B may commit offset 50 and you may
+// finally eventually commit offset 30. If a rebalance happens, then even more
+// duplicate processing will occur of offsets 30 through 49.
+//
+// Again, OnPartitionsRevoked is not necessary, and not using it just means
+// double processing, which for most workloads is fine since a simple group
+// consumer is not EOS / transactional, only at-least-once. But, this is
+// something to be aware of.
+func DisableAutoCommit() GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.autocommitDisable = true }}
+}
+
+// GreedyAutoCommit opts into committing everything that has been polled when
+// autocommitting (the dirty offsets), rather than committing what has
+// previously been polled. This option may result in message loss if your
+// application crashes.
+func GreedyAutoCommit() GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.autocommitGreedy = true }}
+}
+
+// AutoCommitInterval sets how long to go between autocommits, overriding the
+// default 5s.
+func AutoCommitInterval(interval time.Duration) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.autocommitInterval = interval }}
+}
+
+// AutoCommitMarks switches the autocommitting behavior to only commit "marked"
+// records, which can be done with the MarkCommitRecords method.
+//
+// This option is basically a halfway point between autocommitting and manually
+// committing. If you have slow batch processing of polls, then you can
+// manually mark records to be autocommitted before you poll again. This way,
+// if you usually take a long time between polls, your partial work can still
+// be automatically checkpointed through autocommitting.
+func AutoCommitMarks() GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.autocommitMarks = true }}
+}
+
+// InstanceID sets the group consumer's instance ID, switching the group member
+// from "dynamic" to "static".
+//
+// Prior to Kafka 2.3, joining a group gave a group member a new member ID.
+// The group leader could not tell if this was a rejoining member. Thus, any
+// join caused the group to rebalance.
+//
+// Kafka 2.3 introduced the concept of an instance ID, which can persist across
+// restarts. This allows for avoiding many costly rebalances and allows for
+// stickier rebalancing for rejoining members (since the ID for balancing stays
+// the same). The main downsides are that you, the user of a client, have to
+// manage instance IDs properly, and that it may take longer to rebalance in
+// the event that a client legitimately dies.
+//
+// When using an instance ID, the client does NOT send a leave group request
+// when closing. This allows for the client to restart with the same instance
+// ID and rejoin the group to avoid a rebalance. It is strongly recommended to
+// increase the session timeout enough to allow time for the restart (remember
+// that the default session timeout is 10s).
+//
+// To actually leave the group, you must use an external admin command that
+// issues a leave group request on behalf of this instance ID (see kcl), or you
+// can manually use the kmsg package with a proper LeaveGroupRequest.
+//
+// NOTE: Leaving a group with an instance ID is only supported in Kafka 2.4+.
+//
+// NOTE: If you restart a consumer group leader that is using an instance ID,
+// it will not cause a rebalance even if you change which topics the leader is
+// consuming. If your cluster is 3.2+, this client internally works around this
+// limitation and you do not need to trigger a rebalance manually.
+func InstanceID(id string) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.instanceID = &id }}
+}
+
+// GroupProtocol sets the group's join protocol, overriding the default value
+// "consumer". The only reason to override this is if you are implementing
+// custom join and sync group logic.
+func GroupProtocol(protocol string) GroupOpt {
+ return groupOpt{func(cfg *cfg) { cfg.protocol = protocol }}
+}
+
+// AutoCommitCallback sets the callback to use if autocommitting is enabled.
+// This overrides the default callback that logs errors and continues.
+func AutoCommitCallback(fn func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error)) GroupOpt {
+ return groupOpt{func(cfg *cfg) {
+ if fn != nil {
+ cfg.commitCallback, cfg.setCommitCallback = fn, true
+ }
+ }}
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer.go b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer.go
new file mode 100644
index 0000000000000..01f98da486a65
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer.go
@@ -0,0 +1,2347 @@
+package kgo
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "math"
+ "sort"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// Offset is a message offset in a partition.
+type Offset struct {
+ at int64
+ relative int64
+ epoch int32
+
+ currentEpoch int32 // set by us when mapping offsets to brokers
+
+ noReset bool
+ afterMilli bool
+}
+
+// Random negative, only significant within this package.
+const atCommitted = -999
+
+// MarshalJSON implements json.Marshaler.
+func (o Offset) MarshalJSON() ([]byte, error) {
+ if o.relative == 0 {
+ return []byte(fmt.Sprintf(`{"At":%d,"Epoch":%d,"CurrentEpoch":%d}`, o.at, o.epoch, o.currentEpoch)), nil
+ }
+ return []byte(fmt.Sprintf(`{"At":%d,"Relative":%d,"Epoch":%d,"CurrentEpoch":%d}`, o.at, o.relative, o.epoch, o.currentEpoch)), nil
+}
+
+// String returns the offset as a string; the purpose of this is for logs.
+func (o Offset) String() string {
+ if o.relative == 0 {
+ return fmt.Sprintf("{%d e%d ce%d}", o.at, o.epoch, o.currentEpoch)
+ } else if o.relative > 0 {
+ return fmt.Sprintf("{%d+%d e%d ce%d}", o.at, o.relative, o.epoch, o.currentEpoch)
+ }
+ return fmt.Sprintf("{%d-%d e%d ce%d}", o.at, -o.relative, o.epoch, o.currentEpoch)
+}
+
+// EpochOffset returns this offset as an EpochOffset, allowing visibility into
+// what this offset actually currently is.
+func (o Offset) EpochOffset() EpochOffset {
+ return EpochOffset{
+ Epoch: o.epoch,
+ Offset: o.at,
+ }
+}
+
+// NewOffset creates and returns an offset to use in [ConsumePartitions] or
+// [ConsumeResetOffset].
+//
+// The default offset begins at the end.
+func NewOffset() Offset {
+ return Offset{
+ at: -1,
+ epoch: -1,
+ }
+}
+
+// NoResetOffset returns an offset that can be used as a "none" option for the
+// [ConsumeResetOffset] option. By default, NoResetOffset starts consuming from
+// the beginning of partitions (similar to NewOffset().AtStart()). This can be
+// changed with AtEnd, Relative, etc.
+//
+// Using this offset will make it such that if OffsetOutOfRange is ever
+// encountered while consuming, rather than trying to recover, the client will
+// return the error to the user and enter a fatal state (for the affected
+// partition).
+func NoResetOffset() Offset {
+ return Offset{
+ at: -1,
+ epoch: -1,
+ noReset: true,
+ }
+}
+
+// AfterMilli returns an offset that consumes from the first offset after a
+// given timestamp. This option is *not* compatible with any At options (nor
+// Relative nor WithEpoch); using any of those will clear the special
+// millisecond state.
+//
+// This option can be used to consume at the end of existing partitions, but at
+// the start of any new partitions that are created later:
+//
+// AfterMilli(time.Now().UnixMilli())
+//
+// By default when using this offset, if consuming encounters an
+// OffsetOutOfRange error, consuming will reset to the first offset after this
+// timestamp. You can use NoResetOffset().AfterMilli(...) to instead switch the
+// client to a fatal state (for the affected partition).
+func (o Offset) AfterMilli(millisec int64) Offset {
+ o.at = millisec
+ o.relative = 0
+ o.epoch = -1
+ o.afterMilli = true
+ return o
+}
+
+// AtStart copies 'o' and returns an offset starting at the beginning of a
+// partition.
+func (o Offset) AtStart() Offset {
+ o.afterMilli = false
+ o.at = -2
+ return o
+}
+
+// AtEnd copies 'o' and returns an offset starting at the end of a partition.
+// If you want to consume at the end of the topic as it exists right now, but
+// at the beginning of new partitions as they are added to the topic later,
+// check out AfterMilli.
+func (o Offset) AtEnd() Offset {
+ o.afterMilli = false
+ o.at = -1
+ return o
+}
+
+// AtCommitted copies 'o' and returns an offset that is used *only if*
+// there is an existing commit. This is only useful for group consumers.
+// If a partition being consumed does not have a commit, the partition will
+// enter a fatal state and return an error from PollFetches.
+//
+// Using this function automatically opts into [NoResetOffset].
+func (o Offset) AtCommitted() Offset {
+ o.noReset = true
+ o.afterMilli = false
+ o.at = atCommitted
+ return o
+}
+
+// Relative copies 'o' and returns an offset that starts 'n' relative to what
+// 'o' currently is. If 'o' is at the end (from [AtEnd]), Relative(-100) will
+// begin 100 before the end.
+func (o Offset) Relative(n int64) Offset {
+ o.afterMilli = false
+ o.relative = n
+ return o
+}
+
+// WithEpoch copies 'o' and returns an offset with the given epoch. to use the
+// given epoch. This epoch is used for truncation detection; the default of -1
+// implies no truncation detection.
+func (o Offset) WithEpoch(e int32) Offset {
+ o.afterMilli = false
+ if e < 0 {
+ e = -1
+ }
+ o.epoch = e
+ return o
+}
+
+// At returns a copy of the calling offset, changing the returned offset to
+// begin at exactly the requested offset.
+//
+// There are two potential special offsets to use: -2 allows for consuming at
+// the start, and -1 allows for consuming at the end. These two offsets are
+// equivalent to calling AtStart or AtEnd.
+//
+// If the offset is less than -2, the client bounds it to -2 to consume at the
+// start.
+func (o Offset) At(at int64) Offset {
+ o.afterMilli = false
+ if at < -2 {
+ at = -2
+ }
+ o.at = at
+ return o
+}
+
+type consumer struct {
+ bufferedRecords atomicI64
+ bufferedBytes atomicI64
+
+ cl *Client
+
+ pausedMu sync.Mutex // grabbed when updating paused
+ paused atomic.Value // loaded when issuing fetches
+
+ // mu is grabbed when
+ // - polling fetches, for quickly draining sources / updating group uncommitted
+ // - calling assignPartitions (group / direct updates)
+ mu sync.Mutex
+ d *directConsumer // if non-nil, we are consuming partitions directly
+ g *groupConsumer // if non-nil, we are consuming as a group member
+
+ // On metadata update, if the consumer is set (direct or group), the
+ // client begins a goroutine that updates the consumer kind's
+ // assignments.
+ //
+ // This is done in a goroutine to not block the metadata loop, because
+ // the update **could** wait on a group consumer leaving if a
+ // concurrent LeaveGroup is called, or if restarting a session takes
+ // just a little bit of time.
+ //
+ // The update realistically should be instantaneous, but if it is slow,
+ // some metadata updates could pile up. We loop with our atomic work
+ // loop, which collapses repeated updates into one extra update, so we
+ // loop as little as necessary.
+ outstandingMetadataUpdates workLoop
+
+ // sessionChangeMu is grabbed when a session is stopped and held through
+ // when a session can be started again. The sole purpose is to block an
+ // assignment change running concurrently with a metadata update.
+ sessionChangeMu sync.Mutex
+
+ session atomic.Value // *consumerSession
+ kill atomic.Bool
+
+ usingCursors usedCursors
+
+ sourcesReadyMu sync.Mutex
+ sourcesReadyCond *sync.Cond
+ sourcesReadyForDraining []*source
+ fakeReadyForDraining []Fetch
+
+ pollWaitMu sync.Mutex
+ pollWaitC *sync.Cond
+ pollWaitState uint64 // 0 == nothing, low 32 bits: # pollers, high 32: # waiting rebalances
+}
+
+func (c *consumer) loadPaused() pausedTopics { return c.paused.Load().(pausedTopics) }
+func (c *consumer) clonePaused() pausedTopics { return c.paused.Load().(pausedTopics).clone() }
+func (c *consumer) storePaused(p pausedTopics) { c.paused.Store(p) }
+
+func (c *consumer) waitAndAddPoller() {
+ if !c.cl.cfg.blockRebalanceOnPoll {
+ return
+ }
+ c.pollWaitMu.Lock()
+ defer c.pollWaitMu.Unlock()
+ for c.pollWaitState>>32 != 0 {
+ c.pollWaitC.Wait()
+ }
+ // Rebalance always takes priority, but if there are no active
+ // rebalances, our poll blocks rebalances.
+ c.pollWaitState++
+}
+
+func (c *consumer) unaddPoller() {
+ if !c.cl.cfg.blockRebalanceOnPoll {
+ return
+ }
+ c.pollWaitMu.Lock()
+ defer c.pollWaitMu.Unlock()
+ c.pollWaitState--
+ c.pollWaitC.Broadcast()
+}
+
+func (c *consumer) allowRebalance() {
+ if !c.cl.cfg.blockRebalanceOnPoll {
+ return
+ }
+ c.pollWaitMu.Lock()
+ defer c.pollWaitMu.Unlock()
+ // When allowing rebalances, the user is explicitly saying all pollers
+ // are done. We mask them out.
+ c.pollWaitState &= math.MaxUint32 << 32
+ c.pollWaitC.Broadcast()
+}
+
+func (c *consumer) waitAndAddRebalance() {
+ if !c.cl.cfg.blockRebalanceOnPoll {
+ return
+ }
+ c.pollWaitMu.Lock()
+ defer c.pollWaitMu.Unlock()
+ c.pollWaitState += 1 << 32
+ for c.pollWaitState&math.MaxUint32 != 0 {
+ c.pollWaitC.Wait()
+ }
+}
+
+func (c *consumer) unaddRebalance() {
+ if !c.cl.cfg.blockRebalanceOnPoll {
+ return
+ }
+ c.pollWaitMu.Lock()
+ defer c.pollWaitMu.Unlock()
+ c.pollWaitState -= 1 << 32
+ c.pollWaitC.Broadcast()
+}
+
+// BufferedFetchRecords returns the number of records currently buffered from
+// fetching within the client.
+//
+// This can be used as a gauge to determine how behind your application is for
+// processing records the client has fetched. Note that it is perfectly normal
+// to see a spike of buffered records, which would correspond to a fetch
+// response being processed just before a call to this function. It is only
+// problematic if for you if this function is consistently returning large
+// values.
+func (cl *Client) BufferedFetchRecords() int64 {
+ return cl.consumer.bufferedRecords.Load()
+}
+
+// BufferedFetchBytes returns the number of bytes currently buffered from
+// fetching within the client. This is the sum of all keys, values, and header
+// keys/values. See the related [BufferedFetchRecords] for more information.
+func (cl *Client) BufferedFetchBytes() int64 {
+ return cl.consumer.bufferedBytes.Load()
+}
+
+type usedCursors map[*cursor]struct{}
+
+func (u *usedCursors) use(c *cursor) {
+ if *u == nil {
+ *u = make(map[*cursor]struct{})
+ }
+ (*u)[c] = struct{}{}
+}
+
+func (c *consumer) init(cl *Client) {
+ c.cl = cl
+ c.paused.Store(make(pausedTopics))
+ c.sourcesReadyCond = sync.NewCond(&c.sourcesReadyMu)
+ c.pollWaitC = sync.NewCond(&c.pollWaitMu)
+
+ if len(cl.cfg.topics) > 0 || len(cl.cfg.partitions) > 0 {
+ defer cl.triggerUpdateMetadataNow("querying metadata for consumer initialization") // we definitely want to trigger a metadata update
+ }
+
+ if len(cl.cfg.group) == 0 {
+ c.initDirect()
+ } else {
+ c.initGroup()
+ }
+}
+
+func (c *consumer) consuming() bool {
+ return c.g != nil || c.d != nil
+}
+
+// addSourceReadyForDraining tracks that a source needs its buffered fetch
+// consumed.
+func (c *consumer) addSourceReadyForDraining(source *source) {
+ c.sourcesReadyMu.Lock()
+ c.sourcesReadyForDraining = append(c.sourcesReadyForDraining, source)
+ c.sourcesReadyMu.Unlock()
+ c.sourcesReadyCond.Broadcast()
+}
+
+// addFakeReadyForDraining saves a fake fetch that has important partition
+// errors--data loss or auth failures.
+func (c *consumer) addFakeReadyForDraining(topic string, partition int32, err error, why string) {
+ c.cl.cfg.logger.Log(LogLevelInfo, "injecting fake fetch with an error", "err", err, "why", why)
+ c.sourcesReadyMu.Lock()
+ c.fakeReadyForDraining = append(c.fakeReadyForDraining, Fetch{Topics: []FetchTopic{{
+ Topic: topic,
+ Partitions: []FetchPartition{{
+ Partition: partition,
+ Err: err,
+ }},
+ }}})
+ c.sourcesReadyMu.Unlock()
+ c.sourcesReadyCond.Broadcast()
+}
+
+// NewErrFetch returns a fake fetch containing a single empty topic with a
+// single zero partition with the given error.
+func NewErrFetch(err error) Fetches {
+ return []Fetch{{
+ Topics: []FetchTopic{{
+ Topic: "",
+ Partitions: []FetchPartition{{
+ Partition: -1,
+ Err: err,
+ }},
+ }},
+ }}
+}
+
+// PollFetches waits for fetches to be available, returning as soon as any
+// broker returns a fetch. If the context is nil, this function will return
+// immediately with any currently buffered records.
+//
+// If the client is closed, a fake fetch will be injected that has no topic, a
+// partition of 0, and a partition error of ErrClientClosed. If the context is
+// canceled, a fake fetch will be injected with ctx.Err. These injected errors
+// can be used to break out of a poll loop.
+//
+// It is important to check all partition errors in the returned fetches. If
+// any partition has a fatal error and actually had no records, fake fetch will
+// be injected with the error.
+//
+// If you are group consuming, a rebalance can happen under the hood while you
+// process the returned fetches. This can result in duplicate work, and you may
+// accidentally commit to partitions that you no longer own. You can prevent
+// this by using BlockRebalanceOnPoll, but this comes with different tradeoffs.
+// See the documentation on BlockRebalanceOnPoll for more information.
+func (cl *Client) PollFetches(ctx context.Context) Fetches {
+ return cl.PollRecords(ctx, 0)
+}
+
+// PollRecords waits for records to be available, returning as soon as any
+// broker returns records in a fetch. If the context is nil, this function will
+// return immediately with any currently buffered records.
+//
+// If the client is closed, a fake fetch will be injected that has no topic, a
+// partition of -1, and a partition error of ErrClientClosed. If the context is
+// canceled, a fake fetch will be injected with ctx.Err. These injected errors
+// can be used to break out of a poll loop.
+//
+// This returns a maximum of maxPollRecords total across all fetches, or
+// returns all buffered records if maxPollRecords is <= 0.
+//
+// It is important to check all partition errors in the returned fetches. If
+// any partition has a fatal error and actually had no records, fake fetch will
+// be injected with the error.
+//
+// If you are group consuming, a rebalance can happen under the hood while you
+// process the returned fetches. This can result in duplicate work, and you may
+// accidentally commit to partitions that you no longer own. You can prevent
+// this by using BlockRebalanceOnPoll, but this comes with different tradeoffs.
+// See the documentation on BlockRebalanceOnPoll for more information.
+func (cl *Client) PollRecords(ctx context.Context, maxPollRecords int) Fetches {
+ if maxPollRecords == 0 {
+ maxPollRecords = -1
+ }
+ c := &cl.consumer
+
+ c.g.undirtyUncommitted()
+
+ // If the user gave us a canceled context, we bail immediately after
+ // un-dirty-ing marked records.
+ if ctx != nil {
+ select {
+ case <-ctx.Done():
+ return NewErrFetch(ctx.Err())
+ default:
+ }
+ }
+
+ var fetches Fetches
+ fill := func() {
+ if c.cl.cfg.blockRebalanceOnPoll {
+ c.waitAndAddPoller()
+ defer func() {
+ if len(fetches) == 0 {
+ c.unaddPoller()
+ }
+ }()
+ }
+
+ paused := c.loadPaused()
+
+ // A group can grab the consumer lock then the group mu and
+ // assign partitions. The group mu is grabbed to update its
+ // uncommitted map. Assigning partitions clears sources ready
+ // for draining.
+ //
+ // We need to grab the consumer mu to ensure proper lock
+ // ordering and prevent lock inversion. Polling fetches also
+ // updates the group's uncommitted map; if we do not grab the
+ // consumer mu at the top, we have a problem: without the lock,
+ // we could have grabbed some sources, then a group assigned,
+ // and after the assign, we update uncommitted with fetches
+ // from the old assignment
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ c.sourcesReadyMu.Lock()
+ if maxPollRecords < 0 {
+ for _, ready := range c.sourcesReadyForDraining {
+ fetches = append(fetches, ready.takeBuffered(paused))
+ }
+ c.sourcesReadyForDraining = nil
+ } else {
+ for len(c.sourcesReadyForDraining) > 0 && maxPollRecords > 0 {
+ source := c.sourcesReadyForDraining[0]
+ fetch, taken, drained := source.takeNBuffered(paused, maxPollRecords)
+ if drained {
+ c.sourcesReadyForDraining = c.sourcesReadyForDraining[1:]
+ }
+ maxPollRecords -= taken
+ fetches = append(fetches, fetch)
+ }
+ }
+
+ realFetches := fetches
+
+ fetches = append(fetches, c.fakeReadyForDraining...)
+ c.fakeReadyForDraining = nil
+
+ c.sourcesReadyMu.Unlock()
+
+ if len(realFetches) == 0 {
+ return
+ }
+
+ // Before returning, we want to update our uncommitted. If we
+ // updated after, then we could end up with weird interactions
+ // with group invalidations where we return a stale fetch after
+ // committing in onRevoke.
+ //
+ // A blocking onRevoke commit, on finish, allows a new group
+ // session to start. If we returned stale fetches that did not
+ // have their uncommitted offset tracked, then we would allow
+ // duplicates.
+ if c.g != nil {
+ c.g.updateUncommitted(realFetches)
+ }
+ }
+
+ // We try filling fetches once before waiting. If we have no context,
+ // we guarantee that we just drain anything available and return.
+ fill()
+ if len(fetches) > 0 || ctx == nil {
+ return fetches
+ }
+
+ done := make(chan struct{})
+ quit := false
+ go func() {
+ c.sourcesReadyMu.Lock()
+ defer c.sourcesReadyMu.Unlock()
+ defer close(done)
+
+ for !quit && len(c.sourcesReadyForDraining) == 0 && len(c.fakeReadyForDraining) == 0 {
+ c.sourcesReadyCond.Wait()
+ }
+ }()
+
+ exit := func() {
+ c.sourcesReadyMu.Lock()
+ quit = true
+ c.sourcesReadyMu.Unlock()
+ c.sourcesReadyCond.Broadcast()
+ }
+
+ select {
+ case <-cl.ctx.Done():
+ exit()
+ return NewErrFetch(ErrClientClosed)
+ case <-ctx.Done():
+ exit()
+ return NewErrFetch(ctx.Err())
+ case <-done:
+ }
+
+ fill()
+ return fetches
+}
+
+// AllowRebalance allows a consumer group to rebalance if it was blocked by you
+// polling records in tandem with the BlockRebalanceOnPoll option.
+//
+// You can poll many times before calling this function; this function
+// internally resets the poll count and allows any blocked rebalances to
+// continue. Rebalances take priority: if a rebalance is blocked, and you allow
+// rebalances and then immediately poll, your poll will be blocked until the
+// rebalance completes. Internally, this function simply waits for lost
+// partitions to stop being fetched before allowing you to poll again.
+func (cl *Client) AllowRebalance() {
+ cl.consumer.allowRebalance()
+}
+
+// UpdateFetchMaxBytes updates the max bytes that a fetch request will ask for
+// and the max partition bytes that a fetch request will ask for each
+// partition.
+func (cl *Client) UpdateFetchMaxBytes(maxBytes, maxPartBytes int32) {
+ cl.cfg.maxBytes.store(maxBytes)
+ cl.cfg.maxPartBytes.store(maxPartBytes)
+}
+
+// PauseFetchTopics sets the client to no longer fetch the given topics and
+// returns all currently paused topics. Paused topics persist until resumed.
+// You can call this function with no topics to simply receive the list of
+// currently paused topics.
+//
+// Pausing topics is independent from pausing individual partitions with the
+// PauseFetchPartitions method. If you pause partitions for a topic with
+// PauseFetchPartitions, and then pause that same topic with PauseFetchTopics,
+// the individually paused partitions will not be unpaused if you only call
+// ResumeFetchTopics.
+func (cl *Client) PauseFetchTopics(topics ...string) []string {
+ c := &cl.consumer
+ if len(topics) == 0 {
+ return c.loadPaused().pausedTopics()
+ }
+ c.pausedMu.Lock()
+ defer c.pausedMu.Unlock()
+ paused := c.clonePaused()
+ paused.addTopics(topics...)
+ c.storePaused(paused)
+ return paused.pausedTopics()
+}
+
+// PauseFetchPartitions sets the client to no longer fetch the given partitions
+// and returns all currently paused partitions. Paused partitions persist until
+// resumed. You can call this function with no partitions to simply receive the
+// list of currently paused partitions.
+//
+// Pausing individual partitions is independent from pausing topics with the
+// PauseFetchTopics method. If you pause partitions for a topic with
+// PauseFetchPartitions, and then pause that same topic with PauseFetchTopics,
+// the individually paused partitions will not be unpaused if you only call
+// ResumeFetchTopics.
+func (cl *Client) PauseFetchPartitions(topicPartitions map[string][]int32) map[string][]int32 {
+ c := &cl.consumer
+ if len(topicPartitions) == 0 {
+ return c.loadPaused().pausedPartitions()
+ }
+ c.pausedMu.Lock()
+ defer c.pausedMu.Unlock()
+ paused := c.clonePaused()
+ paused.addPartitions(topicPartitions)
+ c.storePaused(paused)
+ return paused.pausedPartitions()
+}
+
+// ResumeFetchTopics resumes fetching the input topics if they were previously
+// paused. Resuming topics that are not currently paused is a per-topic no-op.
+// See the documentation on PauseTfetchTopics for more details.
+func (cl *Client) ResumeFetchTopics(topics ...string) {
+ defer cl.allSinksAndSources(func(sns sinkAndSource) {
+ sns.source.maybeConsume()
+ })
+
+ c := &cl.consumer
+ c.pausedMu.Lock()
+ defer c.pausedMu.Unlock()
+
+ paused := c.clonePaused()
+ paused.delTopics(topics...)
+ c.storePaused(paused)
+}
+
+// ResumeFetchPartitions resumes fetching the input partitions if they were
+// previously paused. Resuming partitions that are not currently paused is a
+// per-topic no-op. See the documentation on PauseFetchPartitions for more
+// details.
+func (cl *Client) ResumeFetchPartitions(topicPartitions map[string][]int32) {
+ defer cl.allSinksAndSources(func(sns sinkAndSource) {
+ sns.source.maybeConsume()
+ })
+
+ c := &cl.consumer
+ c.pausedMu.Lock()
+ defer c.pausedMu.Unlock()
+
+ paused := c.clonePaused()
+ paused.delPartitions(topicPartitions)
+ c.storePaused(paused)
+}
+
+// SetOffsets sets any matching offsets in setOffsets to the given
+// epoch/offset. Partitions that are not specified are not set. It is invalid
+// to set topics that were not yet returned from a PollFetches: this function
+// sets only partitions that were previously consumed, any extra partitions are
+// skipped.
+//
+// If directly consuming, this function operates as expected given the caveats
+// of the prior paragraph.
+//
+// If using transactions, it is advised to just use a GroupTransactSession and
+// avoid this function entirely.
+//
+// If using group consuming, It is strongly recommended to use this function
+// outside of the context of a PollFetches loop and only when you know the
+// group is not revoked (i.e., block any concurrent revoke while issuing this
+// call) and to not use this concurrent with committing. Any other usage is
+// prone to odd interactions.
+func (cl *Client) SetOffsets(setOffsets map[string]map[int32]EpochOffset) {
+ cl.setOffsets(setOffsets, true)
+}
+
+func (cl *Client) setOffsets(setOffsets map[string]map[int32]EpochOffset, log bool) {
+ if len(setOffsets) == 0 {
+ return
+ }
+
+ // We assignPartitions before returning, so we grab the consumer lock
+ // first to preserve consumer mu => group mu ordering, or to ensure
+ // no concurrent metadata assign for direct consuming.
+ c := &cl.consumer
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ var assigns map[string]map[int32]Offset
+ var tps *topicsPartitions
+ switch {
+ case c.d != nil:
+ assigns = c.d.getSetAssigns(setOffsets)
+ tps = c.d.tps
+ case c.g != nil:
+ assigns = c.g.getSetAssigns(setOffsets)
+ tps = c.g.tps
+ }
+ if len(assigns) == 0 {
+ return
+ }
+ if log {
+ c.assignPartitions(assigns, assignSetMatching, tps, "from manual SetOffsets")
+ } else {
+ c.assignPartitions(assigns, assignSetMatching, tps, "")
+ }
+}
+
+// This is guaranteed to be called in a blocking metadata fn, which ensures
+// that metadata does not load the tps we are changing. Basically, we ensure
+// everything w.r.t. consuming is at a stand still.
+func (c *consumer) purgeTopics(topics []string) {
+ if c.g == nil && c.d == nil {
+ return
+ }
+
+ purgeAssignments := make(map[string]map[int32]Offset, len(topics))
+ for _, topic := range topics {
+ purgeAssignments[topic] = nil
+ }
+
+ c.waitAndAddRebalance()
+ defer c.unaddRebalance()
+
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ // The difference for groups is we need to lock the group and there is
+ // a slight type difference in g.using vs d.using.
+ if c.g != nil {
+ c.g.mu.Lock()
+ defer c.g.mu.Unlock()
+ c.assignPartitions(purgeAssignments, assignPurgeMatching, c.g.tps, fmt.Sprintf("purge of %v requested", topics))
+ for _, topic := range topics {
+ delete(c.g.using, topic)
+ delete(c.g.reSeen, topic)
+ }
+ c.g.rejoin("rejoin from PurgeFetchTopics")
+ } else {
+ c.assignPartitions(purgeAssignments, assignPurgeMatching, c.d.tps, fmt.Sprintf("purge of %v requested", topics))
+ for _, topic := range topics {
+ delete(c.d.using, topic)
+ delete(c.d.reSeen, topic)
+ delete(c.d.m, topic)
+ }
+ }
+}
+
+// AddConsumeTopics adds new topics to be consumed. This function is a no-op if
+// the client is configured to consume via regex.
+//
+// Note that if you are directly consuming and specified ConsumePartitions,
+// this function will not add the rest of the partitions for a topic unless the
+// topic has been previously purged. That is, if you directly consumed only one
+// of five partitions originally, this will not add the other four until the
+// entire topic is purged.
+func (cl *Client) AddConsumeTopics(topics ...string) {
+ c := &cl.consumer
+ if len(topics) == 0 || c.g == nil && c.d == nil || cl.cfg.regex {
+ return
+ }
+
+ // We can do this outside of the metadata loop because we are strictly
+ // adding new topics and forbid regex consuming.
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ if c.g != nil {
+ c.g.tps.storeTopics(topics)
+ } else {
+ c.d.tps.storeTopics(topics)
+ for _, topic := range topics {
+ c.d.m.addt(topic)
+ }
+ }
+ cl.triggerUpdateMetadataNow("from AddConsumeTopics")
+}
+
+// GetConsumeTopics retrives a list of current topics being consumed.
+func (cl *Client) GetConsumeTopics() []string {
+ c := &cl.consumer
+ if c.g == nil && c.d == nil {
+ return nil
+ }
+ var m map[string]*topicPartitions
+ var ok bool
+ if c.g != nil {
+ m, ok = c.g.tps.v.Load().(topicsPartitionsData)
+ } else {
+ m, ok = c.d.tps.v.Load().(topicsPartitionsData)
+ }
+ if !ok {
+ return nil
+ }
+ topics := make([]string, 0, len(m))
+ for k := range m {
+ topics = append(topics, k)
+ }
+ return topics
+}
+
+// AddConsumePartitions adds new partitions to be consumed at the given
+// offsets. This function works only for direct, non-regex consumers.
+func (cl *Client) AddConsumePartitions(partitions map[string]map[int32]Offset) {
+ c := &cl.consumer
+ if c.d == nil || cl.cfg.regex {
+ return
+ }
+ var topics []string
+ for t, ps := range partitions {
+ if len(ps) == 0 {
+ delete(partitions, t)
+ continue
+ }
+ topics = append(topics, t)
+ }
+ if len(partitions) == 0 {
+ return
+ }
+
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ c.d.tps.storeTopics(topics)
+ for t, ps := range partitions {
+ if c.d.ps[t] == nil {
+ c.d.ps[t] = make(map[int32]Offset)
+ }
+ for p, o := range ps {
+ c.d.m.add(t, p)
+ c.d.ps[t][p] = o
+ }
+ }
+ cl.triggerUpdateMetadataNow("from AddConsumePartitions")
+}
+
+// RemoveConsumePartitions removes partitions from being consumed. This
+// function works only for direct, non-regex consumers.
+//
+// This method does not purge the concept of any topics from the client -- if
+// you remove all partitions from a topic that was being consumed, metadata
+// fetches will still occur for the topic. If you want to remove the topic
+// entirely, use PurgeTopicsFromClient.
+//
+// If you specified ConsumeTopics and this function removes all partitions for
+// a topic, the topic will no longer be consumed.
+func (cl *Client) RemoveConsumePartitions(partitions map[string][]int32) {
+ c := &cl.consumer
+ if c.d == nil || cl.cfg.regex {
+ return
+ }
+ for t, ps := range partitions {
+ if len(ps) == 0 {
+ delete(partitions, t)
+ continue
+ }
+ }
+ if len(partitions) == 0 {
+ return
+ }
+
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ removeOffsets := make(map[string]map[int32]Offset, len(partitions))
+ for t, ps := range partitions {
+ removePartitionOffsets := make(map[int32]Offset, len(ps))
+ for _, p := range ps {
+ removePartitionOffsets[p] = Offset{}
+ }
+ removeOffsets[t] = removePartitionOffsets
+ }
+
+ c.assignPartitions(removeOffsets, assignInvalidateMatching, c.d.tps, fmt.Sprintf("remove of %v requested", partitions))
+ for t, ps := range partitions {
+ for _, p := range ps {
+ c.d.using.remove(t, p)
+ c.d.m.remove(t, p)
+ delete(c.d.ps[t], p)
+ }
+ if len(c.d.ps[t]) == 0 {
+ delete(c.d.ps, t)
+ }
+ }
+}
+
+// assignHow controls how assignPartitions operates.
+type assignHow int8
+
+const (
+ // This option simply assigns new offsets, doing nothing with existing
+ // offsets / active fetches / buffered fetches.
+ assignWithoutInvalidating assignHow = iota
+
+ // This option invalidates active fetches so they will not buffer and
+ // drops all buffered fetches, and then continues to assign the new
+ // assignments.
+ assignInvalidateAll
+
+ // This option does not assign, but instead invalidates any active
+ // fetches for "assigned" (actually lost) partitions. This additionally
+ // drops all buffered fetches, because they could contain partitions we
+ // lost. Thus, with this option, the actual offset in the map is
+ // meaningless / a dummy offset.
+ assignInvalidateMatching
+
+ assignPurgeMatching
+
+ // The counterpart to assignInvalidateMatching, assignSetMatching
+ // resets all matching partitions to the specified offset / epoch.
+ assignSetMatching
+)
+
+func (h assignHow) String() string {
+ switch h {
+ case assignWithoutInvalidating:
+ return "assigning everything new, keeping current assignment"
+ case assignInvalidateAll:
+ return "unassigning everything"
+ case assignInvalidateMatching:
+ return "unassigning any currently assigned matching partition that is in the input"
+ case assignPurgeMatching:
+ return "unassigning and purging any partition matching the input topics"
+ case assignSetMatching:
+ return "reassigning any currently assigned matching partition to the input"
+ }
+ return ""
+}
+
+type fmtAssignment map[string]map[int32]Offset
+
+func (f fmtAssignment) String() string {
+ var sb strings.Builder
+
+ var topicsWritten int
+ for topic, partitions := range f {
+ topicsWritten++
+ sb.WriteString(topic)
+ sb.WriteString("[")
+
+ var partitionsWritten int
+ for partition, offset := range partitions {
+ fmt.Fprintf(&sb, "%d%s", partition, offset)
+ partitionsWritten++
+ if partitionsWritten < len(partitions) {
+ sb.WriteString(" ")
+ }
+ }
+
+ sb.WriteString("]")
+ if topicsWritten < len(f) {
+ sb.WriteString(", ")
+ }
+ }
+
+ return sb.String()
+}
+
+// assignPartitions, called under the consumer's mu, is used to set new cursors
+// or add to the existing cursors.
+//
+// We do not need to pass tps when we are bumping the session or when we are
+// invalidating all. All other cases, we want the tps -- the logic below does
+// not fully differentiate needing to start a new session vs. just reusing the
+// old (third if case below)
+func (c *consumer) assignPartitions(assignments map[string]map[int32]Offset, how assignHow, tps *topicsPartitions, why string) {
+ if c.mu.TryLock() {
+ c.mu.Unlock()
+ panic("assignPartitions called without holding the consumer lock, this is a bug in franz-go, please open an issue at github.com/twmb/franz-go")
+ }
+
+ // The internal code can avoid giving an assign reason in cases where
+ // the caller logs itself immediately before assigning. We only log if
+ // there is a reason.
+ if len(why) > 0 {
+ c.cl.cfg.logger.Log(LogLevelInfo, "assigning partitions",
+ "why", why,
+ "how", how,
+ "input", fmtAssignment(assignments),
+ )
+ }
+ var session *consumerSession
+ var loadOffsets listOrEpochLoads
+
+ defer func() {
+ if session == nil { // if nil, we stopped the session
+ session = c.startNewSession(tps)
+ } else { // else we guarded it
+ c.unguardSessionChange(session)
+ }
+ loadOffsets.loadWithSession(session, "loading offsets in new session from assign") // odds are this assign came from a metadata update, so no reason to force a refresh with loadWithSessionNow
+
+ // If we started a new session or if we unguarded, we have one
+ // worker. This one worker allowed us to safely add our load
+ // offsets before the session could be concurrently stopped
+ // again. Now that we have added the load offsets, we allow the
+ // session to be stopped.
+ session.decWorker()
+ }()
+
+ if how == assignWithoutInvalidating {
+ // Guarding a session change can actually create a new session
+ // if we had no session before, which is why we need to pass in
+ // our topicPartitions.
+ session = c.guardSessionChange(tps)
+ } else {
+ loadOffsets, _ = c.stopSession()
+
+ // First, over all cursors currently in use, we unset them or set them
+ // directly as appropriate. Anything we do not unset, we keep.
+
+ var keep usedCursors
+ for usedCursor := range c.usingCursors {
+ shouldKeep := true
+ if how == assignInvalidateAll {
+ usedCursor.unset()
+ shouldKeep = false
+ } else { // invalidateMatching or setMatching
+ if assignTopic, ok := assignments[usedCursor.topic]; ok {
+ if how == assignPurgeMatching { // topic level
+ usedCursor.source.removeCursor(usedCursor)
+ shouldKeep = false
+ } else if assignPart, ok := assignTopic[usedCursor.partition]; ok {
+ if how == assignInvalidateMatching {
+ usedCursor.unset()
+ shouldKeep = false
+ } else { // how == assignSetMatching
+ usedCursor.setOffset(cursorOffset{
+ offset: assignPart.at,
+ lastConsumedEpoch: assignPart.epoch,
+ })
+ }
+ }
+ }
+ }
+ if shouldKeep {
+ keep.use(usedCursor)
+ }
+ }
+ c.usingCursors = keep
+
+ // For any partition that was listing offsets or loading
+ // epochs, we want to ensure that if we are keeping those
+ // partitions, we re-start the list/load.
+ //
+ // Note that we do not need to unset cursors here; anything
+ // that actually resulted in a cursor is forever tracked in
+ // usedCursors. We only do not have used cursors if an
+ // assignment went straight to listing / epoch loading, and
+ // that list/epoch never finished.
+ switch how {
+ case assignWithoutInvalidating:
+ // Nothing to do -- this is handled above.
+ case assignInvalidateAll:
+ loadOffsets = listOrEpochLoads{}
+ case assignSetMatching:
+ // We had not yet loaded this partition, so there is
+ // nothing to set, and we keep everything.
+ case assignInvalidateMatching:
+ loadOffsets.keepFilter(func(t string, p int32) bool {
+ if assignTopic, ok := assignments[t]; ok {
+ if _, ok := assignTopic[p]; ok {
+ return false
+ }
+ }
+ return true
+ })
+ case assignPurgeMatching:
+ // This is slightly different than invalidate in that
+ // we invalidate whole topics.
+ loadOffsets.keepFilter(func(t string, _ int32) bool {
+ _, ok := assignments[t]
+ return !ok // assignments are topics to purge -- do NOT keep the topic if it is being purged
+ })
+ // We have to purge from tps _after_ the session is
+ // stopped. If we purge early while the session is
+ // ongoing, then another goroutine could be loading and
+ // using tps and expecting topics not yet removed from
+ // assignPartitions to still be there. Specifically,
+ // mapLoadsToBrokers could be expecting topic foo to be
+ // there (from the session!), so if we purge foo before
+ // stopping the session, we will panic.
+ topics := make([]string, 0, len(assignments))
+ for t := range assignments {
+ topics = append(topics, t)
+ }
+ tps.purgeTopics(topics)
+ }
+ }
+
+ // This assignment could contain nothing (for the purposes of
+ // invalidating active fetches), so we only do this if needed.
+ if len(assignments) == 0 || how != assignWithoutInvalidating {
+ return
+ }
+
+ c.cl.cfg.logger.Log(LogLevelDebug, "assign requires loading offsets")
+
+ topics := tps.load()
+ for topic, partitions := range assignments {
+ topicPartitions := topics.loadTopic(topic) // should be non-nil
+ if topicPartitions == nil {
+ c.cl.cfg.logger.Log(LogLevelError, "BUG! consumer was assigned topic that we did not ask for in ConsumeTopics nor ConsumePartitions, skipping!", "topic", topic)
+ continue
+ }
+
+ for partition, offset := range partitions {
+ // If we are loading the first record after a millisec,
+ // we go directly to listing offsets. Epoch validation
+ // does not ever set afterMilli.
+ if offset.afterMilli {
+ loadOffsets.addLoad(topic, partition, loadTypeList, offsetLoad{
+ replica: -1,
+ Offset: offset,
+ })
+ continue
+ }
+
+ // First, if the request is exact, get rid of the relative
+ // portion. We are modifying a copy of the offset, i.e. we
+ // are appropriately not modfying 'assignments' itself.
+ if offset.at >= 0 {
+ offset.at += offset.relative
+ if offset.at < 0 {
+ offset.at = 0
+ }
+ offset.relative = 0
+ }
+
+ // If we are requesting an exact offset with an epoch,
+ // we do truncation detection and then use the offset.
+ //
+ // Otherwise, an epoch is specified without an exact
+ // request which is useless for us, or a request is
+ // specified without a known epoch.
+ //
+ // The client ensures the epoch is non-negative from
+ // fetch offsets only if the broker supports KIP-320,
+ // but we do not override the user manually specifying
+ // an epoch.
+ if offset.at >= 0 && offset.epoch >= 0 {
+ loadOffsets.addLoad(topic, partition, loadTypeEpoch, offsetLoad{
+ replica: -1,
+ Offset: offset,
+ })
+ continue
+ }
+
+ // If an exact offset is specified and we have loaded
+ // the partition, we use it. We have to use epoch -1
+ // rather than the latest loaded epoch on the partition
+ // because the offset being requested to use could be
+ // from an epoch after OUR loaded epoch. Otherwise, we
+ // could update the metadata, see the later epoch,
+ // request the end offset for our prior epoch, and then
+ // think data loss occurred.
+ //
+ // If an offset is unspecified or we have not loaded
+ // the partition, we list offsets to find out what to
+ // use.
+ if offset.at >= 0 && partition >= 0 && partition < int32(len(topicPartitions.partitions)) {
+ part := topicPartitions.partitions[partition]
+ cursor := part.cursor
+ cursor.setOffset(cursorOffset{
+ offset: offset.at,
+ lastConsumedEpoch: -1,
+ })
+ cursor.allowUsable()
+ c.usingCursors.use(cursor)
+ continue
+ }
+
+ // If the offset is atCommitted, then no offset was
+ // loaded from FetchOffsets. We inject an error and
+ // avoid using this partition.
+ if offset.at == atCommitted {
+ c.addFakeReadyForDraining(topic, partition, errNoCommittedOffset, "notification of uncommitted partition")
+ continue
+ }
+
+ loadOffsets.addLoad(topic, partition, loadTypeList, offsetLoad{
+ replica: -1,
+ Offset: offset,
+ })
+ }
+ }
+}
+
+func (c *consumer) doOnMetadataUpdate() {
+ if !c.consuming() {
+ return
+ }
+
+ // See the comment on the outstandingMetadataUpdates field for why this
+ // block below.
+ if c.outstandingMetadataUpdates.maybeBegin() {
+ doUpdate := func() {
+ // We forbid reassignments while we do a quick check for
+ // new assignments--for the direct consumer particularly,
+ // this prevents TOCTOU, and guards against a concurrent
+ // assignment from SetOffsets.
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ switch {
+ case c.d != nil:
+ if new := c.d.findNewAssignments(); len(new) > 0 {
+ c.assignPartitions(new, assignWithoutInvalidating, c.d.tps, "new assignments from direct consumer")
+ }
+ case c.g != nil:
+ c.g.findNewAssignments()
+ }
+
+ go c.loadSession().doOnMetadataUpdate()
+ }
+
+ go func() {
+ again := true
+ for again {
+ doUpdate()
+ again = c.outstandingMetadataUpdates.maybeFinish(false)
+ }
+ }()
+ }
+}
+
+func (s *consumerSession) doOnMetadataUpdate() {
+ if s == nil || s == noConsumerSession { // no session started yet
+ return
+ }
+
+ s.listOrEpochMu.Lock()
+ defer s.listOrEpochMu.Unlock()
+
+ if s.listOrEpochMetaCh == nil {
+ return // nothing waiting to load epochs / offsets
+ }
+ select {
+ case s.listOrEpochMetaCh <- struct{}{}:
+ default:
+ }
+}
+
+type offsetLoadMap map[string]map[int32]offsetLoad
+
+// offsetLoad is effectively an Offset, but also includes a potential replica
+// to directly use if a cursor had a preferred replica.
+type offsetLoad struct {
+ replica int32 // -1 means leader
+ Offset
+}
+
+func (o offsetLoad) MarshalJSON() ([]byte, error) {
+ if o.replica == -1 {
+ return o.Offset.MarshalJSON()
+ }
+ if o.relative == 0 {
+ return []byte(fmt.Sprintf(`{"Replica":%d,"At":%d,"Epoch":%d,"CurrentEpoch":%d}`, o.replica, o.at, o.epoch, o.currentEpoch)), nil
+ }
+ return []byte(fmt.Sprintf(`{"Replica":%d,"At":%d,"Relative":%d,"Epoch":%d,"CurrentEpoch":%d}`, o.replica, o.at, o.relative, o.epoch, o.currentEpoch)), nil
+}
+
+func (o offsetLoadMap) errToLoaded(err error) []loadedOffset {
+ var loaded []loadedOffset
+ for t, ps := range o {
+ for p, o := range ps {
+ loaded = append(loaded, loadedOffset{
+ topic: t,
+ partition: p,
+ err: err,
+ request: o,
+ })
+ }
+ }
+ return loaded
+}
+
+// Combines list and epoch loads into one type for simplicity.
+type listOrEpochLoads struct {
+ // List and Epoch are public so that anything marshaling through
+ // reflect (i.e. json) can see the fields.
+ List offsetLoadMap
+ Epoch offsetLoadMap
+}
+
+type listOrEpochLoadType uint8
+
+const (
+ loadTypeList listOrEpochLoadType = iota
+ loadTypeEpoch
+)
+
+func (l listOrEpochLoadType) String() string {
+ switch l {
+ case loadTypeList:
+ return "list"
+ default:
+ return "epoch"
+ }
+}
+
+// adds an offset to be loaded, ensuring it exists only in the final loadType.
+func (l *listOrEpochLoads) addLoad(t string, p int32, loadType listOrEpochLoadType, load offsetLoad) {
+ l.removeLoad(t, p)
+ dst := &l.List
+ if loadType == loadTypeEpoch {
+ dst = &l.Epoch
+ }
+
+ if *dst == nil {
+ *dst = make(offsetLoadMap)
+ }
+ ps := (*dst)[t]
+ if ps == nil {
+ ps = make(map[int32]offsetLoad)
+ (*dst)[t] = ps
+ }
+ ps[p] = load
+}
+
+func (l *listOrEpochLoads) removeLoad(t string, p int32) {
+ for _, m := range []offsetLoadMap{
+ l.List,
+ l.Epoch,
+ } {
+ if m == nil {
+ continue
+ }
+ ps := m[t]
+ if ps == nil {
+ continue
+ }
+ delete(ps, p)
+ if len(ps) == 0 {
+ delete(m, t)
+ }
+ }
+}
+
+func (l listOrEpochLoads) each(fn func(string, int32)) {
+ for _, m := range []offsetLoadMap{
+ l.List,
+ l.Epoch,
+ } {
+ for topic, partitions := range m {
+ for partition := range partitions {
+ fn(topic, partition)
+ }
+ }
+ }
+}
+
+func (l *listOrEpochLoads) keepFilter(keep func(string, int32) bool) {
+ for _, m := range []offsetLoadMap{
+ l.List,
+ l.Epoch,
+ } {
+ for t, ps := range m {
+ for p := range ps {
+ if !keep(t, p) {
+ delete(ps, p)
+ if len(ps) == 0 {
+ delete(m, t)
+ }
+ }
+ }
+ }
+ }
+}
+
+// Merges loads into the caller; used to coalesce loads while a metadata update
+// is happening (see the only use below).
+func (l *listOrEpochLoads) mergeFrom(src listOrEpochLoads) {
+ for _, srcs := range []struct {
+ m offsetLoadMap
+ loadType listOrEpochLoadType
+ }{
+ {src.List, loadTypeList},
+ {src.Epoch, loadTypeEpoch},
+ } {
+ for t, ps := range srcs.m {
+ for p, load := range ps {
+ l.addLoad(t, p, srcs.loadType, load)
+ }
+ }
+ }
+}
+
+func (l listOrEpochLoads) isEmpty() bool { return len(l.List) == 0 && len(l.Epoch) == 0 }
+
+func (l listOrEpochLoads) loadWithSession(s *consumerSession, why string) {
+ if !l.isEmpty() {
+ s.incWorker()
+ go s.listOrEpoch(l, false, why)
+ }
+}
+
+func (l listOrEpochLoads) loadWithSessionNow(s *consumerSession, why string) bool {
+ if !l.isEmpty() {
+ s.incWorker()
+ go s.listOrEpoch(l, true, why)
+ return true
+ }
+ return false
+}
+
+// A consumer session is responsible for an era of fetching records for a set
+// of cursors. The set can be added to without killing an active session, but
+// it cannot be removed from. Removing any cursor from being consumed kills the
+// current consumer session and begins a new one.
+type consumerSession struct {
+ c *consumer
+
+ ctx context.Context
+ cancel func()
+
+ // tps tracks the topics that were assigned in this session. We use
+ // this field to build and handle list offset / load epoch requests.
+ tps *topicsPartitions
+
+ // desireFetchCh is sized to the number of concurrent fetches we are
+ // configured to be able to send.
+ //
+ // We receive desires from sources, we reply when they can fetch, and
+ // they send back when they are done. Thus, three level chan.
+ desireFetchCh chan chan chan struct{}
+ cancelFetchCh chan chan chan struct{}
+ allowedFetches int
+ fetchManagerStarted atomicBool // atomic, once true, we start the fetch manager
+
+ // Workers signify the number of fetch and list / epoch goroutines that
+ // are currently running within the context of this consumer session.
+ // Stopping a session only returns once workers hits zero.
+ workersMu sync.Mutex
+ workersCond *sync.Cond
+ workers int
+
+ listOrEpochMu sync.Mutex
+ listOrEpochLoadsWaiting listOrEpochLoads
+ listOrEpochMetaCh chan struct{} // non-nil if Loads is non-nil, signalled on meta update
+ listOrEpochLoadsLoading listOrEpochLoads
+}
+
+func (c *consumer) newConsumerSession(tps *topicsPartitions) *consumerSession {
+ if tps == nil || len(tps.load()) == 0 {
+ return noConsumerSession
+ }
+ ctx, cancel := context.WithCancel(c.cl.ctx)
+ session := &consumerSession{
+ c: c,
+
+ ctx: ctx,
+ cancel: cancel,
+
+ tps: tps,
+
+ // NOTE: This channel must be unbuffered. If it is buffered,
+ // then we can exit manageFetchConcurrency when we should not
+ // and have a deadlock:
+ //
+ // * source sends to desireFetchCh, is buffered
+ // * source seeds context canceled, tries sending to cancelFetchCh
+ // * session concurrently sees context canceled
+ // * session has not drained desireFetchCh, sees activeFetches is 0
+ // * session exits
+ // * source permanently hangs sending to desireFetchCh
+ //
+ // By having desireFetchCh unbuffered, we *ensure* that if the
+ // source indicates it wants a fetch, the session knows it and
+ // tracks it in wantFetch.
+ //
+ // See #198.
+ desireFetchCh: make(chan chan chan struct{}),
+
+ cancelFetchCh: make(chan chan chan struct{}, 4),
+ allowedFetches: c.cl.cfg.maxConcurrentFetches,
+ }
+ session.workersCond = sync.NewCond(&session.workersMu)
+ return session
+}
+
+func (s *consumerSession) desireFetch() chan chan chan struct{} {
+ if !s.fetchManagerStarted.Swap(true) {
+ go s.manageFetchConcurrency()
+ }
+ return s.desireFetchCh
+}
+
+func (s *consumerSession) manageFetchConcurrency() {
+ var (
+ activeFetches int
+ doneFetch = make(chan struct{}, 20)
+ wantFetch []chan chan struct{}
+
+ ctxCh = s.ctx.Done()
+ wantQuit bool
+ )
+ for {
+ select {
+ case register := <-s.desireFetchCh:
+ wantFetch = append(wantFetch, register)
+ case cancel := <-s.cancelFetchCh:
+ var found bool
+ for i, want := range wantFetch {
+ if want == cancel {
+ _ = append(wantFetch[i:], wantFetch[i+1:]...)
+ wantFetch = wantFetch[:len(wantFetch)-1]
+ found = true
+ }
+ }
+ // If we did not find the channel, then we have already
+ // sent to it, removed it from our wantFetch list, and
+ // bumped activeFetches.
+ if !found {
+ activeFetches--
+ }
+
+ case <-doneFetch:
+ activeFetches--
+ case <-ctxCh:
+ wantQuit = true
+ ctxCh = nil
+ }
+
+ if len(wantFetch) > 0 && (activeFetches < s.allowedFetches || s.allowedFetches == 0) { // 0 means unbounded
+ wantFetch[0] <- doneFetch
+ wantFetch = wantFetch[1:]
+ activeFetches++
+ continue
+ }
+
+ if wantQuit && activeFetches == 0 {
+ return
+ }
+ }
+}
+
+func (s *consumerSession) incWorker() {
+ if s == noConsumerSession { // from startNewSession
+ return
+ }
+ s.workersMu.Lock()
+ defer s.workersMu.Unlock()
+ s.workers++
+}
+
+func (s *consumerSession) decWorker() {
+ if s == noConsumerSession { // from followup to startNewSession
+ return
+ }
+ s.workersMu.Lock()
+ defer s.workersMu.Unlock()
+ s.workers--
+ if s.workers == 0 {
+ s.workersCond.Broadcast()
+ }
+}
+
+// noConsumerSession exists because we cannot store nil into an atomic.Value.
+var noConsumerSession = new(consumerSession)
+
+func (c *consumer) loadSession() *consumerSession {
+ if session := c.session.Load(); session != nil {
+ return session.(*consumerSession)
+ }
+ return noConsumerSession
+}
+
+// Guards against a session being stopped, and must be paired with an unguard.
+// This returns a new session if there was no session.
+//
+// The purpose of this function is when performing additive-only changes to an
+// existing session, because additive-only changes can avoid killing a running
+// session.
+func (c *consumer) guardSessionChange(tps *topicsPartitions) *consumerSession {
+ c.sessionChangeMu.Lock()
+
+ session := c.loadSession()
+ if session == noConsumerSession {
+ // If there is no session, we simply store one. This is fine;
+ // sources will be able to begin a fetch loop, but they will
+ // have no cursors to consume yet.
+ session = c.newConsumerSession(tps)
+ c.session.Store(session)
+ }
+
+ return session
+}
+
+// For the same reason below as in startNewSession, we inc a worker before
+// unguarding. This allows the unguarding to execute a bit of logic if
+// necessary before the session can be stopped.
+func (c *consumer) unguardSessionChange(session *consumerSession) {
+ session.incWorker()
+ c.sessionChangeMu.Unlock()
+}
+
+// Stops an active consumer session if there is one, and does not return until
+// all fetching, listing, offset for leader epoching is complete. This
+// invalidates any buffered fetches for the previous session and returns any
+// partitions that were listing offsets or loading epochs.
+func (c *consumer) stopSession() (listOrEpochLoads, *topicsPartitions) {
+ c.sessionChangeMu.Lock()
+
+ session := c.loadSession()
+
+ if session == noConsumerSession {
+ return listOrEpochLoads{}, nil // we had no session
+ }
+
+ // Before storing noConsumerSession, cancel our old. This pairs
+ // with the reverse ordering in source, which checks noConsumerSession
+ // then checks the session context.
+ session.cancel()
+
+ // At this point, any in progress fetches, offset lists, or epoch loads
+ // will quickly die.
+
+ c.session.Store(noConsumerSession)
+
+ // At this point, no source can be started, because the session is
+ // noConsumerSession.
+
+ session.workersMu.Lock()
+ for session.workers > 0 {
+ session.workersCond.Wait()
+ }
+ session.workersMu.Unlock()
+
+ // At this point, all fetches, lists, and loads are dead. We can close
+ // our num-fetches manager without worrying about a source trying to
+ // register itself.
+
+ c.cl.allSinksAndSources(func(sns sinkAndSource) {
+ sns.source.session.reset()
+ })
+
+ // At this point, if we begin fetching anew, then the sources will not
+ // be using stale fetch sessions.
+
+ c.sourcesReadyMu.Lock()
+ defer c.sourcesReadyMu.Unlock()
+ for _, ready := range c.sourcesReadyForDraining {
+ ready.discardBuffered()
+ }
+ c.sourcesReadyForDraining = nil
+
+ // At this point, we have invalidated any buffered data from the prior
+ // session. We leave any fake things that were ready so that the user
+ // can act on errors. The session is dead.
+
+ session.listOrEpochLoadsWaiting.mergeFrom(session.listOrEpochLoadsLoading)
+ return session.listOrEpochLoadsWaiting, session.tps
+}
+
+// Starts a new consumer session, allowing fetches to happen.
+//
+// If there are no topic partitions to start with, this returns noConsumerSession.
+//
+// This is returned with 1 worker; decWorker must be called after return. The
+// 1 worker allows for initialization work to prevent the session from being
+// immediately stopped.
+func (c *consumer) startNewSession(tps *topicsPartitions) *consumerSession {
+ if c.kill.Load() {
+ tps = nil
+ }
+ session := c.newConsumerSession(tps)
+ c.session.Store(session)
+
+ // Ensure that this session is usable before being stopped immediately.
+ // The caller must dec workers.
+ session.incWorker()
+
+ // At this point, sources can start consuming.
+
+ c.sessionChangeMu.Unlock()
+
+ c.cl.allSinksAndSources(func(sns sinkAndSource) {
+ sns.source.maybeConsume()
+ })
+
+ // At this point, any source that was not consuming becauase it saw the
+ // session was stopped has been notified to potentially start consuming
+ // again. The session is alive.
+
+ return session
+}
+
+// This function is responsible for issuing ListOffsets or
+// OffsetForLeaderEpoch. These requests's responses are only handled within
+// the context of a consumer session.
+func (s *consumerSession) listOrEpoch(waiting listOrEpochLoads, immediate bool, why string) {
+ defer s.decWorker()
+
+ // It is possible for a metadata update to try to migrate partition
+ // loads if the update moves partitions between brokers. If we are
+ // closing the client, the consumer session could already be stopped,
+ // but this stops before the metadata goroutine is killed. So, if we
+ // are in this function but actually have no session, we return.
+ if s == noConsumerSession {
+ return
+ }
+
+ wait := true
+ if immediate {
+ s.c.cl.triggerUpdateMetadataNow(why)
+ } else {
+ wait = s.c.cl.triggerUpdateMetadata(false, why) // avoid trigger if within refresh interval
+ }
+
+ s.listOrEpochMu.Lock() // collapse any listOrEpochs that occur during meta update into one
+ if !s.listOrEpochLoadsWaiting.isEmpty() {
+ s.listOrEpochLoadsWaiting.mergeFrom(waiting)
+ s.listOrEpochMu.Unlock()
+ return
+ }
+ s.listOrEpochLoadsWaiting = waiting
+ s.listOrEpochMetaCh = make(chan struct{}, 1)
+ s.listOrEpochMu.Unlock()
+
+ if wait {
+ select {
+ case <-s.ctx.Done():
+ return
+ case <-s.listOrEpochMetaCh:
+ }
+ }
+
+ s.listOrEpochMu.Lock()
+ loading := s.listOrEpochLoadsWaiting
+ s.listOrEpochLoadsLoading.mergeFrom(loading)
+ s.listOrEpochLoadsWaiting = listOrEpochLoads{}
+ s.listOrEpochMetaCh = nil
+ s.listOrEpochMu.Unlock()
+
+ brokerLoads := s.mapLoadsToBrokers(loading)
+
+ results := make(chan loadedOffsets, 2*len(brokerLoads)) // each broker can receive up to two requests
+
+ var issued, received int
+ for broker, brokerLoad := range brokerLoads {
+ s.c.cl.cfg.logger.Log(LogLevelDebug, "offsets to load broker", "broker", broker.meta.NodeID, "load", brokerLoad)
+ if len(brokerLoad.List) > 0 {
+ issued++
+ go s.c.cl.listOffsetsForBrokerLoad(s.ctx, broker, brokerLoad.List, s.tps, results)
+ }
+ if len(brokerLoad.Epoch) > 0 {
+ issued++
+ go s.c.cl.loadEpochsForBrokerLoad(s.ctx, broker, brokerLoad.Epoch, s.tps, results)
+ }
+ }
+
+ var reloads listOrEpochLoads
+ defer func() {
+ if !reloads.isEmpty() {
+ s.incWorker()
+ go func() {
+ // Before we dec our worker, we must add the
+ // reloads back into the session's waiting loads.
+ // Doing so allows a concurrent stopSession to
+ // track the waiting loads, whereas if we did not
+ // add things back to the session, we could abandon
+ // loading these offsets and have a stuck cursor.
+ defer s.decWorker()
+ defer reloads.loadWithSession(s, "reload offsets from load failure")
+ after := time.NewTimer(time.Second)
+ defer after.Stop()
+ select {
+ case <-after.C:
+ case <-s.ctx.Done():
+ return
+ }
+ }()
+ }
+ }()
+
+ for received != issued {
+ select {
+ case <-s.ctx.Done():
+ // If we return early, our session was canceled. We do
+ // not move loading list or epoch loads back to
+ // waiting; the session stopping manages that.
+ return
+ case loaded := <-results:
+ received++
+ reloads.mergeFrom(s.handleListOrEpochResults(loaded))
+ }
+ }
+}
+
+// Called within a consumer session, this function handles results from list
+// offsets or epoch loads and returns any loads that should be retried.
+//
+// To us, all errors are reloadable. We either have request level retryable
+// errors (unknown partition, etc) or non-retryable errors (auth), or we have
+// request issuing errors (no dial, connection cut repeatedly).
+//
+// For retryable request errors, we may as well back off a little bit to allow
+// Kafka to harmonize if the topic exists / etc.
+//
+// For non-retryable request errors, we may as well retry to both (a) allow the
+// user more signals about a problem that they can maybe fix within Kafka (i.e.
+// the auth), and (b) force the user to notice errors.
+//
+// For request issuing errors, we may as well continue to retry because there
+// is not much else we can do. RequestWith already retries, but returns when
+// the retry limit is hit. We will backoff 1s and then allow RequestWith to
+// continue requesting and backing off.
+func (s *consumerSession) handleListOrEpochResults(loaded loadedOffsets) (reloads listOrEpochLoads) {
+ // This function can be running twice concurrently, so we need to guard
+ // listOrEpochLoadsLoading and usingCursors. For simplicity, we just
+ // guard this entire function.
+
+ debug := s.c.cl.cfg.logger.Level() >= LogLevelDebug
+
+ var using map[string]map[int32]EpochOffset
+ type epochOffsetWhy struct {
+ EpochOffset
+ error
+ }
+ var reloading map[string]map[int32]epochOffsetWhy
+ if debug {
+ using = make(map[string]map[int32]EpochOffset)
+ reloading = make(map[string]map[int32]epochOffsetWhy)
+ defer func() {
+ t := "list"
+ if loaded.loadType == loadTypeEpoch {
+ t = "epoch"
+ }
+ s.c.cl.cfg.logger.Log(LogLevelDebug, fmt.Sprintf("handled %s results", t), "broker", logID(loaded.broker), "using", using, "reloading", reloading)
+ }()
+ }
+
+ s.listOrEpochMu.Lock()
+ defer s.listOrEpochMu.Unlock()
+
+ for _, load := range loaded.loaded {
+ s.listOrEpochLoadsLoading.removeLoad(load.topic, load.partition) // remove the tracking of this load from our session
+
+ use := func() {
+ if debug {
+ tusing := using[load.topic]
+ if tusing == nil {
+ tusing = make(map[int32]EpochOffset)
+ using[load.topic] = tusing
+ }
+ tusing[load.partition] = EpochOffset{load.leaderEpoch, load.offset}
+ }
+
+ load.cursor.setOffset(cursorOffset{
+ offset: load.offset,
+ lastConsumedEpoch: load.leaderEpoch,
+ })
+ load.cursor.allowUsable()
+ s.c.usingCursors.use(load.cursor)
+ }
+
+ var edl *ErrDataLoss
+ switch {
+ case errors.As(load.err, &edl):
+ s.c.addFakeReadyForDraining(load.topic, load.partition, load.err, "notification of data loss") // signal we lost data, but set the cursor to what we can
+ use()
+
+ case load.err == nil:
+ use()
+
+ default: // from ErrorCode in a response, or broker request err, or request is canceled as our session is ending
+ reloads.addLoad(load.topic, load.partition, loaded.loadType, load.request)
+ if !kerr.IsRetriable(load.err) && !isRetryableBrokerErr(load.err) && !isDialNonTimeoutErr(load.err) && !isContextErr(load.err) { // non-retryable response error; signal such in a response
+ s.c.addFakeReadyForDraining(load.topic, load.partition, load.err, fmt.Sprintf("notification of non-retryable error from %s request", loaded.loadType))
+ }
+
+ if debug {
+ treloading := reloading[load.topic]
+ if treloading == nil {
+ treloading = make(map[int32]epochOffsetWhy)
+ reloading[load.topic] = treloading
+ }
+ treloading[load.partition] = epochOffsetWhy{EpochOffset{load.leaderEpoch, load.offset}, load.err}
+ }
+ }
+ }
+
+ return reloads
+}
+
+// Splits the loads into per-broker loads, mapping each partition to the broker
+// that leads that partition.
+func (s *consumerSession) mapLoadsToBrokers(loads listOrEpochLoads) map[*broker]listOrEpochLoads {
+ brokerLoads := make(map[*broker]listOrEpochLoads)
+
+ s.c.cl.brokersMu.RLock() // hold mu so we can check if partition leaders exist
+ defer s.c.cl.brokersMu.RUnlock()
+
+ brokers := s.c.cl.brokers
+ seed := s.c.cl.loadSeeds()[0]
+
+ topics := s.tps.load()
+ for _, loads := range []struct {
+ m offsetLoadMap
+ loadType listOrEpochLoadType
+ }{
+ {loads.List, loadTypeList},
+ {loads.Epoch, loadTypeEpoch},
+ } {
+ for topic, partitions := range loads.m {
+ topicPartitions := topics.loadTopic(topic) // this must exist, it not existing would be a bug
+ for partition, offset := range partitions {
+ // We default to the first seed broker if we have no loaded
+ // the broker leader for this partition (we should have).
+ // Worst case, we get an error for the partition and retry.
+ broker := seed
+ if partition >= 0 && partition < int32(len(topicPartitions.partitions)) {
+ topicPartition := topicPartitions.partitions[partition]
+ brokerID := topicPartition.leader
+ if offset.replica != -1 {
+ // If we are fetching from a follower, we can list
+ // offsets against the follower itself. The replica
+ // being non-negative signals that.
+ brokerID = offset.replica
+ }
+ if tryBroker := findBroker(brokers, brokerID); tryBroker != nil {
+ broker = tryBroker
+ }
+ offset.currentEpoch = topicPartition.leaderEpoch // ensure we set our latest epoch for the partition
+ }
+
+ brokerLoad := brokerLoads[broker]
+ brokerLoad.addLoad(topic, partition, loads.loadType, offset)
+ brokerLoads[broker] = brokerLoad
+ }
+ }
+ }
+
+ return brokerLoads
+}
+
+// The result of ListOffsets or OffsetForLeaderEpoch for an individual
+// partition.
+type loadedOffset struct {
+ topic string
+ partition int32
+
+ // The following three are potentially unset if the error is non-nil
+ // and not ErrDataLoss; these are what we loaded.
+ cursor *cursor
+ offset int64
+ leaderEpoch int32
+
+ // Any error encountered for loading this partition, or for epoch
+ // loading, potentially ErrDataLoss. If this error is not retryable, we
+ // avoid reloading the offset and instead inject a fake partition for
+ // PollFetches containing this error.
+ err error
+
+ // The original request.
+ request offsetLoad
+}
+
+// The results of ListOffsets or OffsetForLeaderEpoch for an individual broker.
+type loadedOffsets struct {
+ broker int32
+ loaded []loadedOffset
+ loadType listOrEpochLoadType
+}
+
+func (l *loadedOffsets) add(a loadedOffset) { l.loaded = append(l.loaded, a) }
+func (l *loadedOffsets) addAll(as []loadedOffset) loadedOffsets {
+ l.loaded = append(l.loaded, as...)
+ return *l
+}
+
+func (cl *Client) listOffsetsForBrokerLoad(ctx context.Context, broker *broker, load offsetLoadMap, tps *topicsPartitions, results chan<- loadedOffsets) {
+ loaded := loadedOffsets{broker: broker.meta.NodeID, loadType: loadTypeList}
+
+ req1, req2 := load.buildListReq(cl.cfg.isolationLevel)
+ var (
+ wg sync.WaitGroup
+ kresp2 kmsg.Response
+ err2 error
+ )
+ if req2 != nil {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ kresp2, err2 = broker.waitResp(ctx, req2)
+ }()
+ }
+ kresp, err := broker.waitResp(ctx, req1)
+ wg.Wait()
+ if err != nil || err2 != nil {
+ results <- loaded.addAll(load.errToLoaded(err))
+ return
+ }
+
+ topics := tps.load()
+ resp := kresp.(*kmsg.ListOffsetsResponse)
+
+ // If we issued a second req to check that an exact offset is in
+ // bounds, then regrettably for safety, we have to ensure that the
+ // shapes of both responses match, and the topic & partition at each
+ // index matches. Anything that does not match is skipped (and would be
+ // a bug from Kafka), and we at the end return UnknownTopicOrPartition.
+ var resp2 *kmsg.ListOffsetsResponse
+ if req2 != nil {
+ resp2 = kresp2.(*kmsg.ListOffsetsResponse)
+ for _, r := range []*kmsg.ListOffsetsResponse{
+ resp,
+ resp2,
+ } {
+ ts := r.Topics
+ sort.Slice(ts, func(i, j int) bool {
+ return ts[i].Topic < ts[j].Topic
+ })
+ for i := range ts {
+ ps := ts[i].Partitions
+ sort.Slice(ps, func(i, j int) bool {
+ return ps[i].Partition < ps[j].Partition
+ })
+ }
+ }
+
+ lt := resp.Topics
+ rt := resp2.Topics
+ lkeept := lt[:0]
+ rkeept := rt[:0]
+ // Over each response, we only keep the topic if the topics match.
+ for len(lt) > 0 && len(rt) > 0 {
+ if lt[0].Topic < rt[0].Topic {
+ lt = lt[1:]
+ continue
+ }
+ if rt[0].Topic < lt[0].Topic {
+ rt = rt[1:]
+ continue
+ }
+ // As well, for topics that match, we only keep
+ // partitions that match. In this case, we also want
+ // both partitions to be error free, otherwise we keep
+ // an error on both. If one has old style offsets,
+ // both must.
+ lp := lt[0].Partitions
+ rp := rt[0].Partitions
+ lkeepp := lp[:0]
+ rkeepp := rp[:0]
+ for len(lp) > 0 && len(rp) > 0 {
+ if lp[0].Partition < rp[0].Partition {
+ lp = lp[1:]
+ continue
+ }
+ if rp[0].Partition < lp[0].Partition {
+ rp = rp[1:]
+ continue
+ }
+ if len(lp[0].OldStyleOffsets) > 0 && len(rp[0].OldStyleOffsets) == 0 ||
+ len(lp[0].OldStyleOffsets) == 0 && len(rp[0].OldStyleOffsets) > 0 {
+ lp = lp[1:]
+ rp = rp[1:]
+ continue
+ }
+ if lp[0].ErrorCode != 0 {
+ rp[0].ErrorCode = lp[0].ErrorCode
+ } else if rp[0].ErrorCode != 0 {
+ lp[0].ErrorCode = rp[0].ErrorCode
+ }
+ lkeepp = append(lkeepp, lp[0])
+ rkeepp = append(rkeepp, rp[0])
+ lp = lp[1:]
+ rp = rp[1:]
+ }
+ // Now we update the partitions in the topic we are
+ // keeping, and keep our topic.
+ lt[0].Partitions = lkeepp
+ rt[0].Partitions = rkeepp
+ lkeept = append(lkeept, lt[0])
+ rkeept = append(rkeept, rt[0])
+ lt = lt[1:]
+ rt = rt[1:]
+ }
+ // Finally, update each response with the topics we kept. The
+ // shapes and indices are the same.
+ resp.Topics = lkeept
+ resp2.Topics = rkeept
+ }
+
+ poffset := func(p *kmsg.ListOffsetsResponseTopicPartition) int64 {
+ offset := p.Offset
+ if len(p.OldStyleOffsets) > 0 {
+ offset = p.OldStyleOffsets[0] // list offsets v0
+ }
+ return offset
+ }
+
+ for i, rTopic := range resp.Topics {
+ topic := rTopic.Topic
+ loadParts, ok := load[topic]
+ if !ok {
+ continue // should not happen: kafka replied with something we did not ask for
+ }
+
+ topicPartitions := topics.loadTopic(topic) // must be non-nil at this point
+ for j, rPartition := range rTopic.Partitions {
+ partition := rPartition.Partition
+ loadPart, ok := loadParts[partition]
+ if !ok {
+ continue // should not happen: kafka replied with something we did not ask for
+ }
+
+ if err := kerr.ErrorForCode(rPartition.ErrorCode); err != nil {
+ loaded.add(loadedOffset{
+ topic: topic,
+ partition: partition,
+ err: err,
+ request: loadPart,
+ })
+ continue // partition err: handled in results
+ }
+
+ if partition < 0 || partition >= int32(len(topicPartitions.partitions)) {
+ continue // should not happen: we have not seen this partition from a metadata response
+ }
+ topicPartition := topicPartitions.partitions[partition]
+
+ delete(loadParts, partition)
+ if len(loadParts) == 0 {
+ delete(load, topic)
+ }
+
+ offset := poffset(&rPartition)
+ end := func() int64 { return poffset(&resp2.Topics[i].Partitions[j]) }
+
+ // We ensured the resp2 shape is as we want and has no
+ // error, so resp2 lookups are safe.
+ if loadPart.afterMilli {
+ // If after a milli, if the milli is after the
+ // end of a partition, the offset is -1. We use
+ // our end offset request: anything after the
+ // end offset *now* is after our milli.
+ if offset == -1 {
+ offset = end()
+ }
+ } else if loadPart.at >= 0 {
+ // If an exact offset, we listed start and end.
+ // We validate the offset is within bounds.
+ end := end()
+ want := loadPart.at + loadPart.relative
+ if want >= offset {
+ offset = want
+ }
+ if want >= end {
+ offset = end
+ }
+ } else if loadPart.at == -2 && loadPart.relative > 0 {
+ // Relative to the start: both start & end were
+ // issued, and we bound to the end.
+ offset += loadPart.relative
+ if end := end(); offset >= end {
+ offset = end
+ }
+ } else if loadPart.at == -1 && loadPart.relative < 0 {
+ // Relative to the end: both start & end were
+ // issued, offset is currently the start, so we
+ // set to the end and then bound to the start.
+ start := offset
+ offset = end()
+ offset += loadPart.relative
+ if offset <= start {
+ offset = start
+ }
+ }
+ if offset < 0 {
+ offset = 0 // sanity
+ }
+
+ loaded.add(loadedOffset{
+ topic: topic,
+ partition: partition,
+ cursor: topicPartition.cursor,
+ offset: offset,
+ leaderEpoch: rPartition.LeaderEpoch,
+ request: loadPart,
+ })
+ }
+ }
+
+ results <- loaded.addAll(load.errToLoaded(kerr.UnknownTopicOrPartition))
+}
+
+func (*Client) loadEpochsForBrokerLoad(ctx context.Context, broker *broker, load offsetLoadMap, tps *topicsPartitions, results chan<- loadedOffsets) {
+ loaded := loadedOffsets{broker: broker.meta.NodeID, loadType: loadTypeEpoch}
+
+ kresp, err := broker.waitResp(ctx, load.buildEpochReq())
+ if err != nil {
+ results <- loaded.addAll(load.errToLoaded(err))
+ return
+ }
+
+ // If the version is < 2, we are speaking to an old broker. We should
+ // not have an old version, but we could have spoken to a new broker
+ // first then an old broker in the middle of a broker roll. For now, we
+ // will just loop retrying until the broker is upgraded.
+
+ topics := tps.load()
+ resp := kresp.(*kmsg.OffsetForLeaderEpochResponse)
+ for _, rTopic := range resp.Topics {
+ topic := rTopic.Topic
+ loadParts, ok := load[topic]
+ if !ok {
+ continue // should not happen: kafka replied with something we did not ask for
+ }
+
+ topicPartitions := topics.loadTopic(topic) // must be non-nil at this point
+ for _, rPartition := range rTopic.Partitions {
+ partition := rPartition.Partition
+ loadPart, ok := loadParts[partition]
+ if !ok {
+ continue // should not happen: kafka replied with something we did not ask for
+ }
+
+ if err := kerr.ErrorForCode(rPartition.ErrorCode); err != nil {
+ loaded.add(loadedOffset{
+ topic: topic,
+ partition: partition,
+ err: err,
+ request: loadPart,
+ })
+ continue // partition err: handled in results
+ }
+
+ if partition < 0 || partition >= int32(len(topicPartitions.partitions)) {
+ continue // should not happen: we have not seen this partition from a metadata response
+ }
+ topicPartition := topicPartitions.partitions[partition]
+
+ delete(loadParts, partition)
+ if len(loadParts) == 0 {
+ delete(load, topic)
+ }
+
+ // Epoch loading never uses noReset nor afterMilli;
+ // this at is the offset we wanted to consume and are
+ // validating.
+ offset := loadPart.at
+ var err error
+ if rPartition.EndOffset < offset {
+ err = &ErrDataLoss{topic, partition, offset, rPartition.EndOffset}
+ offset = rPartition.EndOffset
+ }
+
+ loaded.add(loadedOffset{
+ topic: topic,
+ partition: partition,
+ cursor: topicPartition.cursor,
+ offset: offset,
+ leaderEpoch: rPartition.LeaderEpoch,
+ err: err,
+ request: loadPart,
+ })
+ }
+ }
+
+ results <- loaded.addAll(load.errToLoaded(kerr.UnknownTopicOrPartition))
+}
+
+// In general this returns one request, but if the user is using exact offsets
+// rather than start/end, then we issue both the start and end requests to
+// ensure the user's requested offset is within bounds.
+func (o offsetLoadMap) buildListReq(isolationLevel int8) (r1, r2 *kmsg.ListOffsetsRequest) {
+ r1 = kmsg.NewPtrListOffsetsRequest()
+ r1.ReplicaID = -1
+ r1.IsolationLevel = isolationLevel
+ r1.Topics = make([]kmsg.ListOffsetsRequestTopic, 0, len(o))
+ var createEnd bool
+ for topic, partitions := range o {
+ parts := make([]kmsg.ListOffsetsRequestTopicPartition, 0, len(partitions))
+ for partition, offset := range partitions {
+ // If this is a milli request, we issue two lists: if
+ // our milli is after the end of a partition, we get no
+ // offset back and we want to know the start offset
+ // (since it will be after our milli).
+ //
+ // If we are using an exact offset request, we issue
+ // the start and end so that we can bound the exact
+ // offset to being within that range.
+ //
+ // If we are using a relative offset, we potentially
+ // issue the end request because relative may shift us
+ // too far in the other direction.
+ timestamp := offset.at
+ if offset.afterMilli {
+ createEnd = true
+ } else if timestamp >= 0 || timestamp == -2 && offset.relative > 0 || timestamp == -1 && offset.relative < 0 {
+ timestamp = -2
+ createEnd = true
+ }
+ p := kmsg.NewListOffsetsRequestTopicPartition()
+ p.Partition = partition
+ p.CurrentLeaderEpoch = offset.currentEpoch // KIP-320
+ p.Timestamp = timestamp
+ p.MaxNumOffsets = 1
+
+ parts = append(parts, p)
+ }
+ t := kmsg.NewListOffsetsRequestTopic()
+ t.Topic = topic
+ t.Partitions = parts
+ r1.Topics = append(r1.Topics, t)
+ }
+
+ if createEnd {
+ r2 = kmsg.NewPtrListOffsetsRequest()
+ *r2 = *r1
+ r2.Topics = append([]kmsg.ListOffsetsRequestTopic(nil), r1.Topics...)
+ for i := range r1.Topics {
+ l := &r2.Topics[i]
+ r := &r1.Topics[i]
+ *l = *r
+ l.Partitions = append([]kmsg.ListOffsetsRequestTopicPartition(nil), r.Partitions...)
+ for i := range l.Partitions {
+ l.Partitions[i].Timestamp = -1
+ }
+ }
+ }
+
+ return r1, r2
+}
+
+func (o offsetLoadMap) buildEpochReq() *kmsg.OffsetForLeaderEpochRequest {
+ req := kmsg.NewPtrOffsetForLeaderEpochRequest()
+ req.ReplicaID = -1
+ req.Topics = make([]kmsg.OffsetForLeaderEpochRequestTopic, 0, len(o))
+ for topic, partitions := range o {
+ parts := make([]kmsg.OffsetForLeaderEpochRequestTopicPartition, 0, len(partitions))
+ for partition, offset := range partitions {
+ p := kmsg.NewOffsetForLeaderEpochRequestTopicPartition()
+ p.Partition = partition
+ p.CurrentLeaderEpoch = offset.currentEpoch
+ p.LeaderEpoch = offset.epoch
+ parts = append(parts, p)
+ }
+ t := kmsg.NewOffsetForLeaderEpochRequestTopic()
+ t.Topic = topic
+ t.Partitions = parts
+ req.Topics = append(req.Topics, t)
+ }
+ return req
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_direct.go b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_direct.go
new file mode 100644
index 0000000000000..bf42dbcae4e8c
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_direct.go
@@ -0,0 +1,159 @@
+package kgo
+
+type directConsumer struct {
+ cfg *cfg
+ tps *topicsPartitions // data for topics that the user assigned
+ using mtmps // topics we are currently using
+ m mtmps // mirrors cfg.topics and cfg.partitions, but can change with Purge or Add
+ ps map[string]map[int32]Offset // mirrors cfg.partitions, changed in Purge or Add
+ reSeen map[string]bool // topics we evaluated against regex, and whether we want them or not
+}
+
+func (c *consumer) initDirect() {
+ d := &directConsumer{
+ cfg: &c.cl.cfg,
+ tps: newTopicsPartitions(),
+ reSeen: make(map[string]bool),
+ using: make(mtmps),
+ m: make(mtmps),
+ ps: make(map[string]map[int32]Offset),
+ }
+ c.d = d
+
+ if d.cfg.regex {
+ return
+ }
+
+ var topics []string
+ for topic, partitions := range d.cfg.partitions {
+ topics = append(topics, topic)
+ for partition := range partitions {
+ d.m.add(topic, partition)
+ }
+ p := make(map[int32]Offset, len(partitions))
+ for partition, offset := range partitions {
+ p[partition] = offset
+ }
+ d.ps[topic] = p
+ }
+ for topic := range d.cfg.topics {
+ topics = append(topics, topic)
+ d.m.addt(topic)
+ }
+ d.tps.storeTopics(topics) // prime topics to load if non-regex (this is of no benefit if regex)
+}
+
+// For SetOffsets, unlike the group consumer, we just blindly translate the
+// input EpochOffsets into Offsets, and those will be set directly.
+func (*directConsumer) getSetAssigns(setOffsets map[string]map[int32]EpochOffset) (assigns map[string]map[int32]Offset) {
+ assigns = make(map[string]map[int32]Offset)
+ for topic, partitions := range setOffsets {
+ set := make(map[int32]Offset)
+ for partition, eo := range partitions {
+ set[partition] = Offset{
+ at: eo.Offset,
+ epoch: eo.Epoch,
+ }
+ }
+ assigns[topic] = set
+ }
+ return assigns
+}
+
+// findNewAssignments returns new partitions to consume at given offsets
+// based off the current topics.
+func (d *directConsumer) findNewAssignments() map[string]map[int32]Offset {
+ topics := d.tps.load()
+
+ var rns reNews
+ if d.cfg.regex {
+ defer rns.log(d.cfg)
+ }
+
+ toUse := make(map[string]map[int32]Offset, 10)
+ for topic, topicPartitions := range topics {
+ var useTopic bool
+ if d.cfg.regex {
+ want, seen := d.reSeen[topic]
+ if !seen {
+ for rawRe, re := range d.cfg.topics {
+ if want = re.MatchString(topic); want {
+ rns.add(rawRe, topic)
+ break
+ }
+ }
+ if !want {
+ rns.skip(topic)
+ }
+ d.reSeen[topic] = want
+ }
+ useTopic = want
+ } else {
+ useTopic = d.m.onlyt(topic)
+ }
+
+ // If the above detected that we want to keep this topic, we
+ // set all partitions as usable.
+ //
+ // For internal partitions, we only allow consuming them if
+ // the topic is explicitly specified.
+ if useTopic {
+ partitions := topicPartitions.load()
+ if d.cfg.regex && partitions.isInternal || len(partitions.partitions) == 0 {
+ continue
+ }
+ toUseTopic := make(map[int32]Offset, len(partitions.partitions))
+ for partition := range partitions.partitions {
+ toUseTopic[int32(partition)] = d.cfg.resetOffset
+ }
+ toUse[topic] = toUseTopic
+ }
+
+ // Lastly, if this topic has some specific partitions pinned,
+ // we set those. We only use partitions from topics that have
+ // not been purged.
+ for topic := range d.m {
+ for partition, offset := range d.ps[topic] {
+ toUseTopic, exists := toUse[topic]
+ if !exists {
+ toUseTopic = make(map[int32]Offset, 10)
+ toUse[topic] = toUseTopic
+ }
+ toUseTopic[partition] = offset
+ }
+ }
+ }
+
+ // With everything we want to consume, remove what we are already.
+ for topic, partitions := range d.using {
+ toUseTopic, exists := toUse[topic]
+ if !exists {
+ continue // metadata update did not return this topic (regex or failing load)
+ }
+ for partition := range partitions {
+ delete(toUseTopic, partition)
+ }
+ if len(toUseTopic) == 0 {
+ delete(toUse, topic)
+ }
+ }
+
+ if len(toUse) == 0 {
+ return nil
+ }
+
+ // Finally, toUse contains new partitions that we must consume.
+ // Add them to our using map and assign them.
+ for topic, partitions := range toUse {
+ topicUsing, exists := d.using[topic]
+ if !exists {
+ topicUsing = make(map[int32]struct{})
+ d.using[topic] = topicUsing
+ }
+ for partition := range partitions {
+ topicUsing[partition] = struct{}{}
+ }
+ }
+
+ return toUse
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_group.go b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_group.go
new file mode 100644
index 0000000000000..c1946eb40e72f
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_group.go
@@ -0,0 +1,2908 @@
+package kgo
+
+import (
+ "bytes"
+ "context"
+ "errors"
+ "fmt"
+ "sort"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+type groupConsumer struct {
+ c *consumer // used to change consumer state; generally c.mu is grabbed on access
+ cl *Client // used for running requests / adding to topics map
+ cfg *cfg
+
+ ctx context.Context
+ cancel func()
+ manageDone chan struct{} // closed once when the manage goroutine quits
+
+ cooperative atomicBool // true if the group balancer chosen during Join is cooperative
+
+ // The data for topics that the user assigned. Metadata updates the
+ // atomic.Value in each pointer atomically. If we are consuming via
+ // regex, metadata grabs the lock to add new topics.
+ tps *topicsPartitions
+
+ reSeen map[string]bool // topics we evaluated against regex, and whether we want them or not
+
+ // Full lock grabbed in CommitOffsetsSync, read lock grabbed in
+ // CommitOffsets, this lock ensures that only one sync commit can
+ // happen at once, and if it is happening, no other commit can be
+ // happening.
+ syncCommitMu sync.RWMutex
+
+ rejoinCh chan string // cap 1; sent to if subscription changes (regex)
+
+ // For EOS, before we commit, we force a heartbeat. If the client and
+ // group member are both configured properly, then the transactional
+ // timeout will be less than the session timeout. By forcing a
+ // heartbeat before the commit, if the heartbeat was successful, then
+ // we ensure that we will complete the transaction within the group
+ // session, meaning we will not commit after the group has rebalanced.
+ heartbeatForceCh chan func(error)
+
+ // The following two are only updated in the manager / join&sync loop
+ // The nowAssigned map is read when commits fail: if the commit fails
+ // with ILLEGAL_GENERATION and it contains only partitions that are in
+ // nowAssigned, we re-issue.
+ lastAssigned map[string][]int32
+ nowAssigned amtps
+
+ // Fetching ensures we continue fetching offsets across cooperative
+ // rebalance if an offset fetch returns early due to an immediate
+ // rebalance. See the large comment on adjustCooperativeFetchOffsets
+ // for more details.
+ //
+ // This is modified only in that function, or in the manage loop on a
+ // hard error once the heartbeat/fetch has returned.
+ fetching map[string]map[int32]struct{}
+
+ // onFetchedMu ensures we do not call onFetched nor adjustOffsets
+ // concurrent with onRevoked.
+ //
+ // The group session itself ensures that OnPartitions functions are
+ // serial, but offset fetching is concurrent with heartbeating and can
+ // finish before or after heartbeating has already detected a revoke.
+ // To make user lives easier, we guarantee that offset fetch callbacks
+ // cannot be concurrent with onRevoked with this mu. If fetch callbacks
+ // are present, we hook this mu into onRevoked, and we grab it in the
+ // locations fetch callbacks are called. We only have to worry about
+ // onRevoked because fetching offsets occurs after onAssigned, and
+ // onLost happens after fetching offsets is done.
+ onFetchedMu sync.Mutex
+
+ // leader is whether we are the leader right now. This is set to false
+ //
+ // - set to false at the beginning of a join group session
+ // - set to true if join group response indicates we are leader
+ // - read on metadata updates in findNewAssignments
+ leader atomicBool
+
+ // Set to true when ending a transaction committing transaction
+ // offsets, and then set to false immediately after before calling
+ // EndTransaction.
+ offsetsAddedToTxn bool
+
+ // If we are leader, then other members may express interest to consume
+ // topics that we are not interested in consuming. We track the entire
+ // group's topics in external, and our fetchMetadata loop uses this.
+ // We store this as a pointer for address comparisons.
+ external atomic.Value // *groupExternal
+
+ // See the big comment on `commit`. If we allow committing between
+ // join&sync, we occasionally see RebalanceInProgress or
+ // IllegalGeneration errors while cooperative consuming.
+ noCommitDuringJoinAndSync sync.RWMutex
+
+ //////////////
+ // mu block //
+ //////////////
+ mu sync.Mutex
+
+ // using is updated when finding new assignments, we always add to this
+ // if we want to consume a topic (or see there are more potential
+ // partitions). Only the leader can trigger a new group session if there
+ // are simply more partitions for existing topics.
+ //
+ // This is read when joining a group or leaving a group.
+ using map[string]int // topics *we* are currently using => # partitions known in that topic
+
+ // uncommitted is read and updated all over:
+ // - updated before PollFetches returns
+ // - updated when directly setting offsets (to rewind, for transactions)
+ // - emptied when leaving a group
+ // - updated when revoking
+ // - updated after fetching offsets once we receive our group assignment
+ // - updated after we commit
+ // - read when getting uncommitted or committed
+ uncommitted uncommitted
+
+ // memberID and generation are written to in the join and sync loop,
+ // and mostly read within that loop. This can be read during commits,
+ // which can happy any time. It is **recommended** to be done within
+ // the context of a group session, but (a) users may have some unique
+ // use cases, and (b) the onRevoke hook may take longer than a user
+ // expects, which would rotate a session.
+ memberGen groupMemberGen
+
+ // commitCancel and commitDone are set under mu before firing off an
+ // async commit request. If another commit happens, it cancels the
+ // prior commit, waits for the prior to be done, and then starts its
+ // own.
+ commitCancel func()
+ commitDone chan struct{}
+
+ // blockAuto is set and cleared in CommitOffsets{,Sync} to block
+ // autocommitting if autocommitting is active. This ensures that an
+ // autocommit does not cancel the user's manual commit.
+ blockAuto bool
+
+ // We set this once to manage the group lifecycle once.
+ managing bool
+
+ dying bool // set when closing, read in findNewAssignments
+ left chan struct{}
+ leaveErr error // set before left is closed
+}
+
+type groupMemberGen struct {
+ v atomic.Value // *groupMemberGenT
+}
+
+type groupMemberGenT struct {
+ memberID string
+ generation int32
+}
+
+func (g *groupMemberGen) memberID() string {
+ memberID, _ := g.load()
+ return memberID
+}
+
+func (g *groupMemberGen) generation() int32 {
+ _, generation := g.load()
+ return generation
+}
+
+func (g *groupMemberGen) load() (memberID string, generation int32) {
+ v := g.v.Load()
+ if v == nil {
+ return "", -1
+ }
+ t := v.(*groupMemberGenT)
+ return t.memberID, t.generation
+}
+
+func (g *groupMemberGen) store(memberID string, generation int32) {
+ g.v.Store(&groupMemberGenT{memberID, generation})
+}
+
+func (g *groupMemberGen) storeMember(memberID string) {
+ g.store(memberID, g.generation())
+}
+
+// LeaveGroup leaves a group. Close automatically leaves the group, so this is
+// only necessary to call if you plan to leave the group but continue to use
+// the client. If a rebalance is in progress, this function waits for the
+// rebalance to complete before the group can be left. This is necessary to
+// allow you to safely issue one final offset commit in OnPartitionsRevoked. If
+// you have overridden the default revoke, you must manually commit offsets
+// before leaving the group.
+//
+// If you have configured the group with an InstanceID, this does not leave the
+// group. With instance IDs, it is expected that clients will restart and
+// re-use the same instance ID. To leave a group using an instance ID, you must
+// manually issue a kmsg.LeaveGroupRequest or use an external tool (kafka
+// scripts or kcl).
+//
+// It is recommended to use LeaveGroupContext to see if the leave was
+// successful.
+func (cl *Client) LeaveGroup() {
+ cl.LeaveGroupContext(cl.ctx)
+}
+
+// LeaveGroup leaves a group. Close automatically leaves the group, so this is
+// only necessary to call if you plan to leave the group but continue to use
+// the client. If a rebalance is in progress, this function waits for the
+// rebalance to complete before the group can be left. This is necessary to
+// allow you to safely issue one final offset commit in OnPartitionsRevoked. If
+// you have overridden the default revoke, you must manually commit offsets
+// before leaving the group.
+//
+// The context can be used to avoid waiting for the client to leave the group.
+// Not waiting may result in your client being stuck in the group and the
+// partitions this client was consuming being stuck until the session timeout.
+// This function returns any leave group error or context cancel error. If the
+// context is nil, this immediately leaves the group and does not wait and does
+// not return an error.
+//
+// If you have configured the group with an InstanceID, this does not leave the
+// group. With instance IDs, it is expected that clients will restart and
+// re-use the same instance ID. To leave a group using an instance ID, you must
+// manually issue a kmsg.LeaveGroupRequest or use an external tool (kafka
+// scripts or kcl).
+func (cl *Client) LeaveGroupContext(ctx context.Context) error {
+ c := &cl.consumer
+ if c.g == nil {
+ return nil
+ }
+ var immediate bool
+ if ctx == nil {
+ var cancel func()
+ ctx, cancel = context.WithCancel(context.Background())
+ cancel()
+ immediate = true
+ }
+
+ go func() {
+ c.waitAndAddRebalance()
+ c.mu.Lock() // lock for assign
+ c.assignPartitions(nil, assignInvalidateAll, nil, "invalidating all assignments in LeaveGroup")
+ c.g.leave(ctx)
+ c.mu.Unlock()
+ c.unaddRebalance()
+ }()
+
+ select {
+ case <-ctx.Done():
+ if immediate {
+ return nil
+ }
+ return ctx.Err()
+ case <-c.g.left:
+ return c.g.leaveErr
+ }
+}
+
+// GroupMetadata returns the current group member ID and generation, or an
+// empty string and -1 if not in the group.
+func (cl *Client) GroupMetadata() (string, int32) {
+ g := cl.consumer.g
+ if g == nil {
+ return "", -1
+ }
+ return g.memberGen.load()
+}
+
+func (c *consumer) initGroup() {
+ ctx, cancel := context.WithCancel(c.cl.ctx)
+ g := &groupConsumer{
+ c: c,
+ cl: c.cl,
+ cfg: &c.cl.cfg,
+
+ ctx: ctx,
+ cancel: cancel,
+
+ reSeen: make(map[string]bool),
+
+ manageDone: make(chan struct{}),
+ tps: newTopicsPartitions(),
+ rejoinCh: make(chan string, 1),
+ heartbeatForceCh: make(chan func(error)),
+ using: make(map[string]int),
+
+ left: make(chan struct{}),
+ }
+ c.g = g
+ if !g.cfg.setCommitCallback {
+ g.cfg.commitCallback = g.defaultCommitCallback
+ }
+
+ if g.cfg.txnID == nil {
+ // We only override revoked / lost if they were not explicitly
+ // set by options.
+ if !g.cfg.setRevoked {
+ g.cfg.onRevoked = g.defaultRevoke
+ }
+ // For onLost, we do not want to commit in onLost, so we
+ // explicitly set onLost to an empty function to avoid the
+ // fallback to onRevoked.
+ if !g.cfg.setLost {
+ g.cfg.onLost = func(context.Context, *Client, map[string][]int32) {}
+ }
+ } else {
+ g.cfg.autocommitDisable = true
+ }
+
+ for _, logOn := range []struct {
+ name string
+ set *func(context.Context, *Client, map[string][]int32)
+ }{
+ {"OnPartitionsAssigned", &g.cfg.onAssigned},
+ {"OnPartitionsRevoked", &g.cfg.onRevoked},
+ {"OnPartitionsLost", &g.cfg.onLost},
+ } {
+ user := *logOn.set
+ name := logOn.name
+ *logOn.set = func(ctx context.Context, cl *Client, m map[string][]int32) {
+ var ctxExpired bool
+ select {
+ case <-ctx.Done():
+ ctxExpired = true
+ default:
+ }
+ if ctxExpired {
+ cl.cfg.logger.Log(LogLevelDebug, "entering "+name, "with", m, "context_expired", ctxExpired)
+ } else {
+ cl.cfg.logger.Log(LogLevelDebug, "entering "+name, "with", m)
+ }
+ if user != nil {
+ dup := make(map[string][]int32)
+ for k, vs := range m {
+ dup[k] = append([]int32(nil), vs...)
+ }
+ user(ctx, cl, dup)
+ }
+ }
+ }
+
+ if g.cfg.onFetched != nil || g.cfg.adjustOffsetsBeforeAssign != nil {
+ revoked := g.cfg.onRevoked
+ g.cfg.onRevoked = func(ctx context.Context, cl *Client, m map[string][]int32) {
+ g.onFetchedMu.Lock()
+ defer g.onFetchedMu.Unlock()
+ revoked(ctx, cl, m)
+ }
+ }
+
+ // For non-regex topics, we explicitly ensure they exist for loading
+ // metadata. This is of no impact if we are *also* consuming via regex,
+ // but that is no problem.
+ if len(g.cfg.topics) > 0 && !g.cfg.regex {
+ topics := make([]string, 0, len(g.cfg.topics))
+ for topic := range g.cfg.topics {
+ topics = append(topics, topic)
+ }
+ g.tps.storeTopics(topics)
+ }
+}
+
+// Manages the group consumer's join / sync / heartbeat / fetch offset flow.
+//
+// Once a group is assigned, we fire a metadata request for all topics the
+// assignment specified interest in. Only after we finally have some topic
+// metadata do we join the group, and once joined, this management runs in a
+// dedicated goroutine until the group is left.
+func (g *groupConsumer) manage() {
+ defer close(g.manageDone)
+ g.cfg.logger.Log(LogLevelInfo, "beginning to manage the group lifecycle", "group", g.cfg.group)
+ if !g.cfg.autocommitDisable && g.cfg.autocommitInterval > 0 {
+ g.cfg.logger.Log(LogLevelInfo, "beginning autocommit loop", "group", g.cfg.group)
+ go g.loopCommit()
+ }
+
+ var consecutiveErrors int
+ joinWhy := "beginning to manage the group lifecycle"
+ for {
+ if joinWhy == "" {
+ joinWhy = "rejoining from normal rebalance"
+ }
+ err := g.joinAndSync(joinWhy)
+ if err == nil {
+ if joinWhy, err = g.setupAssignedAndHeartbeat(); err != nil {
+ if errors.Is(err, kerr.RebalanceInProgress) {
+ err = nil
+ }
+ }
+ }
+ if err == nil {
+ consecutiveErrors = 0
+ continue
+ }
+ joinWhy = "rejoining after we previously errored and backed off"
+
+ // If the user has BlockPollOnRebalance enabled, we have to
+ // block around the onLost and assigning.
+ g.c.waitAndAddRebalance()
+
+ if errors.Is(err, context.Canceled) && g.cfg.onRevoked != nil {
+ // The cooperative consumer does not revoke everything
+ // while rebalancing, meaning if our context is
+ // canceled, we may have uncommitted data. Rather than
+ // diving into onLost, we should go into onRevoked,
+ // because for the most part, a context cancelation
+ // means we are leaving the group. Going into onRevoked
+ // gives us an opportunity to commit outstanding
+ // offsets. For the eager consumer, since we always
+ // revoke before exiting the heartbeat loop, we do not
+ // really care so much about *needing* to call
+ // onRevoked, but since we are handling this case for
+ // the cooperative consumer we may as well just also
+ // include the eager consumer.
+ g.cfg.onRevoked(g.cl.ctx, g.cl, g.nowAssigned.read())
+ } else {
+ // Any other error is perceived as a fatal error,
+ // and we go into onLost as appropriate.
+ if g.cfg.onLost != nil {
+ g.cfg.onLost(g.cl.ctx, g.cl, g.nowAssigned.read())
+ }
+ g.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookGroupManageError); ok {
+ h.OnGroupManageError(err)
+ }
+ })
+ g.c.addFakeReadyForDraining("", 0, &ErrGroupSession{err}, "notification of group management loop error")
+ }
+
+ // If we are eager, we should have invalidated everything
+ // before getting here, but we do so doubly just in case.
+ //
+ // If we are cooperative, the join and sync could have failed
+ // during the cooperative rebalance where we were still
+ // consuming. We need to invalidate everything. Waiting to
+ // resume from poll is necessary, but the user will likely be
+ // unable to commit.
+ {
+ g.c.mu.Lock()
+ g.c.assignPartitions(nil, assignInvalidateAll, nil, "clearing assignment at end of group management session")
+ g.mu.Lock() // before allowing poll to touch uncommitted, lock the group
+ g.c.mu.Unlock() // now part of poll can continue
+ g.uncommitted = nil
+ g.mu.Unlock()
+
+ g.nowAssigned.store(nil)
+ g.lastAssigned = nil
+ g.fetching = nil
+
+ g.leader.Store(false)
+ g.resetExternal()
+ }
+
+ // Unblock bolling now that we have called onLost and
+ // re-assigned.
+ g.c.unaddRebalance()
+
+ if errors.Is(err, context.Canceled) { // context was canceled, quit now
+ return
+ }
+
+ // Waiting for the backoff is a good time to update our
+ // metadata; maybe the error is from stale metadata.
+ consecutiveErrors++
+ backoff := g.cfg.retryBackoff(consecutiveErrors)
+ g.cfg.logger.Log(LogLevelError, "join and sync loop errored",
+ "group", g.cfg.group,
+ "err", err,
+ "consecutive_errors", consecutiveErrors,
+ "backoff", backoff,
+ )
+ deadline := time.Now().Add(backoff)
+ g.cl.waitmeta(g.ctx, backoff, "waitmeta during join & sync error backoff")
+ after := time.NewTimer(time.Until(deadline))
+ select {
+ case <-g.ctx.Done():
+ after.Stop()
+ return
+ case <-after.C:
+ }
+ }
+}
+
+func (g *groupConsumer) leave(ctx context.Context) {
+ // If g.using is nonzero before this check, then a manage goroutine has
+ // started. If not, it will never start because we set dying.
+ g.mu.Lock()
+ wasDead := g.dying
+ g.dying = true
+ wasManaging := g.managing
+ g.cancel()
+ g.mu.Unlock()
+
+ go func() {
+ if wasManaging {
+ // We want to wait for the manage goroutine to be done
+ // so that we call the user's on{Assign,RevokeLost}.
+ <-g.manageDone
+ }
+ if wasDead {
+ // If we already called leave(), then we just wait for
+ // the prior leave to finish and we avoid re-issuing a
+ // LeaveGroup request.
+ return
+ }
+
+ defer close(g.left)
+
+ if g.cfg.instanceID != nil {
+ return
+ }
+
+ memberID := g.memberGen.memberID()
+ g.cfg.logger.Log(LogLevelInfo, "leaving group",
+ "group", g.cfg.group,
+ "member_id", memberID,
+ )
+ // If we error when leaving, there is not much
+ // we can do. We may as well just return.
+ req := kmsg.NewPtrLeaveGroupRequest()
+ req.Group = g.cfg.group
+ req.MemberID = memberID
+ member := kmsg.NewLeaveGroupRequestMember()
+ member.MemberID = memberID
+ member.Reason = kmsg.StringPtr("client leaving group per normal operation")
+ req.Members = append(req.Members, member)
+
+ resp, err := req.RequestWith(ctx, g.cl)
+ if err != nil {
+ g.leaveErr = err
+ return
+ }
+ g.leaveErr = kerr.ErrorForCode(resp.ErrorCode)
+ }()
+}
+
+// returns the difference of g.nowAssigned and g.lastAssigned.
+func (g *groupConsumer) diffAssigned() (added, lost map[string][]int32) {
+ nowAssigned := g.nowAssigned.clone()
+ if !g.cooperative.Load() {
+ return nowAssigned, nil
+ }
+
+ added = make(map[string][]int32, len(nowAssigned))
+ lost = make(map[string][]int32, len(nowAssigned))
+
+ // First, we diff lasts: any topic in last but not now is lost,
+ // otherwise, (1) new partitions are added, (2) common partitions are
+ // ignored, and (3) partitions no longer in now are lost.
+ lasts := make(map[int32]struct{}, 100)
+ for topic, lastPartitions := range g.lastAssigned {
+ nowPartitions, exists := nowAssigned[topic]
+ if !exists {
+ lost[topic] = lastPartitions
+ continue
+ }
+
+ for _, lastPartition := range lastPartitions {
+ lasts[lastPartition] = struct{}{}
+ }
+
+ // Anything now that does not exist in last is new,
+ // otherwise it is in common and we ignore it.
+ for _, nowPartition := range nowPartitions {
+ if _, exists := lasts[nowPartition]; !exists {
+ added[topic] = append(added[topic], nowPartition)
+ } else {
+ delete(lasts, nowPartition)
+ }
+ }
+
+ // Anything remanining in last does not exist now
+ // and is thus lost.
+ for last := range lasts {
+ lost[topic] = append(lost[topic], last)
+ delete(lasts, last) // reuse lasts
+ }
+ }
+
+ // Finally, any new topics in now assigned are strictly added.
+ for topic, nowPartitions := range nowAssigned {
+ if _, exists := g.lastAssigned[topic]; !exists {
+ added[topic] = nowPartitions
+ }
+ }
+
+ return added, lost
+}
+
+type revokeStage int8
+
+const (
+ revokeLastSession = iota
+ revokeThisSession
+)
+
+// revoke calls onRevoked for partitions that this group member is losing and
+// updates the uncommitted map after the revoke.
+//
+// For eager consumers, this simply revokes g.assigned. This will only be
+// called at the end of a group session.
+//
+// For cooperative consumers, this either
+//
+// (1) if revoking lost partitions from a prior session (i.e., after sync),
+// this revokes the passed in lost
+// (2) if revoking at the end of a session, this revokes topics that the
+// consumer is no longer interested in consuming
+//
+// Lastly, for cooperative consumers, this must selectively delete what was
+// lost from the uncommitted map.
+func (g *groupConsumer) revoke(stage revokeStage, lost map[string][]int32, leaving bool) {
+ g.c.waitAndAddRebalance()
+ defer g.c.unaddRebalance()
+
+ if !g.cooperative.Load() || leaving { // stage == revokeThisSession if not cooperative
+ // If we are an eager consumer, we stop fetching all of our
+ // current partitions as we will be revoking them.
+ g.c.mu.Lock()
+ if leaving {
+ g.c.assignPartitions(nil, assignInvalidateAll, nil, "revoking all assignments because we are leaving the group")
+ } else {
+ g.c.assignPartitions(nil, assignInvalidateAll, nil, "revoking all assignments because we are not cooperative")
+ }
+ g.c.mu.Unlock()
+
+ if !g.cooperative.Load() {
+ g.cfg.logger.Log(LogLevelInfo, "eager consumer revoking prior assigned partitions", "group", g.cfg.group, "revoking", g.nowAssigned.read())
+ } else {
+ g.cfg.logger.Log(LogLevelInfo, "cooperative consumer revoking prior assigned partitions because leaving group", "group", g.cfg.group, "revoking", g.nowAssigned.read())
+ }
+ if g.cfg.onRevoked != nil {
+ g.cfg.onRevoked(g.cl.ctx, g.cl, g.nowAssigned.read())
+ }
+ g.nowAssigned.store(nil)
+ g.lastAssigned = nil
+
+ // After nilling uncommitted here, nothing should recreate
+ // uncommitted until a future fetch after the group is
+ // rejoined. This _can_ be broken with a manual SetOffsets or
+ // with CommitOffsets{,Sync} but we explicitly document not
+ // to do that outside the context of a live group session.
+ g.mu.Lock()
+ g.uncommitted = nil
+ g.mu.Unlock()
+ return
+ }
+
+ switch stage {
+ case revokeLastSession:
+ // we use lost in this case
+
+ case revokeThisSession:
+ // lost is nil for cooperative assigning. Instead, we determine
+ // lost by finding subscriptions we are no longer interested
+ // in. This would be from a user's PurgeConsumeTopics call.
+ //
+ // We just paused metadata, but purging triggers a rebalance
+ // which causes a new metadata request -- in short, this could
+ // be concurrent with a metadata findNewAssignments, so we
+ // lock.
+ g.nowAssigned.write(func(nowAssigned map[string][]int32) {
+ g.mu.Lock()
+ for topic, partitions := range nowAssigned {
+ if _, exists := g.using[topic]; !exists {
+ if lost == nil {
+ lost = make(map[string][]int32)
+ }
+ lost[topic] = partitions
+ delete(nowAssigned, topic)
+ }
+ }
+ g.mu.Unlock()
+ })
+ }
+
+ if len(lost) > 0 {
+ // We must now stop fetching anything we lost and invalidate
+ // any buffered fetches before falling into onRevoked.
+ //
+ // We want to invalidate buffered fetches since they may
+ // contain partitions that we lost, and we do not want a future
+ // poll to return those fetches.
+ lostOffsets := make(map[string]map[int32]Offset, len(lost))
+
+ for lostTopic, lostPartitions := range lost {
+ lostPartitionOffsets := make(map[int32]Offset, len(lostPartitions))
+ for _, lostPartition := range lostPartitions {
+ lostPartitionOffsets[lostPartition] = Offset{}
+ }
+ lostOffsets[lostTopic] = lostPartitionOffsets
+ }
+
+ // We must invalidate before revoking and before updating
+ // uncommitted, because we want any commits in onRevoke to be
+ // for the final polled offsets. We do not want to allow the
+ // logical race of allowing fetches for revoked partitions
+ // after a revoke but before an invalidation.
+ g.c.mu.Lock()
+ g.c.assignPartitions(lostOffsets, assignInvalidateMatching, g.tps, "revoking assignments from cooperative consuming")
+ g.c.mu.Unlock()
+ }
+
+ if len(lost) > 0 || stage == revokeThisSession {
+ if len(lost) == 0 {
+ g.cfg.logger.Log(LogLevelInfo, "cooperative consumer calling onRevoke at the end of a session even though no partitions were lost", "group", g.cfg.group)
+ } else {
+ g.cfg.logger.Log(LogLevelInfo, "cooperative consumer calling onRevoke", "group", g.cfg.group, "lost", lost, "stage", stage)
+ }
+ if g.cfg.onRevoked != nil {
+ g.cfg.onRevoked(g.cl.ctx, g.cl, lost)
+ }
+ }
+
+ if len(lost) == 0 { // if we lost nothing, do nothing
+ return
+ }
+
+ if stage != revokeThisSession { // cooperative consumers rejoin after they revoking what they lost
+ defer g.rejoin("cooperative rejoin after revoking what we lost from a rebalance")
+ }
+
+ // The block below deletes everything lost from our uncommitted map.
+ // All commits should be **completed** by the time this runs. An async
+ // commit can undo what we do below. The default revoke runs a sync
+ // commit.
+ g.mu.Lock()
+ defer g.mu.Unlock()
+ if g.uncommitted == nil {
+ return
+ }
+ for lostTopic, lostPartitions := range lost {
+ uncommittedPartitions := g.uncommitted[lostTopic]
+ if uncommittedPartitions == nil {
+ continue
+ }
+ for _, lostPartition := range lostPartitions {
+ delete(uncommittedPartitions, lostPartition)
+ }
+ if len(uncommittedPartitions) == 0 {
+ delete(g.uncommitted, lostTopic)
+ }
+ }
+ if len(g.uncommitted) == 0 {
+ g.uncommitted = nil
+ }
+}
+
+// assignRevokeSession aids in sequencing prerevoke/assign/revoke.
+type assignRevokeSession struct {
+ prerevokeDone chan struct{}
+ assignDone chan struct{}
+ revokeDone chan struct{}
+}
+
+func newAssignRevokeSession() *assignRevokeSession {
+ return &assignRevokeSession{
+ prerevokeDone: make(chan struct{}),
+ assignDone: make(chan struct{}),
+ revokeDone: make(chan struct{}),
+ }
+}
+
+// For cooperative consumers, the first thing a cooperative consumer does is to
+// diff its last assignment and its new assignment and revoke anything lost.
+// We call this a "prerevoke".
+func (s *assignRevokeSession) prerevoke(g *groupConsumer, lost map[string][]int32) <-chan struct{} {
+ go func() {
+ defer close(s.prerevokeDone)
+ if g.cooperative.Load() && len(lost) > 0 {
+ g.revoke(revokeLastSession, lost, false)
+ }
+ }()
+ return s.prerevokeDone
+}
+
+func (s *assignRevokeSession) assign(g *groupConsumer, newAssigned map[string][]int32) <-chan struct{} {
+ go func() {
+ defer close(s.assignDone)
+ <-s.prerevokeDone
+ if g.cfg.onAssigned != nil {
+ // We always call on assigned, even if nothing new is
+ // assigned. This allows consumers to know that
+ // assignment is done and do setup logic.
+ //
+ // If configured, we have to block polling.
+ g.c.waitAndAddRebalance()
+ defer g.c.unaddRebalance()
+ g.cfg.onAssigned(g.cl.ctx, g.cl, newAssigned)
+ }
+ }()
+ return s.assignDone
+}
+
+// At the end of a group session, before we leave the heartbeat loop, we call
+// revoke. For non-cooperative consumers, this revokes everything in the
+// current session, and before revoking, we invalidate all partitions. For the
+// cooperative consumer, this does nothing but does notify the client that a
+// revoke has begun / the group session is ending.
+//
+// This may not run before returning from the heartbeat loop: if we encounter a
+// fatal error, we return before revoking so that we can instead call onLost in
+// the manage loop.
+func (s *assignRevokeSession) revoke(g *groupConsumer, leaving bool) <-chan struct{} {
+ go func() {
+ defer close(s.revokeDone)
+ <-s.assignDone
+ g.revoke(revokeThisSession, nil, leaving)
+ }()
+ return s.revokeDone
+}
+
+// This chunk of code "pre" revokes lost partitions for the cooperative
+// consumer and then begins heartbeating while fetching offsets. This returns
+// when heartbeating errors (or if fetch offsets errors).
+//
+// Before returning, this function ensures that
+// - onAssigned is complete
+// - which ensures that pre revoking is complete
+// - fetching is complete
+// - heartbeating is complete
+func (g *groupConsumer) setupAssignedAndHeartbeat() (string, error) {
+ type hbquit struct {
+ rejoinWhy string
+ err error
+ }
+ hbErrCh := make(chan hbquit, 1)
+ fetchErrCh := make(chan error, 1)
+
+ s := newAssignRevokeSession()
+ added, lost := g.diffAssigned()
+ g.lastAssigned = g.nowAssigned.clone() // now that we are done with our last assignment, update it per the new assignment
+
+ g.cfg.logger.Log(LogLevelInfo, "new group session begun", "group", g.cfg.group, "added", mtps(added), "lost", mtps(lost))
+ s.prerevoke(g, lost) // for cooperative consumers
+
+ // Since we have joined the group, we immediately begin heartbeating.
+ // This will continue until the heartbeat errors, the group is killed,
+ // or the fetch offsets below errors.
+ ctx, cancel := context.WithCancel(g.ctx)
+ go func() {
+ defer cancel() // potentially kill offset fetching
+ g.cfg.logger.Log(LogLevelInfo, "beginning heartbeat loop", "group", g.cfg.group)
+ rejoinWhy, err := g.heartbeat(fetchErrCh, s)
+ hbErrCh <- hbquit{rejoinWhy, err}
+ }()
+
+ // We immediately begin fetching offsets. We want to wait until the
+ // fetch function returns, since it assumes within it that another
+ // assign cannot happen (it assigns partitions itself). Returning
+ // before the fetch completes would be not good.
+ //
+ // The difference between fetchDone and fetchErrCh is that fetchErrCh
+ // can kill heartbeating, or signal it to continue, while fetchDone
+ // is specifically used for this function's return.
+ fetchDone := make(chan struct{})
+ defer func() { <-fetchDone }()
+
+ // Before we fetch offsets, we wait for the user's onAssign callback to
+ // be done. This ensures a few things:
+ //
+ // * that we wait for for prerevoking to be done, which updates the
+ // uncommitted field. Waiting for that ensures that a rejoin and poll
+ // does not have weird concurrent interaction.
+ //
+ // * that our onLost will not be concurrent with onAssign
+ //
+ // * that the user can start up any per-partition processors necessary
+ // before we begin consuming that partition.
+ //
+ // We especially need to wait here because heartbeating may not
+ // necessarily run onRevoke before returning (because of a fatal
+ // error).
+ s.assign(g, added)
+
+ // If cooperative consuming, we may have to resume fetches. See the
+ // comment on adjustCooperativeFetchOffsets.
+ //
+ // We do this AFTER the user's callback. If we add more partitions
+ // to `added` that are from a previously canceled fetch, we do NOT
+ // want to pass those fetch-resumed partitions to the user callback
+ // again. See #705.
+ if g.cooperative.Load() {
+ added = g.adjustCooperativeFetchOffsets(added, lost)
+ }
+
+ <-s.assignDone
+
+ if len(added) > 0 {
+ go func() {
+ defer close(fetchDone)
+ defer close(fetchErrCh)
+ fetchErrCh <- g.fetchOffsets(ctx, added)
+ }()
+ } else {
+ close(fetchDone)
+ close(fetchErrCh)
+ }
+
+ // Finally, we simply return whatever the heartbeat error is. This will
+ // be the fetch offset error if that function is what killed this.
+
+ done := <-hbErrCh
+ return done.rejoinWhy, done.err
+}
+
+// heartbeat issues heartbeat requests to Kafka for the duration of a group
+// session.
+//
+// This function begins before fetching offsets to allow the consumer's
+// onAssigned to be called before fetching. If the eventual offset fetch
+// errors, we continue heartbeating until onRevoked finishes and our metadata
+// is updated. If the error is not RebalanceInProgress, we return immediately.
+//
+// If the offset fetch is successful, then we basically sit in this function
+// until a heartbeat errors or we, being the leader, decide to re-join.
+func (g *groupConsumer) heartbeat(fetchErrCh <-chan error, s *assignRevokeSession) (string, error) {
+ ticker := time.NewTicker(g.cfg.heartbeatInterval)
+ defer ticker.Stop()
+
+ // We issue one heartbeat quickly if we are cooperative because
+ // cooperative consumers rejoin the group immediately, and we want to
+ // detect that in 500ms rather than 3s.
+ var cooperativeFastCheck <-chan time.Time
+ if g.cooperative.Load() {
+ cooperativeFastCheck = time.After(500 * time.Millisecond)
+ }
+
+ var metadone, revoked <-chan struct{}
+ var heartbeat, didMetadone, didRevoke bool
+ var rejoinWhy string
+ var lastErr error
+
+ ctxCh := g.ctx.Done()
+
+ for {
+ var err error
+ var force func(error)
+ heartbeat = false
+ select {
+ case <-cooperativeFastCheck:
+ heartbeat = true
+ case <-ticker.C:
+ heartbeat = true
+ case force = <-g.heartbeatForceCh:
+ heartbeat = true
+ case rejoinWhy = <-g.rejoinCh:
+ // If a metadata update changes our subscription,
+ // we just pretend we are rebalancing.
+ g.cfg.logger.Log(LogLevelInfo, "forced rejoin quitting heartbeat loop", "why", rejoinWhy)
+ err = kerr.RebalanceInProgress
+ case err = <-fetchErrCh:
+ fetchErrCh = nil
+ case <-metadone:
+ metadone = nil
+ didMetadone = true
+ case <-revoked:
+ revoked = nil
+ didRevoke = true
+ case <-ctxCh:
+ // Even if the group is left, we need to wait for our
+ // revoke to finish before returning, otherwise the
+ // manage goroutine will race with us setting
+ // nowAssigned.
+ ctxCh = nil
+ err = context.Canceled
+ }
+
+ if heartbeat {
+ g.cfg.logger.Log(LogLevelDebug, "heartbeating", "group", g.cfg.group)
+ req := kmsg.NewPtrHeartbeatRequest()
+ req.Group = g.cfg.group
+ memberID, generation := g.memberGen.load()
+ req.Generation = generation
+ req.MemberID = memberID
+ req.InstanceID = g.cfg.instanceID
+ var resp *kmsg.HeartbeatResponse
+ if resp, err = req.RequestWith(g.ctx, g.cl); err == nil {
+ err = kerr.ErrorForCode(resp.ErrorCode)
+ }
+ g.cfg.logger.Log(LogLevelDebug, "heartbeat complete", "group", g.cfg.group, "err", err)
+ if force != nil {
+ force(err)
+ }
+ }
+
+ // The first error either triggers a clean revoke and metadata
+ // update or it returns immediately. If we triggered the
+ // revoke, we wait for it to complete regardless of any future
+ // error.
+ if didMetadone && didRevoke {
+ return rejoinWhy, lastErr
+ }
+
+ if err == nil {
+ continue
+ }
+
+ if lastErr == nil {
+ g.cfg.logger.Log(LogLevelInfo, "heartbeat errored", "group", g.cfg.group, "err", err)
+ } else {
+ g.cfg.logger.Log(LogLevelInfo, "heartbeat errored again while waiting for user revoke to finish", "group", g.cfg.group, "err", err)
+ }
+
+ // Since we errored, we must revoke.
+ if !didRevoke && revoked == nil {
+ // If our error is not from rebalancing, then we
+ // encountered IllegalGeneration or UnknownMemberID or
+ // our context closed all of which are unexpected and
+ // unrecoverable.
+ //
+ // We return early rather than revoking and updating
+ // metadata; the groupConsumer's manage function will
+ // call onLost with all partitions.
+ //
+ // setupAssignedAndHeartbeat still waits for onAssigned
+ // to be done so that we avoid calling onLost
+ // concurrently.
+ if !errors.Is(err, kerr.RebalanceInProgress) && revoked == nil {
+ return "", err
+ }
+
+ // Now we call the user provided revoke callback, even
+ // if cooperative: if cooperative, this only revokes
+ // partitions we no longer want to consume.
+ //
+ // If the err is context.Canceled, the group is being
+ // left and we revoke everything.
+ revoked = s.revoke(g, errors.Is(err, context.Canceled))
+ }
+ // Since we errored, while waiting for the revoke to finish, we
+ // update our metadata. A leader may have re-joined with new
+ // metadata, and we want the update.
+ if !didMetadone && metadone == nil {
+ waited := make(chan struct{})
+ metadone = waited
+ go func() {
+ g.cl.waitmeta(g.ctx, g.cfg.sessionTimeout, "waitmeta after heartbeat error")
+ close(waited)
+ }()
+ }
+
+ // We always save the latest error; generally this should be
+ // REBALANCE_IN_PROGRESS, but if the revoke takes too long,
+ // Kafka may boot us and we will get a different error.
+ lastErr = err
+ }
+}
+
+// ForceRebalance quits a group member's heartbeat loop so that the member
+// rejoins with a JoinGroupRequest.
+//
+// This function is only useful if you either (a) know that the group member is
+// a leader, and want to force a rebalance for any particular reason, or (b)
+// are using a custom group balancer, and have changed the metadata that will
+// be returned from its JoinGroupMetadata method. This function has no other
+// use; see KIP-568 for more details around this function's motivation.
+//
+// If neither of the cases above are true (this member is not a leader, and the
+// join group metadata has not changed), then Kafka will not actually trigger a
+// rebalance and will instead reply to the member with its current assignment.
+func (cl *Client) ForceRebalance() {
+ if g := cl.consumer.g; g != nil {
+ g.rejoin("rejoin from ForceRebalance")
+ }
+}
+
+// rejoin is called after a cooperative member revokes what it lost at the
+// beginning of a session, or if we are leader and detect new partitions to
+// consume.
+func (g *groupConsumer) rejoin(why string) {
+ select {
+ case g.rejoinCh <- why:
+ default:
+ }
+}
+
+// Joins and then syncs, issuing the two slow requests in goroutines to allow
+// for group cancelation to return early.
+func (g *groupConsumer) joinAndSync(joinWhy string) error {
+ g.noCommitDuringJoinAndSync.Lock()
+ g.cfg.logger.Log(LogLevelDebug, "blocking commits from join&sync")
+ defer g.noCommitDuringJoinAndSync.Unlock()
+ defer g.cfg.logger.Log(LogLevelDebug, "unblocking commits from join&sync")
+
+ g.cfg.logger.Log(LogLevelInfo, "joining group", "group", g.cfg.group)
+ g.leader.Store(false)
+ g.getAndResetExternalRejoin()
+ defer func() {
+ // If we are not leader, we clear any tracking of external
+ // topics from when we were previously leader, since tracking
+ // these is just a waste.
+ if !g.leader.Load() {
+ g.resetExternal()
+ }
+ }()
+
+start:
+ select {
+ case <-g.rejoinCh: // drain to avoid unnecessary rejoins
+ default:
+ }
+
+ joinReq := kmsg.NewPtrJoinGroupRequest()
+ joinReq.Group = g.cfg.group
+ joinReq.SessionTimeoutMillis = int32(g.cfg.sessionTimeout.Milliseconds())
+ joinReq.RebalanceTimeoutMillis = int32(g.cfg.rebalanceTimeout.Milliseconds())
+ joinReq.ProtocolType = g.cfg.protocol
+ joinReq.MemberID = g.memberGen.memberID()
+ joinReq.InstanceID = g.cfg.instanceID
+ joinReq.Protocols = g.joinGroupProtocols()
+ if joinWhy != "" {
+ joinReq.Reason = kmsg.StringPtr(joinWhy)
+ }
+ var (
+ joinResp *kmsg.JoinGroupResponse
+ err error
+ joined = make(chan struct{})
+ )
+
+ // NOTE: For this function, we have to use the client context, not the
+ // group context. We want to allow people to issue one final commit in
+ // OnPartitionsRevoked before leaving a group, so we need to block
+ // commits during join&sync. If we used the group context, we would be
+ // cancled immediately when leaving while a join or sync is inflight,
+ // and then our final commit will receive either REBALANCE_IN_PROGRESS
+ // or ILLEGAL_GENERATION.
+
+ go func() {
+ defer close(joined)
+ joinResp, err = joinReq.RequestWith(g.cl.ctx, g.cl)
+ }()
+
+ select {
+ case <-joined:
+ case <-g.cl.ctx.Done():
+ return g.cl.ctx.Err() // client closed
+ }
+ if err != nil {
+ return err
+ }
+
+ restart, protocol, plan, err := g.handleJoinResp(joinResp)
+ if restart {
+ goto start
+ }
+ if err != nil {
+ g.cfg.logger.Log(LogLevelWarn, "join group failed", "group", g.cfg.group, "err", err)
+ return err
+ }
+
+ syncReq := kmsg.NewPtrSyncGroupRequest()
+ syncReq.Group = g.cfg.group
+ memberID, generation := g.memberGen.load()
+ syncReq.Generation = generation
+ syncReq.MemberID = memberID
+ syncReq.InstanceID = g.cfg.instanceID
+ syncReq.ProtocolType = &g.cfg.protocol
+ syncReq.Protocol = &protocol
+ if !joinResp.SkipAssignment {
+ syncReq.GroupAssignment = plan // nil unless we are the leader
+ }
+ var (
+ syncResp *kmsg.SyncGroupResponse
+ synced = make(chan struct{})
+ )
+
+ g.cfg.logger.Log(LogLevelInfo, "syncing", "group", g.cfg.group, "protocol_type", g.cfg.protocol, "protocol", protocol)
+ go func() {
+ defer close(synced)
+ syncResp, err = syncReq.RequestWith(g.cl.ctx, g.cl)
+ }()
+
+ select {
+ case <-synced:
+ case <-g.cl.ctx.Done():
+ return g.cl.ctx.Err()
+ }
+ if err != nil {
+ return err
+ }
+
+ if err = g.handleSyncResp(protocol, syncResp); err != nil {
+ if errors.Is(err, kerr.RebalanceInProgress) {
+ g.cfg.logger.Log(LogLevelInfo, "sync failed with RebalanceInProgress, rejoining", "group", g.cfg.group)
+ goto start
+ }
+ g.cfg.logger.Log(LogLevelWarn, "sync group failed", "group", g.cfg.group, "err", err)
+ return err
+ }
+
+ // KIP-814 fixes one limitation with KIP-345, but has another
+ // fundamental limitation. When an instance ID leader restarts, its
+ // first join always gets its old assignment *even if* the member's
+ // topic interests have changed. The broker tells us to skip doing
+ // assignment ourselves, but we ignore that for our well known
+ // balancers. Instead, we balance (but avoid sending it while syncing,
+ // as we are supposed to), and if our sync assignment differs from our
+ // own calculated assignment, We know we have a stale broker assignment
+ // and must trigger a rebalance.
+ if plan != nil && joinResp.SkipAssignment {
+ for _, assign := range plan {
+ if assign.MemberID == memberID {
+ if !bytes.Equal(assign.MemberAssignment, syncResp.MemberAssignment) {
+ g.rejoin("instance group leader restarted and was reassigned old plan, our topic interests changed and we must rejoin to force a rebalance")
+ }
+ break
+ }
+ }
+ }
+
+ return nil
+}
+
+func (g *groupConsumer) handleJoinResp(resp *kmsg.JoinGroupResponse) (restart bool, protocol string, plan []kmsg.SyncGroupRequestGroupAssignment, err error) {
+ if err = kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ switch err {
+ case kerr.MemberIDRequired:
+ g.memberGen.storeMember(resp.MemberID) // KIP-394
+ g.cfg.logger.Log(LogLevelInfo, "join returned MemberIDRequired, rejoining with response's MemberID", "group", g.cfg.group, "member_id", resp.MemberID)
+ return true, "", nil, nil
+ case kerr.UnknownMemberID:
+ g.memberGen.storeMember("")
+ g.cfg.logger.Log(LogLevelInfo, "join returned UnknownMemberID, rejoining without a member id", "group", g.cfg.group)
+ return true, "", nil, nil
+ }
+ return // Request retries as necessary, so this must be a failure
+ }
+ g.memberGen.store(resp.MemberID, resp.Generation)
+
+ if resp.Protocol != nil {
+ protocol = *resp.Protocol
+ }
+
+ for _, balancer := range g.cfg.balancers {
+ if protocol == balancer.ProtocolName() {
+ cooperative := balancer.IsCooperative()
+ if !cooperative && g.cooperative.Load() {
+ g.cfg.logger.Log(LogLevelWarn, "downgrading from cooperative group to eager group, this is not supported per KIP-429!")
+ }
+ g.cooperative.Store(cooperative)
+ break
+ }
+ }
+
+ // KIP-345 has a fundamental limitation that KIP-814 also does not
+ // solve.
+ //
+ // When using instance IDs, if a leader restarts, its first join
+ // receives its old assignment no matter what. KIP-345 resulted in
+ // leaderless consumer groups, KIP-814 fixes this by notifying the
+ // restarted leader that it is still leader but that it should not
+ // balance.
+ //
+ // If the join response is <= v8, we hackily work around the leaderless
+ // situation by checking if the LeaderID is prefixed with our
+ // InstanceID. This is how Kafka and Redpanda are both implemented. At
+ // worst, if we mis-predict the leader, then we may accidentally try to
+ // cause a rebalance later and it will do nothing. That's fine. At
+ // least we can cause rebalances now, rather than having a leaderless,
+ // not-ever-rebalancing client.
+ //
+ // KIP-814 does not solve our problem fully: if we restart and rejoin,
+ // we always get our old assignment even if we changed what topics we
+ // were interested in. Because we have our old assignment, we think
+ // that the plan is fine *even with* our new interests, and we wait for
+ // some external rebalance trigger. We work around this limitation
+ // above (see "KIP-814") only for well known balancers; we cannot work
+ // around this limitation for not well known balancers because they may
+ // do so weird things we cannot control nor reason about.
+ leader := resp.LeaderID == resp.MemberID
+ leaderNoPlan := !leader && resp.Version <= 8 && g.cfg.instanceID != nil && strings.HasPrefix(resp.LeaderID, *g.cfg.instanceID+"-")
+ if leader {
+ g.leader.Store(true)
+ g.cfg.logger.Log(LogLevelInfo, "joined, balancing group",
+ "group", g.cfg.group,
+ "member_id", resp.MemberID,
+ "instance_id", strptr{g.cfg.instanceID},
+ "generation", resp.Generation,
+ "balance_protocol", protocol,
+ "leader", true,
+ )
+ plan, err = g.balanceGroup(protocol, resp.Members, resp.SkipAssignment)
+ } else if leaderNoPlan {
+ g.leader.Store(true)
+ g.cfg.logger.Log(LogLevelInfo, "joined as leader but unable to balance group due to KIP-345 limitations",
+ "group", g.cfg.group,
+ "member_id", resp.MemberID,
+ "instance_id", strptr{g.cfg.instanceID},
+ "generation", resp.Generation,
+ "balance_protocol", protocol,
+ "leader", true,
+ )
+ } else {
+ g.cfg.logger.Log(LogLevelInfo, "joined",
+ "group", g.cfg.group,
+ "member_id", resp.MemberID,
+ "instance_id", strptr{g.cfg.instanceID},
+ "generation", resp.Generation,
+ "leader", false,
+ )
+ }
+ return
+}
+
+type strptr struct {
+ s *string
+}
+
+func (s strptr) String() string {
+ if s.s == nil {
+ return ""
+ }
+ return *s.s
+}
+
+// If other group members consume topics we are not interested in, we track the
+// entire group's topics in this groupExternal type. On metadata update, we see
+// if any partitions for any of these topics have changed, and if so, we as
+// leader rejoin the group.
+//
+// Our external topics are cleared whenever we join and are not leader. We keep
+// our previous external topics if we are leader: on the first balance as
+// leader, we request metadata for all topics, then on followup balances, we
+// already have that metadata and do not need to reload it when balancing.
+//
+// Whenever metadata updates, we detect if a rejoin is needed and always reset
+// the rejoin status.
+type groupExternal struct {
+ tps atomic.Value // map[string]int32
+ rejoin atomicBool
+}
+
+func (g *groupConsumer) loadExternal() *groupExternal {
+ e := g.external.Load()
+ if e != nil {
+ return e.(*groupExternal)
+ }
+ return nil
+}
+
+// We reset our external topics whenever join&sync loop errors, or when we join
+// and are not leader.
+func (g *groupConsumer) resetExternal() {
+ g.external.Store((*groupExternal)(nil))
+}
+
+// If this is our first join as leader, or if a new member joined with new
+// topics we were not tracking, we re-initialize external with the all-topics
+// metadata refresh.
+func (g *groupConsumer) initExternal(current map[string]int32) {
+ var e groupExternal
+ e.tps.Store(dupmsi32(current))
+ g.external.Store(&e)
+}
+
+// Reset whenever we join, & potentially used to rejoin when finding new
+// assignments (i.e., end of metadata).
+func (g *groupConsumer) getAndResetExternalRejoin() bool {
+ e := g.loadExternal()
+ if e == nil {
+ return false
+ }
+ defer e.rejoin.Store(false)
+ return e.rejoin.Load()
+}
+
+// Runs fn over a load, not copy, of our map.
+func (g *groupExternal) fn(fn func(map[string]int32)) {
+ if g == nil {
+ return
+ }
+ v := g.tps.Load()
+ if v == nil {
+ return
+ }
+ tps := v.(map[string]int32)
+ fn(tps)
+}
+
+// Runs fn over a clone of our external map and updates the map.
+func (g *groupExternal) cloned(fn func(map[string]int32)) {
+ g.fn(func(tps map[string]int32) {
+ dup := dupmsi32(tps)
+ fn(dup)
+ g.tps.Store(dup)
+ })
+}
+
+func (g *groupExternal) eachTopic(fn func(string)) {
+ g.fn(func(tps map[string]int32) {
+ for t := range tps {
+ fn(t)
+ }
+ })
+}
+
+func (g *groupExternal) updateLatest(meta map[string]*metadataTopic) {
+ g.cloned(func(tps map[string]int32) {
+ var rejoin bool
+ for t, ps := range tps {
+ latest, exists := meta[t]
+ if !exists || latest.loadErr != nil {
+ continue
+ }
+ if psLatest := int32(len(latest.partitions)); psLatest != ps {
+ rejoin = true
+ tps[t] = psLatest
+ }
+ }
+ if rejoin {
+ g.rejoin.Store(true)
+ }
+ })
+}
+
+func (g *groupConsumer) handleSyncResp(protocol string, resp *kmsg.SyncGroupResponse) error {
+ if err := kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ return err
+ }
+
+ b, err := g.findBalancer("sync assignment", protocol)
+ if err != nil {
+ return err
+ }
+
+ assigned, err := b.ParseSyncAssignment(resp.MemberAssignment)
+ if err != nil {
+ g.cfg.logger.Log(LogLevelError, "sync assignment parse failed", "group", g.cfg.group, "err", err)
+ return err
+ }
+
+ g.cfg.logger.Log(LogLevelInfo, "synced", "group", g.cfg.group, "assigned", mtps(assigned))
+
+ // Past this point, we will fall into the setupAssigned prerevoke code,
+ // meaning for cooperative, we will revoke what we need to.
+ g.nowAssigned.store(assigned)
+ return nil
+}
+
+func (g *groupConsumer) joinGroupProtocols() []kmsg.JoinGroupRequestProtocol {
+ g.mu.Lock()
+
+ topics := make([]string, 0, len(g.using))
+ for topic := range g.using {
+ topics = append(topics, topic)
+ }
+ lastDup := make(map[string][]int32, len(g.lastAssigned))
+ for t, ps := range g.lastAssigned {
+ lastDup[t] = append([]int32(nil), ps...) // deep copy to allow modifications
+ }
+
+ g.mu.Unlock()
+
+ sort.Strings(topics) // we guarantee to JoinGroupMetadata that the input strings are sorted
+ for _, partitions := range lastDup {
+ sort.Slice(partitions, func(i, j int) bool { return partitions[i] < partitions[j] }) // same for partitions
+ }
+
+ gen := g.memberGen.generation()
+ var protos []kmsg.JoinGroupRequestProtocol
+ for _, balancer := range g.cfg.balancers {
+ proto := kmsg.NewJoinGroupRequestProtocol()
+ proto.Name = balancer.ProtocolName()
+ proto.Metadata = balancer.JoinGroupMetadata(topics, lastDup, gen)
+ protos = append(protos, proto)
+ }
+ return protos
+}
+
+// If we are cooperatively consuming, we have a potential problem: if fetch
+// offsets is canceled due to an immediate rebalance, when we resume, we will
+// not re-fetch offsets for partitions we were previously assigned and are
+// still assigned. We will only fetch offsets for new assignments.
+//
+// To work around that issue, we track everything we are fetching in g.fetching
+// and only clear g.fetching if fetchOffsets returns with no error.
+//
+// Now, if fetching returns early due to an error, when we rejoin and re-fetch,
+// we will resume fetching what we were previously:
+//
+// - first we remove what was lost
+// - then we add anything new
+// - then we translate our total set into the "added" list to be fetched on return
+//
+// Any time a group is completely lost, the manage loop clears fetching. When
+// cooperative consuming, a hard error is basically losing the entire state and
+// rejoining from scratch.
+func (g *groupConsumer) adjustCooperativeFetchOffsets(added, lost map[string][]int32) map[string][]int32 {
+ if g.fetching != nil {
+ // We were fetching previously: remove anything lost.
+ for topic, partitions := range lost {
+ ft := g.fetching[topic]
+ if ft == nil {
+ continue // we were not fetching this topic
+ }
+ for _, partition := range partitions {
+ delete(ft, partition)
+ }
+ if len(ft) == 0 {
+ delete(g.fetching, topic)
+ }
+ }
+ } else {
+ // We were not fetching previously: start a new map for what we
+ // are adding.
+ g.fetching = make(map[string]map[int32]struct{})
+ }
+
+ // Merge everything we are newly fetching to our fetching map.
+ for topic, partitions := range added {
+ ft := g.fetching[topic]
+ if ft == nil {
+ ft = make(map[int32]struct{}, len(partitions))
+ g.fetching[topic] = ft
+ }
+ for _, partition := range partitions {
+ ft[partition] = struct{}{}
+ }
+ }
+
+ // Now translate our full set (previously fetching ++ newly fetching --
+ // lost) into a new "added" map to be fetched.
+ added = make(map[string][]int32, len(g.fetching))
+ for topic, partitions := range g.fetching {
+ ps := make([]int32, 0, len(partitions))
+ for partition := range partitions {
+ ps = append(ps, partition)
+ }
+ added[topic] = ps
+ }
+ return added
+}
+
+// fetchOffsets is issued once we join a group to see what the prior commits
+// were for the partitions we were assigned.
+func (g *groupConsumer) fetchOffsets(ctx context.Context, added map[string][]int32) (rerr error) { // we must use "rerr"! see introducing commit
+ // If we fetch successfully, we can clear the cross-group-cycle
+ // fetching tracking.
+ defer func() {
+ if rerr == nil {
+ g.fetching = nil
+ }
+ }()
+
+ // Our client maps the v0 to v7 format to v8+ when sharding this
+ // request, if we are only requesting one group, as well as maps the
+ // response back, so we do not need to worry about v8+ here.
+start:
+ req := kmsg.NewPtrOffsetFetchRequest()
+ req.Group = g.cfg.group
+ req.RequireStable = g.cfg.requireStable
+ for topic, partitions := range added {
+ reqTopic := kmsg.NewOffsetFetchRequestTopic()
+ reqTopic.Topic = topic
+ reqTopic.Partitions = partitions
+ req.Topics = append(req.Topics, reqTopic)
+ }
+
+ var resp *kmsg.OffsetFetchResponse
+ var err error
+
+ fetchDone := make(chan struct{})
+ go func() {
+ defer close(fetchDone)
+ resp, err = req.RequestWith(ctx, g.cl)
+ }()
+ select {
+ case <-fetchDone:
+ case <-ctx.Done():
+ g.cfg.logger.Log(LogLevelInfo, "fetch offsets failed due to context cancelation", "group", g.cfg.group)
+ return ctx.Err()
+ }
+ if err != nil {
+ g.cfg.logger.Log(LogLevelError, "fetch offsets failed with non-retryable error", "group", g.cfg.group, "err", err)
+ return err
+ }
+
+ // Even if a leader epoch is returned, if brokers do not support
+ // OffsetForLeaderEpoch for some reason (odd set of supported reqs), we
+ // cannot use the returned leader epoch.
+ kip320 := g.cl.supportsOffsetForLeaderEpoch()
+
+ offsets := make(map[string]map[int32]Offset)
+ for _, rTopic := range resp.Topics {
+ topicOffsets := make(map[int32]Offset)
+ offsets[rTopic.Topic] = topicOffsets
+ for _, rPartition := range rTopic.Partitions {
+ if err = kerr.ErrorForCode(rPartition.ErrorCode); err != nil {
+ // KIP-447: Unstable offset commit means there is a
+ // pending transaction that should be committing soon.
+ // We sleep for 1s and retry fetching offsets.
+ if errors.Is(err, kerr.UnstableOffsetCommit) {
+ g.cfg.logger.Log(LogLevelInfo, "fetch offsets failed with UnstableOffsetCommit, waiting 1s and retrying",
+ "group", g.cfg.group,
+ "topic", rTopic.Topic,
+ "partition", rPartition.Partition,
+ )
+ select {
+ case <-ctx.Done():
+ case <-time.After(time.Second):
+ goto start
+ }
+ }
+ g.cfg.logger.Log(LogLevelError, "fetch offsets failed",
+ "group", g.cfg.group,
+ "topic", rTopic.Topic,
+ "partition", rPartition.Partition,
+ "err", err,
+ )
+ return err
+ }
+ offset := Offset{
+ at: rPartition.Offset,
+ epoch: -1,
+ }
+ if resp.Version >= 5 && kip320 { // KIP-320
+ offset.epoch = rPartition.LeaderEpoch
+ }
+ if rPartition.Offset == -1 {
+ offset = g.cfg.resetOffset
+ }
+ topicOffsets[rPartition.Partition] = offset
+ }
+ }
+
+ groupTopics := g.tps.load()
+ for fetchedTopic := range offsets {
+ if !groupTopics.hasTopic(fetchedTopic) {
+ delete(offsets, fetchedTopic)
+ g.cfg.logger.Log(LogLevelWarn, "member was assigned topic that we did not ask for in ConsumeTopics! skipping assigning this topic!", "group", g.cfg.group, "topic", fetchedTopic)
+ }
+ }
+
+ if g.cfg.onFetched != nil {
+ g.onFetchedMu.Lock()
+ err = g.cfg.onFetched(ctx, g.cl, resp)
+ g.onFetchedMu.Unlock()
+ if err != nil {
+ return err
+ }
+ }
+ if g.cfg.adjustOffsetsBeforeAssign != nil {
+ g.onFetchedMu.Lock()
+ offsets, err = g.cfg.adjustOffsetsBeforeAssign(ctx, offsets)
+ g.onFetchedMu.Unlock()
+ if err != nil {
+ return err
+ }
+ }
+
+ // Lock for assign and then updating uncommitted.
+ g.c.mu.Lock()
+ defer g.c.mu.Unlock()
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ // Eager: we already invalidated everything; nothing to re-invalidate.
+ // Cooperative: assign without invalidating what we are consuming.
+ g.c.assignPartitions(offsets, assignWithoutInvalidating, g.tps, fmt.Sprintf("newly fetched offsets for group %s", g.cfg.group))
+
+ // We need to update the uncommitted map so that SetOffsets(Committed)
+ // does not rewind before the committed offsets we just fetched.
+ if g.uncommitted == nil {
+ g.uncommitted = make(uncommitted, 10)
+ }
+ for topic, partitions := range offsets {
+ topicUncommitted := g.uncommitted[topic]
+ if topicUncommitted == nil {
+ topicUncommitted = make(map[int32]uncommit, 20)
+ g.uncommitted[topic] = topicUncommitted
+ }
+ for partition, offset := range partitions {
+ if offset.at < 0 {
+ continue // not yet committed
+ }
+ committed := EpochOffset{
+ Epoch: offset.epoch,
+ Offset: offset.at,
+ }
+ topicUncommitted[partition] = uncommit{
+ dirty: committed,
+ head: committed,
+ committed: committed,
+ }
+ }
+ }
+ return nil
+}
+
+// findNewAssignments updates topics the group wants to use and other metadata.
+// We only grab the group mu at the end if we need to.
+//
+// This joins the group if
+// - the group has never been joined
+// - new topics are found for consuming (changing this consumer's join metadata)
+//
+// Additionally, if the member is the leader, this rejoins the group if the
+// leader notices new partitions in an existing topic.
+//
+// This does not rejoin if the leader notices a partition is lost, which is
+// finicky.
+func (g *groupConsumer) findNewAssignments() {
+ topics := g.tps.load()
+
+ type change struct {
+ isNew bool
+ delta int
+ }
+
+ var rns reNews
+ if g.cfg.regex {
+ defer rns.log(&g.cl.cfg)
+ }
+
+ var numNewTopics int
+ toChange := make(map[string]change, len(topics))
+ for topic, topicPartitions := range topics {
+ parts := topicPartitions.load()
+ numPartitions := len(parts.partitions)
+ // If we are already using this topic, add that it changed if
+ // there are more partitions than we were using prior.
+ if used, exists := g.using[topic]; exists {
+ if added := numPartitions - used; added > 0 {
+ toChange[topic] = change{delta: added}
+ }
+ continue
+ }
+
+ // We are iterating over g.tps, which is initialized in the
+ // group.init from the config's topics, but can also be added
+ // to in AddConsumeTopics. By default, we use the topic. If
+ // this is regex based, the config's topics are regular
+ // expressions that we need to evaluate against (and we do not
+ // support adding new regex).
+ useTopic := true
+ if g.cfg.regex {
+ want, seen := g.reSeen[topic]
+ if !seen {
+ for rawRe, re := range g.cfg.topics {
+ if want = re.MatchString(topic); want {
+ rns.add(rawRe, topic)
+ break
+ }
+ }
+ if !want {
+ rns.skip(topic)
+ }
+ g.reSeen[topic] = want
+ }
+ useTopic = want
+ }
+
+ // We only track using the topic if there are partitions for
+ // it; if there are none, then the topic was set by _us_ as "we
+ // want to load the metadata", but the topic was not returned
+ // in the metadata (or it was returned with an error).
+ if useTopic && numPartitions > 0 {
+ if g.cfg.regex && parts.isInternal {
+ continue
+ }
+ toChange[topic] = change{isNew: true, delta: numPartitions}
+ numNewTopics++
+ }
+ }
+
+ externalRejoin := g.leader.Load() && g.getAndResetExternalRejoin()
+
+ if len(toChange) == 0 && !externalRejoin {
+ return
+ }
+
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ if g.dying {
+ return
+ }
+
+ for topic, change := range toChange {
+ g.using[topic] += change.delta
+ }
+
+ if !g.managing {
+ g.managing = true
+ go g.manage()
+ return
+ }
+
+ if numNewTopics > 0 {
+ g.rejoin("rejoining because there are more topics to consume, our interests have changed")
+ } else if g.leader.Load() {
+ if len(toChange) > 0 {
+ g.rejoin("rejoining because we are the leader and noticed some topics have new partitions")
+ } else if externalRejoin {
+ g.rejoin("leader detected that partitions on topics another member is consuming have changed, rejoining to trigger rebalance")
+ }
+ }
+}
+
+// uncommit tracks the latest offset polled (+1) and the latest commit.
+// The reason head is just past the latest offset is because we want
+// to commit TO an offset, not BEFORE an offset.
+type uncommit struct {
+ dirty EpochOffset // if autocommitting, what will move to head on next Poll
+ head EpochOffset // ready to commit
+ committed EpochOffset // what is committed
+}
+
+// EpochOffset combines a record offset with the leader epoch the broker
+// was at when the record was written.
+type EpochOffset struct {
+ // Epoch is the leader epoch of the record being committed. Truncation
+ // detection relies on the epoch of the CURRENT record. For truncation
+ // detection, the client asks "what is the the end of this epoch?",
+ // which returns one after the end offset (see the next field, and
+ // check the docs on kmsg.OffsetForLeaderEpochRequest).
+ Epoch int32
+
+ // Offset is the offset of a record. If committing, this should be one
+ // AFTER a record's offset. Clients start consuming at the offset that
+ // is committed.
+ Offset int64
+}
+
+// Less returns whether the this EpochOffset is less than another. This is less
+// than the other if this one's epoch is less, or the epoch's are equal and
+// this one's offset is less.
+func (e EpochOffset) Less(o EpochOffset) bool {
+ return e.Epoch < o.Epoch || e.Epoch == o.Epoch && e.Offset < o.Offset
+}
+
+type uncommitted map[string]map[int32]uncommit
+
+// updateUncommitted sets the latest uncommitted offset.
+func (g *groupConsumer) updateUncommitted(fetches Fetches) {
+ var b bytes.Buffer
+ debug := g.cfg.logger.Level() >= LogLevelDebug
+
+ // We set the head offset if autocommitting is disabled (because we
+ // only use head / committed in that case), or if we are greedily
+ // autocommitting (so that the latest head is available to autocommit).
+ setHead := g.cfg.autocommitDisable || g.cfg.autocommitGreedy
+
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ for _, fetch := range fetches {
+ for _, topic := range fetch.Topics {
+ if debug {
+ fmt.Fprintf(&b, "%s[", topic.Topic)
+ }
+ var topicOffsets map[int32]uncommit
+ for _, partition := range topic.Partitions {
+ if len(partition.Records) == 0 {
+ continue
+ }
+ final := partition.Records[len(partition.Records)-1]
+
+ if topicOffsets == nil {
+ if g.uncommitted == nil {
+ g.uncommitted = make(uncommitted, 10)
+ }
+ topicOffsets = g.uncommitted[topic.Topic]
+ if topicOffsets == nil {
+ topicOffsets = make(map[int32]uncommit, 20)
+ g.uncommitted[topic.Topic] = topicOffsets
+ }
+ }
+
+ // Our new head points just past the final consumed offset,
+ // that is, if we rejoin, this is the offset to begin at.
+ set := EpochOffset{
+ final.LeaderEpoch, // -1 if old message / unknown
+ final.Offset + 1,
+ }
+ prior := topicOffsets[partition.Partition]
+
+ if debug {
+ if setHead {
+ fmt.Fprintf(&b, "%d{%d=>%d r%d}, ", partition.Partition, prior.head.Offset, set.Offset, len(partition.Records))
+ } else {
+ fmt.Fprintf(&b, "%d{%d=>%d=>%d r%d}, ", partition.Partition, prior.head.Offset, prior.dirty.Offset, set.Offset, len(partition.Records))
+ }
+ }
+
+ prior.dirty = set
+ if setHead {
+ prior.head = set
+ }
+ topicOffsets[partition.Partition] = prior
+ }
+
+ if debug {
+ if bytes.HasSuffix(b.Bytes(), []byte(", ")) {
+ b.Truncate(b.Len() - 2)
+ }
+ b.WriteString("], ")
+ }
+ }
+ }
+
+ if debug {
+ update := b.String()
+ update = strings.TrimSuffix(update, ", ") // trim trailing comma and space after final topic
+ g.cfg.logger.Log(LogLevelDebug, "updated uncommitted", "group", g.cfg.group, "to", update)
+ }
+}
+
+// Called at the start of PollXyz only if autocommitting is enabled and we are
+// not committing greedily, this ensures that when we enter poll, everything
+// previously consumed is a candidate for autocommitting.
+func (g *groupConsumer) undirtyUncommitted() {
+ if g == nil {
+ return
+ }
+ // Disabling autocommit means we do not use the dirty offset: we always
+ // update head, and then manual commits use that.
+ if g.cfg.autocommitDisable {
+ return
+ }
+ // Greedy autocommitting does not use dirty offsets, because we always
+ // just set head to the latest.
+ if g.cfg.autocommitGreedy {
+ return
+ }
+ // If we are autocommitting marked records only, then we do not
+ // automatically un-dirty our offsets.
+ if g.cfg.autocommitMarks {
+ return
+ }
+
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ for _, partitions := range g.uncommitted {
+ for partition, uncommit := range partitions {
+ if uncommit.dirty != uncommit.head {
+ uncommit.head = uncommit.dirty
+ partitions[partition] = uncommit
+ }
+ }
+ }
+}
+
+// updateCommitted updates the group's uncommitted map. This function triply
+// verifies that the resp matches the req as it should and that the req does
+// not somehow contain more than what is in our uncommitted map.
+func (g *groupConsumer) updateCommitted(
+ req *kmsg.OffsetCommitRequest,
+ resp *kmsg.OffsetCommitResponse,
+) {
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ if req.Generation != g.memberGen.generation() {
+ return
+ }
+ if g.uncommitted == nil {
+ g.cfg.logger.Log(LogLevelWarn, "received an OffsetCommitResponse after our group session has ended, unable to handle this (were we kicked from the group?)")
+ return
+ }
+ if len(req.Topics) != len(resp.Topics) { // bad kafka
+ g.cfg.logger.Log(LogLevelError, fmt.Sprintf("broker replied to our OffsetCommitRequest incorrectly! Num topics in request: %d, in reply: %d, we cannot handle this!", len(req.Topics), len(resp.Topics)), "group", g.cfg.group)
+ return
+ }
+
+ sort.Slice(req.Topics, func(i, j int) bool {
+ return req.Topics[i].Topic < req.Topics[j].Topic
+ })
+ sort.Slice(resp.Topics, func(i, j int) bool {
+ return resp.Topics[i].Topic < resp.Topics[j].Topic
+ })
+
+ var b bytes.Buffer
+ debug := g.cfg.logger.Level() >= LogLevelDebug
+
+ for i := range resp.Topics {
+ reqTopic := &req.Topics[i]
+ respTopic := &resp.Topics[i]
+ topic := g.uncommitted[respTopic.Topic]
+ if topic == nil || // just in case
+ reqTopic.Topic != respTopic.Topic || // bad kafka
+ len(reqTopic.Partitions) != len(respTopic.Partitions) { // same
+ g.cfg.logger.Log(LogLevelError, fmt.Sprintf("broker replied to our OffsetCommitRequest incorrectly! Topic at request index %d: %s, reply at index: %s; num partitions on request topic: %d, in reply: %d, we cannot handle this!", i, reqTopic.Topic, respTopic.Topic, len(reqTopic.Partitions), len(respTopic.Partitions)), "group", g.cfg.group)
+ continue
+ }
+
+ sort.Slice(reqTopic.Partitions, func(i, j int) bool {
+ return reqTopic.Partitions[i].Partition < reqTopic.Partitions[j].Partition
+ })
+ sort.Slice(respTopic.Partitions, func(i, j int) bool {
+ return respTopic.Partitions[i].Partition < respTopic.Partitions[j].Partition
+ })
+
+ if debug {
+ fmt.Fprintf(&b, "%s[", respTopic.Topic)
+ }
+ for i := range respTopic.Partitions {
+ reqPart := &reqTopic.Partitions[i]
+ respPart := &respTopic.Partitions[i]
+ uncommit, exists := topic[respPart.Partition]
+ if !exists { // just in case
+ continue
+ }
+ if reqPart.Partition != respPart.Partition { // bad kafka
+ g.cfg.logger.Log(LogLevelError, fmt.Sprintf("broker replied to our OffsetCommitRequest incorrectly! Topic %s partition %d != resp partition %d", reqTopic.Topic, reqPart.Partition, respPart.Partition), "group", g.cfg.group)
+ continue
+ }
+ if respPart.ErrorCode != 0 {
+ g.cfg.logger.Log(LogLevelWarn, "unable to commit offset for topic partition",
+ "group", g.cfg.group,
+ "topic", reqTopic.Topic,
+ "partition", reqPart.Partition,
+ "commit_from", uncommit.committed.Offset,
+ "commit_to", reqPart.Offset,
+ "commit_epoch", reqPart.LeaderEpoch,
+ "error_code", respPart.ErrorCode,
+ )
+ continue
+ }
+
+ if debug {
+ fmt.Fprintf(&b, "%d{%d=>%d}, ", reqPart.Partition, uncommit.committed.Offset, reqPart.Offset)
+ }
+
+ set := EpochOffset{
+ reqPart.LeaderEpoch,
+ reqPart.Offset,
+ }
+ uncommit.committed = set
+
+ // head is set in four places:
+ // (1) if manually committing or greedily autocommitting,
+ // then head is bumped on poll
+ // (2) if autocommitting normally, then head is bumped
+ // to the prior poll on poll
+ // (3) if using marks, head is bumped on mark
+ // (4) here, and we can be here on autocommit or on
+ // manual commit (usually manual in an onRevoke)
+ //
+ // head is usually at or past the commit: usually, head
+ // is used to build the commit itself. However, in case 4
+ // when the user manually commits in onRevoke, the user
+ // is likely committing with UncommittedOffsets, i.e.,
+ // the dirty offsets that are past the current head.
+ // We want to ensure we forward the head so that using
+ // it later does not rewind the manual commit.
+ //
+ // This does not affect the first case, because dirty == head,
+ // and manually committing dirty changes nothing.
+ //
+ // This does not affect the second case, because effectively,
+ // this is just bumping head early (dirty == head, no change).
+ //
+ // This *could* affect the third case, because an
+ // autocommit could begin, followed by a mark rewind,
+ // followed by autocommit completion. We document that
+ // using marks to rewind is not recommended.
+ //
+ // The user could also muck the offsets with SetOffsets.
+ // We document that concurrent committing is not encouraged,
+ // we do not attempt to guard past that.
+ //
+ // w.r.t. leader epoch's, we document that modifying
+ // leader epoch's is not recommended.
+ if uncommit.head.Less(set) {
+ uncommit.head = set
+ }
+
+ topic[respPart.Partition] = uncommit
+ }
+
+ if debug {
+ if bytes.HasSuffix(b.Bytes(), []byte(", ")) {
+ b.Truncate(b.Len() - 2)
+ }
+ b.WriteString("], ")
+ }
+ }
+
+ if debug {
+ update := b.String()
+ update = strings.TrimSuffix(update, ", ") // trim trailing comma and space after final topic
+ g.cfg.logger.Log(LogLevelDebug, "updated committed", "group", g.cfg.group, "to", update)
+ }
+}
+
+func (g *groupConsumer) defaultCommitCallback(_ *Client, _ *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ if err != nil {
+ if !errors.Is(err, context.Canceled) {
+ g.cfg.logger.Log(LogLevelError, "default commit failed", "group", g.cfg.group, "err", err)
+ } else {
+ g.cfg.logger.Log(LogLevelDebug, "default commit canceled", "group", g.cfg.group)
+ }
+ return
+ }
+ for _, topic := range resp.Topics {
+ for _, partition := range topic.Partitions {
+ if err := kerr.ErrorForCode(partition.ErrorCode); err != nil {
+ g.cfg.logger.Log(LogLevelError, "in default commit: unable to commit offsets for topic partition",
+ "group", g.cfg.group,
+ "topic", topic.Topic,
+ "partition", partition.Partition,
+ "error", err)
+ }
+ }
+ }
+}
+
+func (g *groupConsumer) loopCommit() {
+ ticker := time.NewTicker(g.cfg.autocommitInterval)
+ defer ticker.Stop()
+
+ for {
+ select {
+ case <-ticker.C:
+ case <-g.ctx.Done():
+ return
+ }
+
+ // We use the group context for the default autocommit; revokes
+ // use the client context so that we can be sure we commit even
+ // after the group context is canceled (which is the first
+ // thing that happens so as to quit the manage loop before
+ // leaving a group).
+ //
+ // We always commit only the head. If we are autocommitting
+ // dirty, then updateUncommitted updates the head to dirty
+ // offsets.
+ g.noCommitDuringJoinAndSync.RLock()
+ g.mu.Lock()
+ if !g.blockAuto {
+ uncommitted := g.getUncommittedLocked(true, false)
+ if len(uncommitted) == 0 {
+ g.cfg.logger.Log(LogLevelDebug, "skipping autocommit due to no offsets to commit", "group", g.cfg.group)
+ g.noCommitDuringJoinAndSync.RUnlock()
+ } else {
+ g.cfg.logger.Log(LogLevelDebug, "autocommitting", "group", g.cfg.group)
+ g.commit(g.ctx, uncommitted, func(cl *Client, req *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ g.noCommitDuringJoinAndSync.RUnlock()
+ g.cfg.commitCallback(cl, req, resp, err)
+ })
+ }
+ } else {
+ g.noCommitDuringJoinAndSync.RUnlock()
+ }
+ g.mu.Unlock()
+ }
+}
+
+// For SetOffsets, the gist of what follows:
+//
+// We need to set uncommitted.committed; that is the guarantee of this
+// function. However, if, for everything we are setting, the head equals the
+// commit, then we do not need to actually invalidate our current assignments.
+// This is a great optimization for transactions that are resetting their state
+// on abort.
+func (g *groupConsumer) getSetAssigns(setOffsets map[string]map[int32]EpochOffset) (assigns map[string]map[int32]Offset) {
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ groupTopics := g.tps.load()
+
+ if g.uncommitted == nil {
+ g.uncommitted = make(uncommitted)
+ }
+ for topic, partitions := range setOffsets {
+ if !groupTopics.hasTopic(topic) {
+ continue // trying to set a topic that was not assigned...
+ }
+ topicUncommitted := g.uncommitted[topic]
+ if topicUncommitted == nil {
+ topicUncommitted = make(map[int32]uncommit)
+ g.uncommitted[topic] = topicUncommitted
+ }
+ var topicAssigns map[int32]Offset
+ for partition, epochOffset := range partitions {
+ current, exists := topicUncommitted[partition]
+ topicUncommitted[partition] = uncommit{
+ dirty: epochOffset,
+ head: epochOffset,
+ committed: epochOffset,
+ }
+ if exists && current.dirty == epochOffset {
+ continue
+ } else if topicAssigns == nil {
+ topicAssigns = make(map[int32]Offset, len(partitions))
+ }
+ topicAssigns[partition] = Offset{
+ at: epochOffset.Offset,
+ epoch: epochOffset.Epoch,
+ }
+ }
+ if len(topicAssigns) > 0 {
+ if assigns == nil {
+ assigns = make(map[string]map[int32]Offset, 10)
+ }
+ assigns[topic] = topicAssigns
+ }
+ }
+
+ return assigns
+}
+
+// UncommittedOffsets returns the latest uncommitted offsets. Uncommitted
+// offsets are always updated on calls to PollFetches.
+//
+// If there are no uncommitted offsets, this returns nil.
+func (cl *Client) UncommittedOffsets() map[string]map[int32]EpochOffset {
+ if g := cl.consumer.g; g != nil {
+ return g.getUncommitted(true)
+ }
+ return nil
+}
+
+// MarkedOffsets returns the latest marked offsets. When autocommitting, a
+// marked offset is an offset that can be committed, in comparison to a dirty
+// offset that cannot yet be committed. MarkedOffsets returns nil if you are
+// not using AutoCommitMarks.
+func (cl *Client) MarkedOffsets() map[string]map[int32]EpochOffset {
+ g := cl.consumer.g
+ if g == nil || !cl.cfg.autocommitMarks {
+ return nil
+ }
+ return g.getUncommitted(false)
+}
+
+// CommittedOffsets returns the latest committed offsets. Committed offsets are
+// updated from commits or from joining a group and fetching offsets.
+//
+// If there are no committed offsets, this returns nil.
+func (cl *Client) CommittedOffsets() map[string]map[int32]EpochOffset {
+ g := cl.consumer.g
+ if g == nil {
+ return nil
+ }
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ return g.getUncommittedLocked(false, false)
+}
+
+func (g *groupConsumer) getUncommitted(dirty bool) map[string]map[int32]EpochOffset {
+ g.mu.Lock()
+ defer g.mu.Unlock()
+ return g.getUncommittedLocked(true, dirty)
+}
+
+func (g *groupConsumer) getUncommittedLocked(head, dirty bool) map[string]map[int32]EpochOffset {
+ if g.uncommitted == nil {
+ return nil
+ }
+
+ var uncommitted map[string]map[int32]EpochOffset
+ for topic, partitions := range g.uncommitted {
+ var topicUncommitted map[int32]EpochOffset
+ for partition, uncommit := range partitions {
+ if head && (dirty && uncommit.dirty == uncommit.committed || !dirty && uncommit.head == uncommit.committed) {
+ continue
+ }
+ if topicUncommitted == nil {
+ if uncommitted == nil {
+ uncommitted = make(map[string]map[int32]EpochOffset, len(g.uncommitted))
+ }
+ topicUncommitted = uncommitted[topic]
+ if topicUncommitted == nil {
+ topicUncommitted = make(map[int32]EpochOffset, len(partitions))
+ uncommitted[topic] = topicUncommitted
+ }
+ }
+ if head {
+ if dirty {
+ topicUncommitted[partition] = uncommit.dirty
+ } else {
+ topicUncommitted[partition] = uncommit.head
+ }
+ } else {
+ topicUncommitted[partition] = uncommit.committed
+ }
+ }
+ }
+ return uncommitted
+}
+
+type commitContextFnT struct{}
+
+var commitContextFn commitContextFnT
+
+// PreCommitFnContext attaches fn to the context through WithValue. Using the
+// context while committing allows fn to be called just before the commit is
+// issued. This can be used to modify the actual commit, such as by associating
+// metadata with partitions. If fn returns an error, the commit is not
+// attempted.
+func PreCommitFnContext(ctx context.Context, fn func(*kmsg.OffsetCommitRequest) error) context.Context {
+ return context.WithValue(ctx, commitContextFn, fn)
+}
+
+type txnCommitContextFnT struct{}
+
+var txnCommitContextFn txnCommitContextFnT
+
+// PreTxnCommitFnContext attaches fn to the context through WithValue. Using
+// the context while committing a transaction allows fn to be called just
+// before the commit is issued. This can be used to modify the actual commit,
+// such as by associating metadata with partitions (for transactions, the
+// default internal metadata is the client's current member ID). If fn returns
+// an error, the commit is not attempted. This context can be used in either
+// GroupTransactSession.End or in Client.EndTransaction.
+func PreTxnCommitFnContext(ctx context.Context, fn func(*kmsg.TxnOffsetCommitRequest) error) context.Context {
+ return context.WithValue(ctx, txnCommitContextFn, fn)
+}
+
+// CommitRecords issues a synchronous offset commit for the offsets contained
+// within rs. Retryable errors are retried up to the configured retry limit,
+// and any unretryable error is returned.
+//
+// This function is useful as a simple way to commit offsets if you have
+// disabled autocommitting. As an alternative if you always want to commit
+// everything, see CommitUncommittedOffsets.
+//
+// Simple usage of this function may lead to duplicate records if a consumer
+// group rebalance occurs before or while this function is being executed. You
+// can avoid this scenario by calling CommitRecords in a custom
+// OnPartitionsRevoked, but for most workloads, a small bit of potential
+// duplicate processing is fine. See the documentation on DisableAutoCommit
+// for more details. You can also avoid this problem by using
+// BlockRebalanceOnPoll, but that option comes with its own tradeoffs (refer to
+// its documentation).
+//
+// It is recommended to always commit records in order (per partition). If you
+// call this function twice with record for partition 0 at offset 999
+// initially, and then with record for partition 0 at offset 4, you will rewind
+// your commit.
+//
+// A use case for this function may be to partially process a batch of records,
+// commit, and then continue to process the rest of the records. It is not
+// recommended to call this for every record processed in a high throughput
+// scenario, because you do not want to unnecessarily increase load on Kafka.
+//
+// If you do not want to wait for this function to complete before continuing
+// processing records, you can call this function in a goroutine.
+func (cl *Client) CommitRecords(ctx context.Context, rs ...*Record) error {
+ // First build the offset commit map. We favor the latest epoch, then
+ // offset, if any records map to the same topic / partition.
+ offsets := make(map[string]map[int32]EpochOffset)
+ for _, r := range rs {
+ toffsets := offsets[r.Topic]
+ if toffsets == nil {
+ toffsets = make(map[int32]EpochOffset)
+ offsets[r.Topic] = toffsets
+ }
+
+ if at, exists := toffsets[r.Partition]; exists {
+ if at.Epoch > r.LeaderEpoch || at.Epoch == r.LeaderEpoch && at.Offset > r.Offset {
+ continue
+ }
+ }
+ toffsets[r.Partition] = EpochOffset{
+ r.LeaderEpoch,
+ r.Offset + 1, // need to advice to next offset to move forward
+ }
+ }
+
+ var rerr error // return error
+
+ // Our client retries an OffsetCommitRequest as necessary if the first
+ // response partition has a retryable group error (group coordinator
+ // loading, etc), so any partition error is fatal.
+ cl.CommitOffsetsSync(ctx, offsets, func(_ *Client, _ *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ if err != nil {
+ rerr = err
+ return
+ }
+
+ for _, topic := range resp.Topics {
+ for _, partition := range topic.Partitions {
+ if err := kerr.ErrorForCode(partition.ErrorCode); err != nil {
+ rerr = err
+ return
+ }
+ }
+ }
+ })
+
+ return rerr
+}
+
+// MarkCommitRecords marks records to be available for autocommitting. This
+// function is only useful if you use the AutoCommitMarks config option, see
+// the documentation on that option for more details. This function does not
+// allow rewinds.
+func (cl *Client) MarkCommitRecords(rs ...*Record) {
+ g := cl.consumer.g
+ if g == nil || !cl.cfg.autocommitMarks {
+ return
+ }
+
+ sort.Slice(rs, func(i, j int) bool {
+ return rs[i].Topic < rs[j].Topic ||
+ rs[i].Topic == rs[j].Topic && rs[i].Partition < rs[j].Partition
+ })
+
+ // protect g.uncommitted map
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ if g.uncommitted == nil {
+ g.uncommitted = make(uncommitted)
+ }
+ var curTopic string
+ var curPartitions map[int32]uncommit
+ for _, r := range rs {
+ if curPartitions == nil || r.Topic != curTopic {
+ curPartitions = g.uncommitted[r.Topic]
+ if curPartitions == nil {
+ curPartitions = make(map[int32]uncommit)
+ g.uncommitted[r.Topic] = curPartitions
+ }
+ curTopic = r.Topic
+ }
+
+ current := curPartitions[r.Partition]
+ if newHead := (EpochOffset{
+ r.LeaderEpoch,
+ r.Offset + 1,
+ }); current.head.Less(newHead) {
+ curPartitions[r.Partition] = uncommit{
+ dirty: current.dirty,
+ committed: current.committed,
+ head: newHead,
+ }
+ }
+ }
+}
+
+// MarkCommitOffsets marks offsets to be available for autocommitting. This
+// function is only useful if you use the AutoCommitMarks config option, see
+// the documentation on that option for more details. This function does not
+// allow rewinds.
+func (cl *Client) MarkCommitOffsets(unmarked map[string]map[int32]EpochOffset) {
+ g := cl.consumer.g
+ if g == nil || !cl.cfg.autocommitMarks {
+ return
+ }
+
+ // protect g.uncommitted map
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ if g.uncommitted == nil {
+ g.uncommitted = make(uncommitted)
+ }
+
+ for topic, partitions := range unmarked {
+ curPartitions := g.uncommitted[topic]
+ if curPartitions == nil {
+ curPartitions = make(map[int32]uncommit)
+ g.uncommitted[topic] = curPartitions
+ }
+
+ for partition, newHead := range partitions {
+ current := curPartitions[partition]
+ if current.head.Less(newHead) {
+ curPartitions[partition] = uncommit{
+ dirty: current.dirty,
+ committed: current.committed,
+ head: newHead,
+ }
+ }
+ }
+ }
+}
+
+// CommitUncommittedOffsets issues a synchronous offset commit for any
+// partition that has been consumed from that has uncommitted offsets.
+// Retryable errors are retried up to the configured retry limit, and any
+// unretryable error is returned.
+//
+// The recommended pattern for using this function is to have a poll / process
+// / commit loop. First PollFetches, then process every record, then call
+// CommitUncommittedOffsets.
+//
+// As an alternative if you want to commit specific records, see CommitRecords.
+func (cl *Client) CommitUncommittedOffsets(ctx context.Context) error {
+ // This function is just the tail end of CommitRecords just above.
+ return cl.commitOffsets(ctx, cl.UncommittedOffsets())
+}
+
+// CommitMarkedOffsets issues a synchronous offset commit for any partition
+// that has been consumed from that has marked offsets. Retryable errors are
+// retried up to the configured retry limit, and any unretryable error is
+// returned.
+//
+// This function is only useful if you have marked offsets with
+// MarkCommitRecords when using AutoCommitMarks, otherwise this is a no-op.
+//
+// The recommended pattern for using this function is to have a poll / process
+// / commit loop. First PollFetches, then process every record,
+// call MarkCommitRecords for the records you wish the commit and then call
+// CommitMarkedOffsets.
+//
+// As an alternative if you want to commit specific records, see CommitRecords.
+func (cl *Client) CommitMarkedOffsets(ctx context.Context) error {
+ // This function is just the tail end of CommitRecords just above.
+ marked := cl.MarkedOffsets()
+ if len(marked) == 0 {
+ return nil
+ }
+ return cl.commitOffsets(ctx, marked)
+}
+
+func (cl *Client) commitOffsets(ctx context.Context, offsets map[string]map[int32]EpochOffset) error {
+ var rerr error
+ cl.CommitOffsetsSync(ctx, offsets, func(_ *Client, _ *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ if err != nil {
+ rerr = err
+ return
+ }
+
+ for _, topic := range resp.Topics {
+ for _, partition := range topic.Partitions {
+ if err := kerr.ErrorForCode(partition.ErrorCode); err != nil {
+ rerr = err
+ return
+ }
+ }
+ }
+ })
+ return rerr
+}
+
+// CommitOffsetsSync cancels any active CommitOffsets, begins a commit that
+// cannot be canceled, and waits for that commit to complete. This function
+// will not return until the commit is done and the onDone callback is
+// complete.
+//
+// The purpose of this function is for use in OnPartitionsRevoked or committing
+// before leaving a group, because you do not want to have a commit issued in
+// OnPartitionsRevoked canceled.
+//
+// This is an advanced function, and for simpler, more easily understandable
+// committing, see CommitRecords and CommitUncommittedOffsets.
+//
+// For more information about committing and committing asynchronously, see
+// CommitOffsets.
+func (cl *Client) CommitOffsetsSync(
+ ctx context.Context,
+ uncommitted map[string]map[int32]EpochOffset,
+ onDone func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error),
+) {
+ if onDone == nil {
+ onDone = func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error) {}
+ }
+
+ g := cl.consumer.g
+ if g == nil {
+ onDone(cl, kmsg.NewPtrOffsetCommitRequest(), kmsg.NewPtrOffsetCommitResponse(), errNotGroup)
+ return
+ }
+ if len(uncommitted) == 0 {
+ onDone(cl, kmsg.NewPtrOffsetCommitRequest(), kmsg.NewPtrOffsetCommitResponse(), nil)
+ return
+ }
+ g.commitOffsetsSync(ctx, uncommitted, onDone)
+}
+
+// waitJoinSyncMu is a rather insane way to try to grab a lock, but also return
+// early if we have to wait and the context is canceled.
+func (g *groupConsumer) waitJoinSyncMu(ctx context.Context) error {
+ if g.noCommitDuringJoinAndSync.TryRLock() {
+ g.cfg.logger.Log(LogLevelDebug, "grabbed join/sync mu on first try")
+ return nil
+ }
+
+ var (
+ blockJoinSyncCh = make(chan struct{})
+ mu sync.Mutex
+ returned bool
+ maybeRUnlock = func() {
+ mu.Lock()
+ defer mu.Unlock()
+ if returned {
+ g.noCommitDuringJoinAndSync.RUnlock()
+ }
+ returned = true
+ }
+ )
+
+ go func() {
+ g.noCommitDuringJoinAndSync.RLock()
+ close(blockJoinSyncCh)
+ maybeRUnlock()
+ }()
+
+ select {
+ case <-blockJoinSyncCh:
+ g.cfg.logger.Log(LogLevelDebug, "grabbed join/sync mu after waiting")
+ return nil
+ case <-ctx.Done():
+ g.cfg.logger.Log(LogLevelDebug, "not grabbing mu because context canceled")
+ maybeRUnlock()
+ return ctx.Err()
+ }
+}
+
+func (g *groupConsumer) commitOffsetsSync(
+ ctx context.Context,
+ uncommitted map[string]map[int32]EpochOffset,
+ onDone func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error),
+) {
+ g.cfg.logger.Log(LogLevelDebug, "in CommitOffsetsSync", "group", g.cfg.group, "with", uncommitted)
+ defer g.cfg.logger.Log(LogLevelDebug, "left CommitOffsetsSync", "group", g.cfg.group)
+
+ done := make(chan struct{})
+ defer func() { <-done }()
+
+ if onDone == nil {
+ onDone = func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error) {}
+ }
+
+ if err := g.waitJoinSyncMu(ctx); err != nil {
+ onDone(g.cl, kmsg.NewPtrOffsetCommitRequest(), kmsg.NewPtrOffsetCommitResponse(), err)
+ close(done)
+ return
+ }
+
+ g.syncCommitMu.Lock() // block all other concurrent commits until our OnDone is done.
+ unblockCommits := func(cl *Client, req *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ g.noCommitDuringJoinAndSync.RUnlock()
+ defer close(done)
+ defer g.syncCommitMu.Unlock()
+ onDone(cl, req, resp, err)
+ }
+
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ g.blockAuto = true
+ unblockAuto := func(cl *Client, req *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ unblockCommits(cl, req, resp, err)
+ g.mu.Lock()
+ defer g.mu.Unlock()
+ g.blockAuto = false
+ }
+
+ g.commit(ctx, uncommitted, unblockAuto)
+}
+
+// CommitOffsets commits the given offsets for a group, calling onDone with the
+// commit request and either the response or an error if the response was not
+// issued. If uncommitted is empty or the client is not consuming as a group,
+// onDone is called with (nil, nil, nil) and this function returns immediately.
+// It is OK if onDone is nil, but you will not know if your commit succeeded.
+//
+// This is an advanced function and is difficult to use correctly. For simpler,
+// more easily understandable committing, see CommitRecords and
+// CommitUncommittedOffsets.
+//
+// This function itself does not wait for the commit to finish. By default,
+// this function is an asynchronous commit. You can use onDone to make it sync.
+// If autocommitting is enabled, this function blocks autocommitting until this
+// function is complete and the onDone has returned.
+//
+// It is invalid to use this function to commit offsets for a transaction.
+//
+// Note that this function ensures absolute ordering of commit requests by
+// canceling prior requests and ensuring they are done before executing a new
+// one. This means, for absolute control, you can use this function to
+// periodically commit async and then issue a final sync commit before quitting
+// (this is the behavior of autocommiting and using the default revoke). This
+// differs from the Java async commit, which does not retry requests to avoid
+// trampling on future commits.
+//
+// It is highly recommended to check the response's partition's error codes if
+// the response is non-nil. While unlikely, individual partitions can error.
+// This is most likely to happen if a commit occurs too late in a rebalance
+// event.
+//
+// Do not use this async CommitOffsets in OnPartitionsRevoked, instead use
+// CommitOffsetsSync. If you commit async, the rebalance will proceed before
+// this function executes, and you will commit offsets for partitions that have
+// moved to a different consumer.
+func (cl *Client) CommitOffsets(
+ ctx context.Context,
+ uncommitted map[string]map[int32]EpochOffset,
+ onDone func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error),
+) {
+ cl.cfg.logger.Log(LogLevelDebug, "in CommitOffsets", "with", uncommitted)
+ defer cl.cfg.logger.Log(LogLevelDebug, "left CommitOffsets")
+ if onDone == nil {
+ onDone = func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error) {}
+ }
+
+ g := cl.consumer.g
+ if g == nil {
+ onDone(cl, kmsg.NewPtrOffsetCommitRequest(), kmsg.NewPtrOffsetCommitResponse(), errNotGroup)
+ return
+ }
+ if len(uncommitted) == 0 {
+ onDone(cl, kmsg.NewPtrOffsetCommitRequest(), kmsg.NewPtrOffsetCommitResponse(), nil)
+ return
+ }
+
+ if err := g.waitJoinSyncMu(ctx); err != nil {
+ onDone(g.cl, kmsg.NewPtrOffsetCommitRequest(), kmsg.NewPtrOffsetCommitResponse(), err)
+ return
+ }
+
+ g.syncCommitMu.RLock() // block sync commit, but allow other concurrent Commit to cancel us
+ unblockJoinSync := func(cl *Client, req *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ g.noCommitDuringJoinAndSync.RUnlock()
+ defer g.syncCommitMu.RUnlock()
+ onDone(cl, req, resp, err)
+ }
+
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ g.blockAuto = true
+ unblockAuto := func(cl *Client, req *kmsg.OffsetCommitRequest, resp *kmsg.OffsetCommitResponse, err error) {
+ unblockJoinSync(cl, req, resp, err)
+ g.mu.Lock()
+ defer g.mu.Unlock()
+ g.blockAuto = false
+ }
+
+ g.commit(ctx, uncommitted, unblockAuto)
+}
+
+// defaultRevoke commits the last fetched offsets and waits for the commit to
+// finish. This is the default onRevoked function which, when combined with the
+// default autocommit, ensures we never miss committing everything.
+//
+// Note that the heartbeat loop invalidates all buffered, unpolled fetches
+// before revoking, meaning this truly will commit all polled fetches.
+func (g *groupConsumer) defaultRevoke(context.Context, *Client, map[string][]int32) {
+ if !g.cfg.autocommitDisable {
+ // We use the client's context rather than the group context,
+ // because this could come from the group being left. The group
+ // context will already be canceled.
+ g.commitOffsetsSync(g.cl.ctx, g.getUncommitted(false), g.cfg.commitCallback)
+ }
+}
+
+// The actual logic to commit. This is called under two locks:
+// - g.noCommitDuringJoinAndSync.RLock()
+// - g.mu.Lock()
+//
+// By blocking the JoinGroup from being issued, or blocking the commit on join
+// & sync finishing, we avoid RebalanceInProgress and IllegalGeneration. The
+// former error happens if a commit arrives to the broker between the two, the
+// latter error happens when a commit arrives to the broker with the old
+// generation (it was in flight before sync finished).
+//
+// Practically, what this means is that a user's commits will be blocked if
+// they try to commit between join and sync.
+//
+// For eager consuming, the user should not have any partitions to commit
+// anyway. For cooperative consuming, a rebalance can happen after at any
+// moment. We block only revokation aspects of rebalances with
+// BlockRebalanceOnPoll; we want to allow the cooperative part of rebalancing
+// to occur.
+func (g *groupConsumer) commit(
+ ctx context.Context,
+ uncommitted map[string]map[int32]EpochOffset,
+ onDone func(*Client, *kmsg.OffsetCommitRequest, *kmsg.OffsetCommitResponse, error),
+) {
+ // The user could theoretically give us topics that have no partitions
+ // to commit. We strip those: Kafka does not reply to them, and we
+ // expect all partitions in our request to be replied to in
+ // updateCommitted. If any topic is empty, we deeply clone and then
+ // strip everything empty. See #186.
+ var clone bool
+ for _, ps := range uncommitted {
+ if len(ps) == 0 {
+ clone = true
+ break
+ }
+ }
+ if clone {
+ dup := make(map[string]map[int32]EpochOffset, len(uncommitted))
+ for t, ps := range uncommitted {
+ if len(ps) == 0 {
+ continue
+ }
+ dupPs := make(map[int32]EpochOffset, len(ps))
+ dup[t] = dupPs
+ for p, eo := range ps {
+ dupPs[p] = eo
+ }
+ }
+ uncommitted = dup
+ }
+
+ if len(uncommitted) == 0 { // only empty if called thru autocommit / default revoke
+ // We have to do this concurrently because the expectation is
+ // that commit itself does not block.
+ go onDone(g.cl, kmsg.NewPtrOffsetCommitRequest(), kmsg.NewPtrOffsetCommitResponse(), nil)
+ return
+ }
+
+ priorCancel := g.commitCancel
+ priorDone := g.commitDone
+
+ commitCtx, commitCancel := context.WithCancel(ctx) // enable ours to be canceled and waited for
+ commitDone := make(chan struct{})
+
+ g.commitCancel = commitCancel
+ g.commitDone = commitDone
+
+ req := kmsg.NewPtrOffsetCommitRequest()
+ req.Group = g.cfg.group
+ memberID, generation := g.memberGen.load()
+ req.Generation = generation
+ req.MemberID = memberID
+ req.InstanceID = g.cfg.instanceID
+
+ if ctx.Done() != nil {
+ go func() {
+ select {
+ case <-ctx.Done():
+ commitCancel()
+ case <-commitCtx.Done():
+ }
+ }()
+ }
+
+ go func() {
+ defer close(commitDone) // allow future commits to continue when we are done
+ defer commitCancel()
+ if priorDone != nil { // wait for any prior request to finish
+ select {
+ case <-priorDone:
+ default:
+ g.cfg.logger.Log(LogLevelDebug, "canceling prior commit to issue another", "group", g.cfg.group)
+ priorCancel()
+ <-priorDone
+ }
+ }
+ g.cfg.logger.Log(LogLevelDebug, "issuing commit", "group", g.cfg.group, "uncommitted", uncommitted)
+
+ for topic, partitions := range uncommitted {
+ reqTopic := kmsg.NewOffsetCommitRequestTopic()
+ reqTopic.Topic = topic
+ for partition, eo := range partitions {
+ reqPartition := kmsg.NewOffsetCommitRequestTopicPartition()
+ reqPartition.Partition = partition
+ reqPartition.Offset = eo.Offset
+ reqPartition.LeaderEpoch = eo.Epoch // KIP-320
+ reqPartition.Metadata = &req.MemberID
+ reqTopic.Partitions = append(reqTopic.Partitions, reqPartition)
+ }
+ req.Topics = append(req.Topics, reqTopic)
+ }
+
+ if fn, ok := ctx.Value(commitContextFn).(func(*kmsg.OffsetCommitRequest) error); ok {
+ if err := fn(req); err != nil {
+ onDone(g.cl, req, nil, err)
+ return
+ }
+ }
+
+ resp, err := req.RequestWith(commitCtx, g.cl)
+ if err != nil {
+ onDone(g.cl, req, nil, err)
+ return
+ }
+ g.updateCommitted(req, resp)
+ onDone(g.cl, req, resp, nil)
+ }()
+}
+
+type reNews struct {
+ added map[string][]string
+ skipped []string
+}
+
+func (r *reNews) add(re, match string) {
+ if r.added == nil {
+ r.added = make(map[string][]string)
+ }
+ r.added[re] = append(r.added[re], match)
+}
+
+func (r *reNews) skip(topic string) {
+ r.skipped = append(r.skipped, topic)
+}
+
+func (r *reNews) log(cfg *cfg) {
+ if len(r.added) == 0 && len(r.skipped) == 0 {
+ return
+ }
+ var addeds []string
+ for re, matches := range r.added {
+ sort.Strings(matches)
+ addeds = append(addeds, fmt.Sprintf("%s[%s]", re, strings.Join(matches, " ")))
+ }
+ added := strings.Join(addeds, " ")
+ sort.Strings(r.skipped)
+ cfg.logger.Log(LogLevelInfo, "consumer regular expressions evaluated on new topics", "added", added, "evaluated_and_skipped", r.skipped)
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/errors.go b/vendor/github.com/twmb/franz-go/pkg/kgo/errors.go
new file mode 100644
index 0000000000000..3ff1dbfebe81d
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/errors.go
@@ -0,0 +1,321 @@
+package kgo
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "os"
+)
+
+func isRetryableBrokerErr(err error) bool {
+ // The error could be nil if we are evaluating multiple errors at once,
+ // and only one is non-nil. The intent of this function is to evaluate
+ // whether an **error** is retryable, not a non-error. We return that
+ // nil is not retryable -- the calling code evaluating multiple errors
+ // at once would not call into this function if all errors were nil.
+ if err == nil {
+ return false
+ }
+ // https://github.com/golang/go/issues/45729
+ //
+ // Temporary is relatively useless. We will still check for the
+ // temporary interface, and in all cases, even with timeouts, we want
+ // to retry.
+ //
+ // More generally, we will retry for any error that unwraps into an
+ // os.SyscallError. Looking at Go's net package, the error we care
+ // about is net.OpError. Looking into that further, any error that
+ // reaches into the operating system return a syscall error, which is
+ // then put in net.OpError's Err field as an os.SyscallError. There are
+ // a few non-os.SyscallError errors, these are where Go itself detects
+ // a hard failure. We do not retry those.
+ //
+ // We blanket retry os.SyscallError because a lot of the times, what
+ // appears as a hard failure can actually be retried. For example, a
+ // failed dial can be retried, maybe the resolver temporarily had a
+ // problem.
+ //
+ // We favor testing os.SyscallError first, because net.OpError _always_
+ // implements Temporary, so if we test that first, it'll return false
+ // in many cases when we want to return true from os.SyscallError.
+ if se := (*os.SyscallError)(nil); errors.As(err, &se) {
+ // If a dial fails, potentially we could retry if the resolver
+ // had a temporary hiccup, but we will err on the side of this
+ // being a slightly less temporary error.
+ return !isDialNonTimeoutErr(err)
+ }
+ // EOF can be returned if a broker kills a connection unexpectedly, and
+ // we can retry that. Same for ErrClosed.
+ if errors.Is(err, net.ErrClosed) || errors.Is(err, io.EOF) {
+ return true
+ }
+ // We could have a retryable producer ID failure, which then bubbled up
+ // as errProducerIDLoadFail so as to be retried later.
+ if pe := (*errProducerIDLoadFail)(nil); errors.As(err, &pe) {
+ return true
+ }
+ // We could have chosen a broker, and then a concurrent metadata update
+ // could have removed it.
+ if errors.Is(err, errChosenBrokerDead) {
+ return true
+ }
+ // A broker kept giving us short sasl lifetimes, so we killed the
+ // connection ourselves. We can retry on a new connection.
+ if errors.Is(err, errSaslReauthLoop) {
+ return true
+ }
+ // We really should not get correlation mismatch, but if we do, we can
+ // retry.
+ if errors.Is(err, errCorrelationIDMismatch) {
+ return true
+ }
+ // We sometimes load the controller before issuing requests, and the
+ // cluster may not yet be ready and will return -1 for the controller.
+ // We can backoff and retry and hope the cluster has stabilized.
+ if ce := (*errUnknownController)(nil); errors.As(err, &ce) {
+ return true
+ }
+ // Same thought for a non-existing coordinator.
+ if ce := (*errUnknownCoordinator)(nil); errors.As(err, &ce) {
+ return true
+ }
+ var tempErr interface{ Temporary() bool }
+ if errors.As(err, &tempErr) {
+ return tempErr.Temporary()
+ }
+ return false
+}
+
+func isDialNonTimeoutErr(err error) bool {
+ var ne *net.OpError
+ return errors.As(err, &ne) && ne.Op == "dial" && !ne.Timeout()
+}
+
+func isAnyDialErr(err error) bool {
+ var ne *net.OpError
+ return errors.As(err, &ne) && ne.Op == "dial"
+}
+
+func isContextErr(err error) bool {
+ return errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded)
+}
+
+func isSkippableBrokerErr(err error) bool {
+ // Some broker errors are not retryable for the given broker itself,
+ // but we *could* skip the broker and try again on the next broker. For
+ // example, if the user input an invalid address and a valid address
+ // for seeds, when we fail dialing the first seed, we cannot retry that
+ // broker, but we can skip to the next.
+ //
+ // We take anything that returns an OpError that *is not* a context
+ // error deep inside.
+ if errors.Is(err, errUnknownBroker) {
+ return true
+ }
+ var ne *net.OpError
+ if errors.As(err, &ne) && !isContextErr(err) {
+ return true
+ }
+ return false
+}
+
+var (
+ //////////////
+ // INTERNAL // -- when used multiple times or checked in different areas of the client
+ //////////////
+
+ // Returned when issuing a request to a broker that the client does not
+ // know about (maybe missing from metadata responses now).
+ errUnknownBroker = errors.New("unknown broker")
+
+ // A temporary error returned when a broker chosen for a request is
+ // stopped due to a concurrent metadata response.
+ errChosenBrokerDead = errors.New("the internal broker struct chosen to issue this request has died--either the broker id is migrating or no longer exists")
+
+ // If a broker repeatedly gives us tiny sasl lifetimes, we fail a
+ // request after a few tries to forcefully kill the connection and
+ // restart a new connection ourselves.
+ errSaslReauthLoop = errors.New("the broker is repeatedly giving us sasl lifetimes that are too short to write a request")
+
+ // A temporary error returned when Kafka replies with a different
+ // correlation ID than we were expecting for the request the client
+ // issued.
+ //
+ // If this error happens, the client closes the broker connection.
+ errCorrelationIDMismatch = errors.New("correlation ID mismatch")
+
+ // Returned when using a kmsg.Request with a key larger than kmsg.MaxKey.
+ errUnknownRequestKey = errors.New("request key is unknown")
+
+ // Returned if a connection has loaded broker ApiVersions and knows
+ // that the broker cannot handle the request to-be-issued request.
+ errBrokerTooOld = errors.New("broker is too old; the broker has already indicated it will not know how to handle the request")
+
+ // Returned when trying to call group functions when the client is not
+ // assigned a group.
+ errNotGroup = errors.New("invalid group function call when not assigned a group")
+
+ // Returned when trying to begin a transaction with a client that does
+ // not have a transactional ID.
+ errNotTransactional = errors.New("invalid attempt to begin a transaction with a non-transactional client")
+
+ // Returned when trying to produce a record outside of a transaction.
+ errNotInTransaction = errors.New("cannot produce record transactionally if not in a transaction")
+
+ errNoTopic = errors.New("cannot produce record with no topic and no default topic")
+
+ // Returned for all buffered produce records when a user purges topics.
+ errPurged = errors.New("topic purged while buffered")
+
+ errMissingMetadataPartition = errors.New("metadata update is missing a partition that we were previously using")
+
+ errNoCommittedOffset = errors.New("partition has no prior committed offset")
+
+ //////////////
+ // EXTERNAL //
+ //////////////
+
+ // ErrRecordTimeout is passed to produce promises when records are
+ // unable to be produced within the RecordDeliveryTimeout.
+ ErrRecordTimeout = errors.New("records have timed out before they were able to be produced")
+
+ // ErrRecordRetries is passed to produce promises when records are
+ // unable to be produced after RecordRetries attempts.
+ ErrRecordRetries = errors.New("record failed after being retried too many times")
+
+ // ErrMaxBuffered is returned when the maximum amount of records are
+ // buffered and either manual flushing is enabled or you are using
+ // TryProduce.
+ ErrMaxBuffered = errors.New("the maximum amount of records are buffered, cannot buffer more")
+
+ // ErrAborting is returned for all buffered records while
+ // AbortBufferedRecords is being called.
+ ErrAborting = errors.New("client is aborting buffered records")
+
+ // ErrClientClosed is returned in various places when the client's
+ // Close function has been called.
+ //
+ // For producing, records are failed with this error.
+ //
+ // For consuming, a fake partition is injected into a poll response
+ // that has this error.
+ //
+ // For any request, the request is failed with this error.
+ ErrClientClosed = errors.New("client closed")
+)
+
+// ErrFirstReadEOF is returned for responses that immediately error with
+// io.EOF. This is the client's guess as to why a read from a broker is
+// failing with io.EOF. Two cases are currently handled,
+//
+// - When the client is using TLS but brokers are not, brokers close
+// connections immediately because the incoming request looks wrong.
+// - When SASL is required but missing, brokers close connections immediately.
+//
+// There may be other reasons that an immediate io.EOF is encountered (perhaps
+// the connection truly was severed before a response was received), but this
+// error can help you quickly check common problems.
+type ErrFirstReadEOF struct {
+ kind uint8
+ err error
+}
+
+type errProducerIDLoadFail struct {
+ err error
+}
+
+func (e *errProducerIDLoadFail) Error() string {
+ if e.err == nil {
+ return "unable to initialize a producer ID due to request failures"
+ }
+ return fmt.Sprintf("unable to initialize a producer ID due to request failures: %v", e.err)
+}
+
+func (e *errProducerIDLoadFail) Unwrap() error { return e.err }
+
+const (
+ firstReadSASL uint8 = iota
+ firstReadTLS
+)
+
+func (e *ErrFirstReadEOF) Error() string {
+ switch e.kind {
+ case firstReadTLS:
+ return "broker closed the connection immediately after a dial, which happens if the client is using TLS when the broker is not expecting it: is TLS misconfigured on the client or the broker?"
+ default: // firstReadSASL
+ return "broker closed the connection immediately after a request was issued, which happens when SASL is required but not provided: is SASL missing?"
+ }
+}
+
+// Unwrap returns io.EOF (or, if a custom dialer returned a wrapped io.EOF,
+// this returns the custom dialer's wrapped error).
+func (e *ErrFirstReadEOF) Unwrap() error { return e.err }
+
+// ErrDataLoss is returned for Kafka >=2.1 when data loss is detected and the
+// client is able to reset to the last valid offset.
+type ErrDataLoss struct {
+ // Topic is the topic data loss was detected on.
+ Topic string
+ // Partition is the partition data loss was detected on.
+ Partition int32
+ // ConsumedTo is what the client had consumed to for this partition before
+ // data loss was detected.
+ ConsumedTo int64
+ // ResetTo is what the client reset the partition to; everything from
+ // ResetTo to ConsumedTo was lost.
+ ResetTo int64
+}
+
+func (e *ErrDataLoss) Error() string {
+ return fmt.Sprintf("topic %s partition %d lost records;"+
+ " the client consumed to offset %d but was reset to offset %d",
+ e.Topic, e.Partition, e.ConsumedTo, e.ResetTo)
+}
+
+type errUnknownController struct {
+ id int32
+}
+
+func (e *errUnknownController) Error() string {
+ if e.id == -1 {
+ return "broker replied that the controller broker is not available"
+ }
+ return fmt.Sprintf("broker replied that the controller broker is %d,"+
+ " but did not reply with that broker in the broker list", e.id)
+}
+
+type errUnknownCoordinator struct {
+ coordinator int32
+ key coordinatorKey
+}
+
+func (e *errUnknownCoordinator) Error() string {
+ switch e.key.typ {
+ case coordinatorTypeGroup:
+ return fmt.Sprintf("broker replied that group %s has broker coordinator %d,"+
+ " but did not reply with that broker in the broker list",
+ e.key.name, e.coordinator)
+ case coordinatorTypeTxn:
+ return fmt.Sprintf("broker replied that txn id %s has broker coordinator %d,"+
+ " but did not reply with that broker in the broker list",
+ e.key.name, e.coordinator)
+ default:
+ return fmt.Sprintf("broker replied to an unknown coordinator key %s (type %d) that it has a broker coordinator %d,"+
+ " but did not reply with that broker in the broker list", e.key.name, e.key.typ, e.coordinator)
+ }
+}
+
+// ErrGroupSession is injected into a poll if an error occurred such that your
+// consumer group member was kicked from the group or was never able to join
+// the group.
+type ErrGroupSession struct {
+ err error
+}
+
+func (e *ErrGroupSession) Error() string {
+ return fmt.Sprintf("unable to join group session: %v", e.err)
+}
+
+func (e *ErrGroupSession) Unwrap() error { return e.err }
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/go118.go b/vendor/github.com/twmb/franz-go/pkg/kgo/go118.go
new file mode 100644
index 0000000000000..483c3e9127720
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/go118.go
@@ -0,0 +1,57 @@
+//go:build !go1.19
+// +build !go1.19
+
+package kgo
+
+import "sync/atomic"
+
+type atomicBool uint32
+
+func (b *atomicBool) Store(v bool) {
+ if v {
+ atomic.StoreUint32((*uint32)(b), 1)
+ } else {
+ atomic.StoreUint32((*uint32)(b), 0)
+ }
+}
+
+func (b *atomicBool) Load() bool { return atomic.LoadUint32((*uint32)(b)) == 1 }
+
+func (b *atomicBool) Swap(v bool) bool {
+ var swap uint32
+ if v {
+ swap = 1
+ }
+ return atomic.SwapUint32((*uint32)(b), swap) == 1
+}
+
+type atomicI32 int32
+
+func (v *atomicI32) Add(s int32) int32 { return atomic.AddInt32((*int32)(v), s) }
+func (v *atomicI32) Store(s int32) { atomic.StoreInt32((*int32)(v), s) }
+func (v *atomicI32) Load() int32 { return atomic.LoadInt32((*int32)(v)) }
+func (v *atomicI32) Swap(s int32) int32 { return atomic.SwapInt32((*int32)(v), s) }
+
+type atomicU32 uint32
+
+func (v *atomicU32) Add(s uint32) uint32 { return atomic.AddUint32((*uint32)(v), s) }
+func (v *atomicU32) Store(s uint32) { atomic.StoreUint32((*uint32)(v), s) }
+func (v *atomicU32) Load() uint32 { return atomic.LoadUint32((*uint32)(v)) }
+func (v *atomicU32) Swap(s uint32) uint32 { return atomic.SwapUint32((*uint32)(v), s) }
+func (v *atomicU32) CompareAndSwap(old, new uint32) bool {
+ return atomic.CompareAndSwapUint32((*uint32)(v), old, new)
+}
+
+type atomicI64 int64
+
+func (v *atomicI64) Add(s int64) int64 { return atomic.AddInt64((*int64)(v), s) }
+func (v *atomicI64) Store(s int64) { atomic.StoreInt64((*int64)(v), s) }
+func (v *atomicI64) Load() int64 { return atomic.LoadInt64((*int64)(v)) }
+func (v *atomicI64) Swap(s int64) int64 { return atomic.SwapInt64((*int64)(v), s) }
+
+type atomicU64 uint64
+
+func (v *atomicU64) Add(s uint64) uint64 { return atomic.AddUint64((*uint64)(v), s) }
+func (v *atomicU64) Store(s uint64) { atomic.StoreUint64((*uint64)(v), s) }
+func (v *atomicU64) Load() uint64 { return atomic.LoadUint64((*uint64)(v)) }
+func (v *atomicU64) Swap(s uint64) uint64 { return atomic.SwapUint64((*uint64)(v), s) }
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/go119.go b/vendor/github.com/twmb/franz-go/pkg/kgo/go119.go
new file mode 100644
index 0000000000000..7c8ade5e139a1
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/go119.go
@@ -0,0 +1,14 @@
+//go:build go1.19
+// +build go1.19
+
+package kgo
+
+import "sync/atomic"
+
+type (
+ atomicBool struct{ atomic.Bool }
+ atomicI32 struct{ atomic.Int32 }
+ atomicU32 struct{ atomic.Uint32 }
+ atomicI64 struct{ atomic.Int64 }
+ atomicU64 struct{ atomic.Uint64 }
+)
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/group_balancer.go b/vendor/github.com/twmb/franz-go/pkg/kgo/group_balancer.go
new file mode 100644
index 0000000000000..85f31a5342a10
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/group_balancer.go
@@ -0,0 +1,959 @@
+package kgo
+
+import (
+ "bytes"
+ "fmt"
+ "sort"
+ "strings"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kgo/internal/sticky"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// GroupBalancer balances topics and partitions among group members.
+//
+// A GroupBalancer is roughly equivalent to Kafka's PartitionAssignor.
+type GroupBalancer interface {
+ // ProtocolName returns the name of the protocol, e.g. roundrobin,
+ // range, sticky.
+ ProtocolName() string
+
+ // JoinGroupMetadata returns the metadata to use in JoinGroup, given
+ // the topic interests and the current assignment and group generation.
+ //
+ // It is safe to modify the input topics and currentAssignment. The
+ // input topics are guaranteed to be sorted, as are the partitions for
+ // each topic in currentAssignment. It is recommended for your output
+ // to be ordered by topic and partitions. Since Kafka uses the output
+ // from this function to determine whether a rebalance is needed, a
+ // deterministic output will avoid accidental rebalances.
+ JoinGroupMetadata(
+ topicInterests []string,
+ currentAssignment map[string][]int32,
+ generation int32,
+ ) []byte
+
+ // ParseSyncAssignment returns assigned topics and partitions from an
+ // encoded SyncGroupResponse's MemberAssignment.
+ ParseSyncAssignment(assignment []byte) (map[string][]int32, error)
+
+ // MemberBalancer returns a GroupMemberBalancer for the given group
+ // members, as well as the topics that all the members are interested
+ // in. If the client does not have some topics in the returned topics,
+ // the client issues a metadata request to load the number of
+ // partitions in those topics before calling the GroupMemberBalancer's
+ // Balance function.
+ //
+ // The input group members are guaranteed to be sorted first by
+ // instance ID, if non-nil, and then by member ID.
+ //
+ // It is up to the user to decide how to decode each member's
+ // ProtocolMetadata field. The default client group protocol of
+ // "consumer" by default uses join group metadata's of type
+ // kmsg.ConsumerMemberMetadata. If this is the case for you, it may be
+ // useful to use the ConsumerBalancer type to help parse the metadata
+ // and balance.
+ //
+ // If the member metadata cannot be deserialized correctly, this should
+ // return a relevant error.
+ MemberBalancer(members []kmsg.JoinGroupResponseMember) (b GroupMemberBalancer, topics map[string]struct{}, err error)
+
+ // IsCooperative returns if this is a cooperative balance strategy.
+ IsCooperative() bool
+}
+
+// GroupMemberBalancer balances topics amongst group members. If your balancing
+// can fail, you can implement GroupMemberBalancerOrError.
+type GroupMemberBalancer interface {
+ // Balance balances topics and partitions among group members, where
+ // the int32 in the topics map corresponds to the number of partitions
+ // known to be in each topic.
+ Balance(topics map[string]int32) IntoSyncAssignment
+}
+
+// GroupMemberBalancerOrError is an optional extension interface for
+// GroupMemberBalancer. This can be implemented if your balance function can
+// fail.
+//
+// For interface purposes, it is required to implement GroupMemberBalancer, but
+// Balance will never be called.
+type GroupMemberBalancerOrError interface {
+ GroupMemberBalancer
+ BalanceOrError(topics map[string]int32) (IntoSyncAssignment, error)
+}
+
+// IntoSyncAssignment takes a balance plan and returns a list of assignments to
+// use in a kmsg.SyncGroupRequest.
+//
+// It is recommended to ensure the output is deterministic and ordered by
+// member / topic / partitions.
+type IntoSyncAssignment interface {
+ IntoSyncAssignment() []kmsg.SyncGroupRequestGroupAssignment
+}
+
+// ConsumerBalancer is a helper type for writing balance plans that use the
+// "consumer" protocol, such that each member uses a kmsg.ConsumerMemberMetadata
+// in its join group request.
+type ConsumerBalancer struct {
+ b ConsumerBalancerBalance
+ members []kmsg.JoinGroupResponseMember
+ metadatas []kmsg.ConsumerMemberMetadata
+ topics map[string]struct{}
+
+ err error
+}
+
+// Balance satisfies the GroupMemberBalancer interface, but is never called
+// because GroupMemberBalancerOrError exists.
+func (*ConsumerBalancer) Balance(map[string]int32) IntoSyncAssignment {
+ panic("unreachable")
+}
+
+// BalanceOrError satisfies the GroupMemberBalancerOrError interface.
+func (b *ConsumerBalancer) BalanceOrError(topics map[string]int32) (IntoSyncAssignment, error) {
+ return b.b.Balance(b, topics), b.err
+}
+
+// Members returns the list of input members for this group balancer.
+func (b *ConsumerBalancer) Members() []kmsg.JoinGroupResponseMember {
+ return b.members
+}
+
+// EachMember calls fn for each member and its corresponding metadata in the
+// consumer group being balanced.
+func (b *ConsumerBalancer) EachMember(fn func(member *kmsg.JoinGroupResponseMember, meta *kmsg.ConsumerMemberMetadata)) {
+ for i := range b.members {
+ fn(&b.members[i], &b.metadatas[i])
+ }
+}
+
+// MemberAt returns the nth member and its corresponding metadata.
+func (b *ConsumerBalancer) MemberAt(n int) (*kmsg.JoinGroupResponseMember, *kmsg.ConsumerMemberMetadata) {
+ return &b.members[n], &b.metadatas[n]
+}
+
+// SetError allows you to set any error that occurred while balancing. This
+// allows you to fail balancing and return nil from Balance.
+func (b *ConsumerBalancer) SetError(err error) {
+ b.err = err
+}
+
+// MemberTopics returns the unique set of topics that all members are
+// interested in.
+//
+// This can safely be called if the balancer is nil; if so, this will return
+// nil.
+func (b *ConsumerBalancer) MemberTopics() map[string]struct{} {
+ if b == nil {
+ return nil
+ }
+ return b.topics
+}
+
+// NewPlan returns a type that can be used to build a balance plan. The return
+// satisfies the IntoSyncAssignment interface.
+func (b *ConsumerBalancer) NewPlan() *BalancePlan {
+ plan := make(map[string]map[string][]int32, len(b.members))
+ for i := range b.members {
+ plan[b.members[i].MemberID] = make(map[string][]int32)
+ }
+ return &BalancePlan{plan}
+}
+
+// ConsumerBalancerBalance is what the ConsumerBalancer invokes to balance a
+// group.
+//
+// This is a complicated interface, but in short, this interface has one
+// function that implements the actual balancing logic: using the input
+// balancer, balance the input topics and partitions. If your balancing can
+// fail, you can use ConsumerBalancer.SetError(...) to return an error from
+// balancing, and then you can simply return nil from Balance.
+type ConsumerBalancerBalance interface {
+ Balance(*ConsumerBalancer, map[string]int32) IntoSyncAssignment
+}
+
+// ParseConsumerSyncAssignment returns an assignment as specified a
+// kmsg.ConsumerMemberAssignment, that is, the type encoded in metadata for the
+// consumer protocol.
+func ParseConsumerSyncAssignment(assignment []byte) (map[string][]int32, error) {
+ var kassignment kmsg.ConsumerMemberAssignment
+ if err := kassignment.ReadFrom(assignment); err != nil {
+ return nil, fmt.Errorf("sync assignment parse failed: %v", err)
+ }
+
+ m := make(map[string][]int32, len(kassignment.Topics))
+ for _, topic := range kassignment.Topics {
+ m[topic.Topic] = topic.Partitions
+ }
+ return m, nil
+}
+
+// NewConsumerBalancer parses the each member's metadata as a
+// kmsg.ConsumerMemberMetadata and returns a ConsumerBalancer to use in balancing.
+//
+// If any metadata parsing fails, this returns an error.
+func NewConsumerBalancer(balance ConsumerBalancerBalance, members []kmsg.JoinGroupResponseMember) (*ConsumerBalancer, error) {
+ b := &ConsumerBalancer{
+ b: balance,
+ members: members,
+ metadatas: make([]kmsg.ConsumerMemberMetadata, len(members)),
+ topics: make(map[string]struct{}),
+ }
+
+ for i, member := range members {
+ meta := &b.metadatas[i]
+ meta.Default()
+ memberMeta := member.ProtocolMetadata
+ if err := meta.ReadFrom(memberMeta); err != nil {
+ // Some buggy clients claimed support for v1 but then
+ // did not add OwnedPartitions, resulting in a short
+ // metadata. If we fail at reading and the version is
+ // v1, we retry again as v0. We do not support other
+ // versions because hopefully other clients stop
+ // claiming higher and higher version support and not
+ // actually supporting them. Sarama has a similarish
+ // workaround. See #493.
+ if bytes.HasPrefix(memberMeta, []byte{0, 1}) {
+ memberMeta[0] = 0
+ memberMeta[1] = 0
+ if err = meta.ReadFrom(memberMeta); err != nil {
+ return nil, fmt.Errorf("unable to read member metadata: %v", err)
+ }
+ }
+ }
+ for _, topic := range meta.Topics {
+ b.topics[topic] = struct{}{}
+ }
+ sort.Strings(meta.Topics)
+ }
+
+ return b, nil
+}
+
+// BalancePlan is a helper type to build the result of balancing topics
+// and partitions among group members.
+type BalancePlan struct {
+ plan map[string]map[string][]int32 // member => topic => partitions
+}
+
+// AsMemberIDMap returns the plan as a map of member IDs to their topic &
+// partition assignments.
+//
+// Internally, a BalancePlan is currently represented as this map. Any
+// modification to the map modifies the plan. The internal representation of a
+// plan may change in the future to include more metadata. If this happens, the
+// map returned from this function may not represent all aspects of a plan.
+// The client will attempt to mirror modifications to the map directly back
+// into the underlying plan as best as possible.
+func (p *BalancePlan) AsMemberIDMap() map[string]map[string][]int32 {
+ return p.plan
+}
+
+func (p *BalancePlan) String() string {
+ var sb strings.Builder
+
+ var membersWritten int
+ for member, topics := range p.plan {
+ membersWritten++
+ sb.WriteString(member)
+ sb.WriteString("{")
+
+ var topicsWritten int
+ for topic, partitions := range topics {
+ fmt.Fprintf(&sb, "%s%v", topic, partitions)
+ topicsWritten++
+ if topicsWritten < len(topics) {
+ sb.WriteString(", ")
+ }
+ }
+
+ sb.WriteString("}")
+ if membersWritten < len(p.plan) {
+ sb.WriteString(", ")
+ }
+ }
+
+ return sb.String()
+}
+
+// AddPartition assigns a partition for the topic to a given member.
+func (p *BalancePlan) AddPartition(member *kmsg.JoinGroupResponseMember, topic string, partition int32) {
+ memberPlan := p.plan[member.MemberID]
+ memberPlan[topic] = append(memberPlan[topic], partition)
+}
+
+// AddPartitions assigns many partitions for a topic to a given member.
+func (p *BalancePlan) AddPartitions(member *kmsg.JoinGroupResponseMember, topic string, partitions []int32) {
+ memberPlan := p.plan[member.MemberID]
+ memberPlan[topic] = append(memberPlan[topic], partitions...)
+}
+
+// IntoSyncAssignment satisfies the IntoSyncAssignment interface.
+func (p *BalancePlan) IntoSyncAssignment() []kmsg.SyncGroupRequestGroupAssignment {
+ kassignments := make([]kmsg.SyncGroupRequestGroupAssignment, 0, len(p.plan))
+ for member, assignment := range p.plan {
+ var kassignment kmsg.ConsumerMemberAssignment
+ for topic, partitions := range assignment {
+ sort.Slice(partitions, func(i, j int) bool { return partitions[i] < partitions[j] })
+ assnTopic := kmsg.NewConsumerMemberAssignmentTopic()
+ assnTopic.Topic = topic
+ assnTopic.Partitions = partitions
+ kassignment.Topics = append(kassignment.Topics, assnTopic)
+ }
+ sort.Slice(kassignment.Topics, func(i, j int) bool { return kassignment.Topics[i].Topic < kassignment.Topics[j].Topic })
+ syncAssn := kmsg.NewSyncGroupRequestGroupAssignment()
+ syncAssn.MemberID = member
+ syncAssn.MemberAssignment = kassignment.AppendTo(nil)
+ kassignments = append(kassignments, syncAssn)
+ }
+ sort.Slice(kassignments, func(i, j int) bool { return kassignments[i].MemberID < kassignments[j].MemberID })
+ return kassignments
+}
+
+func joinMemberLess(l, r *kmsg.JoinGroupResponseMember) bool {
+ if l.InstanceID != nil {
+ if r.InstanceID == nil {
+ return true
+ }
+ return *l.InstanceID < *r.InstanceID
+ }
+ if r.InstanceID != nil {
+ return false
+ }
+ return l.MemberID < r.MemberID
+}
+
+func sortJoinMembers(members []kmsg.JoinGroupResponseMember) {
+ sort.Slice(members, func(i, j int) bool { return joinMemberLess(&members[i], &members[j]) })
+}
+
+func sortJoinMemberPtrs(members []*kmsg.JoinGroupResponseMember) {
+ sort.Slice(members, func(i, j int) bool { return joinMemberLess(members[i], members[j]) })
+}
+
+func (g *groupConsumer) findBalancer(from, proto string) (GroupBalancer, error) {
+ for _, b := range g.cfg.balancers {
+ if b.ProtocolName() == proto {
+ return b, nil
+ }
+ }
+ var ours []string
+ for _, b := range g.cfg.balancers {
+ ours = append(ours, b.ProtocolName())
+ }
+ g.cl.cfg.logger.Log(LogLevelError, fmt.Sprintf("%s could not find broker-chosen balancer", from), "kafka_choice", proto, "our_set", strings.Join(ours, ", "))
+ return nil, fmt.Errorf("unable to balance: none of our balancers have a name equal to the balancer chosen for balancing (%s)", proto)
+}
+
+// balanceGroup returns a balancePlan from a join group response.
+//
+// If the group has topics this leader does not want to consume, this also
+// returns all topics and partitions; the leader will then periodically do its
+// own metadata update to see if partition counts have changed for these random
+// topics.
+func (g *groupConsumer) balanceGroup(proto string, members []kmsg.JoinGroupResponseMember, skipBalance bool) ([]kmsg.SyncGroupRequestGroupAssignment, error) {
+ g.cl.cfg.logger.Log(LogLevelInfo, "balancing group as leader")
+
+ b, err := g.findBalancer("balance group", proto)
+ if err != nil {
+ return nil, err
+ }
+
+ sortJoinMembers(members)
+
+ memberBalancer, topics, err := b.MemberBalancer(members)
+ if err != nil {
+ return nil, fmt.Errorf("unable to create group member balancer: %v", err)
+ }
+
+ myTopics := g.tps.load()
+ var needMeta bool
+ topicPartitionCount := make(map[string]int32, len(topics))
+ for topic := range topics {
+ data, exists := myTopics[topic]
+ if !exists {
+ needMeta = true
+ continue
+ }
+ topicPartitionCount[topic] = int32(len(data.load().partitions))
+ }
+
+ // If our consumer metadata does not contain all topics, the group is
+ // expressing interests in topics we are not consuming. Perhaps we have
+ // those topics saved in our external topics map.
+ if needMeta {
+ g.loadExternal().fn(func(m map[string]int32) {
+ needMeta = false
+ for topic := range topics {
+ partitions, exists := m[topic]
+ if !exists {
+ needMeta = true
+ continue
+ }
+ topicPartitionCount[topic] = partitions
+ }
+ })
+ }
+
+ if needMeta {
+ g.cl.cfg.logger.Log(LogLevelInfo, "group members indicated interest in topics the leader is not assigned, fetching metadata for all group topics")
+ var metaTopics []string
+ for topic := range topics {
+ metaTopics = append(metaTopics, topic)
+ }
+
+ _, resp, err := g.cl.fetchMetadataForTopics(g.ctx, false, metaTopics)
+ if err != nil {
+ return nil, fmt.Errorf("unable to fetch metadata for group topics: %v", err)
+ }
+ for i := range resp.Topics {
+ t := &resp.Topics[i]
+ if t.Topic == nil {
+ g.cl.cfg.logger.Log(LogLevelWarn, "metadata resp in balance for topic has nil topic, skipping...", "err", kerr.ErrorForCode(t.ErrorCode))
+ continue
+ }
+ if t.ErrorCode != 0 {
+ g.cl.cfg.logger.Log(LogLevelWarn, "metadata resp in balance for topic has error, skipping...", "topic", t.Topic, "err", kerr.ErrorForCode(t.ErrorCode))
+ continue
+ }
+ topicPartitionCount[*t.Topic] = int32(len(t.Partitions))
+ }
+
+ g.initExternal(topicPartitionCount)
+ }
+
+ // If the returned balancer is a ConsumerBalancer (which it likely
+ // always will be), then we can print some useful debugging information
+ // about what member interests are.
+ if b, ok := memberBalancer.(*ConsumerBalancer); ok {
+ interests := new(bytes.Buffer)
+ b.EachMember(func(member *kmsg.JoinGroupResponseMember, meta *kmsg.ConsumerMemberMetadata) {
+ interests.Reset()
+ fmt.Fprintf(interests, "interested topics: %v, previously owned: ", meta.Topics)
+ for _, owned := range meta.OwnedPartitions {
+ sort.Slice(owned.Partitions, func(i, j int) bool { return owned.Partitions[i] < owned.Partitions[j] })
+ fmt.Fprintf(interests, "%s%v, ", owned.Topic, owned.Partitions)
+ }
+ strInterests := interests.String()
+ strInterests = strings.TrimSuffix(strInterests, ", ")
+
+ if member.InstanceID == nil {
+ g.cl.cfg.logger.Log(LogLevelInfo, "balance group member", "id", member.MemberID, "interests", strInterests)
+ } else {
+ g.cl.cfg.logger.Log(LogLevelInfo, "balance group member", "id", member.MemberID, "instance_id", *member.InstanceID, "interests", strInterests)
+ }
+ })
+ } else {
+ g.cl.cfg.logger.Log(LogLevelInfo, "unable to log information about group member interests: the user has defined a custom balancer (not a *ConsumerBalancer)")
+ }
+
+ // KIP-814: we are leader and we know what the entire group is
+ // consuming. Crucially, we parsed topics that we are potentially not
+ // interested in and are now tracking them for metadata updates. We
+ // have logged the current interests, we do not need to actually
+ // balance.
+ if skipBalance {
+ switch proto := b.ProtocolName(); proto {
+ case RangeBalancer().ProtocolName(),
+ RoundRobinBalancer().ProtocolName(),
+ StickyBalancer().ProtocolName(),
+ CooperativeStickyBalancer().ProtocolName():
+ default:
+ return nil, nil
+ }
+ }
+
+ // If the returned IntoSyncAssignment is a BalancePlan, which it likely
+ // is if the balancer is a ConsumerBalancer, then we can again print
+ // more useful debugging information.
+ var into IntoSyncAssignment
+ if memberBalancerOrErr, ok := memberBalancer.(GroupMemberBalancerOrError); ok {
+ if into, err = memberBalancerOrErr.BalanceOrError(topicPartitionCount); err != nil {
+ g.cl.cfg.logger.Log(LogLevelError, "balance failed", "err", err)
+ return nil, err
+ }
+ } else {
+ into = memberBalancer.Balance(topicPartitionCount)
+ }
+
+ if p, ok := into.(*BalancePlan); ok {
+ g.cl.cfg.logger.Log(LogLevelInfo, "balanced", "plan", p.String())
+ } else {
+ g.cl.cfg.logger.Log(LogLevelInfo, "unable to log balance plan: the user has returned a custom IntoSyncAssignment (not a *BalancePlan)")
+ }
+
+ return into.IntoSyncAssignment(), nil
+}
+
+// helper func; range and roundrobin use v0
+func simpleMemberMetadata(interests []string, generation int32) []byte {
+ meta := kmsg.NewConsumerMemberMetadata()
+ meta.Version = 3 // BUMP ME WHEN NEW FIELDS ARE ADDED, AND BUMP BELOW
+ meta.Topics = interests // input interests are already sorted
+ // meta.OwnedPartitions is nil, since simple protocols are not cooperative
+ meta.Generation = generation
+ return meta.AppendTo(nil)
+}
+
+///////////////////
+// Balance Plans //
+///////////////////
+
+// RoundRobinBalancer returns a group balancer that evenly maps topics and
+// partitions to group members.
+//
+// Suppose there are two members M0 and M1, two topics t0 and t1, and each
+// topic has three partitions p0, p1, and p2. The partition balancing will be
+//
+// M0: [t0p0, t0p2, t1p1]
+// M1: [t0p1, t1p0, t1p2]
+//
+// If all members subscribe to all topics equally, the roundrobin balancer
+// will give a perfect balance. However, if topic subscriptions are quite
+// unequal, the roundrobin balancer may lead to a bad balance. See KIP-49
+// for one example (note that the fair strategy mentioned in KIP-49 does
+// not exist).
+//
+// This is equivalent to the Java roundrobin balancer.
+func RoundRobinBalancer() GroupBalancer {
+ return new(roundRobinBalancer)
+}
+
+type roundRobinBalancer struct{}
+
+func (*roundRobinBalancer) ProtocolName() string { return "roundrobin" }
+func (*roundRobinBalancer) IsCooperative() bool { return false }
+func (*roundRobinBalancer) JoinGroupMetadata(interests []string, _ map[string][]int32, generation int32) []byte {
+ return simpleMemberMetadata(interests, generation)
+}
+
+func (*roundRobinBalancer) ParseSyncAssignment(assignment []byte) (map[string][]int32, error) {
+ return ParseConsumerSyncAssignment(assignment)
+}
+
+func (r *roundRobinBalancer) MemberBalancer(members []kmsg.JoinGroupResponseMember) (GroupMemberBalancer, map[string]struct{}, error) {
+ b, err := NewConsumerBalancer(r, members)
+ return b, b.MemberTopics(), err
+}
+
+func (*roundRobinBalancer) Balance(b *ConsumerBalancer, topics map[string]int32) IntoSyncAssignment {
+ type topicPartition struct {
+ topic string
+ partition int32
+ }
+ var nparts int
+ for _, partitions := range topics {
+ nparts += int(partitions)
+ }
+ // Order all partitions available to balance, filtering out those that
+ // no members are subscribed to.
+ allParts := make([]topicPartition, 0, nparts)
+ for topic := range b.MemberTopics() {
+ for partition := int32(0); partition < topics[topic]; partition++ {
+ allParts = append(allParts, topicPartition{
+ topic,
+ partition,
+ })
+ }
+ }
+ sort.Slice(allParts, func(i, j int) bool {
+ l, r := allParts[i], allParts[j]
+ return l.topic < r.topic || l.topic == r.topic && l.partition < r.partition
+ })
+
+ plan := b.NewPlan()
+ // While parts are unassigned, assign them.
+ var memberIdx int
+ for len(allParts) > 0 {
+ next := allParts[0]
+ allParts = allParts[1:]
+
+ // The Java roundrobin strategy walks members circularly until
+ // a member can take this partition, and then starts the next
+ // partition where the circular iterator left off.
+ assigned:
+ for {
+ member, meta := b.MemberAt(memberIdx)
+ memberIdx = (memberIdx + 1) % len(b.Members())
+ for _, topic := range meta.Topics {
+ if topic == next.topic {
+ plan.AddPartition(member, next.topic, next.partition)
+ break assigned
+ }
+ }
+ }
+ }
+
+ return plan
+}
+
+// RangeBalancer returns a group balancer that, per topic, maps partitions to
+// group members. Since this works on a topic level, uneven partitions per
+// topic to the number of members can lead to slight partition consumption
+// disparities.
+//
+// Suppose there are two members M0 and M1, two topics t0 and t1, and each
+// topic has three partitions p0, p1, and p2. The partition balancing will be
+//
+// M0: [t0p0, t0p1, t1p0, t1p1]
+// M1: [t0p2, t1p2]
+//
+// This is equivalent to the Java range balancer.
+func RangeBalancer() GroupBalancer {
+ return new(rangeBalancer)
+}
+
+type rangeBalancer struct{}
+
+func (*rangeBalancer) ProtocolName() string { return "range" }
+func (*rangeBalancer) IsCooperative() bool { return false }
+func (*rangeBalancer) JoinGroupMetadata(interests []string, _ map[string][]int32, generation int32) []byte {
+ return simpleMemberMetadata(interests, generation)
+}
+
+func (*rangeBalancer) ParseSyncAssignment(assignment []byte) (map[string][]int32, error) {
+ return ParseConsumerSyncAssignment(assignment)
+}
+
+func (r *rangeBalancer) MemberBalancer(members []kmsg.JoinGroupResponseMember) (GroupMemberBalancer, map[string]struct{}, error) {
+ b, err := NewConsumerBalancer(r, members)
+ return b, b.MemberTopics(), err
+}
+
+func (*rangeBalancer) Balance(b *ConsumerBalancer, topics map[string]int32) IntoSyncAssignment {
+ topics2PotentialConsumers := make(map[string][]*kmsg.JoinGroupResponseMember)
+ b.EachMember(func(member *kmsg.JoinGroupResponseMember, meta *kmsg.ConsumerMemberMetadata) {
+ for _, topic := range meta.Topics {
+ topics2PotentialConsumers[topic] = append(topics2PotentialConsumers[topic], member)
+ }
+ })
+
+ plan := b.NewPlan()
+ for topic, potentialConsumers := range topics2PotentialConsumers {
+ sortJoinMemberPtrs(potentialConsumers)
+
+ numPartitions := topics[topic]
+ partitions := make([]int32, numPartitions)
+ for i := range partitions {
+ partitions[i] = int32(i)
+ }
+ numParts := len(partitions)
+ div, rem := numParts/len(potentialConsumers), numParts%len(potentialConsumers)
+
+ var consumerIdx int
+ for len(partitions) > 0 {
+ num := div
+ if rem > 0 {
+ num++
+ rem--
+ }
+
+ member := potentialConsumers[consumerIdx]
+ plan.AddPartitions(member, topic, partitions[:num])
+
+ consumerIdx++
+ partitions = partitions[num:]
+ }
+ }
+
+ return plan
+}
+
+// StickyBalancer returns a group balancer that ensures minimal partition
+// movement on group changes while also ensuring optimal balancing.
+//
+// Suppose there are three members M0, M1, and M2, and two topics t0 and t1
+// each with three partitions p0, p1, and p2. If the initial balance plan looks
+// like
+//
+// M0: [t0p0, t0p1, t0p2]
+// M1: [t1p0, t1p1, t1p2]
+// M2: [t2p0, t2p2, t2p2]
+//
+// If M2 disappears, both roundrobin and range would have mostly destructive
+// reassignments.
+//
+// Range would result in
+//
+// M0: [t0p0, t0p1, t1p0, t1p1, t2p0, t2p1]
+// M1: [t0p2, t1p2, t2p2]
+//
+// which is imbalanced and has 3 partitions move from members that did not need
+// to move (t0p2, t1p0, t1p1).
+//
+// RoundRobin would result in
+//
+// M0: [t0p0, t0p2, t1p1, t2p0, t2p2]
+// M1: [t0p1, t1p0, t1p2, t2p1]
+//
+// which is balanced, but has 2 partitions move when they do not need to
+// (t0p1, t1p1).
+//
+// Sticky balancing results in
+//
+// M0: [t0p0, t0p1, t0p2, t2p0, t2p2]
+// M1: [t1p0, t1p1, t1p2, t2p1]
+//
+// which is balanced and does not cause any unnecessary partition movement.
+// The actual t2 partitions may not be in that exact combination, but they
+// will be balanced.
+//
+// An advantage of the sticky consumer is that it allows API users to
+// potentially avoid some cleanup until after the consumer knows which
+// partitions it is losing when it gets its new assignment. Users can
+// then only cleanup state for partitions that changed, which will be
+// minimal (see KIP-54; this client also includes the KIP-351 bugfix).
+//
+// Note that this API implements the sticky partitioning quite differently from
+// the Java implementation. The Java implementation is difficult to reason
+// about and has many edge cases that result in non-optimal balancing (albeit,
+// you likely have to be trying to hit those edge cases). This API uses a
+// different algorithm to ensure optimal balancing while being an order of
+// magnitude faster.
+//
+// Since the new strategy is a strict improvement over the Java strategy, it is
+// entirely compatible. Any Go client sharing a group with a Java client will
+// not have its decisions undone on leadership change from a Go consumer to a
+// Java one. Java balancers do not apply the strategy it comes up with if it
+// deems the balance score equal to or worse than the original score (the score
+// being effectively equal to the standard deviation of the mean number of
+// assigned partitions). This Go sticky balancer is optimal and extra sticky.
+// Thus, the Java balancer will never back out of a strategy from this
+// balancer.
+func StickyBalancer() GroupBalancer {
+ return &stickyBalancer{cooperative: false}
+}
+
+type stickyBalancer struct {
+ cooperative bool
+}
+
+func (s *stickyBalancer) ProtocolName() string {
+ if s.cooperative {
+ return "cooperative-sticky"
+ }
+ return "sticky"
+}
+func (s *stickyBalancer) IsCooperative() bool { return s.cooperative }
+func (s *stickyBalancer) JoinGroupMetadata(interests []string, currentAssignment map[string][]int32, generation int32) []byte {
+ meta := kmsg.NewConsumerMemberMetadata()
+ meta.Version = 3 // BUMP ME WHEN NEW FIELDS ARE ADDED, AND BUMP ABOVE
+ meta.Topics = interests
+ meta.Generation = generation
+ stickyMeta := kmsg.NewStickyMemberMetadata()
+ stickyMeta.Generation = generation
+ for topic, partitions := range currentAssignment {
+ if s.cooperative {
+ metaPart := kmsg.NewConsumerMemberMetadataOwnedPartition()
+ metaPart.Topic = topic
+ metaPart.Partitions = partitions
+ meta.OwnedPartitions = append(meta.OwnedPartitions, metaPart)
+ }
+ stickyAssn := kmsg.NewStickyMemberMetadataCurrentAssignment()
+ stickyAssn.Topic = topic
+ stickyAssn.Partitions = partitions
+ stickyMeta.CurrentAssignment = append(stickyMeta.CurrentAssignment, stickyAssn)
+ }
+
+ // KAFKA-12898: ensure our topics are sorted
+ metaOwned := meta.OwnedPartitions
+ stickyCurrent := stickyMeta.CurrentAssignment
+ sort.Slice(metaOwned, func(i, j int) bool { return metaOwned[i].Topic < metaOwned[j].Topic })
+ sort.Slice(stickyCurrent, func(i, j int) bool { return stickyCurrent[i].Topic < stickyCurrent[j].Topic })
+
+ meta.UserData = stickyMeta.AppendTo(nil)
+ return meta.AppendTo(nil)
+}
+
+func (*stickyBalancer) ParseSyncAssignment(assignment []byte) (map[string][]int32, error) {
+ return ParseConsumerSyncAssignment(assignment)
+}
+
+func (s *stickyBalancer) MemberBalancer(members []kmsg.JoinGroupResponseMember) (GroupMemberBalancer, map[string]struct{}, error) {
+ b, err := NewConsumerBalancer(s, members)
+ return b, b.MemberTopics(), err
+}
+
+func (s *stickyBalancer) Balance(b *ConsumerBalancer, topics map[string]int32) IntoSyncAssignment {
+ // Since our input into balancing is already sorted by instance ID,
+ // the sticky strategy does not need to worry about instance IDs at all.
+ // See my (slightly rambling) comment on KAFKA-8432.
+ stickyMembers := make([]sticky.GroupMember, 0, len(b.Members()))
+ b.EachMember(func(member *kmsg.JoinGroupResponseMember, meta *kmsg.ConsumerMemberMetadata) {
+ stickyMembers = append(stickyMembers, sticky.GroupMember{
+ ID: member.MemberID,
+ Topics: meta.Topics,
+ UserData: meta.UserData,
+ Owned: meta.OwnedPartitions,
+ Generation: meta.Generation,
+ Cooperative: s.cooperative,
+ })
+ })
+
+ p := &BalancePlan{sticky.Balance(stickyMembers, topics)}
+ if s.cooperative {
+ p.AdjustCooperative(b)
+ }
+ return p
+}
+
+// CooperativeStickyBalancer performs the sticky balancing strategy, but
+// additionally opts the consumer group into "cooperative" rebalancing.
+//
+// Cooperative rebalancing differs from "eager" (the original) rebalancing in
+// that group members do not stop processing partitions during the rebalance.
+// Instead, once they receive their new assignment, each member determines
+// which partitions it needs to revoke. If any, they send a new join request
+// (before syncing), and the process starts over. This should ultimately end up
+// in only two join rounds, with the major benefit being that processing never
+// needs to stop.
+//
+// NOTE once a group is collectively using cooperative balancing, it is unsafe
+// to have a member join the group that does not support cooperative balancing.
+// If the only-eager member is elected leader, it will not know of the new
+// multiple join strategy and things will go awry. Thus, once a group is
+// entirely on cooperative rebalancing, it cannot go back.
+//
+// Migrating an eager group to cooperative balancing requires two rolling
+// bounce deploys. The first deploy should add the cooperative-sticky strategy
+// as an option (that is, each member goes from using one balance strategy to
+// two). During this deploy, Kafka will tell leaders to continue using the old
+// eager strategy, since the old eager strategy is the only one in common among
+// all members. The second rolling deploy removes the old eager strategy. At
+// this point, Kafka will tell the leader to use cooperative-sticky balancing.
+// During this roll, all members in the group that still have both strategies
+// continue to be eager and give up all of their partitions every rebalance.
+// However, once a member only has cooperative-sticky, it can begin using this
+// new strategy and things will work correctly. See KIP-429 for more details.
+func CooperativeStickyBalancer() GroupBalancer {
+ return &stickyBalancer{cooperative: true}
+}
+
+// AdjustCooperative performs the final adjustment to a plan for cooperative
+// balancing.
+//
+// Over the plan, we remove all partitions that migrated from one member (where
+// it was assigned) to a new member (where it is now planned).
+//
+// This allows members that had partitions removed to revoke and rejoin, which
+// will then do another rebalance, and in that new rebalance, the planned
+// partitions are now on the free list to be assigned.
+func (p *BalancePlan) AdjustCooperative(b *ConsumerBalancer) {
+ allAdded := make(map[string]map[int32]string, 100) // topic => partition => member
+ allRevoked := make(map[string]map[int32]struct{}, 100)
+
+ addT := func(t string) map[int32]string {
+ addT := allAdded[t]
+ if addT == nil {
+ addT = make(map[int32]string, 20)
+ allAdded[t] = addT
+ }
+ return addT
+ }
+ revokeT := func(t string) map[int32]struct{} {
+ revokeT := allRevoked[t]
+ if revokeT == nil {
+ revokeT = make(map[int32]struct{}, 20)
+ allRevoked[t] = revokeT
+ }
+ return revokeT
+ }
+
+ tmap := make(map[string]struct{}) // reusable topic existence map
+ pmap := make(map[int32]struct{}) // reusable partitions existence map
+
+ plan := p.plan
+
+ // First, on all members, we find what was added and what was removed
+ // to and from that member.
+ b.EachMember(func(member *kmsg.JoinGroupResponseMember, meta *kmsg.ConsumerMemberMetadata) {
+ planned := plan[member.MemberID]
+
+ // added := planned - current
+ // revoked := current - planned
+
+ for ptopic := range planned { // set existence for all planned topics
+ tmap[ptopic] = struct{}{}
+ }
+ for _, otopic := range meta.OwnedPartitions { // over all prior owned topics,
+ topic := otopic.Topic
+ delete(tmap, topic)
+ ppartitions, exists := planned[topic]
+ if !exists { // any topic that is no longer planned was entirely revoked,
+ allRevokedT := revokeT(topic)
+ for _, opartition := range otopic.Partitions {
+ allRevokedT[opartition] = struct{}{}
+ }
+ continue
+ }
+ // calculate what was added by creating a planned existence map,
+ // then removing what was owned, and anything that remains is new,
+ for _, ppartition := range ppartitions {
+ pmap[ppartition] = struct{}{}
+ }
+ for _, opartition := range otopic.Partitions {
+ delete(pmap, opartition)
+ }
+ if len(pmap) > 0 {
+ allAddedT := addT(topic)
+ for ppartition := range pmap {
+ delete(pmap, ppartition)
+ allAddedT[ppartition] = member.MemberID
+ }
+ }
+ // then calculate removal by creating owned existence map,
+ // then removing what was planned, anything remaining was revoked.
+ for _, opartition := range otopic.Partitions {
+ pmap[opartition] = struct{}{}
+ }
+ for _, ppartition := range ppartitions {
+ delete(pmap, ppartition)
+ }
+ if len(pmap) > 0 {
+ allRevokedT := revokeT(topic)
+ for opartition := range pmap {
+ delete(pmap, opartition)
+ allRevokedT[opartition] = struct{}{}
+ }
+ }
+ }
+ for ptopic := range tmap { // finally, anything remaining in tmap is a new planned topic.
+ delete(tmap, ptopic)
+ allAddedT := addT(ptopic)
+ for _, ppartition := range planned[ptopic] {
+ allAddedT[ppartition] = member.MemberID
+ }
+ }
+ })
+
+ // Over all revoked, if the revoked partition was added to a different
+ // member, we remove that partition from the new member.
+ for topic, rpartitions := range allRevoked {
+ atopic, exists := allAdded[topic]
+ if !exists {
+ continue
+ }
+ for rpartition := range rpartitions {
+ amember, exists := atopic[rpartition]
+ if !exists {
+ continue
+ }
+
+ ptopics := plan[amember]
+ ppartitions := ptopics[topic]
+ for i, ppartition := range ppartitions {
+ if ppartition == rpartition {
+ ppartitions[i] = ppartitions[len(ppartitions)-1]
+ ppartitions = ppartitions[:len(ppartitions)-1]
+ break
+ }
+ }
+ if len(ppartitions) > 0 {
+ ptopics[topic] = ppartitions
+ } else {
+ delete(ptopics, topic)
+ }
+ }
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/hooks.go b/vendor/github.com/twmb/franz-go/pkg/kgo/hooks.go
new file mode 100644
index 0000000000000..aeff4f19df0a3
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/hooks.go
@@ -0,0 +1,420 @@
+package kgo
+
+import (
+ "net"
+ "time"
+)
+
+////////////////////////////////////////////////////////////////
+// NOTE: //
+// NOTE: Make sure new hooks are checked in implementsAnyHook //
+// NOTE: //
+////////////////////////////////////////////////////////////////
+
+// Hook is a hook to be called when something happens in kgo.
+//
+// The base Hook interface is useless, but wherever a hook can occur in kgo,
+// the client checks if your hook implements an appropriate interface. If so,
+// your hook is called.
+//
+// This allows you to only hook in to behavior you care about, and it allows
+// the client to add more hooks in the future.
+//
+// All hook interfaces in this package have Hook in the name. Hooks must be
+// safe for concurrent use. It is expected that hooks are fast; if a hook needs
+// to take time, then copy what you need and ensure the hook is async.
+type Hook any
+
+type hooks []Hook
+
+func (hs hooks) each(fn func(Hook)) {
+ for _, h := range hs {
+ fn(h)
+ }
+}
+
+// HookNewClient is called in NewClient after a client is initialized. This
+// hook can be used to perform final setup work in your hooks.
+type HookNewClient interface {
+ // OnNewClient is passed the newly initialized client, before any
+ // client goroutines are started.
+ OnNewClient(*Client)
+}
+
+// HookClientClosed is called in Close or CloseAfterRebalance after a client
+// has been closed. This hook can be used to perform final cleanup work.
+type HookClientClosed interface {
+ // OnClientClosed is passed the client that has been closed, after
+ // all client-internal close cleanup has happened.
+ OnClientClosed(*Client)
+}
+
+//////////////////
+// BROKER HOOKS //
+//////////////////
+
+// HookBrokerConnect is called after a connection to a broker is opened.
+type HookBrokerConnect interface {
+ // OnBrokerConnect is passed the broker metadata, how long it took to
+ // dial, and either the dial's resulting net.Conn or error.
+ OnBrokerConnect(meta BrokerMetadata, dialDur time.Duration, conn net.Conn, err error)
+}
+
+// HookBrokerDisconnect is called when a connection to a broker is closed.
+type HookBrokerDisconnect interface {
+ // OnBrokerDisconnect is passed the broker metadata and the connection
+ // that is closing.
+ OnBrokerDisconnect(meta BrokerMetadata, conn net.Conn)
+}
+
+// HookBrokerWrite is called after a write to a broker.
+//
+// Kerberos SASL does not cause write hooks, since it directly writes to the
+// connection.
+type HookBrokerWrite interface {
+ // OnBrokerWrite is passed the broker metadata, the key for the request
+ // that was written, the number of bytes that were written (may not be
+ // the whole request if there was an error), how long the request
+ // waited before being written (including throttling waiting), how long
+ // it took to write the request, and any error.
+ //
+ // The bytes written does not count any tls overhead.
+ OnBrokerWrite(meta BrokerMetadata, key int16, bytesWritten int, writeWait, timeToWrite time.Duration, err error)
+}
+
+// HookBrokerRead is called after a read from a broker.
+//
+// Kerberos SASL does not cause read hooks, since it directly reads from the
+// connection.
+type HookBrokerRead interface {
+ // OnBrokerRead is passed the broker metadata, the key for the response
+ // that was read, the number of bytes read (may not be the whole read
+ // if there was an error), how long the client waited before reading
+ // the response, how long it took to read the response, and any error.
+ //
+ // The bytes read does not count any tls overhead.
+ OnBrokerRead(meta BrokerMetadata, key int16, bytesRead int, readWait, timeToRead time.Duration, err error)
+}
+
+// BrokerE2E tracks complete information for a write of a request followed by a
+// read of that requests's response.
+//
+// Note that if this is for a produce request with no acks, there will be no
+// read wait / time to read.
+type BrokerE2E struct {
+ // BytesWritten is the number of bytes written for this request.
+ //
+ // This may not be the whole request if there was an error while writing.
+ BytesWritten int
+
+ // BytesRead is the number of bytes read for this requests's response.
+ //
+ // This may not be the whole response if there was an error while
+ // reading, and this will be zero if there was a write error.
+ BytesRead int
+
+ // WriteWait is the time spent waiting from when this request was
+ // generated internally in the client to just before the request is
+ // written to the connection. This number is not included in the
+ // DurationE2E method.
+ WriteWait time.Duration
+ // TimeToWrite is how long a request took to be written on the wire.
+ // This specifically tracks only how long conn.Write takes.
+ TimeToWrite time.Duration
+ // ReadWait tracks the span of time immediately following conn.Write
+ // until conn.Read begins.
+ ReadWait time.Duration
+ // TimeToRead tracks how long conn.Read takes for this request to be
+ // entirely read. This includes the time it takes to allocate a buffer
+ // for the response after the initial four size bytes are read.
+ TimeToRead time.Duration
+
+ // WriteErr is any error encountered during writing. If a write error is
+ // encountered, no read will be attempted.
+ WriteErr error
+ // ReadErr is any error encountered during reading.
+ ReadErr error
+}
+
+// DurationE2E returns the e2e time from the start of when a request is written
+// to the end of when the response for that request was fully read. If a write
+// or read error occurs, this hook is called with all information possible at
+// the time (e.g., if a write error occurs, all write info is specified).
+//
+// Kerberos SASL does not cause this hook, since it directly reads from the
+// connection.
+func (e *BrokerE2E) DurationE2E() time.Duration {
+ return e.TimeToWrite + e.ReadWait + e.TimeToRead
+}
+
+// Err returns the first of either the write err or the read err. If this
+// return is non-nil, the request/response had an error.
+func (e *BrokerE2E) Err() error {
+ if e.WriteErr != nil {
+ return e.WriteErr
+ }
+ return e.ReadErr
+}
+
+// HookBrokerE2E is called after a write to a broker that errors, or after a
+// read to a broker.
+//
+// This differs from HookBrokerRead and HookBrokerWrite by tracking all E2E
+// info for a write and a read, which allows for easier e2e metrics. This hook
+// can replace both the read and write hook.
+type HookBrokerE2E interface {
+ // OnBrokerE2E is passed the broker metadata, the key for the
+ // request/response that was written/read, and the e2e info for the
+ // request and response.
+ OnBrokerE2E(meta BrokerMetadata, key int16, e2e BrokerE2E)
+}
+
+// HookBrokerThrottle is called after a response to a request is read
+// from a broker, and the response identifies throttling in effect.
+type HookBrokerThrottle interface {
+ // OnBrokerThrottle is passed the broker metadata, the imposed
+ // throttling interval, and whether the throttle was applied before
+ // Kafka responded to them request or after.
+ //
+ // For Kafka < 2.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0, the throttle is applied after issuing a response.
+ //
+ // If throttledAfterResponse is false, then Kafka already applied the
+ // throttle. If it is true, the client internally will not send another
+ // request until the throttle deadline has passed.
+ OnBrokerThrottle(meta BrokerMetadata, throttleInterval time.Duration, throttledAfterResponse bool)
+}
+
+//////////
+// MISC //
+//////////
+
+// HookGroupManageError is called after every error that causes the client,
+// operating as a group member, to break out of the group managing loop and
+// backoff temporarily.
+//
+// Specifically, any error that would result in OnPartitionsLost being called
+// will result in this hook being called.
+type HookGroupManageError interface {
+ // OnGroupManageError is passed the error that killed a group session.
+ // This can be used to detect potentially fatal errors and act on them
+ // at runtime to recover (such as group auth errors, or group max size
+ // reached).
+ OnGroupManageError(error)
+}
+
+///////////////////////////////
+// PRODUCE & CONSUME BATCHES //
+///////////////////////////////
+
+// ProduceBatchMetrics tracks information about successful produces to
+// partitions.
+type ProduceBatchMetrics struct {
+ // NumRecords is the number of records that were produced in this
+ // batch.
+ NumRecords int
+
+ // UncompressedBytes is the number of bytes the records serialized as
+ // before compression.
+ //
+ // For record batches (Kafka v0.11.0+), this is the size of the records
+ // in a batch, and does not include record batch overhead.
+ //
+ // For message sets, this size includes message set overhead.
+ UncompressedBytes int
+
+ // CompressedBytes is the number of bytes actually written for this
+ // batch, after compression. If compression is not used, this will be
+ // equal to UncompresedBytes.
+ //
+ // For record batches, this is the size of the compressed records, and
+ // does not include record batch overhead.
+ //
+ // For message sets, this is the size of the compressed message set.
+ CompressedBytes int
+
+ // CompressionType signifies which algorithm the batch was compressed
+ // with.
+ //
+ // 0 is no compression, 1 is gzip, 2 is snappy, 3 is lz4, and 4 is
+ // zstd.
+ CompressionType uint8
+}
+
+// HookProduceBatchWritten is called whenever a batch is known to be
+// successfully produced.
+type HookProduceBatchWritten interface {
+ // OnProduceBatchWritten is called per successful batch written to a
+ // topic partition
+ OnProduceBatchWritten(meta BrokerMetadata, topic string, partition int32, metrics ProduceBatchMetrics)
+}
+
+// FetchBatchMetrics tracks information about fetches of batches.
+type FetchBatchMetrics struct {
+ // NumRecords is the number of records that were fetched in this batch.
+ //
+ // Note that this number includes transaction markers, which are not
+ // actually returned to the user.
+ //
+ // If the batch has an encoding error, this will be 0.
+ NumRecords int
+
+ // UncompressedBytes is the number of bytes the records deserialized
+ // into after decompresion.
+ //
+ // For record batches (Kafka v0.11.0+), this is the size of the records
+ // in a batch, and does not include record batch overhead.
+ //
+ // For message sets, this size includes message set overhead.
+ //
+ // Note that this number may be higher than the corresponding number
+ // when producing, because as an "optimization", Kafka can return
+ // partial batches when fetching.
+ UncompressedBytes int
+
+ // CompressedBytes is the number of bytes actually read for this batch,
+ // before decompression. If the batch was not compressed, this will be
+ // equal to UncompressedBytes.
+ //
+ // For record batches, this is the size of the compressed records, and
+ // does not include record batch overhead.
+ //
+ // For message sets, this is the size of the compressed message set.
+ CompressedBytes int
+
+ // CompressionType signifies which algorithm the batch was compressed
+ // with.
+ //
+ // 0 is no compression, 1 is gzip, 2 is snappy, 3 is lz4, and 4 is
+ // zstd.
+ CompressionType uint8
+}
+
+// HookFetchBatchRead is called whenever a batch if read within the client.
+//
+// Note that this hook is called when processing, but a batch may be internally
+// discarded after processing in some uncommon specific circumstances.
+//
+// If the client reads v0 or v1 message sets, and they are not compressed, then
+// this hook will be called per record.
+type HookFetchBatchRead interface {
+ // OnFetchBatchRead is called per batch read from a topic partition.
+ OnFetchBatchRead(meta BrokerMetadata, topic string, partition int32, metrics FetchBatchMetrics)
+}
+
+///////////////////////////////
+// PRODUCE & CONSUME RECORDS //
+///////////////////////////////
+
+// HookProduceRecordBuffered is called when a record is buffered internally in
+// the client from a call to Produce.
+//
+// This hook can be used to write metrics that gather the number of records or
+// bytes buffered, or the hook can be used to write interceptors that modify a
+// record's key / value / headers before being produced. If you just want a
+// metric for the number of records buffered, use the client's
+// BufferedProduceRecords method, as it is faster.
+//
+// Note that this hook may slow down high-volume producing a bit.
+type HookProduceRecordBuffered interface {
+ // OnProduceRecordBuffered is passed a record that is buffered.
+ //
+ // This hook is called immediately after Produce is called, after the
+ // function potentially sets the default topic.
+ OnProduceRecordBuffered(*Record)
+}
+
+// HookProduceRecordPartitioned is called when a record is partitioned and
+// internally ready to be flushed.
+//
+// This hook can be used to create metrics of buffered records per partition,
+// and then you can correlate that to partition leaders and determine which
+// brokers are having problems.
+//
+// Note that this hook will slow down high-volume producing and it is
+// recommended to only use this temporarily or if you are ok with the
+// performance hit.
+type HookProduceRecordPartitioned interface {
+ // OnProduceRecordPartitioned is passed a record that has been
+ // partitioned and the current broker leader for the partition
+ // (note that the leader may change if the partition is moved).
+ //
+ // This hook is called once a record is queued to be flushed. The
+ // record's Partition and Timestamp fields are safe to read.
+ OnProduceRecordPartitioned(*Record, int32)
+}
+
+// HookProduceRecordUnbuffered is called just before a record's promise is
+// finished; this is effectively a mirror of a record promise.
+//
+// As an example, if using HookProduceRecordBuffered for a gauge of how many
+// record bytes are buffered, this hook can be used to decrement the gauge.
+//
+// Note that this hook will slow down high-volume producing a bit.
+type HookProduceRecordUnbuffered interface {
+ // OnProduceRecordUnbuffered is passed a record that is just about to
+ // have its produce promise called, as well as the error that the
+ // promise will be called with.
+ OnProduceRecordUnbuffered(*Record, error)
+}
+
+// HookFetchRecordBuffered is called when a record is internally buffered after
+// fetching, ready to be polled.
+//
+// This hook can be used to write gauge metrics regarding the number of records
+// or bytes buffered, or to write interceptors that modify a record before
+// being returned from polling. If you just want a metric for the number of
+// records buffered, use the client's BufferedFetchRecords method, as it is
+// faster.
+//
+// Note that this hook will slow down high-volume consuming a bit.
+type HookFetchRecordBuffered interface {
+ // OnFetchRecordBuffered is passed a record that is now buffered, ready
+ // to be polled.
+ OnFetchRecordBuffered(*Record)
+}
+
+// HookFetchRecordUnbuffered is called when a fetched record is unbuffered.
+//
+// A record can be internally discarded after being in some scenarios without
+// being polled, such as when the internal assignment changes.
+//
+// As an example, if using HookFetchRecordBuffered for a gauge of how many
+// record bytes are buffered ready to be polled, this hook can be used to
+// decrement the gauge.
+//
+// Note that this hook may slow down high-volume consuming a bit.
+type HookFetchRecordUnbuffered interface {
+ // OnFetchRecordUnbuffered is passwed a record that is being
+ // "unbuffered" within the client, and whether the record is being
+ // returned from polling.
+ OnFetchRecordUnbuffered(r *Record, polled bool)
+}
+
+/////////////
+// HELPERS //
+/////////////
+
+// implementsAnyHook will check the incoming Hook for any Hook implementation
+func implementsAnyHook(h Hook) bool {
+ switch h.(type) {
+ case HookNewClient,
+ HookClientClosed,
+ HookBrokerConnect,
+ HookBrokerDisconnect,
+ HookBrokerWrite,
+ HookBrokerRead,
+ HookBrokerE2E,
+ HookBrokerThrottle,
+ HookGroupManageError,
+ HookProduceBatchWritten,
+ HookFetchBatchRead,
+ HookProduceRecordBuffered,
+ HookProduceRecordPartitioned,
+ HookProduceRecordUnbuffered,
+ HookFetchRecordBuffered,
+ HookFetchRecordUnbuffered:
+ return true
+ }
+ return false
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/go121.go b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/go121.go
new file mode 100644
index 0000000000000..3cf972b6edbc9
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/go121.go
@@ -0,0 +1,28 @@
+//go:build go1.21
+// +build go1.21
+
+package sticky
+
+import "slices"
+
+func sortPartNums(ps memberPartitions) {
+ slices.Sort(ps)
+}
+
+func (b *balancer) sortMemberByLiteralPartNum(memberNum int) {
+ partNums := b.plan[memberNum]
+ slices.SortFunc(partNums, func(lpNum, rpNum int32) int {
+ ltNum, rtNum := b.partOwners[lpNum], b.partOwners[rpNum]
+ li, ri := b.topicInfos[ltNum], b.topicInfos[rtNum]
+ lt, rt := li.topic, ri.topic
+ lp, rp := lpNum-li.partNum, rpNum-ri.partNum
+ if lp < rp {
+ return -1
+ } else if lp > rp {
+ return 1
+ } else if lt < rt {
+ return -1
+ }
+ return 1
+ })
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/goold.go b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/goold.go
new file mode 100644
index 0000000000000..addd2bbc19c12
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/goold.go
@@ -0,0 +1,22 @@
+//go:build !go1.21
+// +build !go1.21
+
+package sticky
+
+import "sort"
+
+func sortPartNums(partNums memberPartitions) {
+ sort.Slice(partNums, func(i, j int) bool { return partNums[i] < partNums[j] })
+}
+
+func (b *balancer) sortMemberByLiteralPartNum(memberNum int) {
+ partNums := b.plan[memberNum]
+ sort.Slice(partNums, func(i, j int) bool {
+ lpNum, rpNum := partNums[i], partNums[j]
+ ltNum, rtNum := b.partOwners[lpNum], b.partOwners[rpNum]
+ li, ri := b.topicInfos[ltNum], b.topicInfos[rtNum]
+ lt, rt := li.topic, ri.topic
+ lp, rp := lpNum-li.partNum, rpNum-ri.partNum
+ return lp < rp || (lp == rp && lt < rt)
+ })
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/graph.go b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/graph.go
new file mode 100644
index 0000000000000..d6bbb587ed2a2
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/graph.go
@@ -0,0 +1,226 @@
+package sticky
+
+import "container/heap"
+
+// Graph maps members to partitions they want to steal.
+//
+// The representation was chosen so as to avoid updating all members on any
+// partition move; move updates are one map update.
+type graph struct {
+ b *balancer
+
+ // node => edges out
+ // "from a node (member), which topicNum could we steal?"
+ out [][]uint32
+
+ // edge => who owns this edge; built in balancer's assignUnassigned
+ cxns []partitionConsumer
+
+ // scores are all node scores from a search node. The distance field
+ // is reset on findSteal to infinityScore..
+ scores pathScores
+
+ // heapBuf and pathBuf are backing buffers that are reused every
+ // findSteal; note that pathBuf must be done being used before
+ // the next find steal, but it always is.
+ heapBuf pathHeap
+ pathBuf []stealSegment
+}
+
+func (b *balancer) newGraph(
+ partitionConsumers []partitionConsumer,
+ topicPotentials [][]uint16,
+) graph {
+ g := graph{
+ b: b,
+ out: make([][]uint32, len(b.members)),
+ cxns: partitionConsumers,
+ scores: make([]pathScore, len(b.members)),
+ heapBuf: make([]*pathScore, len(b.members)),
+ }
+ outBufs := make([]uint32, len(b.members)*len(topicPotentials))
+ for memberNum := range b.plan {
+ out := outBufs[:0:len(topicPotentials)]
+ outBufs = outBufs[len(topicPotentials):]
+ // In the worst case, if every node is linked to each other,
+ // each node will have nparts edges. We preallocate the worst
+ // case. It is common for the graph to be highly connected.
+ g.out[memberNum] = out
+ }
+ for topicNum, potentials := range topicPotentials {
+ for _, potential := range potentials {
+ g.out[potential] = append(g.out[potential], uint32(topicNum))
+ }
+ }
+ return g
+}
+
+func (g *graph) changeOwnership(edge int32, newDst uint16) {
+ g.cxns[edge].memberNum = newDst
+}
+
+// findSteal uses Dijkstra search to find a path from the best node it can reach.
+func (g *graph) findSteal(from uint16) ([]stealSegment, bool) {
+ // First, we must reset our scores from any prior run. This is O(M),
+ // but is fast and faster than making a map and extending it a lot.
+ for i := range g.scores {
+ g.scores[i].distance = infinityScore
+ g.scores[i].done = false
+ }
+
+ first, _ := g.getScore(from)
+
+ first.distance = 0
+ first.done = true
+
+ g.heapBuf = append(g.heapBuf[:0], first)
+ rem := &g.heapBuf
+ for rem.Len() > 0 {
+ current := heap.Pop(rem).(*pathScore)
+ if current.level > first.level+1 {
+ path := g.pathBuf[:0]
+ for current.parent != nil {
+ path = append(path, stealSegment{
+ current.node,
+ current.parent.node,
+ current.srcEdge,
+ })
+ current = current.parent
+ }
+ g.pathBuf = path
+ return path, true
+ }
+
+ current.done = true
+
+ for _, topicNum := range g.out[current.node] {
+ info := g.b.topicInfos[topicNum]
+ firstPartNum, lastPartNum := info.partNum, info.partNum+info.partitions
+ for edge := firstPartNum; edge < lastPartNum; edge++ {
+ neighborNode := g.cxns[edge].memberNum
+ neighbor, isNew := g.getScore(neighborNode)
+ if neighbor.done {
+ continue
+ }
+
+ distance := current.distance + 1
+
+ // The neighbor is the current node that owns this edge.
+ // If our node originally owned this partition, then it
+ // would be preferable to steal edge back.
+ srcIsOriginal := g.cxns[edge].originalNum == current.node
+
+ // If this is a new neighbor (our first time seeing the neighbor
+ // in our search), this is also the shortest path to reach them,
+ // where shortest defers preference to original sources THEN distance.
+ if isNew {
+ neighbor.parent = current
+ neighbor.srcIsOriginal = srcIsOriginal
+ neighbor.srcEdge = edge
+ neighbor.distance = distance
+ neighbor.heapIdx = len(*rem)
+ heap.Push(rem, neighbor)
+ } else if !neighbor.srcIsOriginal && srcIsOriginal {
+ // If the search path has seen this neighbor before, but
+ // we now are evaluating a partition that would increase
+ // stickiness if stolen, then fixup the neighbor's parent
+ // and srcEdge.
+ neighbor.parent = current
+ neighbor.srcIsOriginal = true
+ neighbor.srcEdge = edge
+ neighbor.distance = distance
+ heap.Fix(rem, neighbor.heapIdx)
+ }
+ }
+ }
+ }
+
+ return nil, false
+}
+
+type stealSegment struct {
+ src uint16 // member num
+ dst uint16 // member num
+ part int32 // partNum
+}
+
+// As we traverse a graph, we assign each node a path score, which tracks a few
+// numbers for what it would take to reach this node from our first node.
+type pathScore struct {
+ // Done is set to true when we pop a node off of the graph. Once we
+ // pop a node, it means we have found a best path to that node and
+ // we do not want to revisit it for processing if any other future
+ // nodes reach back to this one.
+ done bool
+
+ // srcIsOriginal is true if, were our parent to steal srcEdge, would
+ // that put srcEdge back on the original member. That is, if we are B
+ // and our parent is A, does our srcEdge originally belong do A?
+ //
+ // This field exists to work around a very slim edge case where a
+ // partition is stolen by B and then needs to be stolen back by A
+ // later.
+ srcIsOriginal bool
+
+ node uint16 // our member num
+ distance int32 // how many steals it would take to get here
+ srcEdge int32 // the partition used to reach us
+ level int32 // partitions owned on this segment
+ parent *pathScore
+ heapIdx int
+}
+
+type pathScores []pathScore
+
+const infinityScore = 1<<31 - 1
+
+func (g *graph) getScore(node uint16) (*pathScore, bool) {
+ r := &g.scores[node]
+ exists := r.distance != infinityScore
+ if !exists {
+ *r = pathScore{
+ node: node,
+ level: int32(len(g.b.plan[node])),
+ distance: infinityScore,
+ }
+ }
+ return r, !exists
+}
+
+type pathHeap []*pathScore
+
+func (p *pathHeap) Len() int { return len(*p) }
+func (p *pathHeap) Swap(i, j int) {
+ h := *p
+ l, r := h[i], h[j]
+ l.heapIdx, r.heapIdx = r.heapIdx, l.heapIdx
+ h[i], h[j] = r, l
+}
+
+// For our path, we always want to prioritize stealing a partition we
+// originally owned. This may result in a longer steal path, but it will
+// increase stickiness.
+//
+// Next, our real goal, which is to find a node we can steal from. Because of
+// this, we always want to sort by the highest level. The pathHeap stores
+// reachable paths, so by sorting by the highest level, we terminate quicker:
+// we always check the most likely candidates to quit our search.
+//
+// Finally, we simply prefer searching through shorter paths and, barring that,
+// just sort by node.
+func (p *pathHeap) Less(i, j int) bool {
+ l, r := (*p)[i], (*p)[j]
+ return l.srcIsOriginal && !r.srcIsOriginal || !l.srcIsOriginal && !r.srcIsOriginal &&
+ (l.level > r.level || l.level == r.level &&
+ (l.distance < r.distance || l.distance == r.distance &&
+ l.node < r.node))
+}
+
+func (p *pathHeap) Push(x any) { *p = append(*p, x.(*pathScore)) }
+func (p *pathHeap) Pop() any {
+ h := *p
+ l := len(h)
+ r := h[l-1]
+ *p = h[:l-1]
+ return r
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/rbtree.go b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/rbtree.go
new file mode 100644
index 0000000000000..8d563b7873f32
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/rbtree.go
@@ -0,0 +1,392 @@
+package sticky
+
+// This file contains a vendoring of github.com/twmb/go-rbtree, with interface
+// types replaced with *partitionLevel. We do this to simplify (and slightly)
+// speed up the rbtree, get rid of a bunch of code we do not need, and to drop
+// a dep.
+
+type color bool
+
+const red, black color = true, false
+
+// Tree is a red-black tree.
+type treePlan struct {
+ root *treePlanNode
+ size int
+}
+
+type treePlanNode struct {
+ left *treePlanNode
+ right *treePlanNode
+ parent *treePlanNode
+ color color
+ item *partitionLevel
+}
+
+// liftRightSideOf is rotateLeft.
+//
+// Graphically speaking, this takes the node on the right and lifts it above
+// ourselves. IMO trying to visualize a "rotation" is confusing.
+func (t *treePlan) liftRightSideOf(n *treePlanNode) {
+ r := n.right
+ t.relinkParenting(n, r)
+
+ // lift the right
+ n.right = r.left
+ n.parent = r
+
+ // fix the lifted right's left
+ if r.left != nil {
+ r.left.parent = n
+ }
+ r.left = n
+}
+
+// liftLeftSideOf is rotateRight, renamed to aid my visualization.
+func (t *treePlan) liftLeftSideOf(n *treePlanNode) {
+ l := n.left
+ t.relinkParenting(n, l)
+
+ n.left = l.right
+ n.parent = l
+
+ if l.right != nil {
+ l.right.parent = n
+ }
+ l.right = n
+}
+
+// relinkParenting is called to fix a former child c of node n's parent
+// relationship to the parent of n.
+//
+// After this, the n node can be considered to have no parent.
+func (t *treePlan) relinkParenting(n, c *treePlanNode) {
+ p := n.parent
+ if c != nil {
+ c.parent = p
+ }
+ if p == nil {
+ t.root = c
+ return
+ }
+ if n == p.left {
+ p.left = c
+ } else {
+ p.right = c
+ }
+}
+
+func (n *treePlanNode) sibling() *treePlanNode {
+ if n.parent == nil {
+ return nil
+ }
+ if n == n.parent.left {
+ return n.parent.right
+ }
+ return n.parent.left
+}
+
+func (n *treePlanNode) uncle() *treePlanNode {
+ p := n.parent
+ if p.parent == nil {
+ return nil
+ }
+ return p.sibling()
+}
+
+func (n *treePlanNode) grandparent() *treePlanNode {
+ return n.parent.parent
+}
+
+func (n *treePlanNode) isBlack() bool {
+ return n == nil || n.color == black
+}
+
+func (t *treePlan) insert(i *partitionLevel) *treePlanNode {
+ r := &treePlanNode{item: i}
+ t.reinsert(r)
+ return r
+}
+
+func (t *treePlan) reinsert(n *treePlanNode) {
+ *n = treePlanNode{
+ color: red,
+ item: n.item,
+ }
+ t.size++
+ if t.root == nil {
+ n.color = black
+ t.root = n
+ return
+ }
+
+ on := t.root
+ var set **treePlanNode
+ for {
+ if n.item.less(on.item) {
+ if on.left == nil {
+ set = &on.left
+ break
+ }
+ on = on.left
+ } else {
+ if on.right == nil {
+ set = &on.right
+ break
+ }
+ on = on.right
+ }
+ }
+
+ n.parent = on
+ *set = n
+
+repair:
+ // Case 1: we have jumped back to the root. Paint it black.
+ if n.parent == nil {
+ n.color = black
+ return
+ }
+
+ // Case 2: if our parent is black, us being red does not add a new black
+ // to the chain and cannot increase the maximum number of blacks from
+ // root, so we are done.
+ if n.parent.color == black {
+ return
+ }
+
+ // Case 3: if we have an uncle and it is red, then we flip our
+ // parent's, uncle's, and grandparent's color.
+ //
+ // This stops the red-red from parent to us, but may introduce
+ // a red-red from grandparent to its parent, so we set ourselves
+ // to the grandparent and go back to the repair beginning.
+ if uncle := n.uncle(); uncle != nil && uncle.color == red {
+ n.parent.color = black
+ uncle.color = black
+ n = n.grandparent()
+ n.color = red
+ goto repair
+ }
+
+ // Case 4 step 1: our parent is red but uncle is black. Step 2 relies
+ // on the node being on the "outside". If we are on the inside, our
+ // parent lifts ourselves above itself, thus making the parent the
+ // outside, and then we become that parent.
+ p := n.parent
+ g := p.parent
+ if n == p.right && p == g.left {
+ t.liftRightSideOf(p)
+ n = n.left
+ } else if n == p.left && p == g.right {
+ t.liftLeftSideOf(p)
+ n = n.right
+ }
+
+ // Care 4 step 2: we are on the outside, and we and our parent are red.
+ // If we are on the left, our grandparent lifts its left and then swaps
+ // its and our parent's colors.
+ //
+ // This fixes the red-red situation while preserving the number of
+ // blacks from root to leaf property.
+ p = n.parent
+ g = p.parent
+
+ if n == p.left {
+ t.liftLeftSideOf(g)
+ } else {
+ t.liftRightSideOf(g)
+ }
+ p.color = black
+ g.color = red
+}
+
+func (t *treePlan) delete(n *treePlanNode) {
+ t.size--
+
+ // We only want to delete nodes with at most one child. If this has
+ // two, we find the max node on the left, set this node's item to that
+ // node's item, and then delete that max node.
+ if n.left != nil && n.right != nil {
+ remove := n.left.max()
+ n.item, remove.item = remove.item, n.item
+ n = remove
+ }
+
+ // Determine which child to elevate into our position now that we know
+ // we have at most one child.
+ c := n.right
+ if n.right == nil {
+ c = n.left
+ }
+
+ t.doDelete(n, c)
+ t.relinkParenting(n, c)
+}
+
+// Since we do not represent leave nodes with objects, we relink the parent
+// after deleting. See the Wikipedia note. Most of our deletion operations
+// on n (the dubbed "shadow" node) rather than c.
+func (t *treePlan) doDelete(n, c *treePlanNode) {
+ // If the node was red, we deleted a red node; the number of black
+ // nodes along any path is the same and we can quit.
+ if n.color != black {
+ return
+ }
+
+ // If the node was black, then, if we have a child and it is red,
+ // we switch the child to black to preserve the path number.
+ if c != nil && c.color == red {
+ c.color = black
+ return
+ }
+
+ // We either do not have a child (nil is black), or we do and it
+ // is black. We must preserve the number of blacks.
+
+case1:
+ // Case 1: if the child is the new root, then the tree must have only
+ // had up to two elements and now has one or zero. We are done.
+ if n.parent == nil {
+ return
+ }
+
+ // Note that if we are here, we must have a sibling.
+ //
+ // The first time through, from the deleted node, the deleted node was
+ // black and the child was black. This being two blacks meant that the
+ // original node's parent required two blacks on the other side.
+ //
+ // The second time through, through case 3, the sibling was repainted
+ // red... so it must still exist.
+
+ // Case 2: if the child's sibling is red, we recolor the parent and
+ // sibling and lift the sibling, ensuring we have a black sibling.
+ s := n.sibling()
+ if s.color == red {
+ n.parent.color = red
+ s.color = black
+ if n == n.parent.left {
+ t.liftRightSideOf(n.parent)
+ } else {
+ t.liftLeftSideOf(n.parent)
+ }
+ s = n.sibling()
+ }
+
+ // Right here, we know the sibling is black. If both sibling children
+ // are black or nil leaves (black), we enter cases 3 and 4.
+ if s.left.isBlack() && s.right.isBlack() {
+ // Case 3: if the parent, sibling, sibling's children are
+ // black, we can paint the sibling red to fix the imbalance.
+ // However, the same black imbalance can exist on the other
+ // side of the parent, so we go back to case 1 on the parent.
+ s.color = red
+ if n.parent.color == black {
+ n = n.parent
+ goto case1
+ }
+
+ // Case 4: if the sibling and sibling's children are black, but
+ // the parent is red, We can swap parent and sibling colors to
+ // fix our imbalance. We have no worry of further imbalances up
+ // the tree since we deleted a black node, replaced it with a
+ // red node, and then painted that red node black.
+ n.parent.color = black
+ return
+ }
+
+ // Now we know the sibling is black and one of its children is red.
+
+ // Case 5: in preparation for 6, if we are on the left, we want our
+ // sibling, if it has a right child, for that child's color to be red.
+ // We swap the sibling and sibling's left's color (since we know the
+ // sibling has a red child and that the right is black) and we lift the
+ // left child.
+ //
+ // This keeps the same number of black nodes and under the sibling.
+ if n == n.parent.left && s.right.isBlack() {
+ s.color = red
+ s.left.color = black
+ t.liftLeftSideOf(s)
+ } else if n == n.parent.right && s.left.isBlack() {
+ s.color = red
+ s.right.color = black
+ t.liftRightSideOf(s)
+ }
+ s = n.sibling() // can change from the above case
+
+ // At this point, we know we have a black sibling and, if we are on
+ // the left, it has a red child on its right.
+
+ // Case 6: we lift the sibling above the parent, swap the sibling's and
+ // parent's color, and change the sibling's right's color from red to
+ // black.
+ //
+ // This brings in a black above our node to replace the one we deleted,
+ // while preserves the number of blacks on the other side of the path.
+ s.color = n.parent.color
+ n.parent.color = black
+ if n == n.parent.left {
+ s.right.color = black
+ t.liftRightSideOf(n.parent)
+ } else {
+ s.left.color = black
+ t.liftLeftSideOf(n.parent)
+ }
+}
+
+func (t *treePlan) findWith(cmp func(*partitionLevel) int) *treePlanNode {
+ on := t.root
+ for on != nil {
+ way := cmp(on.item)
+ switch {
+ case way < 0:
+ on = on.left
+ case way == 0:
+ return on
+ case way > 0:
+ on = on.right
+ }
+ }
+ return nil
+}
+
+func (t *treePlan) findWithOrInsertWith(
+ find func(*partitionLevel) int,
+ insert func() *partitionLevel,
+) *treePlanNode {
+ found := t.findWith(find)
+ if found == nil {
+ return t.insert(insert())
+ }
+ return found
+}
+
+func (t *treePlan) min() *treePlanNode {
+ if t.root == nil {
+ return nil
+ }
+ return t.root.min()
+}
+
+func (n *treePlanNode) min() *treePlanNode {
+ for n.left != nil {
+ n = n.left
+ }
+ return n
+}
+
+func (t *treePlan) max() *treePlanNode {
+ if t.root == nil {
+ return nil
+ }
+ return t.root.max()
+}
+
+func (n *treePlanNode) max() *treePlanNode {
+ for n.right != nil {
+ n = n.right
+ }
+ return n
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/sticky.go b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/sticky.go
new file mode 100644
index 0000000000000..a502a2e5613df
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/internal/sticky/sticky.go
@@ -0,0 +1,733 @@
+// Package sticky provides sticky partitioning strategy for Kafka, with a
+// complete overhaul to be faster, more understandable, and optimal.
+//
+// For some points on how Java's strategy is flawed, see
+// https://github.com/IBM/sarama/pull/1416/files/b29086bdaae0da7ce71eae3f854d50685fd6b631#r315005878
+package sticky
+
+import (
+ "math"
+
+ "github.com/twmb/franz-go/pkg/kbin"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// Sticky partitioning has two versions, the latter from KIP-341 preventing a
+// bug. The second version introduced generations with the default generation
+// from the first generation's consumers defaulting to -1.
+
+// We can support up to 65533 members; two slots are reserved.
+// We can support up to 2,147,483,647 partitions.
+// I expect a server to fall over before reaching either of these numbers.
+
+// GroupMember is a Kafka group member.
+type GroupMember struct {
+ ID string
+ Topics []string
+ UserData []byte
+ Owned []kmsg.ConsumerMemberMetadataOwnedPartition
+ Generation int32
+ Cooperative bool
+}
+
+// Plan is the plan this package came up with (member => topic => partitions).
+type Plan map[string]map[string][]int32
+
+type balancer struct {
+ // members are the members in play for this balance.
+ // This is built in newBalancer mapping member IDs to the GroupMember.
+ members []GroupMember
+
+ memberNums map[string]uint16 // member id => index into members
+
+ topicNums map[string]uint32 // topic name => index into topicInfos
+ topicInfos []topicInfo
+ partOwners []uint32 // partition => owning topicNum
+
+ // Stales tracks partNums that are doubly subscribed in this join
+ // where one of the subscribers is on an old generation.
+ //
+ // The newer generation goes into plan directly, the older gets
+ // stuffed here.
+ stales map[int32]uint16 // partNum => stale memberNum
+
+ plan membersPartitions // what we are building and balancing
+
+ // planByNumPartitions orders plan members into partition count levels.
+ //
+ // The nodes in the tree reference values in plan, meaning updates in
+ // this field are visible in plan.
+ planByNumPartitions treePlan
+
+ // if the subscriptions are complex (all members do _not_ consume the
+ // same partitions), then we build a graph and use that for assigning.
+ isComplex bool
+
+ // stealGraph is a graphical representation of members and partitions
+ // they want to steal.
+ stealGraph graph
+}
+
+type topicInfo struct {
+ partNum int32 // base part num
+ partitions int32 // number of partitions in the topic
+ topic string
+}
+
+func newBalancer(members []GroupMember, topics map[string]int32) *balancer {
+ var (
+ nparts int
+ topicNums = make(map[string]uint32, len(topics))
+ topicInfos = make([]topicInfo, len(topics))
+ )
+ for topic, partitions := range topics {
+ topicNum := uint32(len(topicNums))
+ topicNums[topic] = topicNum
+ topicInfos[topicNum] = topicInfo{
+ partNum: int32(nparts),
+ partitions: partitions,
+ topic: topic,
+ }
+ nparts += int(partitions)
+ }
+ partOwners := make([]uint32, 0, nparts)
+ for topicNum, info := range topicInfos {
+ for i := int32(0); i < info.partitions; i++ {
+ partOwners = append(partOwners, uint32(topicNum))
+ }
+ }
+ memberNums := make(map[string]uint16, len(members))
+ for num, member := range members {
+ memberNums[member.ID] = uint16(num)
+ }
+
+ b := &balancer{
+ members: members,
+ memberNums: memberNums,
+ topicNums: topicNums,
+ topicInfos: topicInfos,
+
+ partOwners: partOwners,
+ stales: make(map[int32]uint16),
+ plan: make(membersPartitions, len(members)),
+ }
+
+ evenDivvy := nparts/len(members) + 1
+ planBuf := make(memberPartitions, evenDivvy*len(members))
+ for num := range members {
+ b.plan[num] = planBuf[:0:evenDivvy]
+ planBuf = planBuf[evenDivvy:]
+ }
+ return b
+}
+
+func (b *balancer) into() Plan {
+ plan := make(Plan, len(b.plan))
+ ntopics := 5 * len(b.topicNums) / 4
+
+ for memberNum, partNums := range b.plan {
+ member := b.members[memberNum].ID
+ if len(partNums) == 0 {
+ plan[member] = make(map[string][]int32, 0)
+ continue
+ }
+ topics := make(map[string][]int32, ntopics)
+ plan[member] = topics
+
+ // partOwners is created by topic, and partNums refers to
+ // indices in partOwners. If we sort by partNum, we have sorted
+ // topics and partitions.
+ sortPartNums(partNums)
+
+ // We can reuse partNums for our topic partitions.
+ topicParts := partNums[:0]
+
+ lastTopicNum := b.partOwners[partNums[0]]
+ lastTopicInfo := b.topicInfos[lastTopicNum]
+ for _, partNum := range partNums {
+ topicNum := b.partOwners[partNum]
+
+ if topicNum != lastTopicNum {
+ topics[lastTopicInfo.topic] = topicParts[:len(topicParts):len(topicParts)]
+ topicParts = topicParts[len(topicParts):]
+
+ lastTopicNum = topicNum
+ lastTopicInfo = b.topicInfos[topicNum]
+ }
+
+ partition := partNum - lastTopicInfo.partNum
+ topicParts = append(topicParts, partition)
+ }
+ topics[lastTopicInfo.topic] = topicParts[:len(topicParts):len(topicParts)]
+ }
+ return plan
+}
+
+func (b *balancer) partNumByTopic(topic string, partition int32) (int32, bool) {
+ topicNum, exists := b.topicNums[topic]
+ if !exists {
+ return 0, false
+ }
+ topicInfo := b.topicInfos[topicNum]
+ if partition >= topicInfo.partitions {
+ return 0, false
+ }
+ return topicInfo.partNum + partition, true
+}
+
+// memberPartitions contains partitions for a member.
+type memberPartitions []int32
+
+func (m *memberPartitions) remove(needle int32) {
+ s := *m
+ var d int
+ for i, check := range s {
+ if check == needle {
+ d = i
+ break
+ }
+ }
+ s[d] = s[len(s)-1]
+ *m = s[:len(s)-1]
+}
+
+func (m *memberPartitions) takeEnd() int32 {
+ s := *m
+ r := s[len(s)-1]
+ *m = s[:len(s)-1]
+ return r
+}
+
+func (m *memberPartitions) add(partNum int32) {
+ *m = append(*m, partNum)
+}
+
+// membersPartitions maps members to their partitions.
+type membersPartitions []memberPartitions
+
+type partitionLevel struct {
+ level int
+ members []uint16
+}
+
+// partitionLevel's members field used to be a map, but removing it gains a
+// slight perf boost at the cost of removing members being O(M).
+// Even with the worse complexity, scanning a short list can be faster
+// than managing a map, and we expect groups to not be _too_ large.
+func (l *partitionLevel) removeMember(memberNum uint16) {
+ for i, v := range l.members {
+ if v == memberNum {
+ l.members[i] = l.members[len(l.members)-1]
+ l.members = l.members[:len(l.members)-1]
+ return
+ }
+ }
+}
+
+func (b *balancer) findLevel(level int) *partitionLevel {
+ return b.planByNumPartitions.findWithOrInsertWith(
+ func(n *partitionLevel) int { return level - n.level },
+ func() *partitionLevel { return newPartitionLevel(level) },
+ ).item
+}
+
+func (b *balancer) fixMemberLevel(
+ src *treePlanNode,
+ memberNum uint16,
+ partNums memberPartitions,
+) {
+ b.removeLevelingMember(src, memberNum)
+ newLevel := len(partNums)
+ partLevel := b.findLevel(newLevel)
+ partLevel.members = append(partLevel.members, memberNum)
+}
+
+func (b *balancer) removeLevelingMember(
+ src *treePlanNode,
+ memberNum uint16,
+) {
+ src.item.removeMember(memberNum)
+ if len(src.item.members) == 0 {
+ b.planByNumPartitions.delete(src)
+ }
+}
+
+func (l *partitionLevel) less(r *partitionLevel) bool {
+ return l.level < r.level
+}
+
+func newPartitionLevel(level int) *partitionLevel {
+ return &partitionLevel{level: level}
+}
+
+func (b *balancer) initPlanByNumPartitions() {
+ for memberNum, partNums := range b.plan {
+ partLevel := b.findLevel(len(partNums))
+ partLevel.members = append(partLevel.members, uint16(memberNum))
+ }
+}
+
+// Balance performs sticky partitioning for the given group members and topics,
+// returning the determined plan.
+func Balance(members []GroupMember, topics map[string]int32) Plan {
+ if len(members) == 0 {
+ return make(Plan)
+ }
+ b := newBalancer(members, topics)
+ if cap(b.partOwners) == 0 {
+ return b.into()
+ }
+ b.parseMemberMetadata()
+ b.assignUnassignedAndInitGraph()
+ b.initPlanByNumPartitions()
+ b.balance()
+ return b.into()
+}
+
+// parseMemberMetadata parses all member userdata to initialize the prior plan.
+func (b *balancer) parseMemberMetadata() {
+ // all partitions => members that are consuming those partitions
+ // Each partition should only have one consumer, but a flaky member
+ // could rejoin with an old generation (stale user data) and say it
+ // is consuming something a different member is. See KIP-341.
+ partitionConsumersByGeneration := make([]memberGeneration, cap(b.partOwners))
+
+ const highBit uint32 = 1 << 31
+ var memberPlan []topicPartition
+ var gen uint32
+
+ for _, member := range b.members {
+ // KAFKA-13715 / KIP-792: cooperative-sticky now includes a
+ // generation directly with the currently-owned partitions, and
+ // we can avoid deserializing UserData. This guards against
+ // some zombie issues (see KIP).
+ //
+ // The eager (sticky) balancer revokes all partitions before
+ // rejoining, so we cannot use Owned.
+ if member.Cooperative && member.Generation >= 0 {
+ memberPlan = memberPlan[:0]
+ for _, t := range member.Owned {
+ for _, p := range t.Partitions {
+ memberPlan = append(memberPlan, topicPartition{t.Topic, p})
+ }
+ }
+ gen = uint32(member.Generation)
+ } else {
+ memberPlan, gen = deserializeUserData(member.UserData, memberPlan[:0])
+ }
+ gen |= highBit
+ memberNum := b.memberNums[member.ID]
+ for _, topicPartition := range memberPlan {
+ partNum, exists := b.partNumByTopic(topicPartition.topic, topicPartition.partition)
+ if !exists {
+ continue
+ }
+
+ // We keep the highest generation, and at most two generations.
+ // If something is doubly consumed, we skip it.
+ pcs := &partitionConsumersByGeneration[partNum]
+ switch {
+ case gen > pcs.genNew: // one consumer already, but new member has higher generation
+ pcs.memberOld, pcs.genOld = pcs.memberNew, pcs.genNew
+ pcs.memberNew, pcs.genNew = memberNum, gen
+
+ case gen > pcs.genOld: // one consumer already, we could be second, or if there is a second, we have a high generation
+ pcs.memberOld, pcs.genOld = memberNum, gen
+ }
+ }
+ }
+
+ for partNum, pcs := range partitionConsumersByGeneration {
+ if pcs.genNew&highBit != 0 {
+ b.plan[pcs.memberNew].add(int32(partNum))
+ if pcs.genOld&highBit != 0 {
+ b.stales[int32(partNum)] = pcs.memberOld
+ }
+ }
+ }
+}
+
+type memberGeneration struct {
+ memberNew uint16
+ memberOld uint16
+ genNew uint32
+ genOld uint32
+}
+
+type topicPartition struct {
+ topic string
+ partition int32
+}
+
+// deserializeUserData returns the topic partitions a member was consuming and
+// the join generation it was consuming from.
+//
+// If anything fails or we do not understand the userdata parsing generation,
+// we return empty defaults. The member will just be assumed to have no
+// history.
+func deserializeUserData(userdata []byte, base []topicPartition) (memberPlan []topicPartition, generation uint32) {
+ memberPlan = base[:0]
+ b := kbin.Reader{Src: userdata}
+ for numAssignments := b.ArrayLen(); numAssignments > 0; numAssignments-- {
+ topic := b.UnsafeString()
+ for numPartitions := b.ArrayLen(); numPartitions > 0; numPartitions-- {
+ memberPlan = append(memberPlan, topicPartition{
+ topic,
+ b.Int32(),
+ })
+ }
+ }
+ if len(b.Src) > 0 {
+ // A generation of -1 is just as good of a generation as 0, so we use 0
+ // and then use the high bit to signify this generation has been set.
+ if generationI32 := b.Int32(); generationI32 > 0 {
+ generation = uint32(generationI32)
+ }
+ }
+ if b.Complete() != nil {
+ memberPlan = memberPlan[:0]
+ }
+ return
+}
+
+// assignUnassignedAndInitGraph is a long function that assigns unassigned
+// partitions to the least loaded members and initializes our steal graph.
+//
+// Doing so requires a bunch of metadata, and in the process we want to remove
+// partitions from the plan that no longer exist in the client.
+func (b *balancer) assignUnassignedAndInitGraph() {
+ // First, over all members in this assignment, map each partition to
+ // the members that can consume it. We will use this for assigning.
+ //
+ // To do this mapping efficiently, we first map each topic to the
+ // memberNums that can consume those topics, and then use the results
+ // below in the partition mapping. Doing this two step process allows
+ // for a 10x speed boost rather than ranging over all partitions many
+ // times.
+ topicPotentialsBuf := make([]uint16, len(b.topicNums)*len(b.members))
+ topicPotentials := make([][]uint16, len(b.topicNums))
+ for memberNum, member := range b.members {
+ for _, topic := range member.Topics {
+ topicNum, exists := b.topicNums[topic]
+ if !exists {
+ continue
+ }
+ memberNums := topicPotentials[topicNum]
+ if cap(memberNums) == 0 {
+ memberNums = topicPotentialsBuf[:0:len(b.members)]
+ topicPotentialsBuf = topicPotentialsBuf[len(b.members):]
+ }
+ topicPotentials[topicNum] = append(memberNums, uint16(memberNum))
+ }
+ }
+
+ for _, topicMembers := range topicPotentials {
+ // If the number of members interested in this topic is not the
+ // same as the number of members in this group, then **other**
+ // members are interested in other topics and not this one, and
+ // we must go to complex balancing.
+ //
+ // We could accidentally fall into isComplex if any member is
+ // not interested in anything, but realistically we do not
+ // expect members to join with no interests.
+ if len(topicMembers) != len(b.members) {
+ b.isComplex = true
+ }
+ }
+
+ // Next, over the prior plan, un-map deleted topics or topics that
+ // members no longer want. This is where we determine what is now
+ // unassigned.
+ partitionConsumers := make([]partitionConsumer, cap(b.partOwners)) // partNum => consuming member
+ for i := range partitionConsumers {
+ partitionConsumers[i] = partitionConsumer{unassignedPart, unassignedPart}
+ }
+ for memberNum := range b.plan {
+ partNums := &b.plan[memberNum]
+ for _, partNum := range *partNums {
+ topicNum := b.partOwners[partNum]
+ if len(topicPotentials[topicNum]) == 0 { // all prior subscriptions stopped wanting this partition
+ partNums.remove(partNum)
+ continue
+ }
+ memberTopics := b.members[memberNum].Topics
+ var memberStillWantsTopic bool
+ for _, memberTopic := range memberTopics {
+ if memberTopic == b.topicInfos[topicNum].topic {
+ memberStillWantsTopic = true
+ break
+ }
+ }
+ if !memberStillWantsTopic {
+ partNums.remove(partNum)
+ continue
+ }
+ partitionConsumers[partNum] = partitionConsumer{uint16(memberNum), uint16(memberNum)}
+ }
+ }
+
+ b.tryRestickyStales(topicPotentials, partitionConsumers)
+
+ // For each member, we now sort their current partitions by partition,
+ // then topic. Sorting the lowest numbers first means that once we
+ // steal from the end (when adding a member), we steal equally across
+ // all topics. This benefits the standard case the most, where all
+ // members consume equally.
+ for memberNum := range b.plan {
+ b.sortMemberByLiteralPartNum(memberNum)
+ }
+
+ if !b.isComplex && len(topicPotentials) > 0 {
+ potentials := topicPotentials[0]
+ (&membersByPartitions{potentials, b.plan}).init()
+ for partNum, owner := range partitionConsumers {
+ if owner.memberNum != unassignedPart {
+ continue
+ }
+ assigned := potentials[0]
+ b.plan[assigned].add(int32(partNum))
+ (&membersByPartitions{potentials, b.plan}).fix0()
+ partitionConsumers[partNum].memberNum = assigned
+ }
+ } else {
+ for partNum, owner := range partitionConsumers {
+ if owner.memberNum != unassignedPart {
+ continue
+ }
+ potentials := topicPotentials[b.partOwners[partNum]]
+ if len(potentials) == 0 {
+ continue
+ }
+ leastConsumingPotential := potentials[0]
+ leastConsuming := len(b.plan[leastConsumingPotential])
+ for _, potential := range potentials[1:] {
+ potentialConsuming := len(b.plan[potential])
+ if potentialConsuming < leastConsuming {
+ leastConsumingPotential = potential
+ leastConsuming = potentialConsuming
+ }
+ }
+ b.plan[leastConsumingPotential].add(int32(partNum))
+ partitionConsumers[partNum].memberNum = leastConsumingPotential
+ }
+ }
+
+ // Lastly, with everything assigned, we build our steal graph for
+ // balancing if needed.
+ if b.isComplex {
+ b.stealGraph = b.newGraph(
+ partitionConsumers,
+ topicPotentials,
+ )
+ }
+}
+
+// unassignedPart is a fake member number that we use to track if a partition
+// is deleted or unassigned.
+const unassignedPart = math.MaxUint16 - 1
+
+// tryRestickyStales is a pre-assigning step where, for all stale members,
+// we give partitions back to them if the partition is currently on an
+// over loaded member or unassigned.
+//
+// This effectively re-stickies members before we balance further.
+func (b *balancer) tryRestickyStales(
+ topicPotentials [][]uint16,
+ partitionConsumers []partitionConsumer,
+) {
+ for staleNum, lastOwnerNum := range b.stales {
+ potentials := topicPotentials[b.partOwners[staleNum]] // there must be a potential consumer if we are here
+ var canTake bool
+ for _, potentialNum := range potentials {
+ if potentialNum == lastOwnerNum {
+ canTake = true
+ }
+ }
+ if !canTake {
+ return
+ }
+
+ // The part cannot be unassigned here; a stale member
+ // would just have it. The part also cannot be deleted;
+ // if it is, there are no potential consumers and the
+ // logic above continues before getting here. The part
+ // must be on a different owner (cannot be lastOwner),
+ // otherwise it would not be a lastOwner in the stales
+ // map; it would just be the current owner.
+ currentOwner := partitionConsumers[staleNum].memberNum
+ lastOwnerPartitions := &b.plan[lastOwnerNum]
+ currentOwnerPartitions := &b.plan[currentOwner]
+ if len(*lastOwnerPartitions)+1 < len(*currentOwnerPartitions) {
+ currentOwnerPartitions.remove(staleNum)
+ lastOwnerPartitions.add(staleNum)
+ }
+ }
+}
+
+type partitionConsumer struct {
+ memberNum uint16
+ originalNum uint16
+}
+
+// While assigning, we keep members per topic heap sorted by the number of
+// partitions they are currently consuming. This allows us to have quick
+// assignment vs. always scanning to see the min loaded member.
+//
+// Our process is to init the heap and then always fix the 0th index after
+// making it larger, so we only ever need to sift down.
+type membersByPartitions struct {
+ members []uint16
+ plan membersPartitions
+}
+
+func (m *membersByPartitions) init() {
+ n := len(m.members)
+ for i := n/2 - 1; i >= 0; i-- {
+ m.down(i, n)
+ }
+}
+
+func (m *membersByPartitions) fix0() {
+ m.down(0, len(m.members))
+}
+
+func (m *membersByPartitions) down(i0, n int) {
+ node := i0
+ for {
+ left := 2*node + 1
+ if left >= n || left < 0 { // left < 0 after int overflow
+ break
+ }
+ swap := left // left child
+ swapLen := len(m.plan[m.members[left]])
+ if right := left + 1; right < n {
+ if rightLen := len(m.plan[m.members[right]]); rightLen < swapLen {
+ swapLen = rightLen
+ swap = right
+ }
+ }
+ nodeLen := len(m.plan[m.members[node]])
+ if nodeLen <= swapLen {
+ break
+ }
+ m.members[node], m.members[swap] = m.members[swap], m.members[node]
+ node = swap
+ }
+}
+
+// balance loops trying to move partitions until the plan is as balanced
+// as it can be.
+func (b *balancer) balance() {
+ if b.isComplex {
+ b.balanceComplex()
+ return
+ }
+
+ // If all partitions are consumed equally, we have a very easy
+ // algorithm to balance: while the min and max levels are separated
+ // by over two, take from the top and give to the bottom.
+ min := b.planByNumPartitions.min().item
+ max := b.planByNumPartitions.max().item
+ for {
+ if max.level <= min.level+1 {
+ return
+ }
+
+ minMems := min.members
+ maxMems := max.members
+ for len(minMems) > 0 && len(maxMems) > 0 {
+ dst := minMems[0]
+ src := maxMems[0]
+
+ minMems = minMems[1:]
+ maxMems = maxMems[1:]
+
+ srcPartitions := &b.plan[src]
+ dstPartitions := &b.plan[dst]
+
+ dstPartitions.add(srcPartitions.takeEnd())
+ }
+
+ nextUp := b.findLevel(min.level + 1)
+ nextDown := b.findLevel(max.level - 1)
+
+ endOfUps := len(min.members) - len(minMems)
+ endOfDowns := len(max.members) - len(maxMems)
+
+ nextUp.members = append(nextUp.members, min.members[:endOfUps]...)
+ nextDown.members = append(nextDown.members, max.members[:endOfDowns]...)
+
+ min.members = min.members[endOfUps:]
+ max.members = max.members[endOfDowns:]
+
+ if len(min.members) == 0 {
+ b.planByNumPartitions.delete(b.planByNumPartitions.min())
+ min = b.planByNumPartitions.min().item
+ }
+ if len(max.members) == 0 {
+ b.planByNumPartitions.delete(b.planByNumPartitions.max())
+ max = b.planByNumPartitions.max().item
+ }
+ }
+}
+
+func (b *balancer) balanceComplex() {
+ for min := b.planByNumPartitions.min(); b.planByNumPartitions.size > 1; min = b.planByNumPartitions.min() {
+ level := min.item
+ // If this max level is within one of this level, then nothing
+ // can steal down so we return early.
+ max := b.planByNumPartitions.max().item
+ if max.level <= level.level+1 {
+ return
+ }
+ // We continually loop over this level until every member is
+ // static (deleted) or bumped up a level.
+ for len(level.members) > 0 {
+ memberNum := level.members[0]
+ if stealPath, found := b.stealGraph.findSteal(memberNum); found {
+ for _, segment := range stealPath {
+ b.reassignPartition(segment.src, segment.dst, segment.part)
+ }
+ if len(max.members) == 0 {
+ break
+ }
+ continue
+ }
+
+ // If we could not find a steal path, this
+ // member is not static (will never grow).
+ level.removeMember(memberNum)
+ if len(level.members) == 0 {
+ b.planByNumPartitions.delete(b.planByNumPartitions.min())
+ }
+ }
+ }
+}
+
+func (b *balancer) reassignPartition(src, dst uint16, partNum int32) {
+ srcPartitions := &b.plan[src]
+ dstPartitions := &b.plan[dst]
+
+ oldSrcLevel := len(*srcPartitions)
+ oldDstLevel := len(*dstPartitions)
+
+ srcPartitions.remove(partNum)
+ dstPartitions.add(partNum)
+
+ b.fixMemberLevel(
+ b.planByNumPartitions.findWith(func(n *partitionLevel) int {
+ return oldSrcLevel - n.level
+ }),
+ src,
+ *srcPartitions,
+ )
+ b.fixMemberLevel(
+ b.planByNumPartitions.findWith(func(n *partitionLevel) int {
+ return oldDstLevel - n.level
+ }),
+ dst,
+ *dstPartitions,
+ )
+
+ b.stealGraph.changeOwnership(partNum, dst)
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/logger.go b/vendor/github.com/twmb/franz-go/pkg/kgo/logger.go
new file mode 100644
index 0000000000000..bfc5dc0dd6b0a
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/logger.go
@@ -0,0 +1,124 @@
+package kgo
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+ "strings"
+)
+
+// LogLevel designates which level the logger should log at.
+type LogLevel int8
+
+const (
+ // LogLevelNone disables logging.
+ LogLevelNone LogLevel = iota
+ // LogLevelError logs all errors. Generally, these should not happen.
+ LogLevelError
+ // LogLevelWarn logs all warnings, such as request failures.
+ LogLevelWarn
+ // LogLevelInfo logs informational messages, such as requests. This is
+ // usually the default log level.
+ LogLevelInfo
+ // LogLevelDebug logs verbose information, and is usually not used in
+ // production.
+ LogLevelDebug
+)
+
+func (l LogLevel) String() string {
+ switch l {
+ case LogLevelError:
+ return "ERROR"
+ case LogLevelWarn:
+ return "WARN"
+ case LogLevelInfo:
+ return "INFO"
+ case LogLevelDebug:
+ return "DEBUG"
+ default:
+ return "NONE"
+ }
+}
+
+// Logger is used to log informational messages.
+type Logger interface {
+ // Level returns the log level to log at.
+ //
+ // Implementations can change their log level on the fly, but this
+ // function must be safe to call concurrently.
+ Level() LogLevel
+
+ // Log logs a message with key, value pair arguments for the given log
+ // level. Keys are always strings, while values can be any type.
+ //
+ // This must be safe to call concurrently.
+ Log(level LogLevel, msg string, keyvals ...any)
+}
+
+// BasicLogger returns a logger that will print to dst in the following format:
+//
+// prefix [LEVEL] message; key: val, key: val
+//
+// prefixFn is optional; if non-nil, it is called for a per-message prefix.
+//
+// Writes to dst are not checked for errors.
+func BasicLogger(dst io.Writer, level LogLevel, prefixFn func() string) Logger {
+ return &basicLogger{dst, level, prefixFn}
+}
+
+type basicLogger struct {
+ dst io.Writer
+ level LogLevel
+ pfxFn func() string
+}
+
+func (b *basicLogger) Level() LogLevel { return b.level }
+func (b *basicLogger) Log(level LogLevel, msg string, keyvals ...any) {
+ buf := byteBuffers.Get().(*bytes.Buffer)
+ defer byteBuffers.Put(buf)
+
+ buf.Reset()
+ if b.pfxFn != nil {
+ buf.WriteString(b.pfxFn())
+ }
+ buf.WriteByte('[')
+ buf.WriteString(level.String())
+ buf.WriteString("] ")
+ buf.WriteString(msg)
+
+ if len(keyvals) > 0 {
+ buf.WriteString("; ")
+ format := strings.Repeat("%v: %v, ", len(keyvals)/2)
+ format = format[:len(format)-2] // trim trailing comma and space
+ fmt.Fprintf(buf, format, keyvals...)
+ }
+
+ buf.WriteByte('\n')
+ b.dst.Write(buf.Bytes())
+}
+
+// nopLogger, the default logger, drops everything.
+type nopLogger struct{}
+
+func (*nopLogger) Level() LogLevel { return LogLevelNone }
+func (*nopLogger) Log(LogLevel, string, ...any) {
+}
+
+// wrappedLogger wraps the config logger for convenience at logging callsites.
+type wrappedLogger struct {
+ inner Logger
+}
+
+func (w *wrappedLogger) Level() LogLevel {
+ if w.inner == nil {
+ return LogLevelNone
+ }
+ return w.inner.Level()
+}
+
+func (w *wrappedLogger) Log(level LogLevel, msg string, keyvals ...any) {
+ if w.Level() < level {
+ return
+ }
+ w.inner.Log(level, msg, keyvals...)
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/metadata.go b/vendor/github.com/twmb/franz-go/pkg/kgo/metadata.go
new file mode 100644
index 0000000000000..33cac6414f949
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/metadata.go
@@ -0,0 +1,966 @@
+package kgo
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "sort"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+)
+
+type metawait struct {
+ mu sync.Mutex
+ c *sync.Cond
+ lastUpdate time.Time
+}
+
+func (m *metawait) init() { m.c = sync.NewCond(&m.mu) }
+func (m *metawait) signal() {
+ m.mu.Lock()
+ m.lastUpdate = time.Now()
+ m.mu.Unlock()
+ m.c.Broadcast()
+}
+
+// ForceMetadataRefresh triggers the client to update the metadata that is
+// currently used for producing & consuming.
+//
+// Internally, the client already properly triggers metadata updates whenever a
+// partition is discovered to be out of date (leader moved, epoch is old, etc).
+// However, when partitions are added to a topic through a CreatePartitions
+// request, it may take up to MetadataMaxAge for the new partitions to be
+// discovered. In this case, you may want to forcefully refresh metadata
+// manually to discover these new partitions sooner.
+func (cl *Client) ForceMetadataRefresh() {
+ cl.triggerUpdateMetadataNow("from user ForceMetadataRefresh")
+}
+
+// PartitionLeader returns the given topic partition's leader, leader epoch and
+// load error. This returns -1, -1, nil if the partition has not been loaded.
+func (cl *Client) PartitionLeader(topic string, partition int32) (leader, leaderEpoch int32, err error) {
+ if partition < 0 {
+ return -1, -1, errors.New("invalid negative partition")
+ }
+
+ var t *topicPartitions
+
+ m := cl.producer.topics.load()
+ if len(m) > 0 {
+ t = m[topic]
+ }
+ if t == nil {
+ if cl.consumer.g != nil {
+ if m = cl.consumer.g.tps.load(); len(m) > 0 {
+ t = m[topic]
+ }
+ } else if cl.consumer.d != nil {
+ if m = cl.consumer.d.tps.load(); len(m) > 0 {
+ t = m[topic]
+ }
+ }
+ if t == nil {
+ return -1, -1, nil
+ }
+ }
+
+ tv := t.load()
+ if len(tv.partitions) <= int(partition) {
+ return -1, -1, tv.loadErr
+ }
+ p := tv.partitions[partition]
+ return p.leader, p.leaderEpoch, p.loadErr
+}
+
+// waitmeta returns immediately if metadata was updated within the last second,
+// otherwise this waits for up to wait for a metadata update to complete.
+func (cl *Client) waitmeta(ctx context.Context, wait time.Duration, why string) {
+ now := time.Now()
+
+ cl.metawait.mu.Lock()
+ if now.Sub(cl.metawait.lastUpdate) < cl.cfg.metadataMinAge {
+ cl.metawait.mu.Unlock()
+ return
+ }
+ cl.metawait.mu.Unlock()
+
+ cl.triggerUpdateMetadataNow(why)
+
+ quit := false
+ done := make(chan struct{})
+ timeout := time.NewTimer(wait)
+ defer timeout.Stop()
+
+ go func() {
+ defer close(done)
+ cl.metawait.mu.Lock()
+ defer cl.metawait.mu.Unlock()
+
+ for !quit {
+ if now.Sub(cl.metawait.lastUpdate) < cl.cfg.metadataMinAge {
+ return
+ }
+ cl.metawait.c.Wait()
+ }
+ }()
+
+ select {
+ case <-done:
+ return
+ case <-timeout.C:
+ case <-ctx.Done():
+ case <-cl.ctx.Done():
+ }
+
+ cl.metawait.mu.Lock()
+ quit = true
+ cl.metawait.mu.Unlock()
+ cl.metawait.c.Broadcast()
+}
+
+func (cl *Client) triggerUpdateMetadata(must bool, why string) bool {
+ if !must {
+ cl.metawait.mu.Lock()
+ defer cl.metawait.mu.Unlock()
+ if time.Since(cl.metawait.lastUpdate) < cl.cfg.metadataMinAge {
+ return false
+ }
+ }
+
+ select {
+ case cl.updateMetadataCh <- why:
+ default:
+ }
+ return true
+}
+
+func (cl *Client) triggerUpdateMetadataNow(why string) {
+ select {
+ case cl.updateMetadataNowCh <- why:
+ default:
+ }
+}
+
+func (cl *Client) blockingMetadataFn(fn func()) {
+ var wg sync.WaitGroup
+ wg.Add(1)
+ waitfn := func() {
+ defer wg.Done()
+ fn()
+ }
+ select {
+ case cl.blockingMetadataFnCh <- waitfn:
+ wg.Wait()
+ case <-cl.ctx.Done():
+ }
+}
+
+// updateMetadataLoop updates metadata whenever the update ticker ticks,
+// or whenever deliberately triggered.
+func (cl *Client) updateMetadataLoop() {
+ defer close(cl.metadone)
+ var consecutiveErrors int
+ var lastAt time.Time
+
+ ticker := time.NewTicker(cl.cfg.metadataMaxAge)
+ defer ticker.Stop()
+loop:
+ for {
+ var now bool
+ select {
+ case <-cl.ctx.Done():
+ return
+ case <-ticker.C:
+ // We do not log on the standard update case.
+ case why := <-cl.updateMetadataCh:
+ cl.cfg.logger.Log(LogLevelInfo, "metadata update triggered", "why", why)
+ case why := <-cl.updateMetadataNowCh:
+ cl.cfg.logger.Log(LogLevelInfo, "immediate metadata update triggered", "why", why)
+ now = true
+ case fn := <-cl.blockingMetadataFnCh:
+ fn()
+ continue loop
+ }
+
+ var nowTries int
+ start:
+ nowTries++
+ if !now {
+ if wait := cl.cfg.metadataMinAge - time.Since(lastAt); wait > 0 {
+ timer := time.NewTimer(wait)
+ prewait:
+ select {
+ case <-cl.ctx.Done():
+ timer.Stop()
+ return
+ case why := <-cl.updateMetadataNowCh:
+ timer.Stop()
+ cl.cfg.logger.Log(LogLevelInfo, "immediate metadata update triggered, bypassing normal wait", "why", why)
+ case <-timer.C:
+ case fn := <-cl.blockingMetadataFnCh:
+ fn()
+ goto prewait
+ }
+ }
+ }
+
+ // Even with an "update now", we sleep just a bit to allow some
+ // potential pile on now triggers.
+ time.Sleep(time.Until(lastAt.Add(10 * time.Millisecond)))
+
+ // Drain any refires that occurred during our waiting.
+ out:
+ for {
+ select {
+ case <-cl.updateMetadataCh:
+ case <-cl.updateMetadataNowCh:
+ case fn := <-cl.blockingMetadataFnCh:
+ fn()
+ default:
+ break out
+ }
+ }
+
+ retryWhy, err := cl.updateMetadata()
+ if retryWhy != nil || err != nil {
+ // If err is non-nil, the metadata request failed
+ // itself and already retried 3x; we do not loop more.
+ //
+ // If err is nil, the a topic or partition had a load
+ // error and is perhaps still being created. We retry a
+ // few more times to give Kafka a chance to figure
+ // things out. By default this will put us at 2s of
+ // looping+waiting (250ms per wait, 8x), and if things
+ // still fail we will fall into the slower update below
+ // which waits (default) 5s between tries.
+ if now && err == nil && nowTries < 8 {
+ wait := 250 * time.Millisecond
+ if cl.cfg.metadataMinAge < wait {
+ wait = cl.cfg.metadataMinAge
+ }
+ cl.cfg.logger.Log(LogLevelDebug, "immediate metadata update had inner errors, re-updating",
+ "errors", retryWhy.reason(""),
+ "update_after", wait,
+ )
+ timer := time.NewTimer(wait)
+ quickbackoff:
+ select {
+ case <-cl.ctx.Done():
+ timer.Stop()
+ return
+ case <-timer.C:
+ case fn := <-cl.blockingMetadataFnCh:
+ fn()
+ goto quickbackoff
+ }
+ goto start
+ }
+ if err != nil {
+ cl.triggerUpdateMetadata(true, fmt.Sprintf("re-updating metadata due to err: %s", err))
+ } else {
+ cl.triggerUpdateMetadata(true, retryWhy.reason("re-updating due to inner errors"))
+ }
+ }
+ if err == nil {
+ cl.metawait.signal()
+ cl.consumer.doOnMetadataUpdate()
+ lastAt = time.Now()
+ consecutiveErrors = 0
+ continue
+ }
+
+ consecutiveErrors++
+ after := time.NewTimer(cl.cfg.retryBackoff(consecutiveErrors))
+ backoff:
+ select {
+ case <-cl.ctx.Done():
+ after.Stop()
+ return
+ case <-after.C:
+ case fn := <-cl.blockingMetadataFnCh:
+ fn()
+ goto backoff
+ }
+ }
+}
+
+var errMissingTopic = errors.New("topic_missing")
+
+// Updates all producer and consumer partition data, returning whether a new
+// update needs scheduling or if an error occurred.
+//
+// The producer and consumer use different topic maps and underlying
+// topicPartitionsData pointers, but we update those underlying pointers
+// equally.
+func (cl *Client) updateMetadata() (retryWhy multiUpdateWhy, err error) {
+ var (
+ tpsProducerLoad = cl.producer.topics.load()
+ tpsConsumer *topicsPartitions
+ groupExternal *groupExternal
+ all = cl.cfg.regex
+ reqTopics []string
+ )
+ c := &cl.consumer
+ switch {
+ case c.g != nil:
+ tpsConsumer = c.g.tps
+ groupExternal = c.g.loadExternal()
+ case c.d != nil:
+ tpsConsumer = c.d.tps
+ }
+
+ if !all {
+ reqTopicsSet := make(map[string]struct{})
+ for _, m := range []map[string]*topicPartitions{
+ tpsProducerLoad,
+ tpsConsumer.load(),
+ } {
+ for topic := range m {
+ reqTopicsSet[topic] = struct{}{}
+ }
+ }
+ groupExternal.eachTopic(func(t string) {
+ reqTopicsSet[t] = struct{}{}
+ })
+ reqTopics = make([]string, 0, len(reqTopicsSet))
+ for topic := range reqTopicsSet {
+ reqTopics = append(reqTopics, topic)
+ }
+ }
+
+ latest, err := cl.fetchTopicMetadata(all, reqTopics)
+ if err != nil {
+ cl.bumpMetadataFailForTopics( // bump load failures for all topics
+ tpsProducerLoad,
+ err,
+ )
+ return nil, err
+ }
+ groupExternal.updateLatest(latest)
+
+ // If we are consuming with regex and fetched all topics, the metadata
+ // may have returned topics the consumer is not yet tracking. We ensure
+ // that we will store the topics at the end of our metadata update.
+ tpsConsumerLoad := tpsConsumer.load()
+ if all {
+ allTopics := make([]string, 0, len(latest))
+ for topic := range latest {
+ allTopics = append(allTopics, topic)
+ }
+ tpsConsumerLoad = tpsConsumer.ensureTopics(allTopics)
+ defer tpsConsumer.storeData(tpsConsumerLoad)
+
+ // For regex consuming, if a topic is not returned in the
+ // response and for at least missingTopicDelete from when we
+ // first discovered it, we assume the topic has been deleted
+ // and purge it. We allow for missingTopicDelete because (in
+ // testing locally) Kafka can originally broadcast a newly
+ // created topic exists and then fail to broadcast that info
+ // again for a while.
+ var purgeTopics []string
+ for topic, tps := range tpsConsumerLoad {
+ if _, ok := latest[topic]; !ok {
+ if td := tps.load(); td.when != 0 && time.Since(time.Unix(td.when, 0)) > cl.cfg.missingTopicDelete {
+ purgeTopics = append(purgeTopics, td.topic)
+ } else {
+ retryWhy.add(topic, -1, errMissingTopic)
+ }
+ }
+ }
+ if len(purgeTopics) > 0 {
+ // We have to `go` because Purge issues a blocking
+ // metadata fn; this will wait for our current
+ // execution to finish then purge.
+ cl.cfg.logger.Log(LogLevelInfo, "regex consumer purging topics that were previously consumed because they are missing in a metadata response, we are assuming they are deleted", "topics", purgeTopics)
+ go cl.PurgeTopicsFromClient(purgeTopics...)
+ }
+ }
+
+ css := &consumerSessionStopper{cl: cl}
+ defer css.maybeRestart()
+
+ var missingProduceTopics []*topicPartitions
+ for _, m := range []struct {
+ priors map[string]*topicPartitions
+ isProduce bool
+ }{
+ {tpsProducerLoad, true},
+ {tpsConsumerLoad, false},
+ } {
+ for topic, priorParts := range m.priors {
+ newParts, exists := latest[topic]
+ if !exists {
+ if m.isProduce {
+ missingProduceTopics = append(missingProduceTopics, priorParts)
+ }
+ continue
+ }
+ cl.mergeTopicPartitions(
+ topic,
+ priorParts,
+ newParts,
+ m.isProduce,
+ css,
+ &retryWhy,
+ )
+ }
+ }
+
+ // For all produce topics that were missing, we want to bump their
+ // retries that a failure happened. However, if we are regex consuming,
+ // then it is possible in a rare scenario for the broker to not return
+ // a topic that actually does exist and that we previously received a
+ // metadata response for. This is handled above for consuming, we now
+ // handle it the same way for consuming.
+ if len(missingProduceTopics) > 0 {
+ var bumpFail []string
+ for _, tps := range missingProduceTopics {
+ if all {
+ if td := tps.load(); td.when != 0 && time.Since(time.Unix(td.when, 0)) > cl.cfg.missingTopicDelete {
+ bumpFail = append(bumpFail, td.topic)
+ } else {
+ retryWhy.add(td.topic, -1, errMissingTopic)
+ }
+ } else {
+ bumpFail = append(bumpFail, tps.load().topic)
+ }
+ }
+ if len(bumpFail) > 0 {
+ cl.bumpMetadataFailForTopics(
+ tpsProducerLoad,
+ fmt.Errorf("metadata request did not return topics: %v", bumpFail),
+ bumpFail...,
+ )
+ }
+ }
+
+ return retryWhy, nil
+}
+
+// We use a special structure to repesent metadata before we *actually* convert
+// it to topicPartitionsData. This helps avoid any pointer reuse problems
+// because we want to keep the client's producer and consumer maps completely
+// independent. If we just returned map[string]*topicPartitionsData, we could
+// end up in some really weird pointer reuse scenario that ultimately results
+// in a bug.
+//
+// See #190 for more details, as well as the commit message introducing this.
+type metadataTopic struct {
+ loadErr error
+ isInternal bool
+ topic string
+ partitions []metadataPartition
+}
+
+func (mt *metadataTopic) newPartitions(cl *Client, isProduce bool) *topicPartitionsData {
+ n := len(mt.partitions)
+ ps := &topicPartitionsData{
+ loadErr: mt.loadErr,
+ isInternal: mt.isInternal,
+ partitions: make([]*topicPartition, 0, n),
+ writablePartitions: make([]*topicPartition, 0, n),
+ topic: mt.topic,
+ when: time.Now().Unix(),
+ }
+ for i := range mt.partitions {
+ p := mt.partitions[i].newPartition(cl, isProduce)
+ ps.partitions = append(ps.partitions, p)
+ if p.loadErr == nil {
+ ps.writablePartitions = append(ps.writablePartitions, p)
+ }
+ }
+ return ps
+}
+
+type metadataPartition struct {
+ topic string
+ topicID [16]byte
+ partition int32
+ loadErr int16
+ leader int32
+ leaderEpoch int32
+ sns sinkAndSource
+}
+
+func (mp metadataPartition) newPartition(cl *Client, isProduce bool) *topicPartition {
+ td := topicPartitionData{
+ leader: mp.leader,
+ leaderEpoch: mp.leaderEpoch,
+ }
+ p := &topicPartition{
+ loadErr: kerr.ErrorForCode(mp.loadErr),
+ topicPartitionData: td,
+ }
+ if isProduce {
+ p.records = &recBuf{
+ cl: cl,
+ topic: mp.topic,
+ partition: mp.partition,
+ maxRecordBatchBytes: cl.maxRecordBatchBytesForTopic(mp.topic),
+ recBufsIdx: -1,
+ failing: mp.loadErr != 0,
+ sink: mp.sns.sink,
+ topicPartitionData: td,
+ }
+ } else {
+ p.cursor = &cursor{
+ topic: mp.topic,
+ topicID: mp.topicID,
+ partition: mp.partition,
+ keepControl: cl.cfg.keepControl,
+ cursorsIdx: -1,
+ source: mp.sns.source,
+ topicPartitionData: td,
+ cursorOffset: cursorOffset{
+ offset: -1, // required to not consume until needed
+ lastConsumedEpoch: -1, // required sentinel
+ },
+ }
+ }
+ return p
+}
+
+// fetchTopicMetadata fetches metadata for all reqTopics and returns new
+// topicPartitionsData for each topic.
+func (cl *Client) fetchTopicMetadata(all bool, reqTopics []string) (map[string]*metadataTopic, error) {
+ _, meta, err := cl.fetchMetadataForTopics(cl.ctx, all, reqTopics)
+ if err != nil {
+ return nil, err
+ }
+
+ // Since we've fetched the metadata for some topics we can optimistically cache it
+ // for mapped metadata too. This may reduce the number of Metadata requests issued
+ // by the client.
+ cl.storeCachedMappedMetadata(meta, nil)
+
+ topics := make(map[string]*metadataTopic, len(meta.Topics))
+
+ // Even if metadata returns a leader epoch, we do not use it unless we
+ // can validate it per OffsetForLeaderEpoch. Some brokers may have an
+ // odd set of support.
+ useLeaderEpoch := cl.supportsOffsetForLeaderEpoch()
+
+ for i := range meta.Topics {
+ topicMeta := &meta.Topics[i]
+ if topicMeta.Topic == nil {
+ cl.cfg.logger.Log(LogLevelWarn, "metadata response contained nil topic name even though we did not request with topic IDs, skipping")
+ continue
+ }
+ topic := *topicMeta.Topic
+
+ mt := &metadataTopic{
+ loadErr: kerr.ErrorForCode(topicMeta.ErrorCode),
+ isInternal: topicMeta.IsInternal,
+ topic: topic,
+ partitions: make([]metadataPartition, 0, len(topicMeta.Partitions)),
+ }
+
+ topics[topic] = mt
+
+ if mt.loadErr != nil {
+ continue
+ }
+
+ // This 249 limit is in Kafka itself, we copy it here to rely on it while producing.
+ if len(topic) > 249 {
+ mt.loadErr = fmt.Errorf("invalid long topic name of (len %d) greater than max allowed 249", len(topic))
+ continue
+ }
+
+ // Kafka partitions are strictly increasing from 0. We enforce
+ // that here; if any partition is missing, we consider this
+ // topic a load failure.
+ sort.Slice(topicMeta.Partitions, func(i, j int) bool {
+ return topicMeta.Partitions[i].Partition < topicMeta.Partitions[j].Partition
+ })
+ for i := range topicMeta.Partitions {
+ if got := topicMeta.Partitions[i].Partition; got != int32(i) {
+ mt.loadErr = fmt.Errorf("kafka did not reply with a comprensive set of partitions for a topic; we expected partition %d but saw %d", i, got)
+ break
+ }
+ }
+
+ if mt.loadErr != nil {
+ continue
+ }
+
+ for i := range topicMeta.Partitions {
+ partMeta := &topicMeta.Partitions[i]
+ leaderEpoch := partMeta.LeaderEpoch
+ if meta.Version < 7 || !useLeaderEpoch {
+ leaderEpoch = -1
+ }
+ mp := metadataPartition{
+ topic: topic,
+ topicID: topicMeta.TopicID,
+ partition: partMeta.Partition,
+ loadErr: partMeta.ErrorCode,
+ leader: partMeta.Leader,
+ leaderEpoch: leaderEpoch,
+ }
+ if mp.loadErr != 0 {
+ mp.leader = unknownSeedID(0) // ensure every records & cursor can use a sink or source
+ }
+ cl.sinksAndSourcesMu.Lock()
+ sns, exists := cl.sinksAndSources[mp.leader]
+ if !exists {
+ sns = sinkAndSource{
+ sink: cl.newSink(mp.leader),
+ source: cl.newSource(mp.leader),
+ }
+ cl.sinksAndSources[mp.leader] = sns
+ }
+ for _, replica := range partMeta.Replicas {
+ if replica < 0 {
+ continue
+ }
+ if _, exists = cl.sinksAndSources[replica]; !exists {
+ cl.sinksAndSources[replica] = sinkAndSource{
+ sink: cl.newSink(replica),
+ source: cl.newSource(replica),
+ }
+ }
+ }
+ cl.sinksAndSourcesMu.Unlock()
+ mp.sns = sns
+ mt.partitions = append(mt.partitions, mp)
+ }
+ }
+
+ return topics, nil
+}
+
+// mergeTopicPartitions merges a new topicPartition into an old and returns
+// whether the metadata update that caused this merge needs to be retried.
+//
+// Retries are necessary if the topic or any partition has a retryable error.
+func (cl *Client) mergeTopicPartitions(
+ topic string,
+ l *topicPartitions,
+ mt *metadataTopic,
+ isProduce bool,
+ css *consumerSessionStopper,
+ retryWhy *multiUpdateWhy,
+) {
+ lv := *l.load() // copy so our field writes do not collide with reads
+
+ r := mt.newPartitions(cl, isProduce)
+
+ // Producers must store the update through a special function that
+ // manages unknown topic waiting, whereas consumers can just simply
+ // store the update.
+ if isProduce {
+ hadPartitions := len(lv.partitions) != 0
+ defer func() { cl.storePartitionsUpdate(topic, l, &lv, hadPartitions) }()
+ } else {
+ defer l.v.Store(&lv)
+ }
+
+ lv.loadErr = r.loadErr
+ lv.isInternal = r.isInternal
+ lv.topic = r.topic
+ if lv.when == 0 {
+ lv.when = r.when
+ }
+
+ // If the load had an error for the entire topic, we set the load error
+ // but keep our stale partition information. For anything being
+ // produced, we bump the respective error or fail everything. There is
+ // nothing to be done in a consumer.
+ if r.loadErr != nil {
+ if isProduce {
+ for _, topicPartition := range lv.partitions {
+ topicPartition.records.bumpRepeatedLoadErr(lv.loadErr)
+ }
+ } else if !kerr.IsRetriable(r.loadErr) || cl.cfg.keepRetryableFetchErrors {
+ cl.consumer.addFakeReadyForDraining(topic, -1, r.loadErr, "metadata refresh has a load error on this entire topic")
+ }
+ retryWhy.add(topic, -1, r.loadErr)
+ return
+ }
+
+ // Before the atomic update, we keep the latest partitions / writable
+ // partitions. All updates happen in r's slices, and we keep the
+ // results and store them in lv.
+ defer func() {
+ lv.partitions = r.partitions
+ lv.writablePartitions = r.writablePartitions
+ }()
+
+ // We should have no deleted partitions, but there are two cases where
+ // we could.
+ //
+ // 1) an admin added partitions, we saw, then we re-fetched metadata
+ // from an out of date broker that did not have the new partitions
+ //
+ // 2) a topic was deleted and recreated with fewer partitions
+ //
+ // Both of these scenarios should be rare to non-existent, and we do
+ // nothing if we encounter them.
+
+ // Migrating topicPartitions is a little tricky because we have to
+ // worry about underlying pointers that may currently be loaded.
+ for part, oldTP := range lv.partitions {
+ exists := part < len(r.partitions)
+ if !exists {
+ // This is the "deleted" case; see the comment above.
+ //
+ // We need to keep the partition around. For producing,
+ // the partition could be loaded and a record could be
+ // added to it after we bump the load error. For
+ // consuming, the partition is part of a group or part
+ // of what was loaded for direct consuming.
+ //
+ // We only clear a partition if it is purged from the
+ // client (which can happen automatically for consumers
+ // if the user opted into ConsumeRecreatedTopics).
+ dup := *oldTP
+ newTP := &dup
+ newTP.loadErr = errMissingMetadataPartition
+
+ r.partitions = append(r.partitions, newTP)
+
+ cl.cfg.logger.Log(LogLevelDebug, "metadata update is missing partition in topic, we are keeping the partition around for safety -- use PurgeTopicsFromClient if you wish to remove the topic",
+ "topic", topic,
+ "partition", part,
+ )
+ if isProduce {
+ oldTP.records.bumpRepeatedLoadErr(errMissingMetadataPartition)
+ }
+ retryWhy.add(topic, int32(part), errMissingMetadataPartition)
+ continue
+ }
+ newTP := r.partitions[part]
+
+ // Like above for the entire topic, an individual partition
+ // can have a load error. Unlike for the topic, individual
+ // partition errors are always retryable.
+ //
+ // If the load errored, we keep all old information minus the
+ // load error itself (the new load will have no information).
+ if newTP.loadErr != nil {
+ err := newTP.loadErr
+ *newTP = *oldTP
+ newTP.loadErr = err
+ if isProduce {
+ newTP.records.bumpRepeatedLoadErr(newTP.loadErr)
+ } else if !kerr.IsRetriable(newTP.loadErr) || cl.cfg.keepRetryableFetchErrors {
+ cl.consumer.addFakeReadyForDraining(topic, int32(part), newTP.loadErr, "metadata refresh has a load error on this partition")
+ }
+ retryWhy.add(topic, int32(part), newTP.loadErr)
+ continue
+ }
+
+ // If the new partition has an older leader epoch, then we
+ // fetched from an out of date broker. We just keep the old
+ // information.
+ if newTP.leaderEpoch < oldTP.leaderEpoch {
+ // If we repeatedly rewind, then perhaps the cluster
+ // entered some bad state and lost forward progress.
+ // We will log & allow the rewind to allow the client
+ // to continue; other requests may encounter fenced
+ // epoch errors (and respectively recover).
+ //
+ // Five is a pretty low amount of retries, but since
+ // we iterate through known brokers, this basically
+ // means we keep stale metadata if five brokers all
+ // agree things rewound.
+ const maxEpochRewinds = 5
+ if oldTP.epochRewinds < maxEpochRewinds {
+ cl.cfg.logger.Log(LogLevelDebug, "metadata leader epoch went backwards, ignoring update",
+ "topic", topic,
+ "partition", part,
+ "old_leader_epoch", oldTP.leaderEpoch,
+ "new_leader_epoch", newTP.leaderEpoch,
+ "current_num_rewinds", oldTP.epochRewinds+1,
+ )
+ *newTP = *oldTP
+ newTP.epochRewinds++
+ retryWhy.add(topic, int32(part), errEpochRewind)
+ continue
+ }
+
+ cl.cfg.logger.Log(LogLevelInfo, "metadata leader epoch went backwards repeatedly, we are now keeping the metadata to allow forward progress",
+ "topic", topic,
+ "partition", part,
+ "old_leader_epoch", oldTP.leaderEpoch,
+ "new_leader_epoch", newTP.leaderEpoch,
+ )
+ }
+
+ if !isProduce {
+ var noID [16]byte
+ if newTP.cursor.topicID == noID && oldTP.cursor.topicID != noID {
+ cl.cfg.logger.Log(LogLevelWarn, "metadata update is missing the topic ID when we previously had one, ignoring update",
+ "topic", topic,
+ "partition", part,
+ )
+ retryWhy.add(topic, int32(part), errMissingTopicID)
+ continue
+ }
+ }
+
+ // If the tp data is the same, we simply copy over the records
+ // and cursor pointers.
+ //
+ // If the tp data equals the old, then the sink / source is the
+ // same, because the sink/source is from the tp leader.
+ if newTP.topicPartitionData == oldTP.topicPartitionData {
+ cl.cfg.logger.Log(LogLevelDebug, "metadata refresh has identical topic partition data",
+ "topic", topic,
+ "partition", part,
+ "leader", newTP.leader,
+ "leader_epoch", newTP.leaderEpoch,
+ )
+ if isProduce {
+ newTP.records = oldTP.records
+ newTP.records.clearFailing() // always clear failing state for producing after meta update
+ } else {
+ newTP.cursor = oldTP.cursor // unlike records, there is no failing state for a cursor
+ }
+ } else {
+ cl.cfg.logger.Log(LogLevelDebug, "metadata refresh topic partition data changed",
+ "topic", topic,
+ "partition", part,
+ "new_leader", newTP.leader,
+ "new_leader_epoch", newTP.leaderEpoch,
+ "old_leader", oldTP.leader,
+ "old_leader_epoch", oldTP.leaderEpoch,
+ )
+ if isProduce {
+ oldTP.migrateProductionTo(newTP) // migration clears failing state
+ } else {
+ oldTP.migrateCursorTo(newTP, css)
+ }
+ }
+ }
+
+ // For any partitions **not currently in use**, we need to add them to
+ // the sink or source. If they are in use, they could be getting
+ // managed or moved by the sink or source itself, so we should not
+ // check the index field (which may be concurrently modified).
+ if len(lv.partitions) > len(r.partitions) {
+ return
+ }
+ newPartitions := r.partitions[len(lv.partitions):]
+
+ // Anything left with a negative recBufsIdx / cursorsIdx is a new topic
+ // partition and must be added to the sink / source.
+ for _, newTP := range newPartitions {
+ if isProduce && newTP.records.recBufsIdx == -1 {
+ newTP.records.sink.addRecBuf(newTP.records)
+ } else if !isProduce && newTP.cursor.cursorsIdx == -1 {
+ newTP.cursor.source.addCursor(newTP.cursor)
+ }
+ }
+}
+
+var (
+ errEpochRewind = errors.New("epoch rewind")
+ errMissingTopicID = errors.New("missing topic ID")
+)
+
+type multiUpdateWhy map[kerrOrString]map[string]map[int32]struct{}
+
+type kerrOrString struct {
+ k *kerr.Error
+ s string
+}
+
+func (m *multiUpdateWhy) isOnly(err error) bool {
+ if m == nil {
+ return false
+ }
+ for e := range *m {
+ if !errors.Is(err, e.k) {
+ return false
+ }
+ }
+ return true
+}
+
+func (m *multiUpdateWhy) add(t string, p int32, err error) {
+ if err == nil {
+ return
+ }
+
+ if *m == nil {
+ *m = make(map[kerrOrString]map[string]map[int32]struct{})
+ }
+ var ks kerrOrString
+ if ke := (*kerr.Error)(nil); errors.As(err, &ke) {
+ ks = kerrOrString{k: ke}
+ } else {
+ ks = kerrOrString{s: err.Error()}
+ }
+
+ ts := (*m)[ks]
+ if ts == nil {
+ ts = make(map[string]map[int32]struct{})
+ (*m)[ks] = ts
+ }
+
+ ps := ts[t]
+ if ps == nil {
+ ps = make(map[int32]struct{})
+ ts[t] = ps
+ }
+ // -1 signals that the entire topic had an error.
+ if p != -1 {
+ ps[p] = struct{}{}
+ }
+}
+
+// err{topic[1 2 3] topic2[4 5 6]} err2{...}
+func (m multiUpdateWhy) reason(reason string) string {
+ if len(m) == 0 {
+ return ""
+ }
+
+ ksSorted := make([]kerrOrString, 0, len(m))
+ for err := range m {
+ ksSorted = append(ksSorted, err)
+ }
+ sort.Slice(ksSorted, func(i, j int) bool { // order by non-nil kerr's code, otherwise the string
+ l, r := ksSorted[i], ksSorted[j]
+ return l.k != nil && (r.k == nil || l.k.Code < r.k.Code) || r.k == nil && l.s < r.s
+ })
+
+ var errorStrings []string
+ for _, ks := range ksSorted {
+ ts := m[ks]
+ tsSorted := make([]string, 0, len(ts))
+ for t := range ts {
+ tsSorted = append(tsSorted, t)
+ }
+ sort.Strings(tsSorted)
+
+ var topicStrings []string
+ for _, t := range tsSorted {
+ ps := ts[t]
+ if len(ps) == 0 {
+ topicStrings = append(topicStrings, t)
+ } else {
+ psSorted := make([]int32, 0, len(ps))
+ for p := range ps {
+ psSorted = append(psSorted, p)
+ }
+ sort.Slice(psSorted, func(i, j int) bool { return psSorted[i] < psSorted[j] })
+ topicStrings = append(topicStrings, fmt.Sprintf("%s%v", t, psSorted))
+ }
+ }
+
+ if ks.k != nil {
+ errorStrings = append(errorStrings, fmt.Sprintf("%s{%s}", ks.k.Message, strings.Join(topicStrings, " ")))
+ } else {
+ errorStrings = append(errorStrings, fmt.Sprintf("%s{%s}", ks.s, strings.Join(topicStrings, " ")))
+ }
+ }
+ if reason == "" {
+ return strings.Join(errorStrings, " ")
+ }
+ return reason + ": " + strings.Join(errorStrings, " ")
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/partitioner.go b/vendor/github.com/twmb/franz-go/pkg/kgo/partitioner.go
new file mode 100644
index 0000000000000..46e7d11d124b8
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/partitioner.go
@@ -0,0 +1,614 @@
+package kgo
+
+import (
+ "math"
+ "math/rand"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kbin"
+)
+
+// Partitioner creates topic partitioners to determine which partition messages
+// should be sent to.
+//
+// Note that a record struct is unmodified (minus a potential default topic)
+// from producing through partitioning, so you can set fields in the record
+// struct before producing to aid in partitioning with a custom partitioner.
+type Partitioner interface {
+ // forTopic returns a partitioner for an individual topic. It is
+ // guaranteed that only one record will use the an individual topic's
+ // topicPartitioner at a time, meaning partitioning within a topic does
+ // not require locks.
+ ForTopic(string) TopicPartitioner
+}
+
+// TopicPartitioner partitions records in an individual topic.
+type TopicPartitioner interface {
+ // RequiresConsistency returns true if a record must hash to the same
+ // partition even if a partition is down.
+ // If true, a record may hash to a partition that cannot be written to
+ // and will error until the partition comes back.
+ RequiresConsistency(*Record) bool
+ // Partition determines, among a set of n partitions, which index should
+ // be chosen to use for the partition for r.
+ Partition(r *Record, n int) int
+}
+
+// TopicPartitionerOnNewBatch is an optional extension interface to
+// TopicPartitioner that calls OnNewBatch before any new batch is created. If
+// buffering a record would cause a new batch, OnNewBatch is called.
+//
+// This interface allows for partitioner implementations that effectively pin
+// to a partition until a new batch is created, after which the partitioner can
+// choose which next partition to use.
+type TopicPartitionerOnNewBatch interface {
+ // OnNewBatch is called when producing a record if that record would
+ // trigger a new batch on its current partition.
+ OnNewBatch()
+}
+
+// TopicBackupPartitioner is an optional extension interface to
+// TopicPartitioner that can partition by the number of records buffered.
+//
+// If a partitioner implements this interface, the Partition function will
+// never be called.
+type TopicBackupPartitioner interface {
+ TopicPartitioner
+
+ // PartitionByBackup is similar to Partition, but has an additional
+ // backupIter. This iterator will return the number of buffered records
+ // per partition index. The iterator's Next function can only be called
+ // up to n times, calling it any more will panic.
+ PartitionByBackup(r *Record, n int, backupIter TopicBackupIter) int
+}
+
+// TopicBackupIter is an iterates through partition indices.
+type TopicBackupIter interface {
+ // Next returns the next partition index and the total buffered records
+ // for the partition. If Rem returns 0, calling this function again
+ // will panic.
+ Next() (int, int64)
+ // Rem returns the number of elements left to iterate through.
+ Rem() int
+}
+
+////////////
+// SIMPLE // - BasicConsistent, Manual, RoundRobin
+////////////
+
+// BasicConsistentPartitioner wraps a single function to provide a Partitioner
+// and TopicPartitioner (that function is essentially a combination of
+// Partitioner.ForTopic and TopicPartitioner.Partition).
+//
+// As a minimal example, if you do not care about the topic and you set the
+// partition before producing:
+//
+// kgo.BasicConsistentPartitioner(func(topic) func(*Record, int) int {
+// return func(r *Record, n int) int {
+// return int(r.Partition)
+// }
+// })
+func BasicConsistentPartitioner(partition func(string) func(r *Record, n int) int) Partitioner {
+ return &basicPartitioner{partition}
+}
+
+type (
+ basicPartitioner struct {
+ fn func(string) func(*Record, int) int
+ }
+
+ basicTopicPartitioner struct {
+ fn func(*Record, int) int
+ }
+)
+
+func (b *basicPartitioner) ForTopic(t string) TopicPartitioner {
+ return &basicTopicPartitioner{b.fn(t)}
+}
+
+func (*basicTopicPartitioner) RequiresConsistency(*Record) bool { return true }
+func (b *basicTopicPartitioner) Partition(r *Record, n int) int { return b.fn(r, n) }
+
+// ManualPartitioner is a partitioner that simply returns the Partition field
+// that is already set on any record.
+//
+// Any record with an invalid partition will be immediately failed. This
+// partitioner is simply the partitioner that is demonstrated in the
+// BasicConsistentPartitioner documentation.
+func ManualPartitioner() Partitioner {
+ return BasicConsistentPartitioner(func(string) func(*Record, int) int {
+ return func(r *Record, _ int) int {
+ return int(r.Partition)
+ }
+ })
+}
+
+// RoundRobinPartitioner is a partitioner that round-robin's through all
+// available partitions. This algorithm has lower throughput and causes higher
+// CPU load on brokers, but can be useful if you want to ensure an even
+// distribution of records to partitions.
+func RoundRobinPartitioner() Partitioner {
+ return new(roundRobinPartitioner)
+}
+
+type (
+ roundRobinPartitioner struct{}
+
+ roundRobinTopicPartitioner struct {
+ on int
+ }
+)
+
+func (*roundRobinPartitioner) ForTopic(string) TopicPartitioner {
+ return new(roundRobinTopicPartitioner)
+}
+
+func (*roundRobinTopicPartitioner) RequiresConsistency(*Record) bool { return false }
+func (r *roundRobinTopicPartitioner) Partition(_ *Record, n int) int {
+ if r.on >= n {
+ r.on = 0
+ }
+ ret := r.on
+ r.on++
+ return ret
+}
+
+//////////////////
+// LEAST BACKUP //
+//////////////////
+
+// LeastBackupPartitioner prioritizes partitioning by three factors, in order:
+//
+// 1. pin to the current pick until there is a new batch
+// 2. on new batch, choose the least backed up partition (the partition with
+// the fewest amount of buffered records)
+// 3. if multiple partitions are equally least-backed-up, choose one at random
+//
+// This algorithm prioritizes least-backed-up throughput, which may result in
+// unequal partitioning. It is likely that this algorithm will talk most to the
+// broker that it has the best connection to.
+//
+// This algorithm is resilient to brokers going down: if a few brokers die, it
+// is possible your throughput will be so high that the maximum buffered
+// records will be reached in the now-offline partitions before metadata
+// responds that the broker is offline. With the standard partitioning
+// algorithms, the only recovery is if the partition is remapped or if the
+// broker comes back online. With the least backup partitioner, downed
+// partitions will see slight backup, but then the other partitions that are
+// still accepting writes will get all of the writes and your client will not
+// be blocked.
+//
+// Under ideal scenarios (no broker / connection issues), StickyPartitioner is
+// equivalent to LeastBackupPartitioner. This partitioner is only recommended
+// if you are a producer consistently dealing with flaky connections or
+// problematic brokers and do not mind uneven load on your brokers.
+func LeastBackupPartitioner() Partitioner {
+ return new(leastBackupPartitioner)
+}
+
+type (
+ leastBackupInput struct{ mapping []*topicPartition }
+
+ leastBackupPartitioner struct{}
+
+ leastBackupTopicPartitioner struct {
+ onPart int
+ rng *rand.Rand
+ }
+)
+
+func (i *leastBackupInput) Next() (int, int64) {
+ last := len(i.mapping) - 1
+ buffered := i.mapping[last].records.buffered.Load()
+ i.mapping = i.mapping[:last]
+ return last, buffered
+}
+
+func (i *leastBackupInput) Rem() int {
+ return len(i.mapping)
+}
+
+func (*leastBackupPartitioner) ForTopic(string) TopicPartitioner {
+ return &leastBackupTopicPartitioner{
+ onPart: -1,
+ rng: rand.New(rand.NewSource(time.Now().UnixNano())),
+ }
+}
+
+func (p *leastBackupTopicPartitioner) OnNewBatch() { p.onPart = -1 }
+func (*leastBackupTopicPartitioner) RequiresConsistency(*Record) bool { return false }
+func (*leastBackupTopicPartitioner) Partition(*Record, int) int { panic("unreachable") }
+
+func (p *leastBackupTopicPartitioner) PartitionByBackup(_ *Record, n int, backup TopicBackupIter) int {
+ if p.onPart == -1 || p.onPart >= n {
+ leastBackup := int64(math.MaxInt64)
+ npicked := 0
+ for ; n > 0; n-- {
+ pick, backup := backup.Next()
+ if backup < leastBackup {
+ leastBackup = backup
+ p.onPart = pick
+ npicked = 1
+ } else {
+ npicked++ // reservoir sampling with k = 1
+ if p.rng.Intn(npicked) == 0 {
+ p.onPart = pick
+ }
+ }
+ }
+ }
+ return p.onPart
+}
+
+///////////////////
+// UNIFORM BYTES //
+///////////////////
+
+// UniformBytesPartitioner is a redux of the StickyPartitioner, proposed in
+// KIP-794 and release with the Java client in Kafka 3.3. This partitioner
+// returns the same partition until 'bytes' is hit. At that point, a
+// re-partitioning happens. If adaptive is false, this chooses a new random
+// partition, otherwise this chooses a broker based on the inverse of the
+// backlog currently buffered for that broker. If keys is true, this uses
+// standard hashing based on record key for records with non-nil keys. hasher
+// is optional; if nil, the default hasher murmur2 (Kafka's default).
+//
+// The point of this hasher is to create larger batches while producing the
+// same amount to all partitions over the long run. Adaptive opts in to a
+// slight imbalance so that this can produce more to brokers that are less
+// loaded.
+//
+// This implementation differs slightly from Kafka's because this does not
+// account for the compressed size of a batch, nor batch overhead. For
+// overhead, in practice, the overhead is relatively constant so it would
+// affect all batches equally. For compression, this client does not compress
+// until after a batch is created and frozen, so it is not possible to track
+// compression. This client also uses the number of records for backup
+// calculation rather than number of bytes, but the heuristic should be
+// similar. Lastly, this client does not have a timeout for partition
+// availability. Realistically, these will be the most backed up partitions so
+// they should be chosen the least.
+//
+// NOTE: This implementation may create sub-optimal batches if lingering is
+// enabled. This client's default is to disable lingering. The patch used to
+// address this in Kafka is KAFKA-14156 (which itself is not perfect in the
+// context of disabling lingering). For more details, read KAFKA-14156.
+func UniformBytesPartitioner(bytes int, adaptive, keys bool, hasher PartitionerHasher) Partitioner {
+ if hasher == nil {
+ hasher = KafkaHasher(murmur2)
+ }
+ return &uniformBytesPartitioner{
+ bytes,
+ adaptive,
+ keys,
+ hasher,
+ }
+}
+
+type (
+ uniformBytesPartitioner struct {
+ bytes int
+ adaptive bool
+ keys bool
+ hasher PartitionerHasher
+ }
+
+ uniformBytesTopicPartitioner struct {
+ u uniformBytesPartitioner
+ bytes int
+ onPart int
+ rng *rand.Rand
+
+ calc []struct {
+ f float64
+ n int
+ }
+ }
+)
+
+func (u *uniformBytesPartitioner) ForTopic(string) TopicPartitioner {
+ return &uniformBytesTopicPartitioner{
+ u: *u,
+ onPart: -1,
+ rng: rand.New(rand.NewSource(time.Now().UnixNano())),
+ }
+}
+
+func (p *uniformBytesTopicPartitioner) RequiresConsistency(r *Record) bool {
+ return p.u.keys && r.Key != nil
+}
+func (*uniformBytesTopicPartitioner) Partition(*Record, int) int { panic("unreachable") }
+
+func (p *uniformBytesTopicPartitioner) PartitionByBackup(r *Record, n int, backup TopicBackupIter) int {
+ if p.u.keys && r.Key != nil {
+ return p.u.hasher(r.Key, n)
+ }
+
+ l := 1 + // attributes, int8 unused
+ 1 + // ts delta, 1 minimum (likely 2 or 3)
+ 1 + // offset delta, likely 1
+ kbin.VarintLen(int32(len(r.Key))) +
+ len(r.Key) +
+ kbin.VarintLen(int32(len(r.Value))) +
+ len(r.Value) +
+ kbin.VarintLen(int32(len(r.Headers))) // varint array len headers
+
+ for _, h := range r.Headers {
+ l += kbin.VarintLen(int32(len(h.Key))) +
+ len(h.Key) +
+ kbin.VarintLen(int32(len(h.Value))) +
+ len(h.Value)
+ }
+
+ p.bytes += l
+ if p.bytes >= p.u.bytes {
+ p.bytes = l
+ p.onPart = -1
+ }
+
+ if p.onPart >= 0 && p.onPart < n {
+ return p.onPart
+ }
+
+ if !p.u.adaptive {
+ p.onPart = p.rng.Intn(n)
+ } else {
+ p.calc = p.calc[:0]
+
+ // For adaptive, the logic is that we pick by broker according
+ // to the inverse of the queue size. Presumably this means
+ // bytes, but we use records for simplicity.
+ //
+ // We calculate 1/recs for all brokers and choose the first one
+ // in this ordering that takes us negative.
+ //
+ // e.g., 1/1 + 1/3; pick is 0.2; 0.2*1.3333 = 0.26666; minus 1
+ // is negative, meaning our pick is the first. If rng was 0.9,
+ // scaled is 1.2, meaning our pick is the second (-1, still
+ // positive, second pick takes us negative).
+ //
+ // To guard floating rounding problems, if we pick nothing,
+ // then this means we pick our last.
+ var t float64
+ for ; n > 0; n-- {
+ n, backup := backup.Next()
+ backup++ // ensure non-zero
+ f := 1 / float64(backup)
+ t += f
+ p.calc = append(p.calc, struct {
+ f float64
+ n int
+ }{f, n})
+ }
+ r := p.rng.Float64()
+ pick := r * t
+ for _, c := range p.calc {
+ pick -= c.f
+ if pick <= 0 {
+ p.onPart = c.n
+ break
+ }
+ }
+ if p.onPart == -1 {
+ p.onPart = p.calc[len(p.calc)-1].n
+ }
+ }
+ return p.onPart
+}
+
+/////////////////////
+// STICKY & COMPAT // - Sticky, Kafka (custom hash), Sarama (custom hash)
+/////////////////////
+
+// StickyPartitioner is the same as StickyKeyPartitioner, but with no logic to
+// consistently hash keys. That is, this only partitions according to the
+// sticky partition strategy.
+func StickyPartitioner() Partitioner {
+ return new(stickyPartitioner)
+}
+
+type (
+ stickyPartitioner struct{}
+
+ stickyTopicPartitioner struct {
+ lastPart int
+ onPart int
+ rng *rand.Rand
+ }
+)
+
+func (*stickyPartitioner) ForTopic(string) TopicPartitioner {
+ p := newStickyTopicPartitioner()
+ return &p
+}
+
+func newStickyTopicPartitioner() stickyTopicPartitioner {
+ return stickyTopicPartitioner{
+ lastPart: -1,
+ onPart: -1,
+ rng: rand.New(rand.NewSource(time.Now().UnixNano())),
+ }
+}
+
+func (p *stickyTopicPartitioner) OnNewBatch() { p.lastPart, p.onPart = p.onPart, -1 }
+func (*stickyTopicPartitioner) RequiresConsistency(*Record) bool { return false }
+func (p *stickyTopicPartitioner) Partition(_ *Record, n int) int {
+ if p.onPart == -1 || p.onPart >= n {
+ p.onPart = p.rng.Intn(n)
+ if p.onPart == p.lastPart {
+ p.onPart = (p.onPart + 1) % n
+ }
+ }
+ return p.onPart
+}
+
+// StickyKeyPartitioner mirrors the default Java partitioner from Kafka's 2.4
+// release (see KIP-480 and KAFKA-8601) until their 3.3 release. This was
+// replaced in 3.3 with the uniform sticky partitioner (KIP-794), which is
+// reimplemented in this client as the UniformBytesPartitioner.
+//
+// This is the same "hash the key consistently, if no key, choose random
+// partition" strategy that the Java partitioner has always used, but rather
+// than always choosing a random partition, the partitioner pins a partition to
+// produce to until that partition rolls over to a new batch. Only when rolling
+// to new batches does this partitioner switch partitions.
+//
+// The benefit with this pinning is less CPU utilization on Kafka brokers.
+// Over time, the random distribution is the same, but the brokers are handling
+// on average larger batches.
+//
+// hasher is optional; if nil, this will return a partitioner that partitions
+// exactly how Kafka does. Specifically, the partitioner will use murmur2 to
+// hash keys, will mask out the 32nd bit, and then will mod by the number of
+// potential partitions.
+func StickyKeyPartitioner(hasher PartitionerHasher) Partitioner {
+ if hasher == nil {
+ hasher = KafkaHasher(murmur2)
+ }
+ return &keyPartitioner{hasher}
+}
+
+// PartitionerHasher returns a partition to use given the input data and number
+// of partitions.
+type PartitionerHasher func([]byte, int) int
+
+// KafkaHasher returns a PartitionerHasher using hashFn that mirrors how Kafka
+// partitions after hashing data. In Kafka, after hashing into a uint32, the
+// hash is converted to an int32 and the high bit is stripped. Kafka by default
+// uses murmur2 hashing, and the StickyKeyPartiitoner uses this by default.
+// Using this KafkaHasher function is only necessary if you want to change the
+// underlying hashing algorithm.
+func KafkaHasher(hashFn func([]byte) uint32) PartitionerHasher {
+ return func(key []byte, n int) int {
+ // https://github.com/apache/kafka/blob/d91a94e/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java#L59
+ // https://github.com/apache/kafka/blob/d91a94e/clients/src/main/java/org/apache/kafka/common/utils/Utils.java#L865-L867
+ // Masking before or after the int conversion makes no difference.
+ return int(hashFn(key)&0x7fffffff) % n
+ }
+}
+
+// SaramaHasher is a historical misnamed partitioner. This library's original
+// implementation of the SaramaHasher was incorrect, if you want an exact
+// match for the Sarama partitioner, use the [SaramaCompatHasher].
+//
+// This partitioner remains because as it turns out, other ecosystems provide
+// a similar partitioner and this partitioner is useful for compatibility.
+//
+// In particular, using this function with a crc32.ChecksumIEEE hasher makes
+// this partitioner match librdkafka's consistent partitioner, or the
+// zendesk/ruby-kafka partitioner.
+func SaramaHasher(hashFn func([]byte) uint32) PartitionerHasher {
+ return func(key []byte, n int) int {
+ p := int(hashFn(key)) % n
+ if p < 0 {
+ p = -p
+ }
+ return p
+ }
+}
+
+// SaramaCompatHasher returns a PartitionerHasher using hashFn that mirrors how
+// Sarama partitions after hashing data.
+//
+// Sarama has two differences from Kafka when partitioning:
+//
+// 1) In Kafka, when converting the uint32 hash to an int32, Kafka masks the
+// high bit. In Sarama, if the high bit is 1 (i.e., the number as an int32 is
+// negative), Sarama negates the number.
+//
+// 2) Kafka by default uses the murmur2 hashing algorithm. Sarama by default
+// uses fnv-1a.
+//
+// Sarama added a NewReferenceHashPartitioner function that attempted to align
+// with Kafka, but the reference partitioner only fixed the first difference,
+// not the second. Further customization options were added later that made it
+// possible to exactly match Kafka when hashing.
+//
+// In short, to *exactly* match the Sarama defaults, use the following:
+//
+// kgo.StickyKeyPartitioner(kgo.SaramaCompatHasher(fnv32a))
+//
+// Where fnv32a is a function returning a new 32 bit fnv-1a hasher.
+//
+// func fnv32a(b []byte) uint32 {
+// h := fnv.New32a()
+// h.Reset()
+// h.Write(b)
+// return h.Sum32()
+// }
+func SaramaCompatHasher(hashFn func([]byte) uint32) PartitionerHasher {
+ return func(key []byte, n int) int {
+ p := int32(hashFn(key)) % int32(n)
+ if p < 0 {
+ p = -p
+ }
+ return int(p)
+ }
+}
+
+type (
+ keyPartitioner struct {
+ hasher PartitionerHasher
+ }
+
+ stickyKeyTopicPartitioner struct {
+ hasher PartitionerHasher
+ stickyTopicPartitioner
+ }
+)
+
+func (k *keyPartitioner) ForTopic(string) TopicPartitioner {
+ return &stickyKeyTopicPartitioner{k.hasher, newStickyTopicPartitioner()}
+}
+
+func (*stickyKeyTopicPartitioner) RequiresConsistency(r *Record) bool { return r.Key != nil }
+func (p *stickyKeyTopicPartitioner) Partition(r *Record, n int) int {
+ if r.Key != nil {
+ return p.hasher(r.Key, n)
+ }
+ return p.stickyTopicPartitioner.Partition(r, n)
+}
+
+/////////////
+// MURMUR2 //
+/////////////
+
+// Straight from the C++ code and from the Java code duplicating it.
+// https://github.com/apache/kafka/blob/d91a94e/clients/src/main/java/org/apache/kafka/common/utils/Utils.java#L383-L421
+// https://github.com/aappleby/smhasher/blob/61a0530f/src/MurmurHash2.cpp#L37-L86
+//
+// The Java code uses ints but with unsigned shifts; we do not need to.
+func murmur2(b []byte) uint32 {
+ const (
+ seed uint32 = 0x9747b28c
+ m uint32 = 0x5bd1e995
+ r = 24
+ )
+ h := seed ^ uint32(len(b))
+ for len(b) >= 4 {
+ k := uint32(b[3])<<24 + uint32(b[2])<<16 + uint32(b[1])<<8 + uint32(b[0])
+ b = b[4:]
+ k *= m
+ k ^= k >> r
+ k *= m
+
+ h *= m
+ h ^= k
+ }
+ switch len(b) {
+ case 3:
+ h ^= uint32(b[2]) << 16
+ fallthrough
+ case 2:
+ h ^= uint32(b[1]) << 8
+ fallthrough
+ case 1:
+ h ^= uint32(b[0])
+ h *= m
+ }
+
+ h ^= h >> 13
+ h *= m
+ h ^= h >> 15
+ return h
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/producer.go b/vendor/github.com/twmb/franz-go/pkg/kgo/producer.go
new file mode 100644
index 0000000000000..d9cca9920aace
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/producer.go
@@ -0,0 +1,1226 @@
+package kgo
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "math"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+type producer struct {
+ inflight atomicI64 // high 16: # waiters, low 48: # inflight
+
+ // mu and c are used for flush and drain notifications; mu is used for
+ // a few other tight locks.
+ mu sync.Mutex
+ c *sync.Cond
+
+ bufferedRecords int64
+ bufferedBytes int64
+
+ cl *Client
+
+ topicsMu sync.Mutex // locked to prevent concurrent updates; reads are always atomic
+ topics *topicsPartitions
+
+ // Hooks exist behind a pointer because likely they are not used.
+ // We only take up one byte vs. 6.
+ hooks *struct {
+ buffered []HookProduceRecordBuffered
+ partitioned []HookProduceRecordPartitioned
+ unbuffered []HookProduceRecordUnbuffered
+ }
+
+ hasHookBatchWritten bool
+
+ // unknownTopics buffers all records for topics that are not loaded.
+ // The map is to a pointer to a slice for reasons documented in
+ // waitUnknownTopic.
+ unknownTopicsMu sync.Mutex
+ unknownTopics map[string]*unknownTopicProduces
+
+ id atomic.Value
+ producingTxn atomicBool
+
+ // We must have a producer field for flushing; we cannot just have a
+ // field on recBufs that is toggled on flush. If we did, then a new
+ // recBuf could be created and records sent to while we are flushing.
+ flushing atomicI32 // >0 if flushing, can Flush many times concurrently
+ blocked atomicI32 // >0 if over max recs or bytes
+ blockedBytes int64
+
+ aborting atomicI32 // >0 if aborting, can abort many times concurrently
+
+ idMu sync.Mutex
+ idVersion int16
+
+ batchPromises ringBatchPromise
+ promisesMu sync.Mutex
+
+ txnMu sync.Mutex
+ inTxn bool
+
+ // If using EndBeginTxnUnsafe, and any partitions are actually produced
+ // to, we issue an AddPartitionsToTxn at the end to re-add them to a
+ // new transaction. We have to due to logic races: the broker may not
+ // have handled the produce requests yet, and we want to ensure a new
+ // transaction is started.
+ //
+ // If the user stops producing, we want to ensure that our restarted
+ // transaction is actually ended. Thus, we set readded whenever we have
+ // partitions we actually restart. We issue EndTxn and reset readded in
+ // EndAndBegin; if nothing more was produced to, we ensure we finish
+ // the started txn.
+ readded bool
+}
+
+// BufferedProduceRecords returns the number of records currently buffered for
+// producing within the client.
+//
+// This can be used as a gauge to determine how far behind the client is for
+// flushing records produced by your client (which can help determine network /
+// cluster health).
+func (cl *Client) BufferedProduceRecords() int64 {
+ cl.producer.mu.Lock()
+ defer cl.producer.mu.Unlock()
+ return cl.producer.bufferedRecords + int64(cl.producer.blocked.Load())
+}
+
+// BufferedProduceBytes returns the number of bytes currently buffered for
+// producing within the client. This is the sum of all keys, values, and header
+// keys/values. See the related [BufferedProduceRecords] for more information.
+func (cl *Client) BufferedProduceBytes() int64 {
+ cl.producer.mu.Lock()
+ defer cl.producer.mu.Unlock()
+ return cl.producer.bufferedBytes + cl.producer.blockedBytes
+}
+
+type unknownTopicProduces struct {
+ buffered []promisedRec
+ wait chan error // retryable errors
+ fatal chan error // must-signal quit errors; capacity 1
+}
+
+func (p *producer) init(cl *Client) {
+ p.cl = cl
+ p.topics = newTopicsPartitions()
+ p.unknownTopics = make(map[string]*unknownTopicProduces)
+ p.idVersion = -1
+ p.id.Store(&producerID{
+ id: -1,
+ epoch: -1,
+ err: errReloadProducerID,
+ })
+ p.c = sync.NewCond(&p.mu)
+
+ inithooks := func() {
+ if p.hooks == nil {
+ p.hooks = &struct {
+ buffered []HookProduceRecordBuffered
+ partitioned []HookProduceRecordPartitioned
+ unbuffered []HookProduceRecordUnbuffered
+ }{}
+ }
+ }
+
+ cl.cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookProduceRecordBuffered); ok {
+ inithooks()
+ p.hooks.buffered = append(p.hooks.buffered, h)
+ }
+ if h, ok := h.(HookProduceRecordPartitioned); ok {
+ inithooks()
+ p.hooks.partitioned = append(p.hooks.partitioned, h)
+ }
+ if h, ok := h.(HookProduceRecordUnbuffered); ok {
+ inithooks()
+ p.hooks.unbuffered = append(p.hooks.unbuffered, h)
+ }
+ if _, ok := h.(HookProduceBatchWritten); ok {
+ p.hasHookBatchWritten = true
+ }
+ })
+}
+
+func (p *producer) purgeTopics(topics []string) {
+ p.topicsMu.Lock()
+ defer p.topicsMu.Unlock()
+
+ p.unknownTopicsMu.Lock()
+ for _, topic := range topics {
+ if unknown, exists := p.unknownTopics[topic]; exists {
+ delete(p.unknownTopics, topic)
+ close(unknown.wait)
+ p.promiseBatch(batchPromise{
+ recs: unknown.buffered,
+ err: errPurged,
+ })
+ }
+ }
+ p.unknownTopicsMu.Unlock()
+
+ toStore := p.topics.clone()
+ defer p.topics.storeData(toStore)
+
+ for _, topic := range topics {
+ d := toStore.loadTopic(topic)
+ if d == nil {
+ continue
+ }
+ delete(toStore, topic)
+ for _, p := range d.partitions {
+ r := p.records
+
+ // First we set purged, so that anything in the process
+ // of being buffered will immediately fail when it goes
+ // to buffer.
+ r.mu.Lock()
+ r.purged = true
+ r.mu.Unlock()
+
+ // Now we remove from the sink. When we do, the recBuf
+ // is effectively abandonded. Any active produces may
+ // finish before we fail the records; if they finish
+ // after they will no longer belong in the batch, but
+ // they may have been produced. This is the duplicate
+ // risk a user runs when purging.
+ r.sink.removeRecBuf(r)
+
+ // Once abandonded, we now need to fail anything that
+ // was buffered.
+ go func() {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+ r.failAllRecords(errPurged)
+ }()
+ }
+ }
+}
+
+func (p *producer) isAborting() bool { return p.aborting.Load() > 0 }
+
+func noPromise(*Record, error) {}
+
+// ProduceResult is the result of producing a record in a synchronous manner.
+type ProduceResult struct {
+ // Record is the produced record. It is always non-nil.
+ //
+ // If this record was produced successfully, its attrs / offset / id /
+ // epoch / etc. fields are filled in on return if possible (i.e. when
+ // producing with acks required).
+ Record *Record
+
+ // Err is a potential produce error. If this is non-nil, the record was
+ // not produced successfully.
+ Err error
+}
+
+// ProduceResults is a collection of produce results.
+type ProduceResults []ProduceResult
+
+// FirstErr returns the first erroring result, if any.
+func (rs ProduceResults) FirstErr() error {
+ for _, r := range rs {
+ if r.Err != nil {
+ return r.Err
+ }
+ }
+ return nil
+}
+
+// First the first record and error in the produce results.
+//
+// This function is useful if you only passed one record to ProduceSync.
+func (rs ProduceResults) First() (*Record, error) {
+ return rs[0].Record, rs[0].Err
+}
+
+// ProduceSync is a synchronous produce. See the Produce documentation for an
+// in depth description of how producing works.
+//
+// This function produces all records in one range loop and waits for them all
+// to be produced before returning.
+func (cl *Client) ProduceSync(ctx context.Context, rs ...*Record) ProduceResults {
+ var (
+ wg sync.WaitGroup
+ results = make(ProduceResults, 0, len(rs))
+ promise = func(r *Record, err error) {
+ results = append(results, ProduceResult{r, err})
+ wg.Done()
+ }
+ )
+
+ wg.Add(len(rs))
+ for _, r := range rs {
+ cl.Produce(ctx, r, promise)
+ }
+ wg.Wait()
+
+ return results
+}
+
+// FirstErrPromise is a helper type to capture only the first failing error
+// when producing a batch of records with this type's Promise function.
+//
+// This is useful for when you only care about any record failing, and can use
+// that as a signal (i.e., to abort a batch). The AbortingFirstErrPromise
+// function can be used to abort all records as soon as the first error is
+// encountered. If you do not need to abort, you can use this type with no
+// constructor.
+//
+// This is similar to using ProduceResult's FirstErr function.
+type FirstErrPromise struct {
+ wg sync.WaitGroup
+ once atomicBool
+ err error
+ cl *Client
+}
+
+// AbortingFirstErrPromise returns a FirstErrPromise that will call the
+// client's AbortBufferedRecords function if an error is encountered.
+//
+// This can be used to quickly exit when any error is encountered, rather than
+// waiting while flushing only to discover things errored.
+func AbortingFirstErrPromise(cl *Client) *FirstErrPromise {
+ return &FirstErrPromise{
+ cl: cl,
+ }
+}
+
+// Promise is a promise for producing that will store the first error
+// encountered.
+func (f *FirstErrPromise) promise(_ *Record, err error) {
+ defer f.wg.Done()
+ if err != nil && !f.once.Swap(true) {
+ f.err = err
+ if f.cl != nil {
+ f.wg.Add(1)
+ go func() {
+ defer f.wg.Done()
+ f.cl.AbortBufferedRecords(context.Background())
+ }()
+ }
+ }
+}
+
+// Promise returns a promise for producing that will store the first error
+// encountered.
+//
+// The returned promise must eventually be called, because a FirstErrPromise
+// does not return from 'Err' until all promises are completed.
+func (f *FirstErrPromise) Promise() func(*Record, error) {
+ f.wg.Add(1)
+ return f.promise
+}
+
+// Err waits for all promises to complete and then returns any stored error.
+func (f *FirstErrPromise) Err() error {
+ f.wg.Wait()
+ return f.err
+}
+
+// TryProduce is similar to Produce, but rather than blocking if the client
+// currently has MaxBufferedRecords or MaxBufferedBytes buffered, this fails
+// immediately with ErrMaxBuffered. See the Produce documentation for more
+// details.
+func (cl *Client) TryProduce(
+ ctx context.Context,
+ r *Record,
+ promise func(*Record, error),
+) {
+ cl.produce(ctx, r, promise, false)
+}
+
+// Produce sends a Kafka record to the topic in the record's Topic field,
+// calling an optional `promise` with the record and a potential error when
+// Kafka replies. For a synchronous produce, see ProduceSync. Records are
+// produced in order per partition if the record is produced successfully.
+// Successfully produced records will have their attributes, offset, and
+// partition set before the promise is called. All promises are called serially
+// (and should be relatively fast). If a record's timestamp is unset, this
+// sets the timestamp to time.Now.
+//
+// If the topic field is empty, the client will use the DefaultProduceTopic; if
+// that is also empty, the record is failed immediately. If the record is too
+// large to fit in a batch on its own in a produce request, the record will be
+// failed with immediately kerr.MessageTooLarge.
+//
+// If the client is configured to automatically flush the client currently has
+// the configured maximum amount of records buffered, Produce will block. The
+// context can be used to cancel waiting while records flush to make space. In
+// contrast, if flushing is configured, the record will be failed immediately
+// with ErrMaxBuffered (this same behavior can be had with TryProduce).
+//
+// Once a record is buffered into a batch, it can be canceled in three ways:
+// canceling the context, the record timing out, or hitting the maximum
+// retries. If any of these conditions are hit and it is currently safe to fail
+// records, all buffered records for the relevant partition are failed. Only
+// the first record's context in a batch is considered when determining whether
+// the batch should be canceled. A record is not safe to fail if the client
+// is idempotently producing and a request has been sent; in this case, the
+// client cannot know if the broker actually processed the request (if so, then
+// removing the records from the client will create errors the next time you
+// produce).
+//
+// If the client is transactional and a transaction has not been begun, the
+// promise is immediately called with an error corresponding to not being in a
+// transaction.
+func (cl *Client) Produce(
+ ctx context.Context,
+ r *Record,
+ promise func(*Record, error),
+) {
+ cl.produce(ctx, r, promise, true)
+}
+
+func (cl *Client) produce(
+ ctx context.Context,
+ r *Record,
+ promise func(*Record, error),
+ block bool,
+) {
+ if ctx == nil {
+ ctx = context.Background()
+ }
+ if r.Context == nil {
+ r.Context = ctx
+ }
+ if promise == nil {
+ promise = noPromise
+ }
+ if r.Topic == "" {
+ r.Topic = cl.cfg.defaultProduceTopic
+ }
+
+ p := &cl.producer
+ if p.hooks != nil && len(p.hooks.buffered) > 0 {
+ for _, h := range p.hooks.buffered {
+ h.OnProduceRecordBuffered(r)
+ }
+ }
+
+ // We can now fail the rec after the buffered hook.
+ if r.Topic == "" {
+ p.promiseRecordBeforeBuf(promisedRec{ctx, promise, r}, errNoTopic)
+ return
+ }
+ if cl.cfg.txnID != nil && !p.producingTxn.Load() {
+ p.promiseRecordBeforeBuf(promisedRec{ctx, promise, r}, errNotInTransaction)
+ return
+ }
+
+ userSize := r.userSize()
+ if cl.cfg.maxBufferedBytes > 0 && userSize > cl.cfg.maxBufferedBytes {
+ p.promiseRecordBeforeBuf(promisedRec{ctx, promise, r}, kerr.MessageTooLarge)
+ return
+ }
+
+ // We have to grab the produce lock to check if this record will exceed
+ // configured limits. We try to keep the logic tight since this is
+ // effectively a global lock around producing.
+ var (
+ nextBufRecs, nextBufBytes int64
+ overMaxRecs, overMaxBytes bool
+
+ calcNums = func() {
+ nextBufRecs = p.bufferedRecords + 1
+ nextBufBytes = p.bufferedBytes + userSize
+ overMaxRecs = nextBufRecs > cl.cfg.maxBufferedRecords
+ overMaxBytes = cl.cfg.maxBufferedBytes > 0 && nextBufBytes > cl.cfg.maxBufferedBytes
+ }
+ )
+ p.mu.Lock()
+ calcNums()
+ if overMaxRecs || overMaxBytes {
+ if !block || cl.cfg.manualFlushing {
+ p.mu.Unlock()
+ p.promiseRecordBeforeBuf(promisedRec{ctx, promise, r}, ErrMaxBuffered)
+ return
+ }
+
+ // Before we potentially unlinger, add that we are blocked to
+ // ensure we do NOT start a linger anymore. We THEN wakeup
+ // anything that is actively lingering. Note that blocked is
+ // also used when finishing promises to see if we need to be
+ // notified.
+ p.blocked.Add(1)
+ p.blockedBytes += userSize
+ p.mu.Unlock()
+
+ cl.cfg.logger.Log(LogLevelDebug, "blocking Produce because we are either over max buffered records or max buffered bytes",
+ "over_max_records", overMaxRecs,
+ "over_max_bytes", overMaxBytes,
+ )
+
+ cl.unlingerDueToMaxRecsBuffered()
+
+ // We keep the lock when we exit. If we are flushing, we want
+ // this blocked record to be produced before we return from
+ // flushing. This blocked record will be accounted for in the
+ // bufferedRecords addition below, after being removed from
+ // blocked in the goroutine.
+ wait := make(chan struct{})
+ var quit bool
+ go func() {
+ defer close(wait)
+ p.mu.Lock()
+ calcNums()
+ for !quit && (overMaxRecs || overMaxBytes) {
+ p.c.Wait()
+ calcNums()
+ }
+ p.blocked.Add(-1)
+ p.blockedBytes -= userSize
+ }()
+
+ drainBuffered := func(err error) {
+ p.mu.Lock()
+ quit = true
+ p.mu.Unlock()
+ p.c.Broadcast() // wake the goroutine above
+ <-wait
+ p.mu.Unlock() // we wait for the goroutine to exit, then unlock again (since the goroutine leaves the mutex locked)
+ p.promiseRecordBeforeBuf(promisedRec{ctx, promise, r}, err)
+ }
+
+ select {
+ case <-wait:
+ cl.cfg.logger.Log(LogLevelDebug, "Produce block awoken, we now have space to produce, continuing to partition and produce")
+ case <-cl.ctx.Done():
+ drainBuffered(ErrClientClosed)
+ cl.cfg.logger.Log(LogLevelDebug, "client ctx canceled while blocked in Produce, returning")
+ return
+ case <-ctx.Done():
+ drainBuffered(ctx.Err())
+ cl.cfg.logger.Log(LogLevelDebug, "produce ctx canceled while blocked in Produce, returning")
+ return
+ }
+ }
+ p.bufferedRecords = nextBufRecs
+ p.bufferedBytes = nextBufBytes
+ p.mu.Unlock()
+
+ cl.partitionRecord(promisedRec{ctx, promise, r})
+}
+
+type batchPromise struct {
+ baseOffset int64
+ pid int64
+ epoch int16
+ attrs RecordAttrs
+ beforeBuf bool
+ partition int32
+ recs []promisedRec
+ err error
+}
+
+func (p *producer) promiseBatch(b batchPromise) {
+ if first := p.batchPromises.push(b); first {
+ go p.finishPromises(b)
+ }
+}
+
+func (p *producer) promiseRecord(pr promisedRec, err error) {
+ p.promiseBatch(batchPromise{recs: []promisedRec{pr}, err: err})
+}
+
+func (p *producer) promiseRecordBeforeBuf(pr promisedRec, err error) {
+ p.promiseBatch(batchPromise{recs: []promisedRec{pr}, beforeBuf: true, err: err})
+}
+
+func (p *producer) finishPromises(b batchPromise) {
+ cl := p.cl
+ var more bool
+start:
+ p.promisesMu.Lock()
+ for i, pr := range b.recs {
+ pr.LeaderEpoch = 0
+ pr.Offset = b.baseOffset + int64(i)
+ pr.Partition = b.partition
+ pr.ProducerID = b.pid
+ pr.ProducerEpoch = b.epoch
+ pr.Attrs = b.attrs
+ cl.finishRecordPromise(pr, b.err, b.beforeBuf)
+ b.recs[i] = promisedRec{}
+ }
+ p.promisesMu.Unlock()
+ if cap(b.recs) > 4 {
+ cl.prsPool.put(b.recs)
+ }
+
+ b, more = p.batchPromises.dropPeek()
+ if more {
+ goto start
+ }
+}
+
+func (cl *Client) finishRecordPromise(pr promisedRec, err error, beforeBuffering bool) {
+ p := &cl.producer
+
+ if p.hooks != nil && len(p.hooks.unbuffered) > 0 {
+ for _, h := range p.hooks.unbuffered {
+ h.OnProduceRecordUnbuffered(pr.Record, err)
+ }
+ }
+
+ // Capture user size before potential modification by the promise.
+ //
+ // We call the promise before finishing the flush notification,
+ // allowing users of Flush to know all buf recs are done by the
+ // time we notify flush below.
+ userSize := pr.userSize()
+ pr.promise(pr.Record, err)
+
+ // If this record was never buffered, it's size was never accounted
+ // for on any p field: return early.
+ if beforeBuffering {
+ return
+ }
+
+ // Keep the lock as tight as possible: the broadcast can come after.
+ p.mu.Lock()
+ p.bufferedBytes -= userSize
+ p.bufferedRecords--
+ broadcast := p.blocked.Load() > 0 || p.bufferedRecords == 0 && p.flushing.Load() > 0
+ p.mu.Unlock()
+
+ if broadcast {
+ p.c.Broadcast()
+ }
+}
+
+// partitionRecord loads the partitions for a topic and produce to them. If
+// the topic does not currently exist, the record is buffered in unknownTopics
+// for a metadata update to deal with.
+func (cl *Client) partitionRecord(pr promisedRec) {
+ parts, partsData := cl.partitionsForTopicProduce(pr)
+ if parts == nil { // saved in unknownTopics
+ return
+ }
+ cl.doPartitionRecord(parts, partsData, pr)
+}
+
+// doPartitionRecord is separate so that metadata updates that load unknown
+// partitions can call this directly.
+func (cl *Client) doPartitionRecord(parts *topicPartitions, partsData *topicPartitionsData, pr promisedRec) {
+ if partsData.loadErr != nil && !kerr.IsRetriable(partsData.loadErr) {
+ cl.producer.promiseRecord(pr, partsData.loadErr)
+ return
+ }
+
+ parts.partsMu.Lock()
+ defer parts.partsMu.Unlock()
+ if parts.partitioner == nil {
+ parts.partitioner = cl.cfg.partitioner.ForTopic(pr.Topic)
+ }
+
+ mapping := partsData.writablePartitions
+ if parts.partitioner.RequiresConsistency(pr.Record) {
+ mapping = partsData.partitions
+ }
+ if len(mapping) == 0 {
+ cl.producer.promiseRecord(pr, errors.New("unable to partition record due to no usable partitions"))
+ return
+ }
+
+ var pick int
+ tlp, _ := parts.partitioner.(TopicBackupPartitioner)
+ if tlp != nil {
+ if parts.lb == nil {
+ parts.lb = new(leastBackupInput)
+ }
+ parts.lb.mapping = mapping
+ pick = tlp.PartitionByBackup(pr.Record, len(mapping), parts.lb)
+ } else {
+ pick = parts.partitioner.Partition(pr.Record, len(mapping))
+ }
+ if pick < 0 || pick >= len(mapping) {
+ cl.producer.promiseRecord(pr, fmt.Errorf("invalid record partitioning choice of %d from %d available", pick, len(mapping)))
+ return
+ }
+
+ partition := mapping[pick]
+
+ onNewBatch, _ := parts.partitioner.(TopicPartitionerOnNewBatch)
+ abortOnNewBatch := onNewBatch != nil
+ processed := partition.records.bufferRecord(pr, abortOnNewBatch) // KIP-480
+ if !processed {
+ onNewBatch.OnNewBatch()
+
+ if tlp != nil {
+ parts.lb.mapping = mapping
+ pick = tlp.PartitionByBackup(pr.Record, len(mapping), parts.lb)
+ } else {
+ pick = parts.partitioner.Partition(pr.Record, len(mapping))
+ }
+
+ if pick < 0 || pick >= len(mapping) {
+ cl.producer.promiseRecord(pr, fmt.Errorf("invalid record partitioning choice of %d from %d available", pick, len(mapping)))
+ return
+ }
+ partition = mapping[pick]
+ partition.records.bufferRecord(pr, false) // KIP-480
+ }
+}
+
+// ProducerID returns, loading if necessary, the current producer ID and epoch.
+// This returns an error if the producer ID could not be loaded, if the
+// producer ID has fatally errored, or if the context is canceled.
+func (cl *Client) ProducerID(ctx context.Context) (int64, int16, error) {
+ var (
+ id int64
+ epoch int16
+ err error
+
+ done = make(chan struct{})
+ )
+
+ go func() {
+ defer close(done)
+ id, epoch, err = cl.producerID(ctx2fn(ctx))
+ }()
+
+ select {
+ case <-ctx.Done():
+ return 0, 0, ctx.Err()
+ case <-done:
+ return id, epoch, err
+ }
+}
+
+type producerID struct {
+ id int64
+ epoch int16
+ err error
+}
+
+var errReloadProducerID = errors.New("producer id needs reloading")
+
+// initProducerID initializes the client's producer ID for idempotent
+// producing only (no transactions, which are more special). After the first
+// load, this clears all buffered unknown topics.
+func (cl *Client) producerID(ctxFn func() context.Context) (int64, int16, error) {
+ p := &cl.producer
+
+ id := p.id.Load().(*producerID)
+ if errors.Is(id.err, errReloadProducerID) {
+ p.idMu.Lock()
+ defer p.idMu.Unlock()
+
+ if id = p.id.Load().(*producerID); errors.Is(id.err, errReloadProducerID) {
+ if cl.cfg.disableIdempotency {
+ cl.cfg.logger.Log(LogLevelInfo, "skipping producer id initialization because the client was configured to disable idempotent writes")
+ id = &producerID{
+ id: -1,
+ epoch: -1,
+ err: nil,
+ }
+ p.id.Store(id)
+ } else if cl.cfg.txnID == nil && id.id >= 0 && id.epoch < math.MaxInt16-1 {
+ // For the idempotent producer, as specified in KIP-360,
+ // if we had an ID, we can bump the epoch locally.
+ // If we are at the max epoch, we will ask for a new ID.
+ cl.resetAllProducerSequences()
+ id = &producerID{
+ id: id.id,
+ epoch: id.epoch + 1,
+ err: nil,
+ }
+ p.id.Store(id)
+ } else {
+ newID, keep := cl.doInitProducerID(ctxFn, id.id, id.epoch)
+ if keep {
+ id = newID
+ // Whenever we have a new producer ID, we need
+ // our sequence numbers to be 0. On the first
+ // record produced, this will be true, but if
+ // we were signaled to reset the producer ID,
+ // then we definitely still need to reset here.
+ cl.resetAllProducerSequences()
+ p.id.Store(id)
+ } else {
+ // If we are not keeping the producer ID,
+ // we will return our old ID but with a
+ // static error that we can check or bubble
+ // up where needed.
+ id = &producerID{
+ id: id.id,
+ epoch: id.epoch,
+ err: &errProducerIDLoadFail{newID.err},
+ }
+ }
+ }
+ }
+ }
+
+ return id.id, id.epoch, id.err
+}
+
+// As seen in KAFKA-12152, if we bump an epoch, we have to reset sequence nums
+// for every partition. Otherwise, we will use a new id/epoch for a partition
+// and trigger OOOSN errors.
+//
+// Pre 2.5, this function is only be called if it is acceptable to continue
+// on data loss (idempotent producer with no StopOnDataLoss option).
+//
+// 2.5+, it is safe to call this if the producer ID can be reset (KIP-360),
+// in EndTransaction.
+func (cl *Client) resetAllProducerSequences() {
+ for _, tp := range cl.producer.topics.load() {
+ for _, p := range tp.load().partitions {
+ p.records.mu.Lock()
+ p.records.needSeqReset = true
+ p.records.mu.Unlock()
+ }
+ }
+}
+
+func (cl *Client) failProducerID(id int64, epoch int16, err error) {
+ p := &cl.producer
+
+ // We do not lock the idMu when failing a producer ID, for two reasons.
+ //
+ // 1) With how we store below, we do not need to. We only fail if the
+ // ID we are failing has not changed and if the ID we are failing has
+ // not failed already. Failing outside the lock is the same as failing
+ // within the lock.
+ //
+ // 2) Locking would cause a deadlock, because producerID locks
+ // idMu=>recBuf.Mu, whereas we failing while locked within a recBuf in
+ // sink.go.
+ new := &producerID{
+ id: id,
+ epoch: epoch,
+ err: err,
+ }
+ for {
+ current := p.id.Load().(*producerID)
+ if current.id != id || current.epoch != epoch {
+ cl.cfg.logger.Log(LogLevelInfo, "ignoring a fail producer id request due to current id being different",
+ "current_id", current.id,
+ "current_epoch", current.epoch,
+ "current_err", current.err,
+ "fail_id", id,
+ "fail_epoch", epoch,
+ "fail_err", err,
+ )
+ return
+ }
+ if current.err != nil {
+ cl.cfg.logger.Log(LogLevelInfo, "ignoring a fail producer id because our producer id has already been failed",
+ "current_id", current.id,
+ "current_epoch", current.epoch,
+ "current_err", current.err,
+ "fail_err", err,
+ )
+ return
+ }
+ if p.id.CompareAndSwap(current, new) {
+ return
+ }
+ }
+}
+
+// doInitProducerID inits the idempotent ID and potentially the transactional
+// producer epoch, returning whether to keep the result.
+func (cl *Client) doInitProducerID(ctxFn func() context.Context, lastID int64, lastEpoch int16) (*producerID, bool) {
+ cl.cfg.logger.Log(LogLevelInfo, "initializing producer id")
+ req := kmsg.NewPtrInitProducerIDRequest()
+ req.TransactionalID = cl.cfg.txnID
+ req.ProducerID = lastID
+ req.ProducerEpoch = lastEpoch
+ if cl.cfg.txnID != nil {
+ req.TransactionTimeoutMillis = int32(cl.cfg.txnTimeout.Milliseconds())
+ }
+
+ ctx := ctxFn()
+ resp, err := req.RequestWith(ctx, cl)
+ if err != nil {
+ if errors.Is(err, errUnknownRequestKey) || errors.Is(err, errBrokerTooOld) {
+ cl.cfg.logger.Log(LogLevelInfo, "unable to initialize a producer id because the broker is too old or the client is pinned to an old version, continuing without a producer id")
+ return &producerID{-1, -1, nil}, true
+ }
+ if errors.Is(err, errChosenBrokerDead) {
+ select {
+ case <-cl.ctx.Done():
+ cl.cfg.logger.Log(LogLevelInfo, "producer id initialization failure due to dying client", "err", err)
+ return &producerID{lastID, lastEpoch, ErrClientClosed}, true
+ default:
+ }
+ }
+ cl.cfg.logger.Log(LogLevelInfo, "producer id initialization failure, discarding initialization attempt", "err", err)
+ return &producerID{lastID, lastEpoch, err}, false
+ }
+
+ if err = kerr.ErrorForCode(resp.ErrorCode); err != nil {
+ // We could receive concurrent transactions; this is ignorable
+ // and we just want to re-init.
+ if kerr.IsRetriable(err) || errors.Is(err, kerr.ConcurrentTransactions) {
+ cl.cfg.logger.Log(LogLevelInfo, "producer id initialization resulted in retryable error, discarding initialization attempt", "err", err)
+ return &producerID{lastID, lastEpoch, err}, false
+ }
+ cl.cfg.logger.Log(LogLevelInfo, "producer id initialization errored", "err", err)
+ return &producerID{lastID, lastEpoch, err}, true
+ }
+
+ cl.cfg.logger.Log(LogLevelInfo, "producer id initialization success", "id", resp.ProducerID, "epoch", resp.ProducerEpoch)
+
+ // We track if this was v3. We do not need to gate this behind a mutex,
+ // because the only other use is EndTransaction's read, which is
+ // documented to only be called sequentially after producing.
+ if cl.producer.idVersion == -1 {
+ cl.producer.idVersion = req.Version
+ }
+
+ return &producerID{resp.ProducerID, resp.ProducerEpoch, nil}, true
+}
+
+// partitionsForTopicProduce returns the topic partitions for a record.
+// If the topic is not loaded yet, this buffers the record and returns
+// nil, nil.
+func (cl *Client) partitionsForTopicProduce(pr promisedRec) (*topicPartitions, *topicPartitionsData) {
+ p := &cl.producer
+ topic := pr.Topic
+
+ topics := p.topics.load()
+ parts, exists := topics[topic]
+ if exists {
+ if v := parts.load(); len(v.partitions) > 0 {
+ return parts, v
+ }
+ }
+
+ if !exists { // topic did not exist: check again under mu and potentially create it
+ p.topicsMu.Lock()
+ defer p.topicsMu.Unlock()
+
+ if parts, exists = p.topics.load()[topic]; !exists { // update parts for below
+ // Before we store the new topic, we lock unknown
+ // topics to prevent a concurrent metadata update
+ // seeing our new topic before we are waiting from the
+ // addUnknownTopicRecord fn. Otherwise, we would wait
+ // and never be re-notified.
+ p.unknownTopicsMu.Lock()
+ defer p.unknownTopicsMu.Unlock()
+
+ p.topics.storeTopics([]string{topic})
+ cl.addUnknownTopicRecord(pr)
+ cl.triggerUpdateMetadataNow("forced load because we are producing to a topic for the first time")
+ return nil, nil
+ }
+ }
+
+ // Here, the topic existed, but maybe has not loaded partitions yet. We
+ // have to lock unknown topics first to ensure ordering just in case a
+ // load has not happened.
+ p.unknownTopicsMu.Lock()
+ defer p.unknownTopicsMu.Unlock()
+
+ if v := parts.load(); len(v.partitions) > 0 {
+ return parts, v
+ }
+ cl.addUnknownTopicRecord(pr)
+ cl.triggerUpdateMetadata(false, "reload trigger due to produce topic still not known")
+
+ return nil, nil // our record is buffered waiting for metadata update; nothing to return
+}
+
+// addUnknownTopicRecord adds a record to a topic whose partitions are
+// currently unknown. This is always called with the unknownTopicsMu held.
+func (cl *Client) addUnknownTopicRecord(pr promisedRec) {
+ unknown := cl.producer.unknownTopics[pr.Topic]
+ if unknown == nil {
+ unknown = &unknownTopicProduces{
+ buffered: make([]promisedRec, 0, 100),
+ wait: make(chan error, 5),
+ fatal: make(chan error, 1),
+ }
+ cl.producer.unknownTopics[pr.Topic] = unknown
+ }
+ unknown.buffered = append(unknown.buffered, pr)
+ if len(unknown.buffered) == 1 {
+ go cl.waitUnknownTopic(pr.ctx, pr.Record.Context, pr.Topic, unknown)
+ }
+}
+
+// waitUnknownTopic waits for a notification
+func (cl *Client) waitUnknownTopic(
+ pctx context.Context, // context passed to Produce
+ rctx context.Context, // context on the record itself
+ topic string,
+ unknown *unknownTopicProduces,
+) {
+ cl.cfg.logger.Log(LogLevelInfo, "producing to a new topic for the first time, fetching metadata to learn its partitions", "topic", topic)
+
+ var (
+ tries int
+ unknownTries int64
+ err error
+ after <-chan time.Time
+ )
+
+ if timeout := cl.cfg.recordTimeout; timeout > 0 {
+ timer := time.NewTimer(cl.cfg.recordTimeout)
+ defer timer.Stop()
+ after = timer.C
+ }
+
+ // Ordering: aborting is set first, then unknown topics are manually
+ // canceled in a lock. New unknown topics after that lock will see
+ // aborting here and immediately cancel themselves.
+ if cl.producer.isAborting() {
+ err = ErrAborting
+ }
+
+ for err == nil {
+ select {
+ case <-pctx.Done():
+ err = pctx.Err()
+ case <-rctx.Done():
+ err = rctx.Err()
+ case <-cl.ctx.Done():
+ err = ErrClientClosed
+ case <-after:
+ err = ErrRecordTimeout
+ case err = <-unknown.fatal:
+ case retryableErr, ok := <-unknown.wait:
+ if !ok {
+ cl.cfg.logger.Log(LogLevelInfo, "done waiting for metadata for new topic", "topic", topic)
+ return // metadata was successful!
+ }
+ cl.cfg.logger.Log(LogLevelInfo, "new topic metadata wait failed, retrying wait", "topic", topic, "err", retryableErr)
+ tries++
+ if int64(tries) >= cl.cfg.recordRetries {
+ err = fmt.Errorf("no partitions available after attempting to refresh metadata %d times, last err: %w", tries, retryableErr)
+ }
+ if cl.cfg.maxUnknownFailures >= 0 && errors.Is(retryableErr, kerr.UnknownTopicOrPartition) {
+ unknownTries++
+ if unknownTries > cl.cfg.maxUnknownFailures {
+ err = retryableErr
+ }
+ }
+ }
+ }
+
+ // If we errored above, we come down here to potentially clear the
+ // topic wait and fail all buffered records. However, under some
+ // extreme conditions, a quickly following metadata update could delete
+ // our unknown topic, and then a produce could recreate a new unknown
+ // topic. We only delete and finish promises if the pointer in the
+ // unknown topic map is still the same.
+ p := &cl.producer
+
+ p.unknownTopicsMu.Lock()
+ defer p.unknownTopicsMu.Unlock()
+
+ nowUnknown := p.unknownTopics[topic]
+ if nowUnknown != unknown {
+ return
+ }
+ cl.cfg.logger.Log(LogLevelInfo, "new topic metadata wait failed, done retrying, failing all records", "topic", topic, "err", err)
+
+ delete(p.unknownTopics, topic)
+ p.promiseBatch(batchPromise{
+ recs: unknown.buffered,
+ err: err,
+ })
+}
+
+func (cl *Client) unlingerDueToMaxRecsBuffered() {
+ if cl.cfg.linger <= 0 {
+ return
+ }
+ for _, parts := range cl.producer.topics.load() {
+ for _, part := range parts.load().partitions {
+ part.records.unlingerAndManuallyDrain()
+ }
+ }
+ cl.cfg.logger.Log(LogLevelDebug, "unlingered all partitions due to hitting max buffered")
+}
+
+// Flush hangs waiting for all buffered records to be flushed, stopping all
+// lingers if necessary.
+//
+// If the context finishes (Done), this returns the context's error.
+//
+// This function is safe to call multiple times concurrently, and safe to call
+// concurrent with Flush.
+func (cl *Client) Flush(ctx context.Context) error {
+ p := &cl.producer
+
+ // Signal to finishRecord that we want to be notified once buffered hits 0.
+ // Also forbid any new producing to start a linger.
+ p.flushing.Add(1)
+ defer p.flushing.Add(-1)
+
+ cl.cfg.logger.Log(LogLevelInfo, "flushing")
+ defer cl.cfg.logger.Log(LogLevelDebug, "flushed")
+
+ // At this point, if lingering is configured, nothing will _start_ a
+ // linger because the producer's flushing atomic int32 is nonzero. We
+ // must wake anything that could be lingering up, after which all sinks
+ // will loop draining.
+ if cl.cfg.linger > 0 || cl.cfg.manualFlushing {
+ for _, parts := range p.topics.load() {
+ for _, part := range parts.load().partitions {
+ part.records.unlingerAndManuallyDrain()
+ }
+ }
+ }
+
+ quit := false
+ done := make(chan struct{})
+ go func() {
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ defer close(done)
+
+ for !quit && p.bufferedRecords+int64(p.blocked.Load()) > 0 {
+ p.c.Wait()
+ }
+ }()
+
+ select {
+ case <-done:
+ return nil
+ case <-ctx.Done():
+ p.mu.Lock()
+ quit = true
+ p.mu.Unlock()
+ p.c.Broadcast()
+ return ctx.Err()
+ }
+}
+
+func (p *producer) pause(ctx context.Context) error {
+ p.inflight.Add(1 << 48)
+
+ quit := false
+ done := make(chan struct{})
+ go func() {
+ p.mu.Lock()
+ defer p.mu.Unlock()
+ defer close(done)
+ for !quit && p.inflight.Load()&((1<<48)-1) != 0 {
+ p.c.Wait()
+ }
+ }()
+
+ select {
+ case <-done:
+ return nil
+ case <-ctx.Done():
+ p.mu.Lock()
+ quit = true
+ p.mu.Unlock()
+ p.c.Broadcast()
+ p.resume() // dec our inflight
+ return ctx.Err()
+ }
+}
+
+func (p *producer) resume() {
+ if p.inflight.Add(-1<<48) == 0 {
+ p.cl.allSinksAndSources(func(sns sinkAndSource) {
+ sns.sink.maybeDrain()
+ })
+ }
+}
+
+func (p *producer) maybeAddInflight() bool {
+ if p.inflight.Load()>>48 > 0 {
+ return false
+ }
+ if p.inflight.Add(1)>>48 > 0 {
+ p.decInflight()
+ return false
+ }
+ return true
+}
+
+func (p *producer) decInflight() {
+ if p.inflight.Add(-1)>>48 > 0 {
+ p.mu.Lock()
+ p.mu.Unlock() //nolint:gocritic,staticcheck // We use the lock as a barrier, unlocking immediately is safe.
+ p.c.Broadcast()
+ }
+}
+
+// Bumps the tries for all buffered records in the client.
+//
+// This is called whenever there is a problematic error that would affect the
+// state of all buffered records as a whole:
+//
+// - if we cannot init a producer ID due to RequestWith errors, producing is useless
+// - if we cannot add partitions to a txn due to RequestWith errors, producing is useless
+//
+// Note that these are specifically due to RequestWith errors, not due to
+// receiving a response that has a retryable error code. That is, if our
+// request keeps dying.
+func (cl *Client) bumpRepeatedLoadErr(err error) {
+ p := &cl.producer
+
+ for _, partitions := range p.topics.load() {
+ for _, partition := range partitions.load().partitions {
+ partition.records.bumpRepeatedLoadErr(err)
+ }
+ }
+ p.unknownTopicsMu.Lock()
+ defer p.unknownTopicsMu.Unlock()
+ for _, unknown := range p.unknownTopics {
+ select {
+ case unknown.wait <- err:
+ default:
+ }
+ }
+}
+
+// Clears all buffered records in the client with the given error.
+//
+// - closing client
+// - aborting transaction
+// - fatal AddPartitionsToTxn
+//
+// Because the error fails everything, we also empty our unknown topics and
+// delete any topics that were still unknown from the producer's topics.
+func (cl *Client) failBufferedRecords(err error) {
+ p := &cl.producer
+
+ for _, partitions := range p.topics.load() {
+ for _, partition := range partitions.load().partitions {
+ recBuf := partition.records
+ recBuf.mu.Lock()
+ recBuf.failAllRecords(err)
+ recBuf.mu.Unlock()
+ }
+ }
+
+ p.topicsMu.Lock()
+ defer p.topicsMu.Unlock()
+ p.unknownTopicsMu.Lock()
+ defer p.unknownTopicsMu.Unlock()
+
+ toStore := p.topics.clone()
+ defer p.topics.storeData(toStore)
+
+ var toFail [][]promisedRec
+ for topic, unknown := range p.unknownTopics {
+ delete(toStore, topic)
+ delete(p.unknownTopics, topic)
+ close(unknown.wait)
+ toFail = append(toFail, unknown.buffered)
+ }
+
+ for _, fail := range toFail {
+ p.promiseBatch(batchPromise{
+ recs: fail,
+ err: err,
+ })
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/record_and_fetch.go b/vendor/github.com/twmb/franz-go/pkg/kgo/record_and_fetch.go
new file mode 100644
index 0000000000000..4f1ebe6f524b0
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/record_and_fetch.go
@@ -0,0 +1,628 @@
+package kgo
+
+import (
+ "context"
+ "errors"
+ "reflect"
+ "time"
+ "unsafe"
+)
+
+// RecordHeader contains extra information that can be sent with Records.
+type RecordHeader struct {
+ Key string
+ Value []byte
+}
+
+// RecordAttrs contains additional meta information about a record, such as its
+// compression or timestamp type.
+type RecordAttrs struct {
+ // 6 bits are used right now for record batches, and we use the high
+ // bit to signify no timestamp due to v0 message set.
+ //
+ // bits 1 thru 3:
+ // 000 no compression
+ // 001 gzip
+ // 010 snappy
+ // 011 lz4
+ // 100 zstd
+ // bit 4: timestamp type
+ // bit 5: is transactional
+ // bit 6: is control
+ // bit 8: no timestamp type
+ attrs uint8
+}
+
+// TimestampType specifies how Timestamp was determined.
+//
+// The default, 0, means that the timestamp was determined in a client
+// when the record was produced.
+//
+// An alternative is 1, which is when the Timestamp is set in Kafka.
+//
+// Records pre 0.10.0 did not have timestamps and have value -1.
+func (a RecordAttrs) TimestampType() int8 {
+ if a.attrs&0b1000_0000 != 0 {
+ return -1
+ }
+ return int8(a.attrs&0b0000_1000) >> 3
+}
+
+// CompressionType signifies with which algorithm this record was compressed.
+//
+// 0 is no compression, 1 is gzip, 2 is snappy, 3 is lz4, and 4 is zstd.
+func (a RecordAttrs) CompressionType() uint8 {
+ return a.attrs & 0b0000_0111
+}
+
+// IsTransactional returns whether a record is a part of a transaction.
+func (a RecordAttrs) IsTransactional() bool {
+ return a.attrs&0b0001_0000 != 0
+}
+
+// IsControl returns whether a record is a "control" record (ABORT or COMMIT).
+// These are generally not visible unless explicitly opted into.
+func (a RecordAttrs) IsControl() bool {
+ return a.attrs&0b0010_0000 != 0
+}
+
+// Record is a record to write to Kafka.
+type Record struct {
+ // Key is an optional field that can be used for partition assignment.
+ //
+ // This is generally used with a hash partitioner to cause all records
+ // with the same key to go to the same partition.
+ Key []byte
+ // Value is blob of data to write to Kafka.
+ Value []byte
+
+ // Headers are optional key/value pairs that are passed along with
+ // records.
+ //
+ // These are purely for producers and consumers; Kafka does not look at
+ // this field and only writes it to disk.
+ Headers []RecordHeader
+
+ // NOTE: if logAppendTime, timestamp is MaxTimestamp, not first + delta
+ // zendesk/ruby-kafka#706
+
+ // Timestamp is the timestamp that will be used for this record.
+ //
+ // Record batches are always written with "CreateTime", meaning that
+ // timestamps are generated by clients rather than brokers.
+ //
+ // When producing, if this field is not yet set, it is set to time.Now.
+ Timestamp time.Time
+
+ // Topic is the topic that a record is written to.
+ //
+ // This must be set for producing.
+ Topic string
+
+ // Partition is the partition that a record is written to.
+ //
+ // For producing, this is left unset. This will be set by the client
+ // before the record is unbuffered. If you use the ManualPartitioner,
+ // the value of this field is always the partition chosen when
+ // producing (i.e., you partition manually ahead of time).
+ Partition int32
+
+ // Attrs specifies what attributes were on this record.
+ //
+ // For producing, this is left unset. This will be set by the client
+ // before the record is unbuffered.
+ Attrs RecordAttrs
+
+ // ProducerEpoch is the producer epoch of this message if it was
+ // produced with a producer ID. An epoch and ID of 0 means it was not.
+ //
+ // For producing, this is left unset. This will be set by the client
+ // before the record is unbuffered.
+ ProducerEpoch int16
+
+ // ProducerEpoch is the producer ID of this message if it was produced
+ // with a producer ID. An epoch and ID of 0 means it was not.
+ //
+ // For producing, this is left unset. This will be set by the client
+ // before the record is unbuffered.
+ ProducerID int64
+
+ // LeaderEpoch is the leader epoch of the broker at the time this
+ // record was written, or -1 if on message sets.
+ //
+ // For committing records, it is not recommended to modify the
+ // LeaderEpoch. Clients use the LeaderEpoch for data loss detection.
+ LeaderEpoch int32
+
+ // Offset is the offset that a record is written as.
+ //
+ // For producing, this is left unset. This will be set by the client
+ // before the record is unbuffered. If you are producing with no acks,
+ // this will just be the offset used in the produce request and does
+ // not mirror the offset actually stored within Kafka.
+ Offset int64
+
+ // Context is an optional field that is used for enriching records.
+ //
+ // If this field is nil when producing, it is set to the Produce ctx
+ // arg. This field can be used to propagate record enrichment across
+ // producer hooks. It can also be set in a consumer hook to propagate
+ // enrichment to consumer clients.
+ Context context.Context
+}
+
+func (r *Record) userSize() int64 {
+ s := len(r.Key) + len(r.Value)
+ for _, h := range r.Headers {
+ s += len(h.Key) + len(h.Value)
+ }
+ return int64(s)
+}
+
+// When buffering records, we calculate the length and tsDelta ahead of time
+// (also because number width affects encoding length). We repurpose the Offset
+// field to save space.
+func (r *Record) setLengthAndTimestampDelta(length int32, tsDelta int64) {
+ r.LeaderEpoch = length
+ r.Offset = tsDelta
+}
+
+func (r *Record) lengthAndTimestampDelta() (length int32, tsDelta int64) {
+ return r.LeaderEpoch, r.Offset
+}
+
+// AppendFormat appends a record to b given the layout or returns an error if
+// the layout is invalid. This is a one-off shortcut for using
+// NewRecordFormatter. See that function's documentation for the layout
+// specification.
+func (r *Record) AppendFormat(b []byte, layout string) ([]byte, error) {
+ f, err := NewRecordFormatter(layout)
+ if err != nil {
+ return b, err
+ }
+ return f.AppendRecord(b, r), nil
+}
+
+// StringRecord returns a Record with the Value field set to the input value
+// string. For producing, this function is useful in tandem with the
+// client-level DefaultProduceTopic option.
+//
+// This function uses the 'unsafe' package to avoid copying value into a slice.
+//
+// NOTE: It is NOT SAFE to modify the record's value. This function should only
+// be used if you only ever read record fields. This function can safely be used
+// for producing; the client never modifies a record's key nor value fields.
+func StringRecord(value string) *Record {
+ var slice []byte
+ slicehdr := (*reflect.SliceHeader)(unsafe.Pointer(&slice)) //nolint:gosec // known way to convert string to slice
+ slicehdr.Data = ((*reflect.StringHeader)(unsafe.Pointer(&value))).Data //nolint:gosec // known way to convert string to slice
+ slicehdr.Len = len(value)
+ slicehdr.Cap = len(value)
+
+ return &Record{Value: slice}
+}
+
+// KeyStringRecord returns a Record with the Key and Value fields set to the
+// input key and value strings. For producing, this function is useful in
+// tandem with the client-level DefaultProduceTopic option.
+//
+// This function uses the 'unsafe' package to avoid copying value into a slice.
+//
+// NOTE: It is NOT SAFE to modify the record's value. This function should only
+// be used if you only ever read record fields. This function can safely be used
+// for producing; the client never modifies a record's key nor value fields.
+func KeyStringRecord(key, value string) *Record {
+ r := StringRecord(value)
+
+ keyhdr := (*reflect.SliceHeader)(unsafe.Pointer(&r.Key)) //nolint:gosec // known way to convert string to slice
+ keyhdr.Data = ((*reflect.StringHeader)(unsafe.Pointer(&key))).Data //nolint:gosec // known way to convert string to slice
+ keyhdr.Len = len(key)
+ keyhdr.Cap = len(key)
+
+ return r
+}
+
+// SliceRecord returns a Record with the Value field set to the input value
+// slice. For producing, this function is useful in tandem with the
+// client-level DefaultProduceTopic option.
+func SliceRecord(value []byte) *Record {
+ return &Record{Value: value}
+}
+
+// KeySliceRecord returns a Record with the Key and Value fields set to the
+// input key and value slices. For producing, this function is useful in
+// tandem with the client-level DefaultProduceTopic option.
+func KeySliceRecord(key, value []byte) *Record {
+ return &Record{Key: key, Value: value}
+}
+
+// FetchPartition is a response for a partition in a fetched topic from a
+// broker.
+type FetchPartition struct {
+ // Partition is the partition this is for.
+ Partition int32
+ // Err is an error for this partition in the fetch.
+ //
+ // Note that if this is a fatal error, such as data loss or non
+ // retryable errors, this partition will never be fetched again.
+ Err error
+ // HighWatermark is the current high watermark for this partition, that
+ // is, the current offset that is on all in sync replicas.
+ HighWatermark int64
+ // LastStableOffset is the offset at which all prior offsets have been
+ // "decided". Non transactional records are always decided immediately,
+ // but transactional records are only decided once they are committed
+ // or aborted.
+ //
+ // The LastStableOffset will always be at or under the HighWatermark.
+ LastStableOffset int64
+ // LogStartOffset is the low watermark of this partition, otherwise
+ // known as the earliest offset in the partition.
+ LogStartOffset int64
+ // Records contains feched records for this partition.
+ Records []*Record
+}
+
+// EachRecord calls fn for each record in the partition.
+func (p *FetchPartition) EachRecord(fn func(*Record)) {
+ for _, r := range p.Records {
+ fn(r)
+ }
+}
+
+// FetchTopic is a response for a fetched topic from a broker.
+type FetchTopic struct {
+ // Topic is the topic this is for.
+ Topic string
+ // Partitions contains individual partitions in the topic that were
+ // fetched.
+ Partitions []FetchPartition
+}
+
+// EachPartition calls fn for each partition in Fetches.
+func (t *FetchTopic) EachPartition(fn func(FetchPartition)) {
+ for i := range t.Partitions {
+ fn(t.Partitions[i])
+ }
+}
+
+// EachRecord calls fn for each record in the topic, in any partition order.
+func (t *FetchTopic) EachRecord(fn func(*Record)) {
+ for i := range t.Partitions {
+ for _, r := range t.Partitions[i].Records {
+ fn(r)
+ }
+ }
+}
+
+// Records returns all records in all partitions in this topic.
+//
+// This is a convenience function that does a single slice allocation. If you
+// can process records individually, it is far more efficient to use the Each
+// functions.
+func (t *FetchTopic) Records() []*Record {
+ var n int
+ t.EachPartition(func(p FetchPartition) {
+ n += len(p.Records)
+ })
+ rs := make([]*Record, 0, n)
+ t.EachPartition(func(p FetchPartition) {
+ rs = append(rs, p.Records...)
+ })
+ return rs
+}
+
+// Fetch is an individual response from a broker.
+type Fetch struct {
+ // Topics are all topics being responded to from a fetch to a broker.
+ Topics []FetchTopic
+}
+
+// Fetches is a group of fetches from brokers.
+type Fetches []Fetch
+
+// FetchError is an error in a fetch along with the topic and partition that
+// the error was on.
+type FetchError struct {
+ Topic string
+ Partition int32
+ Err error
+}
+
+// Errors returns all errors in a fetch with the topic and partition that
+// errored.
+//
+// There are a few classes of errors possible:
+//
+// 1. a normal kerr.Error; these are usually the non-retryable kerr.Errors,
+// but theoretically a non-retryable error can be fixed at runtime (auth
+// error? fix auth). It is worth restarting the client for these errors if
+// you do not intend to fix this problem at runtime.
+//
+// 2. an injected *ErrDataLoss; these are informational, the client
+// automatically resets consuming to where it should and resumes. This
+// error is worth logging and investigating, but not worth restarting the
+// client for.
+//
+// 3. an untyped batch parse failure; these are usually unrecoverable by
+// restarts, and it may be best to just let the client continue. However,
+// restarting is an option, but you may need to manually repair your
+// partition.
+//
+// 4. an injected ErrClientClosed; this is a fatal informational error that
+// is returned from every Poll call if the client has been closed.
+// A corresponding helper function IsClientClosed can be used to detect
+// this error.
+//
+// 5. an injected context error; this can be present if the context you were
+// using for polling timed out or was canceled.
+//
+// 6. an injected ErrGroupSession; this is an informational error that is
+// injected once a group session is lost in a way that is not the standard
+// rebalance. This error can signify that your consumer member is not able
+// to connect to the group (ACL problems, unreachable broker), or you
+// blocked rebalancing for too long, or your callbacks took too long.
+//
+// This list may grow over time.
+func (fs Fetches) Errors() []FetchError {
+ var errs []FetchError
+ fs.EachError(func(t string, p int32, err error) {
+ errs = append(errs, FetchError{t, p, err})
+ })
+ return errs
+}
+
+// When we fetch, it is possible for Kafka to reply with topics / partitions
+// that have no records and no errors. This will definitely happen outside of
+// fetch sessions, but may also happen at other times (for some reason).
+// When that happens we want to ignore the fetch.
+func (f Fetch) hasErrorsOrRecords() bool {
+ for i := range f.Topics {
+ t := &f.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ if p.Err != nil || len(p.Records) > 0 {
+ return true
+ }
+ }
+ }
+ return false
+}
+
+// IsClientClosed returns whether the fetches include an error indicating that
+// the client is closed.
+//
+// This function is useful to break out of a poll loop; you likely want to call
+// this function before calling Errors. If you may cancel the context to poll,
+// you may want to use Err0 and manually check errors.Is(ErrClientClosed) or
+// errors.Is(context.Canceled).
+func (fs Fetches) IsClientClosed() bool {
+ // An injected ErrClientClosed is a single fetch with one topic and
+ // one partition. We can use this to make IsClientClosed do less work.
+ return len(fs) == 1 && len(fs[0].Topics) == 1 && len(fs[0].Topics[0].Partitions) == 1 && errors.Is(fs[0].Topics[0].Partitions[0].Err, ErrClientClosed)
+}
+
+// Err0 returns the error at the 0th index fetch, topic, and partition. This
+// can be used to quickly check if polling returned early because the client
+// was closed or the context was canceled and is faster than performing a
+// linear scan over all partitions with Err. When the client is closed or the
+// context is canceled, fetches will contain only one partition whose Err field
+// indicates the close / cancel. Note that this returns whatever the first
+// error is, nil or non-nil, and does not check for a specific error value.
+func (fs Fetches) Err0() error {
+ if len(fs) > 0 && len(fs[0].Topics) > 0 && len(fs[0].Topics[0].Partitions) > 0 {
+ return fs[0].Topics[0].Partitions[0].Err
+ }
+ return nil
+}
+
+// Err returns the first error in all fetches, if any. This can be used to
+// quickly check if the client is closed or your poll context was canceled, or
+// to check if there's some other error that requires deeper investigation with
+// EachError. This function performs a linear scan over all fetched partitions.
+// It is recommended to always check all errors. If you would like to more
+// quickly check ahead of time if a poll was canceled because of closing the
+// client or canceling the context, you can use Err0.
+func (fs Fetches) Err() error {
+ for _, f := range fs {
+ for i := range f.Topics {
+ ft := &f.Topics[i]
+ for j := range ft.Partitions {
+ fp := &ft.Partitions[j]
+ if fp.Err != nil {
+ return fp.Err
+ }
+ }
+ }
+ }
+ return nil
+}
+
+// EachError calls fn for every partition that had a fetch error with the
+// topic, partition, and error.
+//
+// This function has the same semantics as the Errors function; refer to the
+// documentation on that function for what types of errors are possible.
+func (fs Fetches) EachError(fn func(string, int32, error)) {
+ for _, f := range fs {
+ for i := range f.Topics {
+ ft := &f.Topics[i]
+ for j := range ft.Partitions {
+ fp := &ft.Partitions[j]
+ if fp.Err != nil {
+ fn(ft.Topic, fp.Partition, fp.Err)
+ }
+ }
+ }
+ }
+}
+
+// RecordIter returns an iterator over all records in a fetch.
+//
+// Note that errors should be inspected as well.
+func (fs Fetches) RecordIter() *FetchesRecordIter {
+ iter := &FetchesRecordIter{fetches: fs}
+ iter.prepareNext()
+ return iter
+}
+
+// FetchesRecordIter iterates over records in a fetch.
+type FetchesRecordIter struct {
+ fetches []Fetch
+ ti int // index to current topic in fetches[0]
+ pi int // index to current partition in current topic
+ ri int // index to current record in current partition
+}
+
+// Done returns whether there are any more records to iterate over.
+func (i *FetchesRecordIter) Done() bool {
+ return len(i.fetches) == 0
+}
+
+// Next returns the next record from a fetch.
+func (i *FetchesRecordIter) Next() *Record {
+ next := i.fetches[0].Topics[i.ti].Partitions[i.pi].Records[i.ri]
+ i.ri++
+ i.prepareNext()
+ return next
+}
+
+func (i *FetchesRecordIter) prepareNext() {
+beforeFetch0:
+ if len(i.fetches) == 0 {
+ return
+ }
+
+ fetch0 := &i.fetches[0]
+beforeTopic:
+ if i.ti >= len(fetch0.Topics) {
+ i.fetches = i.fetches[1:]
+ i.ti = 0
+ goto beforeFetch0
+ }
+
+ topic := &fetch0.Topics[i.ti]
+beforePartition:
+ if i.pi >= len(topic.Partitions) {
+ i.ti++
+ i.pi = 0
+ goto beforeTopic
+ }
+
+ partition := &topic.Partitions[i.pi]
+ if i.ri >= len(partition.Records) {
+ i.pi++
+ i.ri = 0
+ goto beforePartition
+ }
+}
+
+// EachPartition calls fn for each partition in Fetches.
+//
+// Partitions are not visited in any specific order, and a topic may be visited
+// multiple times if it is spread across fetches.
+func (fs Fetches) EachPartition(fn func(FetchTopicPartition)) {
+ for _, fetch := range fs {
+ for _, topic := range fetch.Topics {
+ for i := range topic.Partitions {
+ fn(FetchTopicPartition{
+ Topic: topic.Topic,
+ FetchPartition: topic.Partitions[i],
+ })
+ }
+ }
+ }
+}
+
+// EachTopic calls fn for each topic in Fetches.
+//
+// This is a convenience function that groups all partitions for the same topic
+// from many fetches into one FetchTopic. A map is internally allocated to
+// group partitions per topic before calling fn.
+func (fs Fetches) EachTopic(fn func(FetchTopic)) {
+ switch len(fs) {
+ case 0:
+ return
+ case 1:
+ for _, topic := range fs[0].Topics {
+ fn(topic)
+ }
+ return
+ }
+
+ topics := make(map[string][]FetchPartition)
+ for _, fetch := range fs {
+ for _, topic := range fetch.Topics {
+ topics[topic.Topic] = append(topics[topic.Topic], topic.Partitions...)
+ }
+ }
+
+ for topic, partitions := range topics {
+ fn(FetchTopic{
+ topic,
+ partitions,
+ })
+ }
+}
+
+// EachRecord calls fn for each record in Fetches.
+//
+// This is very similar to using a record iter, and is solely a convenience
+// function depending on which style you prefer.
+func (fs Fetches) EachRecord(fn func(*Record)) {
+ for iter := fs.RecordIter(); !iter.Done(); {
+ fn(iter.Next())
+ }
+}
+
+// Records returns all records in all fetches.
+//
+// This is a convenience function that does a single slice allocation. If you
+// can process records individually, it is far more efficient to use the Each
+// functions or the RecordIter.
+func (fs Fetches) Records() []*Record {
+ rs := make([]*Record, 0, fs.NumRecords())
+ fs.EachPartition(func(p FetchTopicPartition) {
+ rs = append(rs, p.Records...)
+ })
+ return rs
+}
+
+// NumRecords returns the total number of records across all fetched partitions.
+func (fs Fetches) NumRecords() (n int) {
+ fs.EachPartition(func(p FetchTopicPartition) {
+ n += len(p.Records)
+ })
+ return n
+}
+
+// Empty checks whether the fetch result empty. This method is faster than NumRecords() == 0.
+func (fs Fetches) Empty() bool {
+ for i := range fs {
+ for j := range fs[i].Topics {
+ for k := range fs[i].Topics[j].Partitions {
+ if len(fs[i].Topics[j].Partitions[k].Records) > 0 {
+ return false
+ }
+ }
+ }
+ }
+
+ return true
+}
+
+// FetchTopicPartition is similar to FetchTopic, but for an individual
+// partition.
+type FetchTopicPartition struct {
+ // Topic is the topic this is for.
+ Topic string
+ // FetchPartition is an individual partition within this topic.
+ FetchPartition
+}
+
+// EachRecord calls fn for each record in the topic's partition.
+func (r *FetchTopicPartition) EachRecord(fn func(*Record)) {
+ for _, r := range r.Records {
+ fn(r)
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/record_formatter.go b/vendor/github.com/twmb/franz-go/pkg/kgo/record_formatter.go
new file mode 100644
index 0000000000000..2f5d2ce3c33a1
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/record_formatter.go
@@ -0,0 +1,2246 @@
+package kgo
+
+import (
+ "bufio"
+ "bytes"
+ "encoding/base64"
+ "encoding/binary"
+ "encoding/hex"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "regexp"
+ "strconv"
+ "strings"
+ "time"
+ "unicode/utf8"
+
+ "github.com/twmb/franz-go/pkg/kbin"
+)
+
+////////////
+// WRITER //
+////////////
+
+// RecordFormatter formats records.
+type RecordFormatter struct {
+ calls atomicI64
+ fns []func([]byte, *FetchPartition, *Record) []byte
+}
+
+// AppendRecord appends a record to b given the parsed format and returns the
+// updated slice.
+func (f *RecordFormatter) AppendRecord(b []byte, r *Record) []byte {
+ for _, fn := range f.fns {
+ b = fn(b, nil, r)
+ }
+ return b
+}
+
+// AppendPartitionRecord appends a record and partition to b given the parsed
+// format and returns the updated slice.
+func (f *RecordFormatter) AppendPartitionRecord(b []byte, p *FetchPartition, r *Record) []byte {
+ for _, fn := range f.fns {
+ b = fn(b, p, r)
+ }
+ return b
+}
+
+// NewRecordFormatter returns a formatter for the given layout, or an error if
+// the layout is invalid.
+//
+// The formatter is very powerful, as such there is a lot to describe. This
+// documentation attempts to be as succinct as possible.
+//
+// Similar to the fmt package, record formatting is based off of slash escapes
+// and percent "verbs" (copying fmt package lingo). Slashes are used for common
+// escapes,
+//
+// \t \n \r \\ \xNN
+//
+// printing tabs, newlines, carriage returns, slashes, and hex encoded
+// characters.
+//
+// Percent encoding opts in to printing aspects of either a record or a fetch
+// partition:
+//
+// %t topic
+// %T topic length
+// %k key
+// %K key length
+// %v value
+// %V value length
+// %h begin the header specification
+// %H number of headers
+// %p partition
+// %o offset
+// %e leader epoch
+// %d timestamp (date, formatting described below)
+// %a record attributes (formatting required, described below)
+// %x producer id
+// %y producer epoch
+//
+// For AppendPartitionRecord, the formatter also undersands the following three
+// formatting options:
+//
+// %[ partition log start offset
+// %| partition last stable offset
+// %] partition high watermark
+//
+// The formatter internally tracks the number of times AppendRecord or
+// AppendPartitionRecord have been called. The special option %i prints the
+// iteration / call count:
+//
+// %i format iteration number (starts at 1)
+//
+// Lastly, there are three escapes to print raw characters that are usually
+// used for formatting options:
+//
+// %% percent sign
+// %{ left brace (required if a brace is after another format option)
+// %} right brace
+//
+// # Header specification
+//
+// Specifying headers is essentially a primitive nested format option,
+// accepting the key and value escapes above:
+//
+// %K header key length
+// %k header key
+// %V header value length
+// %v header value
+//
+// For example, "%H %h{%k %v }" will print the number of headers, and then each
+// header key and value with a space after each.
+//
+// # Verb modifiers
+//
+// Most of the previous verb specifications can be modified by adding braces
+// with a given modifier, e.g., "%V{ascii}". All modifiers are described below.
+//
+// # Numbers
+//
+// All number verbs accept braces that control how the number is printed:
+//
+// %v{ascii} the default, print the number as ascii
+// %v{number} alias for ascii
+//
+// %v{hex64} print 16 hex characters for the number
+// %v{hex32} print 8 hex characters for the number
+// %v{hex16} print 4 hex characters for the number
+// %v{hex8} print 2 hex characters for the number
+// %v{hex4} print 1 hex characters for the number
+// %v{hex} print as many hex characters as necessary for the number
+//
+// %v{big64} print the number in big endian uint64 format
+// %v{big32} print the number in big endian uint32 format
+// %v{big16} print the number in big endian uint16 format
+// %v{big8} alias for byte
+//
+// %v{little64} print the number in little endian uint64 format
+// %v{little32} print the number in little endian uint32 format
+// %v{little16} print the number in little endian uint16 format
+// %v{little8} alias for byte
+//
+// %v{byte} print the number as a single byte
+// %v{bool} print "true" if the number is non-zero, otherwise "false"
+//
+// All numbers are truncated as necessary per each given format.
+//
+// # Timestamps
+//
+// Timestamps can be specified in three formats: plain number formatting,
+// native Go timestamp formatting, or strftime formatting. Number formatting is
+// follows the rules above using the millisecond timestamp value. Go and
+// strftime have further internal format options:
+//
+// %d{go##2006-01-02T15:04:05Z07:00##}
+// %d{strftime[%F]}
+//
+// An arbitrary amount of pounds, braces, and brackets are understood before
+// beginning the actual timestamp formatting. For Go formatting, the format is
+// simply passed to the time package's AppendFormat function. For strftime, all
+// "man strftime" options are supported. Time is always in UTC.
+//
+// # Attributes
+//
+// Records attributes require formatting, where each formatting option selects
+// which attribute to print and how to print it.
+//
+// %a{compression}
+// %a{compression;number}
+// %a{compression;big64}
+// %a{compression;hex8}
+//
+// By default, prints the compression as text ("none", "gzip", ...).
+// Compression can be printed as a number with ";number", where number is any
+// number formatting option described above.
+//
+// %a{timestamp-type}
+// %a{timestamp-type;big64}
+//
+// Prints -1 for pre-0.10 records, 0 for client generated timestamps, and 1 for
+// broker generated. Number formatting can be controlled with ";number".
+//
+// %a{transactional-bit}
+// %a{transactional-bit;bool}
+//
+// Prints 1 if the record is a part of a transaction or 0 if it is not. Number
+// formatting can be controlled with ";number".
+//
+// %a{control-bit}
+// %a{control-bit;bool}
+//
+// Prints 1 if the record is a commit marker or 0 if it is not. Number
+// formatting can be controlled with ";number".
+//
+// # Text
+//
+// Topics, keys, and values have "base64", "base64raw", "hex", and "unpack"
+// formatting options:
+//
+// %t{hex}
+// %k{unpack{iIqQc.$}}
+// %v{base64}
+// %v{base64raw}
+//
+// Unpack formatting is inside of enclosing pounds, braces, or brackets, the
+// same way that timestamp formatting is understood. The syntax roughly follows
+// Python's struct packing/unpacking rules:
+//
+// x pad character (does not parse input)
+// < parse what follows as little endian
+// > parse what follows as big endian
+//
+// b signed byte
+// B unsigned byte
+// h int16 ("half word")
+// H uint16 ("half word")
+// i int32
+// I uint32
+// q int64 ("quad word")
+// Q uint64 ("quad word")
+//
+// c any character
+// . alias for c
+// s consume the rest of the input as a string
+// $ match the end of the line (append error string if anything remains)
+//
+// Unlike python, a '<' or '>' can appear anywhere in the format string and
+// affects everything that follows. It is possible to switch endianness
+// multiple times. If the parser needs more data than available, or if the more
+// input remains after '$', an error message will be appended.
+func NewRecordFormatter(layout string) (*RecordFormatter, error) {
+ var f RecordFormatter
+
+ var literal []byte // non-formatted raw text to output
+ var i int
+ for len(layout) > 0 {
+ i++
+ c, size := utf8.DecodeRuneInString(layout)
+ rawc := layout[:size]
+ layout = layout[size:]
+ switch c {
+ default:
+ literal = append(literal, rawc...)
+ continue
+
+ case '\\':
+ c, n, err := parseLayoutSlash(layout)
+ if err != nil {
+ return nil, err
+ }
+ layout = layout[n:]
+ literal = append(literal, c)
+ continue
+
+ case '%':
+ }
+
+ if len(layout) == 0 {
+ return nil, errors.New("invalid escape sequence at end of layout string")
+ }
+
+ cNext, size := utf8.DecodeRuneInString(layout)
+ if cNext == '%' || cNext == '{' || cNext == '}' {
+ literal = append(literal, byte(cNext))
+ layout = layout[size:]
+ continue
+ }
+
+ var (
+ isOpenBrace = len(layout) > 2 && layout[1] == '{'
+ handledBrace bool
+ escaped = layout[0]
+ )
+ layout = layout[1:]
+
+ // We are entering a format string. If we have any built
+ // literal before, this is now raw text that we will append.
+ if len(literal) > 0 {
+ l := literal
+ literal = nil
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, _ *Record) []byte { return append(b, l...) })
+ }
+
+ if isOpenBrace { // opening a brace: layout continues after
+ layout = layout[1:]
+ }
+
+ switch escaped {
+ default:
+ return nil, fmt.Errorf("unknown escape sequence %%%s", string(escaped))
+
+ case 'T', 'K', 'V', 'H', 'p', 'o', 'e', 'i', 'x', 'y', '[', '|', ']':
+ // Numbers default to ascii, but we support a bunch of
+ // formatting options. We parse the format here, and
+ // then below is switching on which field to print.
+ var numfn func([]byte, int64) []byte
+ if handledBrace = isOpenBrace; handledBrace {
+ numfn2, n, err := parseNumWriteLayout(layout)
+ if err != nil {
+ return nil, err
+ }
+ layout = layout[n:]
+ numfn = numfn2
+ } else {
+ numfn = writeNumASCII
+ }
+ switch escaped {
+ case 'T':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, int64(len(r.Topic))) })
+ })
+ case 'K':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, int64(len(r.Key))) })
+ })
+ case 'V':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, int64(len(r.Value))) })
+ })
+ case 'H':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, int64(len(r.Headers))) })
+ })
+ case 'p':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, int64(r.Partition)) })
+ })
+ case 'o':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, r.Offset) })
+ })
+ case 'e':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, int64(r.LeaderEpoch)) })
+ })
+ case 'i':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, _ *Record) []byte {
+ return numfn(b, f.calls.Add(1))
+ })
+ case 'x':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, r.ProducerID) })
+ })
+ case 'y':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, int64(r.ProducerEpoch)) })
+ })
+ case '[':
+ f.fns = append(f.fns, func(b []byte, p *FetchPartition, _ *Record) []byte {
+ return writeP(b, p, func(b []byte, p *FetchPartition) []byte { return numfn(b, p.LogStartOffset) })
+ })
+ case '|':
+ f.fns = append(f.fns, func(b []byte, p *FetchPartition, _ *Record) []byte {
+ return writeP(b, p, func(b []byte, p *FetchPartition) []byte { return numfn(b, p.LastStableOffset) })
+ })
+ case ']':
+ f.fns = append(f.fns, func(b []byte, p *FetchPartition, _ *Record) []byte {
+ return writeP(b, p, func(b []byte, p *FetchPartition) []byte { return numfn(b, p.HighWatermark) })
+ })
+ }
+
+ case 't', 'k', 'v':
+ var appendFn func([]byte, []byte) []byte
+ if handledBrace = isOpenBrace; handledBrace {
+ switch {
+ case strings.HasPrefix(layout, "}"):
+ layout = layout[len("}"):]
+ appendFn = appendPlain
+ case strings.HasPrefix(layout, "base64}"):
+ appendFn = appendBase64
+ layout = layout[len("base64}"):]
+ case strings.HasPrefix(layout, "base64raw}"):
+ appendFn = appendBase64raw
+ layout = layout[len("base64raw}"):]
+ case strings.HasPrefix(layout, "hex}"):
+ appendFn = appendHex
+ layout = layout[len("hex}"):]
+ case strings.HasPrefix(layout, "unpack"):
+ unpack, rem, err := nomOpenClose(layout[len("unpack"):])
+ if err != nil {
+ return nil, fmt.Errorf("unpack parse err: %v", err)
+ }
+ if len(rem) == 0 || rem[0] != '}' {
+ return nil, fmt.Errorf("unpack missing closing } in %q", layout)
+ }
+ layout = rem[1:]
+ appendFn, err = parseUnpack(unpack)
+ if err != nil {
+ return nil, fmt.Errorf("unpack formatting parse err: %v", err)
+ }
+
+ default:
+ return nil, fmt.Errorf("unknown %%%s{ escape", string(escaped))
+ }
+ } else {
+ appendFn = appendPlain
+ }
+ switch escaped {
+ case 't':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return appendFn(b, []byte(r.Topic)) })
+ })
+ case 'k':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return appendFn(b, r.Key) })
+ })
+ case 'v':
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return appendFn(b, r.Value) })
+ })
+ }
+
+ case 'a':
+ if !isOpenBrace {
+ return nil, errors.New("missing open brace sequence on %a signifying how attributes should be written")
+ }
+ handledBrace = true
+
+ num := func(skipText string, rfn func(*Record) int64) error {
+ layout = layout[len(skipText):]
+ numfn, n, err := parseNumWriteLayout(layout)
+ if err != nil {
+ return err
+ }
+ layout = layout[n:]
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, rfn(r)) })
+ })
+ return nil
+ }
+ bi64 := func(b bool) int64 {
+ if b {
+ return 1
+ }
+ return 0
+ }
+
+ switch {
+ case strings.HasPrefix(layout, "compression}"):
+ layout = layout[len("compression}"):]
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte {
+ switch codecType(r.Attrs.CompressionType()) {
+ case codecNone:
+ return append(b, "none"...)
+ case codecGzip:
+ return append(b, "gzip"...)
+ case codecSnappy:
+ return append(b, "snappy"...)
+ case codecLZ4:
+ return append(b, "lz4"...)
+ case codecZstd:
+ return append(b, "zstd"...)
+ default:
+ return append(b, "unknown"...)
+ }
+ })
+ })
+ case strings.HasPrefix(layout, "compression;"):
+ if err := num("compression;", func(r *Record) int64 { return int64(r.Attrs.CompressionType()) }); err != nil {
+ return nil, err
+ }
+
+ case strings.HasPrefix(layout, "timestamp-type}"):
+ layout = layout[len("timestamp-type}"):]
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte {
+ return strconv.AppendInt(b, int64(r.Attrs.TimestampType()), 10)
+ })
+ })
+ case strings.HasPrefix(layout, "timestamp-type;"):
+ if err := num("timestamp-type;", func(r *Record) int64 { return int64(r.Attrs.TimestampType()) }); err != nil {
+ return nil, err
+ }
+
+ case strings.HasPrefix(layout, "transactional-bit}"):
+ layout = layout[len("transactional-bit}"):]
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte {
+ if r.Attrs.IsTransactional() {
+ return append(b, '1')
+ }
+ return append(b, '0')
+ })
+ })
+ case strings.HasPrefix(layout, "transactional-bit;"):
+ if err := num("transactional-bit;", func(r *Record) int64 { return bi64(r.Attrs.IsTransactional()) }); err != nil {
+ return nil, err
+ }
+
+ case strings.HasPrefix(layout, "control-bit}"):
+ layout = layout[len("control-bit}"):]
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte {
+ if r.Attrs.IsControl() {
+ return append(b, '1')
+ }
+ return append(b, '0')
+ })
+ })
+ case strings.HasPrefix(layout, "control-bit;"):
+ if err := num("control-bit;", func(r *Record) int64 { return bi64(r.Attrs.IsControl()) }); err != nil {
+ return nil, err
+ }
+
+ default:
+ return nil, errors.New("unknown %a formatting")
+ }
+
+ case 'h':
+ if !isOpenBrace {
+ return nil, errors.New("missing open brace sequence on %h signifying how headers are written")
+ }
+ handledBrace = true
+ // Headers can have their own internal braces, so we
+ // must look for a matching end brace.
+ braces := 1
+ at := 0
+ for braces != 0 && len(layout[at:]) > 0 {
+ switch layout[at] {
+ case '{':
+ if at > 0 && layout[at-1] != '%' {
+ braces++
+ }
+ case '}':
+ if at > 0 && layout[at-1] != '%' {
+ braces--
+ }
+ }
+ at++
+ }
+ if braces > 0 {
+ return nil, fmt.Errorf("invalid header specification: missing closing brace in %q", layout)
+ }
+
+ spec := layout[:at-1]
+ layout = layout[at:]
+ inf, err := NewRecordFormatter(spec)
+ if err != nil {
+ return nil, fmt.Errorf("invalid header specification %q: %v", spec, err)
+ }
+
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ reuse := new(Record)
+ for _, header := range r.Headers {
+ reuse.Key = []byte(header.Key)
+ reuse.Value = header.Value
+ b = inf.AppendRecord(b, reuse)
+ }
+ return b
+ })
+
+ case 'd':
+ // For datetime parsing, we support plain millis in any
+ // number format, strftime, or go formatting. We
+ // default to plain ascii millis.
+ handledBrace = isOpenBrace
+ if !handledBrace {
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return strconv.AppendInt(b, r.Timestamp.UnixNano()/1e6, 10) })
+ })
+ continue
+ }
+
+ switch {
+ case strings.HasPrefix(layout, "strftime"):
+ tfmt, rem, err := nomOpenClose(layout[len("strftime"):])
+ if err != nil {
+ return nil, fmt.Errorf("strftime parse err: %v", err)
+ }
+ if len(rem) == 0 || rem[0] != '}' {
+ return nil, fmt.Errorf("%%d{strftime missing closing } in %q", layout)
+ }
+ layout = rem[1:]
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return strftimeAppendFormat(b, tfmt, r.Timestamp.UTC()) })
+ })
+
+ case strings.HasPrefix(layout, "go"):
+ tfmt, rem, err := nomOpenClose(layout[len("go"):])
+ if err != nil {
+ return nil, fmt.Errorf("go parse err: %v", err)
+ }
+ if len(rem) == 0 || rem[0] != '}' {
+ return nil, fmt.Errorf("%%d{go missing closing } in %q", layout)
+ }
+ layout = rem[1:]
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return r.Timestamp.UTC().AppendFormat(b, tfmt) })
+ })
+
+ default:
+ numfn, n, err := parseNumWriteLayout(layout)
+ if err != nil {
+ return nil, fmt.Errorf("unknown %%d{ time specification in %q", layout)
+ }
+ layout = layout[n:]
+
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, r *Record) []byte {
+ return writeR(b, r, func(b []byte, r *Record) []byte { return numfn(b, r.Timestamp.UnixNano()/1e6) })
+ })
+ }
+ }
+
+ // If we opened a brace, we require a closing brace.
+ if isOpenBrace && !handledBrace {
+ return nil, fmt.Errorf("unhandled open brace %q", layout)
+ }
+ }
+
+ // Ensure we print any trailing text.
+ if len(literal) > 0 {
+ f.fns = append(f.fns, func(b []byte, _ *FetchPartition, _ *Record) []byte { return append(b, literal...) })
+ }
+
+ return &f, nil
+}
+
+func appendPlain(dst, src []byte) []byte {
+ return append(dst, src...)
+}
+
+func appendBase64(dst, src []byte) []byte {
+ fin := append(dst, make([]byte, base64.StdEncoding.EncodedLen(len(src)))...)
+ base64.StdEncoding.Encode(fin[len(dst):], src)
+ return fin
+}
+
+func appendBase64raw(dst, src []byte) []byte {
+ fin := append(dst, make([]byte, base64.RawStdEncoding.EncodedLen(len(src)))...)
+ base64.RawStdEncoding.Encode(fin[len(dst):], src)
+ return fin
+}
+
+func appendHex(dst, src []byte) []byte {
+ fin := append(dst, make([]byte, hex.EncodedLen(len(src)))...)
+ hex.Encode(fin[len(dst):], src)
+ return fin
+}
+
+// nomOpenClose extracts a middle section from a string beginning with repeated
+// delimiters and returns it as with remaining (past end delimiters) string.
+func nomOpenClose(src string) (middle, remaining string, err error) {
+ if len(src) == 0 {
+ return "", "", errors.New("empty layout")
+ }
+ delim := src[0]
+ openers := 1
+ for openers < len(src) && src[openers] == delim {
+ openers++
+ }
+ switch delim {
+ case '{':
+ delim = '}'
+ case '[':
+ delim = ']'
+ case '(':
+ delim = ')'
+ }
+ src = src[openers:]
+ end := strings.Repeat(string(delim), openers)
+ idx := strings.Index(src, end)
+ if idx < 0 {
+ return "", "", fmt.Errorf("missing end delim %q", end)
+ }
+ middle = src[:idx]
+ return middle, src[idx+len(end):], nil
+}
+
+func parseUnpack(layout string) (func([]byte, []byte) []byte, error) {
+ // take dst, src; return dst
+ // %!q(eof)
+ // take 8 bytes, decode it, print decoded
+ var fns []func([]byte, []byte) ([]byte, int)
+ little := true
+ var sawEnd bool
+ for i := range layout {
+ if sawEnd {
+ return nil, errors.New("already saw end-of-input parsing character")
+ }
+
+ var need int
+ var signed bool
+ cs := layout[i : i+1]
+ switch cs[0] {
+ case 'x':
+ continue
+
+ case '<':
+ little = true
+ continue
+ case '>':
+ little = false
+ continue
+
+ case 'b':
+ need = 1
+ signed = true
+ case 'B':
+ need = 1
+ case 'h':
+ need = 2
+ signed = true
+ case 'H':
+ need = 2
+ case 'i':
+ need = 4
+ signed = true
+ case 'I':
+ need = 4
+ case 'q':
+ need = 8
+ signed = true
+ case 'Q':
+ need = 8
+
+ case 'c', '.':
+ fns = append(fns, func(dst, src []byte) ([]byte, int) {
+ if len(src) < 1 {
+ return append(dst, "%!c(no bytes available)"...), 0
+ }
+ return append(dst, src[0]), 1
+ })
+ continue
+
+ case 's':
+ sawEnd = true
+ fns = append(fns, func(dst, src []byte) ([]byte, int) {
+ return append(dst, src...), len(src)
+ })
+ continue
+
+ case '$':
+ fns = append(fns, func(dst, src []byte) ([]byte, int) {
+ if len(src) != 0 {
+ dst = append(dst, "%!$(not end-of-input)"...)
+ }
+ return dst, len(src)
+ })
+ sawEnd = true
+ continue
+
+ default:
+ return nil, fmt.Errorf("invalid unpack parsing character %s", cs)
+ }
+
+ islittle := little
+ fns = append(fns, func(dst, src []byte) ([]byte, int) {
+ if len(src) < need {
+ return append(dst, fmt.Sprintf("%%!%%s(have %d bytes, need %d)", len(src), need)...), len(src)
+ }
+
+ var ul, ub uint64
+ var il, ib int64
+ switch need {
+ case 1:
+ ul = uint64(src[0])
+ ub = ul
+ il = int64(byte(ul))
+ ib = int64(byte(ub))
+ case 2:
+ ul = uint64(binary.LittleEndian.Uint16(src))
+ ub = uint64(binary.BigEndian.Uint16(src))
+ il = int64(int16(ul))
+ ib = int64(int16(ub))
+ case 4:
+ ul = uint64(binary.LittleEndian.Uint32(src))
+ ub = uint64(binary.BigEndian.Uint32(src))
+ il = int64(int32(ul))
+ ib = int64(int32(ub))
+ case 8:
+ ul = binary.LittleEndian.Uint64(src)
+ ub = binary.BigEndian.Uint64(src)
+ il = int64(ul)
+ ib = int64(ub)
+ }
+ u := ub
+ i := ib
+ if islittle {
+ u = ul
+ i = il
+ }
+
+ if signed {
+ return strconv.AppendInt(dst, i, 10), need
+ }
+ return strconv.AppendUint(dst, u, 10), need
+ })
+ }
+
+ return func(dst, src []byte) []byte {
+ for _, fn := range fns {
+ var n int
+ dst, n = fn(dst, src)
+ src = src[n:]
+ }
+ return dst
+ }, nil
+}
+
+func parseNumWriteLayout(layout string) (func([]byte, int64) []byte, int, error) {
+ braceEnd := strings.IndexByte(layout, '}')
+ if braceEnd == -1 {
+ return nil, 0, errors.New("missing brace end } to close number format specification")
+ }
+ end := braceEnd + 1
+ switch layout = layout[:braceEnd]; layout {
+ case "ascii", "number":
+ return writeNumASCII, end, nil
+ case "hex64":
+ return writeNumHex64, end, nil
+ case "hex32":
+ return writeNumHex32, end, nil
+ case "hex16":
+ return writeNumHex16, end, nil
+ case "hex8":
+ return writeNumHex8, end, nil
+ case "hex4":
+ return writeNumHex4, end, nil
+ case "hex":
+ return writeNumHex, end, nil
+ case "big64":
+ return writeNumBig64, end, nil
+ case "big32":
+ return writeNumBig32, end, nil
+ case "big16":
+ return writeNumBig16, end, nil
+ case "byte", "big8", "little8":
+ return writeNumByte, end, nil
+ case "little64":
+ return writeNumLittle64, end, nil
+ case "little32":
+ return writeNumLittle32, end, nil
+ case "little16":
+ return writeNumLittle16, end, nil
+ case "bool":
+ return writeNumBool, end, nil
+ default:
+ return nil, 0, fmt.Errorf("invalid output number layout %q", layout)
+ }
+}
+
+func writeR(b []byte, r *Record, fn func([]byte, *Record) []byte) []byte {
+ if r == nil {
+ return append(b, ""...)
+ }
+ return fn(b, r)
+}
+
+func writeP(b []byte, p *FetchPartition, fn func([]byte, *FetchPartition) []byte) []byte {
+ if p == nil {
+ return append(b, ""...)
+ }
+ return fn(b, p)
+}
+func writeNumASCII(b []byte, n int64) []byte { return strconv.AppendInt(b, n, 10) }
+
+const hexc = "0123456789abcdef"
+
+func writeNumHex64(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b,
+ hexc[(u>>60)&0xf],
+ hexc[(u>>56)&0xf],
+ hexc[(u>>52)&0xf],
+ hexc[(u>>48)&0xf],
+ hexc[(u>>44)&0xf],
+ hexc[(u>>40)&0xf],
+ hexc[(u>>36)&0xf],
+ hexc[(u>>32)&0xf],
+ hexc[(u>>28)&0xf],
+ hexc[(u>>24)&0xf],
+ hexc[(u>>20)&0xf],
+ hexc[(u>>16)&0xf],
+ hexc[(u>>12)&0xf],
+ hexc[(u>>8)&0xf],
+ hexc[(u>>4)&0xf],
+ hexc[u&0xf],
+ )
+}
+
+func writeNumHex32(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b,
+ hexc[(u>>28)&0xf],
+ hexc[(u>>24)&0xf],
+ hexc[(u>>20)&0xf],
+ hexc[(u>>16)&0xf],
+ hexc[(u>>12)&0xf],
+ hexc[(u>>8)&0xf],
+ hexc[(u>>4)&0xf],
+ hexc[u&0xf],
+ )
+}
+
+func writeNumHex16(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b,
+ hexc[(u>>12)&0xf],
+ hexc[(u>>8)&0xf],
+ hexc[(u>>4)&0xf],
+ hexc[u&0xf],
+ )
+}
+
+func writeNumHex8(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b,
+ hexc[(u>>4)&0xf],
+ hexc[u&0xf],
+ )
+}
+
+func writeNumHex4(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b,
+ hexc[u&0xf],
+ )
+}
+
+func writeNumHex(b []byte, n int64) []byte {
+ return strconv.AppendUint(b, uint64(n), 16)
+}
+
+func writeNumBig64(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b, byte(u>>56), byte(u>>48), byte(u>>40), byte(u>>32), byte(u>>24), byte(u>>16), byte(u>>8), byte(u))
+}
+
+func writeNumLittle64(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b, byte(u), byte(u>>8), byte(u>>16), byte(u>>24), byte(u>>32), byte(u>>40), byte(u>>48), byte(u>>56))
+}
+
+func writeNumBig32(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b, byte(u>>24), byte(u>>16), byte(u>>8), byte(u))
+}
+
+func writeNumLittle32(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b, byte(u), byte(u>>8), byte(u>>16), byte(u>>24))
+}
+func writeNumBig16(b []byte, n int64) []byte { u := uint64(n); return append(b, byte(u>>8), byte(u)) }
+func writeNumLittle16(b []byte, n int64) []byte {
+ u := uint64(n)
+ return append(b, byte(u), byte(u>>8))
+}
+func writeNumByte(b []byte, n int64) []byte { u := uint64(n); return append(b, byte(u)) }
+
+func writeNumBool(b []byte, n int64) []byte {
+ if n == 0 {
+ return append(b, "false"...)
+ }
+ return append(b, "true"...)
+}
+
+////////////
+// READER //
+////////////
+
+// RecordReader reads records from an io.Reader.
+type RecordReader struct {
+ r *bufio.Reader
+
+ buf []byte
+ fns []readParse
+
+ done bool
+}
+
+// NewRecordReader returns a record reader for the given layout, or an error if
+// the layout is invalid.
+//
+// Similar to the RecordFormatter, the RecordReader parsing is quite powerful.
+// There is a bit less to describe in comparison to RecordFormatter, but still,
+// this documentation attempts to be as succinct as possible.
+//
+// Similar to the fmt package, record parsing is based off of slash escapes and
+// percent "verbs" (copying fmt package lingo). Slashes are used for common
+// escapes,
+//
+// \t \n \r \\ \xNN
+//
+// reading tabs, newlines, carriage returns, slashes, and hex encoded
+// characters.
+//
+// Percent encoding reads into specific values of a Record:
+//
+// %t topic
+// %T topic length
+// %k key
+// %K key length
+// %v value
+// %V value length
+// %h begin the header specification
+// %H number of headers
+// %p partition
+// %o offset
+// %e leader epoch
+// %d timestamp
+// %x producer id
+// %y producer epoch
+//
+// If using length / number verbs (i.e., "sized" verbs), they must occur before
+// what they are sizing.
+//
+// There are three escapes to parse raw characters, rather than opting into
+// some formatting option:
+//
+// %% percent sign
+// %{ left brace
+// %} right brace
+//
+// Unlike record formatting, timestamps can only be read as numbers because Go
+// or strftime formatting can both be variable length and do not play too well
+// with delimiters. Timestamps numbers are read as milliseconds.
+//
+// # Numbers
+//
+// All size numbers can be parsed in the following ways:
+//
+// %v{ascii} parse numeric digits until a non-numeric
+// %v{number} alias for ascii
+//
+// %v{hex64} read 16 hex characters for the number
+// %v{hex32} read 8 hex characters for the number
+// %v{hex16} read 4 hex characters for the number
+// %v{hex8} read 2 hex characters for the number
+// %v{hex4} read 1 hex characters for the number
+//
+// %v{big64} read the number as big endian uint64 format
+// %v{big32} read the number as big endian uint32 format
+// %v{big16} read the number as big endian uint16 format
+// %v{big8} alias for byte
+//
+// %v{little64} read the number as little endian uint64 format
+// %v{little32} read the number as little endian uint32 format
+// %v{little16} read the number as little endian uint16 format
+// %v{little8} read the number as a byte
+//
+// %v{byte} read the number as a byte
+// %v{bool} read "true" as 1, "false" as 0
+// %v{3} read 3 characters (any number)
+//
+// # Header specification
+//
+// Similar to number formatting, headers are parsed using a nested primitive
+// format option, accepting the key and value escapes previously mentioned.
+//
+// # Text
+//
+// Topics, keys, and values can be decoded using "base64", "hex", and "json"
+// formatting options. Any size specification is the size of the encoded value
+// actually being read (i.e., size as seen, not size when decoded). JSON values
+// are compacted after being read.
+//
+// %T%t{hex} - 4abcd reads four hex characters "abcd"
+// %V%v{base64} - 2z9 reads two base64 characters "z9"
+// %v{json} %k - {"foo" : "bar"} foo reads a JSON object and then "foo"
+//
+// As well, these text options can be parsed with regular expressions:
+//
+// %k{re[\d*]}%v{re[\s+]}
+func NewRecordReader(reader io.Reader, layout string) (*RecordReader, error) {
+ r := &RecordReader{r: bufio.NewReader(reader)}
+ if err := r.parseReadLayout(layout); err != nil {
+ return nil, err
+ }
+ return r, nil
+}
+
+// ReadRecord reads the next record in the reader and returns it, or returns a
+// parsing error.
+//
+// This will return io.EOF only if the underlying reader returns io.EOF at the
+// start of a new record. If an io.EOF is returned mid record, this returns
+// io.ErrUnexpectedEOF. It is expected for this function to be called until it
+// returns io.EOF.
+func (r *RecordReader) ReadRecord() (*Record, error) {
+ rec := new(Record)
+ return rec, r.ReadRecordInto(rec)
+}
+
+// ReadRecordInto reads the next record into the given record and returns any
+// parsing error
+//
+// This will return io.EOF only if the underlying reader returns io.EOF at the
+// start of a new record. If an io.EOF is returned mid record, this returns
+// io.ErrUnexpectedEOF. It is expected for this function to be called until it
+// returns io.EOF.
+func (r *RecordReader) ReadRecordInto(rec *Record) error {
+ if r.done {
+ return io.EOF
+ }
+ return r.next(rec)
+}
+
+// SetReader replaces the underlying reader with the given reader.
+func (r *RecordReader) SetReader(reader io.Reader) {
+ r.r = bufio.NewReader(reader)
+ r.done = false
+}
+
+const (
+ parsesTopic parseRecordBits = 1 << iota
+ parsesTopicSize
+ parsesKey
+ parsesKeySize
+ parsesValue
+ parsesValueSize
+ parsesHeaders
+ parsesHeadersNum
+)
+
+// The record reading format must be either entirely sized or entirely unsized.
+// This type helps us track what's what.
+type parseRecordBits uint8
+
+func (p *parseRecordBits) set(r parseRecordBits) { *p |= r }
+func (p parseRecordBits) has(r parseRecordBits) bool { return p&r != 0 }
+
+func (r *RecordReader) parseReadLayout(layout string) error {
+ if len(layout) == 0 {
+ return errors.New("RecordReader: invalid empty format")
+ }
+
+ var (
+ // If we are reading by size, we parse the layout size into one
+ // of these variables. When reading, we use the captured
+ // variable's value.
+ topicSize = new(uint64)
+ keySize = new(uint64)
+ valueSize = new(uint64)
+ headersNum = new(uint64)
+
+ bits parseRecordBits
+
+ literal []byte // raw literal we are currently working on
+ addLiteral = func() {
+ if len(r.fns) > 0 && r.fns[len(r.fns)-1].read.empty() {
+ r.fns[len(r.fns)-1].read.delim = literal
+ } else if len(literal) > 0 {
+ r.fns = append(r.fns, readParse{
+ read: readKind{exact: literal},
+ })
+ }
+ literal = nil
+ }
+ )
+
+ for len(layout) > 0 {
+ c, size := utf8.DecodeRuneInString(layout)
+ rawc := layout[:size]
+ layout = layout[size:]
+ switch c {
+ default:
+ literal = append(literal, rawc...)
+ continue
+
+ case '\\':
+ c, n, err := parseLayoutSlash(layout)
+ if err != nil {
+ return err
+ }
+ layout = layout[n:]
+ literal = append(literal, c)
+ continue
+
+ case '%':
+ }
+
+ if len(layout) == 0 {
+ literal = append(literal, rawc...)
+ continue
+ }
+
+ cNext, size := utf8.DecodeRuneInString(layout)
+ if cNext == '%' || cNext == '{' || cNext == '}' {
+ literal = append(literal, byte(cNext))
+ layout = layout[size:]
+ continue
+ }
+
+ var (
+ isOpenBrace = len(layout) > 2 && layout[1] == '{'
+ handledBrace bool
+ escaped = layout[0]
+ )
+ layout = layout[1:]
+ addLiteral()
+
+ if isOpenBrace { // opening a brace: layout continues after
+ layout = layout[1:]
+ }
+
+ switch escaped {
+ default:
+ return fmt.Errorf("unknown percent escape sequence %q", layout[:1])
+
+ case 'T', 'K', 'V', 'H':
+ var dst *uint64
+ var bit parseRecordBits
+ switch escaped {
+ case 'T':
+ dst, bit = topicSize, parsesTopicSize
+ case 'K':
+ dst, bit = keySize, parsesKeySize
+ case 'V':
+ dst, bit = valueSize, parsesValueSize
+ case 'H':
+ dst, bit = headersNum, parsesHeadersNum
+ }
+ if bits.has(bit) {
+ return fmt.Errorf("%%%s is doubly specified", string(escaped))
+ }
+ if bits.has(bit >> 1) {
+ return fmt.Errorf("size specification %%%s cannot come after value specification %%%s", string(escaped), strings.ToLower(string(escaped)))
+ }
+ bits.set(bit)
+ fn, n, err := r.parseReadSize("ascii", dst, false)
+ if handledBrace = isOpenBrace; handledBrace {
+ fn, n, err = r.parseReadSize(layout, dst, true)
+ }
+ if err != nil {
+ return fmt.Errorf("unable to parse %%%s: %s", string(escaped), err)
+ }
+ layout = layout[n:]
+ r.fns = append(r.fns, fn)
+
+ case 'p', 'o', 'e', 'd', 'x', 'y':
+ dst := new(uint64)
+ fn, n, err := r.parseReadSize("ascii", dst, false)
+ if handledBrace = isOpenBrace; handledBrace {
+ fn, n, err = r.parseReadSize(layout, dst, true)
+ }
+ if err != nil {
+ return fmt.Errorf("unable to parse %%%s: %s", string(escaped), err)
+ }
+ layout = layout[n:]
+ numParse := fn.parse
+ switch escaped {
+ case 'p':
+ fn.parse = func(b []byte, rec *Record) error {
+ if err := numParse(b, nil); err != nil {
+ return err
+ }
+ rec.Partition = int32(*dst)
+ return nil
+ }
+ case 'o':
+ fn.parse = func(b []byte, rec *Record) error {
+ if err := numParse(b, nil); err != nil {
+ return err
+ }
+ rec.Offset = int64(*dst)
+ return nil
+ }
+ case 'e':
+ fn.parse = func(b []byte, rec *Record) error {
+ if err := numParse(b, nil); err != nil {
+ return err
+ }
+ rec.LeaderEpoch = int32(*dst)
+ return nil
+ }
+ case 'd':
+ fn.parse = func(b []byte, rec *Record) error {
+ if err := numParse(b, nil); err != nil {
+ return err
+ }
+ rec.Timestamp = time.Unix(0, int64(*dst)*1e6)
+ return nil
+ }
+ case 'x':
+ fn.parse = func(b []byte, rec *Record) error {
+ if err := numParse(b, nil); err != nil {
+ return err
+ }
+ rec.ProducerID = int64(*dst)
+ return nil
+ }
+ case 'y':
+ fn.parse = func(b []byte, rec *Record) error {
+ if err := numParse(b, nil); err != nil {
+ return err
+ }
+ rec.ProducerEpoch = int16(*dst)
+ return nil
+ }
+ }
+ r.fns = append(r.fns, fn)
+
+ case 't', 'k', 'v':
+ var decodeFn func([]byte) ([]byte, error)
+ var re *regexp.Regexp
+ var isJson bool
+ if handledBrace = isOpenBrace; handledBrace {
+ switch {
+ case strings.HasPrefix(layout, "}"):
+ layout = layout[len("}"):]
+ case strings.HasPrefix(layout, "base64}"):
+ decodeFn = decodeBase64
+ layout = layout[len("base64}"):]
+ case strings.HasPrefix(layout, "hex}"):
+ decodeFn = decodeHex
+ layout = layout[len("hex}"):]
+ case strings.HasPrefix(layout, "json}"):
+ isJson = true
+ decodeFn = func(b []byte) ([]byte, error) {
+ var buf bytes.Buffer
+ err := json.Compact(&buf, b)
+ return buf.Bytes(), err
+ }
+ layout = layout[len("json}"):]
+ case strings.HasPrefix(layout, "re"):
+ restr, rem, err := nomOpenClose(layout[len("re"):])
+ if err != nil {
+ return fmt.Errorf("re parse err: %v", err)
+ }
+ if len(rem) == 0 || rem[0] != '}' {
+ return fmt.Errorf("re missing closing } in %q", layout)
+ }
+ layout = rem[1:]
+ if !strings.HasPrefix(restr, "^") {
+ restr = "^" + restr
+ }
+ re, err = regexp.Compile(restr)
+ if err != nil {
+ return fmt.Errorf("re parse err: %v", err)
+ }
+
+ default:
+ return fmt.Errorf("unknown %%%s{ escape", string(escaped))
+ }
+ }
+
+ var bit, bitSize parseRecordBits
+ var inner func([]byte, *Record)
+ var size *uint64
+ switch escaped {
+ case 't':
+ bit, bitSize, size = parsesTopic, parsesTopicSize, topicSize
+ inner = func(b []byte, r *Record) { r.Topic = string(b) }
+ case 'k':
+ bit, bitSize, size = parsesKey, parsesKeySize, keySize
+ inner = func(b []byte, r *Record) { r.Key = dupslice(b) }
+ case 'v':
+ bit, bitSize, size = parsesValue, parsesValueSize, valueSize
+ inner = func(b []byte, r *Record) { r.Value = dupslice(b) }
+ }
+
+ fn := readParse{parse: func(b []byte, r *Record) error {
+ if decodeFn != nil {
+ dec, err := decodeFn(b)
+ if err != nil {
+ return err
+ }
+ b = dec
+ }
+ inner(b, r)
+ return nil
+ }}
+ bit.set(bit)
+ if bits.has(bitSize) {
+ if re != nil {
+ return errors.New("cannot specify exact size and regular expression")
+ }
+ if isJson {
+ return errors.New("cannot specify exact size and json")
+ }
+ fn.read = readKind{sizefn: func() int { return int(*size) }}
+ } else if re != nil {
+ fn.read = readKind{re: re}
+ } else if isJson {
+ fn.read = readKind{condition: new(jsonReader).read}
+ }
+ r.fns = append(r.fns, fn)
+
+ case 'h':
+ bits.set(parsesHeaders)
+ if !bits.has(parsesHeadersNum) {
+ return errors.New("missing header count specification %H before header specification %h")
+ }
+ if !isOpenBrace {
+ return errors.New("missing open brace sequence on %h signifying how headers are encoded")
+ }
+ handledBrace = true
+ // Similar to above, headers can have their own
+ // internal braces, so we look for a matching end.
+ braces := 1
+ at := 0
+ for braces != 0 && len(layout[at:]) > 0 {
+ switch layout[at] {
+ case '{':
+ if at > 0 && layout[at-1] != '%' {
+ braces++
+ }
+ case '}':
+ if at > 0 && layout[at-1] != '%' {
+ braces--
+ }
+ }
+ at++
+ }
+ if braces > 0 {
+ return fmt.Errorf("invalid header specification: missing closing brace in %q", layout)
+ }
+
+ // We parse the header specification recursively, but
+ // we require that it is sized and contains only keys
+ // and values. Checking the delimiter checks sizing.
+ var inr RecordReader
+ if err := inr.parseReadLayout(layout[:at-1]); err != nil {
+ return fmt.Errorf("invalid header specification: %v", err)
+ }
+ layout = layout[at:]
+
+ // To parse headers, we save the inner reader's parsing
+ // function stash the current record's key/value before
+ // parsing, and then capture the key/value as a header.
+ r.fns = append(r.fns, readParse{read: readKind{handoff: func(r *RecordReader, rec *Record) error {
+ k, v := rec.Key, rec.Value
+ defer func() { rec.Key, rec.Value = k, v }()
+ inr.r = r.r
+ for i := uint64(0); i < *headersNum; i++ {
+ rec.Key, rec.Value = nil, nil
+ if err := inr.next(rec); err != nil {
+ return err
+ }
+ rec.Headers = append(rec.Headers, RecordHeader{Key: string(rec.Key), Value: rec.Value})
+ }
+ return nil
+ }}})
+ }
+
+ if isOpenBrace && !handledBrace {
+ return fmt.Errorf("unhandled open brace %q", layout)
+ }
+ }
+
+ addLiteral()
+
+ // We must sort noreads to the front, we use this guarantee when
+ // reading to handle EOF properly.
+ var noreads, reads []readParse
+ for _, fn := range r.fns {
+ if fn.read.noread {
+ noreads = append(noreads, fn)
+ } else {
+ reads = append(reads, fn)
+ }
+ }
+ r.fns = make([]readParse, 0, len(noreads)+len(reads))
+ r.fns = append(r.fns, noreads...)
+ r.fns = append(r.fns, reads...)
+
+ return nil
+}
+
+// Returns a function that parses a number from the internal reader into dst.
+//
+// If needBrace is true, the user is specifying how to read the number,
+// otherwise we default to ascii. Reading ascii requires us to peek at bytes
+// until we get to a non-number byte.
+func (*RecordReader) parseReadSize(layout string, dst *uint64, needBrace bool) (readParse, int, error) {
+ var end int
+ if needBrace {
+ braceEnd := strings.IndexByte(layout, '}')
+ if braceEnd == -1 {
+ return readParse{}, 0, errors.New("missing brace end } to close number size specification")
+ }
+ layout = layout[:braceEnd]
+ end = braceEnd + 1
+ }
+
+ switch layout {
+ default:
+ num, err := strconv.Atoi(layout)
+ if err != nil {
+ return readParse{}, 0, fmt.Errorf("unrecognized number reading layout %q: %v", layout, err)
+ }
+ if num <= 0 {
+ return readParse{}, 0, fmt.Errorf("invalid zero or negative number %q when parsing read size", layout)
+ }
+ return readParse{
+ readKind{noread: true},
+ func([]byte, *Record) error { *dst = uint64(num); return nil },
+ }, end, nil
+
+ case "ascii", "number":
+ return readParse{
+ readKind{condition: func(b byte) int8 {
+ if b < '0' || b > '9' {
+ return -1
+ }
+ return 2 // ignore EOF if we hit it after this
+ }},
+ func(b []byte, _ *Record) (err error) {
+ *dst, err = strconv.ParseUint(kbin.UnsafeString(b), 10, 64)
+ return err
+ },
+ }, end, nil
+
+ case "big64":
+ return readParse{
+ readKind{size: 8},
+ func(b []byte, _ *Record) error { *dst = binary.BigEndian.Uint64(b); return nil },
+ }, end, nil
+ case "big32":
+ return readParse{
+ readKind{size: 4},
+ func(b []byte, _ *Record) error { *dst = uint64(binary.BigEndian.Uint32(b)); return nil },
+ }, end, nil
+ case "big16":
+ return readParse{
+ readKind{size: 2},
+ func(b []byte, _ *Record) error { *dst = uint64(binary.BigEndian.Uint16(b)); return nil },
+ }, end, nil
+
+ case "little64":
+ return readParse{
+ readKind{size: 8},
+ func(b []byte, _ *Record) error { *dst = binary.LittleEndian.Uint64(b); return nil },
+ }, end, nil
+ case "little32":
+ return readParse{
+ readKind{size: 4},
+ func(b []byte, _ *Record) error { *dst = uint64(binary.LittleEndian.Uint32(b)); return nil },
+ }, end, nil
+ case "little16":
+ return readParse{
+ readKind{size: 2},
+ func(b []byte, _ *Record) error { *dst = uint64(binary.LittleEndian.Uint16(b)); return nil },
+ }, end, nil
+
+ case "byte", "big8", "little8":
+ return readParse{
+ readKind{size: 1},
+ func(b []byte, _ *Record) error { *dst = uint64(b[0]); return nil },
+ }, end, nil
+
+ case "hex64":
+ return readParse{
+ readKind{size: 16},
+ func(b []byte, _ *Record) (err error) {
+ *dst, err = strconv.ParseUint(kbin.UnsafeString(b), 16, 64)
+ return err
+ },
+ }, end, nil
+ case "hex32":
+ return readParse{
+ readKind{size: 8},
+ func(b []byte, _ *Record) (err error) {
+ *dst, err = strconv.ParseUint(kbin.UnsafeString(b), 16, 64)
+ return err
+ },
+ }, end, nil
+ case "hex16":
+ return readParse{
+ readKind{size: 4},
+ func(b []byte, _ *Record) (err error) {
+ *dst, err = strconv.ParseUint(kbin.UnsafeString(b), 16, 64)
+ return err
+ },
+ }, end, nil
+ case "hex8":
+ return readParse{
+ readKind{size: 2},
+ func(b []byte, _ *Record) (err error) {
+ *dst, err = strconv.ParseUint(kbin.UnsafeString(b), 16, 64)
+ return err
+ },
+ }, end, nil
+ case "hex4":
+ return readParse{
+ readKind{size: 1},
+ func(b []byte, _ *Record) (err error) {
+ *dst, err = strconv.ParseUint(kbin.UnsafeString(b), 16, 64)
+ return err
+ },
+ }, end, nil
+
+ case "bool":
+ const (
+ stateUnknown uint8 = iota
+ stateTrue
+ stateFalse
+ )
+ var state uint8
+ var last byte
+ return readParse{
+ readKind{condition: func(b byte) (done int8) {
+ defer func() {
+ if done <= 0 {
+ state = stateUnknown
+ last = 0
+ }
+ }()
+
+ switch state {
+ default: // stateUnknown
+ if b == 't' {
+ state = stateTrue
+ last = b
+ return 1
+ } else if b == 'f' {
+ state = stateFalse
+ last = b
+ return 1
+ }
+ return -1
+
+ case stateTrue:
+ if last == 't' && b == 'r' || last == 'r' && b == 'u' {
+ last = b
+ return 1
+ } else if last == 'u' && b == 'e' {
+ return 0
+ }
+ return -1
+
+ case stateFalse:
+ if last == 'f' && b == 'a' || last == 'a' && b == 'l' || last == 'l' && b == 's' {
+ last = b
+ return 1
+ } else if last == 's' && b == 'e' {
+ return 0
+ }
+ return -1
+ }
+ }},
+ func(b []byte, _ *Record) error {
+ switch string(b) {
+ case "true":
+ *dst = 1
+ case "false":
+ *dst = 0
+ default:
+ return fmt.Errorf("invalid bool %s", b)
+ }
+ return nil
+ },
+ }, end, nil
+ }
+}
+
+func decodeBase64(b []byte) ([]byte, error) {
+ n, err := base64.StdEncoding.Decode(b[:base64.StdEncoding.DecodedLen(len(b))], b)
+ return b[:n], err
+}
+
+func decodeHex(b []byte) ([]byte, error) {
+ n, err := hex.Decode(b[:hex.DecodedLen(len(b))], b)
+ return b[:n], err
+}
+
+type readKind struct {
+ noread bool
+ exact []byte
+ condition func(byte) int8 // -2: error, -1: stop, do not consume input; 0: stop, consume input; 1: keep going, consume input, 2: keep going, consume input, can EOF
+ size int
+ sizefn func() int
+ handoff func(*RecordReader, *Record) error
+ delim []byte
+ re *regexp.Regexp
+}
+
+func (r *readKind) empty() bool {
+ return !r.noread &&
+ r.exact == nil &&
+ r.condition == nil &&
+ r.size == 0 &&
+ r.sizefn == nil &&
+ r.handoff == nil &&
+ r.delim == nil &&
+ r.re == nil
+}
+
+type readParse struct {
+ read readKind
+ parse func([]byte, *Record) error
+}
+
+func dupslice(b []byte) []byte {
+ if len(b) == 0 {
+ return nil
+ }
+ dup := make([]byte, len(b))
+ copy(dup, b)
+ return dup
+}
+
+func (r *RecordReader) next(rec *Record) error {
+ for i, fn := range r.fns {
+ r.buf = r.buf[:0]
+
+ var err error
+ switch {
+ case fn.read.noread:
+ // do nothing
+ case fn.read.exact != nil:
+ err = r.readExact(fn.read.exact)
+ case fn.read.condition != nil:
+ err = r.readCondition(fn.read.condition)
+ case fn.read.size > 0:
+ err = r.readSize(fn.read.size)
+ case fn.read.sizefn != nil:
+ err = r.readSize(fn.read.sizefn())
+ case fn.read.handoff != nil:
+ err = fn.read.handoff(r, rec)
+ case fn.read.re != nil:
+ err = r.readRe(fn.read.re)
+ default:
+ err = r.readDelim(fn.read.delim) // we *always* fall back to delim parsing
+ }
+
+ switch err {
+ default:
+ return err
+ case nil:
+ case io.EOF, io.ErrUnexpectedEOF:
+ r.done = true
+ // We guarantee that all noread parses are at
+ // the front, so if we io.EOF on the first
+ // non-noread, then we bubble it up.
+ if len(r.buf) == 0 && (i == 0 || r.fns[i-1].read.noread) {
+ return io.EOF
+ }
+ if i != len(r.fns)-1 || err == io.ErrUnexpectedEOF {
+ return io.ErrUnexpectedEOF
+ }
+ }
+
+ if fn.parse == nil {
+ continue
+ }
+
+ if err := fn.parse(r.buf, rec); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (r *RecordReader) readCondition(fn func(byte) int8) error {
+ var ignoreEOF bool
+ for {
+ peek, err := r.r.Peek(1)
+ if err != nil {
+ if err == io.EOF && ignoreEOF {
+ err = nil
+ }
+ return err
+ }
+ ignoreEOF = false
+ c := peek[0]
+ switch fn(c) {
+ case -2:
+ return fmt.Errorf("invalid input %q", c)
+ case -1:
+ return nil
+ case 0:
+ r.r.Discard(1)
+ r.buf = append(r.buf, c)
+ return nil
+ case 1:
+ case 2:
+ ignoreEOF = true
+ }
+ r.r.Discard(1)
+ r.buf = append(r.buf, c)
+ }
+}
+
+type reReader struct {
+ r *RecordReader
+ peek []byte
+ err error
+}
+
+func (re *reReader) ReadRune() (r rune, size int, err error) {
+ re.peek, re.err = re.r.r.Peek(len(re.peek) + 1)
+ if re.err != nil {
+ return 0, 0, re.err
+ }
+ return rune(re.peek[len(re.peek)-1]), 1, nil
+}
+
+func (r *RecordReader) readRe(re *regexp.Regexp) error {
+ reader := reReader{r: r}
+ loc := re.FindReaderIndex(&reader)
+ if loc == nil {
+ if reader.err == io.EOF && len(reader.peek) > 0 {
+ return fmt.Errorf("regexp text mismatch, saw %q", reader.peek)
+ }
+ return reader.err
+ }
+ n := loc[1] // we ensure the regexp begins with ^, so we only need the end
+ r.buf = append(r.buf, reader.peek[:n]...)
+ r.r.Discard(n)
+ if n == len(reader.peek) {
+ return reader.err
+ }
+ return nil
+}
+
+func (r *RecordReader) readSize(n int) error {
+ r.buf = append(r.buf, make([]byte, n)...)
+ n, err := io.ReadFull(r.r, r.buf)
+ r.buf = r.buf[:n]
+ return err
+}
+
+func (r *RecordReader) readExact(d []byte) error {
+ if err := r.readSize(len(d)); err != nil {
+ return err
+ }
+ if !bytes.Equal(d, r.buf) {
+ return fmt.Errorf("exact text mismatch, read %q when expecting %q", r.buf, d)
+ }
+ return nil
+}
+
+func (r *RecordReader) readDelim(d []byte) error {
+ // Empty delimiters opt in to reading the rest of the text.
+ if len(d) == 0 {
+ b, err := io.ReadAll(r.r)
+ r.buf = b
+ // ReadAll stops at io.EOF, but we need to bubble that up.
+ if err == nil {
+ return io.EOF
+ }
+ return err
+ }
+
+ // We use the simple inefficient search algorithm, which can be O(nm),
+ // but we aren't expecting huge search spaces. Long term we could
+ // convert to a two-way search.
+ for {
+ peek, err := r.r.Peek(len(d))
+ if err != nil {
+ // If we peek an io.EOF, we were looking for our delim
+ // and hit the end. This is unexpected.
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ }
+ return err
+ }
+ if !bytes.Equal(peek, d) {
+ // We did not find our delim. Skip the first char
+ // then continue again.
+ r.buf = append(r.buf, peek[0])
+ r.r.Discard(1)
+ continue
+ }
+ // We found our delim. We discard it and return.
+ r.r.Discard(len(d))
+ return nil
+ }
+}
+
+type jsonReader struct {
+ state int8
+ n int8 // misc.
+ nexts []int8
+}
+
+func (*jsonReader) isHex(c byte) bool {
+ switch c {
+ case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
+ 'a', 'b', 'c', 'd', 'e', 'f',
+ 'A', 'B', 'C', 'D', 'E', 'F':
+ return true
+ default:
+ return false
+ }
+}
+
+func (*jsonReader) isNum(c byte) bool {
+ switch c {
+ case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+ return true
+ }
+ return false
+}
+
+func (*jsonReader) isNat(c byte) bool {
+ switch c {
+ case '1', '2', '3', '4', '5', '6', '7', '8', '9':
+ return true
+ }
+ return false
+}
+
+func (*jsonReader) isE(c byte) bool {
+ return c == 'e' || c == 'E'
+}
+
+const (
+ jrstAny int8 = iota
+ jrstObj
+ jrstObjSep
+ jrstObjFin
+ jrstArr
+ jrstArrFin
+ jrstStrBegin
+ jrstStr
+ jrstStrEsc
+ jrstStrEscU
+ jrstTrue
+ jrstFalse
+ jrstNull
+ jrstNeg
+ jrstOne
+ jrstDotOrE
+ jrstDot
+ jrstE
+)
+
+func (r *jsonReader) read(c byte) (rr int8) {
+start:
+ switch r.state {
+ case jrstAny:
+ switch c {
+ case ' ', '\t', '\n', '\r':
+ return 1 // skip whitespace, need more
+ case '{':
+ r.state = jrstObj
+ return 1 // object open, need more
+ case '[':
+ r.state = jrstArr
+ return 1 // array open, need more
+ case '"':
+ r.state = jrstStr
+ return 1 // string open, need more
+ case 't':
+ r.state = jrstTrue
+ r.n = 0
+ return 1 // beginning of true, need more
+ case 'f':
+ r.state = jrstFalse
+ r.n = 0
+ return 1 // beginning of false, need more
+ case 'n':
+ r.state = jrstNull
+ r.n = 0
+ return 1 // beginning of null, need more
+ case '-':
+ r.state = jrstNeg
+ return 1 // beginning of negative number, need more
+ case '0':
+ r.state = jrstDotOrE
+ return 1 // beginning of 0e or 0., need more
+ case '1', '2', '3', '4', '5', '6', '7', '8', '9':
+ r.state = jrstOne
+ return 1 // beginning of number, need more
+ default:
+ return -2 // invalid json
+ }
+
+ case jrstObj:
+ switch c {
+ case ' ', '\t', '\n', '\r':
+ return 1 // skip whitespace in json object, need more
+ case '"':
+ r.pushState(jrstStr, jrstObjSep)
+ return 1 // beginning of object key, need to finish, transition to obj sep
+ case '}':
+ return r.popState() // end of object, this is valid json end, pop state
+ default:
+ return -2 // invalid json: expected object key
+ }
+ case jrstObjSep:
+ switch c {
+ case ' ', '\t', '\n', '\r':
+ return 1 // skip whitespace in json object, need more
+ case ':':
+ r.pushState(jrstAny, jrstObjFin)
+ return 1 // beginning of object value, need to finish, transition to obj fin
+ default:
+ return -2 // invalid json: expected object separator
+ }
+ case jrstObjFin:
+ switch c {
+ case ' ', '\r', '\t', '\n':
+ return 1 // skip whitespace in json object, need more
+ case ',':
+ r.pushState(jrstStrBegin, jrstObjSep)
+ return 1 // beginning of new object key, need to finish, transition to obj sep
+ case '}':
+ return r.popState() // end of object, this is valid json end, pop state
+ default:
+ return -2 // invalid json
+ }
+
+ case jrstArr:
+ switch c {
+ case ' ', '\r', '\t', '\n':
+ return 1 // skip whitespace in json array, need more
+ case ']':
+ return r.popState() // end of array, this is valid json end, pop state
+ default:
+ r.pushState(jrstAny, jrstArrFin)
+ goto start // array value began: immediately transition to it
+ }
+ case jrstArrFin:
+ switch c {
+ case ' ', '\r', '\t', '\n':
+ return 1 // skip whitespace in json array, need more
+ case ',':
+ r.state = jrstArr
+ return 1 // beginning of new array value, need more
+ case ']':
+ return r.popState() // end of array, this is valid json end, pop state
+ default:
+ return -2 // invalid json
+ }
+
+ case jrstStrBegin:
+ switch c {
+ case ' ', '\r', '\t', '\n':
+ return 1 // skip whitespace in json object (before beginning of key), need more
+ case '"':
+ r.state = jrstStr
+ return 1 // beginning of object key, need more
+ default:
+ return -2 // invalid json
+ }
+
+ case jrstStr:
+ switch c {
+ case 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
+ 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31:
+ return -2 // invalid json: control characters not allowed in string
+ case '"':
+ return r.popState() // end of string, this is valid json end, pop state
+ case '\\':
+ r.state = jrstStrEsc
+ return 1 // beginning of escape sequence, need more
+ default:
+ return 1 // continue string, need more
+ }
+ case jrstStrEsc:
+ switch c {
+ case 'b', 'f', 'n', 'r', 't', '\\', '/', '"':
+ r.state = jrstStr
+ return 1 // end of escape sequence, still need to finish string
+ case 'u':
+ r.state = jrstStrEscU
+ r.n = 0
+ return 1 // beginning of unicode escape sequence, need more
+ default:
+ return -2 // invalid json: invalid escape sequence
+ }
+ case jrstStrEscU:
+ if !r.isHex(c) {
+ return -2 // invalid json: invalid unicode escape sequence
+ }
+ r.n++
+ if r.n == 4 {
+ r.state = jrstStr
+ }
+ return 1 // end of unicode escape sequence, still need to finish string
+
+ case jrstTrue:
+ switch {
+ case r.n == 0 && c == 'r':
+ r.n++
+ return 1
+ case r.n == 1 && c == 'u':
+ r.n++
+ return 1
+ case r.n == 2 && c == 'e':
+ return r.popState() // end of true, this is valid json end, pop state
+ }
+ case jrstFalse:
+ switch {
+ case r.n == 0 && c == 'a':
+ r.n++
+ return 1
+ case r.n == 1 && c == 'l':
+ r.n++
+ return 1
+ case r.n == 2 && c == 's':
+ r.n++
+ return 1
+ case r.n == 3 && c == 'e':
+ return r.popState() // end of false, this is valid json end, pop state
+ }
+ case jrstNull:
+ switch {
+ case r.n == 0 && c == 'u':
+ r.n++
+ return 1
+ case r.n == 1 && c == 'l':
+ r.n++
+ return 1
+ case r.n == 2 && c == 'l':
+ return r.popState() // end of null, this is valid json end, pop state
+ }
+
+ case jrstNeg:
+ if c == '0' {
+ r.state = jrstDotOrE
+ return r.oneOrTwo() // beginning of -0, need to see if there is more (potentially end)
+ } else if r.isNat(c) {
+ r.state = jrstOne
+ return r.oneOrTwo() // beginning of -1 (or 2,3,..9), need to see if there is more (potentially end)
+ }
+ return -2 // invalid, -a or something
+ case jrstOne:
+ if r.isNum(c) {
+ return r.oneOrTwo() // continue the number (potentially end)
+ }
+ fallthrough // not a number, check if e or .
+ case jrstDotOrE:
+ if r.isE(c) {
+ r.state = jrstE
+ return 1 // beginning of exponent, need more
+ }
+ if c == '.' {
+ r.state = jrstDot
+ r.n = 0
+ return 1 // beginning of dot, need more
+ }
+ if r.popStateToStart() {
+ goto start
+ }
+ return -1 // done with number, no more state to bubble to: we are done
+
+ case jrstDot:
+ switch r.n {
+ case 0:
+ if !r.isNum(c) {
+ return -2 // first char after dot must be a number
+ }
+ r.n = 1
+ return r.oneOrTwo() // saw number, keep and continue (potentially end)
+ case 1:
+ if r.isNum(c) {
+ return r.oneOrTwo() // more number, keep and continue (potentially end)
+ }
+ if r.isE(c) {
+ r.state = jrstE
+ r.n = 0
+ return 1 // beginning of exponent (-0.1e), need more
+ }
+ if r.popStateToStart() {
+ goto start
+ }
+ return -1 // done with number, no more state to bubble to: we are done
+ }
+ case jrstE:
+ switch r.n {
+ case 0:
+ if c == '+' || c == '-' {
+ r.n = 1
+ return 1 // beginning of exponent sign, need more
+ }
+ fallthrough
+ case 1:
+ if !r.isNum(c) {
+ return -2 // first char after exponent must be sign or number
+ }
+ r.n = 2
+ return r.oneOrTwo() // saw number, keep and continue (potentially end)
+ case 2:
+ if r.isNum(c) {
+ return r.oneOrTwo() // more number, keep and continue (potentially end)
+ }
+ if r.popStateToStart() {
+ goto start
+ }
+ return -1 // done with number, no more state to bubble to: we are done
+ }
+ }
+ return -2 // unknown state
+}
+
+func (r *jsonReader) pushState(next, next2 int8) {
+ r.nexts = append(r.nexts, next2)
+ r.state = next
+}
+
+func (r *jsonReader) popState() int8 {
+ if len(r.nexts) == 0 {
+ r.state = jrstAny
+ return 0
+ }
+ r.state = r.nexts[len(r.nexts)-1]
+ r.nexts = r.nexts[:len(r.nexts)-1]
+ return 1
+}
+
+func (r *jsonReader) popStateToStart() bool {
+ if len(r.nexts) == 0 {
+ r.state = jrstAny
+ return false
+ }
+ r.state = r.nexts[len(r.nexts)-1]
+ r.nexts = r.nexts[:len(r.nexts)-1]
+ return true
+}
+
+func (r *jsonReader) oneOrTwo() int8 {
+ if len(r.nexts) > 0 {
+ return 1
+ }
+ return 2
+}
+
+////////////
+// COMMON //
+////////////
+
+func parseLayoutSlash(layout string) (byte, int, error) {
+ if len(layout) == 0 {
+ return 0, 0, errors.New("invalid slash escape at end of delim string")
+ }
+ switch layout[0] {
+ case 't':
+ return '\t', 1, nil
+ case 'n':
+ return '\n', 1, nil
+ case 'r':
+ return '\r', 1, nil
+ case '\\':
+ return '\\', 1, nil
+ case 'x':
+ if len(layout) < 3 { // on x, need two more
+ return 0, 0, errors.New("invalid non-terminated hex escape sequence at end of delim string")
+ }
+ hex := layout[1:3]
+ n, err := strconv.ParseInt(hex, 16, 8)
+ if err != nil {
+ return 0, 0, fmt.Errorf("unable to parse hex escape sequence %q: %v", hex, err)
+ }
+ return byte(n), 3, nil
+ default:
+ return 0, 0, fmt.Errorf("unknown slash escape sequence %q", layout[:1])
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/ring.go b/vendor/github.com/twmb/franz-go/pkg/kgo/ring.go
new file mode 100644
index 0000000000000..3ef989f49bf49
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/ring.go
@@ -0,0 +1,269 @@
+package kgo
+
+import "sync"
+
+// The ring types below are fixed sized blocking MPSC ringbuffers. These
+// replace channels in a few places in this client. The *main* advantage they
+// provide is to allow loops that terminate.
+//
+// With channels, we always have to have a goroutine draining the channel. We
+// cannot start the goroutine when we add the first element, because the
+// goroutine will immediately drain the first and if something produces right
+// away, it will start a second concurrent draining goroutine.
+//
+// We cannot fix that by adding a "working" field, because we would need a lock
+// around checking if the goroutine still has elements *and* around setting the
+// working field to false. If a push was blocked, it would be holding the lock,
+// which would block the worker from grabbing the lock. Any other lock ordering
+// has TOCTOU problems as well.
+//
+// We could use a slice that we always push to and pop the front of. This is a
+// bit easier to reason about, but constantly reallocates and has no bounded
+// capacity. The second we think about adding bounded capacity, we get this
+// ringbuffer below.
+//
+// The key insight is that we only pop the front *after* we are done with it.
+// If there are still more elements, the worker goroutine can continue working.
+// If there are no more elements, it can quit. When pushing, if the pusher
+// pushed the first element, it starts the worker.
+//
+// Pushes fail if the ring is dead, allowing the pusher to fail any promise.
+// If a die happens while a worker is running, all future pops will see the
+// ring is dead and can fail promises immediately. If a worker is not running,
+// then there are no promises that need to be called.
+//
+// We use size 8 buffers because eh why not. This gives us a small optimization
+// of masking to increment and decrement, rather than modulo arithmetic.
+
+const (
+ mask7 = 0b0000_0111
+ eight = mask7 + 1
+)
+
+type ringReq struct {
+ mu sync.Mutex
+ c *sync.Cond
+
+ elems [eight]promisedReq
+
+ head uint8
+ tail uint8
+ l uint8
+ dead bool
+}
+
+func (r *ringReq) die() {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ r.dead = true
+ if r.c != nil {
+ r.c.Broadcast()
+ }
+}
+
+func (r *ringReq) push(pr promisedReq) (first, dead bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ for r.l == eight && !r.dead {
+ if r.c == nil {
+ r.c = sync.NewCond(&r.mu)
+ }
+ r.c.Wait()
+ }
+
+ if r.dead {
+ return false, true
+ }
+
+ r.elems[r.tail] = pr
+ r.tail = (r.tail + 1) & mask7
+ r.l++
+
+ return r.l == 1, false
+}
+
+func (r *ringReq) dropPeek() (next promisedReq, more, dead bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ r.elems[r.head] = promisedReq{}
+ r.head = (r.head + 1) & mask7
+ r.l--
+
+ // If the cond has been initialized, there could potentially be waiters
+ // and we must always signal.
+ if r.c != nil {
+ r.c.Signal()
+ }
+
+ return r.elems[r.head], r.l > 0, r.dead
+}
+
+// ringResp duplicates the code above, but for promisedResp
+type ringResp struct {
+ mu sync.Mutex
+ c *sync.Cond
+
+ elems [eight]promisedResp
+
+ head uint8
+ tail uint8
+ l uint8
+ dead bool
+}
+
+func (r *ringResp) die() {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ r.dead = true
+ if r.c != nil {
+ r.c.Broadcast()
+ }
+}
+
+func (r *ringResp) push(pr promisedResp) (first, dead bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ for r.l == eight && !r.dead {
+ if r.c == nil {
+ r.c = sync.NewCond(&r.mu)
+ }
+ r.c.Wait()
+ }
+
+ if r.dead {
+ return false, true
+ }
+
+ r.elems[r.tail] = pr
+ r.tail = (r.tail + 1) & mask7
+ r.l++
+
+ return r.l == 1, false
+}
+
+func (r *ringResp) dropPeek() (next promisedResp, more, dead bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ r.elems[r.head] = promisedResp{}
+ r.head = (r.head + 1) & mask7
+ r.l--
+
+ if r.c != nil {
+ r.c.Signal()
+ }
+
+ return r.elems[r.head], r.l > 0, r.dead
+}
+
+// ringSeqResp duplicates the code above, but for *seqResp. We leave off die
+// because we do not use it, but we keep `c` for testing lowering eight/mask7.
+type ringSeqResp struct {
+ mu sync.Mutex
+ c *sync.Cond
+
+ elems [eight]*seqResp
+
+ head uint8
+ tail uint8
+ l uint8
+}
+
+func (r *ringSeqResp) push(sr *seqResp) (first bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ for r.l == eight {
+ if r.c == nil {
+ r.c = sync.NewCond(&r.mu)
+ }
+ r.c.Wait()
+ }
+
+ r.elems[r.tail] = sr
+ r.tail = (r.tail + 1) & mask7
+ r.l++
+
+ return r.l == 1
+}
+
+func (r *ringSeqResp) dropPeek() (next *seqResp, more bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ r.elems[r.head] = nil
+ r.head = (r.head + 1) & mask7
+ r.l--
+
+ if r.c != nil {
+ r.c.Signal()
+ }
+
+ return r.elems[r.head], r.l > 0
+}
+
+// Also no die; this type is slightly different because we can have overflow.
+// If we have overflow, we add to overflow until overflow is drained -- we
+// always want strict odering.
+type ringBatchPromise struct {
+ mu sync.Mutex
+
+ elems [eight]batchPromise
+
+ head uint8
+ tail uint8
+ l uint8
+
+ overflow []batchPromise
+}
+
+func (r *ringBatchPromise) push(b batchPromise) (first bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ // If the ring is full, we go into overflow; if overflow is non-empty,
+ // for ordering purposes, we add to the end of overflow. We only go
+ // back to using the ring once overflow is finally empty.
+ if r.l == eight || len(r.overflow) > 0 {
+ r.overflow = append(r.overflow, b)
+ return false
+ }
+
+ r.elems[r.tail] = b
+ r.tail = (r.tail + 1) & mask7
+ r.l++
+
+ return r.l == 1
+}
+
+func (r *ringBatchPromise) dropPeek() (next batchPromise, more bool) {
+ r.mu.Lock()
+ defer r.mu.Unlock()
+
+ // We always drain the ring first. If the ring is ever empty, there
+ // must be overflow: we would not be here if the ring is not-empty.
+ if r.l > 1 {
+ r.elems[r.head] = batchPromise{}
+ r.head = (r.head + 1) & mask7
+ r.l--
+ return r.elems[r.head], true
+ } else if r.l == 1 {
+ r.elems[r.head] = batchPromise{}
+ r.head = (r.head + 1) & mask7
+ r.l--
+ if len(r.overflow) == 0 {
+ return next, false
+ }
+ return r.overflow[0], true
+ }
+ r.overflow = r.overflow[1:]
+ if len(r.overflow) > 0 {
+ return r.overflow[0], true
+ }
+ return next, false
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/sink.go b/vendor/github.com/twmb/franz-go/pkg/kgo/sink.go
new file mode 100644
index 0000000000000..6d0f3dfe008dc
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/sink.go
@@ -0,0 +1,2380 @@
+package kgo
+
+import (
+ "bytes"
+ "context"
+ "errors"
+ "fmt"
+ "hash/crc32"
+ "math"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kbin"
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+type sink struct {
+ cl *Client // our owning client, for cfg, metadata triggering, context, etc.
+ nodeID int32 // the node ID of the broker this sink belongs to
+
+ // inflightSem controls the number of concurrent produce requests. We
+ // start with a limit of 1, which covers Kafka v0.11.0. On the first
+ // response, we check what version was set in the request. If it is at
+ // least 4, which 1.0 introduced, we upgrade the sem size.
+ inflightSem atomic.Value
+ produceVersion atomicI32 // negative is unset, positive is version
+
+ drainState workLoop
+
+ // seqRespsMu, guarded by seqRespsMu, contains responses that must
+ // be handled sequentially. These responses are handled asynchronously,
+ // but sequentially.
+ seqResps ringSeqResp
+
+ backoffMu sync.Mutex // guards the following
+ needBackoff bool
+ backoffSeq uint32 // prevents pile on failures
+
+ // consecutiveFailures is incremented every backoff and cleared every
+ // successful response. For simplicity, if we have a good response
+ // following an error response before the error response's backoff
+ // occurs, the backoff is not cleared.
+ consecutiveFailures atomicU32
+
+ recBufsMu sync.Mutex // guards the following
+ recBufs []*recBuf // contains all partition records for batch building
+ recBufsStart int // incremented every req to avoid large batch starvation
+}
+
+type seqResp struct {
+ resp kmsg.Response
+ err error
+ done chan struct{}
+ br *broker
+ promise func(*broker, kmsg.Response, error)
+}
+
+func (cl *Client) newSink(nodeID int32) *sink {
+ s := &sink{
+ cl: cl,
+ nodeID: nodeID,
+ }
+ s.produceVersion.Store(-1)
+ maxInflight := 1
+ if cl.cfg.disableIdempotency {
+ maxInflight = cl.cfg.maxProduceInflight
+ }
+ s.inflightSem.Store(make(chan struct{}, maxInflight))
+ return s
+}
+
+// createReq returns a produceRequest from currently buffered records
+// and whether there are more records to create more requests immediately.
+func (s *sink) createReq(id int64, epoch int16) (*produceRequest, *kmsg.AddPartitionsToTxnRequest, bool) {
+ req := &produceRequest{
+ txnID: s.cl.cfg.txnID,
+ acks: s.cl.cfg.acks.val,
+ timeout: int32(s.cl.cfg.produceTimeout.Milliseconds()),
+ batches: make(seqRecBatches, 5),
+
+ producerID: id,
+ producerEpoch: epoch,
+
+ hasHook: s.cl.producer.hasHookBatchWritten,
+ compressor: s.cl.compressor,
+
+ wireLength: s.cl.baseProduceRequestLength(), // start length with no topics
+ wireLengthLimit: s.cl.cfg.maxBrokerWriteBytes,
+ }
+ txnBuilder := txnReqBuilder{
+ txnID: req.txnID,
+ id: id,
+ epoch: epoch,
+ }
+
+ var moreToDrain bool
+
+ s.recBufsMu.Lock()
+ defer s.recBufsMu.Unlock()
+
+ recBufsIdx := s.recBufsStart
+ for i := 0; i < len(s.recBufs); i++ {
+ recBuf := s.recBufs[recBufsIdx]
+ recBufsIdx = (recBufsIdx + 1) % len(s.recBufs)
+
+ recBuf.mu.Lock()
+ if recBuf.failing || len(recBuf.batches) == recBuf.batchDrainIdx || recBuf.inflightOnSink != nil && recBuf.inflightOnSink != s || recBuf.inflight != 0 && !recBuf.okOnSink {
+ recBuf.mu.Unlock()
+ continue
+ }
+
+ batch := recBuf.batches[recBuf.batchDrainIdx]
+ if added := req.tryAddBatch(s.produceVersion.Load(), recBuf, batch); !added {
+ recBuf.mu.Unlock()
+ moreToDrain = true
+ continue
+ }
+
+ recBuf.inflightOnSink = s
+ recBuf.inflight++
+
+ recBuf.batchDrainIdx++
+ recBuf.seq = incrementSequence(recBuf.seq, int32(len(batch.records)))
+ moreToDrain = moreToDrain || recBuf.tryStopLingerForDraining()
+ recBuf.mu.Unlock()
+
+ txnBuilder.add(recBuf)
+ }
+
+ // We could have lost our only record buffer just before we grabbed the
+ // lock above, so we have to check there are recBufs.
+ if len(s.recBufs) > 0 {
+ s.recBufsStart = (s.recBufsStart + 1) % len(s.recBufs)
+ }
+ return req, txnBuilder.req, moreToDrain
+}
+
+func incrementSequence(sequence, increment int32) int32 {
+ if sequence > math.MaxInt32-increment {
+ return increment - (math.MaxInt32 - sequence) - 1
+ }
+
+ return sequence + increment
+}
+
+type txnReqBuilder struct {
+ txnID *string
+ req *kmsg.AddPartitionsToTxnRequest
+ id int64
+ epoch int16
+ addedTopics map[string]int // topic => index into req
+}
+
+func (t *txnReqBuilder) add(rb *recBuf) {
+ if t.txnID == nil {
+ return
+ }
+ if rb.addedToTxn.Swap(true) {
+ return
+ }
+ if t.req == nil {
+ req := kmsg.NewPtrAddPartitionsToTxnRequest()
+ req.TransactionalID = *t.txnID
+ req.ProducerID = t.id
+ req.ProducerEpoch = t.epoch
+ t.req = req
+ t.addedTopics = make(map[string]int, 10)
+ }
+ idx, exists := t.addedTopics[rb.topic]
+ if !exists {
+ idx = len(t.req.Topics)
+ t.addedTopics[rb.topic] = idx
+ reqTopic := kmsg.NewAddPartitionsToTxnRequestTopic()
+ reqTopic.Topic = rb.topic
+ t.req.Topics = append(t.req.Topics, reqTopic)
+ }
+ t.req.Topics[idx].Partitions = append(t.req.Topics[idx].Partitions, rb.partition)
+}
+
+func (s *sink) maybeDrain() {
+ if s.cl.cfg.manualFlushing && s.cl.producer.flushing.Load() == 0 {
+ return
+ }
+ if s.drainState.maybeBegin() {
+ go s.drain()
+ }
+}
+
+func (s *sink) maybeBackoff() {
+ s.backoffMu.Lock()
+ backoff := s.needBackoff
+ s.backoffMu.Unlock()
+
+ if !backoff {
+ return
+ }
+ defer s.clearBackoff()
+
+ s.cl.triggerUpdateMetadata(false, "opportunistic load during sink backoff") // as good a time as any
+
+ tries := int(s.consecutiveFailures.Add(1))
+ after := time.NewTimer(s.cl.cfg.retryBackoff(tries))
+ defer after.Stop()
+
+ select {
+ case <-after.C:
+ case <-s.cl.ctx.Done():
+ case <-s.anyCtx().Done():
+ }
+}
+
+func (s *sink) maybeTriggerBackoff(seq uint32) {
+ s.backoffMu.Lock()
+ defer s.backoffMu.Unlock()
+ if seq == s.backoffSeq {
+ s.needBackoff = true
+ }
+}
+
+func (s *sink) clearBackoff() {
+ s.backoffMu.Lock()
+ defer s.backoffMu.Unlock()
+ s.backoffSeq++
+ s.needBackoff = false
+}
+
+// drain drains buffered records and issues produce requests.
+//
+// This function is harmless if there are no records that need draining.
+// We rely on that to not worry about accidental triggers of this function.
+func (s *sink) drain() {
+ again := true
+ for again {
+ s.maybeBackoff()
+
+ sem := s.inflightSem.Load().(chan struct{})
+ select {
+ case sem <- struct{}{}:
+ case <-s.cl.ctx.Done():
+ s.drainState.hardFinish()
+ return
+ }
+
+ again = s.drainState.maybeFinish(s.produce(sem))
+ }
+}
+
+// Returns the first context encountered ranging across all records.
+// This does not use defers to make it clear at the return that all
+// unlocks are called in proper order. Ideally, do not call this func
+// due to lock intensity.
+func (s *sink) anyCtx() context.Context {
+ s.recBufsMu.Lock()
+ for _, recBuf := range s.recBufs {
+ recBuf.mu.Lock()
+ if len(recBuf.batches) > 0 {
+ batch0 := recBuf.batches[0]
+ batch0.mu.Lock()
+ if batch0.canFailFromLoadErrs && len(batch0.records) > 0 {
+ r0 := batch0.records[0]
+ if rctx := r0.cancelingCtx(); rctx != nil {
+ batch0.mu.Unlock()
+ recBuf.mu.Unlock()
+ s.recBufsMu.Unlock()
+ return rctx
+ }
+ }
+ batch0.mu.Unlock()
+ }
+ recBuf.mu.Unlock()
+ }
+ s.recBufsMu.Unlock()
+ return context.Background()
+}
+
+func (s *sink) produce(sem <-chan struct{}) bool {
+ var produced bool
+ defer func() {
+ if !produced {
+ <-sem
+ }
+ }()
+
+ // We could have been triggered from a metadata update even though the
+ // user is not producing at all. If we have no buffered records, let's
+ // avoid potentially creating a producer ID.
+ if s.cl.BufferedProduceRecords() == 0 {
+ return false
+ }
+
+ // producerID can fail from:
+ // - retry failure
+ // - auth failure
+ // - transactional: a produce failure that failed the producer ID
+ // - AddPartitionsToTxn failure (see just below)
+ // - some head-of-line context failure
+ //
+ // All but the first error is fatal. Recovery may be possible with
+ // EndTransaction in specific cases, but regardless, all buffered
+ // records must fail.
+ //
+ // NOTE: we init the producer ID before creating a request to ensure we
+ // are always using the latest id/epoch with the proper sequence
+ // numbers. (i.e., resetAllSequenceNumbers && producerID logic combo).
+ //
+ // For the first-discovered-record-head-of-line context, we want to
+ // avoid looking it up if possible (which is why producerID takes a
+ // ctxFn). If we do use one, we want to be sure that the
+ // context.Canceled error is from *that* context rather than the client
+ // context or something else. So, we go through some special care to
+ // track setting the ctx / looking up if it is canceled.
+ var holCtxMu sync.Mutex
+ var holCtx context.Context
+ ctxFn := func() context.Context {
+ holCtxMu.Lock()
+ defer holCtxMu.Unlock()
+ holCtx = s.anyCtx()
+ return holCtx
+ }
+ isHolCtxDone := func() bool {
+ holCtxMu.Lock()
+ defer holCtxMu.Unlock()
+ if holCtx == nil {
+ return false
+ }
+ select {
+ case <-holCtx.Done():
+ return true
+ default:
+ }
+ return false
+ }
+
+ id, epoch, err := s.cl.producerID(ctxFn)
+ if err != nil {
+ var pe *errProducerIDLoadFail
+ switch {
+ case errors.As(err, &pe):
+ if errors.Is(pe.err, context.Canceled) && isHolCtxDone() {
+ // Some head-of-line record in a partition had a context cancelation.
+ // We look for any partition with HOL cancelations and fail them all.
+ s.cl.cfg.logger.Log(LogLevelInfo, "the first record in some partition(s) had a context cancelation; failing all relevant partitions", "broker", logID(s.nodeID))
+ s.recBufsMu.Lock()
+ defer s.recBufsMu.Unlock()
+ for _, recBuf := range s.recBufs {
+ recBuf.mu.Lock()
+ var failAll bool
+ if len(recBuf.batches) > 0 {
+ batch0 := recBuf.batches[0]
+ batch0.mu.Lock()
+ if batch0.canFailFromLoadErrs && len(batch0.records) > 0 {
+ r0 := batch0.records[0]
+ if rctx := r0.cancelingCtx(); rctx != nil {
+ select {
+ case <-rctx.Done():
+ failAll = true // we must not call failAllRecords here, because failAllRecords locks batches!
+ default:
+ }
+ }
+ }
+ batch0.mu.Unlock()
+ }
+ if failAll {
+ recBuf.failAllRecords(err)
+ }
+ recBuf.mu.Unlock()
+ }
+ return true
+ }
+ s.cl.bumpRepeatedLoadErr(err)
+ s.cl.cfg.logger.Log(LogLevelWarn, "unable to load producer ID, bumping client's buffered record load errors by 1 and retrying")
+ return true // whatever caused our produce, we did nothing, so keep going
+ case errors.Is(err, ErrClientClosed):
+ s.cl.failBufferedRecords(err)
+ default:
+ s.cl.cfg.logger.Log(LogLevelError, "fatal InitProducerID error, failing all buffered records", "broker", logID(s.nodeID), "err", err)
+ s.cl.failBufferedRecords(err)
+ }
+ return false
+ }
+
+ if !s.cl.producer.maybeAddInflight() { // must do before marking recBufs on a txn
+ return false
+ }
+ defer func() {
+ if !produced {
+ s.cl.producer.decInflight()
+ }
+ }()
+
+ // NOTE: we create the req AFTER getting our producer ID!
+ //
+ // If a prior response caused errReloadProducerID, then calling
+ // producerID() sets needSeqReset, and creating the request resets
+ // sequence numbers. We need to have that logic occur before we create
+ // the request, otherwise we will create a request with the old
+ // sequence numbers using our new producer ID, which will then again
+ // fail with OOOSN.
+ req, txnReq, moreToDrain := s.createReq(id, epoch)
+ if len(req.batches) == 0 { // everything was failing or lingering
+ return moreToDrain
+ }
+
+ if txnReq != nil {
+ // txnReq can fail from:
+ // - retry failure
+ // - auth failure
+ // - producer id mapping / epoch errors
+ // The latter case can potentially recover with the kip logic
+ // we have defined in EndTransaction. Regardless, on failure
+ // here, all buffered records must fail.
+ // We do not need to clear the addedToTxn flag for any recBuf
+ // it was set on, since producer id recovery resets the flag.
+ batchesStripped, err := s.doTxnReq(req, txnReq)
+ if err != nil {
+ switch {
+ case isRetryableBrokerErr(err) || isDialNonTimeoutErr(err):
+ s.cl.bumpRepeatedLoadErr(err)
+ s.cl.cfg.logger.Log(LogLevelWarn, "unable to AddPartitionsToTxn due to retryable broker err, bumping client's buffered record load errors by 1 and retrying", "err", err)
+ s.cl.triggerUpdateMetadata(false, "attempting to refresh broker list due to failed AddPartitionsToTxn requests")
+ return moreToDrain || len(req.batches) > 0 // nothing stripped if request-issuing error
+ default:
+ // Note that err can be InvalidProducerEpoch, which is
+ // potentially recoverable in EndTransaction.
+ //
+ // We do not fail all buffered records here,
+ // because that can lead to undesirable behavior
+ // with produce request vs. end txn (KAFKA-12671)
+ s.cl.failProducerID(id, epoch, err)
+ s.cl.cfg.logger.Log(LogLevelError, "fatal AddPartitionsToTxn error, failing all buffered records (it is possible the client can recover after EndTransaction)", "broker", logID(s.nodeID), "err", err)
+ }
+ return false
+ }
+
+ // If we stripped everything, ensure we backoff to force a
+ // metadata load. If not everything was stripped, we issue our
+ // request and ensure we will retry a producing until
+ // everything is stripped (and we eventually back off).
+ if batchesStripped {
+ moreToDrain = true
+ if len(req.batches) == 0 {
+ s.maybeTriggerBackoff(s.backoffSeq)
+ }
+ }
+ }
+
+ if len(req.batches) == 0 { // txn req could have removed some partitions to retry later (unknown topic, etc.)
+ return moreToDrain
+ }
+
+ req.backoffSeq = s.backoffSeq // safe to read outside mu since we are in drain loop
+
+ produced = true
+
+ batches := req.batches.sliced()
+ s.doSequenced(req, func(br *broker, resp kmsg.Response, err error) {
+ s.handleReqResp(br, req, resp, err)
+ s.cl.producer.decInflight()
+ batches.eachOwnerLocked((*recBatch).decInflight)
+ <-sem
+ })
+ return moreToDrain
+}
+
+// With handleSeqResps below, this function ensures that all request responses
+// are handled in order. We use this guarantee while in handleReqResp below.
+func (s *sink) doSequenced(
+ req kmsg.Request,
+ promise func(*broker, kmsg.Response, error),
+) {
+ wait := &seqResp{
+ done: make(chan struct{}),
+ promise: promise,
+ }
+
+ // We can NOT use any record context. If we do, we force the request to
+ // fail while also force the batch to be unfailable (due to no
+ // response),
+ br, err := s.cl.brokerOrErr(s.cl.ctx, s.nodeID, errUnknownBroker)
+ if err != nil {
+ wait.err = err
+ close(wait.done)
+ } else {
+ br.do(s.cl.ctx, req, func(resp kmsg.Response, err error) {
+ wait.resp = resp
+ wait.err = err
+ close(wait.done)
+ })
+ wait.br = br
+ }
+
+ if first := s.seqResps.push(wait); first {
+ go s.handleSeqResps(wait)
+ }
+}
+
+// Ensures that all request responses are processed in order.
+func (s *sink) handleSeqResps(wait *seqResp) {
+ var more bool
+start:
+ <-wait.done
+ wait.promise(wait.br, wait.resp, wait.err)
+
+ wait, more = s.seqResps.dropPeek()
+ if more {
+ goto start
+ }
+}
+
+// Issues an AddPartitionsToTxnRequest before a produce request for all
+// partitions that need to be added to a transaction.
+func (s *sink) doTxnReq(
+ req *produceRequest,
+ txnReq *kmsg.AddPartitionsToTxnRequest,
+) (stripped bool, err error) {
+ // If we return an unretryable error, then we have to reset everything
+ // to not be in the transaction and begin draining at the start.
+ //
+ // These batches must be the first in their recBuf, because we would
+ // not be trying to add them to a partition if they were not.
+ defer func() {
+ if err != nil {
+ req.batches.eachOwnerLocked(seqRecBatch.removeFromTxn)
+ }
+ }()
+ // We do NOT let record context cancelations fail this request: doing
+ // so would put the transactional ID in an unknown state. This is
+ // similar to the warning we give in the txn.go file, but the
+ // difference there is the user knows explicitly at the function call
+ // that canceling the context will opt them into invalid state.
+ err = s.cl.doWithConcurrentTransactions(s.cl.ctx, "AddPartitionsToTxn", func() error {
+ stripped, err = s.issueTxnReq(req, txnReq)
+ return err
+ })
+ return stripped, err
+}
+
+// Removing a batch from the transaction means we will not be issuing it
+// inflight, and that it was not added to the txn and that we need to reset the
+// drain index.
+func (b *recBatch) removeFromTxn() {
+ b.owner.addedToTxn.Store(false)
+ b.owner.resetBatchDrainIdx()
+ b.decInflight()
+}
+
+func (s *sink) issueTxnReq(
+ req *produceRequest,
+ txnReq *kmsg.AddPartitionsToTxnRequest,
+) (stripped bool, fatalErr error) {
+ resp, err := txnReq.RequestWith(s.cl.ctx, s.cl)
+ if err != nil {
+ return false, err
+ }
+
+ for _, topic := range resp.Topics {
+ topicBatches, ok := req.batches[topic.Topic]
+ if !ok {
+ s.cl.cfg.logger.Log(LogLevelError, "broker replied with topic in AddPartitionsToTxnResponse that was not in request", "topic", topic.Topic)
+ continue
+ }
+ for _, partition := range topic.Partitions {
+ if err := kerr.ErrorForCode(partition.ErrorCode); err != nil {
+ // OperationNotAttempted is set for all partitions that are authorized
+ // if any partition is unauthorized _or_ does not exist. We simply remove
+ // unattempted partitions and treat them as retryable.
+ if !kerr.IsRetriable(err) && !errors.Is(err, kerr.OperationNotAttempted) {
+ fatalErr = err // auth err, etc
+ continue
+ }
+
+ batch, ok := topicBatches[partition.Partition]
+ if !ok {
+ s.cl.cfg.logger.Log(LogLevelError, "broker replied with partition in AddPartitionsToTxnResponse that was not in request", "topic", topic.Topic, "partition", partition.Partition)
+ continue
+ }
+
+ // We are stripping this retryable-err batch from the request,
+ // so we must reset that it has been added to the txn.
+ batch.owner.mu.Lock()
+ batch.removeFromTxn()
+ batch.owner.mu.Unlock()
+
+ stripped = true
+
+ delete(topicBatches, partition.Partition)
+ }
+ if len(topicBatches) == 0 {
+ delete(req.batches, topic.Topic)
+ }
+ }
+ }
+ return stripped, fatalErr
+}
+
+// firstRespCheck is effectively a sink.Once. On the first response, if the
+// used request version is at least 4, we upgrade our inflight sem.
+//
+// Starting on version 4, Kafka allowed five inflight requests while
+// maintaining idempotency. Before, only one was allowed.
+//
+// We go through an atomic because drain can be waiting on the sem (with
+// capacity one). We store four here, meaning new drain loops will load the
+// higher capacity sem without read/write pointer racing a current loop.
+//
+// This logic does mean that we will never use the full potential 5 in flight
+// outside of a small window during the store, but some pages in the Kafka
+// confluence basically show that more than two in flight has marginal benefit
+// anyway (although that may be due to their Java API).
+//
+// https://cwiki.apache.org/confluence/display/KAFKA/An+analysis+of+the+impact+of+max.in.flight.requests.per.connection+and+acks+on+Producer+performance
+// https://issues.apache.org/jira/browse/KAFKA-5494
+func (s *sink) firstRespCheck(idempotent bool, version int16) {
+ if s.produceVersion.Load() < 0 {
+ s.produceVersion.Store(int32(version))
+ if idempotent && version >= 4 {
+ s.inflightSem.Store(make(chan struct{}, 4))
+ }
+ }
+}
+
+// handleReqClientErr is called when the client errors before receiving a
+// produce response.
+func (s *sink) handleReqClientErr(req *produceRequest, err error) {
+ switch {
+ default:
+ s.cl.cfg.logger.Log(LogLevelWarn, "random error while producing, requeueing unattempted request", "broker", logID(s.nodeID), "err", err)
+ fallthrough
+
+ case errors.Is(err, errUnknownBroker),
+ isDialNonTimeoutErr(err),
+ isRetryableBrokerErr(err):
+ updateMeta := !isRetryableBrokerErr(err)
+ if updateMeta {
+ s.cl.cfg.logger.Log(LogLevelInfo, "produce request failed, triggering metadata update", "broker", logID(s.nodeID), "err", err)
+ }
+ s.handleRetryBatches(req.batches, nil, req.backoffSeq, updateMeta, false, "failed produce request triggered metadata update")
+
+ case errors.Is(err, ErrClientClosed):
+ s.cl.failBufferedRecords(ErrClientClosed)
+ }
+}
+
+// No acks mean no response. The following block is basically an extremely
+// condensed version of the logic in handleReqResp.
+func (s *sink) handleReqRespNoack(b *bytes.Buffer, debug bool, req *produceRequest) {
+ if debug {
+ fmt.Fprintf(b, "noack ")
+ }
+ for topic, partitions := range req.batches {
+ if debug {
+ fmt.Fprintf(b, "%s[", topic)
+ }
+ for partition, batch := range partitions {
+ batch.owner.mu.Lock()
+ if batch.isOwnersFirstBatch() {
+ if debug {
+ fmt.Fprintf(b, "%d{0=>%d}, ", partition, len(batch.records))
+ }
+ s.cl.finishBatch(batch.recBatch, req.producerID, req.producerEpoch, partition, 0, nil)
+ } else if debug {
+ fmt.Fprintf(b, "%d{skipped}, ", partition)
+ }
+ batch.owner.mu.Unlock()
+ }
+ if debug {
+ if bytes.HasSuffix(b.Bytes(), []byte(", ")) {
+ b.Truncate(b.Len() - 2)
+ }
+ b.WriteString("], ")
+ }
+ }
+}
+
+func (s *sink) handleReqResp(br *broker, req *produceRequest, resp kmsg.Response, err error) {
+ if err != nil {
+ s.handleReqClientErr(req, err)
+ return
+ }
+ s.firstRespCheck(req.idempotent(), req.version)
+ s.consecutiveFailures.Store(0)
+ defer req.metrics.hook(&s.cl.cfg, br) // defer to end so that non-written batches are removed
+
+ var b *bytes.Buffer
+ debug := s.cl.cfg.logger.Level() >= LogLevelDebug
+ if debug {
+ b = bytes.NewBuffer(make([]byte, 0, 128))
+ defer func() {
+ update := b.String()
+ update = strings.TrimSuffix(update, ", ")
+ s.cl.cfg.logger.Log(LogLevelDebug, "produced", "broker", logID(s.nodeID), "to", update)
+ }()
+ }
+
+ if req.acks == 0 {
+ s.handleReqRespNoack(b, debug, req)
+ return
+ }
+
+ var kmove kip951move
+ var reqRetry seqRecBatches // handled at the end
+
+ kresp := resp.(*kmsg.ProduceResponse)
+ for i := range kresp.Topics {
+ rt := &kresp.Topics[i]
+ topic := rt.Topic
+ partitions, ok := req.batches[topic]
+ if !ok {
+ s.cl.cfg.logger.Log(LogLevelError, "broker erroneously replied with topic in produce request that we did not produce to", "broker", logID(s.nodeID), "topic", topic)
+ delete(req.metrics, topic)
+ continue // should not hit this
+ }
+
+ if debug {
+ fmt.Fprintf(b, "%s[", topic)
+ }
+
+ tmetrics := req.metrics[topic]
+ for j := range rt.Partitions {
+ rp := &rt.Partitions[j]
+ partition := rp.Partition
+ batch, ok := partitions[partition]
+ if !ok {
+ s.cl.cfg.logger.Log(LogLevelError, "broker erroneously replied with partition in produce request that we did not produce to", "broker", logID(s.nodeID), "topic", rt.Topic, "partition", partition)
+ delete(tmetrics, partition)
+ continue // should not hit this
+ }
+ delete(partitions, partition)
+
+ retry, didProduce := s.handleReqRespBatch(
+ b,
+ &kmove,
+ kresp,
+ topic,
+ rp,
+ batch,
+ req.producerID,
+ req.producerEpoch,
+ )
+ if retry {
+ reqRetry.addSeqBatch(topic, partition, batch)
+ }
+ if !didProduce {
+ delete(tmetrics, partition)
+ }
+ }
+
+ if debug {
+ if bytes.HasSuffix(b.Bytes(), []byte(", ")) {
+ b.Truncate(b.Len() - 2)
+ }
+ b.WriteString("], ")
+ }
+
+ if len(partitions) == 0 {
+ delete(req.batches, topic)
+ }
+ }
+
+ if len(req.batches) > 0 {
+ s.cl.cfg.logger.Log(LogLevelError, "broker did not reply to all topics / partitions in the produce request! reenqueuing missing partitions", "broker", logID(s.nodeID))
+ s.handleRetryBatches(req.batches, nil, 0, true, false, "broker did not reply to all topics in produce request")
+ }
+ if len(reqRetry) > 0 {
+ s.handleRetryBatches(reqRetry, &kmove, 0, true, true, "produce request had retry batches")
+ }
+}
+
+func (s *sink) handleReqRespBatch(
+ b *bytes.Buffer,
+ kmove *kip951move,
+ resp *kmsg.ProduceResponse,
+ topic string,
+ rp *kmsg.ProduceResponseTopicPartition,
+ batch seqRecBatch,
+ producerID int64,
+ producerEpoch int16,
+) (retry, didProduce bool) {
+ batch.owner.mu.Lock()
+ defer batch.owner.mu.Unlock()
+
+ nrec := len(batch.records)
+
+ debug := b != nil
+ if debug {
+ fmt.Fprintf(b, "%d{", rp.Partition)
+ }
+
+ // We only ever operate on the first batch in a record buf. Batches
+ // work sequentially; if this is not the first batch then an error
+ // happened and this later batch is no longer a part of a seq chain.
+ if !batch.isOwnersFirstBatch() {
+ if debug {
+ if err := kerr.ErrorForCode(rp.ErrorCode); err == nil {
+ if nrec > 0 {
+ fmt.Fprintf(b, "skipped@%d=>%d}, ", rp.BaseOffset, rp.BaseOffset+int64(nrec))
+ } else {
+ fmt.Fprintf(b, "skipped@%d}, ", rp.BaseOffset)
+ }
+ } else {
+ if nrec > 0 {
+ fmt.Fprintf(b, "skipped@%d,%d(%s)}, ", rp.BaseOffset, nrec, err)
+ } else {
+ fmt.Fprintf(b, "skipped@%d(%s)}, ", rp.BaseOffset, err)
+ }
+ }
+ }
+ return false, false
+ }
+
+ // Since we have received a response and we are the first batch, we can
+ // at this point re-enable failing from load errors.
+ //
+ // We do not need a lock since the owner is locked.
+ batch.canFailFromLoadErrs = true
+
+ // By default, we assume we errored. Non-error updates this back
+ // to true.
+ batch.owner.okOnSink = false
+
+ if moving := kmove.maybeAddProducePartition(resp, rp, batch.owner); moving {
+ if debug {
+ fmt.Fprintf(b, "move:%d:%d@%d,%d}, ", rp.CurrentLeader.LeaderID, rp.CurrentLeader.LeaderEpoch, rp.BaseOffset, nrec)
+ }
+ batch.owner.failing = true
+ return true, false
+ }
+
+ err := kerr.ErrorForCode(rp.ErrorCode)
+ failUnknown := batch.owner.checkUnknownFailLimit(err)
+ switch {
+ case kerr.IsRetriable(err) &&
+ !failUnknown &&
+ err != kerr.CorruptMessage &&
+ batch.tries < s.cl.cfg.recordRetries:
+
+ if debug {
+ fmt.Fprintf(b, "retrying@%d,%d(%s)}, ", rp.BaseOffset, nrec, err)
+ }
+ return true, false
+
+ case err == kerr.OutOfOrderSequenceNumber,
+ err == kerr.UnknownProducerID,
+ err == kerr.InvalidProducerIDMapping,
+ err == kerr.InvalidProducerEpoch:
+
+ // OOOSN always means data loss 1.0+ and is ambiguous prior.
+ // We assume the worst and only continue if requested.
+ //
+ // UnknownProducerID was introduced to allow some form of safe
+ // handling, but KIP-360 demonstrated that resetting sequence
+ // numbers is fundamentally unsafe, so we treat it like OOOSN.
+ //
+ // InvalidMapping is similar to UnknownProducerID, but occurs
+ // when the txnal coordinator timed out our transaction.
+ //
+ // 2.5
+ // =====
+ // 2.5 introduced some behavior to potentially safely reset
+ // the sequence numbers by bumping an epoch (see KIP-360).
+ //
+ // For the idempotent producer, the solution is to fail all
+ // buffered records and then let the client user reset things
+ // with the understanding that they cannot guard against
+ // potential dups / reordering at that point. Realistically,
+ // that's no better than a config knob that allows the user
+ // to continue (our stopOnDataLoss flag), so for the idempotent
+ // producer, if stopOnDataLoss is false, we just continue.
+ //
+ // For the transactional producer, we always fail the producerID.
+ // EndTransaction will trigger recovery if possible.
+ //
+ // 2.7
+ // =====
+ // InvalidProducerEpoch became retryable in 2.7. Prior, it
+ // was ambiguous (timeout? fenced?). Now, InvalidProducerEpoch
+ // is only returned on produce, and then we can recover on other
+ // txn coordinator requests, which have PRODUCER_FENCED vs
+ // TRANSACTION_TIMED_OUT.
+
+ if s.cl.cfg.txnID != nil || s.cl.cfg.stopOnDataLoss {
+ s.cl.cfg.logger.Log(LogLevelInfo, "batch errored, failing the producer ID",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", rp.Partition,
+ "producer_id", producerID,
+ "producer_epoch", producerEpoch,
+ "err", err,
+ )
+ s.cl.failProducerID(producerID, producerEpoch, err)
+
+ s.cl.finishBatch(batch.recBatch, producerID, producerEpoch, rp.Partition, rp.BaseOffset, err)
+ if debug {
+ fmt.Fprintf(b, "fatal@%d,%d(%s)}, ", rp.BaseOffset, nrec, err)
+ }
+ return false, false
+ }
+ if s.cl.cfg.onDataLoss != nil {
+ s.cl.cfg.onDataLoss(topic, rp.Partition)
+ }
+
+ // For OOOSN, and UnknownProducerID
+ //
+ // The only recovery is to fail the producer ID, which ensures
+ // that all batches reset sequence numbers and use a new producer
+ // ID on the next batch.
+ //
+ // For InvalidProducerIDMapping && InvalidProducerEpoch,
+ //
+ // We should not be here, since this error occurs in the
+ // context of transactions, which are caught above.
+ s.cl.cfg.logger.Log(LogLevelInfo, fmt.Sprintf("batch errored with %s, failing the producer ID and resetting all sequence numbers", err.(*kerr.Error).Message),
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", rp.Partition,
+ "producer_id", producerID,
+ "producer_epoch", producerEpoch,
+ "err", err,
+ )
+
+ // After we fail here, any new produce (even new ones
+ // happening concurrent with this function) will load
+ // a new epoch-bumped producer ID and all first-batches
+ // will reset sequence numbers appropriately.
+ s.cl.failProducerID(producerID, producerEpoch, errReloadProducerID)
+ if debug {
+ fmt.Fprintf(b, "resetting@%d,%d(%s)}, ", rp.BaseOffset, nrec, err)
+ }
+ return true, false
+
+ case err == kerr.DuplicateSequenceNumber: // ignorable, but we should not get
+ s.cl.cfg.logger.Log(LogLevelInfo, "received unexpected duplicate sequence number, ignoring and treating batch as successful",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", rp.Partition,
+ )
+ err = nil
+ fallthrough
+ default:
+ if err != nil {
+ s.cl.cfg.logger.Log(LogLevelInfo, "batch in a produce request failed",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", rp.Partition,
+ "err", err,
+ "err_is_retryable", kerr.IsRetriable(err),
+ "max_retries_reached", !failUnknown && batch.tries >= s.cl.cfg.recordRetries,
+ )
+ } else {
+ batch.owner.okOnSink = true
+ }
+ s.cl.finishBatch(batch.recBatch, producerID, producerEpoch, rp.Partition, rp.BaseOffset, err)
+ didProduce = err == nil
+ if debug {
+ if err != nil {
+ fmt.Fprintf(b, "err@%d,%d(%s)}, ", rp.BaseOffset, nrec, err)
+ } else {
+ fmt.Fprintf(b, "%d=>%d}, ", rp.BaseOffset, rp.BaseOffset+int64(nrec))
+ }
+ }
+ }
+ return false, didProduce // no retry
+}
+
+// finishBatch removes a batch from its owning record buffer and finishes all
+// records in the batch.
+//
+// This is safe even if the owning recBuf migrated sinks, since we are
+// finishing based off the status of an inflight req from the original sink.
+func (cl *Client) finishBatch(batch *recBatch, producerID int64, producerEpoch int16, partition int32, baseOffset int64, err error) {
+ recBuf := batch.owner
+
+ if err != nil {
+ // We know that Kafka replied this batch is a failure. We can
+ // fail this batch and all batches in this partition.
+ // This will keep sequence numbers correct.
+ recBuf.failAllRecords(err)
+ return
+ }
+
+ // We know the batch made it to Kafka successfully without error.
+ // We remove this batch and finish all records appropriately.
+ finished := len(batch.records)
+ recBuf.batch0Seq = incrementSequence(recBuf.batch0Seq, int32(finished))
+ recBuf.buffered.Add(-int64(finished))
+ recBuf.batches[0] = nil
+ recBuf.batches = recBuf.batches[1:]
+ recBuf.batchDrainIdx--
+
+ batch.mu.Lock()
+ records, attrs := batch.records, batch.attrs
+ batch.records = nil
+ batch.mu.Unlock()
+
+ cl.producer.promiseBatch(batchPromise{
+ baseOffset: baseOffset,
+ pid: producerID,
+ epoch: producerEpoch,
+ // A recBuf.attrs is updated when appending to be written. For
+ // v0 && v1 produce requests, we set bit 8 in the attrs
+ // corresponding to our own RecordAttr's bit 8 being no
+ // timestamp type. Thus, we can directly convert the batch
+ // attrs to our own RecordAttrs.
+ attrs: RecordAttrs{uint8(attrs)},
+ partition: partition,
+ recs: records,
+ })
+}
+
+// handleRetryBatches sets any first-buf-batch to failing and triggers a
+// metadata that will eventually clear the failing state and re-drain.
+//
+// If idempotency is disabled, if a batch is timed out or hit the retry limit,
+// we fail it and anything after it.
+func (s *sink) handleRetryBatches(
+ retry seqRecBatches,
+ kmove *kip951move,
+ backoffSeq uint32,
+ updateMeta bool, // if we should maybe update the metadata
+ canFail bool, // if records can fail if they are at limits
+ why string,
+) {
+ logger := s.cl.cfg.logger
+ debug := logger.Level() >= LogLevelDebug
+ var needsMetaUpdate bool
+ var shouldBackoff bool
+ if kmove != nil {
+ defer kmove.maybeBeginMove(s.cl)
+ }
+ var numRetryBatches, numMoveBatches int
+ retry.eachOwnerLocked(func(batch seqRecBatch) {
+ numRetryBatches++
+ if !batch.isOwnersFirstBatch() {
+ if debug {
+ logger.Log(LogLevelDebug, "retry batch is not the first batch in the owner, skipping result",
+ "topic", batch.owner.topic,
+ "partition", batch.owner.partition,
+ )
+ }
+ return
+ }
+
+ // If the request failed due to a concurrent metadata update
+ // moving partitions to a different sink (or killing the sink
+ // this partition was on), we can just reset the drain index
+ // and trigger draining now the new sink. There is no reason
+ // to backoff on this sink nor trigger a metadata update.
+ if batch.owner.sink != s {
+ if debug {
+ logger.Log(LogLevelDebug, "transitioned sinks while a request was inflight, retrying immediately on new sink without backoff",
+ "topic", batch.owner.topic,
+ "partition", batch.owner.partition,
+ "old_sink", s.nodeID,
+ "new_sink", batch.owner.sink.nodeID,
+ )
+ }
+ batch.owner.resetBatchDrainIdx()
+ return
+ }
+
+ if canFail || s.cl.cfg.disableIdempotency {
+ if err := batch.maybeFailErr(&s.cl.cfg); err != nil {
+ batch.owner.failAllRecords(err)
+ return
+ }
+ }
+
+ batch.owner.resetBatchDrainIdx()
+
+ // Now that the batch drain index is reset, if this retry is
+ // caused from a moving batch, return early. We do not need
+ // to backoff nor do we need to trigger a metadata update.
+ if kmove.hasRecBuf(batch.owner) {
+ numMoveBatches++
+ return
+ }
+
+ // If our first batch (seq == 0) fails with unknown topic, we
+ // retry immediately. Kafka can reply with valid metadata
+ // immediately after a topic was created, before the leaders
+ // actually know they are leader.
+ unknownAndFirstBatch := batch.owner.unknownFailures == 1 && batch.owner.seq == 0
+
+ if unknownAndFirstBatch {
+ shouldBackoff = true
+ return
+ }
+ if updateMeta {
+ batch.owner.failing = true
+ needsMetaUpdate = true
+ }
+ })
+
+ if debug {
+ logger.Log(LogLevelDebug, "retry batches processed",
+ "wanted_metadata_update", updateMeta,
+ "triggering_metadata_update", needsMetaUpdate,
+ "should_backoff", shouldBackoff,
+ )
+ }
+
+ // If we do want to metadata update, we only do so if any batch was the
+ // first batch in its buf / not concurrently failed.
+ if needsMetaUpdate {
+ s.cl.triggerUpdateMetadata(true, why)
+ return
+ }
+
+ // We could not need a metadata update for two reasons:
+ //
+ // * our request died when being issued
+ //
+ // * we would update metadata, but what failed was the first batch
+ // produced and the error was unknown topic / partition.
+ //
+ // In either of these cases, we should backoff a little bit to avoid
+ // spin looping.
+ //
+ // If neither of these cases are true, then we entered wanting a
+ // metadata update, but the batches either were not the first batch, or
+ // the batches were concurrently failed.
+ //
+ // If all partitions are moving, we do not need to backoff nor drain.
+ if shouldBackoff || (!updateMeta && numRetryBatches != numMoveBatches) {
+ s.maybeTriggerBackoff(backoffSeq)
+ s.maybeDrain()
+ }
+}
+
+// addRecBuf adds a new record buffer to be drained to a sink and clears the
+// buffer's failing state.
+func (s *sink) addRecBuf(add *recBuf) {
+ s.recBufsMu.Lock()
+ add.recBufsIdx = len(s.recBufs)
+ s.recBufs = append(s.recBufs, add)
+ s.recBufsMu.Unlock()
+
+ add.clearFailing()
+}
+
+// removeRecBuf removes a record buffer from a sink.
+func (s *sink) removeRecBuf(rm *recBuf) {
+ s.recBufsMu.Lock()
+ defer s.recBufsMu.Unlock()
+
+ if rm.recBufsIdx != len(s.recBufs)-1 {
+ s.recBufs[rm.recBufsIdx], s.recBufs[len(s.recBufs)-1] = s.recBufs[len(s.recBufs)-1], nil
+ s.recBufs[rm.recBufsIdx].recBufsIdx = rm.recBufsIdx
+ } else {
+ s.recBufs[rm.recBufsIdx] = nil // do not let this removal hang around
+ }
+
+ s.recBufs = s.recBufs[:len(s.recBufs)-1]
+ if s.recBufsStart == len(s.recBufs) {
+ s.recBufsStart = 0
+ }
+}
+
+// recBuf is a buffer of records being produced to a partition and being
+// drained by a sink. This is only not drained if the partition has a load
+// error and thus does not a have a sink to be drained into.
+type recBuf struct {
+ cl *Client // for cfg, record finishing
+
+ topic string
+ partition int32
+
+ // The number of bytes we can buffer in a batch for this particular
+ // topic/partition. This may be less than the configured
+ // maxRecordBatchBytes because of produce request overhead.
+ maxRecordBatchBytes int32
+
+ // addedToTxn, for transactions only, signifies whether this partition
+ // has been added to the transaction yet or not.
+ addedToTxn atomicBool
+
+ // For LoadTopicPartitioner partitioning; atomically tracks the number
+ // of records buffered in total on this recBuf.
+ buffered atomicI64
+
+ mu sync.Mutex // guards r/w access to all fields below
+
+ // sink is who is currently draining us. This can be modified
+ // concurrently during a metadata update.
+ //
+ // The first set to a non-nil sink is done without a mutex.
+ //
+ // Since only metadata updates can change the sink, metadata updates
+ // also read this without a mutex.
+ sink *sink
+ // recBufsIdx is our index into our current sink's recBufs field.
+ // This exists to aid in removing the buffer from the sink.
+ recBufsIdx int
+
+ // A concurrent metadata update can move a recBuf from one sink to
+ // another while requests are inflight on the original sink. We do not
+ // want to allow new requests to start on the new sink until they all
+ // finish on the old, because with some pathological request order
+ // finishing, we would allow requests to finish out of order:
+ // handleSeqResps works per sink, not across sinks.
+ inflightOnSink *sink
+ // We only want to allow more than 1 inflight on a sink *if* we are
+ // currently receiving successful responses. Unimportantly, this allows
+ // us to save resources if the broker is having a problem or just
+ // recovered from one. Importantly, we work around an edge case in
+ // Kafka. Kafka will accept the first produce request for a pid/epoch
+ // with *any* sequence number. Say we sent two requests inflight. The
+ // first request Kafka replies to with NOT_LEADER_FOR_PARTITION, the
+ // second, the broker finished setting up and accepts. The broker now
+ // has the second request but not the first, we will retry both
+ // requests and receive OOOSN, and the broker has logs out of order.
+ // By only allowing more than one inflight if we have seen an ok
+ // response, we largely eliminate risk of this problem. See #223 for
+ // more details.
+ okOnSink bool
+ // Inflight tracks the number of requests inflight using batches from
+ // this recBuf. Every time this hits zero, if the batchDrainIdx is not
+ // at the end, we clear inflightOnSink and trigger the *current* sink
+ // to drain.
+ inflight uint8
+
+ topicPartitionData // updated in metadata migrateProductionTo (same spot sink is updated)
+
+ // seq is used for the seq in each record batch. It is incremented when
+ // produce requests are made and can be reset on errors to batch0Seq.
+ //
+ // If idempotency is disabled, we just use "0" for the first sequence
+ // when encoding our payload.
+ //
+ // This is also used to check the first batch produced (disregarding
+ // seq resets) -- see handleRetryBatches.
+ seq int32
+ // batch0Seq is the seq of the batch at batchDrainIdx 0. If we reset
+ // the drain index, we reset seq with this number. If we successfully
+ // finish batch 0, we bump this.
+ batch0Seq int32
+ // If we need to reset sequence numbers, we set needSeqReset, and then
+ // when we use the **first** batch, we reset sequences to 0.
+ needSeqReset bool
+
+ // batches is our list of buffered records. Batches are appended as the
+ // final batch crosses size thresholds or as drain freezes batches from
+ // further modification.
+ //
+ // Most functions in a sink only operate on a batch if the batch is the
+ // first batch in a buffer. This is necessary to ensure that all
+ // records are truly finished without error in order.
+ batches []*recBatch
+ // batchDrainIdx is where the next batch will drain from. We only
+ // remove from the head of batches when a batch is finished.
+ // This is read while buffering and modified in a few places.
+ batchDrainIdx int
+
+ // If we fail with UNKNOWN_TOPIC_OR_PARTITION, we bump this and fail
+ // all records once this exceeds the config's unknown topic fail limit.
+ // If we ever see a different error (or no error), this is reset.
+ unknownFailures int64
+
+ // lingering is a timer that avoids starting maybeDrain until expiry,
+ // allowing for more records to be buffered in a single batch.
+ //
+ // Note that if something else starts a drain, if the first batch of
+ // this buffer fits into the request, it will be used.
+ //
+ // This is on recBuf rather than Sink to avoid some complicated
+ // interactions of triggering the sink to loop or not. Ideally, with
+ // the sticky partition hashers, we will only have a few partitions
+ // lingering and that this is on a RecBuf should not matter.
+ lingering *time.Timer
+
+ // failing is set when we encounter a temporary partition error during
+ // producing, such as UnknownTopicOrPartition (signifying the partition
+ // moved to a different broker).
+ //
+ // It is always cleared on metadata update.
+ failing bool
+
+ // Only possibly set in PurgeTopics, this is used to fail anything that
+ // was in the process of being buffered.
+ purged bool
+}
+
+// bufferRecord usually buffers a record, but does not if abortOnNewBatch is
+// true and if this function would create a new batch.
+//
+// This returns whether the promised record was processed or not (buffered or
+// immediately errored).
+func (recBuf *recBuf) bufferRecord(pr promisedRec, abortOnNewBatch bool) bool {
+ recBuf.mu.Lock()
+ defer recBuf.mu.Unlock()
+
+ // We truncate to milliseconds to avoid some accumulated rounding error
+ // problems (see IBM/sarama#1455)
+ if pr.Timestamp.IsZero() {
+ pr.Timestamp = time.Now()
+ }
+ pr.Timestamp = pr.Timestamp.Truncate(time.Millisecond)
+ pr.Partition = recBuf.partition // set now, for the hook below
+
+ if recBuf.purged {
+ recBuf.cl.producer.promiseRecord(pr, errPurged)
+ return true
+ }
+
+ var (
+ newBatch = true
+ onDrainBatch = recBuf.batchDrainIdx == len(recBuf.batches)
+ produceVersion = recBuf.sink.produceVersion.Load()
+ )
+
+ if !onDrainBatch {
+ batch := recBuf.batches[len(recBuf.batches)-1]
+ appended, _ := batch.tryBuffer(pr, produceVersion, recBuf.maxRecordBatchBytes, false)
+ newBatch = !appended
+ }
+
+ if newBatch {
+ newBatch := recBuf.newRecordBatch()
+ appended, aborted := newBatch.tryBuffer(pr, produceVersion, recBuf.maxRecordBatchBytes, abortOnNewBatch)
+
+ switch {
+ case aborted: // not processed
+ return false
+ case appended: // we return true below
+ default: // processed as failure
+ recBuf.cl.producer.promiseRecord(pr, kerr.MessageTooLarge)
+ return true
+ }
+
+ recBuf.batches = append(recBuf.batches, newBatch)
+ }
+
+ if recBuf.cl.cfg.linger == 0 {
+ if onDrainBatch {
+ recBuf.sink.maybeDrain()
+ }
+ } else {
+ // With linger, if this is a new batch but not the first, we
+ // stop lingering and begin draining. The drain loop will
+ // restart our linger once this buffer has one batch left.
+ if newBatch && !onDrainBatch ||
+ // If this is the first batch, try lingering; if
+ // we cannot, we are being flushed and must drain.
+ onDrainBatch && !recBuf.lockedMaybeStartLinger() {
+ recBuf.lockedStopLinger()
+ recBuf.sink.maybeDrain()
+ }
+ }
+
+ recBuf.buffered.Add(1)
+
+ if recBuf.cl.producer.hooks != nil && len(recBuf.cl.producer.hooks.partitioned) > 0 {
+ for _, h := range recBuf.cl.producer.hooks.partitioned {
+ h.OnProduceRecordPartitioned(pr.Record, recBuf.sink.nodeID)
+ }
+ }
+
+ return true
+}
+
+// Stops lingering, potentially restarting it, and returns whether there is
+// more to drain.
+//
+// If lingering, if there are more than one batches ready, there is definitely
+// more to drain and we should not linger. Otherwise, if we cannot restart
+// lingering, then we are flushing and also indicate there is more to drain.
+func (recBuf *recBuf) tryStopLingerForDraining() bool {
+ recBuf.lockedStopLinger()
+ canLinger := recBuf.cl.cfg.linger == 0
+ moreToDrain := !canLinger && len(recBuf.batches) > recBuf.batchDrainIdx ||
+ canLinger && (len(recBuf.batches) > recBuf.batchDrainIdx+1 ||
+ len(recBuf.batches) == recBuf.batchDrainIdx+1 && !recBuf.lockedMaybeStartLinger())
+ return moreToDrain
+}
+
+// Begins a linger timer unless the producer is being flushed.
+func (recBuf *recBuf) lockedMaybeStartLinger() bool {
+ if recBuf.cl.producer.flushing.Load() > 0 || recBuf.cl.producer.blocked.Load() > 0 {
+ return false
+ }
+ recBuf.lingering = time.AfterFunc(recBuf.cl.cfg.linger, recBuf.sink.maybeDrain)
+ return true
+}
+
+func (recBuf *recBuf) lockedStopLinger() {
+ if recBuf.lingering != nil {
+ recBuf.lingering.Stop()
+ recBuf.lingering = nil
+ }
+}
+
+func (recBuf *recBuf) unlingerAndManuallyDrain() {
+ recBuf.mu.Lock()
+ defer recBuf.mu.Unlock()
+ recBuf.lockedStopLinger()
+ recBuf.sink.maybeDrain()
+}
+
+// bumpRepeatedLoadErr is provided to bump a buffer's number of consecutive
+// load errors during metadata updates.
+//
+// Partition load errors are generally temporary (leader/listener/replica not
+// available), and this try bump is not expected to do much. If for some reason
+// a partition errors for a long time and we are not idempotent, this function
+// drops all buffered records.
+func (recBuf *recBuf) bumpRepeatedLoadErr(err error) {
+ recBuf.mu.Lock()
+ defer recBuf.mu.Unlock()
+ if len(recBuf.batches) == 0 {
+ return
+ }
+ batch0 := recBuf.batches[0]
+ batch0.tries++
+
+ // We need to lock the batch as well because there could be a buffered
+ // request about to be written. Writing requests only grabs the batch
+ // mu, not the recBuf mu.
+ batch0.mu.Lock()
+ var (
+ canFail = !recBuf.cl.idempotent() || batch0.canFailFromLoadErrs // we can only fail if we are not idempotent or if we have no outstanding requests
+ batch0Fail = batch0.maybeFailErr(&recBuf.cl.cfg) != nil // timeout, retries, or aborting
+ netErr = isRetryableBrokerErr(err) || isDialNonTimeoutErr(err) // we can fail if this is *not* a network error
+ retryableKerr = kerr.IsRetriable(err) // we fail if this is not a retryable kerr,
+ isUnknownLimit = recBuf.checkUnknownFailLimit(err) // or if it is, but it is UnknownTopicOrPartition and we are at our limit
+
+ willFail = canFail && (batch0Fail || !netErr && (!retryableKerr || retryableKerr && isUnknownLimit))
+ )
+ batch0.isFailingFromLoadErr = willFail
+ batch0.mu.Unlock()
+
+ recBuf.cl.cfg.logger.Log(LogLevelWarn, "produce partition load error, bumping error count on first stored batch",
+ "broker", logID(recBuf.sink.nodeID),
+ "topic", recBuf.topic,
+ "partition", recBuf.partition,
+ "err", err,
+ "can_fail", canFail,
+ "batch0_should_fail", batch0Fail,
+ "is_network_err", netErr,
+ "is_retryable_kerr", retryableKerr,
+ "is_unknown_limit", isUnknownLimit,
+ "will_fail", willFail,
+ )
+
+ if willFail {
+ recBuf.failAllRecords(err)
+ }
+}
+
+// Called locked, if err is an unknown error, bumps our limit, otherwise resets
+// it. This returns if we have reached or exceeded the limit.
+func (recBuf *recBuf) checkUnknownFailLimit(err error) bool {
+ if errors.Is(err, kerr.UnknownTopicOrPartition) {
+ recBuf.unknownFailures++
+ } else {
+ recBuf.unknownFailures = 0
+ }
+ return recBuf.cl.cfg.maxUnknownFailures >= 0 && recBuf.unknownFailures > recBuf.cl.cfg.maxUnknownFailures
+}
+
+// failAllRecords fails all buffered records in this recBuf.
+// This is used anywhere where we have to fail and remove an entire batch,
+// if we just removed the one batch, the seq num chain would be broken.
+//
+// - from fatal InitProducerID or AddPartitionsToTxn
+// - from client closing
+// - if not idempotent && hit retry / timeout limit
+// - if batch fails fatally when producing
+func (recBuf *recBuf) failAllRecords(err error) {
+ recBuf.lockedStopLinger()
+ for _, batch := range recBuf.batches {
+ // We need to guard our clearing of records against a
+ // concurrent produceRequest's write, which can have this batch
+ // buffered wile we are failing.
+ //
+ // We do not need to worry about concurrent recBuf
+ // modifications to this batch because the recBuf is already
+ // locked.
+ batch.mu.Lock()
+ records := batch.records
+ batch.records = nil
+ batch.mu.Unlock()
+
+ recBuf.cl.producer.promiseBatch(batchPromise{
+ recs: records,
+ err: err,
+ })
+ }
+ recBuf.resetBatchDrainIdx()
+ recBuf.buffered.Store(0)
+ recBuf.batches = nil
+}
+
+// clearFailing clears a buffer's failing state if it is failing.
+//
+// This is called when a buffer is added to a sink (to clear a failing state
+// from migrating buffers between sinks) or when a metadata update sees the
+// sink is still on the same source.
+func (recBuf *recBuf) clearFailing() {
+ recBuf.mu.Lock()
+ defer recBuf.mu.Unlock()
+
+ recBuf.failing = false
+ if len(recBuf.batches) != recBuf.batchDrainIdx {
+ recBuf.sink.maybeDrain()
+ }
+}
+
+func (recBuf *recBuf) resetBatchDrainIdx() {
+ recBuf.seq = recBuf.batch0Seq
+ recBuf.batchDrainIdx = 0
+}
+
+// promisedRec ties a record with the callback that will be called once
+// a batch is finally written and receives a response.
+type promisedRec struct {
+ ctx context.Context
+ promise func(*Record, error)
+ *Record
+}
+
+func (pr promisedRec) cancelingCtx() context.Context {
+ if pr.ctx.Done() != nil {
+ return pr.ctx
+ }
+ if pr.Context.Done() != nil {
+ return pr.Context
+ }
+ return nil
+}
+
+// recBatch is the type used for buffering records before they are written.
+type recBatch struct {
+ owner *recBuf // who owns us
+
+ tries int64 // if this was sent before and is thus now immutable
+
+ // We can only fail a batch if we have never issued it, or we have
+ // issued it and have received a response. If we do not receive a
+ // response, we cannot know whether we actually wrote bytes that Kafka
+ // processed or not. So, we set this to false every time we issue a
+ // request with this batch, and then reset it to true whenever we
+ // process a response.
+ canFailFromLoadErrs bool
+ // If we are going to fail the batch in bumpRepeatedLoadErr, we need to
+ // set this bool to true. There could be a concurrent request about to
+ // be written. See more comments below where this is used.
+ isFailingFromLoadErr bool
+
+ wireLength int32 // tracks total size this batch would currently encode as, including length prefix
+ v1wireLength int32 // same as wireLength, but for message set v1
+
+ attrs int16 // updated during apending; read and converted to RecordAttrs on success
+ firstTimestamp int64 // since unix epoch, in millis
+ maxTimestampDelta int64
+
+ mu sync.Mutex // guards appendTo's reading of records against failAllRecords emptying it
+ records []promisedRec // record w/ length, ts calculated
+}
+
+// Returns an error if the batch should fail.
+func (b *recBatch) maybeFailErr(cfg *cfg) error {
+ if len(b.records) > 0 {
+ r0 := &b.records[0]
+ select {
+ case <-r0.ctx.Done():
+ return r0.ctx.Err()
+ case <-r0.Context.Done():
+ return r0.Context.Err()
+ default:
+ }
+ }
+ switch {
+ case b.isTimedOut(cfg.recordTimeout):
+ return ErrRecordTimeout
+ case b.tries >= cfg.recordRetries:
+ return ErrRecordRetries
+ case b.owner.cl.producer.isAborting():
+ return ErrAborting
+ }
+ return nil
+}
+
+func (b *recBatch) v0wireLength() int32 { return b.v1wireLength - 8 } // no timestamp
+func (b *recBatch) batchLength() int32 { return b.wireLength - 4 } // no length prefix
+func (b *recBatch) flexibleWireLength() int32 { // uvarint length prefix
+ batchLength := b.batchLength()
+ return int32(kbin.UvarintLen(uvar32(batchLength))) + batchLength
+}
+
+// appendRecord saves a new record to a batch.
+//
+// This is called under the owning recBuf's mu, meaning records cannot be
+// concurrently modified by failing. This batch cannot actively be used
+// in a request, so we do not need to worry about a concurrent read.
+func (b *recBatch) appendRecord(pr promisedRec, nums recordNumbers) {
+ b.wireLength += nums.wireLength()
+ b.v1wireLength += messageSet1Length(pr.Record)
+ if len(b.records) == 0 {
+ b.firstTimestamp = pr.Timestamp.UnixNano() / 1e6
+ } else if nums.tsDelta > b.maxTimestampDelta {
+ b.maxTimestampDelta = nums.tsDelta
+ }
+ b.records = append(b.records, pr)
+}
+
+// newRecordBatch returns a new record batch for a topic and partition.
+func (recBuf *recBuf) newRecordBatch() *recBatch {
+ const recordBatchOverhead = 4 + // array len
+ 8 + // firstOffset
+ 4 + // batchLength
+ 4 + // partitionLeaderEpoch
+ 1 + // magic
+ 4 + // crc
+ 2 + // attributes
+ 4 + // lastOffsetDelta
+ 8 + // firstTimestamp
+ 8 + // maxTimestamp
+ 8 + // producerID
+ 2 + // producerEpoch
+ 4 + // seq
+ 4 // record array length
+ return &recBatch{
+ owner: recBuf,
+ records: recBuf.cl.prsPool.get()[:0],
+ wireLength: recordBatchOverhead,
+
+ canFailFromLoadErrs: true, // until we send this batch, we can fail it
+ }
+}
+
+type prsPool struct{ p *sync.Pool }
+
+func newPrsPool() prsPool {
+ return prsPool{
+ p: &sync.Pool{New: func() any { r := make([]promisedRec, 10); return &r }},
+ }
+}
+
+func (p prsPool) get() []promisedRec { return (*p.p.Get().(*[]promisedRec))[:0] }
+func (p prsPool) put(s []promisedRec) { p.p.Put(&s) }
+
+// isOwnersFirstBatch returns if the batch in a recBatch is the first batch in
+// a records. We only ever want to update batch / buffer logic if the batch is
+// the first in the buffer.
+func (b *recBatch) isOwnersFirstBatch() bool {
+ return len(b.owner.batches) > 0 && b.owner.batches[0] == b
+}
+
+// Returns whether the first record in a batch is past the limit.
+func (b *recBatch) isTimedOut(limit time.Duration) bool {
+ if limit == 0 {
+ return false
+ }
+ return time.Since(b.records[0].Timestamp) > limit
+}
+
+// Decrements the inflight count for this batch.
+//
+// If the inflight count hits zero, this potentially re-triggers a drain on the
+// *current* sink. A concurrent metadata update could have moved the recBuf to
+// a different sink; that sink will not drain this recBuf until all requests on
+// the old sink are finished.
+//
+// This is always called in the produce request path, not anywhere else (i.e.
+// not failAllRecords). We want inflight decrementing to be the last thing that
+// happens always for every request. It does not matter if the records were
+// independently failed: from the request issuing perspective, the batch is
+// still inflight.
+func (b *recBatch) decInflight() {
+ recBuf := b.owner
+ recBuf.inflight--
+ if recBuf.inflight != 0 {
+ return
+ }
+ recBuf.inflightOnSink = nil
+ if recBuf.batchDrainIdx != len(recBuf.batches) {
+ recBuf.sink.maybeDrain()
+ }
+}
+
+////////////////////
+// produceRequest //
+////////////////////
+
+// produceRequest is a kmsg.Request that is used when we want to
+// flush our buffered records.
+//
+// It is the same as kmsg.ProduceRequest, but with a custom AppendTo.
+type produceRequest struct {
+ version int16
+
+ backoffSeq uint32
+
+ txnID *string
+ acks int16
+ timeout int32
+ batches seqRecBatches
+
+ producerID int64
+ producerEpoch int16
+
+ // Initialized in AppendTo, metrics tracks uncompressed & compressed
+ // sizes (in byteS) of each batch.
+ //
+ // We use this in handleReqResp for the OnProduceHook.
+ metrics produceMetrics
+ hasHook bool
+
+ compressor *compressor
+
+ // wireLength is initially the size of sending a produce request,
+ // including the request header, with no topics. We start with the
+ // non-flexible size because it is strictly larger than flexible, but
+ // we use the proper flexible numbers when calculating.
+ wireLength int32
+ wireLengthLimit int32
+}
+
+type produceMetrics map[string]map[int32]ProduceBatchMetrics
+
+func (p produceMetrics) hook(cfg *cfg, br *broker) {
+ if len(p) == 0 {
+ return
+ }
+ var hooks []HookProduceBatchWritten
+ cfg.hooks.each(func(h Hook) {
+ if h, ok := h.(HookProduceBatchWritten); ok {
+ hooks = append(hooks, h)
+ }
+ })
+ if len(hooks) == 0 {
+ return
+ }
+ go func() {
+ for _, h := range hooks {
+ for topic, partitions := range p {
+ for partition, metrics := range partitions {
+ h.OnProduceBatchWritten(br.meta, topic, partition, metrics)
+ }
+ }
+ }
+ }()
+}
+
+func (p *produceRequest) idempotent() bool { return p.producerID >= 0 }
+
+func (p *produceRequest) tryAddBatch(produceVersion int32, recBuf *recBuf, batch *recBatch) bool {
+ batchWireLength, flexible := batch.wireLengthForProduceVersion(produceVersion)
+ batchWireLength += 4 // int32 partition prefix
+
+ if partitions, exists := p.batches[recBuf.topic]; !exists {
+ lt := int32(len(recBuf.topic))
+ if flexible {
+ batchWireLength += uvarlen(len(recBuf.topic)) + lt + 1 // compact string len, topic, compact array len for 1 item
+ } else {
+ batchWireLength += 2 + lt + 4 // string len, topic, partition array len
+ }
+ } else if flexible {
+ // If the topic exists and we are flexible, adding this
+ // partition may increase the length of our size prefix.
+ lastPartitionsLen := uvarlen(len(partitions))
+ newPartitionsLen := uvarlen(len(partitions) + 1)
+ batchWireLength += (newPartitionsLen - lastPartitionsLen)
+ }
+ // If we are flexible but do not know it yet, adding partitions may
+ // increase our length prefix. Since we are pessimistically assuming
+ // non-flexible, we have 200mil partitions to add before we have to
+ // worry about hitting 5 bytes vs. the non-flexible 4. We do not worry.
+
+ if p.wireLength+batchWireLength > p.wireLengthLimit {
+ return false
+ }
+
+ if recBuf.batches[0] == batch {
+ if !p.idempotent() || batch.canFailFromLoadErrs {
+ if err := batch.maybeFailErr(&batch.owner.cl.cfg); err != nil {
+ recBuf.failAllRecords(err)
+ return false
+ }
+ }
+ if recBuf.needSeqReset {
+ recBuf.needSeqReset = false
+ recBuf.seq = 0
+ recBuf.batch0Seq = 0
+ }
+ }
+
+ batch.tries++
+ p.wireLength += batchWireLength
+ p.batches.addBatch(
+ recBuf.topic,
+ recBuf.partition,
+ recBuf.seq,
+ batch,
+ )
+ return true
+}
+
+// seqRecBatch: a recBatch with a sequence number.
+type seqRecBatch struct {
+ seq int32
+ *recBatch
+}
+
+type seqRecBatches map[string]map[int32]seqRecBatch
+
+func (rbs *seqRecBatches) addBatch(topic string, part, seq int32, batch *recBatch) {
+ if *rbs == nil {
+ *rbs = make(seqRecBatches, 5)
+ }
+ topicBatches, exists := (*rbs)[topic]
+ if !exists {
+ topicBatches = make(map[int32]seqRecBatch, 1)
+ (*rbs)[topic] = topicBatches
+ }
+ topicBatches[part] = seqRecBatch{seq, batch}
+}
+
+func (rbs *seqRecBatches) addSeqBatch(topic string, part int32, batch seqRecBatch) {
+ if *rbs == nil {
+ *rbs = make(seqRecBatches, 5)
+ }
+ topicBatches, exists := (*rbs)[topic]
+ if !exists {
+ topicBatches = make(map[int32]seqRecBatch, 1)
+ (*rbs)[topic] = topicBatches
+ }
+ topicBatches[part] = batch
+}
+
+func (rbs seqRecBatches) each(fn func(seqRecBatch)) {
+ for _, partitions := range rbs {
+ for _, batch := range partitions {
+ fn(batch)
+ }
+ }
+}
+
+func (rbs seqRecBatches) eachOwnerLocked(fn func(seqRecBatch)) {
+ rbs.each(func(batch seqRecBatch) {
+ batch.owner.mu.Lock()
+ defer batch.owner.mu.Unlock()
+ fn(batch)
+ })
+}
+
+func (rbs seqRecBatches) sliced() recBatches {
+ var batches []*recBatch
+ for _, partitions := range rbs {
+ for _, batch := range partitions {
+ batches = append(batches, batch.recBatch)
+ }
+ }
+ return batches
+}
+
+type recBatches []*recBatch
+
+func (bs recBatches) eachOwnerLocked(fn func(*recBatch)) {
+ for _, b := range bs {
+ b.owner.mu.Lock()
+ fn(b)
+ b.owner.mu.Unlock()
+ }
+}
+
+//////////////
+// COUNTING // - this section is all about counting how bytes lay out on the wire
+//////////////
+
+// Returns the non-flexible base produce request length (the request header and
+// the request itself with no topics).
+//
+// See the large comment on maxRecordBatchBytesForTopic for why we always use
+// non-flexible (in short: it is strictly larger).
+func (cl *Client) baseProduceRequestLength() int32 {
+ const messageRequestOverhead int32 = 4 + // int32 length prefix
+ 2 + // int16 key
+ 2 + // int16 version
+ 4 + // int32 correlation ID
+ 2 // int16 client ID len (always non flexible)
+ // empty tag section skipped; see below
+
+ const produceRequestBaseOverhead int32 = 2 + // int16 transactional ID len (flexible or not, since we cap at 16382)
+ 2 + // int16 acks
+ 4 + // int32 timeout
+ 4 // int32 topics non-flexible array length
+ // empty tag section skipped; see below
+
+ baseLength := messageRequestOverhead + produceRequestBaseOverhead
+ if cl.cfg.id != nil {
+ baseLength += int32(len(*cl.cfg.id))
+ }
+ if cl.cfg.txnID != nil {
+ baseLength += int32(len(*cl.cfg.txnID))
+ }
+ return baseLength
+}
+
+// Returns the maximum size a record batch can be for this given topic, such
+// that if just a **single partition** is fully stuffed with records and we
+// only encode that one partition, we will not overflow our configured limits.
+//
+// The maximum topic length is 249, which has a 2 byte prefix for flexible or
+// non-flexible.
+//
+// Non-flexible versions will have a 4 byte length topic array prefix, a 4 byte
+// length partition array prefix. and a 4 byte records array length prefix.
+//
+// Flexible versions would have a 1 byte length topic array prefix, a 1 byte
+// length partition array prefix, up to 5 bytes for the records array length
+// prefix, and three empty tag sections resulting in 3 bytes (produce request
+// struct, topic struct, partition struct). As well, for the request header
+// itself, we have an additional 1 byte tag section (that we currently keep
+// empty).
+//
+// Thus in the worst case, we have 14 bytes of prefixes for non-flexible vs.
+// 11 bytes for flexible. We default to the more limiting size: non-flexible.
+func (cl *Client) maxRecordBatchBytesForTopic(topic string) int32 {
+ minOnePartitionBatchLength := cl.baseProduceRequestLength() +
+ 2 + // int16 topic string length prefix length
+ int32(len(topic)) +
+ 4 + // int32 partitions array length
+ 4 + // partition int32 encoding length
+ 4 // int32 record bytes array length
+
+ wireLengthLimit := cl.cfg.maxBrokerWriteBytes
+
+ recordBatchLimit := wireLengthLimit - minOnePartitionBatchLength
+ if cfgLimit := cl.cfg.maxRecordBatchBytes; cfgLimit < recordBatchLimit {
+ recordBatchLimit = cfgLimit
+ }
+ return recordBatchLimit
+}
+
+func messageSet0Length(r *Record) int32 {
+ const length = 4 + // array len
+ 8 + // offset
+ 4 + // size
+ 4 + // crc
+ 1 + // magic
+ 1 + // attributes
+ 4 + // key array bytes len
+ 4 // value array bytes len
+ return length + int32(len(r.Key)) + int32(len(r.Value))
+}
+
+func messageSet1Length(r *Record) int32 {
+ return messageSet0Length(r) + 8 // timestamp
+}
+
+// Returns the numbers for a record if it were added to the record batch.
+func (b *recBatch) calculateRecordNumbers(r *Record) recordNumbers {
+ tsMillis := r.Timestamp.UnixNano() / 1e6
+ tsDelta := tsMillis - b.firstTimestamp
+
+ // If this is to be the first record in the batch, then our timestamp
+ // delta is actually 0.
+ if len(b.records) == 0 {
+ tsDelta = 0
+ }
+
+ offsetDelta := int32(len(b.records)) // since called before adding record, delta is the current end
+
+ l := 1 + // attributes, int8 unused
+ kbin.VarlongLen(tsDelta) +
+ kbin.VarintLen(offsetDelta) +
+ kbin.VarintLen(int32(len(r.Key))) +
+ len(r.Key) +
+ kbin.VarintLen(int32(len(r.Value))) +
+ len(r.Value) +
+ kbin.VarintLen(int32(len(r.Headers))) // varint array len headers
+
+ for _, h := range r.Headers {
+ l += kbin.VarintLen(int32(len(h.Key))) +
+ len(h.Key) +
+ kbin.VarintLen(int32(len(h.Value))) +
+ len(h.Value)
+ }
+
+ return recordNumbers{
+ lengthField: int32(l),
+ tsDelta: tsDelta,
+ }
+}
+
+func uvar32(l int32) uint32 { return 1 + uint32(l) }
+func uvarlen(l int) int32 { return int32(kbin.UvarintLen(uvar32(int32(l)))) }
+
+// recordNumbers tracks a few numbers for a record that is buffered.
+type recordNumbers struct {
+ lengthField int32 // the length field prefix of a record encoded on the wire
+ tsDelta int64 // the ms delta of when the record was added against the first timestamp
+}
+
+// wireLength is the wire length of a record including its length field prefix.
+func (n recordNumbers) wireLength() int32 {
+ return int32(kbin.VarintLen(n.lengthField)) + n.lengthField
+}
+
+func (b *recBatch) wireLengthForProduceVersion(v int32) (batchWireLength int32, flexible bool) {
+ batchWireLength = b.wireLength
+
+ // If we do not yet know the produce version, we default to the largest
+ // size. Our request building sizes will always be an overestimate.
+ if v < 0 {
+ v1BatchWireLength := b.v1wireLength
+ if v1BatchWireLength > batchWireLength {
+ batchWireLength = v1BatchWireLength
+ }
+ flexibleBatchWireLength := b.flexibleWireLength()
+ if flexibleBatchWireLength > batchWireLength {
+ batchWireLength = flexibleBatchWireLength
+ }
+ } else {
+ switch v {
+ case 0, 1:
+ batchWireLength = b.v0wireLength()
+ case 2:
+ batchWireLength = b.v1wireLength
+ case 3, 4, 5, 6, 7, 8:
+ batchWireLength = b.wireLength
+ default:
+ batchWireLength = b.flexibleWireLength()
+ flexible = true
+ }
+ }
+
+ return
+}
+
+func (b *recBatch) tryBuffer(pr promisedRec, produceVersion, maxBatchBytes int32, abortOnNewBatch bool) (appended, aborted bool) {
+ nums := b.calculateRecordNumbers(pr.Record)
+
+ batchWireLength, _ := b.wireLengthForProduceVersion(produceVersion)
+ newBatchLength := batchWireLength + nums.wireLength()
+
+ if b.tries != 0 || newBatchLength > maxBatchBytes {
+ return false, false
+ }
+ if abortOnNewBatch {
+ return false, true
+ }
+ b.appendRecord(pr, nums)
+ pr.setLengthAndTimestampDelta(
+ nums.lengthField,
+ nums.tsDelta,
+ )
+ return true, false
+}
+
+//////////////
+// ENCODING // - this section is all about actually writing a produce request
+//////////////
+
+func (*produceRequest) Key() int16 { return 0 }
+func (*produceRequest) MaxVersion() int16 { return 10 }
+func (p *produceRequest) SetVersion(v int16) { p.version = v }
+func (p *produceRequest) GetVersion() int16 { return p.version }
+func (p *produceRequest) IsFlexible() bool { return p.version >= 9 }
+func (p *produceRequest) AppendTo(dst []byte) []byte {
+ flexible := p.IsFlexible()
+
+ if p.hasHook {
+ p.metrics = make(map[string]map[int32]ProduceBatchMetrics)
+ }
+
+ if p.version >= 3 {
+ if flexible {
+ dst = kbin.AppendCompactNullableString(dst, p.txnID)
+ } else {
+ dst = kbin.AppendNullableString(dst, p.txnID)
+ }
+ }
+
+ dst = kbin.AppendInt16(dst, p.acks)
+ dst = kbin.AppendInt32(dst, p.timeout)
+ if flexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(p.batches))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(p.batches))
+ }
+
+ for topic, partitions := range p.batches {
+ if flexible {
+ dst = kbin.AppendCompactString(dst, topic)
+ dst = kbin.AppendCompactArrayLen(dst, len(partitions))
+ } else {
+ dst = kbin.AppendString(dst, topic)
+ dst = kbin.AppendArrayLen(dst, len(partitions))
+ }
+
+ var tmetrics map[int32]ProduceBatchMetrics
+ if p.hasHook {
+ tmetrics = make(map[int32]ProduceBatchMetrics)
+ p.metrics[topic] = tmetrics
+ }
+
+ for partition, batch := range partitions {
+ dst = kbin.AppendInt32(dst, partition)
+ batch.mu.Lock()
+ if batch.records == nil || batch.isFailingFromLoadErr { // concurrent failAllRecords OR concurrent bumpRepeatedLoadErr
+ if flexible {
+ dst = kbin.AppendCompactNullableBytes(dst, nil)
+ } else {
+ dst = kbin.AppendNullableBytes(dst, nil)
+ }
+ batch.mu.Unlock()
+ continue
+ }
+ batch.canFailFromLoadErrs = false // we are going to write this batch: the response status is now unknown
+ var pmetrics ProduceBatchMetrics
+ if p.version < 3 {
+ dst, pmetrics = batch.appendToAsMessageSet(dst, uint8(p.version), p.compressor)
+ } else {
+ dst, pmetrics = batch.appendTo(dst, p.version, p.producerID, p.producerEpoch, p.txnID != nil, p.compressor)
+ }
+ batch.mu.Unlock()
+ if p.hasHook {
+ tmetrics[partition] = pmetrics
+ }
+ if flexible {
+ dst = append(dst, 0)
+ }
+ }
+ if flexible {
+ dst = append(dst, 0)
+ }
+ }
+ if flexible {
+ dst = append(dst, 0)
+ }
+
+ return dst
+}
+
+func (*produceRequest) ReadFrom([]byte) error {
+ panic("unreachable -- the client never uses ReadFrom on its internal produceRequest")
+}
+
+func (p *produceRequest) ResponseKind() kmsg.Response {
+ r := kmsg.NewPtrProduceResponse()
+ r.Version = p.version
+ return r
+}
+
+func (b seqRecBatch) appendTo(
+ in []byte,
+ version int16,
+ producerID int64,
+ producerEpoch int16,
+ transactional bool,
+ compressor *compressor,
+) (dst []byte, m ProduceBatchMetrics) { // named return so that our defer for flexible versions can modify it
+ flexible := version >= 9
+ dst = in
+ nullableBytesLen := b.wireLength - 4 // NULLABLE_BYTES leading length, minus itself
+ nullableBytesLenAt := len(dst) // in case compression adjusting
+ dst = kbin.AppendInt32(dst, nullableBytesLen)
+
+ // With flexible versions, the array length prefix can be anywhere from
+ // 1 byte long to 5 bytes long (covering up to 268MB).
+ //
+ // We have to add our initial understanding of the array length as a
+ // uvarint, but if compressing shrinks what that length would encode
+ // as, we have to shift everything down.
+ if flexible {
+ dst = dst[:nullableBytesLenAt]
+ batchLength := b.batchLength()
+ dst = kbin.AppendUvarint(dst, uvar32(batchLength)) // compact array non-null prefix
+ batchAt := len(dst)
+ defer func() {
+ batch := dst[batchAt:]
+ if int32(len(batch)) == batchLength { // we did not compress: simply return
+ return
+ }
+
+ // We *only* could have shrunk the batch bytes, so our
+ // append here will not overwrite anything we need to
+ // keep.
+ newDst := kbin.AppendUvarint(dst[:nullableBytesLenAt], uvar32(int32(len(batch))))
+
+ // If our append did not shorten the length prefix, we
+ // can just return the prior dst, otherwise we have to
+ // shift the batch itself down on newDst.
+ if len(newDst) != batchAt {
+ dst = append(newDst, batch...)
+ }
+ }()
+ }
+
+ // Below here, we append the actual record batch, which cannot be
+ // flexible. Everything encodes properly; flexible adjusting is done in
+ // the defer just above.
+
+ dst = kbin.AppendInt64(dst, 0) // firstOffset, defined as zero for producing
+
+ batchLen := nullableBytesLen - 8 - 4 // length of what follows this field (so, minus what came before and ourself)
+ batchLenAt := len(dst) // in case compression adjusting
+ dst = kbin.AppendInt32(dst, batchLen)
+
+ dst = kbin.AppendInt32(dst, -1) // partitionLeaderEpoch, unused in clients
+ dst = kbin.AppendInt8(dst, 2) // magic, defined as 2 for records v0.11.0+
+
+ crcStart := len(dst) // fill at end
+ dst = kbin.AppendInt32(dst, 0) // reserved crc
+
+ attrsAt := len(dst) // in case compression adjusting
+ b.attrs = 0
+ if transactional {
+ b.attrs |= 0x0010 // bit 5 is the "is transactional" bit
+ }
+ dst = kbin.AppendInt16(dst, b.attrs)
+ dst = kbin.AppendInt32(dst, int32(len(b.records)-1)) // lastOffsetDelta
+ dst = kbin.AppendInt64(dst, b.firstTimestamp)
+ dst = kbin.AppendInt64(dst, b.firstTimestamp+b.maxTimestampDelta)
+
+ seq := b.seq
+ if producerID < 0 { // a negative producer ID means we are not using idempotence
+ seq = 0
+ }
+ dst = kbin.AppendInt64(dst, producerID)
+ dst = kbin.AppendInt16(dst, producerEpoch)
+ dst = kbin.AppendInt32(dst, seq)
+
+ dst = kbin.AppendArrayLen(dst, len(b.records))
+ recordsAt := len(dst)
+ for i, pr := range b.records {
+ dst = pr.appendTo(dst, int32(i))
+ }
+
+ toCompress := dst[recordsAt:]
+ m.NumRecords = len(b.records)
+ m.UncompressedBytes = len(toCompress)
+ m.CompressedBytes = m.UncompressedBytes
+
+ if compressor != nil {
+ w := byteBuffers.Get().(*bytes.Buffer)
+ defer byteBuffers.Put(w)
+ w.Reset()
+
+ compressed, codec := compressor.compress(w, toCompress, version)
+ if compressed != nil && // nil would be from an error
+ len(compressed) < len(toCompress) {
+ // our compressed was shorter: copy over
+ copy(dst[recordsAt:], compressed)
+ dst = dst[:recordsAt+len(compressed)]
+ m.CompressedBytes = len(compressed)
+ m.CompressionType = uint8(codec)
+
+ // update the few record batch fields we already wrote
+ savings := int32(len(toCompress) - len(compressed))
+ nullableBytesLen -= savings
+ batchLen -= savings
+ b.attrs |= int16(codec)
+ if !flexible {
+ kbin.AppendInt32(dst[:nullableBytesLenAt], nullableBytesLen)
+ }
+ kbin.AppendInt32(dst[:batchLenAt], batchLen)
+ kbin.AppendInt16(dst[:attrsAt], b.attrs)
+ }
+ }
+
+ kbin.AppendInt32(dst[:crcStart], int32(crc32.Checksum(dst[crcStart+4:], crc32c)))
+
+ return dst, m
+}
+
+func (pr promisedRec) appendTo(dst []byte, offsetDelta int32) []byte {
+ length, tsDelta := pr.lengthAndTimestampDelta()
+ dst = kbin.AppendVarint(dst, length)
+ dst = kbin.AppendInt8(dst, 0) // attributes, currently unused
+ dst = kbin.AppendVarlong(dst, tsDelta)
+ dst = kbin.AppendVarint(dst, offsetDelta)
+ dst = kbin.AppendVarintBytes(dst, pr.Key)
+ dst = kbin.AppendVarintBytes(dst, pr.Value)
+ dst = kbin.AppendVarint(dst, int32(len(pr.Headers)))
+ for _, h := range pr.Headers {
+ dst = kbin.AppendVarintString(dst, h.Key)
+ dst = kbin.AppendVarintBytes(dst, h.Value)
+ }
+ return dst
+}
+
+func (b seqRecBatch) appendToAsMessageSet(dst []byte, version uint8, compressor *compressor) ([]byte, ProduceBatchMetrics) {
+ var m ProduceBatchMetrics
+
+ nullableBytesLenAt := len(dst)
+ dst = append(dst, 0, 0, 0, 0) // nullable bytes len
+ for i, pr := range b.records {
+ _, tsDelta := pr.lengthAndTimestampDelta()
+ dst = appendMessageTo(
+ dst,
+ version,
+ 0,
+ int64(i),
+ b.firstTimestamp+tsDelta,
+ pr.Record,
+ )
+ }
+
+ b.attrs = 0
+
+ // Produce request v0 and v1 uses message set v0, which does not have
+ // timestamps. We set bit 8 in our attrs which corresponds with our own
+ // kgo.RecordAttrs's bit. The attrs field is unused in a sink / recBuf
+ // outside of the appending functions or finishing records; if we use
+ // more bits in our internal RecordAttrs, the below will need to
+ // change.
+ if version == 0 || version == 1 {
+ b.attrs |= 0b1000_0000
+ }
+
+ toCompress := dst[nullableBytesLenAt+4:] // skip nullable bytes leading prefix
+ m.NumRecords = len(b.records)
+ m.UncompressedBytes = len(toCompress)
+ m.CompressedBytes = m.UncompressedBytes
+
+ if compressor != nil {
+ w := byteBuffers.Get().(*bytes.Buffer)
+ defer byteBuffers.Put(w)
+ w.Reset()
+
+ compressed, codec := compressor.compress(w, toCompress, int16(version))
+ inner := &Record{Value: compressed}
+ wrappedLength := messageSet0Length(inner)
+ if version == 2 {
+ wrappedLength += 8 // timestamp
+ }
+
+ if compressed != nil && int(wrappedLength) < len(toCompress) {
+ m.CompressedBytes = int(wrappedLength)
+ m.CompressionType = uint8(codec)
+
+ b.attrs |= int16(codec)
+
+ dst = appendMessageTo(
+ dst[:nullableBytesLenAt+4],
+ version,
+ int8(codec),
+ int64(len(b.records)-1),
+ b.firstTimestamp,
+ inner,
+ )
+ }
+ }
+
+ kbin.AppendInt32(dst[:nullableBytesLenAt], int32(len(dst[nullableBytesLenAt+4:])))
+ return dst, m
+}
+
+func appendMessageTo(
+ dst []byte,
+ version uint8,
+ attributes int8,
+ offset int64,
+ timestamp int64,
+ r *Record,
+) []byte {
+ magic := version >> 1
+ dst = kbin.AppendInt64(dst, offset)
+ msgSizeStart := len(dst)
+ dst = append(dst, 0, 0, 0, 0)
+ crc32Start := len(dst)
+ dst = append(dst,
+ 0, 0, 0, 0,
+ magic,
+ byte(attributes))
+ if magic == 1 {
+ dst = kbin.AppendInt64(dst, timestamp)
+ }
+ dst = kbin.AppendNullableBytes(dst, r.Key)
+ dst = kbin.AppendNullableBytes(dst, r.Value)
+ kbin.AppendInt32(dst[:crc32Start], int32(crc32.ChecksumIEEE(dst[crc32Start+4:])))
+ kbin.AppendInt32(dst[:msgSizeStart], int32(len(dst[msgSizeStart+4:])))
+ return dst
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/source.go b/vendor/github.com/twmb/franz-go/pkg/kgo/source.go
new file mode 100644
index 0000000000000..0c475d14a9419
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/source.go
@@ -0,0 +1,2326 @@
+package kgo
+
+import (
+ "context"
+ "encoding/binary"
+ "fmt"
+ "hash/crc32"
+ "slices"
+ "sort"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kbin"
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+type readerFrom interface {
+ ReadFrom([]byte) error
+}
+
+// A source consumes from an individual broker.
+//
+// As long as there is at least one active cursor, a source aims to have *one*
+// buffered fetch at all times. As soon as the fetch is taken, a source issues
+// another fetch in the background.
+type source struct {
+ cl *Client // our owning client, for cfg, metadata triggering, context, etc.
+ nodeID int32 // the node ID of the broker this sink belongs to
+
+ // Tracks how many _failed_ fetch requests we have in a row (unable to
+ // receive a response). Any response, even responses with an ErrorCode
+ // set, are successful. This field is used for backoff purposes.
+ consecutiveFailures int
+
+ fetchState workLoop
+ sem chan struct{} // closed when fetchable, recreated when a buffered fetch exists
+ buffered bufferedFetch // contains a fetch the source has buffered for polling
+
+ session fetchSession // supports fetch sessions as per KIP-227
+
+ cursorsMu sync.Mutex
+ cursors []*cursor // contains all partitions being consumed on this source
+ cursorsStart int // incremented every fetch req to ensure all partitions are fetched
+}
+
+func (cl *Client) newSource(nodeID int32) *source {
+ s := &source{
+ cl: cl,
+ nodeID: nodeID,
+ sem: make(chan struct{}),
+ }
+ if cl.cfg.disableFetchSessions {
+ s.session.kill()
+ }
+ close(s.sem)
+ return s
+}
+
+func (s *source) addCursor(add *cursor) {
+ s.cursorsMu.Lock()
+ add.cursorsIdx = len(s.cursors)
+ s.cursors = append(s.cursors, add)
+ s.cursorsMu.Unlock()
+
+ // Adding a new cursor may allow a new partition to be fetched.
+ // We do not need to cancel any current fetch nor kill the session,
+ // since adding a cursor is non-destructive to work in progress.
+ // If the session is currently stopped, this is a no-op.
+ s.maybeConsume()
+}
+
+// Removes a cursor from the source.
+//
+// The caller should do this with a stopped session if necessary, which
+// should clear any buffered fetch and reset the source's session.
+func (s *source) removeCursor(rm *cursor) {
+ s.cursorsMu.Lock()
+ defer s.cursorsMu.Unlock()
+
+ if rm.cursorsIdx != len(s.cursors)-1 {
+ s.cursors[rm.cursorsIdx], s.cursors[len(s.cursors)-1] = s.cursors[len(s.cursors)-1], nil
+ s.cursors[rm.cursorsIdx].cursorsIdx = rm.cursorsIdx
+ } else {
+ s.cursors[rm.cursorsIdx] = nil // do not let the memory hang around
+ }
+
+ s.cursors = s.cursors[:len(s.cursors)-1]
+ if s.cursorsStart == len(s.cursors) {
+ s.cursorsStart = 0
+ }
+}
+
+// cursor is where we are consuming from for an individual partition.
+type cursor struct {
+ topic string
+ topicID [16]byte
+ partition int32
+
+ unknownIDFails atomicI32
+
+ keepControl bool // whether to keep control records
+
+ cursorsIdx int // updated under source mutex
+
+ // The source we are currently on. This is modified in two scenarios:
+ //
+ // * by metadata when the consumer session is completely stopped
+ //
+ // * by a fetch when handling a fetch response that returned preferred
+ // replicas
+ //
+ // This is additionally read within a session when cursor is
+ // transitioning from used to usable.
+ source *source
+
+ // useState is an atomic that has two states: unusable and usable. A
+ // cursor can be used in a fetch request if it is in the usable state.
+ // Once used, the cursor is unusable, and will be set back to usable
+ // one the request lifecycle is complete (a usable fetch response, or
+ // once listing offsets or loading epochs completes).
+ //
+ // A cursor can be set back to unusable when sources are stopped. This
+ // can be done if a group loses a partition, for example.
+ //
+ // The used state is exclusively updated by either building a fetch
+ // request or when the source is stopped.
+ useState atomicBool
+
+ topicPartitionData // updated in metadata when session is stopped
+
+ // cursorOffset is our epoch/offset that we are consuming. When a fetch
+ // request is issued, we "freeze" a view of the offset and of the
+ // leader epoch (see cursorOffsetNext for why the leader epoch). When a
+ // buffered fetch is taken, we update the cursor.
+ cursorOffset
+}
+
+// cursorOffset tracks offsets/epochs for a cursor.
+type cursorOffset struct {
+ // What the cursor is at: we request this offset next.
+ offset int64
+
+ // The epoch of the last record we consumed. Also used for KIP-320, if
+ // we are fenced or we have an offset out of range error, we go into
+ // the OffsetForLeaderEpoch recovery. The last consumed epoch tells the
+ // broker which offset we want: either (a) the next offset if the last
+ // consumed epoch is the current epoch, or (b) the offset of the first
+ // record in the next epoch. This allows for exact offset resetting and
+ // data loss detection.
+ //
+ // See kmsg.OffsetForLeaderEpochResponseTopicPartition for more
+ // details.
+ lastConsumedEpoch int32
+
+ // If we receive OFFSET_OUT_OF_RANGE, and we previously *know* we
+ // consumed an offset, we reset to the nearest offset after our prior
+ // known valid consumed offset.
+ lastConsumedTime time.Time
+
+ // The current high watermark of the partition. Uninitialized (0) means
+ // we do not know the HWM, or there is no lag.
+ hwm int64
+}
+
+// use, for fetch requests, freezes a view of the cursorOffset.
+func (c *cursor) use() *cursorOffsetNext {
+ // A source using a cursor has exclusive access to the use field by
+ // virtue of that source building a request during a live session,
+ // or by virtue of the session being stopped.
+ c.useState.Store(false)
+ return &cursorOffsetNext{
+ cursorOffset: c.cursorOffset,
+ from: c,
+ currentLeaderEpoch: c.leaderEpoch,
+ }
+}
+
+// unset transitions a cursor to an unusable state when the cursor is no longer
+// to be consumed. This is called exclusively after sources are stopped.
+// This also unsets the cursor offset, which is assumed to be unused now.
+func (c *cursor) unset() {
+ c.useState.Store(false)
+ c.setOffset(cursorOffset{
+ offset: -1,
+ lastConsumedEpoch: -1,
+ hwm: 0,
+ })
+}
+
+// usable returns whether a cursor can be used for building a fetch request.
+func (c *cursor) usable() bool {
+ return c.useState.Load()
+}
+
+// allowUsable allows a cursor to be fetched, and is called either in assigning
+// offsets, or when a buffered fetch is taken or discarded, or when listing /
+// epoch loading finishes.
+func (c *cursor) allowUsable() {
+ c.useState.Swap(true)
+ c.source.maybeConsume()
+}
+
+// setOffset sets the cursors offset which will be used the next time a fetch
+// request is built. This function is called under the source mutex while the
+// source is stopped, and the caller is responsible for calling maybeConsume
+// after.
+func (c *cursor) setOffset(o cursorOffset) {
+ c.cursorOffset = o
+}
+
+// cursorOffsetNext is updated while processing a fetch response.
+//
+// When a buffered fetch is taken, we update a cursor with the final values in
+// the modified cursor offset.
+type cursorOffsetNext struct {
+ cursorOffset
+ from *cursor
+
+ // The leader epoch at the time we took this cursor offset snapshot. We
+ // need to copy this rather than accessing it through `from` because a
+ // fetch request can be canceled while it is being written (and reading
+ // the epoch).
+ //
+ // The leader field itself is only read within the context of a session
+ // while the session is alive, thus it needs no such guard.
+ //
+ // Basically, any field read in AppendTo needs to be copied into
+ // cursorOffsetNext.
+ currentLeaderEpoch int32
+}
+
+type cursorOffsetPreferred struct {
+ cursorOffsetNext
+ preferredReplica int32
+}
+
+// Moves a cursor from one source to another. This is done while handling
+// a fetch response, which means within the context of a live session.
+func (p *cursorOffsetPreferred) move() {
+ c := p.from
+ defer c.allowUsable()
+
+ // Before we migrate the cursor, we check if the destination source
+ // exists. If not, we do not migrate and instead force a metadata.
+
+ c.source.cl.sinksAndSourcesMu.Lock()
+ sns, exists := c.source.cl.sinksAndSources[p.preferredReplica]
+ c.source.cl.sinksAndSourcesMu.Unlock()
+
+ if !exists {
+ c.source.cl.triggerUpdateMetadataNow("cursor moving to a different broker that is not yet known")
+ return
+ }
+
+ // This remove clears the source's session and buffered fetch, although
+ // we will not have a buffered fetch since moving replicas is called
+ // before buffering a fetch.
+ c.source.removeCursor(c)
+ c.source = sns.source
+ c.source.addCursor(c)
+}
+
+type cursorPreferreds []cursorOffsetPreferred
+
+func (cs cursorPreferreds) String() string {
+ type pnext struct {
+ p int32
+ next int32
+ }
+ ts := make(map[string][]pnext)
+ for _, c := range cs {
+ t := c.from.topic
+ p := c.from.partition
+ ts[t] = append(ts[t], pnext{p, c.preferredReplica})
+ }
+ tsorted := make([]string, 0, len(ts))
+ for t, ps := range ts {
+ tsorted = append(tsorted, t)
+ slices.SortFunc(ps, func(l, r pnext) int {
+ if l.p < r.p {
+ return -1
+ }
+ if l.p > r.p {
+ return 1
+ }
+ if l.next < r.next {
+ return -1
+ }
+ if l.next > r.next {
+ return 1
+ }
+ return 0
+ })
+ }
+ slices.Sort(tsorted)
+
+ sb := new(strings.Builder)
+ for i, t := range tsorted {
+ ps := ts[t]
+ fmt.Fprintf(sb, "%s{", t)
+
+ for j, p := range ps {
+ if j < len(ps)-1 {
+ fmt.Fprintf(sb, "%d=>%d, ", p.p, p.next)
+ } else {
+ fmt.Fprintf(sb, "%d=>%d", p.p, p.next)
+ }
+ }
+
+ if i < len(tsorted)-1 {
+ fmt.Fprint(sb, "}, ")
+ } else {
+ fmt.Fprint(sb, "}")
+ }
+ }
+ return sb.String()
+}
+
+func (cs cursorPreferreds) eachPreferred(fn func(cursorOffsetPreferred)) {
+ for _, c := range cs {
+ fn(c)
+ }
+}
+
+type usedOffsets map[string]map[int32]*cursorOffsetNext
+
+func (os usedOffsets) eachOffset(fn func(*cursorOffsetNext)) {
+ for _, ps := range os {
+ for _, o := range ps {
+ fn(o)
+ }
+ }
+}
+
+func (os usedOffsets) finishUsingAllWithSet() {
+ os.eachOffset(func(o *cursorOffsetNext) { o.from.setOffset(o.cursorOffset); o.from.allowUsable() })
+}
+
+func (os usedOffsets) finishUsingAll() {
+ os.eachOffset(func(o *cursorOffsetNext) { o.from.allowUsable() })
+}
+
+// bufferedFetch is a fetch response waiting to be consumed by the client.
+type bufferedFetch struct {
+ fetch Fetch
+
+ doneFetch chan<- struct{} // when unbuffered, we send down this
+ usedOffsets usedOffsets // what the offsets will be next if this fetch is used
+}
+
+func (s *source) hook(f *Fetch, buffered, polled bool) {
+ s.cl.cfg.hooks.each(func(h Hook) {
+ if buffered {
+ h, ok := h.(HookFetchRecordBuffered)
+ if !ok {
+ return
+ }
+ for i := range f.Topics {
+ t := &f.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ for _, r := range p.Records {
+ h.OnFetchRecordBuffered(r)
+ }
+ }
+ }
+ } else {
+ h, ok := h.(HookFetchRecordUnbuffered)
+ if !ok {
+ return
+ }
+ for i := range f.Topics {
+ t := &f.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ for _, r := range p.Records {
+ h.OnFetchRecordUnbuffered(r, polled)
+ }
+ }
+ }
+ }
+ })
+
+ var nrecs int
+ var nbytes int64
+ for i := range f.Topics {
+ t := &f.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ nrecs += len(p.Records)
+ for k := range p.Records {
+ nbytes += p.Records[k].userSize()
+ }
+ }
+ }
+ if buffered {
+ s.cl.consumer.bufferedRecords.Add(int64(nrecs))
+ s.cl.consumer.bufferedBytes.Add(nbytes)
+ } else {
+ s.cl.consumer.bufferedRecords.Add(-int64(nrecs))
+ s.cl.consumer.bufferedBytes.Add(-nbytes)
+ }
+}
+
+// takeBuffered drains a buffered fetch and updates offsets.
+func (s *source) takeBuffered(paused pausedTopics) Fetch {
+ if len(paused) == 0 {
+ return s.takeBufferedFn(true, usedOffsets.finishUsingAllWithSet)
+ }
+ var strip map[string]map[int32]struct{}
+ f := s.takeBufferedFn(true, func(os usedOffsets) {
+ for t, ps := range os {
+ // If the entire topic is paused, we allowUsable all
+ // and strip the topic entirely.
+ pps, ok := paused.t(t)
+ if !ok {
+ for _, o := range ps {
+ o.from.setOffset(o.cursorOffset)
+ o.from.allowUsable()
+ }
+ continue
+ }
+ if strip == nil {
+ strip = make(map[string]map[int32]struct{})
+ }
+ if pps.all {
+ for _, o := range ps {
+ o.from.allowUsable()
+ }
+ strip[t] = nil // initialize key, for existence-but-len-0 check below
+ continue
+ }
+ stript := make(map[int32]struct{})
+ for _, o := range ps {
+ if _, ok := pps.m[o.from.partition]; ok {
+ o.from.allowUsable()
+ stript[o.from.partition] = struct{}{}
+ continue
+ }
+ o.from.setOffset(o.cursorOffset)
+ o.from.allowUsable()
+ }
+ // We only add stript to strip if there are any
+ // stripped partitions. We could have a paused
+ // partition that is on another broker, while this
+ // broker has no paused partitions -- if we add stript
+ // here, our logic below (stripping this entire topic)
+ // is more confusing (present nil vs. non-present nil).
+ if len(stript) > 0 {
+ strip[t] = stript
+ }
+ }
+ })
+ if strip != nil {
+ keep := f.Topics[:0]
+ for _, t := range f.Topics {
+ stript, ok := strip[t.Topic]
+ if ok {
+ if len(stript) == 0 {
+ continue // stripping this entire topic
+ }
+ keepp := t.Partitions[:0]
+ for _, p := range t.Partitions {
+ if _, ok := stript[p.Partition]; ok {
+ continue
+ }
+ keepp = append(keepp, p)
+ }
+ t.Partitions = keepp
+ }
+ keep = append(keep, t)
+ }
+ f.Topics = keep
+ }
+ return f
+}
+
+func (s *source) discardBuffered() {
+ s.takeBufferedFn(false, usedOffsets.finishUsingAll)
+}
+
+// takeNBuffered takes a limited amount of records from a buffered fetch,
+// updating offsets in each partition per records taken.
+//
+// This only allows a new fetch once every buffered record has been taken.
+//
+// This returns the number of records taken and whether the source has been
+// completely drained.
+func (s *source) takeNBuffered(paused pausedTopics, n int) (Fetch, int, bool) {
+ var r Fetch
+ var taken int
+
+ b := &s.buffered
+ bf := &b.fetch
+ for len(bf.Topics) > 0 && n > 0 {
+ t := &bf.Topics[0]
+
+ // If the topic is outright paused, we allowUsable all
+ // partitions in the topic and skip the topic entirely.
+ if paused.has(t.Topic, -1) {
+ bf.Topics = bf.Topics[1:]
+ for _, pCursor := range b.usedOffsets[t.Topic] {
+ pCursor.from.allowUsable()
+ }
+ delete(b.usedOffsets, t.Topic)
+ continue
+ }
+
+ var rt *FetchTopic
+ ensureTopicAdded := func() {
+ if rt != nil {
+ return
+ }
+ r.Topics = append(r.Topics, *t)
+ rt = &r.Topics[len(r.Topics)-1]
+ rt.Partitions = nil
+ }
+
+ tCursors := b.usedOffsets[t.Topic]
+
+ for len(t.Partitions) > 0 && n > 0 {
+ p := &t.Partitions[0]
+
+ if paused.has(t.Topic, p.Partition) {
+ t.Partitions = t.Partitions[1:]
+ pCursor := tCursors[p.Partition]
+ pCursor.from.allowUsable()
+ delete(tCursors, p.Partition)
+ if len(tCursors) == 0 {
+ delete(b.usedOffsets, t.Topic)
+ }
+ continue
+ }
+
+ ensureTopicAdded()
+ rt.Partitions = append(rt.Partitions, *p)
+ rp := &rt.Partitions[len(rt.Partitions)-1]
+
+ take := n
+ if take > len(p.Records) {
+ take = len(p.Records)
+ }
+
+ rp.Records = p.Records[:take:take]
+ p.Records = p.Records[take:]
+
+ n -= take
+ taken += take
+
+ pCursor := tCursors[p.Partition]
+
+ if len(p.Records) == 0 {
+ t.Partitions = t.Partitions[1:]
+
+ pCursor.from.setOffset(pCursor.cursorOffset)
+ pCursor.from.allowUsable()
+ delete(tCursors, p.Partition)
+ if len(tCursors) == 0 {
+ delete(b.usedOffsets, t.Topic)
+ }
+ continue
+ }
+
+ lastReturnedRecord := rp.Records[len(rp.Records)-1]
+ pCursor.from.setOffset(cursorOffset{
+ offset: lastReturnedRecord.Offset + 1,
+ lastConsumedEpoch: lastReturnedRecord.LeaderEpoch,
+ lastConsumedTime: lastReturnedRecord.Timestamp,
+ hwm: p.HighWatermark,
+ })
+ }
+
+ if len(t.Partitions) == 0 {
+ bf.Topics = bf.Topics[1:]
+ }
+ }
+
+ s.hook(&r, false, true) // unbuffered, polled
+
+ drained := len(bf.Topics) == 0
+ if drained {
+ s.takeBuffered(nil)
+ }
+ return r, taken, drained
+}
+
+func (s *source) takeBufferedFn(polled bool, offsetFn func(usedOffsets)) Fetch {
+ r := s.buffered
+ s.buffered = bufferedFetch{}
+ offsetFn(r.usedOffsets)
+ r.doneFetch <- struct{}{}
+ close(s.sem)
+
+ s.hook(&r.fetch, false, polled) // unbuffered, potentially polled
+
+ return r.fetch
+}
+
+// createReq actually creates a fetch request.
+func (s *source) createReq() *fetchRequest {
+ req := &fetchRequest{
+ maxWait: s.cl.cfg.maxWait,
+ minBytes: s.cl.cfg.minBytes,
+ maxBytes: s.cl.cfg.maxBytes.load(),
+ maxPartBytes: s.cl.cfg.maxPartBytes.load(),
+ rack: s.cl.cfg.rack,
+ isolationLevel: s.cl.cfg.isolationLevel,
+ preferLagFn: s.cl.cfg.preferLagFn,
+
+ // We copy a view of the session for the request, which allows
+ // modify source while the request may be reading its copy.
+ session: s.session,
+ }
+
+ paused := s.cl.consumer.loadPaused()
+
+ s.cursorsMu.Lock()
+ defer s.cursorsMu.Unlock()
+
+ cursorIdx := s.cursorsStart
+ for i := 0; i < len(s.cursors); i++ {
+ c := s.cursors[cursorIdx]
+ cursorIdx = (cursorIdx + 1) % len(s.cursors)
+ if !c.usable() || paused.has(c.topic, c.partition) {
+ continue
+ }
+ req.addCursor(c)
+ }
+
+ // We could have lost our only record buffer just before we grabbed the
+ // source lock above.
+ if len(s.cursors) > 0 {
+ s.cursorsStart = (s.cursorsStart + 1) % len(s.cursors)
+ }
+
+ return req
+}
+
+func (s *source) maybeConsume() {
+ if s.fetchState.maybeBegin() {
+ go s.loopFetch()
+ }
+}
+
+func (s *source) loopFetch() {
+ consumer := &s.cl.consumer
+ session := consumer.loadSession()
+
+ if session == noConsumerSession {
+ s.fetchState.hardFinish()
+ // It is possible that we were triggered to consume while we
+ // had no consumer session, and then *after* loopFetch loaded
+ // noConsumerSession, the session was saved and triggered to
+ // consume again. If this function is slow the first time
+ // around, it could still be running and about to hardFinish.
+ // The second trigger will do nothing, and then we hardFinish
+ // and block a new session from actually starting consuming.
+ //
+ // To guard against this, after we hard finish, we load the
+ // session again: if it is *not* noConsumerSession, we trigger
+ // attempting to consume again. Worst case, the trigger is
+ // useless and it will exit below when it builds an empty
+ // request.
+ sessionNow := consumer.loadSession()
+ if session != sessionNow {
+ s.maybeConsume()
+ }
+ return
+ }
+
+ session.incWorker()
+ defer session.decWorker()
+
+ // After our add, check quickly **without** another select case to
+ // determine if this context was truly canceled. Any other select that
+ // has another select case could theoretically race with the other case
+ // also being selected.
+ select {
+ case <-session.ctx.Done():
+ s.fetchState.hardFinish()
+ return
+ default:
+ }
+
+ // We receive on canFetch when we can fetch, and we send back when we
+ // are done fetching.
+ canFetch := make(chan chan struct{}, 1)
+
+ again := true
+ for again {
+ select {
+ case <-session.ctx.Done():
+ s.fetchState.hardFinish()
+ return
+ case <-s.sem:
+ }
+
+ select {
+ case <-session.ctx.Done():
+ s.fetchState.hardFinish()
+ return
+ case session.desireFetch() <- canFetch:
+ }
+
+ select {
+ case <-session.ctx.Done():
+ session.cancelFetchCh <- canFetch
+ s.fetchState.hardFinish()
+ return
+ case doneFetch := <-canFetch:
+ again = s.fetchState.maybeFinish(s.fetch(session, doneFetch))
+ }
+ }
+}
+
+func (s *source) killSessionOnClose(ctx context.Context) {
+ br, err := s.cl.brokerOrErr(nil, s.nodeID, errUnknownBroker)
+ if err != nil {
+ return
+ }
+ s.session.kill()
+ req := &fetchRequest{
+ maxWait: 1,
+ minBytes: 1,
+ maxBytes: 1,
+ maxPartBytes: 1,
+ rack: s.cl.cfg.rack,
+ isolationLevel: s.cl.cfg.isolationLevel,
+ session: s.session,
+ }
+ ch := make(chan struct{})
+ br.do(ctx, req, func(kmsg.Response, error) { close(ch) })
+ <-ch
+}
+
+// fetch is the main logic center of fetching messages.
+//
+// This is a long function, made much longer by winded documentation, that
+// contains a lot of the side effects of fetching and updating. The function
+// consists of two main bulks of logic:
+//
+// - First, issue a request that can be killed if the source needs to be
+// stopped. Processing the response modifies no state on the source.
+//
+// - Second, we keep the fetch response and update everything relevant
+// (session, trigger some list or epoch updates, buffer the fetch).
+//
+// One small part between the first and second step is to update preferred
+// replicas. We always keep the preferred replicas from the fetch response
+// *even if* the source needs to be stopped. The knowledge of which preferred
+// replica to use would not be out of date even if the consumer session is
+// changing.
+func (s *source) fetch(consumerSession *consumerSession, doneFetch chan<- struct{}) (fetched bool) {
+ req := s.createReq()
+
+ // For all returns, if we do not buffer our fetch, then we want to
+ // ensure our used offsets are usable again.
+ var (
+ alreadySentToDoneFetch bool
+ setOffsets bool
+ buffered bool
+ )
+ defer func() {
+ if !buffered {
+ if req.numOffsets > 0 {
+ if setOffsets {
+ req.usedOffsets.finishUsingAllWithSet()
+ } else {
+ req.usedOffsets.finishUsingAll()
+ }
+ }
+ if !alreadySentToDoneFetch {
+ doneFetch <- struct{}{}
+ }
+ }
+ }()
+
+ if req.numOffsets == 0 { // cursors could have been set unusable
+ return
+ }
+
+ // If our fetch is killed, we want to cancel waiting for the response.
+ var (
+ kresp kmsg.Response
+ requested = make(chan struct{})
+ ctx, cancel = context.WithCancel(consumerSession.ctx)
+ )
+ defer cancel()
+
+ br, err := s.cl.brokerOrErr(ctx, s.nodeID, errUnknownBroker)
+ if err != nil {
+ close(requested)
+ } else {
+ br.do(ctx, req, func(k kmsg.Response, e error) {
+ kresp, err = k, e
+ close(requested)
+ })
+ }
+
+ select {
+ case <-requested:
+ fetched = true
+ case <-ctx.Done():
+ return
+ }
+
+ var didBackoff bool
+ backoff := func(why interface{}) {
+ // We preemptively allow more fetches (since we are not buffering)
+ // and reset our session because of the error (who knows if kafka
+ // processed the request but the client failed to receive it).
+ doneFetch <- struct{}{}
+ alreadySentToDoneFetch = true
+ s.session.reset()
+ didBackoff = true
+
+ s.cl.triggerUpdateMetadata(false, fmt.Sprintf("opportunistic load during source backoff: %v", why)) // as good a time as any
+ s.consecutiveFailures++
+ after := time.NewTimer(s.cl.cfg.retryBackoff(s.consecutiveFailures))
+ defer after.Stop()
+ select {
+ case <-after.C:
+ case <-ctx.Done():
+ }
+ }
+ defer func() {
+ if !didBackoff {
+ s.consecutiveFailures = 0
+ }
+ }()
+
+ // If we had an error, we backoff. Killing a fetch quits the backoff,
+ // but that is fine; we may just re-request too early and fall into
+ // another backoff.
+ if err != nil {
+ backoff(err)
+ return
+ }
+
+ resp := kresp.(*kmsg.FetchResponse)
+
+ var (
+ fetch Fetch
+ reloadOffsets listOrEpochLoads
+ preferreds cursorPreferreds
+ allErrsStripped bool
+ updateWhy multiUpdateWhy
+ handled = make(chan struct{})
+ )
+
+ // Theoretically, handleReqResp could take a bit of CPU time due to
+ // decompressing and processing the response. We do this in a goroutine
+ // to allow the session to be canceled at any moment.
+ //
+ // Processing the response only needs the source's nodeID and client.
+ go func() {
+ defer close(handled)
+ fetch, reloadOffsets, preferreds, allErrsStripped, updateWhy = s.handleReqResp(br, req, resp)
+ }()
+
+ select {
+ case <-handled:
+ case <-ctx.Done():
+ return
+ }
+
+ // The logic below here should be relatively quick.
+ //
+ // Note that fetch runs entirely in the context of a consumer session.
+ // loopFetch does not return until this function does, meaning we
+ // cannot concurrently issue a second fetch for partitions that are
+ // being processed below.
+
+ deleteReqUsedOffset := func(topic string, partition int32) {
+ t := req.usedOffsets[topic]
+ delete(t, partition)
+ if len(t) == 0 {
+ delete(req.usedOffsets, topic)
+ }
+ }
+
+ // Before updating the source, we move all cursors that have new
+ // preferred replicas and remove them from being tracked in our req
+ // offsets. We also remove the reload offsets from our req offsets.
+ //
+ // These two removals transition responsibility for finishing using the
+ // cursor from the request's used offsets to the new source or the
+ // reloading.
+ if len(preferreds) > 0 {
+ s.cl.cfg.logger.Log(LogLevelInfo, "fetch partitions returned preferred replicas",
+ "from_broker", s.nodeID,
+ "moves", preferreds.String(),
+ )
+ }
+ preferreds.eachPreferred(func(c cursorOffsetPreferred) {
+ c.move()
+ deleteReqUsedOffset(c.from.topic, c.from.partition)
+ })
+ reloadOffsets.each(deleteReqUsedOffset)
+
+ // The session on the request was updated; we keep those updates.
+ s.session = req.session
+
+ // handleReqResp only parses the body of the response, not the top
+ // level error code.
+ //
+ // The top level error code is related to fetch sessions only, and if
+ // there was an error, the body was empty (so processing is basically a
+ // no-op). We process the fetch session error now.
+ switch err := kerr.ErrorForCode(resp.ErrorCode); err {
+ case kerr.FetchSessionIDNotFound:
+ if s.session.epoch == 0 {
+ // If the epoch was zero, the broker did not even
+ // establish a session for us (and thus is maxed on
+ // sessions). We stop trying.
+ s.cl.cfg.logger.Log(LogLevelInfo, "session failed with SessionIDNotFound while trying to establish a session; broker likely maxed on sessions; continuing on without using sessions", "broker", logID(s.nodeID))
+ s.session.kill()
+ } else {
+ s.cl.cfg.logger.Log(LogLevelInfo, "received SessionIDNotFound from our in use session, our session was likely evicted; resetting session", "broker", logID(s.nodeID))
+ s.session.reset()
+ }
+ return
+ case kerr.InvalidFetchSessionEpoch:
+ s.cl.cfg.logger.Log(LogLevelInfo, "resetting fetch session", "broker", logID(s.nodeID), "err", err)
+ s.session.reset()
+ return
+
+ case kerr.FetchSessionTopicIDError, kerr.InconsistentTopicID:
+ s.cl.cfg.logger.Log(LogLevelInfo, "topic id issues, resetting session and updating metadata", "broker", logID(s.nodeID), "err", err)
+ s.session.reset()
+ s.cl.triggerUpdateMetadataNow("topic id issues")
+ return
+ }
+
+ // At this point, we have successfully processed the response. Even if
+ // the response contains no records, we want to keep any offset
+ // advancements (we could have consumed only control records, we must
+ // advance past them).
+ setOffsets = true
+
+ if resp.Version < 7 || resp.SessionID <= 0 {
+ // If the version is less than 7, we cannot use fetch sessions,
+ // so we kill them on the first response.
+ s.session.kill()
+ } else {
+ s.session.bumpEpoch(resp.SessionID)
+ }
+
+ // If we have a reason to update (per-partition fetch errors), and the
+ // reason is not just unknown topic or partition, then we immediately
+ // update metadata. We avoid updating for unknown because it _likely_
+ // means the topic does not exist and reloading is wasteful. We only
+ // trigger a metadata update if we have no reload offsets. Having
+ // reload offsets *always* triggers a metadata update.
+ if updateWhy != nil {
+ why := updateWhy.reason(fmt.Sprintf("fetch had inner topic errors from broker %d", s.nodeID))
+ // loadWithSessionNow triggers a metadata update IF there are
+ // offsets to reload. If there are no offsets to reload, we
+ // trigger one here.
+ if !reloadOffsets.loadWithSessionNow(consumerSession, why) {
+ if updateWhy.isOnly(kerr.UnknownTopicOrPartition) || updateWhy.isOnly(kerr.UnknownTopicID) {
+ s.cl.triggerUpdateMetadata(false, why)
+ } else {
+ s.cl.triggerUpdateMetadataNow(why)
+ }
+ }
+ }
+
+ if fetch.hasErrorsOrRecords() {
+ buffered = true
+ s.buffered = bufferedFetch{
+ fetch: fetch,
+ doneFetch: doneFetch,
+ usedOffsets: req.usedOffsets,
+ }
+ s.sem = make(chan struct{})
+ s.hook(&fetch, true, false) // buffered, not polled
+ s.cl.consumer.addSourceReadyForDraining(s)
+ } else if allErrsStripped {
+ // If we stripped all errors from the response, we are likely
+ // fetching from topics that were deleted. We want to back off
+ // a bit rather than spin-loop immediately re-requesting
+ // deleted topics.
+ backoff("empty fetch response due to all partitions having retryable errors")
+ }
+ return
+}
+
+// Parses a fetch response into a Fetch, offsets to reload, and whether
+// metadata needs updating.
+//
+// This only uses a source's broker and client, and thus does not need
+// the source mutex.
+//
+// This function, and everything it calls, is side effect free.
+func (s *source) handleReqResp(br *broker, req *fetchRequest, resp *kmsg.FetchResponse) (
+ f Fetch,
+ reloadOffsets listOrEpochLoads,
+ preferreds cursorPreferreds,
+ allErrsStripped bool,
+ updateWhy multiUpdateWhy,
+) {
+ f = Fetch{Topics: make([]FetchTopic, 0, len(resp.Topics))}
+ var (
+ debugWhyStripped multiUpdateWhy
+ numErrsStripped int
+ kip320 = s.cl.supportsOffsetForLeaderEpoch()
+ kmove kip951move
+ )
+ defer kmove.maybeBeginMove(s.cl)
+
+ strip := func(t string, p int32, err error) {
+ numErrsStripped++
+ if s.cl.cfg.logger.Level() < LogLevelDebug {
+ return
+ }
+ debugWhyStripped.add(t, p, err)
+ }
+
+ for _, rt := range resp.Topics {
+ topic := rt.Topic
+ // v13 only uses topic IDs, so we have to map the response
+ // uuid's to our string topics.
+ if resp.Version >= 13 {
+ topic = req.id2topic[rt.TopicID]
+ }
+
+ // We always include all cursors on this source in the fetch;
+ // we should not receive any topics or partitions we do not
+ // expect.
+ topicOffsets, ok := req.usedOffsets[topic]
+ if !ok {
+ s.cl.cfg.logger.Log(LogLevelWarn, "broker returned topic from fetch that we did not ask for",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ )
+ continue
+ }
+
+ fetchTopic := FetchTopic{
+ Topic: topic,
+ Partitions: make([]FetchPartition, 0, len(rt.Partitions)),
+ }
+
+ for i := range rt.Partitions {
+ rp := &rt.Partitions[i]
+ partition := rp.Partition
+ partOffset, ok := topicOffsets[partition]
+ if !ok {
+ s.cl.cfg.logger.Log(LogLevelWarn, "broker returned partition from fetch that we did not ask for",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", partition,
+ )
+ continue
+ }
+
+ // If we are fetching from the replica already, Kafka replies with a -1
+ // preferred read replica. If Kafka replies with a preferred replica,
+ // it sends no records.
+ if preferred := rp.PreferredReadReplica; resp.Version >= 11 && preferred >= 0 {
+ preferreds = append(preferreds, cursorOffsetPreferred{
+ *partOffset,
+ preferred,
+ })
+ continue
+ }
+
+ fp := partOffset.processRespPartition(br, rp, s.cl.decompressor, s.cl.cfg.hooks)
+ if fp.Err != nil {
+ if moving := kmove.maybeAddFetchPartition(resp, rp, partOffset.from); moving {
+ strip(topic, partition, fp.Err)
+ continue
+ }
+ updateWhy.add(topic, partition, fp.Err)
+ }
+
+ // We only keep the partition if it has no error, or an
+ // error we do not internally retry.
+ var keep bool
+ switch fp.Err {
+ default:
+ if kerr.IsRetriable(fp.Err) && !s.cl.cfg.keepRetryableFetchErrors {
+ // UnknownLeaderEpoch: our meta is newer than the broker we fetched from
+ // OffsetNotAvailable: fetched from out of sync replica or a behind in-sync one (KIP-392 case 1 and case 2)
+ // UnknownTopicID: kafka has not synced the state on all brokers
+ // And other standard retryable errors.
+ strip(topic, partition, fp.Err)
+ } else {
+ // - bad auth
+ // - unsupported compression
+ // - unsupported message version
+ // - unknown error
+ // - or, no error
+ keep = true
+ }
+
+ case nil:
+ partOffset.from.unknownIDFails.Store(0)
+ keep = true
+
+ case kerr.UnknownTopicID:
+ // We need to keep UnknownTopicID even though it is
+ // retryable, because encountering this error means
+ // the topic has been recreated and we will never
+ // consume the topic again anymore. This is an error
+ // worth bubbling up.
+ //
+ // Kafka will actually return this error for a brief
+ // window immediately after creating a topic for the
+ // first time, meaning the controller has not yet
+ // propagated to the leader that it is now the leader
+ // of a new partition. We need to ignore this error
+ // for a little bit.
+ if fails := partOffset.from.unknownIDFails.Add(1); fails > 5 {
+ partOffset.from.unknownIDFails.Add(-1)
+ keep = true
+ } else if s.cl.cfg.keepRetryableFetchErrors {
+ keep = true
+ } else {
+ strip(topic, partition, fp.Err)
+ }
+
+ case kerr.OffsetOutOfRange:
+ // If we are out of range, we reset to what we can.
+ // With Kafka >= 2.1, we should only get offset out
+ // of range if we fetch before the start, but a user
+ // could start past the end and want to reset to
+ // the end. We respect that.
+ //
+ // KIP-392 (case 3) specifies that if we are consuming
+ // from a follower, then if our offset request is before
+ // the low watermark, we list offsets from the follower.
+ //
+ // KIP-392 (case 4) specifies that if we are consuming
+ // a follower and our request is larger than the high
+ // watermark, then we should first check for truncation
+ // from the leader and then if we still get out of
+ // range, reset with list offsets.
+ //
+ // It further goes on to say that "out of range errors
+ // due to ISR propagation delays should be extremely
+ // rare". Rather than falling back to listing offsets,
+ // we stay in a cycle of validating the leader epoch
+ // until the follower has caught up.
+ //
+ // In all cases except case 4, we also have to check if
+ // no reset offset was configured. If so, we ignore
+ // trying to reset and instead keep our failed partition.
+ addList := func(replica int32, log bool) {
+ if s.cl.cfg.resetOffset.noReset {
+ keep = true
+ } else if !partOffset.from.lastConsumedTime.IsZero() {
+ reloadOffsets.addLoad(topic, partition, loadTypeList, offsetLoad{
+ replica: replica,
+ Offset: NewOffset().AfterMilli(partOffset.from.lastConsumedTime.UnixMilli()),
+ })
+ if log {
+ s.cl.cfg.logger.Log(LogLevelWarn, "received OFFSET_OUT_OF_RANGE, resetting to the nearest offset; either you were consuming too slowly and the broker has deleted the segment you were in the middle of consuming, or the broker has lost data and has not yet transferred leadership",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", partition,
+ "prior_offset", partOffset.offset,
+ )
+ }
+ } else {
+ reloadOffsets.addLoad(topic, partition, loadTypeList, offsetLoad{
+ replica: replica,
+ Offset: s.cl.cfg.resetOffset,
+ })
+ if log {
+ s.cl.cfg.logger.Log(LogLevelInfo, "received OFFSET_OUT_OF_RANGE on the first fetch, resetting to the configured ConsumeResetOffset",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", partition,
+ "prior_offset", partOffset.offset,
+ )
+ }
+ }
+ }
+
+ switch {
+ case s.nodeID == partOffset.from.leader: // non KIP-392 case
+ addList(-1, true)
+
+ case partOffset.offset < fp.LogStartOffset: // KIP-392 case 3
+ addList(s.nodeID, false)
+
+ default: // partOffset.offset > fp.HighWatermark, KIP-392 case 4
+ if kip320 {
+ reloadOffsets.addLoad(topic, partition, loadTypeEpoch, offsetLoad{
+ replica: -1,
+ Offset: Offset{
+ at: partOffset.offset,
+ epoch: partOffset.lastConsumedEpoch,
+ },
+ })
+ } else {
+ // If the broker does not support offset for leader epoch but
+ // does support follower fetching for some reason, we have to
+ // fallback to listing.
+ addList(-1, true)
+ }
+ }
+
+ case kerr.FencedLeaderEpoch:
+ // With fenced leader epoch, we notify an error only
+ // if necessary after we find out if loss occurred.
+ // If we have consumed nothing, then we got unlucky
+ // by being fenced right after we grabbed metadata.
+ // We just refresh metadata and try again.
+ //
+ // It would be odd for a broker to reply we are fenced
+ // but not support offset for leader epoch, so we do
+ // not check KIP-320 support here.
+ if partOffset.lastConsumedEpoch >= 0 {
+ reloadOffsets.addLoad(topic, partition, loadTypeEpoch, offsetLoad{
+ replica: -1,
+ Offset: Offset{
+ at: partOffset.offset,
+ epoch: partOffset.lastConsumedEpoch,
+ },
+ })
+ }
+ }
+
+ if keep {
+ fetchTopic.Partitions = append(fetchTopic.Partitions, fp)
+ }
+ }
+
+ if len(fetchTopic.Partitions) > 0 {
+ f.Topics = append(f.Topics, fetchTopic)
+ }
+ }
+
+ if s.cl.cfg.logger.Level() >= LogLevelDebug && len(debugWhyStripped) > 0 {
+ s.cl.cfg.logger.Log(LogLevelDebug, "fetch stripped partitions", "why", debugWhyStripped.reason(""))
+ }
+
+ return f, reloadOffsets, preferreds, req.numOffsets == numErrsStripped, updateWhy
+}
+
+// processRespPartition processes all records in all potentially compressed
+// batches (or message sets).
+func (o *cursorOffsetNext) processRespPartition(br *broker, rp *kmsg.FetchResponseTopicPartition, decompressor *decompressor, hooks hooks) FetchPartition {
+ fp := FetchPartition{
+ Partition: rp.Partition,
+ Err: kerr.ErrorForCode(rp.ErrorCode),
+ HighWatermark: rp.HighWatermark,
+ LastStableOffset: rp.LastStableOffset,
+ LogStartOffset: rp.LogStartOffset,
+ }
+ if rp.ErrorCode == 0 {
+ o.hwm = rp.HighWatermark
+ }
+
+ var aborter aborter
+ if br.cl.cfg.isolationLevel == 1 {
+ aborter = buildAborter(rp)
+ }
+
+ // A response could contain any of message v0, message v1, or record
+ // batches, and this is solely dictated by the magic byte (not the
+ // fetch response version). The magic byte is located at byte 17.
+ //
+ // 1 thru 8: int64 offset / first offset
+ // 9 thru 12: int32 length
+ // 13 thru 16: crc (magic 0 or 1), or partition leader epoch (magic 2)
+ // 17: magic
+ //
+ // We decode and validate similarly for messages and record batches, so
+ // we "abstract" away the high level stuff into a check function just
+ // below, and then switch based on the magic for how to process.
+ var (
+ in = rp.RecordBatches
+
+ r readerFrom
+ kind string
+ length int32
+ lengthField *int32
+ crcField *int32
+ crcTable *crc32.Table
+ crcAt int
+
+ check = func() bool {
+ // If we call into check, we know we have a valid
+ // length, so we should be at least able to parse our
+ // top level struct and validate the length and CRC.
+ if err := r.ReadFrom(in[:length]); err != nil {
+ fp.Err = fmt.Errorf("unable to read %s, not enough data", kind)
+ return false
+ }
+ if length := int32(len(in[12:length])); length != *lengthField {
+ fp.Err = fmt.Errorf("encoded length %d does not match read length %d", *lengthField, length)
+ return false
+ }
+ // We have already validated that the slice is at least
+ // 17 bytes, but our CRC may be later (i.e. RecordBatch
+ // starts at byte 21). Ensure there is at least space
+ // for a CRC.
+ if len(in) < crcAt {
+ fp.Err = fmt.Errorf("length %d is too short to allow for a crc", len(in))
+ return false
+ }
+ if crcCalc := int32(crc32.Checksum(in[crcAt:length], crcTable)); crcCalc != *crcField {
+ fp.Err = fmt.Errorf("encoded crc %x does not match calculated crc %x", *crcField, crcCalc)
+ return false
+ }
+ return true
+ }
+ )
+
+ for len(in) > 17 && fp.Err == nil {
+ offset := int64(binary.BigEndian.Uint64(in))
+ length = int32(binary.BigEndian.Uint32(in[8:]))
+ length += 12 // for the int64 offset we skipped and int32 length field itself
+ if len(in) < int(length) {
+ break
+ }
+
+ switch magic := in[16]; magic {
+ case 0:
+ m := new(kmsg.MessageV0)
+ kind = "message v0"
+ lengthField = &m.MessageSize
+ crcField = &m.CRC
+ crcTable = crc32.IEEETable
+ crcAt = 16
+ r = m
+ case 1:
+ m := new(kmsg.MessageV1)
+ kind = "message v1"
+ lengthField = &m.MessageSize
+ crcField = &m.CRC
+ crcTable = crc32.IEEETable
+ crcAt = 16
+ r = m
+ case 2:
+ rb := new(kmsg.RecordBatch)
+ kind = "record batch"
+ lengthField = &rb.Length
+ crcField = &rb.CRC
+ crcTable = crc32c
+ crcAt = 21
+ r = rb
+
+ default:
+ fp.Err = fmt.Errorf("unknown magic %d; message offset is %d and length is %d, skipping and setting to next offset", magic, offset, length)
+ if next := offset + 1; next > o.offset {
+ o.offset = next
+ }
+ return fp
+ }
+
+ if !check() {
+ break
+ }
+
+ in = in[length:]
+
+ var m FetchBatchMetrics
+
+ switch t := r.(type) {
+ case *kmsg.MessageV0:
+ m.CompressedBytes = int(length) // for message sets, we include the message set overhead in length
+ m.CompressionType = uint8(t.Attributes) & 0b0000_0111
+ m.NumRecords, m.UncompressedBytes = o.processV0OuterMessage(&fp, t, decompressor)
+
+ case *kmsg.MessageV1:
+ m.CompressedBytes = int(length)
+ m.CompressionType = uint8(t.Attributes) & 0b0000_0111
+ m.NumRecords, m.UncompressedBytes = o.processV1OuterMessage(&fp, t, decompressor)
+
+ case *kmsg.RecordBatch:
+ m.CompressedBytes = len(t.Records) // for record batches, we only track the record batch length
+ m.CompressionType = uint8(t.Attributes) & 0b0000_0111
+ m.NumRecords, m.UncompressedBytes = o.processRecordBatch(&fp, t, aborter, decompressor)
+ }
+
+ if m.UncompressedBytes == 0 {
+ m.UncompressedBytes = m.CompressedBytes
+ }
+ hooks.each(func(h Hook) {
+ if h, ok := h.(HookFetchBatchRead); ok {
+ h.OnFetchBatchRead(br.meta, o.from.topic, o.from.partition, m)
+ }
+ })
+ }
+
+ return fp
+}
+
+type aborter map[int64][]int64
+
+func buildAborter(rp *kmsg.FetchResponseTopicPartition) aborter {
+ if len(rp.AbortedTransactions) == 0 {
+ return nil
+ }
+ a := make(aborter)
+ for _, abort := range rp.AbortedTransactions {
+ a[abort.ProducerID] = append(a[abort.ProducerID], abort.FirstOffset)
+ }
+ return a
+}
+
+func (a aborter) shouldAbortBatch(b *kmsg.RecordBatch) bool {
+ if len(a) == 0 || b.Attributes&0b0001_0000 == 0 {
+ return false
+ }
+ pidAborts := a[b.ProducerID]
+ if len(pidAborts) == 0 {
+ return false
+ }
+ // If the first offset in this batch is less than the first offset
+ // aborted, then this batch is not aborted.
+ if b.FirstOffset < pidAborts[0] {
+ return false
+ }
+ return true
+}
+
+func (a aborter) trackAbortedPID(producerID int64) {
+ remaining := a[producerID][1:]
+ if len(remaining) == 0 {
+ delete(a, producerID)
+ } else {
+ a[producerID] = remaining
+ }
+}
+
+//////////////////////////////////////
+// processing records to fetch part //
+//////////////////////////////////////
+
+// readRawRecords reads n records from in and returns them, returning early if
+// there were partial records.
+func readRawRecords(n int, in []byte) []kmsg.Record {
+ rs := make([]kmsg.Record, n)
+ for i := 0; i < n; i++ {
+ length, used := kbin.Varint(in)
+ total := used + int(length)
+ if used == 0 || length < 0 || len(in) < total {
+ return rs[:i]
+ }
+ if err := (&rs[i]).ReadFrom(in[:total]); err != nil {
+ return rs[:i]
+ }
+ in = in[total:]
+ }
+ return rs
+}
+
+func (o *cursorOffsetNext) processRecordBatch(
+ fp *FetchPartition,
+ batch *kmsg.RecordBatch,
+ aborter aborter,
+ decompressor *decompressor,
+) (int, int) {
+ if batch.Magic != 2 {
+ fp.Err = fmt.Errorf("unknown batch magic %d", batch.Magic)
+ return 0, 0
+ }
+ lastOffset := batch.FirstOffset + int64(batch.LastOffsetDelta)
+ if lastOffset < o.offset {
+ // If the last offset in this batch is less than what we asked
+ // for, we got a batch that we entirely do not need. We can
+ // avoid all work (although we should not get this batch).
+ return 0, 0
+ }
+
+ rawRecords := batch.Records
+ if compression := byte(batch.Attributes & 0x0007); compression != 0 {
+ var err error
+ if rawRecords, err = decompressor.decompress(rawRecords, compression); err != nil {
+ return 0, 0 // truncated batch
+ }
+ }
+
+ uncompressedBytes := len(rawRecords)
+
+ numRecords := int(batch.NumRecords)
+ krecords := readRawRecords(numRecords, rawRecords)
+
+ // KAFKA-5443: compacted topics preserve the last offset in a batch,
+ // even if the last record is removed, meaning that using offsets from
+ // records alone may not get us to the next offset we need to ask for.
+ //
+ // We only perform this logic if we did not consume a truncated batch.
+ // If we consume a truncated batch, then what was truncated could have
+ // been an offset we are interested in consuming. Even if our fetch did
+ // not advance this partition at all, we will eventually fetch from the
+ // partition and not have a truncated response, at which point we will
+ // either advance offsets or will set to nextAskOffset.
+ nextAskOffset := lastOffset + 1
+ defer func() {
+ if numRecords == len(krecords) && o.offset < nextAskOffset {
+ o.offset = nextAskOffset
+ }
+ }()
+
+ abortBatch := aborter.shouldAbortBatch(batch)
+ for i := range krecords {
+ record := recordToRecord(
+ o.from.topic,
+ fp.Partition,
+ batch,
+ &krecords[i],
+ )
+ o.maybeKeepRecord(fp, record, abortBatch)
+
+ if abortBatch && record.Attrs.IsControl() {
+ // A control record has a key and a value where the key
+ // is int16 version and int16 type. Aborted records
+ // have a type of 0.
+ if key := record.Key; len(key) >= 4 && key[2] == 0 && key[3] == 0 {
+ aborter.trackAbortedPID(batch.ProducerID)
+ }
+ }
+ }
+
+ return len(krecords), uncompressedBytes
+}
+
+// Processes an outer v1 message. There could be no inner message, which makes
+// this easy, but if not, we decompress and process each inner message as
+// either v0 or v1. We only expect the inner message to be v1, but technically
+// a crazy pipeline could have v0 anywhere.
+func (o *cursorOffsetNext) processV1OuterMessage(
+ fp *FetchPartition,
+ message *kmsg.MessageV1,
+ decompressor *decompressor,
+) (int, int) {
+ compression := byte(message.Attributes & 0x0003)
+ if compression == 0 {
+ o.processV1Message(fp, message)
+ return 1, 0
+ }
+
+ rawInner, err := decompressor.decompress(message.Value, compression)
+ if err != nil {
+ return 0, 0 // truncated batch
+ }
+
+ uncompressedBytes := len(rawInner)
+
+ var innerMessages []readerFrom
+out:
+ for len(rawInner) > 17 { // magic at byte 17
+ length := int32(binary.BigEndian.Uint32(rawInner[8:]))
+ length += 12 // offset and length fields
+ if len(rawInner) < int(length) {
+ break
+ }
+
+ var (
+ magic = rawInner[16]
+
+ msg readerFrom
+ lengthField *int32
+ crcField *int32
+ )
+
+ switch magic {
+ case 0:
+ m := new(kmsg.MessageV0)
+ msg = m
+ lengthField = &m.MessageSize
+ crcField = &m.CRC
+ case 1:
+ m := new(kmsg.MessageV1)
+ msg = m
+ lengthField = &m.MessageSize
+ crcField = &m.CRC
+
+ default:
+ fp.Err = fmt.Errorf("message set v1 has inner message with invalid magic %d", magic)
+ break out
+ }
+
+ if err := msg.ReadFrom(rawInner[:length]); err != nil {
+ fp.Err = fmt.Errorf("unable to read message v%d, not enough data", magic)
+ break
+ }
+ if length := int32(len(rawInner[12:length])); length != *lengthField {
+ fp.Err = fmt.Errorf("encoded length %d does not match read length %d", *lengthField, length)
+ break
+ }
+ if crcCalc := int32(crc32.ChecksumIEEE(rawInner[16:length])); crcCalc != *crcField {
+ fp.Err = fmt.Errorf("encoded crc %x does not match calculated crc %x", *crcField, crcCalc)
+ break
+ }
+ innerMessages = append(innerMessages, msg)
+ rawInner = rawInner[length:]
+ }
+ if len(innerMessages) == 0 {
+ return 0, uncompressedBytes
+ }
+
+ firstOffset := message.Offset - int64(len(innerMessages)) + 1
+ for i := range innerMessages {
+ innerMessage := innerMessages[i]
+ switch innerMessage := innerMessage.(type) {
+ case *kmsg.MessageV0:
+ innerMessage.Offset = firstOffset + int64(i)
+ innerMessage.Attributes |= int8(compression)
+ if !o.processV0Message(fp, innerMessage) {
+ return i, uncompressedBytes
+ }
+ case *kmsg.MessageV1:
+ innerMessage.Offset = firstOffset + int64(i)
+ innerMessage.Attributes |= int8(compression)
+ if !o.processV1Message(fp, innerMessage) {
+ return i, uncompressedBytes
+ }
+ }
+ }
+ return len(innerMessages), uncompressedBytes
+}
+
+func (o *cursorOffsetNext) processV1Message(
+ fp *FetchPartition,
+ message *kmsg.MessageV1,
+) bool {
+ if message.Magic != 1 {
+ fp.Err = fmt.Errorf("unknown message magic %d", message.Magic)
+ return false
+ }
+ if uint8(message.Attributes)&0b1111_0000 != 0 {
+ fp.Err = fmt.Errorf("unknown attributes on message %d", message.Attributes)
+ return false
+ }
+ record := v1MessageToRecord(o.from.topic, fp.Partition, message)
+ o.maybeKeepRecord(fp, record, false)
+ return true
+}
+
+// Processes an outer v0 message. We expect inner messages to be entirely v0 as
+// well, so this only tries v0 always.
+func (o *cursorOffsetNext) processV0OuterMessage(
+ fp *FetchPartition,
+ message *kmsg.MessageV0,
+ decompressor *decompressor,
+) (int, int) {
+ compression := byte(message.Attributes & 0x0003)
+ if compression == 0 {
+ o.processV0Message(fp, message)
+ return 1, 0 // uncompressed bytes is 0; set to compressed bytes on return
+ }
+
+ rawInner, err := decompressor.decompress(message.Value, compression)
+ if err != nil {
+ return 0, 0 // truncated batch
+ }
+
+ uncompressedBytes := len(rawInner)
+
+ var innerMessages []kmsg.MessageV0
+ for len(rawInner) > 17 { // magic at byte 17
+ length := int32(binary.BigEndian.Uint32(rawInner[8:]))
+ length += 12 // offset and length fields
+ if len(rawInner) < int(length) {
+ break // truncated batch
+ }
+ var m kmsg.MessageV0
+ if err := m.ReadFrom(rawInner[:length]); err != nil {
+ fp.Err = fmt.Errorf("unable to read message v0, not enough data")
+ break
+ }
+ if length := int32(len(rawInner[12:length])); length != m.MessageSize {
+ fp.Err = fmt.Errorf("encoded length %d does not match read length %d", m.MessageSize, length)
+ break
+ }
+ if crcCalc := int32(crc32.ChecksumIEEE(rawInner[16:length])); crcCalc != m.CRC {
+ fp.Err = fmt.Errorf("encoded crc %x does not match calculated crc %x", m.CRC, crcCalc)
+ break
+ }
+ innerMessages = append(innerMessages, m)
+ rawInner = rawInner[length:]
+ }
+ if len(innerMessages) == 0 {
+ return 0, uncompressedBytes
+ }
+
+ firstOffset := message.Offset - int64(len(innerMessages)) + 1
+ for i := range innerMessages {
+ innerMessage := &innerMessages[i]
+ innerMessage.Attributes |= int8(compression)
+ innerMessage.Offset = firstOffset + int64(i)
+ if !o.processV0Message(fp, innerMessage) {
+ return i, uncompressedBytes
+ }
+ }
+ return len(innerMessages), uncompressedBytes
+}
+
+func (o *cursorOffsetNext) processV0Message(
+ fp *FetchPartition,
+ message *kmsg.MessageV0,
+) bool {
+ if message.Magic != 0 {
+ fp.Err = fmt.Errorf("unknown message magic %d", message.Magic)
+ return false
+ }
+ if uint8(message.Attributes)&0b1111_1000 != 0 {
+ fp.Err = fmt.Errorf("unknown attributes on message %d", message.Attributes)
+ return false
+ }
+ record := v0MessageToRecord(o.from.topic, fp.Partition, message)
+ o.maybeKeepRecord(fp, record, false)
+ return true
+}
+
+// maybeKeepRecord keeps a record if it is within our range of offsets to keep.
+//
+// If the record is being aborted or the record is a control record and the
+// client does not want to keep control records, this does not keep the record.
+func (o *cursorOffsetNext) maybeKeepRecord(fp *FetchPartition, record *Record, abort bool) {
+ if record.Offset < o.offset {
+ // We asked for offset 5, but that was in the middle of a
+ // batch; we got offsets 0 thru 4 that we need to skip.
+ return
+ }
+
+ // We only keep control records if specifically requested.
+ if record.Attrs.IsControl() {
+ abort = !o.from.keepControl
+ }
+ if !abort {
+ fp.Records = append(fp.Records, record)
+ }
+
+ // The record offset may be much larger than our expected offset if the
+ // topic is compacted.
+ o.offset = record.Offset + 1
+ o.lastConsumedEpoch = record.LeaderEpoch
+ o.lastConsumedTime = record.Timestamp
+}
+
+///////////////////////////////
+// kmsg.Record to kgo.Record //
+///////////////////////////////
+
+func timeFromMillis(millis int64) time.Time {
+ return time.Unix(0, millis*1e6)
+}
+
+// recordToRecord converts a kmsg.RecordBatch's Record to a kgo Record.
+func recordToRecord(
+ topic string,
+ partition int32,
+ batch *kmsg.RecordBatch,
+ record *kmsg.Record,
+) *Record {
+ h := make([]RecordHeader, 0, len(record.Headers))
+ for _, kv := range record.Headers {
+ h = append(h, RecordHeader{
+ Key: kv.Key,
+ Value: kv.Value,
+ })
+ }
+
+ r := &Record{
+ Key: record.Key,
+ Value: record.Value,
+ Headers: h,
+ Topic: topic,
+ Partition: partition,
+ Attrs: RecordAttrs{uint8(batch.Attributes)},
+ ProducerID: batch.ProducerID,
+ ProducerEpoch: batch.ProducerEpoch,
+ LeaderEpoch: batch.PartitionLeaderEpoch,
+ Offset: batch.FirstOffset + int64(record.OffsetDelta),
+ }
+ if r.Attrs.TimestampType() == 0 {
+ r.Timestamp = timeFromMillis(batch.FirstTimestamp + record.TimestampDelta64)
+ } else {
+ r.Timestamp = timeFromMillis(batch.MaxTimestamp)
+ }
+ return r
+}
+
+func messageAttrsToRecordAttrs(attrs int8, v0 bool) RecordAttrs {
+ uattrs := uint8(attrs)
+ if v0 {
+ uattrs |= 0b1000_0000
+ }
+ return RecordAttrs{uattrs}
+}
+
+func v0MessageToRecord(
+ topic string,
+ partition int32,
+ message *kmsg.MessageV0,
+) *Record {
+ return &Record{
+ Key: message.Key,
+ Value: message.Value,
+ Topic: topic,
+ Partition: partition,
+ Attrs: messageAttrsToRecordAttrs(message.Attributes, true),
+ ProducerID: -1,
+ ProducerEpoch: -1,
+ LeaderEpoch: -1,
+ Offset: message.Offset,
+ }
+}
+
+func v1MessageToRecord(
+ topic string,
+ partition int32,
+ message *kmsg.MessageV1,
+) *Record {
+ return &Record{
+ Key: message.Key,
+ Value: message.Value,
+ Timestamp: timeFromMillis(message.Timestamp),
+ Topic: topic,
+ Partition: partition,
+ Attrs: messageAttrsToRecordAttrs(message.Attributes, false),
+ ProducerID: -1,
+ ProducerEpoch: -1,
+ LeaderEpoch: -1,
+ Offset: message.Offset,
+ }
+}
+
+//////////////////
+// fetchRequest //
+//////////////////
+
+type fetchRequest struct {
+ version int16
+ maxWait int32
+ minBytes int32
+ maxBytes int32
+ maxPartBytes int32
+ rack string
+
+ isolationLevel int8
+ preferLagFn PreferLagFn
+
+ numOffsets int
+ usedOffsets usedOffsets
+
+ torder []string // order of topics to write
+ porder map[string][]int32 // per topic, order of partitions to write
+
+ // topic2id and id2topic track bidirectional lookup of topics and IDs
+ // that are being added to *this* specific request. topic2id slightly
+ // duplicates the map t2id in the fetch session, but t2id is different
+ // in that t2id tracks IDs in use from all prior requests -- and,
+ // importantly, t2id is cleared of IDs that are no longer used (see
+ // ForgottenTopics).
+ //
+ // We need to have both a session t2id map and a request t2id map:
+ //
+ // * The session t2id is what we use when creating forgotten topics.
+ // If we are forgetting a topic, the ID is not in the req t2id.
+ //
+ // * The req topic2id is used for adding to the session t2id. When
+ // building a request, if the id is in req.topic2id but not
+ // session.t2id, we promote the ID into the session map.
+ //
+ // Lastly, id2topic is used when handling the response, as our reverse
+ // lookup from the ID back to the topic (and then we work with the
+ // topic name only). There is no equivalent in the session because
+ // there is no need for the id2topic lookup ever in the session.
+ topic2id map[string][16]byte
+ id2topic map[[16]byte]string
+
+ disableIDs bool // #295: using an old IBP on new Kafka results in ApiVersions advertising 13+ while the broker does not return IDs
+
+ // Session is a copy of the source session at the time a request is
+ // built. If the source is reset, the session it has is reset at the
+ // field level only. Our view of the original session is still valid.
+ session fetchSession
+}
+
+func (f *fetchRequest) addCursor(c *cursor) {
+ if f.usedOffsets == nil {
+ f.usedOffsets = make(usedOffsets)
+ f.id2topic = make(map[[16]byte]string)
+ f.topic2id = make(map[string][16]byte)
+ f.porder = make(map[string][]int32)
+ }
+ partitions := f.usedOffsets[c.topic]
+ if partitions == nil {
+ partitions = make(map[int32]*cursorOffsetNext)
+ f.usedOffsets[c.topic] = partitions
+ f.id2topic[c.topicID] = c.topic
+ f.topic2id[c.topic] = c.topicID
+ var noID [16]byte
+ if c.topicID == noID {
+ f.disableIDs = true
+ }
+ f.torder = append(f.torder, c.topic)
+ }
+ partitions[c.partition] = c.use()
+ f.porder[c.topic] = append(f.porder[c.topic], c.partition)
+ f.numOffsets++
+}
+
+// PreferLagFn accepts topic and partition lag, the previously determined topic
+// order, and the previously determined per-topic partition order, and returns
+// a new topic and per-topic partition order.
+//
+// Most use cases will not need to look at the prior orders, but they exist if
+// you want to get fancy.
+//
+// You can return partial results: if you only return topics, partitions within
+// each topic keep their prior ordering. If you only return some topics but not
+// all, the topics you do not return / the partitions you do not return will
+// retain their original ordering *after* your given ordering.
+//
+// NOTE: torderPrior and porderPrior must not be modified. To avoid a bit of
+// unnecessary allocations, these arguments are views into data that is used to
+// build a fetch request.
+type PreferLagFn func(lag map[string]map[int32]int64, torderPrior []string, porderPrior map[string][]int32) ([]string, map[string][]int32)
+
+// PreferLagAt is a simple PreferLagFn that orders the largest lag first, for
+// any topic that is collectively lagging more than preferLagAt, and for any
+// partition that is lagging more than preferLagAt.
+//
+// The function does not prescribe any ordering for topics that have the same
+// lag. It is recommended to use a number more than 0 or 1: if you use 0, you
+// may just always undo client ordering when there is no actual lag.
+func PreferLagAt(preferLagAt int64) PreferLagFn {
+ if preferLagAt < 0 {
+ return nil
+ }
+ return func(lag map[string]map[int32]int64, _ []string, _ map[string][]int32) ([]string, map[string][]int32) {
+ type plag struct {
+ p int32
+ lag int64
+ }
+ type tlag struct {
+ t string
+ lag int64
+ ps []plag
+ }
+
+ // First, collect all partition lag into per-topic lag.
+ tlags := make(map[string]tlag, len(lag))
+ for t, ps := range lag {
+ for p, lag := range ps {
+ prior := tlags[t]
+ tlags[t] = tlag{
+ t: t,
+ lag: prior.lag + lag,
+ ps: append(prior.ps, plag{p, lag}),
+ }
+ }
+ }
+
+ // We now remove topics and partitions that are not lagging
+ // enough. Collectively, the topic could be lagging too much,
+ // but individually, no partition is lagging that much: we will
+ // sort the topic first and keep the old partition ordering.
+ for t, tlag := range tlags {
+ if tlag.lag < preferLagAt {
+ delete(tlags, t)
+ continue
+ }
+ for i := 0; i < len(tlag.ps); i++ {
+ plag := tlag.ps[i]
+ if plag.lag < preferLagAt {
+ tlag.ps[i] = tlag.ps[len(tlag.ps)-1]
+ tlag.ps = tlag.ps[:len(tlag.ps)-1]
+ i--
+ }
+ }
+ }
+ if len(tlags) == 0 {
+ return nil, nil
+ }
+
+ var sortedLags []tlag
+ for _, tlag := range tlags {
+ sort.Slice(tlag.ps, func(i, j int) bool { return tlag.ps[i].lag > tlag.ps[j].lag })
+ sortedLags = append(sortedLags, tlag)
+ }
+ sort.Slice(sortedLags, func(i, j int) bool { return sortedLags[i].lag > sortedLags[j].lag })
+
+ // We now return our laggy topics and partitions, and let the
+ // caller add back any missing topics / partitions in their
+ // prior order.
+ torder := make([]string, 0, len(sortedLags))
+ for _, t := range sortedLags {
+ torder = append(torder, t.t)
+ }
+ porder := make(map[string][]int32, len(sortedLags))
+ for _, tlag := range sortedLags {
+ ps := make([]int32, 0, len(tlag.ps))
+ for _, p := range tlag.ps {
+ ps = append(ps, p.p)
+ }
+ porder[tlag.t] = ps
+ }
+ return torder, porder
+ }
+}
+
+// If the end user prefers to consume lag, we reorder our previously ordered
+// partitions, preferring first the laggiest topics, and then within those, the
+// laggiest partitions.
+func (f *fetchRequest) adjustPreferringLag() {
+ if f.preferLagFn == nil {
+ return
+ }
+
+ tall := make(map[string]struct{}, len(f.torder))
+ for _, t := range f.torder {
+ tall[t] = struct{}{}
+ }
+ pall := make(map[string][]int32, len(f.porder))
+ for t, ps := range f.porder {
+ pall[t] = append([]int32(nil), ps...)
+ }
+
+ lag := make(map[string]map[int32]int64, len(f.torder))
+ for t, ps := range f.usedOffsets {
+ plag := make(map[int32]int64, len(ps))
+ lag[t] = plag
+ for p, c := range ps {
+ hwm := c.hwm
+ if c.hwm < 0 {
+ hwm = 0
+ }
+ lag := hwm - c.offset
+ if c.offset <= 0 {
+ lag = hwm
+ }
+ if lag < 0 {
+ lag = 0
+ }
+ plag[p] = lag
+ }
+ }
+
+ torder, porder := f.preferLagFn(lag, f.torder, f.porder)
+ if torder == nil && porder == nil {
+ return
+ }
+ defer func() { f.torder, f.porder = torder, porder }()
+
+ if len(torder) == 0 {
+ torder = f.torder // user did not modify topic order, keep old order
+ } else {
+ // Remove any extra topics the user returned that we were not
+ // consuming, and add all topics they did not give back.
+ for i := 0; i < len(torder); i++ {
+ t := torder[i]
+ if _, exists := tall[t]; !exists {
+ torder = append(torder[:i], torder[i+1:]...) // user gave topic we were not fetching
+ i--
+ }
+ delete(tall, t)
+ }
+ for _, t := range f.torder {
+ if _, exists := tall[t]; exists {
+ torder = append(torder, t) // user did not return topic we were fetching
+ delete(tall, t)
+ }
+ }
+ }
+
+ if len(porder) == 0 {
+ porder = f.porder // user did not modify partition order, keep old order
+ return
+ }
+
+ pused := make(map[int32]struct{})
+ for t, ps := range pall {
+ order, exists := porder[t]
+ if !exists {
+ porder[t] = ps // shortcut: user did not define this partition's oorder, keep old order
+ continue
+ }
+ for _, p := range ps {
+ pused[p] = struct{}{}
+ }
+ for i := 0; i < len(order); i++ {
+ p := order[i]
+ if _, exists := pused[p]; !exists {
+ order = append(order[:i], order[i+1:]...)
+ i--
+ }
+ delete(pused, p)
+ }
+ for _, p := range f.porder[t] {
+ if _, exists := pused[p]; exists {
+ order = append(order, p)
+ delete(pused, p)
+ }
+ }
+ porder[t] = order
+ }
+}
+
+func (*fetchRequest) Key() int16 { return 1 }
+func (f *fetchRequest) MaxVersion() int16 {
+ if f.disableIDs || f.session.disableIDs {
+ return 12
+ }
+ return 16
+}
+func (f *fetchRequest) SetVersion(v int16) { f.version = v }
+func (f *fetchRequest) GetVersion() int16 { return f.version }
+func (f *fetchRequest) IsFlexible() bool { return f.version >= 12 } // version 12+ is flexible
+func (f *fetchRequest) AppendTo(dst []byte) []byte {
+ req := kmsg.NewFetchRequest()
+ req.Version = f.version
+ req.ReplicaID = -1
+ req.MaxWaitMillis = f.maxWait
+ req.MinBytes = f.minBytes
+ req.MaxBytes = f.maxBytes
+ req.IsolationLevel = f.isolationLevel
+ req.SessionID = f.session.id
+ req.SessionEpoch = f.session.epoch
+ req.Rack = f.rack
+
+ // We track which partitions we add in this request; any partitions
+ // missing that are already in the session get added to forgotten
+ // topics at the end.
+ var sessionUsed map[string]map[int32]struct{}
+ if !f.session.killed {
+ sessionUsed = make(map[string]map[int32]struct{}, len(f.usedOffsets))
+ }
+
+ f.adjustPreferringLag()
+
+ for _, topic := range f.torder {
+ partitions := f.usedOffsets[topic]
+
+ var reqTopic *kmsg.FetchRequestTopic
+ sessionTopic := f.session.lookupTopic(topic, f.topic2id)
+
+ var usedTopic map[int32]struct{}
+ if sessionUsed != nil {
+ usedTopic = make(map[int32]struct{}, len(partitions))
+ }
+
+ for _, partition := range f.porder[topic] {
+ cursorOffsetNext := partitions[partition]
+
+ if usedTopic != nil {
+ usedTopic[partition] = struct{}{}
+ }
+
+ if !sessionTopic.hasPartitionAt(
+ partition,
+ cursorOffsetNext.offset,
+ cursorOffsetNext.currentLeaderEpoch,
+ ) {
+ if reqTopic == nil {
+ t := kmsg.NewFetchRequestTopic()
+ t.Topic = topic
+ t.TopicID = f.topic2id[topic]
+ req.Topics = append(req.Topics, t)
+ reqTopic = &req.Topics[len(req.Topics)-1]
+ }
+
+ reqPartition := kmsg.NewFetchRequestTopicPartition()
+ reqPartition.Partition = partition
+ reqPartition.CurrentLeaderEpoch = cursorOffsetNext.currentLeaderEpoch
+ reqPartition.FetchOffset = cursorOffsetNext.offset
+ reqPartition.LastFetchedEpoch = -1
+ reqPartition.LogStartOffset = -1
+ reqPartition.PartitionMaxBytes = f.maxPartBytes
+ reqTopic.Partitions = append(reqTopic.Partitions, reqPartition)
+ }
+ }
+
+ if sessionUsed != nil {
+ sessionUsed[topic] = usedTopic
+ }
+ }
+
+ // Now for everything that we did not use in our session, add it to
+ // forgotten topics and remove it from the session.
+ if sessionUsed != nil {
+ for topic, partitions := range f.session.used {
+ var forgottenTopic *kmsg.FetchRequestForgottenTopic
+ topicUsed := sessionUsed[topic]
+ for partition := range partitions {
+ if topicUsed != nil {
+ if _, partitionUsed := topicUsed[partition]; partitionUsed {
+ continue
+ }
+ }
+ if forgottenTopic == nil {
+ t := kmsg.NewFetchRequestForgottenTopic()
+ t.Topic = topic
+ t.TopicID = f.session.t2id[topic]
+ req.ForgottenTopics = append(req.ForgottenTopics, t)
+ forgottenTopic = &req.ForgottenTopics[len(req.ForgottenTopics)-1]
+ }
+ forgottenTopic.Partitions = append(forgottenTopic.Partitions, partition)
+ delete(partitions, partition)
+ }
+ if len(partitions) == 0 {
+ delete(f.session.used, topic)
+ id := f.session.t2id[topic]
+ delete(f.session.t2id, topic)
+ // If we deleted a topic that was missing an ID, then we clear the
+ // previous disableIDs state. We potentially *reenable* disableIDs
+ // if any remaining topics in our session are also missing their ID.
+ var noID [16]byte
+ if id == noID {
+ f.session.disableIDs = false
+ for _, id := range f.session.t2id {
+ if id == noID {
+ f.session.disableIDs = true
+ break
+ }
+ }
+ }
+ }
+ }
+ }
+
+ return req.AppendTo(dst)
+}
+
+func (*fetchRequest) ReadFrom([]byte) error {
+ panic("unreachable -- the client never uses ReadFrom on its internal fetchRequest")
+}
+
+func (f *fetchRequest) ResponseKind() kmsg.Response {
+ r := kmsg.NewPtrFetchResponse()
+ r.Version = f.version
+ return r
+}
+
+// fetchSessions, introduced in KIP-227, allow us to send less information back
+// and forth to a Kafka broker.
+type fetchSession struct {
+ id int32
+ epoch int32
+
+ used map[string]map[int32]fetchSessionOffsetEpoch // what we have in the session so far
+ t2id map[string][16]byte
+
+ disableIDs bool // if anything in t2id has no ID
+ killed bool // if we cannot use a session anymore
+}
+
+func (s *fetchSession) kill() {
+ s.epoch = -1
+ s.used = nil
+ s.t2id = nil
+ s.disableIDs = false
+ s.killed = true
+}
+
+// reset resets the session by setting the next request to use epoch 0.
+// We do not reset the ID; using epoch 0 for an existing ID unregisters the
+// prior session.
+func (s *fetchSession) reset() {
+ if s.killed {
+ return
+ }
+ s.epoch = 0
+ s.used = nil
+ s.t2id = nil
+ s.disableIDs = false
+}
+
+// bumpEpoch bumps the epoch and saves the session id.
+//
+// Kafka replies with the session ID of the session to use. When it does, we
+// start from epoch 1, wrapping back to 1 if we go negative.
+func (s *fetchSession) bumpEpoch(id int32) {
+ if s.killed {
+ return
+ }
+ if id != s.id {
+ s.epoch = 0 // new session: reset to 0 for the increment below
+ }
+ s.epoch++
+ if s.epoch < 0 {
+ s.epoch = 1 // we wrapped: reset back to 1 to continue this session
+ }
+ s.id = id
+}
+
+func (s *fetchSession) lookupTopic(topic string, t2id map[string][16]byte) fetchSessionTopic {
+ if s.killed {
+ return nil
+ }
+ if s.used == nil {
+ s.used = make(map[string]map[int32]fetchSessionOffsetEpoch)
+ s.t2id = make(map[string][16]byte)
+ }
+ t := s.used[topic]
+ if t == nil {
+ t = make(map[int32]fetchSessionOffsetEpoch)
+ s.used[topic] = t
+ id := t2id[topic]
+ s.t2id[topic] = id
+ if id == ([16]byte{}) {
+ s.disableIDs = true
+ }
+ }
+ return t
+}
+
+type fetchSessionOffsetEpoch struct {
+ offset int64
+ epoch int32
+}
+
+type fetchSessionTopic map[int32]fetchSessionOffsetEpoch
+
+func (s fetchSessionTopic) hasPartitionAt(partition int32, offset int64, epoch int32) bool {
+ if s == nil { // if we are nil, the session was killed
+ return false
+ }
+ at, exists := s[partition]
+ now := fetchSessionOffsetEpoch{offset, epoch}
+ s[partition] = now
+ return exists && at == now
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/strftime.go b/vendor/github.com/twmb/franz-go/pkg/kgo/strftime.go
new file mode 100644
index 0000000000000..6ff862fbf5091
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/strftime.go
@@ -0,0 +1,205 @@
+package kgo
+
+import (
+ "strconv"
+ "time"
+)
+
+// NOTE: this code is copied from github.com/twmb/go-strftime, with AppendFormat
+// being unexported.
+
+// appendFormat appends t to dst according to the input strftime format.
+//
+// this does not take into account locale; some high level differences:
+//
+// %E and %O are stripped, as well as a single subsequent alpha char
+// %x is DD/MM/YY
+// %c is time.ANSIC
+//
+// In normal strftime, %a, %A, %b, %B, %c, %p, %P, %r, %x, and %X are all
+// affected by locale. This package hardcodes the implementation to mirror
+// LC_TIME=C (minus %x). Every strftime(3) formatter is accounted for.
+func strftimeAppendFormat(dst []byte, format string, t time.Time) []byte {
+ for i := 0; i < len(format); i++ {
+ c := format[i]
+ if c != '%' || i == len(format)-1 {
+ dst = append(dst, c)
+ continue
+ }
+
+ i++
+ c = format[i]
+ switch c {
+ default:
+ dst = append(dst, '%', c)
+ case 'a': // abbrev day
+ dst = t.AppendFormat(dst, "Mon")
+ case 'A': // full day
+ dst = t.AppendFormat(dst, "Monday")
+ case 'b', 'h': // abbrev month, h is equivalent to b
+ dst = t.AppendFormat(dst, "Jan")
+ case 'B': // full month
+ dst = t.AppendFormat(dst, "January")
+ case 'c': // preferred date and time representation
+ dst = t.AppendFormat(dst, time.ANSIC)
+ case 'C': // century (year/100) as two digit num
+ dst = append0Pad(dst, t.Year()/100, 2)
+ case 'd': // day of month as two digit num
+ dst = append0Pad(dst, t.Day(), 2)
+ case 'D': // %m/%d/%y
+ dst = append0Pad(dst, int(t.Month()), 2)
+ dst = append(dst, '/')
+ dst = append0Pad(dst, t.Day(), 2)
+ dst = append(dst, '/')
+ dst = append0Pad(dst, t.Year()%100, 2)
+ case 'e': // day of month as num like %d, but leading 0 is space instead
+ dst = appendSpacePad(dst, t.Day())
+ case 'E', 'O': // modifier, ignored and skip next (if ascii)
+ if i+1 < len(format) {
+ next := format[i+1]
+ if 'a' <= next && next <= 'z' || 'A' <= next && next <= 'Z' {
+ i++
+ }
+ }
+ case 'F': // %Y-%m-%d (iso8601)
+ dst = strconv.AppendInt(dst, int64(t.Year()), 10)
+ dst = append(dst, '-')
+ dst = append0Pad(dst, int(t.Month()), 2)
+ dst = append(dst, '-')
+ dst = append0Pad(dst, t.Day(), 2)
+ case 'G': // iso8601 week-based year
+ year, _ := t.ISOWeek()
+ dst = append0Pad(dst, year, 4)
+ case 'g': // like %G, but two digit year (no century)
+ year, _ := t.ISOWeek()
+ dst = append0Pad(dst, year%100, 2)
+ case 'H': // hour as number on 24hr clock
+ dst = append0Pad(dst, t.Hour(), 2)
+ case 'I': // hour as number on 12hr clock
+ dst = append0Pad(dst, t.Hour()%12, 2)
+ case 'j': // day of year as decimal number
+ dst = append0Pad(dst, t.YearDay(), 3)
+ case 'k': // 24hr as number, space padded
+ dst = appendSpacePad(dst, t.Hour())
+ case 'l': // 12hr as number, space padded
+ dst = appendSpacePad(dst, t.Hour()%12)
+ case 'm': // month as number
+ dst = append0Pad(dst, int(t.Month()), 2)
+ case 'M': // minute as number
+ dst = append0Pad(dst, t.Minute(), 2)
+ case 'n': // newline
+ dst = append(dst, '\n')
+ case 'p': // AM or PM
+ dst = appendAMPM(dst, t.Hour())
+ case 'P': // like %p buf lowercase
+ dst = appendampm(dst, t.Hour())
+ case 'r': // %I:%M:%S %p
+ h := t.Hour()
+ dst = append0Pad(dst, h%12, 2)
+ dst = append(dst, ':')
+ dst = append0Pad(dst, t.Minute(), 2)
+ dst = append(dst, ':')
+ dst = append0Pad(dst, t.Second(), 2)
+ dst = append(dst, ' ')
+ dst = appendAMPM(dst, h)
+ case 'R': // %H:%M
+ dst = append0Pad(dst, t.Hour(), 2)
+ dst = append(dst, ':')
+ dst = append0Pad(dst, t.Minute(), 2)
+ case 's': // seconds since epoch
+ dst = strconv.AppendInt(dst, t.Unix(), 10)
+ case 'S': // second as number thru 60 for leap second
+ dst = append0Pad(dst, t.Second(), 2)
+ case 't': // tab
+ dst = append(dst, '\t')
+ case 'T': // %H:%M:%S
+ dst = append0Pad(dst, t.Hour(), 2)
+ dst = append(dst, ':')
+ dst = append0Pad(dst, t.Minute(), 2)
+ dst = append(dst, ':')
+ dst = append0Pad(dst, t.Second(), 2)
+ case 'u': // day of week as num; Monday is 1
+ day := byte(t.Weekday())
+ if day == 0 {
+ day = 7
+ }
+ dst = append(dst, '0'+day)
+ case 'U': // week number of year starting from first Sunday
+ dst = append0Pad(dst, (t.YearDay()-int(t.Weekday())+7)/7, 2)
+ case 'V': // iso8601 week number
+ _, week := t.ISOWeek()
+ dst = append0Pad(dst, week, 2)
+ case 'w': // day of week, 0 to 6, Sunday 0
+ dst = strconv.AppendInt(dst, int64(t.Weekday()), 10)
+ case 'W': // week number of year starting from first Monday
+ dst = append0Pad(dst, (t.YearDay()-(int(t.Weekday())+6)%7+7)/7, 2)
+ case 'x': // date representation for current locale; we go DD/MM/YY
+ dst = append0Pad(dst, t.Day(), 2)
+ dst = append(dst, '/')
+ dst = append0Pad(dst, int(t.Month()), 2)
+ dst = append(dst, '/')
+ dst = append0Pad(dst, t.Year()%100, 2)
+ case 'X': // time representation for current locale; we go HH:MM:SS
+ dst = append0Pad(dst, t.Hour(), 2)
+ dst = append(dst, ':')
+ dst = append0Pad(dst, t.Minute(), 2)
+ dst = append(dst, ':')
+ dst = append0Pad(dst, t.Second(), 2)
+ case 'y': // year as num without century
+ dst = append0Pad(dst, t.Year()%100, 2)
+ case 'Y': // year as a num
+ dst = append0Pad(dst, t.Year(), 4)
+ case 'z': // +hhmm or -hhmm offset from utc
+ dst = t.AppendFormat(dst, "-0700")
+ case 'Z': // timezone
+ dst = t.AppendFormat(dst, "MST")
+ case '+': // date and time in date(1) format
+ dst = t.AppendFormat(dst, "Mon Jan _2 15:04:05 MST 2006")
+ case '%':
+ dst = append(dst, '%')
+ }
+ }
+ return dst
+}
+
+// all space padded numbers are two length
+func appendSpacePad(p []byte, n int) []byte {
+ if n < 10 {
+ return append(p, ' ', '0'+byte(n))
+ }
+ return strconv.AppendInt(p, int64(n), 10)
+}
+
+func append0Pad(dst []byte, n, size int) []byte {
+ switch size {
+ case 4:
+ if n < 1000 {
+ dst = append(dst, '0')
+ }
+ fallthrough
+ case 3:
+ if n < 100 {
+ dst = append(dst, '0')
+ }
+ fallthrough
+ case 2:
+ if n < 10 {
+ dst = append(dst, '0')
+ }
+ }
+ return strconv.AppendInt(dst, int64(n), 10)
+}
+
+func appendampm(p []byte, h int) []byte {
+ if h < 12 {
+ return append(p, 'a', 'm')
+ }
+ return append(p, 'p', 'm')
+}
+
+func appendAMPM(p []byte, h int) []byte {
+ if h < 12 {
+ return append(p, 'A', 'M')
+ }
+ return append(p, 'P', 'M')
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/topics_and_partitions.go b/vendor/github.com/twmb/franz-go/pkg/kgo/topics_and_partitions.go
new file mode 100644
index 0000000000000..3c25284f31dfc
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/topics_and_partitions.go
@@ -0,0 +1,922 @@
+package kgo
+
+import (
+ "fmt"
+ "sort"
+ "strings"
+ "sync"
+ "sync/atomic"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+/////////////
+// HELPERS // -- ugly types to eliminate the toil of nil maps and lookups
+/////////////
+
+func dupmsi32(m map[string]int32) map[string]int32 {
+ d := make(map[string]int32, len(m))
+ for t, ps := range m {
+ d[t] = ps
+ }
+ return d
+}
+
+// "Atomic map of topic partitions", for lack of a better name at this point.
+type amtps struct {
+ v atomic.Value
+}
+
+func (a *amtps) read() map[string][]int32 {
+ v := a.v.Load()
+ if v == nil {
+ return nil
+ }
+ return v.(map[string][]int32)
+}
+
+func (a *amtps) write(fn func(map[string][]int32)) {
+ dup := a.clone()
+ fn(dup)
+ a.store(dup)
+}
+
+func (a *amtps) clone() map[string][]int32 {
+ orig := a.read()
+ dup := make(map[string][]int32, len(orig))
+ for t, ps := range orig {
+ dup[t] = append(dup[t], ps...)
+ }
+ return dup
+}
+
+func (a *amtps) store(m map[string][]int32) { a.v.Store(m) }
+
+type mtps map[string][]int32
+
+func (m mtps) String() string {
+ var sb strings.Builder
+ var topicsWritten int
+ ts := make([]string, 0, len(m))
+ var ps []int32
+ for t := range m {
+ ts = append(ts, t)
+ }
+ sort.Strings(ts)
+ for _, t := range ts {
+ ps = append(ps[:0], m[t]...)
+ sort.Slice(ps, func(i, j int) bool { return ps[i] < ps[j] })
+ topicsWritten++
+ fmt.Fprintf(&sb, "%s%v", t, ps)
+ if topicsWritten < len(m) {
+ sb.WriteString(", ")
+ }
+ }
+ return sb.String()
+}
+
+type mtmps map[string]map[int32]struct{} // map of topics to map of partitions
+
+func (m *mtmps) add(t string, p int32) {
+ if *m == nil {
+ *m = make(mtmps)
+ }
+ mps := (*m)[t]
+ if mps == nil {
+ mps = make(map[int32]struct{})
+ (*m)[t] = mps
+ }
+ mps[p] = struct{}{}
+}
+
+func (m *mtmps) addt(t string) {
+ if *m == nil {
+ *m = make(mtmps)
+ }
+ mps := (*m)[t]
+ if mps == nil {
+ mps = make(map[int32]struct{})
+ (*m)[t] = mps
+ }
+}
+
+func (m mtmps) onlyt(t string) bool {
+ if m == nil {
+ return false
+ }
+ ps, exists := m[t]
+ return exists && len(ps) == 0
+}
+
+func (m mtmps) remove(t string, p int32) {
+ if m == nil {
+ return
+ }
+ mps, exists := m[t]
+ if !exists {
+ return
+ }
+ delete(mps, p)
+ if len(mps) == 0 {
+ delete(m, t)
+ }
+}
+
+////////////
+// PAUSED // -- types for pausing topics and partitions
+////////////
+
+type pausedTopics map[string]pausedPartitions
+
+type pausedPartitions struct {
+ all bool
+ m map[int32]struct{}
+}
+
+func (m pausedTopics) t(topic string) (pausedPartitions, bool) {
+ if len(m) == 0 { // potentially nil
+ return pausedPartitions{}, false
+ }
+ pps, exists := m[topic]
+ return pps, exists
+}
+
+func (m pausedTopics) has(topic string, partition int32) (paused bool) {
+ if len(m) == 0 {
+ return false
+ }
+ pps, exists := m[topic]
+ if !exists {
+ return false
+ }
+ if pps.all {
+ return true
+ }
+ _, exists = pps.m[partition]
+ return exists
+}
+
+func (m pausedTopics) addTopics(topics ...string) {
+ for _, topic := range topics {
+ pps, exists := m[topic]
+ if !exists {
+ pps = pausedPartitions{m: make(map[int32]struct{})}
+ }
+ pps.all = true
+ m[topic] = pps
+ }
+}
+
+func (m pausedTopics) delTopics(topics ...string) {
+ for _, topic := range topics {
+ pps, exists := m[topic]
+ if !exists {
+ continue
+ }
+ pps.all = false
+ if !pps.all && len(pps.m) == 0 {
+ delete(m, topic)
+ }
+ }
+}
+
+func (m pausedTopics) addPartitions(topicPartitions map[string][]int32) {
+ for topic, partitions := range topicPartitions {
+ pps, exists := m[topic]
+ if !exists {
+ pps = pausedPartitions{m: make(map[int32]struct{})}
+ }
+ for _, partition := range partitions {
+ pps.m[partition] = struct{}{}
+ }
+ m[topic] = pps
+ }
+}
+
+func (m pausedTopics) delPartitions(topicPartitions map[string][]int32) {
+ for topic, partitions := range topicPartitions {
+ pps, exists := m[topic]
+ if !exists {
+ continue
+ }
+ for _, partition := range partitions {
+ delete(pps.m, partition)
+ }
+ if !pps.all && len(pps.m) == 0 {
+ delete(m, topic)
+ }
+ }
+}
+
+func (m pausedTopics) pausedTopics() []string {
+ var r []string
+ for topic, pps := range m {
+ if pps.all {
+ r = append(r, topic)
+ }
+ }
+ return r
+}
+
+func (m pausedTopics) pausedPartitions() map[string][]int32 {
+ r := make(map[string][]int32)
+ for topic, pps := range m {
+ ps := make([]int32, 0, len(pps.m))
+ for partition := range pps.m {
+ ps = append(ps, partition)
+ }
+ r[topic] = ps
+ }
+ return r
+}
+
+func (m pausedTopics) clone() pausedTopics {
+ dup := make(pausedTopics)
+ dup.addTopics(m.pausedTopics()...)
+ dup.addPartitions(m.pausedPartitions())
+ return dup
+}
+
+//////////
+// GUTS // -- the key types for storing important metadata for topics & partitions
+//////////
+
+func newTopicPartitions() *topicPartitions {
+ parts := new(topicPartitions)
+ parts.v.Store(new(topicPartitionsData))
+ return parts
+}
+
+// Contains all information about a topic's partitions.
+type topicPartitions struct {
+ v atomic.Value // *topicPartitionsData
+
+ partsMu sync.Mutex
+ partitioner TopicPartitioner
+ lb *leastBackupInput // for partitioning if the partitioner is a LoadTopicPartitioner
+}
+
+func (t *topicPartitions) load() *topicPartitionsData { return t.v.Load().(*topicPartitionsData) }
+
+func newTopicsPartitions() *topicsPartitions {
+ var t topicsPartitions
+ t.v.Store(make(topicsPartitionsData))
+ return &t
+}
+
+// A helper type mapping topics to their partitions;
+// this is the inner value of topicPartitions.v.
+type topicsPartitionsData map[string]*topicPartitions
+
+func (d topicsPartitionsData) hasTopic(t string) bool { _, exists := d[t]; return exists }
+func (d topicsPartitionsData) loadTopic(t string) *topicPartitionsData {
+ tp, exists := d[t]
+ if !exists {
+ return nil
+ }
+ return tp.load()
+}
+
+// A helper type mapping topics to their partitions that can be updated
+// atomically.
+type topicsPartitions struct {
+ v atomic.Value // topicsPartitionsData (map[string]*topicPartitions)
+}
+
+func (t *topicsPartitions) load() topicsPartitionsData {
+ if t == nil {
+ return nil
+ }
+ return t.v.Load().(topicsPartitionsData)
+}
+func (t *topicsPartitions) storeData(d topicsPartitionsData) { t.v.Store(d) }
+func (t *topicsPartitions) storeTopics(topics []string) { t.v.Store(t.ensureTopics(topics)) }
+func (t *topicsPartitions) clone() topicsPartitionsData {
+ current := t.load()
+ clone := make(map[string]*topicPartitions, len(current))
+ for k, v := range current {
+ clone[k] = v
+ }
+ return clone
+}
+
+// Ensures that the topics exist in the returned map, but does not store the
+// update. This can be used to update the data and store later, rather than
+// storing immediately.
+func (t *topicsPartitions) ensureTopics(topics []string) topicsPartitionsData {
+ var cloned bool
+ current := t.load()
+ for _, topic := range topics {
+ if _, exists := current[topic]; !exists {
+ if !cloned {
+ current = t.clone()
+ cloned = true
+ }
+ current[topic] = newTopicPartitions()
+ }
+ }
+ return current
+}
+
+// Opposite of ensureTopics, this purges the input topics and *does* store.
+func (t *topicsPartitions) purgeTopics(topics []string) {
+ var cloned bool
+ current := t.load()
+ for _, topic := range topics {
+ if _, exists := current[topic]; exists {
+ if !cloned {
+ current = t.clone()
+ cloned = true
+ }
+ delete(current, topic)
+ }
+ }
+ if cloned {
+ t.storeData(current)
+ }
+}
+
+// Updates the topic partitions data atomic value.
+//
+// If this is the first time seeing partitions, we do processing of unknown
+// partitions that may be buffered for producing.
+func (cl *Client) storePartitionsUpdate(topic string, l *topicPartitions, lv *topicPartitionsData, hadPartitions bool) {
+ // If the topic already had partitions, then there would be no
+ // unknown topic waiting and we do not need to notify anything.
+ if hadPartitions {
+ l.v.Store(lv)
+ return
+ }
+
+ p := &cl.producer
+
+ p.unknownTopicsMu.Lock()
+ defer p.unknownTopicsMu.Unlock()
+
+ // If the topic did not have partitions, then we need to store the
+ // partition update BEFORE unlocking the mutex to guard against this
+ // sequence of events:
+ //
+ // - unlock waiters
+ // - delete waiter
+ // - new produce recreates waiter
+ // - we store update
+ // - we never notify the recreated waiter
+ //
+ // By storing before releasing the locks, we ensure that later
+ // partition loads for this topic under the mu will see our update.
+ defer l.v.Store(lv)
+
+ // If there are no unknown topics or this topic is not unknown, then we
+ // have nothing to do.
+ if len(p.unknownTopics) == 0 {
+ return
+ }
+ unknown, exists := p.unknownTopics[topic]
+ if !exists {
+ return
+ }
+
+ // If we loaded no partitions because of a retryable error, we signal
+ // the waiting goroutine that a try happened. It is possible the
+ // goroutine is quitting and will not be draining unknownWait, so we do
+ // not require the send.
+ if len(lv.partitions) == 0 && kerr.IsRetriable(lv.loadErr) {
+ select {
+ case unknown.wait <- lv.loadErr:
+ default:
+ }
+ return
+ }
+
+ // Either we have a fatal error or we can successfully partition.
+ //
+ // Even with a fatal error, if we loaded any partitions, we partition.
+ // If we only had a fatal error, we can finish promises in a goroutine.
+ // If we are partitioning, we have to do it under the unknownMu to
+ // ensure prior buffered records are produced in order before we
+ // release the mu.
+ delete(p.unknownTopics, topic)
+ close(unknown.wait) // allow waiting goroutine to quit
+
+ if len(lv.partitions) == 0 {
+ cl.producer.promiseBatch(batchPromise{
+ recs: unknown.buffered,
+ err: lv.loadErr,
+ })
+ } else {
+ for _, pr := range unknown.buffered {
+ cl.doPartitionRecord(l, lv, pr)
+ }
+ }
+}
+
+// If a metadata request fails after retrying (internally retrying, so only a
+// few times), or the metadata request does not return topics that we requested
+// (which may also happen additionally consuming via regex), then we need to
+// bump errors for topics that were previously loaded, and bump errors for
+// topics awaiting load.
+//
+// This has two modes of operation:
+//
+// 1. if no topics were missing, then the metadata request failed outright,
+// and we need to bump errors on all stored topics and unknown topics.
+//
+// 2. if topics were missing, then the metadata request was successful but
+// had missing data, and we need to bump errors on only what was mising.
+func (cl *Client) bumpMetadataFailForTopics(requested map[string]*topicPartitions, err error, missingTopics ...string) {
+ p := &cl.producer
+
+ // mode 1
+ if len(missingTopics) == 0 {
+ for _, topic := range requested {
+ for _, topicPartition := range topic.load().partitions {
+ topicPartition.records.bumpRepeatedLoadErr(err)
+ }
+ }
+ }
+
+ // mode 2
+ var missing map[string]bool
+ for _, failTopic := range missingTopics {
+ if missing == nil {
+ missing = make(map[string]bool, len(missingTopics))
+ }
+ missing[failTopic] = true
+
+ if topic, exists := requested[failTopic]; exists {
+ for _, topicPartition := range topic.load().partitions {
+ topicPartition.records.bumpRepeatedLoadErr(err)
+ }
+ }
+ }
+
+ p.unknownTopicsMu.Lock()
+ defer p.unknownTopicsMu.Unlock()
+
+ for topic, unknown := range p.unknownTopics {
+ // if nil, mode 1 (req err), else mode 2 (missing resp)
+ if missing != nil && !missing[topic] {
+ continue
+ }
+
+ select {
+ case unknown.wait <- err:
+ default:
+ }
+ }
+}
+
+// topicPartitionsData is the data behind a topicPartitions' v.
+//
+// We keep this in an atomic because it is expected to be extremely read heavy,
+// and if it were behind a lock, the lock would need to be held for a while.
+type topicPartitionsData struct {
+ // NOTE if adding anything to this struct, be sure to fix meta merge.
+ loadErr error // could be auth, unknown, leader not avail, or creation err
+ isInternal bool
+ partitions []*topicPartition // partition num => partition
+ writablePartitions []*topicPartition // subset of above
+ topic string
+ when int64
+}
+
+// topicPartition contains all information from Kafka for a topic's partition,
+// as well as what a client is producing to it or info about consuming from it.
+type topicPartition struct {
+ // If we have a load error (leader/listener/replica not available), we
+ // keep the old topicPartition data and the new error.
+ loadErr error
+
+ // If, on metadata refresh, the leader epoch for this partition goes
+ // backwards, we ignore the metadata refresh and signal the metadata
+ // should be reloaded: the broker we requested is stale. However, the
+ // broker could get into a bad state through some weird cluster failure
+ // scenarios. If we see the epoch rewind repeatedly, we eventually keep
+ // the metadata refresh. This is not detrimental and at worst will lead
+ // to the broker telling us to update our metadata.
+ epochRewinds uint8
+
+ // If we do not have a load error, we determine if the new
+ // topicPartition is the same or different from the old based on
+ // whether the data changed (leader or leader epoch, etc.).
+ topicPartitionData
+
+ // If we do not have a load error, we copy the records and cursor
+ // pointers from the old after updating any necessary fields in them
+ // (see migrate functions below).
+ //
+ // Only one of records or cursor is non-nil.
+ records *recBuf
+ cursor *cursor
+}
+
+func (tp *topicPartition) partition() int32 {
+ if tp.records != nil {
+ return tp.records.partition
+ }
+ return tp.cursor.partition
+}
+
+// Contains stuff that changes on metadata update that we copy into a cursor or
+// recBuf.
+type topicPartitionData struct {
+ // Our leader; if metadata sees this change, the metadata update
+ // migrates the cursor to a different source with the session stopped,
+ // and the recBuf to a different sink under a tight mutex.
+ leader int32
+
+ // What we believe to be the epoch of the leader for this partition.
+ //
+ // For cursors, for KIP-320, if a broker receives a fetch request where
+ // the current leader epoch does not match the brokers, either the
+ // broker is behind and returns UnknownLeaderEpoch, or we are behind
+ // and the broker returns FencedLeaderEpoch. For the former, we back
+ // off and retry. For the latter, we update our metadata.
+ leaderEpoch int32
+}
+
+// migrateProductionTo is called on metadata update if a topic partition's sink
+// has changed. This moves record production from one sink to the other; this
+// must be done such that records produced during migration follow those
+// already buffered.
+func (old *topicPartition) migrateProductionTo(new *topicPartition) { //nolint:revive // old/new naming makes this clearer
+ // First, remove our record buffer from the old sink.
+ old.records.sink.removeRecBuf(old.records)
+
+ // Before this next lock, record producing will buffer to the
+ // in-migration-progress records and may trigger draining to
+ // the old sink. That is fine, the old sink no longer consumes
+ // from these records. We just have wasted drain triggers.
+
+ old.records.mu.Lock() // guard setting sink and topic partition data
+ old.records.sink = new.records.sink
+ old.records.topicPartitionData = new.topicPartitionData
+ old.records.mu.Unlock()
+
+ // After the unlock above, record buffering can trigger drains
+ // on the new sink, which is not yet consuming from these
+ // records. Again, just more wasted drain triggers.
+
+ old.records.sink.addRecBuf(old.records) // add our record source to the new sink
+
+ // At this point, the new sink will be draining our records. We lastly
+ // need to copy the records pointer to our new topicPartition.
+ new.records = old.records
+}
+
+// migrateCursorTo is called on metadata update if a topic partition's leader
+// or leader epoch has changed.
+//
+// This is a little bit different from above, in that we do this logic only
+// after stopping a consumer session. With the consumer session stopped, we
+// have fewer concurrency issues to worry about.
+func (old *topicPartition) migrateCursorTo( //nolint:revive // old/new naming makes this clearer
+ new *topicPartition,
+ css *consumerSessionStopper,
+) {
+ css.stop()
+
+ old.cursor.source.removeCursor(old.cursor)
+
+ // With the session stopped, we can update fields on the old cursor
+ // with no concurrency issue.
+ old.cursor.source = new.cursor.source
+
+ // KIP-320: if we had consumed some messages, we need to validate the
+ // leader epoch on the new broker to see if we experienced data loss
+ // before we can use this cursor.
+ //
+ // Metadata ensures that leaderEpoch is non-negative only if the broker
+ // supports KIP-320.
+ if new.leaderEpoch != -1 && old.cursor.lastConsumedEpoch >= 0 {
+ // Since the cursor consumed messages, it is definitely usable.
+ // We use it so that the epoch load can finish using it
+ // properly.
+ old.cursor.use()
+ css.reloadOffsets.addLoad(old.cursor.topic, old.cursor.partition, loadTypeEpoch, offsetLoad{
+ replica: -1,
+ Offset: Offset{
+ at: old.cursor.offset,
+ epoch: old.cursor.lastConsumedEpoch,
+ },
+ })
+ }
+
+ old.cursor.topicPartitionData = new.topicPartitionData
+
+ old.cursor.source.addCursor(old.cursor)
+ new.cursor = old.cursor
+}
+
+type kip951move struct {
+ recBufs map[*recBuf]topicPartitionData
+ cursors map[*cursor]topicPartitionData
+ brokers []BrokerMetadata
+}
+
+func (k *kip951move) empty() bool {
+ return len(k.brokers) == 0
+}
+
+func (k *kip951move) hasRecBuf(rb *recBuf) bool {
+ if k == nil || k.recBufs == nil {
+ return false
+ }
+ _, ok := k.recBufs[rb]
+ return ok
+}
+
+func (k *kip951move) maybeAddProducePartition(resp *kmsg.ProduceResponse, p *kmsg.ProduceResponseTopicPartition, rb *recBuf) bool {
+ if resp.GetVersion() < 10 ||
+ p.ErrorCode != kerr.NotLeaderForPartition.Code ||
+ len(resp.Brokers) == 0 ||
+ p.CurrentLeader.LeaderID < 0 ||
+ p.CurrentLeader.LeaderEpoch < 0 {
+ return false
+ }
+ if len(k.brokers) == 0 {
+ for _, rb := range resp.Brokers {
+ b := BrokerMetadata{
+ NodeID: rb.NodeID,
+ Host: rb.Host,
+ Port: rb.Port,
+ Rack: rb.Rack,
+ }
+ k.brokers = append(k.brokers, b)
+ }
+ }
+ if k.recBufs == nil {
+ k.recBufs = make(map[*recBuf]topicPartitionData)
+ }
+ k.recBufs[rb] = topicPartitionData{
+ leader: p.CurrentLeader.LeaderID,
+ leaderEpoch: p.CurrentLeader.LeaderEpoch,
+ }
+ return true
+}
+
+func (k *kip951move) maybeAddFetchPartition(resp *kmsg.FetchResponse, p *kmsg.FetchResponseTopicPartition, c *cursor) bool {
+ if resp.GetVersion() < 16 ||
+ p.ErrorCode != kerr.NotLeaderForPartition.Code ||
+ len(resp.Brokers) == 0 ||
+ p.CurrentLeader.LeaderID < 0 ||
+ p.CurrentLeader.LeaderEpoch < 0 {
+ return false
+ }
+
+ if len(k.brokers) == 0 {
+ for _, rb := range resp.Brokers {
+ b := BrokerMetadata{
+ NodeID: rb.NodeID,
+ Host: rb.Host,
+ Port: rb.Port,
+ Rack: rb.Rack,
+ }
+ k.brokers = append(k.brokers, b)
+ }
+ }
+ if k.cursors == nil {
+ k.cursors = make(map[*cursor]topicPartitionData)
+ }
+ k.cursors[c] = topicPartitionData{
+ leader: p.CurrentLeader.LeaderID,
+ leaderEpoch: p.CurrentLeader.LeaderEpoch,
+ }
+ return true
+}
+
+func (k *kip951move) ensureSinksAndSources(cl *Client) {
+ cl.sinksAndSourcesMu.Lock()
+ defer cl.sinksAndSourcesMu.Unlock()
+
+ ensure := func(leader int32) {
+ if _, exists := cl.sinksAndSources[leader]; exists {
+ return
+ }
+ cl.sinksAndSources[leader] = sinkAndSource{
+ sink: cl.newSink(leader),
+ source: cl.newSource(leader),
+ }
+ }
+
+ for _, td := range k.recBufs {
+ ensure(td.leader)
+ }
+ for _, td := range k.cursors {
+ ensure(td.leader)
+ }
+}
+
+func (k *kip951move) ensureBrokers(cl *Client) {
+ if len(k.brokers) == 0 {
+ return
+ }
+
+ kbs := make([]kmsg.MetadataResponseBroker, 0, len(k.brokers))
+ for _, b := range k.brokers {
+ kbs = append(kbs, kmsg.MetadataResponseBroker{
+ NodeID: b.NodeID,
+ Host: b.Host,
+ Port: b.Port,
+ Rack: b.Rack,
+ })
+ }
+ cl.updateBrokers(kbs)
+}
+
+func (k *kip951move) maybeBeginMove(cl *Client) {
+ if k.empty() {
+ return
+ }
+ // We want to do the move independent of whatever is calling us, BUT we
+ // want to ensure it is not concurrent with a metadata request.
+ go cl.blockingMetadataFn(func() {
+ k.ensureBrokers(cl)
+ k.ensureSinksAndSources(cl)
+ k.doMove(cl)
+ })
+}
+
+func (k *kip951move) doMove(cl *Client) {
+ // Moving partitions is theoretically simple, but the client is written
+ // in a confusing way around concurrency.
+ //
+ // The problem is that topicPartitionsData is read-only after
+ // initialization. Updates are done via atomic stores of the containing
+ // topicPartitionsData struct. Moving a single partition requires some
+ // deep copying.
+
+ // oldNew pairs what NEEDS to be atomically updated (old; left value)
+ // with the value that WILL be stored (new; right value).
+ type oldNew struct {
+ l *topicPartitions
+ r *topicPartitionsData
+ }
+ topics := make(map[string]oldNew)
+
+ // getT returns the oldNew for the topic, performing a shallow clone of
+ // the old whole-topic struct.
+ getT := func(m topicsPartitionsData, topic string) (oldNew, bool) {
+ lr, ok := topics[topic]
+ if !ok {
+ l := m[topic]
+ if l == nil {
+ return oldNew{}, false
+ }
+ dup := *l.load()
+ r := &dup
+ r.writablePartitions = append([]*topicPartition{}, r.writablePartitions...)
+ r.partitions = append([]*topicPartition{}, r.partitions...)
+ lr = oldNew{l, r}
+ topics[topic] = lr
+ }
+ return lr, true
+ }
+
+ // modifyP returns the old topicPartition and a new one that will be
+ // used in migrateTo. The new topicPartition only contains the sink
+ // and topicPartitionData that will be copied into old under old's
+ // mutex. The actual migration is done in the migrate function (see
+ // below).
+ //
+ // A migration is not needed if the old value has a higher leader
+ // epoch. If the leader epoch is equal, we check if the leader is the
+ // same (this allows easier injection of failures in local testing). A
+ // higher epoch can come from a concurrent metadata update that
+ // actually performed the move first.
+ modifyP := func(d *topicPartitionsData, partition int32, td topicPartitionData) (old, new *topicPartition, modified bool) {
+ old = d.partitions[partition]
+ if old.leaderEpoch > td.leaderEpoch {
+ return nil, nil, false
+ }
+ if old.leaderEpoch == td.leaderEpoch && old.leader == td.leader {
+ return nil, nil, false
+ }
+
+ cl.sinksAndSourcesMu.Lock()
+ sns := cl.sinksAndSources[td.leader]
+ cl.sinksAndSourcesMu.Unlock()
+
+ dup := *old
+ new = &dup
+ new.topicPartitionData = topicPartitionData{
+ leader: td.leader,
+ leaderEpoch: td.leaderEpoch,
+ }
+ if new.records != nil {
+ new.records = &recBuf{
+ sink: sns.sink,
+ topicPartitionData: new.topicPartitionData,
+ }
+ } else {
+ new.cursor = &cursor{
+ source: sns.source,
+ topicPartitionData: new.topicPartitionData,
+ }
+ }
+
+ // We now have to mirror the new partition back to the topic
+ // slice that will be atomically stored.
+ d.partitions[partition] = new
+ idxWritable := sort.Search(len(d.writablePartitions), func(i int) bool { return d.writablePartitions[i].partition() >= partition })
+ if idxWritable < len(d.writablePartitions) && d.writablePartitions[idxWritable].partition() == partition {
+ if d.writablePartitions[idxWritable] != old {
+ panic("invalid invariant -- partition in writablePartitions != partition at expected index in partitions")
+ }
+ d.writablePartitions[idxWritable] = new
+ }
+
+ return old, new, true
+ }
+
+ if k.recBufs != nil {
+ tpsProducer := cl.producer.topics.load() // must be non-nil, since we have recBufs to move
+ for recBuf, td := range k.recBufs {
+ lr, ok := getT(tpsProducer, recBuf.topic)
+ if !ok {
+ continue // perhaps concurrently purged
+ }
+ old, new, modified := modifyP(lr.r, recBuf.partition, td)
+ if modified {
+ cl.cfg.logger.Log(LogLevelInfo, "moving producing partition due to kip-951 not_leader_for_partition",
+ "topic", recBuf.topic,
+ "partition", recBuf.partition,
+ "new_leader", new.leader,
+ "new_leader_epoch", new.leaderEpoch,
+ "old_leader", old.leader,
+ "old_leader_epoch", old.leaderEpoch,
+ )
+ old.migrateProductionTo(new)
+ } else {
+ recBuf.clearFailing()
+ }
+ }
+ } else {
+ var tpsConsumer topicsPartitionsData
+ c := &cl.consumer
+ switch {
+ case c.g != nil:
+ tpsConsumer = c.g.tps.load()
+ case c.d != nil:
+ tpsConsumer = c.d.tps.load()
+ }
+ css := &consumerSessionStopper{cl: cl}
+ defer css.maybeRestart()
+ for cursor, td := range k.cursors {
+ lr, ok := getT(tpsConsumer, cursor.topic)
+ if !ok {
+ continue // perhaps concurrently purged
+ }
+ old, new, modified := modifyP(lr.r, cursor.partition, td)
+ if modified {
+ cl.cfg.logger.Log(LogLevelInfo, "moving consuming partition due to kip-951 not_leader_for_partition",
+ "topic", cursor.topic,
+ "partition", cursor.partition,
+ "new_leader", new.leader,
+ "new_leader_epoch", new.leaderEpoch,
+ "old_leader", old.leader,
+ "old_leader_epoch", old.leaderEpoch,
+ )
+ old.migrateCursorTo(new, css)
+ }
+ }
+ }
+
+ // We can always do a simple store. For producing, we *must* have
+ // had partitions, so this is not updating an unknown topic.
+ for _, lr := range topics {
+ lr.l.v.Store(lr.r)
+ }
+}
+
+// Migrating a cursor requires stopping any consumer session. If we
+// stop a session, we need to eventually re-start any offset listing or
+// epoch loading that was stopped. Thus, we simply merge what we
+// stopped into what we will reload.
+type consumerSessionStopper struct {
+ cl *Client
+ stopped bool
+ reloadOffsets listOrEpochLoads
+ tpsPrior *topicsPartitions
+}
+
+func (css *consumerSessionStopper) stop() {
+ if css.stopped {
+ return
+ }
+ css.stopped = true
+ loads, tps := css.cl.consumer.stopSession()
+ css.reloadOffsets.mergeFrom(loads)
+ css.tpsPrior = tps
+}
+
+func (css *consumerSessionStopper) maybeRestart() {
+ if !css.stopped {
+ return
+ }
+ session := css.cl.consumer.startNewSession(css.tpsPrior)
+ defer session.decWorker()
+ css.reloadOffsets.loadWithSession(session, "resuming reload offsets after session stopped for cursor migrating in metadata")
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/txn.go b/vendor/github.com/twmb/franz-go/pkg/kgo/txn.go
new file mode 100644
index 0000000000000..25cfd44356f62
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/txn.go
@@ -0,0 +1,1257 @@
+package kgo
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kmsg"
+
+ "github.com/twmb/franz-go/pkg/kerr"
+)
+
+func ctx2fn(ctx context.Context) func() context.Context { return func() context.Context { return ctx } }
+
+// TransactionEndTry is simply a named bool.
+type TransactionEndTry bool
+
+const (
+ // TryAbort attempts to end a transaction with an abort.
+ TryAbort TransactionEndTry = false
+
+ // TryCommit attempts to end a transaction with a commit.
+ TryCommit TransactionEndTry = true
+)
+
+// GroupTransactSession abstracts away the proper way to begin and end a
+// transaction when consuming in a group, modifying records, and producing
+// (EOS).
+//
+// If you are running Kafka 2.5+, it is strongly recommended that you also use
+// RequireStableFetchOffsets. See that config option's documentation for more
+// details.
+type GroupTransactSession struct {
+ cl *Client
+
+ failMu sync.Mutex
+
+ revoked bool
+ revokedCh chan struct{} // closed once when revoked is set; reset after End
+ lost bool
+ lostCh chan struct{} // closed once when lost is set; reset after End
+}
+
+// NewGroupTransactSession is exactly the same as NewClient, but wraps the
+// client's OnPartitionsRevoked / OnPartitionsLost to ensure that transactions
+// are correctly aborted whenever necessary so as to properly provide EOS.
+//
+// When ETLing in a group in a transaction, if a rebalance happens before the
+// transaction is ended, you either (a) must block the rebalance from finishing
+// until you are done producing, and then commit before unblocking, or (b)
+// allow the rebalance to happen, but abort any work you did.
+//
+// The problem with (a) is that if your ETL work loop is slow, you run the risk
+// of exceeding the rebalance timeout and being kicked from the group. You will
+// try to commit, and depending on the Kafka version, the commit may even be
+// erroneously successful (pre Kafka 2.5). This will lead to duplicates.
+//
+// Instead, for safety, a GroupTransactSession favors (b). If a rebalance
+// occurs at any time before ending a transaction with a commit, this will
+// abort the transaction.
+//
+// This leaves the risk that ending the transaction itself exceeds the
+// rebalance timeout, but this is just one request with no cpu logic. With a
+// proper rebalance timeout, this single request will not fail and the commit
+// will succeed properly.
+//
+// If this client detects you are talking to a pre-2.5 cluster, OR if you have
+// not enabled RequireStableFetchOffsets, the client will sleep for 200ms after
+// a successful commit to allow Kafka's txn markers to propagate. This is not
+// foolproof in the event of some extremely unlikely communication patterns and
+// **potentially** could allow duplicates. See this repo's transaction's doc
+// for more details.
+func NewGroupTransactSession(opts ...Opt) (*GroupTransactSession, error) {
+ s := &GroupTransactSession{
+ revokedCh: make(chan struct{}),
+ lostCh: make(chan struct{}),
+ }
+
+ var noGroup error
+
+ // We append one option, which will get applied last. Because it is
+ // applied last, we can execute some logic and override some existing
+ // options.
+ opts = append(opts, groupOpt{func(cfg *cfg) {
+ if cfg.group == "" {
+ cfg.seedBrokers = nil // force a validation error
+ noGroup = errors.New("missing required group")
+ return
+ }
+
+ userRevoked := cfg.onRevoked
+ cfg.onRevoked = func(ctx context.Context, cl *Client, rev map[string][]int32) {
+ s.failMu.Lock()
+ defer s.failMu.Unlock()
+ if s.revoked {
+ return
+ }
+
+ if cl.consumer.g.cooperative.Load() && len(rev) == 0 && !s.revoked {
+ cl.cfg.logger.Log(LogLevelInfo, "transact session in on_revoke with nothing to revoke; allowing next commit")
+ } else {
+ cl.cfg.logger.Log(LogLevelInfo, "transact session in on_revoke; aborting next commit if we are currently in a transaction")
+ s.revoked = true
+ close(s.revokedCh)
+ }
+
+ if userRevoked != nil {
+ userRevoked(ctx, cl, rev)
+ }
+ }
+
+ userLost := cfg.onLost
+ cfg.onLost = func(ctx context.Context, cl *Client, lost map[string][]int32) {
+ s.failMu.Lock()
+ defer s.failMu.Unlock()
+ if s.lost {
+ return
+ }
+
+ cl.cfg.logger.Log(LogLevelInfo, "transact session in on_lost; aborting next commit if we are currently in a transaction")
+ s.lost = true
+ close(s.lostCh)
+
+ if userLost != nil {
+ userLost(ctx, cl, lost)
+ } else if userRevoked != nil {
+ userRevoked(ctx, cl, lost)
+ }
+ }
+ }})
+
+ cl, err := NewClient(opts...)
+ if err != nil {
+ if noGroup != nil {
+ err = noGroup
+ }
+ return nil, err
+ }
+ s.cl = cl
+ return s, nil
+}
+
+// Client returns the underlying client that this transact session wraps. This
+// can be useful for functions that require a client, such as raw requests. The
+// returned client should not be used to manage transactions (leave that to the
+// GroupTransactSession).
+func (s *GroupTransactSession) Client() *Client {
+ return s.cl
+}
+
+// Close is a wrapper around Client.Close, with the exact same semantics.
+// Refer to that function's documentation.
+//
+// This function must be called to leave the group before shutting down.
+func (s *GroupTransactSession) Close() {
+ s.cl.Close()
+}
+
+// PollFetches is a wrapper around Client.PollFetches, with the exact same
+// semantics. Refer to that function's documentation.
+//
+// It is invalid to call PollFetches concurrently with Begin or End.
+func (s *GroupTransactSession) PollFetches(ctx context.Context) Fetches {
+ return s.cl.PollFetches(ctx)
+}
+
+// PollRecords is a wrapper around Client.PollRecords, with the exact same
+// semantics. Refer to that function's documentation.
+//
+// It is invalid to call PollRecords concurrently with Begin or End.
+func (s *GroupTransactSession) PollRecords(ctx context.Context, maxPollRecords int) Fetches {
+ return s.cl.PollRecords(ctx, maxPollRecords)
+}
+
+// ProduceSync is a wrapper around Client.ProduceSync, with the exact same
+// semantics. Refer to that function's documentation.
+//
+// It is invalid to call ProduceSync concurrently with Begin or End.
+func (s *GroupTransactSession) ProduceSync(ctx context.Context, rs ...*Record) ProduceResults {
+ return s.cl.ProduceSync(ctx, rs...)
+}
+
+// Produce is a wrapper around Client.Produce, with the exact same semantics.
+// Refer to that function's documentation.
+//
+// It is invalid to call Produce concurrently with Begin or End.
+func (s *GroupTransactSession) Produce(ctx context.Context, r *Record, promise func(*Record, error)) {
+ s.cl.Produce(ctx, r, promise)
+}
+
+// TryProduce is a wrapper around Client.TryProduce, with the exact same
+// semantics. Refer to that function's documentation.
+//
+// It is invalid to call TryProduce concurrently with Begin or End.
+func (s *GroupTransactSession) TryProduce(ctx context.Context, r *Record, promise func(*Record, error)) {
+ s.cl.TryProduce(ctx, r, promise)
+}
+
+// Begin begins a transaction, returning an error if the client has no
+// transactional id or is already in a transaction. Begin must be called
+// before producing records in a transaction.
+func (s *GroupTransactSession) Begin() error {
+ s.cl.cfg.logger.Log(LogLevelInfo, "beginning transact session")
+ return s.cl.BeginTransaction()
+}
+
+func (s *GroupTransactSession) failed() bool {
+ return s.revoked || s.lost
+}
+
+// End ends a transaction, committing if commit is true, if the group did not
+// rebalance since the transaction began, and if committing offsets is
+// successful. If any of these conditions are false, this aborts. This flushes
+// or aborts depending on `commit`.
+//
+// This returns whether the transaction committed or any error that occurred.
+// No returned error is retryable. Either the transactional ID has entered a
+// failed state, or the client retried so much that the retry limit was hit,
+// and odds are you should not continue. While a context is allowed, canceling
+// it will likely leave the client in an invalid state. Canceling should only
+// be done if you want to shut down.
+func (s *GroupTransactSession) End(ctx context.Context, commit TransactionEndTry) (committed bool, err error) {
+ defer func() {
+ s.failMu.Lock()
+ s.revoked = false
+ s.revokedCh = make(chan struct{})
+ s.lost = false
+ s.lostCh = make(chan struct{})
+ s.failMu.Unlock()
+ }()
+
+ switch commit {
+ case TryCommit:
+ if err := s.cl.Flush(ctx); err != nil {
+ return false, err // we do not abort below, because an error here is ctx closing
+ }
+ case TryAbort:
+ if err := s.cl.AbortBufferedRecords(ctx); err != nil {
+ return false, err // same
+ }
+ }
+
+ wantCommit := bool(commit)
+
+ s.failMu.Lock()
+ failed := s.failed()
+
+ precommit := s.cl.CommittedOffsets()
+ postcommit := s.cl.UncommittedOffsets()
+ s.failMu.Unlock()
+
+ var hasAbortableCommitErr bool
+ var commitErr error
+ var g *groupConsumer
+
+ kip447 := false
+ if wantCommit && !failed {
+ isAbortableCommitErr := func(err error) bool {
+ // ILLEGAL_GENERATION: rebalance began and completed
+ // before we committed.
+ //
+ // REBALANCE_IN_PREGRESS: rebalance began, abort.
+ //
+ // COORDINATOR_NOT_AVAILABLE,
+ // COORDINATOR_LOAD_IN_PROGRESS,
+ // NOT_COORDINATOR: request failed too many times
+ //
+ // CONCURRENT_TRANSACTIONS: Kafka not harmonized,
+ // we can just abort.
+ //
+ // UNKNOWN_SERVER_ERROR: technically should not happen,
+ // but we can just abort. Redpanda returns this in
+ // certain versions.
+ switch {
+ case errors.Is(err, kerr.IllegalGeneration),
+ errors.Is(err, kerr.RebalanceInProgress),
+ errors.Is(err, kerr.CoordinatorNotAvailable),
+ errors.Is(err, kerr.CoordinatorLoadInProgress),
+ errors.Is(err, kerr.NotCoordinator),
+ errors.Is(err, kerr.ConcurrentTransactions),
+ errors.Is(err, kerr.UnknownServerError):
+ return true
+ }
+ return false
+ }
+
+ var commitErrs []string
+
+ committed := make(chan struct{})
+ g = s.cl.commitTransactionOffsets(ctx, postcommit,
+ func(_ *kmsg.TxnOffsetCommitRequest, resp *kmsg.TxnOffsetCommitResponse, err error) {
+ defer close(committed)
+ if err != nil {
+ if isAbortableCommitErr(err) {
+ hasAbortableCommitErr = true
+ return
+ }
+ commitErrs = append(commitErrs, err.Error())
+ return
+ }
+ kip447 = resp.Version >= 3
+
+ for _, t := range resp.Topics {
+ for _, p := range t.Partitions {
+ if err := kerr.ErrorForCode(p.ErrorCode); err != nil {
+ if isAbortableCommitErr(err) {
+ hasAbortableCommitErr = true
+ } else {
+ commitErrs = append(commitErrs, fmt.Sprintf("topic %s partition %d: %v", t.Topic, p.Partition, err))
+ }
+ }
+ }
+ }
+ },
+ )
+ <-committed
+
+ if len(commitErrs) > 0 {
+ commitErr = fmt.Errorf("unable to commit transaction offsets: %s", strings.Join(commitErrs, ", "))
+ }
+ }
+
+ // Now that we have committed our offsets, before we allow them to be
+ // used, we force a heartbeat. By forcing a heartbeat, if there is no
+ // error, then we know we have up to RebalanceTimeout to write our
+ // EndTxnRequest without a problem.
+ //
+ // We should not be booted from the group if we receive an ok
+ // heartbeat, meaning that, as mentioned, we should be able to end the
+ // transaction safely.
+ var okHeartbeat bool
+ if g != nil && commitErr == nil {
+ waitHeartbeat := make(chan struct{})
+ var heartbeatErr error
+ select {
+ case g.heartbeatForceCh <- func(err error) {
+ defer close(waitHeartbeat)
+ heartbeatErr = err
+ }:
+ select {
+ case <-waitHeartbeat:
+ okHeartbeat = heartbeatErr == nil
+ case <-s.revokedCh:
+ case <-s.lostCh:
+ }
+ case <-s.revokedCh:
+ case <-s.lostCh:
+ }
+ }
+
+ s.failMu.Lock()
+
+ // If we know we are KIP-447 and the user is requiring stable, we can
+ // unlock immediately because Kafka will itself block a rebalance
+ // fetching offsets from outstanding transactions.
+ //
+ // If either of these are false, we spin up a goroutine that sleeps for
+ // 200ms before unlocking to give Kafka a chance to avoid some odd race
+ // that would permit duplicates (i.e., what KIP-447 is preventing).
+ //
+ // This 200ms is not perfect but it should be well enough time on a
+ // stable cluster. On an unstable cluster, I still expect clients to be
+ // slower than intra-cluster communication, but there is a risk.
+ if kip447 && s.cl.cfg.requireStable {
+ defer s.failMu.Unlock()
+ } else {
+ defer func() {
+ if committed {
+ s.cl.cfg.logger.Log(LogLevelDebug, "sleeping 200ms before allowing a rebalance to continue to give the brokers a chance to write txn markers and avoid duplicates")
+ go func() {
+ time.Sleep(200 * time.Millisecond)
+ s.failMu.Unlock()
+ }()
+ } else {
+ s.failMu.Unlock()
+ }
+ }()
+ }
+
+ tryCommit := !s.failed() && commitErr == nil && !hasAbortableCommitErr && okHeartbeat
+ willTryCommit := wantCommit && tryCommit
+
+ s.cl.cfg.logger.Log(LogLevelInfo, "transaction session ending",
+ "was_failed", s.failed(),
+ "want_commit", wantCommit,
+ "can_try_commit", tryCommit,
+ "will_try_commit", willTryCommit,
+ )
+
+ // We have a few potential retryable errors from EndTransaction.
+ // OperationNotAttempted will be returned at most once.
+ //
+ // UnknownServerError should not be returned, but some brokers do:
+ // technically this is fatal, but there is no downside to retrying
+ // (even retrying a commit) and seeing if we are successful or if we
+ // get a better error.
+ var tries int
+retry:
+ endTxnErr := s.cl.EndTransaction(ctx, TransactionEndTry(willTryCommit))
+ tries++
+ if endTxnErr != nil && tries < 10 {
+ switch {
+ case errors.Is(endTxnErr, kerr.OperationNotAttempted):
+ s.cl.cfg.logger.Log(LogLevelInfo, "end transaction with commit not attempted; retrying as abort")
+ willTryCommit = false
+ goto retry
+
+ case errors.Is(endTxnErr, kerr.UnknownServerError):
+ s.cl.cfg.logger.Log(LogLevelInfo, "end transaction with commit unknown server error; retrying")
+ after := time.NewTimer(s.cl.cfg.retryBackoff(tries))
+ select {
+ case <-after.C: // context canceled; we will see when we retry
+ case <-s.cl.ctx.Done():
+ after.Stop()
+ }
+ goto retry
+ }
+ }
+
+ if !willTryCommit || endTxnErr != nil {
+ currentCommit := s.cl.CommittedOffsets()
+ s.cl.cfg.logger.Log(LogLevelInfo, "transact session resetting to current committed state (potentially after a rejoin)",
+ "tried_commit", willTryCommit,
+ "commit_err", endTxnErr,
+ "state_precommit", precommit,
+ "state_currently_committed", currentCommit,
+ )
+ s.cl.setOffsets(currentCommit, false)
+ } else if willTryCommit && endTxnErr == nil {
+ s.cl.cfg.logger.Log(LogLevelInfo, "transact session successful, setting to newly committed state",
+ "tried_commit", willTryCommit,
+ "postcommit", postcommit,
+ )
+ s.cl.setOffsets(postcommit, false)
+ }
+
+ switch {
+ case commitErr != nil && endTxnErr == nil:
+ return false, commitErr
+
+ case commitErr == nil && endTxnErr != nil:
+ return false, endTxnErr
+
+ case commitErr != nil && endTxnErr != nil:
+ return false, endTxnErr
+
+ default: // both errs nil
+ committed = willTryCommit
+ return willTryCommit, nil
+ }
+}
+
+// BeginTransaction sets the client to a transactional state, erroring if there
+// is no transactional ID, or if the producer is currently in a fatal
+// (unrecoverable) state, or if the client is already in a transaction.
+//
+// This must not be called concurrently with other client functions.
+func (cl *Client) BeginTransaction() error {
+ if cl.cfg.txnID == nil {
+ return errNotTransactional
+ }
+
+ cl.producer.txnMu.Lock()
+ defer cl.producer.txnMu.Unlock()
+
+ if cl.producer.inTxn {
+ return errors.New("invalid attempt to begin a transaction while already in a transaction")
+ }
+
+ needRecover, didRecover, err := cl.maybeRecoverProducerID(context.Background())
+ if needRecover && !didRecover {
+ cl.cfg.logger.Log(LogLevelInfo, "unable to begin transaction due to unrecoverable producer id error", "err", err)
+ return fmt.Errorf("producer ID has a fatal, unrecoverable error, err: %w", err)
+ }
+
+ cl.producer.inTxn = true
+ cl.producer.producingTxn.Store(true) // allow produces for txns now
+ cl.cfg.logger.Log(LogLevelInfo, "beginning transaction", "transactional_id", *cl.cfg.txnID)
+
+ return nil
+}
+
+// EndBeginTxnHow controls the safety of how EndAndBeginTransaction executes.
+type EndBeginTxnHow uint8
+
+const (
+ // EndBeginTxnSafe ensures a "safe" execution of EndAndBeginTransaction
+ // at the expense of speed. This option blocks all produce requests and
+ // only resumes produce requests when onEnd finishes. Note that some
+ // produce requests may have finished successfully and records that
+ // were a part of a transaction may have their promises waiting to be
+ // called: not all promises are guaranteed to be called.
+ EndBeginTxnSafe EndBeginTxnHow = iota
+
+ // EndBeginTxnUnsafe opts for less safe EndAndBeginTransaction flow to
+ // achieve higher throughput. This option allows produce requests to
+ // continue while EndTxn actually commits. This is unsafe because a
+ // produce request itself only half begins a transaction. Internally,
+ // AddPartitionsToTxn actually begins a transaction. If your
+ // application dies before the client is able to successfully issue
+ // AddPartitionsToTxn, then a transaction will have partially begun
+ // within Kafka: the partial transaction will prevent the partition
+ // from being consumable past where the transaction begun, and the
+ // transaction will not timeout. You will have to restart your
+ // application with the SAME transactional ID and produce to all the
+ // same partitions to ensure to resume the transaction and unstick the
+ // partitions.
+ //
+ // Also note: this option does not work on all broker implementations.
+ // This relies on Kafka internals. Some brokers (notably Redpanda) are
+ // more strict with enforcing transaction correctness and this option
+ // cannot be used and will cause errors.
+ //
+ // Deprecated: Kafka 3.6 removed support for the hacky behavior that
+ // this option was abusing. Thus, as of Kafka 3.6, this option does not
+ // work against Kafka. This option also has never worked for Redpanda
+ // becuse Redpanda always strictly validated that partitions were a
+ // part of a transaction. Later versions of Kafka and Redpanda will
+ // remove the need for AddPartitionsToTxn at all and thus this option
+ // ultimately will be unnecessary anyway.
+ EndBeginTxnUnsafe
+)
+
+// EndAndBeginTransaction is a combination of EndTransaction and
+// BeginTransaction, and relaxes the restriction that the client must have no
+// buffered records. This function does not flush nor abort any buffered
+// records. It is ok to concurrently produce while this function executes.
+//
+// This function has different safety guarantees which are up to the user to
+// decide. See the documentation on EndBeginTxnHow for which you would like to
+// choose.
+//
+// The onEnd function is called with your input context and the result of
+// EndTransaction. Promises are paused while onEnd executes. If onEnd returns
+// an error, BeginTransaction is not called and this function returns the
+// result of onEnd. Otherwise, this function returns the result of
+// BeginTransaction. See the documentation on EndTransaction and
+// BeginTransaction for further details. It is invalid to call this function
+// more than once at a time, and it is invalid to call concurrent with
+// EndTransaction or BeginTransaction.
+func (cl *Client) EndAndBeginTransaction(
+ ctx context.Context,
+ how EndBeginTxnHow,
+ commit TransactionEndTry,
+ onEnd func(context.Context, error) error,
+) (rerr error) {
+ if g := cl.consumer.g; g != nil {
+ return errors.New("cannot use EndAndBeginTransaction with EOS")
+ }
+
+ cl.producer.txnMu.Lock()
+ defer cl.producer.txnMu.Unlock()
+
+ // From BeginTransaction: if we return with no error, we begin. Unlike
+ // BeginTransaction, we do not error if in a transaction, because we
+ // expect to be in one.
+ defer func() {
+ if rerr == nil {
+ needRecover, didRecover, err := cl.maybeRecoverProducerID(ctx)
+ if needRecover && !didRecover {
+ cl.cfg.logger.Log(LogLevelInfo, "unable to begin transaction due to unrecoverable producer id error", "err", err)
+ rerr = fmt.Errorf("producer ID has a fatal, unrecoverable error, err: %w", err)
+ return
+ }
+ cl.producer.inTxn = true
+ cl.cfg.logger.Log(LogLevelInfo, "beginning transaction", "transactional_id", *cl.cfg.txnID)
+ }
+ }()
+
+ // If end/beginning safely, we have to pause AddPartitionsToTxn and
+ // ProduceRequest, and we only resume after the user's onEnd has been
+ // called.
+ if how == EndBeginTxnSafe {
+ if err := cl.producer.pause(ctx); err != nil {
+ return err
+ }
+ defer cl.producer.resume()
+ }
+
+ // Before BeginTransaction, we block promises & call onEnd with whatever
+ // the return error is.
+ cl.producer.promisesMu.Lock()
+ var promisesUnblocked bool
+ unblockPromises := func() {
+ if promisesUnblocked {
+ return
+ }
+ promisesUnblocked = true
+ defer cl.producer.promisesMu.Unlock()
+ rerr = onEnd(ctx, rerr)
+ }
+ defer unblockPromises()
+
+ if !cl.producer.inTxn {
+ return nil
+ }
+
+ var anyAdded bool
+ var readd map[string][]int32
+ for topic, parts := range cl.producer.topics.load() {
+ for i, part := range parts.load().partitions {
+ if part.records.addedToTxn.Swap(false) {
+ if how == EndBeginTxnUnsafe {
+ if readd == nil {
+ readd = make(map[string][]int32)
+ }
+ readd[topic] = append(readd[topic], int32(i))
+ }
+ anyAdded = true
+ }
+ }
+ }
+ anyAdded = anyAdded || cl.producer.readded
+
+ // EndTxn when no txn was started returns INVALID_TXN_STATE.
+ if !anyAdded {
+ cl.cfg.logger.Log(LogLevelDebug, "no records were produced during the commit; thus no transaction was began; ending without doing anything")
+ return nil
+ }
+
+ // From EndTransaction: if the pid has an error, we may try to recover.
+ id, epoch, err := cl.producerID(ctx2fn(ctx))
+ if err != nil {
+ if commit {
+ return kerr.OperationNotAttempted
+ }
+ if _, didRecover, _ := cl.maybeRecoverProducerID(ctx); didRecover {
+ return nil
+ }
+ }
+ cl.cfg.logger.Log(LogLevelInfo, "ending transaction",
+ "transactional_id", *cl.cfg.txnID,
+ "producer_id", id,
+ "epoch", epoch,
+ "commit", commit,
+ )
+ cl.producer.readded = false
+ err = cl.doWithConcurrentTransactions(ctx, "EndTxn", func() error {
+ req := kmsg.NewPtrEndTxnRequest()
+ req.TransactionalID = *cl.cfg.txnID
+ req.ProducerID = id
+ req.ProducerEpoch = epoch
+ req.Commit = bool(commit)
+ resp, err := req.RequestWith(ctx, cl)
+ if err != nil {
+ return err
+ }
+
+ // When ending a transaction, if the user is using unsafe mode,
+ // there is a logic race where the user can actually end before
+ // AddPartitionsToTxn is issued. This should be rare and is
+ // most likely only to happen whenever a new transaction is
+ // starting from a not-in-transaction state (i.e., the first
+ // transaction). If we see InvalidTxnState in unsafe mode, we
+ // assume that a transaction was not actually begun and we
+ // return success.
+ //
+ // In Kafka, InvalidTxnState is also returned when producing
+ // non-transactional records from a producer that is currently
+ // in a transaction.
+ //
+ // All other cases it is returned is in EndTxn:
+ // * state == CompleteCommit and EndTxn != commit
+ // * state == CompleteAbort and EndTxn != abort
+ // * state == PrepareCommit and EndTxn != commit (otherwise, returns concurrent transactions)
+ // * state == PrepareAbort and EndTxn != abort (otherwise, returns concurrent transactions)
+ // * state == Empty
+ //
+ // This basically guards against the final case, all others are
+ // Kafka internal state transitioning and we should never hit
+ // them.
+ if how == EndBeginTxnUnsafe && resp.ErrorCode == kerr.InvalidTxnState.Code {
+ return nil
+ }
+ return kerr.ErrorForCode(resp.ErrorCode)
+ })
+ var ke *kerr.Error
+ if errors.As(err, &ke) && !ke.Retriable {
+ cl.failProducerID(id, epoch, err)
+ }
+ if err != nil || how != EndBeginTxnUnsafe {
+ return err
+ }
+ unblockPromises()
+
+ // If we are end/beginning unsafely, then we need to re-add all
+ // partitions to a new transaction immediately. Timing makes it
+ // impossible to know what was truly added before EndTxn, so we
+ // pessimistically assume that every partition must be re-added.
+ //
+ // We track readd before the txn and swap those to un-added, but we
+ // also need to track anything that is newly added that raced with our
+ // EndTxn. We swap before the txn to ensure that *eventually*,
+ // partitions will be tracked as not in a transaction if people stop
+ // producing.
+ //
+ // We do this before the user callback because we *need* to start a new
+ // transaction within Kafka to ensure there will be a timeout. Per the
+ // unsafe aspect, the client could die or this request could error and
+ // there could be a stranded txn within Kafka's ProducerStateManager,
+ // but ideally the user will reconnect with the same txnal id.
+ cl.producer.readded = true
+ return cl.doWithConcurrentTransactions(ctx, "AddPartitionsToTxn", func() error {
+ req := kmsg.NewPtrAddPartitionsToTxnRequest()
+ req.TransactionalID = *cl.cfg.txnID
+ req.ProducerID = id
+ req.ProducerEpoch = epoch
+
+ for topic, parts := range cl.producer.topics.load() {
+ for i, part := range parts.load().partitions {
+ if part.records.addedToTxn.Load() {
+ readd[topic] = append(readd[topic], int32(i))
+ }
+ }
+ }
+
+ ps := make(map[int32]struct{})
+ for topic, parts := range readd {
+ t := kmsg.NewAddPartitionsToTxnRequestTopic()
+ t.Topic = topic
+ for _, part := range parts {
+ ps[part] = struct{}{}
+ }
+ for p := range ps {
+ t.Partitions = append(t.Partitions, p)
+ delete(ps, p)
+ }
+ if len(t.Partitions) > 0 {
+ req.Topics = append(req.Topics, t)
+ }
+ }
+
+ resp, err := req.RequestWith(ctx, cl)
+ if err != nil {
+ return err
+ }
+
+ for i := range resp.Topics {
+ t := &resp.Topics[i]
+ for j := range t.Partitions {
+ p := &t.Partitions[j]
+ if err := kerr.ErrorForCode(p.ErrorCode); err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+ })
+}
+
+// AbortBufferedRecords fails all unflushed records with ErrAborted and waits
+// for there to be no buffered records.
+//
+// This accepts a context to quit the wait early, but quitting the wait may
+// lead to an invalid state and should only be used if you are quitting your
+// application. This function waits to abort records at safe points: if records
+// are known to not be in flight. This function is safe to call multiple times
+// concurrently, and safe to call concurrent with Flush.
+//
+// NOTE: This aborting record waits until all inflight requests have known
+// responses. The client must wait to ensure no duplicate sequence number
+// issues. For more details, and for an immediate alternative, check the
+// documentation on UnsafeAbortBufferedRecords.
+func (cl *Client) AbortBufferedRecords(ctx context.Context) error {
+ cl.producer.aborting.Add(1)
+ defer cl.producer.aborting.Add(-1)
+
+ cl.cfg.logger.Log(LogLevelInfo, "producer state set to aborting; continuing to wait via flushing")
+ defer cl.cfg.logger.Log(LogLevelDebug, "aborted buffered records")
+
+ // We must clear unknown topics ourselves, because flush just waits
+ // like normal.
+ p := &cl.producer
+ p.unknownTopicsMu.Lock()
+ for _, unknown := range p.unknownTopics {
+ select {
+ case unknown.fatal <- ErrAborting:
+ default:
+ }
+ }
+ p.unknownTopicsMu.Unlock()
+
+ // Setting the aborting state allows records to fail before
+ // or after produce requests; thus, now we just flush.
+ return cl.Flush(ctx)
+}
+
+// UnsafeAbortBufferedRecords fails all unflushed records with ErrAborted and
+// waits for there to be no buffered records. This function does NOT wait for
+// any inflight produce requests to finish, meaning topics in the client may be
+// in an invalid state and producing to an invalid-state topic may cause the
+// client to enter a fatal failed state. If you want to produce to topics that
+// were unsafely aborted, it is recommended to use PurgeTopicsFromClient to
+// forcefully reset the topics before producing to them again.
+//
+// When producing with idempotency enabled or with transactions, every record
+// has a sequence number. The client must wait for inflight requests to have
+// responses before failing a record, otherwise the client cannot know if a
+// sequence number was seen by the broker and tracked or not seen by the broker
+// and not tracked. By unsafely aborting, the client forcefully abandons all
+// records, and producing to the topics again may re-use a sequence number and
+// cause internal errors.
+func (cl *Client) UnsafeAbortBufferedRecords() {
+ cl.failBufferedRecords(ErrAborting)
+}
+
+// EndTransaction ends a transaction and resets the client's internal state to
+// not be in a transaction.
+//
+// Flush and CommitOffsetsForTransaction must be called before this function;
+// this function does not flush and does not itself ensure that all buffered
+// records are flushed. If no record yet has caused a partition to be added to
+// the transaction, this function does nothing and returns nil. Alternatively,
+// AbortBufferedRecords should be called before aborting a transaction to
+// ensure that any buffered records not yet flushed will not be a part of a new
+// transaction.
+//
+// If the producer ID has an error and you are trying to commit, this will
+// return with kerr.OperationNotAttempted. If this happened, retry
+// EndTransaction with TryAbort. Not other error is retryable, and you should
+// not retry with TryAbort.
+//
+// If records failed with UnknownProducerID and your Kafka version is at least
+// 2.5, then aborting here will potentially allow the client to recover for
+// more production.
+//
+// Note that canceling the context will likely leave the client in an
+// undesirable state, because canceling the context may cancel the in-flight
+// EndTransaction request, making it impossible to know whether the commit or
+// abort was successful. It is recommended to not cancel the context.
+func (cl *Client) EndTransaction(ctx context.Context, commit TransactionEndTry) error {
+ cl.producer.txnMu.Lock()
+ defer cl.producer.txnMu.Unlock()
+
+ if !cl.producer.inTxn {
+ return nil
+ }
+ cl.producer.inTxn = false
+
+ cl.producer.producingTxn.Store(false) // forbid any new produces while ending txn
+
+ // anyAdded tracks if any partitions were added to this txn, because
+ // any partitions written to triggers AddPartitionToTxn, which triggers
+ // the txn to actually begin within Kafka.
+ //
+ // If we consumed at all but did not produce, the transaction ending
+ // issues AddOffsetsToTxn, which internally adds a __consumer_offsets
+ // partition to the transaction. Thus, if we added offsets, then we
+ // also produced.
+ var anyAdded bool
+ if g := cl.consumer.g; g != nil {
+ // We do not lock because we expect commitTransactionOffsets to
+ // be called *before* ending a transaction.
+ if g.offsetsAddedToTxn {
+ g.offsetsAddedToTxn = false
+ anyAdded = true
+ }
+ } else {
+ cl.cfg.logger.Log(LogLevelDebug, "transaction ending, no group loaded; this must be a producer-only transaction, not consume-modify-produce EOS")
+ }
+
+ // After the flush, no records are being produced to, and we can set
+ // addedToTxn to false outside of any mutex.
+ for _, parts := range cl.producer.topics.load() {
+ for _, part := range parts.load().partitions {
+ anyAdded = part.records.addedToTxn.Swap(false) || anyAdded
+ }
+ }
+
+ // If the user previously used EndAndBeginTransaction with
+ // EndBeginTxnUnsafe, we may have to end a transaction even though
+ // nothing may be in it.
+ anyAdded = anyAdded || cl.producer.readded
+
+ // If no partition was added to a transaction, then we have nothing to commit.
+ //
+ // Note that anyAdded is true if the producer ID was failed, meaning we will
+ // get to the potential recovery logic below if necessary.
+ if !anyAdded {
+ cl.cfg.logger.Log(LogLevelDebug, "no records were produced during the commit; thus no transaction was began; ending without doing anything")
+ return nil
+ }
+
+ id, epoch, err := cl.producerID(ctx2fn(ctx))
+ if err != nil {
+ if commit {
+ return kerr.OperationNotAttempted
+ }
+
+ // If we recovered the producer ID, we return early, since
+ // there is no reason to issue an abort now that the id is
+ // different. Otherwise, we issue our EndTxn which will likely
+ // fail, but that is ok, we will just return error.
+ _, didRecover, _ := cl.maybeRecoverProducerID(ctx)
+ if didRecover {
+ return nil
+ }
+ }
+
+ cl.cfg.logger.Log(LogLevelInfo, "ending transaction",
+ "transactional_id", *cl.cfg.txnID,
+ "producer_id", id,
+ "epoch", epoch,
+ "commit", commit,
+ )
+
+ cl.producer.readded = false
+ err = cl.doWithConcurrentTransactions(ctx, "EndTxn", func() error {
+ req := kmsg.NewPtrEndTxnRequest()
+ req.TransactionalID = *cl.cfg.txnID
+ req.ProducerID = id
+ req.ProducerEpoch = epoch
+ req.Commit = bool(commit)
+ resp, err := req.RequestWith(ctx, cl)
+ if err != nil {
+ return err
+ }
+ return kerr.ErrorForCode(resp.ErrorCode)
+ })
+
+ // If the returned error is still a Kafka error, this is fatal and we
+ // need to fail our producer ID we loaded above.
+ //
+ // UNKNOWN_SERVER_ERROR can theoretically be returned (not all brokers
+ // do). This technically is fatal, but we do not really know whether it
+ // is. We can just return this error and let the caller decide to
+ // continue, if the caller does continue, we will try something and
+ // eventually then receive our proper transactional error, if any.
+ var ke *kerr.Error
+ if errors.As(err, &ke) && !ke.Retriable && ke.Code != kerr.UnknownServerError.Code {
+ cl.failProducerID(id, epoch, err)
+ }
+
+ return err
+}
+
+// This returns if it is necessary to recover the producer ID (it has an
+// error), whether it is possible to recover, and, if not, the error.
+//
+// We call this when beginning a transaction or when ending with an abort.
+func (cl *Client) maybeRecoverProducerID(ctx context.Context) (necessary, did bool, err error) {
+ cl.producer.mu.Lock()
+ defer cl.producer.mu.Unlock()
+
+ id, epoch, err := cl.producerID(ctx2fn(ctx))
+ if err == nil {
+ return false, false, nil
+ }
+
+ var ke *kerr.Error
+ if ok := errors.As(err, &ke); !ok {
+ return true, false, err
+ }
+
+ kip360 := cl.producer.idVersion >= 3 && (errors.Is(ke, kerr.UnknownProducerID) || errors.Is(ke, kerr.InvalidProducerIDMapping))
+ kip588 := cl.producer.idVersion >= 4 && errors.Is(ke, kerr.InvalidProducerEpoch /* || err == kerr.TransactionTimedOut when implemented in Kafka */)
+
+ recoverable := kip360 || kip588
+ if !recoverable {
+ return true, false, err // fatal, unrecoverable
+ }
+
+ // Storing errReloadProducerID will reset sequence numbers as appropriate
+ // when the producer ID is reloaded successfully.
+ cl.producer.id.Store(&producerID{
+ id: id,
+ epoch: epoch,
+ err: errReloadProducerID,
+ })
+ return true, true, nil
+}
+
+// If a transaction is begun too quickly after finishing an old transaction,
+// Kafka may still be finalizing its commit / abort and will return a
+// concurrent transactions error. We handle that by retrying for a bit.
+func (cl *Client) doWithConcurrentTransactions(ctx context.Context, name string, fn func() error) error {
+ start := time.Now()
+ tries := 0
+ backoff := cl.cfg.txnBackoff
+
+start:
+ err := fn()
+ if errors.Is(err, kerr.ConcurrentTransactions) {
+ // The longer we are stalled, the more we enforce a minimum
+ // backoff.
+ since := time.Since(start)
+ switch {
+ case since > time.Second:
+ if backoff < 200*time.Millisecond {
+ backoff = 200 * time.Millisecond
+ }
+ case since > 5*time.Second/2:
+ if backoff < 500*time.Millisecond {
+ backoff = 500 * time.Millisecond
+ }
+ case since > 5*time.Second:
+ if backoff < time.Second {
+ backoff = time.Second
+ }
+ }
+
+ tries++
+ cl.cfg.logger.Log(LogLevelDebug, fmt.Sprintf("%s failed with CONCURRENT_TRANSACTIONS, which may be because we ended a txn and began producing in a new txn too quickly; backing off and retrying", name),
+ "backoff", backoff,
+ "since_request_tries_start", time.Since(start),
+ "tries", tries,
+ )
+ select {
+ case <-time.After(backoff):
+ case <-ctx.Done():
+ cl.cfg.logger.Log(LogLevelError, fmt.Sprintf("abandoning %s retry due to request ctx quitting", name))
+ return err
+ case <-cl.ctx.Done():
+ cl.cfg.logger.Log(LogLevelError, fmt.Sprintf("abandoning %s retry due to client ctx quitting", name))
+ return err
+ }
+ goto start
+ }
+ return err
+}
+
+////////////////////////////////////////////////////////////////////////////////////////////
+// TRANSACTIONAL COMMITTING //
+// MOSTLY DUPLICATED CODE DUE TO NO GENERICS AND BECAUSE THE TYPES ARE SLIGHTLY DIFFERENT //
+////////////////////////////////////////////////////////////////////////////////////////////
+
+// commitTransactionOffsets is exactly like CommitOffsets, but specifically for
+// use with transactional consuming and producing.
+//
+// Since this function is a gigantic footgun if not done properly, we hide this
+// and only allow transaction sessions to commit.
+//
+// Unlike CommitOffsets, we do not update the group's uncommitted map. We leave
+// that to the calling code to do properly with SetOffsets depending on whether
+// an eventual abort happens or not.
+func (cl *Client) commitTransactionOffsets(
+ ctx context.Context,
+ uncommitted map[string]map[int32]EpochOffset,
+ onDone func(*kmsg.TxnOffsetCommitRequest, *kmsg.TxnOffsetCommitResponse, error),
+) *groupConsumer {
+ cl.cfg.logger.Log(LogLevelDebug, "in commitTransactionOffsets", "with", uncommitted)
+ defer cl.cfg.logger.Log(LogLevelDebug, "left commitTransactionOffsets")
+
+ if cl.cfg.txnID == nil {
+ onDone(nil, nil, errNotTransactional)
+ return nil
+ }
+
+ // Before committing, ensure we are at least in a transaction. We
+ // unlock the producer txnMu before committing to allow EndTransaction
+ // to go through, even though that could cut off our commit.
+ cl.producer.txnMu.Lock()
+ var unlockedTxn bool
+ unlockTxn := func() {
+ if !unlockedTxn {
+ cl.producer.txnMu.Unlock()
+ }
+ unlockedTxn = true
+ }
+ defer unlockTxn()
+ if !cl.producer.inTxn {
+ onDone(nil, nil, errNotInTransaction)
+ return nil
+ }
+
+ g := cl.consumer.g
+ if g == nil {
+ onDone(kmsg.NewPtrTxnOffsetCommitRequest(), kmsg.NewPtrTxnOffsetCommitResponse(), errNotGroup)
+ return nil
+ }
+
+ req, err := g.prepareTxnOffsetCommit(ctx, uncommitted)
+ if err != nil {
+ onDone(req, kmsg.NewPtrTxnOffsetCommitResponse(), err)
+ return g
+ }
+ if len(req.Topics) == 0 {
+ onDone(kmsg.NewPtrTxnOffsetCommitRequest(), kmsg.NewPtrTxnOffsetCommitResponse(), nil)
+ return g
+ }
+
+ if !g.offsetsAddedToTxn {
+ if err := cl.addOffsetsToTxn(ctx, g.cfg.group); err != nil {
+ if onDone != nil {
+ onDone(nil, nil, err)
+ }
+ return g
+ }
+ g.offsetsAddedToTxn = true
+ }
+
+ unlockTxn()
+
+ if err := g.waitJoinSyncMu(ctx); err != nil {
+ onDone(kmsg.NewPtrTxnOffsetCommitRequest(), kmsg.NewPtrTxnOffsetCommitResponse(), err)
+ return nil
+ }
+ unblockJoinSync := func(req *kmsg.TxnOffsetCommitRequest, resp *kmsg.TxnOffsetCommitResponse, err error) {
+ g.noCommitDuringJoinAndSync.RUnlock()
+ onDone(req, resp, err)
+ }
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ g.commitTxn(ctx, req, unblockJoinSync)
+ return g
+}
+
+// Ties a transactional producer to a group. Since this requires a producer ID,
+// this initializes one if it is not yet initialized. This would only be the
+// case if trying to commit before any records have been sent.
+func (cl *Client) addOffsetsToTxn(ctx context.Context, group string) error {
+ id, epoch, err := cl.producerID(ctx2fn(ctx))
+ if err != nil {
+ return err
+ }
+
+ err = cl.doWithConcurrentTransactions(ctx, "AddOffsetsToTxn", func() error { // committing offsets without producing causes a transaction to begin within Kafka
+ cl.cfg.logger.Log(LogLevelInfo, "issuing AddOffsetsToTxn",
+ "txn", *cl.cfg.txnID,
+ "producerID", id,
+ "producerEpoch", epoch,
+ "group", group,
+ )
+ req := kmsg.NewPtrAddOffsetsToTxnRequest()
+ req.TransactionalID = *cl.cfg.txnID
+ req.ProducerID = id
+ req.ProducerEpoch = epoch
+ req.Group = group
+ resp, err := req.RequestWith(ctx, cl)
+ if err != nil {
+ return err
+ }
+ return kerr.ErrorForCode(resp.ErrorCode)
+ })
+
+ // If the returned error is still a Kafka error, this is fatal and we
+ // need to fail our producer ID we created just above.
+ //
+ // We special case UNKNOWN_SERVER_ERROR, because we do not really know
+ // if this is fatal. If it is, we will catch it later on a better
+ // error. Some brokers send this when things fail internally, we can
+ // just abort our commit and see if things are still bad in
+ // EndTransaction.
+ var ke *kerr.Error
+ if errors.As(err, &ke) && !ke.Retriable && ke.Code != kerr.UnknownServerError.Code {
+ cl.failProducerID(id, epoch, err)
+ }
+
+ return err
+}
+
+// commitTxn is ALMOST EXACTLY THE SAME as commit, but changed for txn types
+// and we avoid updateCommitted. We avoid updating because we manually
+// SetOffsets when ending the transaction.
+func (g *groupConsumer) commitTxn(ctx context.Context, req *kmsg.TxnOffsetCommitRequest, onDone func(*kmsg.TxnOffsetCommitRequest, *kmsg.TxnOffsetCommitResponse, error)) {
+ if onDone == nil { // note we must always call onDone
+ onDone = func(_ *kmsg.TxnOffsetCommitRequest, _ *kmsg.TxnOffsetCommitResponse, _ error) {}
+ }
+
+ if g.commitCancel != nil {
+ g.commitCancel() // cancel any prior commit
+ }
+ priorCancel := g.commitCancel
+ priorDone := g.commitDone
+
+ // Unlike the non-txn consumer, we use the group context for
+ // transaction offset committing. We want to quit when the group is
+ // left, and we are not committing when leaving. We rely on proper
+ // usage of the GroupTransactSession API to issue commits, so there is
+ // no reason not to use the group context here.
+ commitCtx, commitCancel := context.WithCancel(g.ctx) // enable ours to be canceled and waited for
+ commitDone := make(chan struct{})
+
+ g.commitCancel = commitCancel
+ g.commitDone = commitDone
+
+ if ctx.Done() != nil {
+ go func() {
+ select {
+ case <-ctx.Done():
+ commitCancel()
+ case <-commitCtx.Done():
+ }
+ }()
+ }
+
+ go func() {
+ defer close(commitDone) // allow future commits to continue when we are done
+ defer commitCancel()
+ if priorDone != nil {
+ select {
+ case <-priorDone:
+ default:
+ g.cl.cfg.logger.Log(LogLevelDebug, "canceling prior txn offset commit to issue another")
+ priorCancel()
+ <-priorDone // wait for any prior request to finish
+ }
+ }
+ g.cl.cfg.logger.Log(LogLevelDebug, "issuing txn offset commit", "uncommitted", req)
+
+ var resp *kmsg.TxnOffsetCommitResponse
+ var err error
+ if len(req.Topics) > 0 {
+ resp, err = req.RequestWith(commitCtx, g.cl)
+ }
+ if err != nil {
+ onDone(req, nil, err)
+ return
+ }
+ onDone(req, resp, nil)
+ }()
+}
+
+func (g *groupConsumer) prepareTxnOffsetCommit(ctx context.Context, uncommitted map[string]map[int32]EpochOffset) (*kmsg.TxnOffsetCommitRequest, error) {
+ req := kmsg.NewPtrTxnOffsetCommitRequest()
+
+ // We're now generating the producerID before addOffsetsToTxn.
+ // We will not make this request until after addOffsetsToTxn, but it's possible to fail here due to a failed producerID.
+ id, epoch, err := g.cl.producerID(ctx2fn(ctx))
+ if err != nil {
+ return req, err
+ }
+
+ req.TransactionalID = *g.cl.cfg.txnID
+ req.Group = g.cfg.group
+ req.ProducerID = id
+ req.ProducerEpoch = epoch
+ memberID, generation := g.memberGen.load()
+ req.Generation = generation
+ req.MemberID = memberID
+ req.InstanceID = g.cfg.instanceID
+
+ for topic, partitions := range uncommitted {
+ reqTopic := kmsg.NewTxnOffsetCommitRequestTopic()
+ reqTopic.Topic = topic
+ for partition, eo := range partitions {
+ reqPartition := kmsg.NewTxnOffsetCommitRequestTopicPartition()
+ reqPartition.Partition = partition
+ reqPartition.Offset = eo.Offset
+ reqPartition.LeaderEpoch = eo.Epoch
+ reqPartition.Metadata = &req.MemberID
+ reqTopic.Partitions = append(reqTopic.Partitions, reqPartition)
+ }
+ req.Topics = append(req.Topics, reqTopic)
+ }
+
+ if fn, ok := ctx.Value(txnCommitContextFn).(func(*kmsg.TxnOffsetCommitRequest) error); ok {
+ if err := fn(req); err != nil {
+ return req, err
+ }
+ }
+ return req, nil
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kmsg/LICENSE b/vendor/github.com/twmb/franz-go/pkg/kmsg/LICENSE
new file mode 100644
index 0000000000000..36e18034325d5
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kmsg/LICENSE
@@ -0,0 +1,24 @@
+Copyright 2020, Travis Bischel.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the library nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/twmb/franz-go/pkg/kmsg/api.go b/vendor/github.com/twmb/franz-go/pkg/kmsg/api.go
new file mode 100644
index 0000000000000..6bda2e61bd9d1
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kmsg/api.go
@@ -0,0 +1,423 @@
+// Package kmsg contains Kafka request and response types and autogenerated
+// serialization and deserialization functions.
+//
+// This package may bump major versions whenever Kafka makes a backwards
+// incompatible protocol change, per the types chosen for this package. For
+// example, Kafka can change a field from non-nullable to nullable, which would
+// require changing a field from a non-pointer to a pointer. We could get
+// around this by making everything an opaque struct and having getters, but
+// that is more tedious than having a few rare major version bumps.
+//
+// If you are using this package directly with kgo, you should either always
+// use New functions, or Default functions after creating structs, or you
+// should pin the max supported version. If you use New functions, you will
+// have safe defaults as new fields are added. If you pin versions, you will
+// avoid new fields being used. If you do neither of these, you may opt in to
+// new fields that do not have safe zero value defaults, and this may lead to
+// errors or unexpected results.
+//
+// Thus, whenever you initialize a struct from this package, do the following:
+//
+// struct := kmsg.NewFoo()
+// struct.Field = "value I want to set"
+//
+// Most of this package is generated, but a few things are manual. What is
+// manual: all interfaces, the RequestFormatter, record / message / record
+// batch reading, and sticky member metadata serialization.
+package kmsg
+
+import (
+ "context"
+ "sort"
+
+ "github.com/twmb/franz-go/pkg/kmsg/internal/kbin"
+)
+
+//go:generate cp ../kbin/primitives.go internal/kbin/
+
+// Requestor issues requests. Notably, the kgo.Client and kgo.Broker implements
+// Requestor. All Requests in this package have a RequestWith function to have
+// type-safe requests.
+type Requestor interface {
+ // Request issues a Request and returns either a Response or an error.
+ Request(context.Context, Request) (Response, error)
+}
+
+// Request represents a type that can be requested to Kafka.
+type Request interface {
+ // Key returns the protocol key for this message kind.
+ Key() int16
+ // MaxVersion returns the maximum protocol version this message
+ // supports.
+ //
+ // This function allows one to implement a client that chooses message
+ // versions based off of the max of a message's max version in the
+ // client and the broker's max supported version.
+ MaxVersion() int16
+ // SetVersion sets the version to use for this request and response.
+ SetVersion(int16)
+ // GetVersion returns the version currently set to use for the request
+ // and response.
+ GetVersion() int16
+ // IsFlexible returns whether the request at its current version is
+ // "flexible" as per the KIP-482.
+ IsFlexible() bool
+ // AppendTo appends this message in wire protocol form to a slice and
+ // returns the slice.
+ AppendTo([]byte) []byte
+ // ReadFrom parses all of the input slice into the response type.
+ //
+ // This should return an error if too little data is input.
+ ReadFrom([]byte) error
+ // ResponseKind returns an empty Response that is expected for
+ // this message request.
+ ResponseKind() Response
+}
+
+// AdminRequest represents a request that must be issued to Kafka controllers.
+type AdminRequest interface {
+ // IsAdminRequest is a method attached to requests that must be
+ // issed to Kafka controllers.
+ IsAdminRequest()
+ Request
+}
+
+// GroupCoordinatorRequest represents a request that must be issued to a
+// group coordinator.
+type GroupCoordinatorRequest interface {
+ // IsGroupCoordinatorRequest is a method attached to requests that
+ // must be issued to group coordinators.
+ IsGroupCoordinatorRequest()
+ Request
+}
+
+// TxnCoordinatorRequest represents a request that must be issued to a
+// transaction coordinator.
+type TxnCoordinatorRequest interface {
+ // IsTxnCoordinatorRequest is a method attached to requests that
+ // must be issued to transaction coordinators.
+ IsTxnCoordinatorRequest()
+ Request
+}
+
+// Response represents a type that Kafka responds with.
+type Response interface {
+ // Key returns the protocol key for this message kind.
+ Key() int16
+ // MaxVersion returns the maximum protocol version this message
+ // supports.
+ MaxVersion() int16
+ // SetVersion sets the version to use for this request and response.
+ SetVersion(int16)
+ // GetVersion returns the version currently set to use for the request
+ // and response.
+ GetVersion() int16
+ // IsFlexible returns whether the request at its current version is
+ // "flexible" as per the KIP-482.
+ IsFlexible() bool
+ // AppendTo appends this message in wire protocol form to a slice and
+ // returns the slice.
+ AppendTo([]byte) []byte
+ // ReadFrom parses all of the input slice into the response type.
+ //
+ // This should return an error if too little data is input.
+ ReadFrom([]byte) error
+ // RequestKind returns an empty Request that is expected for
+ // this message request.
+ RequestKind() Request
+}
+
+// UnsafeReadFrom, implemented by all requests and responses generated in this
+// package, switches to using unsafe slice-to-string conversions when reading.
+// This can be used to avoid a lot of garbage, but it means to have to be
+// careful when using any strings in structs: if you hold onto the string, the
+// underlying response slice will not be garbage collected.
+type UnsafeReadFrom interface {
+ UnsafeReadFrom([]byte) error
+}
+
+// ThrottleResponse represents a response that could have a throttle applied by
+// Kafka. Any response that implements ThrottleResponse also implements
+// SetThrottleResponse.
+//
+// Kafka 2.0.0 switched throttles from being applied before responses to being
+// applied after responses.
+type ThrottleResponse interface {
+ // Throttle returns the response's throttle millis value and
+ // whether Kafka applies the throttle after the response.
+ Throttle() (int32, bool)
+}
+
+// SetThrottleResponse sets the throttle in a response that can have a throttle
+// applied. Any kmsg interface that implements ThrottleResponse also implements
+// SetThrottleResponse.
+type SetThrottleResponse interface {
+ // SetThrottle sets the response's throttle millis value.
+ SetThrottle(int32)
+}
+
+// TimeoutRequest represents a request that has a TimeoutMillis field.
+// Any request that implements TimeoutRequest also implements SetTimeoutRequest.
+type TimeoutRequest interface {
+ // Timeout returns the request's timeout millis value.
+ Timeout() int32
+}
+
+// SetTimeoutRequest sets the timeout in a request that can have a timeout
+// applied. Any kmsg interface that implements ThrottleRequest also implements
+// SetThrottleRequest.
+type SetTimeoutRequest interface {
+ // SetTimeout sets the request's timeout millis value.
+ SetTimeout(timeoutMillis int32)
+}
+
+// RequestFormatter formats requests.
+//
+// The default empty struct works correctly, but can be extended with the
+// NewRequestFormatter function.
+type RequestFormatter struct {
+ clientID *string
+}
+
+// RequestFormatterOpt applys options to a RequestFormatter.
+type RequestFormatterOpt interface {
+ apply(*RequestFormatter)
+}
+
+type formatterOpt struct{ fn func(*RequestFormatter) }
+
+func (opt formatterOpt) apply(f *RequestFormatter) { opt.fn(f) }
+
+// FormatterClientID attaches the given client ID to any issued request,
+// minus controlled shutdown v0, which uses its own special format.
+func FormatterClientID(id string) RequestFormatterOpt {
+ return formatterOpt{func(f *RequestFormatter) { f.clientID = &id }}
+}
+
+// NewRequestFormatter returns a RequestFormatter with the opts applied.
+func NewRequestFormatter(opts ...RequestFormatterOpt) *RequestFormatter {
+ a := new(RequestFormatter)
+ for _, opt := range opts {
+ opt.apply(a)
+ }
+ return a
+}
+
+// AppendRequest appends a full message request to dst, returning the updated
+// slice. This message is the full body that needs to be written to issue a
+// Kafka request.
+func (f *RequestFormatter) AppendRequest(
+ dst []byte,
+ r Request,
+ correlationID int32,
+) []byte {
+ dst = append(dst, 0, 0, 0, 0) // reserve length
+ k := r.Key()
+ v := r.GetVersion()
+ dst = kbin.AppendInt16(dst, k)
+ dst = kbin.AppendInt16(dst, v)
+ dst = kbin.AppendInt32(dst, correlationID)
+ if k == 7 && v == 0 {
+ return dst
+ }
+
+ // Even with flexible versions, we do not use a compact client id.
+ // Clients issue ApiVersions immediately before knowing the broker
+ // version, and old brokers will not be able to understand a compact
+ // client id.
+ dst = kbin.AppendNullableString(dst, f.clientID)
+
+ // The flexible tags end the request header, and then begins the
+ // request body.
+ if r.IsFlexible() {
+ var numTags uint8
+ dst = append(dst, numTags)
+ if numTags != 0 {
+ // TODO when tags are added
+ }
+ }
+
+ // Now the request body.
+ dst = r.AppendTo(dst)
+
+ kbin.AppendInt32(dst[:0], int32(len(dst[4:])))
+ return dst
+}
+
+// StringPtr is a helper to return a pointer to a string.
+func StringPtr(in string) *string {
+ return &in
+}
+
+// ReadFrom provides decoding various versions of sticky member metadata. A key
+// point of this type is that it does not contain a version number inside it,
+// but it is versioned: if decoding v1 fails, this falls back to v0.
+func (s *StickyMemberMetadata) ReadFrom(src []byte) error {
+ return s.readFrom(src, false)
+}
+
+// UnsafeReadFrom is the same as ReadFrom, but uses unsafe slice to string
+// conversions to reduce garbage.
+func (s *StickyMemberMetadata) UnsafeReadFrom(src []byte) error {
+ return s.readFrom(src, true)
+}
+
+func (s *StickyMemberMetadata) readFrom(src []byte, unsafe bool) error {
+ b := kbin.Reader{Src: src}
+ numAssignments := b.ArrayLen()
+ if numAssignments < 0 {
+ numAssignments = 0
+ }
+ need := numAssignments - int32(cap(s.CurrentAssignment))
+ if need > 0 {
+ s.CurrentAssignment = append(s.CurrentAssignment[:cap(s.CurrentAssignment)], make([]StickyMemberMetadataCurrentAssignment, need)...)
+ } else {
+ s.CurrentAssignment = s.CurrentAssignment[:numAssignments]
+ }
+ for i := int32(0); i < numAssignments; i++ {
+ var topic string
+ if unsafe {
+ topic = b.UnsafeString()
+ } else {
+ topic = b.String()
+ }
+ numPartitions := b.ArrayLen()
+ if numPartitions < 0 {
+ numPartitions = 0
+ }
+ a := &s.CurrentAssignment[i]
+ a.Topic = topic
+ need := numPartitions - int32(cap(a.Partitions))
+ if need > 0 {
+ a.Partitions = append(a.Partitions[:cap(a.Partitions)], make([]int32, need)...)
+ } else {
+ a.Partitions = a.Partitions[:numPartitions]
+ }
+ for i := range a.Partitions {
+ a.Partitions[i] = b.Int32()
+ }
+ }
+ if len(b.Src) > 0 {
+ s.Generation = b.Int32()
+ } else {
+ s.Generation = -1
+ }
+ return b.Complete()
+}
+
+// AppendTo provides appending various versions of sticky member metadata to dst.
+// If generation is not -1 (default for v0), this appends as version 1.
+func (s *StickyMemberMetadata) AppendTo(dst []byte) []byte {
+ dst = kbin.AppendArrayLen(dst, len(s.CurrentAssignment))
+ for _, assignment := range s.CurrentAssignment {
+ dst = kbin.AppendString(dst, assignment.Topic)
+ dst = kbin.AppendArrayLen(dst, len(assignment.Partitions))
+ for _, partition := range assignment.Partitions {
+ dst = kbin.AppendInt32(dst, partition)
+ }
+ }
+ if s.Generation != -1 {
+ dst = kbin.AppendInt32(dst, s.Generation)
+ }
+ return dst
+}
+
+// TagReader has is a type that has the ability to skip tags.
+//
+// This is effectively a trimmed version of the kbin.Reader, with the purpose
+// being that kmsg cannot depend on an external package.
+type TagReader interface {
+ // Uvarint returns a uint32. If the reader has read too much and has
+ // exhausted all bytes, this should set the reader's internal state
+ // to failed and return 0.
+ Uvarint() uint32
+
+ // Span returns n bytes from the reader. If the reader has read too
+ // much and exhausted all bytes this should set the reader's internal
+ // to failed and return nil.
+ Span(n int) []byte
+}
+
+// SkipTags skips tags in a TagReader.
+func SkipTags(b TagReader) {
+ for num := b.Uvarint(); num > 0; num-- {
+ _, size := b.Uvarint(), b.Uvarint()
+ b.Span(int(size))
+ }
+}
+
+// internalSkipTags skips tags in the duplicated inner kbin.Reader.
+func internalSkipTags(b *kbin.Reader) {
+ for num := b.Uvarint(); num > 0; num-- {
+ _, size := b.Uvarint(), b.Uvarint()
+ b.Span(int(size))
+ }
+}
+
+// ReadTags reads tags in a TagReader and returns the tags.
+func ReadTags(b TagReader) Tags {
+ var t Tags
+ for num := b.Uvarint(); num > 0; num-- {
+ key, size := b.Uvarint(), b.Uvarint()
+ t.Set(key, b.Span(int(size)))
+ }
+ return t
+}
+
+// internalReadTags reads tags in a reader and returns the tags from a
+// duplicated inner kbin.Reader.
+func internalReadTags(b *kbin.Reader) Tags {
+ var t Tags
+ for num := b.Uvarint(); num > 0; num-- {
+ key, size := b.Uvarint(), b.Uvarint()
+ t.Set(key, b.Span(int(size)))
+ }
+ return t
+}
+
+// Tags is an opaque structure capturing unparsed tags.
+type Tags struct {
+ keyvals map[uint32][]byte
+}
+
+// Len returns the number of keyvals in Tags.
+func (t *Tags) Len() int { return len(t.keyvals) }
+
+// Each calls fn for each key and val in the tags.
+func (t *Tags) Each(fn func(uint32, []byte)) {
+ if len(t.keyvals) == 0 {
+ return
+ }
+ // We must encode keys in order. We expect to have limited (no) unknown
+ // keys, so for now, we take a lazy approach and allocate an ordered
+ // slice.
+ ordered := make([]uint32, 0, len(t.keyvals))
+ for key := range t.keyvals {
+ ordered = append(ordered, key)
+ }
+ sort.Slice(ordered, func(i, j int) bool { return ordered[i] < ordered[j] })
+ for _, key := range ordered {
+ fn(key, t.keyvals[key])
+ }
+}
+
+// Set sets a tag's key and val.
+//
+// Note that serializing tags does NOT check if the set key overlaps with an
+// existing used key. It is invalid to set a key used by Kafka itself.
+func (t *Tags) Set(key uint32, val []byte) {
+ if t.keyvals == nil {
+ t.keyvals = make(map[uint32][]byte)
+ }
+ t.keyvals[key] = val
+}
+
+// AppendEach appends each keyval in tags to dst and returns the updated dst.
+func (t *Tags) AppendEach(dst []byte) []byte {
+ t.Each(func(key uint32, val []byte) {
+ dst = kbin.AppendUvarint(dst, key)
+ dst = kbin.AppendUvarint(dst, uint32(len(val)))
+ dst = append(dst, val...)
+ })
+ return dst
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kmsg/generated.go b/vendor/github.com/twmb/franz-go/pkg/kmsg/generated.go
new file mode 100644
index 0000000000000..75bff9958e78e
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kmsg/generated.go
@@ -0,0 +1,46895 @@
+package kmsg
+
+import (
+ "context"
+ "fmt"
+ "reflect"
+ "strings"
+
+ "github.com/twmb/franz-go/pkg/kmsg/internal/kbin"
+)
+
+// Code generated by franz-go/generate. DO NOT EDIT.
+
+// MaxKey is the maximum key used for any messages in this package.
+// Note that this value will change as Kafka adds more messages.
+const MaxKey = 68
+
+// MessageV0 is the message format Kafka used prior to 0.10.
+//
+// To produce or fetch messages, Kafka would write many messages contiguously
+// as an array without specifying the array length.
+type MessageV0 struct {
+ // Offset is the offset of this record.
+ //
+ // If this is the outer message of a recursive message set (i.e. a
+ // message set has been compressed and this is the outer message),
+ // then the offset should be the offset of the last inner value.
+ Offset int64
+
+ // MessageSize is the size of everything that follows in this message.
+ MessageSize int32
+
+ // CRC is the crc of everything that follows this field (NOT using the
+ // Castagnoli polynomial, as is the case in the 0.11+ RecordBatch).
+ CRC int32
+
+ // Magic is 0.
+ Magic int8
+
+ // Attributes describe the attributes of this message.
+ //
+ // The first three bits correspond to compression:
+ // - 00 is no compression
+ // - 01 is gzip compression
+ // - 10 is snappy compression
+ //
+ // The remaining bits are unused and must be 0.
+ Attributes int8
+
+ // Key is an blob of data for a record.
+ //
+ // Key's are usually used for hashing the record to specific Kafka partitions.
+ Key []byte
+
+ // Value is a blob of data. This field is the main "message" portion of a
+ // record.
+ Value []byte
+}
+
+func (v *MessageV0) AppendTo(dst []byte) []byte {
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.MessageSize
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.CRC
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Magic
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Attributes
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Key
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ {
+ v := v.Value
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ return dst
+}
+
+func (v *MessageV0) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *MessageV0) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *MessageV0) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ s := v
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ {
+ v := b.Int32()
+ s.MessageSize = v
+ }
+ {
+ v := b.Int32()
+ s.CRC = v
+ }
+ {
+ v := b.Int8()
+ s.Magic = v
+ }
+ {
+ v := b.Int8()
+ s.Attributes = v
+ }
+ {
+ v := b.NullableBytes()
+ s.Key = v
+ }
+ {
+ v := b.NullableBytes()
+ s.Value = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MessageV0.
+func (v *MessageV0) Default() {
+}
+
+// NewMessageV0 returns a default MessageV0
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMessageV0() MessageV0 {
+ var v MessageV0
+ v.Default()
+ return v
+}
+
+// MessageV1 is the message format Kafka used prior to 0.11.
+//
+// To produce or fetch messages, Kafka would write many messages contiguously
+// as an array without specifying the array length.
+//
+// To support compression, an entire message set would be compressed and used
+// as the Value in another message set (thus being "recursive"). The key for
+// this outer message set must be null.
+type MessageV1 struct {
+ // Offset is the offset of this record.
+ //
+ // Different from v0, if this message set is a recursive message set
+ // (that is, compressed and inside another message set), the offset
+ // on the inner set is relative to the offset of the outer set.
+ Offset int64
+
+ // MessageSize is the size of everything that follows in this message.
+ MessageSize int32
+
+ // CRC is the crc of everything that follows this field (NOT using the
+ // Castagnoli polynomial, as is the case in the 0.11+ RecordBatch).
+ CRC int32
+
+ // Magic is 1.
+ Magic int8
+
+ // Attributes describe the attributes of this message.
+ //
+ // The first three bits correspond to compression:
+ // - 00 is no compression
+ // - 01 is gzip compression
+ // - 10 is snappy compression
+ //
+ // Bit 4 is the timestamp type, with 0 meaning CreateTime corresponding
+ // to the timestamp being from the producer, and 1 meaning LogAppendTime
+ // corresponding to the timestamp being from the broker.
+ // Setting this to LogAppendTime will cause batches to be rejected.
+ //
+ // The remaining bits are unused and must be 0.
+ Attributes int8
+
+ // Timestamp is the millisecond timestamp of this message.
+ Timestamp int64
+
+ // Key is an blob of data for a record.
+ //
+ // Key's are usually used for hashing the record to specific Kafka partitions.
+ Key []byte
+
+ // Value is a blob of data. This field is the main "message" portion of a
+ // record.
+ Value []byte
+}
+
+func (v *MessageV1) AppendTo(dst []byte) []byte {
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.MessageSize
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.CRC
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Magic
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Attributes
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Timestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Key
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ {
+ v := v.Value
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ return dst
+}
+
+func (v *MessageV1) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *MessageV1) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *MessageV1) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ s := v
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ {
+ v := b.Int32()
+ s.MessageSize = v
+ }
+ {
+ v := b.Int32()
+ s.CRC = v
+ }
+ {
+ v := b.Int8()
+ s.Magic = v
+ }
+ {
+ v := b.Int8()
+ s.Attributes = v
+ }
+ {
+ v := b.Int64()
+ s.Timestamp = v
+ }
+ {
+ v := b.NullableBytes()
+ s.Key = v
+ }
+ {
+ v := b.NullableBytes()
+ s.Value = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MessageV1.
+func (v *MessageV1) Default() {
+}
+
+// NewMessageV1 returns a default MessageV1
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMessageV1() MessageV1 {
+ var v MessageV1
+ v.Default()
+ return v
+}
+
+// Header is user provided metadata for a record. Kafka does not look at
+// headers at all; they are solely for producers and consumers.
+type Header struct {
+ Key string
+
+ Value []byte
+}
+
+func (v *Header) AppendTo(dst []byte) []byte {
+ {
+ v := v.Key
+ dst = kbin.AppendVarintString(dst, v)
+ }
+ {
+ v := v.Value
+ dst = kbin.AppendVarintBytes(dst, v)
+ }
+ return dst
+}
+
+func (v *Header) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *Header) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *Header) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeVarintString()
+ } else {
+ v = b.VarintString()
+ }
+ s.Key = v
+ }
+ {
+ v := b.VarintBytes()
+ s.Value = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to Header.
+func (v *Header) Default() {
+}
+
+// NewHeader returns a default Header
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewHeader() Header {
+ var v Header
+ v.Default()
+ return v
+}
+
+// RecordBatch is a Kafka concept that groups many individual records together
+// in a more optimized format.
+type RecordBatch struct {
+ // FirstOffset is the first offset in a record batch.
+ //
+ // For producing, this is usually 0.
+ FirstOffset int64
+
+ // Length is the wire length of everything that follows this field.
+ Length int32
+
+ // PartitionLeaderEpoch is the leader epoch of the broker at the time
+ // this batch was written. Kafka uses this for cluster communication,
+ // but clients can also use this to better aid truncation detection.
+ // See KIP-320. Producers should set this to -1.
+ PartitionLeaderEpoch int32
+
+ // Magic is the current "magic" number of this message format.
+ // The current magic number is 2.
+ Magic int8
+
+ // CRC is the crc of everything that follows this field using the
+ // Castagnoli polynomial.
+ CRC int32
+
+ // Attributes describe the records array of this batch.
+ //
+ // The first three bits correspond to compression:
+ // - 000 is no compression
+ // - 001 is gzip compression
+ // - 010 is snappy compression
+ // - 011 is lz4 compression
+ // - 100 is zstd compression (produce request version 7+)
+ //
+ // Bit 4 is the timestamp type, with 0 meaning CreateTime corresponding
+ // to the timestamp being from the producer, and 1 meaning LogAppendTime
+ // corresponding to the timestamp being from the broker.
+ // Setting this to LogAppendTime will cause batches to be rejected.
+ //
+ // Bit 5 indicates whether the batch is part of a transaction (1 is yes).
+ //
+ // Bit 6 indicates if the batch includes a control message (1 is yes).
+ // Control messages are used to enable transactions and are generated from
+ // the broker. Clients should not return control batches to applications.
+ Attributes int16
+
+ // LastOffsetDelta is the offset of the last message in a batch. This is used
+ // by the broker to ensure correct behavior even with batch compaction.
+ LastOffsetDelta int32
+
+ // FirstTimestamp is the timestamp (in milliseconds) of the first record
+ // in a batch.
+ FirstTimestamp int64
+
+ // MaxTimestamp is the timestamp (in milliseconds) of the last record
+ // in a batch. Similar to LastOffsetDelta, this is used to ensure correct
+ // behavior with compacting.
+ MaxTimestamp int64
+
+ // ProducerID is the broker assigned producerID from an InitProducerID
+ // request.
+ //
+ // Clients that wish to support idempotent messages and transactions must
+ // set this field.
+ //
+ // Note that when not using transactions, any producer here is always
+ // accepted (and the epoch is always zero). Outside transactions, the ID
+ // is used only to deduplicate requests (and there must be at max 5
+ // concurrent requests).
+ ProducerID int64
+
+ // ProducerEpoch is the broker assigned producerEpoch from an InitProducerID
+ // request.
+ //
+ // Clients that wish to support idempotent messages and transactions must
+ // set this field.
+ ProducerEpoch int16
+
+ // FirstSequence is the producer assigned sequence number used by the
+ // broker to deduplicate messages.
+ //
+ // Clients that wish to support idempotent messages and transactions must
+ // set this field.
+ //
+ // The sequence number for each record in a batch is OffsetDelta + FirstSequence.
+ FirstSequence int32
+
+ // NumRecords is the number of records in the array below.
+ //
+ // This is separate from Records due to the potential for records to be
+ // compressed.
+ NumRecords int32
+
+ // Records contains records, either compressed or uncompressed.
+ //
+ // For uncompressed records, this is an array of records ([Record]).
+ //
+ // For compressed records, the length of the uncompressed array is kept
+ // but everything that follows is compressed.
+ //
+ // The number of bytes is expected to be the Length field minus 49.
+ Records []byte
+}
+
+func (v *RecordBatch) AppendTo(dst []byte) []byte {
+ {
+ v := v.FirstOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Length
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.PartitionLeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Magic
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.CRC
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Attributes
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.LastOffsetDelta
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.FirstTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.MaxTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.FirstSequence
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.NumRecords
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Records
+ dst = append(dst, v...)
+ }
+ return dst
+}
+
+func (v *RecordBatch) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *RecordBatch) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *RecordBatch) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ s := v
+ {
+ v := b.Int64()
+ s.FirstOffset = v
+ }
+ {
+ v := b.Int32()
+ s.Length = v
+ }
+ {
+ v := b.Int32()
+ s.PartitionLeaderEpoch = v
+ }
+ {
+ v := b.Int8()
+ s.Magic = v
+ }
+ {
+ v := b.Int32()
+ s.CRC = v
+ }
+ {
+ v := b.Int16()
+ s.Attributes = v
+ }
+ {
+ v := b.Int32()
+ s.LastOffsetDelta = v
+ }
+ {
+ v := b.Int64()
+ s.FirstTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.MaxTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.FirstSequence = v
+ }
+ {
+ v := b.Int32()
+ s.NumRecords = v
+ }
+ {
+ v := b.Span(int(s.Length) - 49)
+ s.Records = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to RecordBatch.
+func (v *RecordBatch) Default() {
+}
+
+// NewRecordBatch returns a default RecordBatch
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewRecordBatch() RecordBatch {
+ var v RecordBatch
+ v.Default()
+ return v
+}
+
+// OffsetCommitKey is the key for the Kafka internal __consumer_offsets topic
+// if the key starts with an int16 with a value of 0 or 1.
+//
+// This type was introduced in KAFKA-1012 commit a670537aa3 with release 0.8.2
+// and has been in use ever since.
+type OffsetCommitKey struct {
+ // Version is which encoding version this value is using.
+ Version int16
+
+ // Group is the group being committed.
+ Group string
+
+ // Topic is the topic being committed.
+ Topic string
+
+ // Partition is the partition being committed.
+ Partition int32
+}
+
+func (v *OffsetCommitKey) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Group
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ return dst
+}
+
+func (v *OffsetCommitKey) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetCommitKey) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetCommitKey) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Group = v
+ }
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitKey.
+func (v *OffsetCommitKey) Default() {
+}
+
+// NewOffsetCommitKey returns a default OffsetCommitKey
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitKey() OffsetCommitKey {
+ var v OffsetCommitKey
+ v.Default()
+ return v
+}
+
+// OffsetCommitValue is the value for the Kafka internal __consumer_offsets
+// topic if the key is of OffsetCommitKey type.
+//
+// Version 0 was introduced with the key version 0.
+//
+// KAFKA-1634 commit c5df2a8e3a in 0.9.0 released version 1.
+//
+// KAFKA-4682 commit 418a91b5d4, proposed in KIP-211 and included in 2.1.0
+// released version 2.
+//
+// KAFKA-7437 commit 9f7267dd2f, proposed in KIP-320 and included in 2.1.0
+// released version 3.
+type OffsetCommitValue struct {
+ // Version is which encoding version this value is using.
+ Version int16
+
+ // Offset is the committed offset.
+ Offset int64
+
+ // LeaderEpoch is the epoch of the leader committing this message.
+ LeaderEpoch int32 // v3+
+
+ // Metadata is the metadata included in the commit.
+ Metadata string
+
+ // CommitTimestamp is when this commit occurred.
+ CommitTimestamp int64
+
+ // ExpireTimestamp, introduced in v1 and dropped in v2 with KIP-111,
+ // is when this commit expires.
+ ExpireTimestamp int64 // v1-v1
+}
+
+func (v *OffsetCommitValue) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 3 {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Metadata
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.CommitTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 && version <= 1 {
+ v := v.ExpireTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ return dst
+}
+
+func (v *OffsetCommitValue) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetCommitValue) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetCommitValue) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ if version >= 3 {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Metadata = v
+ }
+ {
+ v := b.Int64()
+ s.CommitTimestamp = v
+ }
+ if version >= 1 && version <= 1 {
+ v := b.Int64()
+ s.ExpireTimestamp = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitValue.
+func (v *OffsetCommitValue) Default() {
+}
+
+// NewOffsetCommitValue returns a default OffsetCommitValue
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitValue() OffsetCommitValue {
+ var v OffsetCommitValue
+ v.Default()
+ return v
+}
+
+// GroupMetadataKey is the key for the Kafka internal __consumer_offsets topic
+// if the key starts with an int16 with a value of 2.
+//
+// This type was introduced in KAFKA-2017 commit 7c33475274 with release 0.9.0
+// and has been in use ever since.
+type GroupMetadataKey struct {
+ // Version is which encoding version this value is using.
+ Version int16
+
+ // Group is the group this metadata is for.
+ Group string
+}
+
+func (v *GroupMetadataKey) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Group
+ dst = kbin.AppendString(dst, v)
+ }
+ return dst
+}
+
+func (v *GroupMetadataKey) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *GroupMetadataKey) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *GroupMetadataKey) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Group = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to GroupMetadataKey.
+func (v *GroupMetadataKey) Default() {
+}
+
+// NewGroupMetadataKey returns a default GroupMetadataKey
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewGroupMetadataKey() GroupMetadataKey {
+ var v GroupMetadataKey
+ v.Default()
+ return v
+}
+
+type GroupMetadataValueMember struct {
+ // MemberID is a group member.
+ MemberID string
+
+ // InstanceID is the instance ID of this member in the group (KIP-345).
+ InstanceID *string // v3+
+
+ // ClientID is the client ID of this group member.
+ ClientID string
+
+ // ClientHost is the hostname of this group member.
+ ClientHost string
+
+ // RebalanceTimeoutMillis is the rebalance timeout of this group member.
+ RebalanceTimeoutMillis int32 // v1+
+
+ // SessionTimeoutMillis is the session timeout of this group member.
+ SessionTimeoutMillis int32
+
+ // Subscription is the subscription of this group member.
+ Subscription []byte
+
+ // Assignment is what the leader assigned this group member.
+ Assignment []byte
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to GroupMetadataValueMember.
+func (v *GroupMetadataValueMember) Default() {
+}
+
+// NewGroupMetadataValueMember returns a default GroupMetadataValueMember
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewGroupMetadataValueMember() GroupMetadataValueMember {
+ var v GroupMetadataValueMember
+ v.Default()
+ return v
+}
+
+// GroupMetadataValue is the value for the Kafka internal __consumer_offsets
+// topic if the key is of GroupMetadataKey type.
+//
+// Version 0 was introduced with the key version 0.
+//
+// KAFKA-3888 commit 40b1dd3f49, proposed in KIP-62 and included in 0.10.1
+// released version 1.
+//
+// KAFKA-4682 commit 418a91b5d4, proposed in KIP-211 and included in 2.1.0
+// released version 2.
+//
+// KAFKA-7862 commit 0f995ba6be, proposed in KIP-345 and included in 2.3.0
+// released version 3.
+type GroupMetadataValue struct {
+ // Version is the version of this value.
+ Version int16
+
+ // ProtocolType is the type of protocol being used for the group
+ // (i.e., "consumer").
+ ProtocolType string
+
+ // Generation is the generation of this group.
+ Generation int32
+
+ // Protocol is the agreed upon protocol all members are using to partition
+ // (i.e., "sticky").
+ Protocol *string
+
+ // Leader is the group leader.
+ Leader *string
+
+ // CurrentStateTimestamp is the timestamp for this state of the group
+ // (stable, etc.).
+ CurrentStateTimestamp int64 // v2+
+
+ // Members are the group members.
+ Members []GroupMetadataValueMember
+}
+
+func (v *GroupMetadataValue) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ProtocolType
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Generation
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Protocol
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ {
+ v := v.Leader
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ if version >= 2 {
+ v := v.CurrentStateTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Members
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.MemberID
+ dst = kbin.AppendString(dst, v)
+ }
+ if version >= 3 {
+ v := v.InstanceID
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ {
+ v := v.ClientID
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.ClientHost
+ dst = kbin.AppendString(dst, v)
+ }
+ if version >= 1 {
+ v := v.RebalanceTimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.SessionTimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Subscription
+ dst = kbin.AppendBytes(dst, v)
+ }
+ {
+ v := v.Assignment
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ }
+ return dst
+}
+
+func (v *GroupMetadataValue) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *GroupMetadataValue) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *GroupMetadataValue) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.ProtocolType = v
+ }
+ {
+ v := b.Int32()
+ s.Generation = v
+ }
+ {
+ var v *string
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ s.Protocol = v
+ }
+ {
+ var v *string
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ s.Leader = v
+ }
+ if version >= 2 {
+ v := b.Int64()
+ s.CurrentStateTimestamp = v
+ }
+ {
+ v := s.Members
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]GroupMetadataValueMember, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.MemberID = v
+ }
+ if version >= 3 {
+ var v *string
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ s.InstanceID = v
+ }
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.ClientID = v
+ }
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.ClientHost = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.RebalanceTimeoutMillis = v
+ }
+ {
+ v := b.Int32()
+ s.SessionTimeoutMillis = v
+ }
+ {
+ v := b.Bytes()
+ s.Subscription = v
+ }
+ {
+ v := b.Bytes()
+ s.Assignment = v
+ }
+ }
+ v = a
+ s.Members = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to GroupMetadataValue.
+func (v *GroupMetadataValue) Default() {
+}
+
+// NewGroupMetadataValue returns a default GroupMetadataValue
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewGroupMetadataValue() GroupMetadataValue {
+ var v GroupMetadataValue
+ v.Default()
+ return v
+}
+
+// TxnMetadataKey is the key for the Kafka internal __transaction_state topic
+// if the key starts with an int16 with a value of 0.
+type TxnMetadataKey struct {
+ // Version is the version of this type.
+ Version int16
+
+ // TransactionalID is the transactional ID this record is for.
+ TransactionalID string
+}
+
+func (v *TxnMetadataKey) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.TransactionalID
+ dst = kbin.AppendString(dst, v)
+ }
+ return dst
+}
+
+func (v *TxnMetadataKey) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *TxnMetadataKey) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *TxnMetadataKey) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.TransactionalID = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnMetadataKey.
+func (v *TxnMetadataKey) Default() {
+}
+
+// NewTxnMetadataKey returns a default TxnMetadataKey
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnMetadataKey() TxnMetadataKey {
+ var v TxnMetadataKey
+ v.Default()
+ return v
+}
+
+type TxnMetadataValueTopic struct {
+ // Topic is a topic involved in this transaction.
+ Topic string
+
+ // Partitions are partitions in this topic involved in the transaction.
+ Partitions []int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnMetadataValueTopic.
+func (v *TxnMetadataValueTopic) Default() {
+}
+
+// NewTxnMetadataValueTopic returns a default TxnMetadataValueTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnMetadataValueTopic() TxnMetadataValueTopic {
+ var v TxnMetadataValueTopic
+ v.Default()
+ return v
+}
+
+// TxnMetadataValue is the value for the Kafka internal __transaction_state
+// topic if the key is of TxnMetadataKey type.
+type TxnMetadataValue struct {
+ // Version is the version of this value.
+ Version int16
+
+ // ProducerID is the ID in use by the transactional ID.
+ ProducerID int64
+
+ // ProducerEpoch is the epoch associated with the producer ID.
+ ProducerEpoch int16
+
+ // TimeoutMillis is the timeout of this transaction in milliseconds.
+ TimeoutMillis int32
+
+ // State is the state this transaction is in,
+ // 0 is Empty, 1 is Ongoing, 2 is PrepareCommit, 3 is PrepareAbort, 4 is
+ // CompleteCommit, 5 is CompleteAbort, 6 is Dead, and 7 is PrepareEpochFence.
+ State TransactionState
+
+ // Topics are topics that are involved in this transaction.
+ Topics []TxnMetadataValueTopic
+
+ // LastUpdateTimestamp is the timestamp in millis of when this transaction
+ // was last updated.
+ LastUpdateTimestamp int64
+
+ // StartTimestamp is the timestamp in millis of when this transaction started.
+ StartTimestamp int64
+}
+
+func (v *TxnMetadataValue) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.State
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ {
+ v := v.LastUpdateTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.StartTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ return dst
+}
+
+func (v *TxnMetadataValue) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *TxnMetadataValue) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *TxnMetadataValue) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ {
+ var t TransactionState
+ {
+ v := b.Int8()
+ t = TransactionState(v)
+ }
+ v := t
+ s.State = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]TxnMetadataValueTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int64()
+ s.LastUpdateTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.StartTimestamp = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnMetadataValue.
+func (v *TxnMetadataValue) Default() {
+}
+
+// NewTxnMetadataValue returns a default TxnMetadataValue
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnMetadataValue() TxnMetadataValue {
+ var v TxnMetadataValue
+ v.Default()
+ return v
+}
+
+type StickyMemberMetadataCurrentAssignment struct {
+ // Topic is a topic the group member is currently assigned.
+ Topic string
+
+ // Partitions are the partitions within a topic that a group member is
+ // currently assigned.
+ Partitions []int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to StickyMemberMetadataCurrentAssignment.
+func (v *StickyMemberMetadataCurrentAssignment) Default() {
+}
+
+// NewStickyMemberMetadataCurrentAssignment returns a default StickyMemberMetadataCurrentAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewStickyMemberMetadataCurrentAssignment() StickyMemberMetadataCurrentAssignment {
+ var v StickyMemberMetadataCurrentAssignment
+ v.Default()
+ return v
+}
+
+// StickyMemberMetadata is is what is encoded in UserData for
+// ConsumerMemberMetadata in group join requests with the sticky partitioning
+// strategy.
+//
+// V1 added generation, which fixed a bug with flaky group members joining
+// repeatedly. See KIP-341 for more details.
+//
+// Note that clients should always try decoding as v1 and, if that fails,
+// fall back to v0. This is necessary due to there being no version number
+// anywhere in this type.
+type StickyMemberMetadata struct {
+ // CurrentAssignment is the assignment that a group member has when
+ // issuing a join.
+ CurrentAssignment []StickyMemberMetadataCurrentAssignment
+
+ // Generation is the generation of this join. This is incremented every join.
+ //
+ // This field has a default of -1.
+ Generation int32 // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to StickyMemberMetadata.
+func (v *StickyMemberMetadata) Default() {
+ v.Generation = -1
+}
+
+// NewStickyMemberMetadata returns a default StickyMemberMetadata
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewStickyMemberMetadata() StickyMemberMetadata {
+ var v StickyMemberMetadata
+ v.Default()
+ return v
+}
+
+type ConsumerMemberMetadataOwnedPartition struct {
+ Topic string
+
+ Partitions []int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerMemberMetadataOwnedPartition.
+func (v *ConsumerMemberMetadataOwnedPartition) Default() {
+}
+
+// NewConsumerMemberMetadataOwnedPartition returns a default ConsumerMemberMetadataOwnedPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerMemberMetadataOwnedPartition() ConsumerMemberMetadataOwnedPartition {
+ var v ConsumerMemberMetadataOwnedPartition
+ v.Default()
+ return v
+}
+
+// ConsumerMemberMetadata is the metadata that is usually sent with a join group
+// request with the "consumer" protocol (normal, non-connect consumers).
+type ConsumerMemberMetadata struct {
+ // Version is 0, 1, 2, or 3.
+ Version int16
+
+ // Topics is the list of topics in the group that this member is interested
+ // in consuming.
+ Topics []string
+
+ // UserData is arbitrary client data for a given client in the group.
+ // For sticky assignment, this is StickyMemberMetadata.
+ UserData []byte
+
+ // OwnedPartitions, introduced for KIP-429, are the partitions that this
+ // member currently owns.
+ OwnedPartitions []ConsumerMemberMetadataOwnedPartition // v1+
+
+ // Generation is the generation of the group.
+ //
+ // This field has a default of -1.
+ Generation int32 // v2+
+
+ // Rack, if non-nil, opts into rack-aware replica assignment.
+ Rack *string // v3+
+}
+
+func (v *ConsumerMemberMetadata) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.UserData
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ if version >= 1 {
+ v := v.OwnedPartitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ if version >= 2 {
+ v := v.Generation
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 3 {
+ v := v.Rack
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ return dst
+}
+
+func (v *ConsumerMemberMetadata) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConsumerMemberMetadata) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConsumerMemberMetadata) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ a[i] = v
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.NullableBytes()
+ s.UserData = v
+ }
+ if version >= 1 {
+ v := s.OwnedPartitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConsumerMemberMetadataOwnedPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.OwnedPartitions = v
+ }
+ if version >= 2 {
+ v := b.Int32()
+ s.Generation = v
+ }
+ if version >= 3 {
+ var v *string
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ s.Rack = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerMemberMetadata.
+func (v *ConsumerMemberMetadata) Default() {
+ v.Generation = -1
+}
+
+// NewConsumerMemberMetadata returns a default ConsumerMemberMetadata
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerMemberMetadata() ConsumerMemberMetadata {
+ var v ConsumerMemberMetadata
+ v.Default()
+ return v
+}
+
+type ConsumerMemberAssignmentTopic struct {
+ // Topic is a topic in the assignment.
+ Topic string
+
+ // Partitions contains partitions in the assignment.
+ Partitions []int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerMemberAssignmentTopic.
+func (v *ConsumerMemberAssignmentTopic) Default() {
+}
+
+// NewConsumerMemberAssignmentTopic returns a default ConsumerMemberAssignmentTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerMemberAssignmentTopic() ConsumerMemberAssignmentTopic {
+ var v ConsumerMemberAssignmentTopic
+ v.Default()
+ return v
+}
+
+// ConsumerMemberAssignment is the assignment data that is usually sent with a
+// sync group request with the "consumer" protocol (normal, non-connect
+// consumers).
+type ConsumerMemberAssignment struct {
+ // Verson is 0, 1, or 2.
+ Version int16
+
+ // Topics contains topics in the assignment.
+ Topics []ConsumerMemberAssignmentTopic
+
+ // UserData is arbitrary client data for a given client in the group.
+ UserData []byte
+}
+
+func (v *ConsumerMemberAssignment) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ {
+ v := v.UserData
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ return dst
+}
+
+func (v *ConsumerMemberAssignment) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConsumerMemberAssignment) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConsumerMemberAssignment) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConsumerMemberAssignmentTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.NullableBytes()
+ s.UserData = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerMemberAssignment.
+func (v *ConsumerMemberAssignment) Default() {
+}
+
+// NewConsumerMemberAssignment returns a default ConsumerMemberAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerMemberAssignment() ConsumerMemberAssignment {
+ var v ConsumerMemberAssignment
+ v.Default()
+ return v
+}
+
+// ConnectMemberMetadata is the metadata used in a join group request with the
+// "connect" protocol. v1 introduced incremental cooperative rebalancing (akin
+// to cooperative-sticky) per KIP-415.
+//
+// v0 defined in connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/ConnectProtocol.java
+// v1+ defined in connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/IncrementalCooperativeConnectProtocol.java
+type ConnectMemberMetadata struct {
+ Version int16
+
+ URL string
+
+ ConfigOffset int64
+
+ CurrentAssignment []byte // v1+
+}
+
+func (v *ConnectMemberMetadata) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.URL
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.ConfigOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 {
+ v := v.CurrentAssignment
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ return dst
+}
+
+func (v *ConnectMemberMetadata) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConnectMemberMetadata) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConnectMemberMetadata) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.URL = v
+ }
+ {
+ v := b.Int64()
+ s.ConfigOffset = v
+ }
+ if version >= 1 {
+ v := b.NullableBytes()
+ s.CurrentAssignment = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConnectMemberMetadata.
+func (v *ConnectMemberMetadata) Default() {
+}
+
+// NewConnectMemberMetadata returns a default ConnectMemberMetadata
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConnectMemberMetadata() ConnectMemberMetadata {
+ var v ConnectMemberMetadata
+ v.Default()
+ return v
+}
+
+type ConnectMemberAssignmentAssignment struct {
+ Connector string
+
+ Tasks []int16
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConnectMemberAssignmentAssignment.
+func (v *ConnectMemberAssignmentAssignment) Default() {
+}
+
+// NewConnectMemberAssignmentAssignment returns a default ConnectMemberAssignmentAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConnectMemberAssignmentAssignment() ConnectMemberAssignmentAssignment {
+ var v ConnectMemberAssignmentAssignment
+ v.Default()
+ return v
+}
+
+type ConnectMemberAssignmentRevoked struct {
+ Connector string
+
+ Tasks []int16
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConnectMemberAssignmentRevoked.
+func (v *ConnectMemberAssignmentRevoked) Default() {
+}
+
+// NewConnectMemberAssignmentRevoked returns a default ConnectMemberAssignmentRevoked
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConnectMemberAssignmentRevoked() ConnectMemberAssignmentRevoked {
+ var v ConnectMemberAssignmentRevoked
+ v.Default()
+ return v
+}
+
+// ConnectMemberAssignment is the assignment that is used in a sync group
+// request with the "connect" protocol. See ConnectMemberMetadata for links to
+// the Kafka code where these fields are defined.
+type ConnectMemberAssignment struct {
+ Version int16
+
+ Error int16
+
+ Leader string
+
+ LeaderURL string
+
+ ConfigOffset int64
+
+ Assignment []ConnectMemberAssignmentAssignment
+
+ Revoked []ConnectMemberAssignmentRevoked // v1+
+
+ ScheduledDelay int32 // v1+
+}
+
+func (v *ConnectMemberAssignment) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Error
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Leader
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.LeaderURL
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.ConfigOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Assignment
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Connector
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Tasks
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt16(dst, v)
+ }
+ }
+ }
+ }
+ if version >= 1 {
+ v := v.Revoked
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Connector
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Tasks
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt16(dst, v)
+ }
+ }
+ }
+ }
+ if version >= 1 {
+ v := v.ScheduledDelay
+ dst = kbin.AppendInt32(dst, v)
+ }
+ return dst
+}
+
+func (v *ConnectMemberAssignment) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConnectMemberAssignment) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConnectMemberAssignment) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int16()
+ s.Error = v
+ }
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Leader = v
+ }
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.LeaderURL = v
+ }
+ {
+ v := b.Int64()
+ s.ConfigOffset = v
+ }
+ {
+ v := s.Assignment
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConnectMemberAssignmentAssignment, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Connector = v
+ }
+ {
+ v := s.Tasks
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int16, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int16()
+ a[i] = v
+ }
+ v = a
+ s.Tasks = v
+ }
+ }
+ v = a
+ s.Assignment = v
+ }
+ if version >= 1 {
+ v := s.Revoked
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConnectMemberAssignmentRevoked, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Connector = v
+ }
+ {
+ v := s.Tasks
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int16, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int16()
+ a[i] = v
+ }
+ v = a
+ s.Tasks = v
+ }
+ }
+ v = a
+ s.Revoked = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.ScheduledDelay = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConnectMemberAssignment.
+func (v *ConnectMemberAssignment) Default() {
+}
+
+// NewConnectMemberAssignment returns a default ConnectMemberAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConnectMemberAssignment() ConnectMemberAssignment {
+ var v ConnectMemberAssignment
+ v.Default()
+ return v
+}
+
+// DefaultPrincipalData is the encoded principal data. This is used in an
+// envelope request from broker to broker.
+type DefaultPrincipalData struct {
+ Version int16
+
+ // The principal type.
+ Type string
+
+ // The principal name.
+ Name string
+
+ // Whether the principal was authenticated by a delegation token on the forwarding broker.
+ TokenAuthenticated bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (v *DefaultPrincipalData) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Type
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.TokenAuthenticated
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DefaultPrincipalData) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DefaultPrincipalData) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DefaultPrincipalData) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Type = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ v := b.Bool()
+ s.TokenAuthenticated = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+func (v *DefaultPrincipalData) IsFlexible() bool { return v.Version >= 0 }
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DefaultPrincipalData.
+func (v *DefaultPrincipalData) Default() {
+}
+
+// NewDefaultPrincipalData returns a default DefaultPrincipalData
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDefaultPrincipalData() DefaultPrincipalData {
+ var v DefaultPrincipalData
+ v.Default()
+ return v
+}
+
+// ControlRecordKey is the key in a control record.
+type ControlRecordKey struct {
+ Version int16
+
+ Type ControlRecordKeyType
+}
+
+func (v *ControlRecordKey) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Type
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ return dst
+}
+
+func (v *ControlRecordKey) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ControlRecordKey) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ControlRecordKey) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var t ControlRecordKeyType
+ {
+ v := b.Int8()
+ t = ControlRecordKeyType(v)
+ }
+ v := t
+ s.Type = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ControlRecordKey.
+func (v *ControlRecordKey) Default() {
+}
+
+// NewControlRecordKey returns a default ControlRecordKey
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewControlRecordKey() ControlRecordKey {
+ var v ControlRecordKey
+ v.Default()
+ return v
+}
+
+// EndTxnMarker is the value for a control record when the key is type 0 or 1.
+type EndTxnMarker struct {
+ Version int16
+
+ CoordinatorEpoch int32
+}
+
+func (v *EndTxnMarker) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.CoordinatorEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ return dst
+}
+
+func (v *EndTxnMarker) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *EndTxnMarker) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *EndTxnMarker) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int32()
+ s.CoordinatorEpoch = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndTxnMarker.
+func (v *EndTxnMarker) Default() {
+}
+
+// NewEndTxnMarker returns a default EndTxnMarker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndTxnMarker() EndTxnMarker {
+ var v EndTxnMarker
+ v.Default()
+ return v
+}
+
+type LeaderChangeMessageVoter struct {
+ VoterID int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderChangeMessageVoter.
+func (v *LeaderChangeMessageVoter) Default() {
+}
+
+// NewLeaderChangeMessageVoter returns a default LeaderChangeMessageVoter
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderChangeMessageVoter() LeaderChangeMessageVoter {
+ var v LeaderChangeMessageVoter
+ v.Default()
+ return v
+}
+
+// LeaderChangeMessage is the value for a control record when the key is type 3.
+type LeaderChangeMessage struct {
+ Version int16
+
+ // The ID of the newly elected leader.
+ LeaderID int32
+
+ // The set of voters in the quorum for this epoch.
+ Voters []LeaderChangeMessageVoter
+
+ // The voters who voted for the leader at the time of election.
+ GrantingVoters []LeaderChangeMessageVoter
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (v *LeaderChangeMessage) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Version
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Voters
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.VoterID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.GrantingVoters
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.VoterID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *LeaderChangeMessage) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *LeaderChangeMessage) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *LeaderChangeMessage) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ v.Version = b.Int16()
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := s.Voters
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderChangeMessageVoter, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.VoterID = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Voters = v
+ }
+ {
+ v := s.GrantingVoters
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderChangeMessageVoter, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.VoterID = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.GrantingVoters = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+func (v *LeaderChangeMessage) IsFlexible() bool { return v.Version >= 0 }
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderChangeMessage.
+func (v *LeaderChangeMessage) Default() {
+}
+
+// NewLeaderChangeMessage returns a default LeaderChangeMessage
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderChangeMessage() LeaderChangeMessage {
+ var v LeaderChangeMessage
+ v.Default()
+ return v
+}
+
+type ProduceRequestTopicPartition struct {
+ // Partition is a partition to send a record batch to.
+ Partition int32
+
+ // Records is a batch of records to write to a topic's partition.
+ //
+ // For Kafka pre 0.11.0, the contents of the byte array is a serialized
+ // message set. At or after 0.11.0, the contents of the byte array is a
+ // serialized RecordBatch.
+ Records []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceRequestTopicPartition.
+func (v *ProduceRequestTopicPartition) Default() {
+}
+
+// NewProduceRequestTopicPartition returns a default ProduceRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceRequestTopicPartition() ProduceRequestTopicPartition {
+ var v ProduceRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type ProduceRequestTopic struct {
+ // Topic is a topic to send record batches to.
+ Topic string
+
+ // Partitions is an array of partitions to send record batches to.
+ Partitions []ProduceRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceRequestTopic.
+func (v *ProduceRequestTopic) Default() {
+}
+
+// NewProduceRequestTopic returns a default ProduceRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceRequestTopic() ProduceRequestTopic {
+ var v ProduceRequestTopic
+ v.Default()
+ return v
+}
+
+// ProduceRequest issues records to be created to Kafka.
+//
+// Kafka 0.10.0 (v2) changed Records from MessageSet v0 to MessageSet v1.
+// Kafka 0.11.0 (v3) again changed Records to RecordBatch.
+//
+// Note that the special client ID "__admin_client" will allow you to produce
+// records to internal topics. This is generally recommended if you want to
+// break your Kafka cluster.
+type ProduceRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TransactionID is the transaction ID to use for this request, allowing for
+ // exactly once semantics.
+ TransactionID *string // v3+
+
+ // Acks specifies the number of acks that the partition leaders must receive
+ // from in sync replicas before considering a record batch fully written.
+ //
+ // Valid values are -1, 0, or 1 corresponding to all, none, or the leader only.
+ //
+ // Note that if no acks are requested, Kafka will close the connection
+ // if any topic or partition errors to trigger a client metadata refresh.
+ Acks int16
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 15000.
+ TimeoutMillis int32
+
+ // Topics is an array of topics to send record batches to.
+ Topics []ProduceRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+func (*ProduceRequest) Key() int16 { return 0 }
+func (*ProduceRequest) MaxVersion() int16 { return 10 }
+func (v *ProduceRequest) SetVersion(version int16) { v.Version = version }
+func (v *ProduceRequest) GetVersion() int16 { return v.Version }
+func (v *ProduceRequest) IsFlexible() bool { return v.Version >= 9 }
+func (v *ProduceRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *ProduceRequest) SetTimeout(timeoutMillis int32) { v.TimeoutMillis = timeoutMillis }
+func (v *ProduceRequest) ResponseKind() Response {
+ r := &ProduceResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ProduceRequest) RequestWith(ctx context.Context, r Requestor) (*ProduceResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ProduceResponse)
+ return resp, err
+}
+
+func (v *ProduceRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ if version >= 3 {
+ v := v.TransactionID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Acks
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Records
+ if isFlexible {
+ dst = kbin.AppendCompactNullableBytes(dst, v)
+ } else {
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ProduceRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ProduceRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ProduceRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ s := v
+ if version >= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.TransactionID = v
+ }
+ {
+ v := b.Int16()
+ s.Acks = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ProduceRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ProduceRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactNullableBytes()
+ } else {
+ v = b.NullableBytes()
+ }
+ s.Records = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrProduceRequest returns a pointer to a default ProduceRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrProduceRequest() *ProduceRequest {
+ var v ProduceRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceRequest.
+func (v *ProduceRequest) Default() {
+ v.TimeoutMillis = 15000
+}
+
+// NewProduceRequest returns a default ProduceRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceRequest() ProduceRequest {
+ var v ProduceRequest
+ v.Default()
+ return v
+}
+
+type ProduceResponseTopicPartitionErrorRecord struct {
+ // RelativeOffset is the offset of the record that caused problems.
+ RelativeOffset int32
+
+ // ErrorMessage is the error of this record.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceResponseTopicPartitionErrorRecord.
+func (v *ProduceResponseTopicPartitionErrorRecord) Default() {
+}
+
+// NewProduceResponseTopicPartitionErrorRecord returns a default ProduceResponseTopicPartitionErrorRecord
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceResponseTopicPartitionErrorRecord() ProduceResponseTopicPartitionErrorRecord {
+ var v ProduceResponseTopicPartitionErrorRecord
+ v.Default()
+ return v
+}
+
+type ProduceResponseTopicPartitionCurrentLeader struct {
+ // The ID of the current leader, or -1 if unknown.
+ //
+ // This field has a default of -1.
+ LeaderID int32
+
+ // The latest known leader epoch.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceResponseTopicPartitionCurrentLeader.
+func (v *ProduceResponseTopicPartitionCurrentLeader) Default() {
+ v.LeaderID = -1
+ v.LeaderEpoch = -1
+}
+
+// NewProduceResponseTopicPartitionCurrentLeader returns a default ProduceResponseTopicPartitionCurrentLeader
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceResponseTopicPartitionCurrentLeader() ProduceResponseTopicPartitionCurrentLeader {
+ var v ProduceResponseTopicPartitionCurrentLeader
+ v.Default()
+ return v
+}
+
+type ProduceResponseTopicPartition struct {
+ // Partition is the partition this response pertains to.
+ Partition int32
+
+ // ErrorCode is any error for a topic/partition in the request.
+ // There are many error codes for produce requests.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned for all topics and
+ // partitions if the request had a transactional ID but the client
+ // is not authorized for transactions.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned for all topics and partitions
+ // if the request was idempotent but the client is not authorized
+ // for idempotent requests.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned for all topics the client
+ // is not authorized to talk to.
+ //
+ // INVALID_REQUIRED_ACKS is returned if the request contained an invalid
+ // number for "acks".
+ //
+ // CORRUPT_MESSAGE is returned for many reasons, generally related to
+ // problems with messages (invalid magic, size mismatch, etc.).
+ //
+ // MESSAGE_TOO_LARGE is returned if a record batch is larger than the
+ // broker's configured max.message.size.
+ //
+ // RECORD_LIST_TOO_LARGE is returned if the record batch is larger than
+ // the broker's segment.bytes.
+ //
+ // INVALID_TIMESTAMP is returned if the record batch uses LogAppendTime
+ // or if the timestamp delta from when the broker receives the message
+ // is more than the broker's log.message.timestamp.difference.max.ms.
+ //
+ // UNSUPPORTED_FOR_MESSAGE_FORMAT is returned if using a Kafka v2 message
+ // format (i.e. RecordBatch) feature (idempotence) while sending v1
+ // messages (i.e. a MessageSet).
+ //
+ // KAFKA_STORAGE_ERROR is returned if the log directory for a partition
+ // is offline.
+ //
+ // NOT_ENOUGH_REPLICAS is returned if all acks are required, but there
+ // are not enough in sync replicas yet.
+ //
+ // NOT_ENOUGH_REPLICAS_AFTER_APPEND is returned on old Kafka versions
+ // (pre 0.11.0.0) when a message was written to disk and then Kafka
+ // noticed not enough replicas existed to replicate the message.
+ //
+ // DUPLICATE_SEQUENCE_NUMBER is returned for Kafka <1.1.0 when a
+ // sequence number is detected as a duplicate. After, out of order
+ // is returned.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the topic or partition
+ // is unknown.
+ //
+ // NOT_LEADER_FOR_PARTITION is returned if the broker is not a leader
+ // for this partition. This means that the client has stale metadata.
+ //
+ // INVALID_PRODUCER_EPOCH is returned if the produce request was
+ // attempted with an old epoch. Either there is a newer producer using
+ // the same transaction ID, or the transaction ID used has expired.
+ //
+ // UNKNOWN_PRODUCER_ID, added in Kafka 1.0.0 (message format v5+) is
+ // returned if the producer used an ID that Kafka does not know about or
+ // if the request has a larger sequence number than Kafka expects. The
+ // LogStartOffset must be checked in this case. If the offset is greater
+ // than the last acknowledged offset, then no data loss has occurred; the
+ // client just sent data so long ago that Kafka rotated the partition out
+ // of existence and no longer knows of this producer ID. In this case,
+ // reset your sequence numbers to 0. If the log start offset is equal to
+ // or less than what the client sent prior, then data loss has occurred.
+ // See KAFKA-5793 for more details. NOTE: Unfortunately, even UNKNOWN_PRODUCER_ID
+ // is unsafe to handle, so this error should likely be treated the same
+ // as OUT_OF_ORDER_SEQUENCE_NUMER. See KIP-360 for more details.
+ //
+ // OUT_OF_ORDER_SEQUENCE_NUMBER is sent if the batch's FirstSequence was
+ // not what it should be (the last FirstSequence, plus the number of
+ // records in the last batch, plus one). After 1.0.0, this generally
+ // means data loss. Before, there could be confusion on if the broker
+ // actually rotated the partition out of existence (this is why
+ // UNKNOWN_PRODUCER_ID was introduced).
+ ErrorCode int16
+
+ // BaseOffset is the offset that the records in the produce request began
+ // at in the partition.
+ BaseOffset int64
+
+ // LogAppendTime is the millisecond that records were appended to the
+ // partition inside Kafka. This is only not -1 if records were written
+ // with the log append time flag (which producers cannot do).
+ //
+ // This field has a default of -1.
+ LogAppendTime int64 // v2+
+
+ // LogStartOffset, introduced in Kafka 1.0.0, can be used to see if an
+ // UNKNOWN_PRODUCER_ID means Kafka rotated records containing the used
+ // producer ID out of existence, or if Kafka lost data.
+ //
+ // This field has a default of -1.
+ LogStartOffset int64 // v5+
+
+ // ErrorRecords are indices of individual records that caused a batch
+ // to error. This was added for KIP-467.
+ ErrorRecords []ProduceResponseTopicPartitionErrorRecord // v8+
+
+ // ErrorMessage is the global error message of of what caused this batch
+ // to error.
+ ErrorMessage *string // v8+
+
+ CurrentLeader ProduceResponseTopicPartitionCurrentLeader // tag 0
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceResponseTopicPartition.
+func (v *ProduceResponseTopicPartition) Default() {
+ v.LogAppendTime = -1
+ v.LogStartOffset = -1
+ {
+ v := &v.CurrentLeader
+ _ = v
+ v.LeaderID = -1
+ v.LeaderEpoch = -1
+ }
+}
+
+// NewProduceResponseTopicPartition returns a default ProduceResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceResponseTopicPartition() ProduceResponseTopicPartition {
+ var v ProduceResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type ProduceResponseTopic struct {
+ // Topic is the topic this response pertains to.
+ Topic string
+
+ // Partitions is an array of responses for the partition's that
+ // batches were sent to.
+ Partitions []ProduceResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceResponseTopic.
+func (v *ProduceResponseTopic) Default() {
+}
+
+// NewProduceResponseTopic returns a default ProduceResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceResponseTopic() ProduceResponseTopic {
+ var v ProduceResponseTopic
+ v.Default()
+ return v
+}
+
+type ProduceResponseBroker struct {
+ // NodeID is the node ID of a Kafka broker.
+ NodeID int32
+
+ // Host is the hostname of a Kafka broker.
+ Host string
+
+ // Port is the port of a Kafka broker.
+ Port int32
+
+ // Rack is the rack this Kafka broker is in.
+ Rack *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceResponseBroker.
+func (v *ProduceResponseBroker) Default() {
+}
+
+// NewProduceResponseBroker returns a default ProduceResponseBroker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceResponseBroker() ProduceResponseBroker {
+ var v ProduceResponseBroker
+ v.Default()
+ return v
+}
+
+// ProduceResponse is returned from a ProduceRequest.
+type ProduceResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Topics is an array of responses for the topic's that batches were sent
+ // to.
+ Topics []ProduceResponseTopic
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 6.
+ ThrottleMillis int32 // v1+
+
+ // Brokers is present if any partition responses contain the error
+ // NOT_LEADER_OR_FOLLOWER.
+ Brokers []ProduceResponseBroker // tag 0
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+func (*ProduceResponse) Key() int16 { return 0 }
+func (*ProduceResponse) MaxVersion() int16 { return 10 }
+func (v *ProduceResponse) SetVersion(version int16) { v.Version = version }
+func (v *ProduceResponse) GetVersion() int16 { return v.Version }
+func (v *ProduceResponse) IsFlexible() bool { return v.Version >= 9 }
+func (v *ProduceResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 6 }
+func (v *ProduceResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *ProduceResponse) RequestKind() Request { return &ProduceRequest{Version: v.Version} }
+
+func (v *ProduceResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.BaseOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 2 {
+ v := v.LogAppendTime
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 5 {
+ v := v.LogStartOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 8 {
+ v := v.ErrorRecords
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.RelativeOffset
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 8 {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if !reflect.DeepEqual(v.CurrentLeader, (func() ProduceResponseTopicPartitionCurrentLeader {
+ var v ProduceResponseTopicPartitionCurrentLeader
+ v.Default()
+ return v
+ })()) {
+ toEncode = append(toEncode, 0)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.CurrentLeader
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fCurrentLeader:
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fCurrentLeader
+ }
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if len(v.Brokers) > 0 {
+ toEncode = append(toEncode, 0)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.Brokers
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fBrokers:
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.NodeID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Rack
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fBrokers
+ }
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ProduceResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ProduceResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ProduceResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ProduceResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ProduceResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int64()
+ s.BaseOffset = v
+ }
+ if version >= 2 {
+ v := b.Int64()
+ s.LogAppendTime = v
+ }
+ if version >= 5 {
+ v := b.Int64()
+ s.LogStartOffset = v
+ }
+ if version >= 8 {
+ v := s.ErrorRecords
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ProduceResponseTopicPartitionErrorRecord, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.RelativeOffset = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.ErrorRecords = v
+ }
+ if version >= 8 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := &s.CurrentLeader
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := s.Brokers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ProduceResponseBroker, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.NodeID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ v := b.Int32()
+ s.Port = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Rack = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Brokers = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ return b.Complete()
+}
+
+// NewPtrProduceResponse returns a pointer to a default ProduceResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrProduceResponse() *ProduceResponse {
+ var v ProduceResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ProduceResponse.
+func (v *ProduceResponse) Default() {
+}
+
+// NewProduceResponse returns a default ProduceResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewProduceResponse() ProduceResponse {
+ var v ProduceResponse
+ v.Default()
+ return v
+}
+
+type FetchRequestReplicaState struct {
+ // The replica ID of the follower, or -1 if this request is from a consumer.
+ //
+ // This field has a default of -1.
+ ID int32
+
+ // The epoch of this follower, or -1 if not available.
+ //
+ // This field has a default of -1.
+ Epoch int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchRequestReplicaState.
+func (v *FetchRequestReplicaState) Default() {
+ v.ID = -1
+ v.Epoch = -1
+}
+
+// NewFetchRequestReplicaState returns a default FetchRequestReplicaState
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchRequestReplicaState() FetchRequestReplicaState {
+ var v FetchRequestReplicaState
+ v.Default()
+ return v
+}
+
+type FetchRequestTopicPartition struct {
+ // Partition is a partition in a topic to try to fetch records for.
+ Partition int32
+
+ // CurrentLeaderEpoch, proposed in KIP-320 and introduced in Kafka 2.1.0,
+ // allows brokers to check if the client is fenced (has an out of date
+ // leader) or is using an unknown leader.
+ //
+ // The initial leader epoch can be determined from a MetadataResponse.
+ // To skip log truncation checking, use -1.
+ //
+ // This field has a default of -1.
+ CurrentLeaderEpoch int32 // v9+
+
+ // FetchOffset is the offset to begin the fetch from. Kafka will
+ // return records at and after this offset.
+ FetchOffset int64
+
+ // The epoch of the last fetched record, or -1 if there is none.
+ //
+ // This field has a default of -1.
+ LastFetchedEpoch int32 // v12+
+
+ // LogStartOffset is a broker-follower only field added for KIP-107.
+ // This is the start offset of the partition in a follower.
+ //
+ // This field has a default of -1.
+ LogStartOffset int64 // v5+
+
+ // PartitionMaxBytes is the maximum bytes to return for this partition.
+ // This can be used to limit how many bytes an individual partition in
+ // a request is allotted so that it does not dominate all of MaxBytes.
+ PartitionMaxBytes int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchRequestTopicPartition.
+func (v *FetchRequestTopicPartition) Default() {
+ v.CurrentLeaderEpoch = -1
+ v.LastFetchedEpoch = -1
+ v.LogStartOffset = -1
+}
+
+// NewFetchRequestTopicPartition returns a default FetchRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchRequestTopicPartition() FetchRequestTopicPartition {
+ var v FetchRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type FetchRequestTopic struct {
+ // Topic is a topic to try to fetch records for.
+ Topic string // v0-v12
+
+ // TopicID is the uuid of the topic to fetch records for.
+ TopicID [16]byte // v13+
+
+ // Partitions contains partitions in a topic to try to fetch records for.
+ Partitions []FetchRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchRequestTopic.
+func (v *FetchRequestTopic) Default() {
+}
+
+// NewFetchRequestTopic returns a default FetchRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchRequestTopic() FetchRequestTopic {
+ var v FetchRequestTopic
+ v.Default()
+ return v
+}
+
+type FetchRequestForgottenTopic struct {
+ // Topic is a topic to remove from being tracked (with the partitions below).
+ Topic string // v7-v12
+
+ // TopicID is the uuid of a topic to remove from being tracked (with the
+ // partitions below).
+ TopicID [16]byte // v13+
+
+ // Partitions are partitions to remove from tracking for a topic.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchRequestForgottenTopic.
+func (v *FetchRequestForgottenTopic) Default() {
+}
+
+// NewFetchRequestForgottenTopic returns a default FetchRequestForgottenTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchRequestForgottenTopic() FetchRequestForgottenTopic {
+ var v FetchRequestForgottenTopic
+ v.Default()
+ return v
+}
+
+// FetchRequest is a long-poll request of records from Kafka.
+//
+// Kafka 0.11.0.0 released v4 and changed the returned RecordBatches to contain
+// the RecordBatch type. Prior, Kafka used the MessageSet type (and, for v0 and
+// v1, Kafka used a different type).
+//
+// Note that starting in v3, Kafka began processing partitions in order,
+// meaning the order of partitions in the fetch request is important due to
+// potential size constraints.
+//
+// Starting in v13, topics must use UUIDs rather than their string name
+// identifiers.
+//
+// Version 15 adds the ReplicaState which includes new field ReplicaEpoch and
+// the ReplicaID, and deprecates the old ReplicaID (KIP-903).
+type FetchRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The cluster ID, if known. This is used to validate metadata fetches
+ // prior to broker registration.
+ //
+ // This field has a default of null.
+ ClusterID *string // tag 0
+
+ // ReplicaID is the broker ID of performing the fetch request. Standard
+ // clients should use -1. To be a "debug" replica, use -2. The debug
+ // replica can be used to fetch messages from non-leaders.
+ //
+ // This field has a default of -1.
+ ReplicaID int32 // v0-v14
+
+ // ReplicaState is a broker-only tag for v15+, see KIP-903 for more details.
+ ReplicaState FetchRequestReplicaState // tag 1
+
+ // MaxWaitMillis is how long to wait for MinBytes to be hit before a broker
+ // responds to a fetch request.
+ MaxWaitMillis int32
+
+ // MinBytes is the minimum amount of bytes to attempt to read before a broker
+ // responds to a fetch request.
+ MinBytes int32
+
+ // MaxBytes is the maximum amount of bytes to read in a fetch request. The
+ // response can exceed MaxBytes if the first record in the first non-empty
+ // partition is larger than MaxBytes.
+ //
+ // This field has a default of 0x7fffffff.
+ MaxBytes int32 // v3+
+
+ // IsolationLevel changes which messages are fetched. Follower replica ID's
+ // (non-negative, non-standard-client) fetch from the end.
+ //
+ // Standard clients fetch from the high watermark, which corresponds to
+ // IsolationLevel 0, READ_UNCOMMITTED.
+ //
+ // To only read committed records, use IsolationLevel 1, corresponding to
+ // READ_COMMITTED.
+ IsolationLevel int8 // v4+
+
+ // SessionID is used to potentially reduce the amount of back and forth
+ // data between a client and a broker. If opting in to sessions, the first
+ // ID used should be 0, and thereafter (until session resets) the ID should
+ // be the ID returned in the fetch response.
+ //
+ // Read KIP-227 for more details. Use -1 if you want to disable sessions.
+ SessionID int32 // v7+
+
+ // SessionEpoch is the session epoch for this request if using sessions.
+ //
+ // Read KIP-227 for more details. Use -1 if you are not using sessions.
+ //
+ // This field has a default of -1.
+ SessionEpoch int32 // v7+
+
+ // Topic contains topics to try to fetch records for.
+ Topics []FetchRequestTopic
+
+ // ForgottenTopics contains topics and partitions that a fetch session
+ // wants to remove from its session.
+ //
+ // See KIP-227 for more details.
+ ForgottenTopics []FetchRequestForgottenTopic // v7+
+
+ // Rack of the consumer making this request (see KIP-392; introduced in
+ // Kafka 2.2.0).
+ Rack string // v11+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+func (*FetchRequest) Key() int16 { return 1 }
+func (*FetchRequest) MaxVersion() int16 { return 16 }
+func (v *FetchRequest) SetVersion(version int16) { v.Version = version }
+func (v *FetchRequest) GetVersion() int16 { return v.Version }
+func (v *FetchRequest) IsFlexible() bool { return v.Version >= 12 }
+func (v *FetchRequest) ResponseKind() Response {
+ r := &FetchResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *FetchRequest) RequestWith(ctx context.Context, r Requestor) (*FetchResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*FetchResponse)
+ return resp, err
+}
+
+func (v *FetchRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 12
+ _ = isFlexible
+ if version >= 0 && version <= 14 {
+ v := v.ReplicaID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.MaxWaitMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.MinBytes
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 3 {
+ v := v.MaxBytes
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 4 {
+ v := v.IsolationLevel
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if version >= 7 {
+ v := v.SessionID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 7 {
+ v := v.SessionEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 12 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 13 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 9 {
+ v := v.CurrentLeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.FetchOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 12 {
+ v := v.LastFetchedEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 5 {
+ v := v.LogStartOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.PartitionMaxBytes
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 7 {
+ v := v.ForgottenTopics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 7 && version <= 12 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 13 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 11 {
+ v := v.Rack
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if v.ClusterID != nil {
+ toEncode = append(toEncode, 0)
+ }
+ if !reflect.DeepEqual(v.ReplicaState, (func() FetchRequestReplicaState { var v FetchRequestReplicaState; v.Default(); return v })()) {
+ toEncode = append(toEncode, 1)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.ClusterID
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fClusterID:
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fClusterID
+ }
+ }
+ case 1:
+ {
+ v := v.ReplicaState
+ dst = kbin.AppendUvarint(dst, 1)
+ sized := false
+ lenAt := len(dst)
+ fReplicaState:
+ {
+ v := v.ID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Epoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fReplicaState
+ }
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *FetchRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *FetchRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *FetchRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 12
+ _ = isFlexible
+ s := v
+ if version >= 0 && version <= 14 {
+ v := b.Int32()
+ s.ReplicaID = v
+ }
+ {
+ v := b.Int32()
+ s.MaxWaitMillis = v
+ }
+ {
+ v := b.Int32()
+ s.MinBytes = v
+ }
+ if version >= 3 {
+ v := b.Int32()
+ s.MaxBytes = v
+ }
+ if version >= 4 {
+ v := b.Int8()
+ s.IsolationLevel = v
+ }
+ if version >= 7 {
+ v := b.Int32()
+ s.SessionID = v
+ }
+ if version >= 7 {
+ v := b.Int32()
+ s.SessionEpoch = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 12 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 13 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ if version >= 9 {
+ v := b.Int32()
+ s.CurrentLeaderEpoch = v
+ }
+ {
+ v := b.Int64()
+ s.FetchOffset = v
+ }
+ if version >= 12 {
+ v := b.Int32()
+ s.LastFetchedEpoch = v
+ }
+ if version >= 5 {
+ v := b.Int64()
+ s.LogStartOffset = v
+ }
+ {
+ v := b.Int32()
+ s.PartitionMaxBytes = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 7 {
+ v := s.ForgottenTopics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchRequestForgottenTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 7 && version <= 12 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 13 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.ForgottenTopics = v
+ }
+ if version >= 11 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Rack = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ClusterID = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ case 1:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := &s.ReplicaState
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.ID = v
+ }
+ {
+ v := b.Int64()
+ s.Epoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ return b.Complete()
+}
+
+// NewPtrFetchRequest returns a pointer to a default FetchRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrFetchRequest() *FetchRequest {
+ var v FetchRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchRequest.
+func (v *FetchRequest) Default() {
+ v.ClusterID = nil
+ v.ReplicaID = -1
+ {
+ v := &v.ReplicaState
+ _ = v
+ v.ID = -1
+ v.Epoch = -1
+ }
+ v.MaxBytes = 2147483647
+ v.SessionEpoch = -1
+}
+
+// NewFetchRequest returns a default FetchRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchRequest() FetchRequest {
+ var v FetchRequest
+ v.Default()
+ return v
+}
+
+type FetchResponseTopicPartitionDivergingEpoch struct {
+ // This field has a default of -1.
+ Epoch int32
+
+ // This field has a default of -1.
+ EndOffset int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponseTopicPartitionDivergingEpoch.
+func (v *FetchResponseTopicPartitionDivergingEpoch) Default() {
+ v.Epoch = -1
+ v.EndOffset = -1
+}
+
+// NewFetchResponseTopicPartitionDivergingEpoch returns a default FetchResponseTopicPartitionDivergingEpoch
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponseTopicPartitionDivergingEpoch() FetchResponseTopicPartitionDivergingEpoch {
+ var v FetchResponseTopicPartitionDivergingEpoch
+ v.Default()
+ return v
+}
+
+type FetchResponseTopicPartitionCurrentLeader struct {
+ // The ID of the current leader, or -1 if unknown.
+ //
+ // This field has a default of -1.
+ LeaderID int32
+
+ // The latest known leader epoch.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponseTopicPartitionCurrentLeader.
+func (v *FetchResponseTopicPartitionCurrentLeader) Default() {
+ v.LeaderID = -1
+ v.LeaderEpoch = -1
+}
+
+// NewFetchResponseTopicPartitionCurrentLeader returns a default FetchResponseTopicPartitionCurrentLeader
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponseTopicPartitionCurrentLeader() FetchResponseTopicPartitionCurrentLeader {
+ var v FetchResponseTopicPartitionCurrentLeader
+ v.Default()
+ return v
+}
+
+type FetchResponseTopicPartitionSnapshotID struct {
+ // This field has a default of -1.
+ EndOffset int64
+
+ // This field has a default of -1.
+ Epoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponseTopicPartitionSnapshotID.
+func (v *FetchResponseTopicPartitionSnapshotID) Default() {
+ v.EndOffset = -1
+ v.Epoch = -1
+}
+
+// NewFetchResponseTopicPartitionSnapshotID returns a default FetchResponseTopicPartitionSnapshotID
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponseTopicPartitionSnapshotID() FetchResponseTopicPartitionSnapshotID {
+ var v FetchResponseTopicPartitionSnapshotID
+ v.Default()
+ return v
+}
+
+type FetchResponseTopicPartitionAbortedTransaction struct {
+ // ProducerID is the producer ID that caused this aborted transaction.
+ ProducerID int64
+
+ // FirstOffset is the offset where this aborted transaction began.
+ FirstOffset int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponseTopicPartitionAbortedTransaction.
+func (v *FetchResponseTopicPartitionAbortedTransaction) Default() {
+}
+
+// NewFetchResponseTopicPartitionAbortedTransaction returns a default FetchResponseTopicPartitionAbortedTransaction
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponseTopicPartitionAbortedTransaction() FetchResponseTopicPartitionAbortedTransaction {
+ var v FetchResponseTopicPartitionAbortedTransaction
+ v.Default()
+ return v
+}
+
+type FetchResponseTopicPartition struct {
+ // Partition is a partition in a topic that records may have been
+ // received for.
+ Partition int32
+
+ // ErrorCode is an error returned for an individual partition in a
+ // fetch request.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not
+ // authorized to read the partition.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the topic or partition
+ // does not exist on this broker.
+ //
+ // UNSUPPORTED_COMPRESSION_TYPE is returned if the request version was
+ // under 10 and the batch is compressed with zstd.
+ //
+ // UNSUPPORTED_VERSION is returned if the broker has records newer than
+ // the client can support (magic value) and the broker has disabled
+ // message downconversion.
+ //
+ // NOT_LEADER_FOR_PARTITION is returned if requesting data for this
+ // partition as a follower (non-negative ReplicaID) and the broker
+ // is not the leader for this partition.
+ //
+ // REPLICA_NOT_AVAILABLE is returned if the partition exists but
+ // the requested broker is not the leader for it.
+ //
+ // KAFKA_STORAGE_EXCEPTION is returned if the requested partition is
+ // offline.
+ //
+ // UNKNOWN_LEADER_EPOCH is returned if the request used a larger leader
+ // epoch than the broker knows of.
+ //
+ // FENCED_LEADER_EPOCH is returned if the request used a smaller leader
+ // epoch than the broker is at (see KIP-320).
+ //
+ // OFFSET_OUT_OF_RANGE is returned if requesting an offset past the
+ // current end offset or before the beginning offset.
+ //
+ // UNKNOWN_TOPIC_ID is returned if using uuid's and the uuid is unknown
+ // (v13+ / Kafka 3.1+).
+ //
+ // OFFSET_MOVED_TO_TIERED_STORAGE is returned if a follower is trying to
+ // fetch from an offset that is now in tiered storage.
+ ErrorCode int16
+
+ // HighWatermark is the current high watermark for this partition,
+ // that is, the current offset that is on all in sync replicas.
+ HighWatermark int64
+
+ // LastStableOffset is the offset at which all prior offsets have
+ // been "decided". Non transactional records are always decided
+ // immediately, but transactional records are only decided once
+ // they are commited or aborted.
+ //
+ // The LastStableOffset will always be at or under the HighWatermark.
+ //
+ // This field has a default of -1.
+ LastStableOffset int64 // v4+
+
+ // LogStartOffset is the beginning offset for this partition.
+ // This field was added for KIP-107.
+ //
+ // This field has a default of -1.
+ LogStartOffset int64 // v5+
+
+ // In case divergence is detected based on the LastFetchedEpoch and
+ // FetchOffset in the request, this field indicates the largest epoch and
+ // its end offset such that subsequent records are known to diverge.
+ DivergingEpoch FetchResponseTopicPartitionDivergingEpoch // tag 0
+
+ // CurrentLeader is the currently known leader ID and epoch for this
+ // partition.
+ CurrentLeader FetchResponseTopicPartitionCurrentLeader // tag 1
+
+ // In the case of fetching an offset less than the LogStartOffset, this
+ // is the end offset and epoch that should be used in the FetchSnapshot
+ // request.
+ SnapshotID FetchResponseTopicPartitionSnapshotID // tag 2
+
+ // AbortedTransactions is an array of aborted transactions within the
+ // returned offset range. This is only returned if the requested
+ // isolation level was READ_COMMITTED.
+ AbortedTransactions []FetchResponseTopicPartitionAbortedTransaction // v4+
+
+ // PreferredReadReplica is the preferred replica for the consumer
+ // to use on its next fetch request. See KIP-392.
+ //
+ // This field has a default of -1.
+ PreferredReadReplica int32 // v11+
+
+ // RecordBatches is an array of record batches for a topic partition.
+ //
+ // This is encoded as a raw byte array, with the standard int32 size
+ // prefix. One important catch to note is that the final element of the
+ // array may be **partial**. This is an optimization in Kafka that
+ // clients must deal with by discarding a partial trailing batch.
+ //
+ // Starting v2, this transitioned to the MessageSet v1 format (and this
+ // would contain many MessageV1 structs).
+ //
+ // Starting v4, this transitioned to the RecordBatch format (thus this
+ // contains many RecordBatch structs).
+ RecordBatches []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponseTopicPartition.
+func (v *FetchResponseTopicPartition) Default() {
+ v.LastStableOffset = -1
+ v.LogStartOffset = -1
+ {
+ v := &v.DivergingEpoch
+ _ = v
+ v.Epoch = -1
+ v.EndOffset = -1
+ }
+ {
+ v := &v.CurrentLeader
+ _ = v
+ v.LeaderID = -1
+ v.LeaderEpoch = -1
+ }
+ {
+ v := &v.SnapshotID
+ _ = v
+ v.EndOffset = -1
+ v.Epoch = -1
+ }
+ v.PreferredReadReplica = -1
+}
+
+// NewFetchResponseTopicPartition returns a default FetchResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponseTopicPartition() FetchResponseTopicPartition {
+ var v FetchResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type FetchResponseTopic struct {
+ // Topic is a topic that records may have been received for.
+ Topic string // v0-v12
+
+ // TopicID is the uuid of a topic that records may have been received for.
+ TopicID [16]byte // v13+
+
+ // Partitions contains partitions in a topic that records may have
+ // been received for.
+ Partitions []FetchResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponseTopic.
+func (v *FetchResponseTopic) Default() {
+}
+
+// NewFetchResponseTopic returns a default FetchResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponseTopic() FetchResponseTopic {
+ var v FetchResponseTopic
+ v.Default()
+ return v
+}
+
+type FetchResponseBroker struct {
+ // NodeID is the node ID of a Kafka broker.
+ NodeID int32
+
+ // Host is the hostname of a Kafka broker.
+ Host string
+
+ // Port is the port of a Kafka broker.
+ Port int32
+
+ // Rack is the rack this Kafka broker is in.
+ Rack *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponseBroker.
+func (v *FetchResponseBroker) Default() {
+}
+
+// NewFetchResponseBroker returns a default FetchResponseBroker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponseBroker() FetchResponseBroker {
+ var v FetchResponseBroker
+ v.Default()
+ return v
+}
+
+// FetchResponse is returned from a FetchRequest.
+type FetchResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 8.
+ ThrottleMillis int32 // v1+
+
+ // ErrorCode is a full-response error code for a fetch request. This was
+ // added in support of KIP-227. This error is only non-zero if using fetch
+ // sessions.
+ //
+ // FETCH_SESSION_ID_NOT_FOUND is returned if the request used a
+ // session ID that the broker does not know of.
+ //
+ // INVALID_FETCH_SESSION_EPOCH is returned if the request used an
+ // invalid session epoch.
+ ErrorCode int16 // v7+
+
+ // SessionID is the id for this session if using sessions.
+ //
+ // See KIP-227 for more details.
+ SessionID int32 // v7+
+
+ // Topics contains an array of topic partitions and the records received
+ // for them.
+ Topics []FetchResponseTopic
+
+ // Brokers is present if any partition responses contain the error
+ // NOT_LEADER_OR_FOLLOWER.
+ Brokers []FetchResponseBroker // tag 0
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v12+
+}
+
+func (*FetchResponse) Key() int16 { return 1 }
+func (*FetchResponse) MaxVersion() int16 { return 16 }
+func (v *FetchResponse) SetVersion(version int16) { v.Version = version }
+func (v *FetchResponse) GetVersion() int16 { return v.Version }
+func (v *FetchResponse) IsFlexible() bool { return v.Version >= 12 }
+func (v *FetchResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 8 }
+func (v *FetchResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *FetchResponse) RequestKind() Request { return &FetchRequest{Version: v.Version} }
+
+func (v *FetchResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 12
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 7 {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 7 {
+ v := v.SessionID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 12 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 13 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.HighWatermark
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 4 {
+ v := v.LastStableOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 5 {
+ v := v.LogStartOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 4 {
+ v := v.AbortedTransactions
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.FirstOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 11 {
+ v := v.PreferredReadReplica
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.RecordBatches
+ if isFlexible {
+ dst = kbin.AppendCompactNullableBytes(dst, v)
+ } else {
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if !reflect.DeepEqual(v.DivergingEpoch, (func() FetchResponseTopicPartitionDivergingEpoch {
+ var v FetchResponseTopicPartitionDivergingEpoch
+ v.Default()
+ return v
+ })()) {
+ toEncode = append(toEncode, 0)
+ }
+ if !reflect.DeepEqual(v.CurrentLeader, (func() FetchResponseTopicPartitionCurrentLeader {
+ var v FetchResponseTopicPartitionCurrentLeader
+ v.Default()
+ return v
+ })()) {
+ toEncode = append(toEncode, 1)
+ }
+ if !reflect.DeepEqual(v.SnapshotID, (func() FetchResponseTopicPartitionSnapshotID {
+ var v FetchResponseTopicPartitionSnapshotID
+ v.Default()
+ return v
+ })()) {
+ toEncode = append(toEncode, 2)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.DivergingEpoch
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fDivergingEpoch:
+ {
+ v := v.Epoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.EndOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fDivergingEpoch
+ }
+ }
+ case 1:
+ {
+ v := v.CurrentLeader
+ dst = kbin.AppendUvarint(dst, 1)
+ sized := false
+ lenAt := len(dst)
+ fCurrentLeader:
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fCurrentLeader
+ }
+ }
+ case 2:
+ {
+ v := v.SnapshotID
+ dst = kbin.AppendUvarint(dst, 2)
+ sized := false
+ lenAt := len(dst)
+ fSnapshotID:
+ {
+ v := v.EndOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Epoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fSnapshotID
+ }
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if len(v.Brokers) > 0 {
+ toEncode = append(toEncode, 0)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.Brokers
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fBrokers:
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.NodeID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Rack
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fBrokers
+ }
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *FetchResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *FetchResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *FetchResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 12
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if version >= 7 {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 7 {
+ v := b.Int32()
+ s.SessionID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 12 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 13 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int64()
+ s.HighWatermark = v
+ }
+ if version >= 4 {
+ v := b.Int64()
+ s.LastStableOffset = v
+ }
+ if version >= 5 {
+ v := b.Int64()
+ s.LogStartOffset = v
+ }
+ if version >= 4 {
+ v := s.AbortedTransactions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []FetchResponseTopicPartitionAbortedTransaction{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchResponseTopicPartitionAbortedTransaction, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int64()
+ s.FirstOffset = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.AbortedTransactions = v
+ }
+ if version >= 11 {
+ v := b.Int32()
+ s.PreferredReadReplica = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactNullableBytes()
+ } else {
+ v = b.NullableBytes()
+ }
+ s.RecordBatches = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := &s.DivergingEpoch
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Epoch = v
+ }
+ {
+ v := b.Int64()
+ s.EndOffset = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ case 1:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := &s.CurrentLeader
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ case 2:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := &s.SnapshotID
+ v.Default()
+ s := v
+ {
+ v := b.Int64()
+ s.EndOffset = v
+ }
+ {
+ v := b.Int32()
+ s.Epoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := s.Brokers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchResponseBroker, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.NodeID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ v := b.Int32()
+ s.Port = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Rack = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Brokers = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ return b.Complete()
+}
+
+// NewPtrFetchResponse returns a pointer to a default FetchResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrFetchResponse() *FetchResponse {
+ var v FetchResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchResponse.
+func (v *FetchResponse) Default() {
+}
+
+// NewFetchResponse returns a default FetchResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchResponse() FetchResponse {
+ var v FetchResponse
+ v.Default()
+ return v
+}
+
+type ListOffsetsRequestTopicPartition struct {
+ // Partition is a partition of a topic to get offsets for.
+ Partition int32
+
+ // CurrentLeaderEpoch, proposed in KIP-320 and introduced in Kafka 2.1.0,
+ // allows brokers to check if the client is fenced (has an out of date
+ // leader) or is using an unknown leader.
+ //
+ // The initial leader epoch can be determined from a MetadataResponse.
+ // To skip log truncation checking, use -1.
+ //
+ // This field has a default of -1.
+ CurrentLeaderEpoch int32 // v4+
+
+ // Timestamp controls which offset to return in a response for this
+ // partition.
+ //
+ // The offset returned will be the one of the message whose timestamp is
+ // the first timestamp greater than or equal to this requested timestamp.
+ //
+ // If no such message is found, then no offset is returned (-1).
+ //
+ // There exist two special timestamps: -2 corresponds to the earliest
+ // timestamp, and -1 corresponds to the latest.
+ //
+ // If you are talking to Kafka 3.0+, there exists an additional special
+ // timestamp -3 that returns the latest timestamp produced so far and its
+ // corresponding offset. This is subtly different from the latest offset,
+ // because timestamps are client-side generated. More importantly though,
+ // because this returns the latest produced timestamp, this can be used
+ // to determine topic "liveness" (when was the last produce?).
+ // Previously, this was not easy to determine. See KIP-734 for more
+ // detail.
+ //
+ // If you are talking to Kafka 3.4+ and using request version 8+ (for
+ // KIP-405), the new special timestamp -4 returns the local log start
+ // offset. In the context of tiered storage, the earliest local log start
+ // offset is the offset actually available on disk on the broker.
+ Timestamp int64
+
+ // MaxNumOffsets is the maximum number of offsets to report.
+ // This was removed after v0.
+ //
+ // This field has a default of 1.
+ MaxNumOffsets int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListOffsetsRequestTopicPartition.
+func (v *ListOffsetsRequestTopicPartition) Default() {
+ v.CurrentLeaderEpoch = -1
+ v.MaxNumOffsets = 1
+}
+
+// NewListOffsetsRequestTopicPartition returns a default ListOffsetsRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListOffsetsRequestTopicPartition() ListOffsetsRequestTopicPartition {
+ var v ListOffsetsRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type ListOffsetsRequestTopic struct {
+ // Topic is a topic to get offsets for.
+ Topic string
+
+ // Partitions is an array of partitions in a topic to get offsets for.
+ Partitions []ListOffsetsRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListOffsetsRequestTopic.
+func (v *ListOffsetsRequestTopic) Default() {
+}
+
+// NewListOffsetsRequestTopic returns a default ListOffsetsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListOffsetsRequestTopic() ListOffsetsRequestTopic {
+ var v ListOffsetsRequestTopic
+ v.Default()
+ return v
+}
+
+// ListOffsetsRequest requests partition offsets from Kafka for use in
+// consuming records.
+//
+// Version 5, introduced in Kafka 2.2.0, is the same as version 4. Using
+// version 5 implies you support Kafka's OffsetNotAvailableException
+// See KIP-207 for details.
+//
+// Version 7, introduced in Kafka 3.0, supports -3 as a timestamp to return
+// the timestamp and offset for the record with the largest timestamp.
+//
+// Version 8, introduced in Kafka 3.4, supports -4 as a timestamp to return
+// the local log start offset (in the context of tiered storage, see KIP-405).
+type ListOffsetsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ReplicaID is the broker ID to get offsets from. As a Kafka client, use -1.
+ // The consumer replica ID (-1) causes requests to only succeed if issued
+ // against the leader broker.
+ //
+ // This field has a default of -1.
+ ReplicaID int32
+
+ // IsolationLevel configures which record offsets are visible in the
+ // response. READ_UNCOMMITTED (0) makes all records visible. READ_COMMITTED
+ // (1) makes non-transactional and committed transactional records visible.
+ // READ_COMMITTED means all offsets smaller than the last stable offset and
+ // includes aborted transactions (allowing consumers to discard aborted
+ // records).
+ IsolationLevel int8 // v2+
+
+ // Topics is an array of topics to get offsets for.
+ Topics []ListOffsetsRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*ListOffsetsRequest) Key() int16 { return 2 }
+func (*ListOffsetsRequest) MaxVersion() int16 { return 8 }
+func (v *ListOffsetsRequest) SetVersion(version int16) { v.Version = version }
+func (v *ListOffsetsRequest) GetVersion() int16 { return v.Version }
+func (v *ListOffsetsRequest) IsFlexible() bool { return v.Version >= 6 }
+func (v *ListOffsetsRequest) ResponseKind() Response {
+ r := &ListOffsetsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ListOffsetsRequest) RequestWith(ctx context.Context, r Requestor) (*ListOffsetsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ListOffsetsResponse)
+ return resp, err
+}
+
+func (v *ListOffsetsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ {
+ v := v.ReplicaID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 2 {
+ v := v.IsolationLevel
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 4 {
+ v := v.CurrentLeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Timestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 0 && version <= 0 {
+ v := v.MaxNumOffsets
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListOffsetsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListOffsetsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListOffsetsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ReplicaID = v
+ }
+ if version >= 2 {
+ v := b.Int8()
+ s.IsolationLevel = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListOffsetsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListOffsetsRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ if version >= 4 {
+ v := b.Int32()
+ s.CurrentLeaderEpoch = v
+ }
+ {
+ v := b.Int64()
+ s.Timestamp = v
+ }
+ if version >= 0 && version <= 0 {
+ v := b.Int32()
+ s.MaxNumOffsets = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListOffsetsRequest returns a pointer to a default ListOffsetsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListOffsetsRequest() *ListOffsetsRequest {
+ var v ListOffsetsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListOffsetsRequest.
+func (v *ListOffsetsRequest) Default() {
+ v.ReplicaID = -1
+}
+
+// NewListOffsetsRequest returns a default ListOffsetsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListOffsetsRequest() ListOffsetsRequest {
+ var v ListOffsetsRequest
+ v.Default()
+ return v
+}
+
+type ListOffsetsResponseTopicPartition struct {
+ // Partition is the partition this array slot is for.
+ Partition int32
+
+ // ErrorCode is any error for a topic partition in a ListOffsets request.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to describe the topic.
+ //
+ // INVALID_REQUEST is returned if the requested topic partitions had
+ // contained duplicates.
+ //
+ // KAFKA_STORAGE_EXCEPTION is returned if the topic / partition is in
+ // an offline log directory.
+ //
+ // UNSUPPORTED_FOR_MESSAGE_FORMAT is returned if the broker is using
+ // Kafka 0.10.0 messages and the requested timestamp was not -1 nor -2.
+ //
+ // NOT_LEADER_FOR_PARTITION is returned if the broker is not a leader
+ // for this partition. This means that the client has stale metadata.
+ // If the request used the debug replica ID, the returned error will
+ // be REPLICA_NOT_AVAILABLE.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know
+ // of the requested topic or partition.
+ //
+ // FENCED_LEADER_EPOCH is returned if the broker has a higher leader
+ // epoch than what the request sent.
+ //
+ // UNKNOWN_LEADER_EPOCH is returned if the request used a leader epoch
+ // that the broker does not know about.
+ //
+ // OFFSET_NOT_AVAILABLE, introduced in Kafka 2.2.0 with produce request
+ // v5+, is returned when talking to a broker that is a new leader while
+ // that broker's high water mark catches up. This avoids situations where
+ // the old broker returned higher offsets than the new broker would. Note
+ // that if unclean leader election is allowed, you could still run into
+ // the situation where offsets returned from list offsets requests are
+ // not monotonically increasing. This error is only returned if the
+ // request used the consumer replica ID (-1). If the client did not use
+ // a v5+ list offsets request, LEADER_NOT_AVAILABLE is returned.
+ // See KIP-207 for more details.
+ ErrorCode int16
+
+ // OldStyleOffsets is a list of offsets. This was removed after
+ // version 0 and, since it is so historic, is undocumented.
+ OldStyleOffsets []int64
+
+ // If the request was for the earliest or latest timestamp (-2 or -1), or
+ // if an offset could not be found after the requested one, this will be -1.
+ //
+ // This field has a default of -1.
+ Timestamp int64 // v1+
+
+ // Offset is the offset corresponding to the record on or after the
+ // requested timestamp. If one could not be found, this will be -1.
+ //
+ // This field has a default of -1.
+ Offset int64 // v1+
+
+ // LeaderEpoch is the leader epoch of the record at this offset,
+ // or -1 if there was no leader epoch.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32 // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListOffsetsResponseTopicPartition.
+func (v *ListOffsetsResponseTopicPartition) Default() {
+ v.Timestamp = -1
+ v.Offset = -1
+ v.LeaderEpoch = -1
+}
+
+// NewListOffsetsResponseTopicPartition returns a default ListOffsetsResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListOffsetsResponseTopicPartition() ListOffsetsResponseTopicPartition {
+ var v ListOffsetsResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type ListOffsetsResponseTopic struct {
+ // Topic is the topic this array slot is for.
+ Topic string
+
+ // Partitions is an array of partition responses corresponding to
+ // the requested partitions for a topic.
+ Partitions []ListOffsetsResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListOffsetsResponseTopic.
+func (v *ListOffsetsResponseTopic) Default() {
+}
+
+// NewListOffsetsResponseTopic returns a default ListOffsetsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListOffsetsResponseTopic() ListOffsetsResponseTopic {
+ var v ListOffsetsResponseTopic
+ v.Default()
+ return v
+}
+
+// ListOffsetsResponse is returned from a ListOffsetsRequest.
+type ListOffsetsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 3.
+ ThrottleMillis int32 // v2+
+
+ // Topics is an array of topic / partition responses corresponding to
+ // the requested topics and partitions.
+ Topics []ListOffsetsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*ListOffsetsResponse) Key() int16 { return 2 }
+func (*ListOffsetsResponse) MaxVersion() int16 { return 8 }
+func (v *ListOffsetsResponse) SetVersion(version int16) { v.Version = version }
+func (v *ListOffsetsResponse) GetVersion() int16 { return v.Version }
+func (v *ListOffsetsResponse) IsFlexible() bool { return v.Version >= 6 }
+func (v *ListOffsetsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 3 }
+func (v *ListOffsetsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *ListOffsetsResponse) RequestKind() Request { return &ListOffsetsRequest{Version: v.Version} }
+
+func (v *ListOffsetsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ if version >= 2 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 0 && version <= 0 {
+ v := v.OldStyleOffsets
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt64(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.Timestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 4 {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListOffsetsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListOffsetsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListOffsetsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ if version >= 2 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListOffsetsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListOffsetsResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 0 && version <= 0 {
+ v := s.OldStyleOffsets
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int64, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int64()
+ a[i] = v
+ }
+ v = a
+ s.OldStyleOffsets = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.Timestamp = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.Offset = v
+ }
+ if version >= 4 {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListOffsetsResponse returns a pointer to a default ListOffsetsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListOffsetsResponse() *ListOffsetsResponse {
+ var v ListOffsetsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListOffsetsResponse.
+func (v *ListOffsetsResponse) Default() {
+}
+
+// NewListOffsetsResponse returns a default ListOffsetsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListOffsetsResponse() ListOffsetsResponse {
+ var v ListOffsetsResponse
+ v.Default()
+ return v
+}
+
+type MetadataRequestTopic struct {
+ // The topic ID. Only one of either topic ID or topic name should be used.
+ // If using the topic name, this should just be the default empty value.
+ TopicID [16]byte // v10+
+
+ // Topic is the topic to request metadata for. Version 10 switched this
+ // from a string to a nullable string; if using a topic ID, this field
+ // should be null.
+ Topic *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MetadataRequestTopic.
+func (v *MetadataRequestTopic) Default() {
+}
+
+// NewMetadataRequestTopic returns a default MetadataRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMetadataRequestTopic() MetadataRequestTopic {
+ var v MetadataRequestTopic
+ v.Default()
+ return v
+}
+
+// MetadataRequest requests metadata from Kafka.
+type MetadataRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Topics is a list of topics to return metadata about. If this is null
+ // in v1+, all topics are included. If this is empty, no topics are.
+ // For v0 (= 9 }
+func (v *MetadataRequest) ResponseKind() Response {
+ r := &MetadataResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *MetadataRequest) RequestWith(ctx context.Context, r Requestor) (*MetadataResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*MetadataResponse)
+ return resp, err
+}
+
+func (v *MetadataRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ {
+ v := v.Topics
+ if version >= 1 {
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ } else {
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 10 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Topic
+ if version < 10 {
+ var vv string
+ if v != nil {
+ vv = *v
+ }
+ {
+ v := vv
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ } else {
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 4 {
+ v := v.AllowAutoTopicCreation
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 8 && version <= 10 {
+ v := v.IncludeClusterAuthorizedOperations
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 8 {
+ v := v.IncludeTopicAuthorizedOperations
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *MetadataRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *MetadataRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *MetadataRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 1 || l == 0 {
+ a = []MetadataRequestTopic{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]MetadataRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 10 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ var v *string
+ if version < 10 {
+ var vv string
+ if isFlexible {
+ if unsafe {
+ vv = b.UnsafeCompactString()
+ } else {
+ vv = b.CompactString()
+ }
+ } else {
+ if unsafe {
+ vv = b.UnsafeString()
+ } else {
+ vv = b.String()
+ }
+ }
+ v = &vv
+ } else {
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ }
+ s.Topic = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 4 {
+ v := b.Bool()
+ s.AllowAutoTopicCreation = v
+ }
+ if version >= 8 && version <= 10 {
+ v := b.Bool()
+ s.IncludeClusterAuthorizedOperations = v
+ }
+ if version >= 8 {
+ v := b.Bool()
+ s.IncludeTopicAuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrMetadataRequest returns a pointer to a default MetadataRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrMetadataRequest() *MetadataRequest {
+ var v MetadataRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MetadataRequest.
+func (v *MetadataRequest) Default() {
+}
+
+// NewMetadataRequest returns a default MetadataRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMetadataRequest() MetadataRequest {
+ var v MetadataRequest
+ v.Default()
+ return v
+}
+
+type MetadataResponseBroker struct {
+ // NodeID is the node ID of a Kafka broker.
+ NodeID int32
+
+ // Host is the hostname of a Kafka broker.
+ Host string
+
+ // Port is the port of a Kafka broker.
+ Port int32
+
+ // Rack is the rack this Kafka broker is in.
+ Rack *string // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MetadataResponseBroker.
+func (v *MetadataResponseBroker) Default() {
+}
+
+// NewMetadataResponseBroker returns a default MetadataResponseBroker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMetadataResponseBroker() MetadataResponseBroker {
+ var v MetadataResponseBroker
+ v.Default()
+ return v
+}
+
+type MetadataResponseTopicPartition struct {
+ // ErrorCode is any error for a partition in topic metadata.
+ //
+ // LEADER_NOT_AVAILABLE is returned if a leader is unavailable for this
+ // partition. For v0 metadata responses, this is also returned if a
+ // partition leader's listener does not exist.
+ //
+ // LISTENER_NOT_FOUND is returned if a leader ID is known but the
+ // listener for it is not (v1+).
+ //
+ // REPLICA_NOT_AVAILABLE is returned in v0 responses if any replica is
+ // unavailable.
+ //
+ // UNKNOWN_TOPIC_ID is returned if using a topic ID and the ID does not
+ // exist.
+ ErrorCode int16
+
+ // Partition is a partition number for a topic.
+ Partition int32
+
+ // Leader is the broker leader for this partition. This will be -1
+ // on leader / listener error.
+ Leader int32
+
+ // LeaderEpoch, proposed in KIP-320 and introduced in Kafka 2.1.0 is the
+ // epoch of the broker leader.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32 // v7+
+
+ // Replicas returns all broker IDs containing replicas of this partition.
+ Replicas []int32
+
+ // ISR returns all broker IDs of in-sync replicas of this partition.
+ ISR []int32
+
+ // OfflineReplicas, proposed in KIP-112 and introduced in Kafka 1.0,
+ // returns all offline broker IDs that should be replicating this partition.
+ OfflineReplicas []int32 // v5+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MetadataResponseTopicPartition.
+func (v *MetadataResponseTopicPartition) Default() {
+ v.LeaderEpoch = -1
+}
+
+// NewMetadataResponseTopicPartition returns a default MetadataResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMetadataResponseTopicPartition() MetadataResponseTopicPartition {
+ var v MetadataResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type MetadataResponseTopic struct {
+ // ErrorCode is any error for a topic in a metadata request.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to describe the topic, or if the metadata request specified topic auto
+ // creation, the topic did not exist, and the user lacks permission to create.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if a topic does not exist and
+ // the request did not specify autocreation.
+ //
+ // LEADER_NOT_AVAILABLE is returned if a new topic is created successfully
+ // (since there is no leader on an immediately new topic).
+ //
+ // There can be a myriad of other errors for unsuccessful topic creation.
+ ErrorCode int16
+
+ // Topic is the topic this metadata corresponds to.
+ Topic *string
+
+ // The topic ID.
+ TopicID [16]byte // v10+
+
+ // IsInternal signifies whether this topic is a Kafka internal topic.
+ IsInternal bool // v1+
+
+ // Partitions contains metadata about partitions for a topic.
+ Partitions []MetadataResponseTopicPartition
+
+ // AuthorizedOperations, proposed in KIP-430 and introduced in Kafka 2.3.0,
+ // is a bitfield (corresponding to AclOperation) containing which operations
+ // the client is allowed to perform on this topic.
+ // This is only returned if requested.
+ //
+ // This field has a default of -2147483648.
+ AuthorizedOperations int32 // v8+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MetadataResponseTopic.
+func (v *MetadataResponseTopic) Default() {
+ v.AuthorizedOperations = -2147483648
+}
+
+// NewMetadataResponseTopic returns a default MetadataResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMetadataResponseTopic() MetadataResponseTopic {
+ var v MetadataResponseTopic
+ v.Default()
+ return v
+}
+
+// MetadataResponse is returned from a MetdataRequest.
+type MetadataResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 6.
+ ThrottleMillis int32 // v3+
+
+ // Brokers is a set of alive Kafka brokers.
+ Brokers []MetadataResponseBroker
+
+ // ClusterID, proposed in KIP-78 and introduced in Kafka 0.10.1.0, is a
+ // unique string specifying the cluster that the replying Kafka belongs to.
+ ClusterID *string // v2+
+
+ // ControllerID is the ID of the controller broker (the admin broker).
+ //
+ // This field has a default of -1.
+ ControllerID int32 // v1+
+
+ // Topics contains metadata about each topic requested in the
+ // MetadataRequest.
+ Topics []MetadataResponseTopic
+
+ // AuthorizedOperations is a bitfield containing which operations the client
+ // is allowed to perform on this cluster.
+ //
+ // This field has a default of -2147483648.
+ AuthorizedOperations int32 // v8-v10
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v9+
+}
+
+func (*MetadataResponse) Key() int16 { return 3 }
+func (*MetadataResponse) MaxVersion() int16 { return 12 }
+func (v *MetadataResponse) SetVersion(version int16) { v.Version = version }
+func (v *MetadataResponse) GetVersion() int16 { return v.Version }
+func (v *MetadataResponse) IsFlexible() bool { return v.Version >= 9 }
+func (v *MetadataResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 6 }
+func (v *MetadataResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *MetadataResponse) RequestKind() Request { return &MetadataRequest{Version: v.Version} }
+
+func (v *MetadataResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ if version >= 3 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Brokers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.NodeID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 {
+ v := v.Rack
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 2 {
+ v := v.ClusterID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.ControllerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topic
+ if version < 12 {
+ var vv string
+ if v != nil {
+ vv = *v
+ }
+ {
+ v := vv
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ } else {
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ }
+ if version >= 10 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ if version >= 1 {
+ v := v.IsInternal
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Leader
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 7 {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ {
+ v := v.ISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.OfflineReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 8 {
+ v := v.AuthorizedOperations
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 8 && version <= 10 {
+ v := v.AuthorizedOperations
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *MetadataResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *MetadataResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *MetadataResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 9
+ _ = isFlexible
+ s := v
+ if version >= 3 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Brokers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]MetadataResponseBroker, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.NodeID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ v := b.Int32()
+ s.Port = v
+ }
+ if version >= 1 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Rack = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Brokers = v
+ }
+ if version >= 2 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ClusterID = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.ControllerID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]MetadataResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if version < 12 {
+ var vv string
+ if isFlexible {
+ if unsafe {
+ vv = b.UnsafeCompactString()
+ } else {
+ vv = b.CompactString()
+ }
+ } else {
+ if unsafe {
+ vv = b.UnsafeString()
+ } else {
+ vv = b.String()
+ }
+ }
+ v = &vv
+ } else {
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 10 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ if version >= 1 {
+ v := b.Bool()
+ s.IsInternal = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]MetadataResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.Leader = v
+ }
+ if version >= 7 {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ {
+ v := s.ISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.ISR = v
+ }
+ if version >= 5 {
+ v := s.OfflineReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.OfflineReplicas = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if version >= 8 {
+ v := b.Int32()
+ s.AuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 8 && version <= 10 {
+ v := b.Int32()
+ s.AuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrMetadataResponse returns a pointer to a default MetadataResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrMetadataResponse() *MetadataResponse {
+ var v MetadataResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to MetadataResponse.
+func (v *MetadataResponse) Default() {
+ v.ControllerID = -1
+ v.AuthorizedOperations = -2147483648
+}
+
+// NewMetadataResponse returns a default MetadataResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewMetadataResponse() MetadataResponse {
+ var v MetadataResponse
+ v.Default()
+ return v
+}
+
+// LeaderAndISRRequestTopicPartition is a common struct that is used across
+// different versions of LeaderAndISRRequest.
+type LeaderAndISRRequestTopicPartition struct {
+ Topic string // v0-v1
+
+ Partition int32
+
+ ControllerEpoch int32
+
+ Leader int32
+
+ LeaderEpoch int32
+
+ ISR []int32
+
+ ZKVersion int32
+
+ Replicas []int32
+
+ AddingReplicas []int32 // v3+
+
+ RemovingReplicas []int32 // v3+
+
+ IsNew bool // v1+
+
+ LeaderRecoveryState int8 // v6+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderAndISRRequestTopicPartition.
+func (v *LeaderAndISRRequestTopicPartition) Default() {
+}
+
+// NewLeaderAndISRRequestTopicPartition returns a default LeaderAndISRRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderAndISRRequestTopicPartition() LeaderAndISRRequestTopicPartition {
+ var v LeaderAndISRRequestTopicPartition
+ v.Default()
+ return v
+}
+
+// LeaderAndISRResponseTopicPartition is a common struct that is used across
+// different versions of LeaderAndISRResponse.
+type LeaderAndISRResponseTopicPartition struct {
+ Topic string // v0-v4
+
+ Partition int32
+
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderAndISRResponseTopicPartition.
+func (v *LeaderAndISRResponseTopicPartition) Default() {
+}
+
+// NewLeaderAndISRResponseTopicPartition returns a default LeaderAndISRResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderAndISRResponseTopicPartition() LeaderAndISRResponseTopicPartition {
+ var v LeaderAndISRResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type LeaderAndISRRequestTopicState struct {
+ Topic string
+
+ TopicID [16]byte // v5+
+
+ PartitionStates []LeaderAndISRRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderAndISRRequestTopicState.
+func (v *LeaderAndISRRequestTopicState) Default() {
+}
+
+// NewLeaderAndISRRequestTopicState returns a default LeaderAndISRRequestTopicState
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderAndISRRequestTopicState() LeaderAndISRRequestTopicState {
+ var v LeaderAndISRRequestTopicState
+ v.Default()
+ return v
+}
+
+type LeaderAndISRRequestLiveLeader struct {
+ BrokerID int32
+
+ Host string
+
+ Port int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderAndISRRequestLiveLeader.
+func (v *LeaderAndISRRequestLiveLeader) Default() {
+}
+
+// NewLeaderAndISRRequestLiveLeader returns a default LeaderAndISRRequestLiveLeader
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderAndISRRequestLiveLeader() LeaderAndISRRequestLiveLeader {
+ var v LeaderAndISRRequestLiveLeader
+ v.Default()
+ return v
+}
+
+// LeaderAndISRRequest is an advanced request that controller brokers use
+// to broadcast state to other brokers. Manually using this request is a
+// great way to break your cluster.
+//
+// As this is an advanced request and there is little reason to issue it as a
+// client, this request is undocumented.
+//
+// Kafka 1.0 introduced version 1. Kafka 2.2 introduced version 2, proposed
+// in KIP-380, which changed the layout of the struct to be more memory
+// efficient. Kafka 2.4.0 introduced version 3 with KIP-455.
+// Kafka 3.4 introduced version 7 with KIP-866.
+type LeaderAndISRRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ControllerID int32
+
+ // If KRaft controller id is used during migration. See KIP-866.
+ IsKRaftController bool // v7+
+
+ ControllerEpoch int32
+
+ // This field has a default of -1.
+ BrokerEpoch int64 // v2+
+
+ Type int8 // v5+
+
+ PartitionStates []LeaderAndISRRequestTopicPartition // v0-v1
+
+ TopicStates []LeaderAndISRRequestTopicState // v2+
+
+ LiveLeaders []LeaderAndISRRequestLiveLeader
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*LeaderAndISRRequest) Key() int16 { return 4 }
+func (*LeaderAndISRRequest) MaxVersion() int16 { return 7 }
+func (v *LeaderAndISRRequest) SetVersion(version int16) { v.Version = version }
+func (v *LeaderAndISRRequest) GetVersion() int16 { return v.Version }
+func (v *LeaderAndISRRequest) IsFlexible() bool { return v.Version >= 4 }
+func (v *LeaderAndISRRequest) ResponseKind() Response {
+ r := &LeaderAndISRResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *LeaderAndISRRequest) RequestWith(ctx context.Context, r Requestor) (*LeaderAndISRResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*LeaderAndISRResponse)
+ return resp, err
+}
+
+func (v *LeaderAndISRRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ {
+ v := v.ControllerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 7 {
+ v := v.IsKRaftController
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.ControllerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 2 {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 5 {
+ v := v.Type
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if version >= 0 && version <= 1 {
+ v := v.PartitionStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 1 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ControllerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Leader
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ {
+ v := v.ZKVersion
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.AddingReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.RemovingReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.IsNew
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 6 {
+ v := v.LeaderRecoveryState
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 2 {
+ v := v.TopicStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.PartitionStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 1 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ControllerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Leader
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ {
+ v := v.ZKVersion
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.AddingReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.RemovingReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.IsNew
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 6 {
+ v := v.LeaderRecoveryState
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.LiveLeaders
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *LeaderAndISRRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *LeaderAndISRRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *LeaderAndISRRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ControllerID = v
+ }
+ if version >= 7 {
+ v := b.Bool()
+ s.IsKRaftController = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerEpoch = v
+ }
+ if version >= 2 {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ if version >= 5 {
+ v := b.Int8()
+ s.Type = v
+ }
+ if version >= 0 && version <= 1 {
+ v := s.PartitionStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderAndISRRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 1 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.Leader = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := s.ISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.ISR = v
+ }
+ {
+ v := b.Int32()
+ s.ZKVersion = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ if version >= 3 {
+ v := s.AddingReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.AddingReplicas = v
+ }
+ if version >= 3 {
+ v := s.RemovingReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.RemovingReplicas = v
+ }
+ if version >= 1 {
+ v := b.Bool()
+ s.IsNew = v
+ }
+ if version >= 6 {
+ v := b.Int8()
+ s.LeaderRecoveryState = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.PartitionStates = v
+ }
+ if version >= 2 {
+ v := s.TopicStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderAndISRRequestTopicState, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 5 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.PartitionStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderAndISRRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 1 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.Leader = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := s.ISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.ISR = v
+ }
+ {
+ v := b.Int32()
+ s.ZKVersion = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ if version >= 3 {
+ v := s.AddingReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.AddingReplicas = v
+ }
+ if version >= 3 {
+ v := s.RemovingReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.RemovingReplicas = v
+ }
+ if version >= 1 {
+ v := b.Bool()
+ s.IsNew = v
+ }
+ if version >= 6 {
+ v := b.Int8()
+ s.LeaderRecoveryState = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.PartitionStates = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.TopicStates = v
+ }
+ {
+ v := s.LiveLeaders
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderAndISRRequestLiveLeader, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ v := b.Int32()
+ s.Port = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.LiveLeaders = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrLeaderAndISRRequest returns a pointer to a default LeaderAndISRRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrLeaderAndISRRequest() *LeaderAndISRRequest {
+ var v LeaderAndISRRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderAndISRRequest.
+func (v *LeaderAndISRRequest) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewLeaderAndISRRequest returns a default LeaderAndISRRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderAndISRRequest() LeaderAndISRRequest {
+ var v LeaderAndISRRequest
+ v.Default()
+ return v
+}
+
+type LeaderAndISRResponseTopic struct {
+ TopicID [16]byte
+
+ Partitions []LeaderAndISRResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderAndISRResponseTopic.
+func (v *LeaderAndISRResponseTopic) Default() {
+}
+
+// NewLeaderAndISRResponseTopic returns a default LeaderAndISRResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderAndISRResponseTopic() LeaderAndISRResponseTopic {
+ var v LeaderAndISRResponseTopic
+ v.Default()
+ return v
+}
+
+// LeaderAndISRResponse is returned from a LeaderAndISRRequest.
+type LeaderAndISRResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ErrorCode int16
+
+ Partitions []LeaderAndISRResponseTopicPartition // v0-v4
+
+ Topics []LeaderAndISRResponseTopic // v5+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*LeaderAndISRResponse) Key() int16 { return 4 }
+func (*LeaderAndISRResponse) MaxVersion() int16 { return 7 }
+func (v *LeaderAndISRResponse) SetVersion(version int16) { v.Version = version }
+func (v *LeaderAndISRResponse) GetVersion() int16 { return v.Version }
+func (v *LeaderAndISRResponse) IsFlexible() bool { return v.Version >= 4 }
+func (v *LeaderAndISRResponse) RequestKind() Request { return &LeaderAndISRRequest{Version: v.Version} }
+
+func (v *LeaderAndISRResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 0 && version <= 4 {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 4 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 5 {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 4 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *LeaderAndISRResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *LeaderAndISRResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *LeaderAndISRResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 0 && version <= 4 {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderAndISRResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 4 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if version >= 5 {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderAndISRResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaderAndISRResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 4 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrLeaderAndISRResponse returns a pointer to a default LeaderAndISRResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrLeaderAndISRResponse() *LeaderAndISRResponse {
+ var v LeaderAndISRResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaderAndISRResponse.
+func (v *LeaderAndISRResponse) Default() {
+}
+
+// NewLeaderAndISRResponse returns a default LeaderAndISRResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaderAndISRResponse() LeaderAndISRResponse {
+ var v LeaderAndISRResponse
+ v.Default()
+ return v
+}
+
+type StopReplicaRequestTopicPartitionState struct {
+ Partition int32
+
+ // This field has a default of -1.
+ LeaderEpoch int32
+
+ Delete bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to StopReplicaRequestTopicPartitionState.
+func (v *StopReplicaRequestTopicPartitionState) Default() {
+ v.LeaderEpoch = -1
+}
+
+// NewStopReplicaRequestTopicPartitionState returns a default StopReplicaRequestTopicPartitionState
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewStopReplicaRequestTopicPartitionState() StopReplicaRequestTopicPartitionState {
+ var v StopReplicaRequestTopicPartitionState
+ v.Default()
+ return v
+}
+
+type StopReplicaRequestTopic struct {
+ Topic string
+
+ Partition int32
+
+ Partitions []int32 // v1-v2
+
+ PartitionStates []StopReplicaRequestTopicPartitionState // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to StopReplicaRequestTopic.
+func (v *StopReplicaRequestTopic) Default() {
+}
+
+// NewStopReplicaRequestTopic returns a default StopReplicaRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewStopReplicaRequestTopic() StopReplicaRequestTopic {
+ var v StopReplicaRequestTopic
+ v.Default()
+ return v
+}
+
+// StopReplicaRequest is an advanced request that brokers use to stop replicas.
+//
+// As this is an advanced request and there is little reason to issue it as a
+// client, this request is undocumented.
+//
+// Kafka 2.2 introduced version 1, proposed in KIP-380, which changed the
+// layout of the struct to be more memory efficient.
+//
+// Kafka 2.6 introduced version 3, proposed in KIP-570, reorganizes partitions
+// to be stored and adds the leader epoch and delete partition fields per partition.
+// Kafka 3.4 introduced version 4 with KIP-866.
+type StopReplicaRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ControllerID int32
+
+ ControllerEpoch int32
+
+ // If KRaft controller id is used during migration. See KIP-866.
+ IsKRaftController bool // v4+
+
+ // This field has a default of -1.
+ BrokerEpoch int64 // v1+
+
+ DeletePartitions bool // v0-v2
+
+ Topics []StopReplicaRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*StopReplicaRequest) Key() int16 { return 5 }
+func (*StopReplicaRequest) MaxVersion() int16 { return 4 }
+func (v *StopReplicaRequest) SetVersion(version int16) { v.Version = version }
+func (v *StopReplicaRequest) GetVersion() int16 { return v.Version }
+func (v *StopReplicaRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *StopReplicaRequest) ResponseKind() Response {
+ r := &StopReplicaResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *StopReplicaRequest) RequestWith(ctx context.Context, r Requestor) (*StopReplicaResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*StopReplicaResponse)
+ return resp, err
+}
+
+func (v *StopReplicaRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ControllerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ControllerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 4 {
+ v := v.IsKRaftController
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 1 {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 0 && version <= 2 {
+ v := v.DeletePartitions
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 0 && version <= 0 {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 && version <= 2 {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.PartitionStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Delete
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *StopReplicaRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *StopReplicaRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *StopReplicaRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ControllerID = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerEpoch = v
+ }
+ if version >= 4 {
+ v := b.Bool()
+ s.IsKRaftController = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ if version >= 0 && version <= 2 {
+ v := b.Bool()
+ s.DeletePartitions = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]StopReplicaRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 0 && version <= 0 {
+ v := b.Int32()
+ s.Partition = v
+ }
+ if version >= 1 && version <= 2 {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if version >= 3 {
+ v := s.PartitionStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]StopReplicaRequestTopicPartitionState, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := b.Bool()
+ s.Delete = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.PartitionStates = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrStopReplicaRequest returns a pointer to a default StopReplicaRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrStopReplicaRequest() *StopReplicaRequest {
+ var v StopReplicaRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to StopReplicaRequest.
+func (v *StopReplicaRequest) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewStopReplicaRequest returns a default StopReplicaRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewStopReplicaRequest() StopReplicaRequest {
+ var v StopReplicaRequest
+ v.Default()
+ return v
+}
+
+type StopReplicaResponsePartition struct {
+ Topic string
+
+ Partition int32
+
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to StopReplicaResponsePartition.
+func (v *StopReplicaResponsePartition) Default() {
+}
+
+// NewStopReplicaResponsePartition returns a default StopReplicaResponsePartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewStopReplicaResponsePartition() StopReplicaResponsePartition {
+ var v StopReplicaResponsePartition
+ v.Default()
+ return v
+}
+
+// StopReplicasResponse is returned from a StopReplicasRequest.
+type StopReplicaResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Version 3 returns FENCED_LEADER_EPOCH if the leader is stale (KIP-570).
+ ErrorCode int16
+
+ Partitions []StopReplicaResponsePartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*StopReplicaResponse) Key() int16 { return 5 }
+func (*StopReplicaResponse) MaxVersion() int16 { return 4 }
+func (v *StopReplicaResponse) SetVersion(version int16) { v.Version = version }
+func (v *StopReplicaResponse) GetVersion() int16 { return v.Version }
+func (v *StopReplicaResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *StopReplicaResponse) RequestKind() Request { return &StopReplicaRequest{Version: v.Version} }
+
+func (v *StopReplicaResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *StopReplicaResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *StopReplicaResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *StopReplicaResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]StopReplicaResponsePartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrStopReplicaResponse returns a pointer to a default StopReplicaResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrStopReplicaResponse() *StopReplicaResponse {
+ var v StopReplicaResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to StopReplicaResponse.
+func (v *StopReplicaResponse) Default() {
+}
+
+// NewStopReplicaResponse returns a default StopReplicaResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewStopReplicaResponse() StopReplicaResponse {
+ var v StopReplicaResponse
+ v.Default()
+ return v
+}
+
+type UpdateMetadataRequestTopicPartition struct {
+ Topic string // v0-v4
+
+ Partition int32
+
+ ControllerEpoch int32
+
+ Leader int32
+
+ LeaderEpoch int32
+
+ ISR []int32
+
+ ZKVersion int32
+
+ Replicas []int32
+
+ OfflineReplicas []int32 // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateMetadataRequestTopicPartition.
+func (v *UpdateMetadataRequestTopicPartition) Default() {
+}
+
+// NewUpdateMetadataRequestTopicPartition returns a default UpdateMetadataRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateMetadataRequestTopicPartition() UpdateMetadataRequestTopicPartition {
+ var v UpdateMetadataRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type UpdateMetadataRequestTopicState struct {
+ Topic string
+
+ TopicID [16]byte // v7+
+
+ PartitionStates []UpdateMetadataRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateMetadataRequestTopicState.
+func (v *UpdateMetadataRequestTopicState) Default() {
+}
+
+// NewUpdateMetadataRequestTopicState returns a default UpdateMetadataRequestTopicState
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateMetadataRequestTopicState() UpdateMetadataRequestTopicState {
+ var v UpdateMetadataRequestTopicState
+ v.Default()
+ return v
+}
+
+type UpdateMetadataRequestLiveBrokerEndpoint struct {
+ Port int32
+
+ Host string
+
+ ListenerName string // v3+
+
+ SecurityProtocol int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateMetadataRequestLiveBrokerEndpoint.
+func (v *UpdateMetadataRequestLiveBrokerEndpoint) Default() {
+}
+
+// NewUpdateMetadataRequestLiveBrokerEndpoint returns a default UpdateMetadataRequestLiveBrokerEndpoint
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateMetadataRequestLiveBrokerEndpoint() UpdateMetadataRequestLiveBrokerEndpoint {
+ var v UpdateMetadataRequestLiveBrokerEndpoint
+ v.Default()
+ return v
+}
+
+type UpdateMetadataRequestLiveBroker struct {
+ ID int32
+
+ Host string
+
+ Port int32
+
+ Endpoints []UpdateMetadataRequestLiveBrokerEndpoint // v1+
+
+ Rack *string // v2+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateMetadataRequestLiveBroker.
+func (v *UpdateMetadataRequestLiveBroker) Default() {
+}
+
+// NewUpdateMetadataRequestLiveBroker returns a default UpdateMetadataRequestLiveBroker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateMetadataRequestLiveBroker() UpdateMetadataRequestLiveBroker {
+ var v UpdateMetadataRequestLiveBroker
+ v.Default()
+ return v
+}
+
+// UpdateMetadataRequest is an advanced request that brokers use to
+// issue metadata updates to each other.
+//
+// As this is an advanced request and there is little reason to issue it as a
+// client, this request is undocumented.
+//
+// Version 1 changed the layout of the live brokers.
+//
+// Kafka 2.2 introduced version 5, proposed in KIP-380, which changed the
+// layout of the struct to be more memory efficient.
+// Kafka 3.4 introduced version 8 with KIP-866.
+type UpdateMetadataRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ControllerID int32
+
+ // If KRaft controller id is used during migration. See KIP-866.
+ IsKRaftController bool // v8+
+
+ ControllerEpoch int32
+
+ // This field has a default of -1.
+ BrokerEpoch int64 // v5+
+
+ PartitionStates []UpdateMetadataRequestTopicPartition // v0-v4
+
+ TopicStates []UpdateMetadataRequestTopicState // v5+
+
+ LiveBrokers []UpdateMetadataRequestLiveBroker
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*UpdateMetadataRequest) Key() int16 { return 6 }
+func (*UpdateMetadataRequest) MaxVersion() int16 { return 8 }
+func (v *UpdateMetadataRequest) SetVersion(version int16) { v.Version = version }
+func (v *UpdateMetadataRequest) GetVersion() int16 { return v.Version }
+func (v *UpdateMetadataRequest) IsFlexible() bool { return v.Version >= 6 }
+func (v *UpdateMetadataRequest) ResponseKind() Response {
+ r := &UpdateMetadataResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *UpdateMetadataRequest) RequestWith(ctx context.Context, r Requestor) (*UpdateMetadataResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*UpdateMetadataResponse)
+ return resp, err
+}
+
+func (v *UpdateMetadataRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ {
+ v := v.ControllerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 8 {
+ v := v.IsKRaftController
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.ControllerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 5 {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 0 && version <= 4 {
+ v := v.PartitionStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 4 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ControllerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Leader
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ {
+ v := v.ZKVersion
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 4 {
+ v := v.OfflineReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 5 {
+ v := v.TopicStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 7 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.PartitionStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 4 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ControllerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Leader
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ {
+ v := v.ZKVersion
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 4 {
+ v := v.OfflineReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.LiveBrokers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 0 && version <= 0 {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 0 && version <= 0 {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 {
+ v := v.Endpoints
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.ListenerName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.SecurityProtocol
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 2 {
+ v := v.Rack
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *UpdateMetadataRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *UpdateMetadataRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *UpdateMetadataRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ControllerID = v
+ }
+ if version >= 8 {
+ v := b.Bool()
+ s.IsKRaftController = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerEpoch = v
+ }
+ if version >= 5 {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ if version >= 0 && version <= 4 {
+ v := s.PartitionStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]UpdateMetadataRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 4 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.Leader = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := s.ISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.ISR = v
+ }
+ {
+ v := b.Int32()
+ s.ZKVersion = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ if version >= 4 {
+ v := s.OfflineReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.OfflineReplicas = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.PartitionStates = v
+ }
+ if version >= 5 {
+ v := s.TopicStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]UpdateMetadataRequestTopicState, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 7 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.PartitionStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]UpdateMetadataRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 4 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.Leader = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := s.ISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.ISR = v
+ }
+ {
+ v := b.Int32()
+ s.ZKVersion = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ if version >= 4 {
+ v := s.OfflineReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.OfflineReplicas = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.PartitionStates = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.TopicStates = v
+ }
+ {
+ v := s.LiveBrokers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]UpdateMetadataRequestLiveBroker, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.ID = v
+ }
+ if version >= 0 && version <= 0 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ if version >= 0 && version <= 0 {
+ v := b.Int32()
+ s.Port = v
+ }
+ if version >= 1 {
+ v := s.Endpoints
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]UpdateMetadataRequestLiveBrokerEndpoint, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Port = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ListenerName = v
+ }
+ {
+ v := b.Int16()
+ s.SecurityProtocol = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Endpoints = v
+ }
+ if version >= 2 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Rack = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.LiveBrokers = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrUpdateMetadataRequest returns a pointer to a default UpdateMetadataRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrUpdateMetadataRequest() *UpdateMetadataRequest {
+ var v UpdateMetadataRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateMetadataRequest.
+func (v *UpdateMetadataRequest) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewUpdateMetadataRequest returns a default UpdateMetadataRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateMetadataRequest() UpdateMetadataRequest {
+ var v UpdateMetadataRequest
+ v.Default()
+ return v
+}
+
+// UpdateMetadataResponses is returned from an UpdateMetadataRequest.
+type UpdateMetadataResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*UpdateMetadataResponse) Key() int16 { return 6 }
+func (*UpdateMetadataResponse) MaxVersion() int16 { return 8 }
+func (v *UpdateMetadataResponse) SetVersion(version int16) { v.Version = version }
+func (v *UpdateMetadataResponse) GetVersion() int16 { return v.Version }
+func (v *UpdateMetadataResponse) IsFlexible() bool { return v.Version >= 6 }
+func (v *UpdateMetadataResponse) RequestKind() Request {
+ return &UpdateMetadataRequest{Version: v.Version}
+}
+
+func (v *UpdateMetadataResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *UpdateMetadataResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *UpdateMetadataResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *UpdateMetadataResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrUpdateMetadataResponse returns a pointer to a default UpdateMetadataResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrUpdateMetadataResponse() *UpdateMetadataResponse {
+ var v UpdateMetadataResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateMetadataResponse.
+func (v *UpdateMetadataResponse) Default() {
+}
+
+// NewUpdateMetadataResponse returns a default UpdateMetadataResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateMetadataResponse() UpdateMetadataResponse {
+ var v UpdateMetadataResponse
+ v.Default()
+ return v
+}
+
+// ControlledShutdownRequest is an advanced request that can be used to
+// sthudown a broker in a controlled manner.
+//
+// As this is an advanced request and there is little reason to issue it as a
+// client, this request is undocumented. However, the minimal amount of fields
+// here makes the usage rather obvious.
+//
+// Kafka 2.2.0 introduced version 2, proposed in KIP-380.
+//
+// Note that version 0 of this request uses a special encoding format
+// where the request does not include the client ID.
+type ControlledShutdownRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ BrokerID int32
+
+ // This field has a default of -1.
+ BrokerEpoch int64 // v2+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*ControlledShutdownRequest) Key() int16 { return 7 }
+func (*ControlledShutdownRequest) MaxVersion() int16 { return 3 }
+func (v *ControlledShutdownRequest) SetVersion(version int16) { v.Version = version }
+func (v *ControlledShutdownRequest) GetVersion() int16 { return v.Version }
+func (v *ControlledShutdownRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *ControlledShutdownRequest) ResponseKind() Response {
+ r := &ControlledShutdownResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ControlledShutdownRequest) RequestWith(ctx context.Context, r Requestor) (*ControlledShutdownResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ControlledShutdownResponse)
+ return resp, err
+}
+
+func (v *ControlledShutdownRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 2 {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ControlledShutdownRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ControlledShutdownRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ControlledShutdownRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ if version >= 2 {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrControlledShutdownRequest returns a pointer to a default ControlledShutdownRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrControlledShutdownRequest() *ControlledShutdownRequest {
+ var v ControlledShutdownRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ControlledShutdownRequest.
+func (v *ControlledShutdownRequest) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewControlledShutdownRequest returns a default ControlledShutdownRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewControlledShutdownRequest() ControlledShutdownRequest {
+ var v ControlledShutdownRequest
+ v.Default()
+ return v
+}
+
+type ControlledShutdownResponsePartitionsRemaining struct {
+ Topic string
+
+ Partition int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ControlledShutdownResponsePartitionsRemaining.
+func (v *ControlledShutdownResponsePartitionsRemaining) Default() {
+}
+
+// NewControlledShutdownResponsePartitionsRemaining returns a default ControlledShutdownResponsePartitionsRemaining
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewControlledShutdownResponsePartitionsRemaining() ControlledShutdownResponsePartitionsRemaining {
+ var v ControlledShutdownResponsePartitionsRemaining
+ v.Default()
+ return v
+}
+
+// ControlledShutdownResponse is returned from a ControlledShutdownRequest.
+type ControlledShutdownResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ErrorCode int16
+
+ PartitionsRemaining []ControlledShutdownResponsePartitionsRemaining
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*ControlledShutdownResponse) Key() int16 { return 7 }
+func (*ControlledShutdownResponse) MaxVersion() int16 { return 3 }
+func (v *ControlledShutdownResponse) SetVersion(version int16) { v.Version = version }
+func (v *ControlledShutdownResponse) GetVersion() int16 { return v.Version }
+func (v *ControlledShutdownResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *ControlledShutdownResponse) RequestKind() Request {
+ return &ControlledShutdownRequest{Version: v.Version}
+}
+
+func (v *ControlledShutdownResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.PartitionsRemaining
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ControlledShutdownResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ControlledShutdownResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ControlledShutdownResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.PartitionsRemaining
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ControlledShutdownResponsePartitionsRemaining, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.PartitionsRemaining = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrControlledShutdownResponse returns a pointer to a default ControlledShutdownResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrControlledShutdownResponse() *ControlledShutdownResponse {
+ var v ControlledShutdownResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ControlledShutdownResponse.
+func (v *ControlledShutdownResponse) Default() {
+}
+
+// NewControlledShutdownResponse returns a default ControlledShutdownResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewControlledShutdownResponse() ControlledShutdownResponse {
+ var v ControlledShutdownResponse
+ v.Default()
+ return v
+}
+
+type OffsetCommitRequestTopicPartition struct {
+ // Partition if a partition to commit offsets for.
+ Partition int32
+
+ // Offset is an offset to commit.
+ Offset int64
+
+ // Timestamp is the first iteration of tracking how long offset commits
+ // should persist in Kafka. This field only existed for v1.
+ // The expiration would be timestamp + offset.retention.minutes, or, if
+ // timestamp was zero, current time + offset.retention.minutes.
+ //
+ // This field has a default of -1.
+ Timestamp int64 // v1-v1
+
+ // LeaderEpoch, proposed in KIP-320 and introduced in Kafka 2.1.0,
+ // is the leader epoch of the record this request is committing.
+ //
+ // The initial leader epoch can be determined from a MetadataResponse.
+ // To skip log truncation checking, use -1.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32 // v6+
+
+ // Metadata is optional data to include with committing the offset. This
+ // can contain information such as which node is doing the committing, etc.
+ Metadata *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v8+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitRequestTopicPartition.
+func (v *OffsetCommitRequestTopicPartition) Default() {
+ v.Timestamp = -1
+ v.LeaderEpoch = -1
+}
+
+// NewOffsetCommitRequestTopicPartition returns a default OffsetCommitRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitRequestTopicPartition() OffsetCommitRequestTopicPartition {
+ var v OffsetCommitRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetCommitRequestTopic struct {
+ // Topic is a topic to commit offsets for.
+ Topic string
+
+ // Partitions contains partitions in a topic for which to commit offsets.
+ Partitions []OffsetCommitRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v8+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitRequestTopic.
+func (v *OffsetCommitRequestTopic) Default() {
+}
+
+// NewOffsetCommitRequestTopic returns a default OffsetCommitRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitRequestTopic() OffsetCommitRequestTopic {
+ var v OffsetCommitRequestTopic
+ v.Default()
+ return v
+}
+
+// OffsetCommitRequest commits offsets for consumed topics / partitions in
+// a group.
+type OffsetCommitRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Group is the group this request is committing offsets to.
+ Group string
+
+ // Generation being -1 and group being empty means the group is being used
+ // to store offsets only. No generation validation, no rebalancing.
+ //
+ // This field has a default of -1.
+ Generation int32 // v1+
+
+ // MemberID is the ID of the client issuing this request in the group.
+ MemberID string // v1+
+
+ // InstanceID is the instance ID of this member in the group (KIP-345).
+ InstanceID *string // v7+
+
+ // RetentionTimeMillis is how long this commit will persist in Kafka.
+ //
+ // This was introduced in v2, replacing an individual topic/partition's
+ // Timestamp from v1, and was removed in v5 with Kafka 2.1.0.
+ //
+ // This was removed because rarely committing consumers could have their
+ // offsets expired before committing, even though the consumer was still
+ // active. After restarting or rebalancing, the consumer would now not know
+ // the last committed offset and would have to start at the beginning or end,
+ // leading to duplicates or log loss.
+ //
+ // Post 2.1.0, if this field is empty, offsets are only deleted once the
+ // group is empty. Read KIP-211 for more details.
+ //
+ // This field has a default of -1.
+ RetentionTimeMillis int64 // v2-v4
+
+ // Topics is contains topics and partitions for which to commit offsets.
+ Topics []OffsetCommitRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v8+
+}
+
+func (*OffsetCommitRequest) Key() int16 { return 8 }
+func (*OffsetCommitRequest) MaxVersion() int16 { return 9 }
+func (v *OffsetCommitRequest) SetVersion(version int16) { v.Version = version }
+func (v *OffsetCommitRequest) GetVersion() int16 { return v.Version }
+func (v *OffsetCommitRequest) IsFlexible() bool { return v.Version >= 8 }
+func (v *OffsetCommitRequest) IsGroupCoordinatorRequest() {}
+func (v *OffsetCommitRequest) ResponseKind() Response {
+ r := &OffsetCommitResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *OffsetCommitRequest) RequestWith(ctx context.Context, r Requestor) (*OffsetCommitResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*OffsetCommitResponse)
+ return resp, err
+}
+
+func (v *OffsetCommitRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 8
+ _ = isFlexible
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.Generation
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 7 {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 2 && version <= 4 {
+ v := v.RetentionTimeMillis
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 && version <= 1 {
+ v := v.Timestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 6 {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Metadata
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *OffsetCommitRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetCommitRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetCommitRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 8
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.Generation = v
+ }
+ if version >= 1 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 7 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ if version >= 2 && version <= 4 {
+ v := b.Int64()
+ s.RetentionTimeMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetCommitRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetCommitRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ if version >= 1 && version <= 1 {
+ v := b.Int64()
+ s.Timestamp = v
+ }
+ if version >= 6 {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Metadata = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetCommitRequest returns a pointer to a default OffsetCommitRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetCommitRequest() *OffsetCommitRequest {
+ var v OffsetCommitRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitRequest.
+func (v *OffsetCommitRequest) Default() {
+ v.Generation = -1
+ v.RetentionTimeMillis = -1
+}
+
+// NewOffsetCommitRequest returns a default OffsetCommitRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitRequest() OffsetCommitRequest {
+ var v OffsetCommitRequest
+ v.Default()
+ return v
+}
+
+type OffsetCommitResponseTopicPartition struct {
+ // Partition is the partition in a topic this array slot corresponds to.
+ Partition int32
+
+ // ErrorCode is the error for this partition response.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // for the group.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // for the topic / partition.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the topic / partition does
+ // not exist.
+ //
+ // OFFSET_METADATA_TOO_LARGE is returned if the request metadata is
+ // larger than the brokers offset.metadata.max.bytes.
+ //
+ // INVALID_GROUP_ID is returned in the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available
+ // (due to the requested broker shutting down or it has not completed startup).
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group is loading.
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the coordinator
+ // for the requested group.
+ //
+ // ILLEGAL_GENERATION is returned if the request's generation ID is invalid.
+ //
+ // UNKNOWN_MEMBER_ID is returned if the group is dead or the group does not
+ // know of the request's member ID.
+ //
+ // REBALANCE_IN_PROGRESS is returned if the group is finishing a rebalance.
+ //
+ // INVALID_COMMIT_OFFSET_SIZE is returned if the offset commit results in
+ // a record batch that is too large (likely due to large metadata).
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v8+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitResponseTopicPartition.
+func (v *OffsetCommitResponseTopicPartition) Default() {
+}
+
+// NewOffsetCommitResponseTopicPartition returns a default OffsetCommitResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitResponseTopicPartition() OffsetCommitResponseTopicPartition {
+ var v OffsetCommitResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetCommitResponseTopic struct {
+ // Topic is the topic this offset commit response corresponds to.
+ Topic string
+
+ // Partitions contains responses for each requested partition in
+ // a topic.
+ Partitions []OffsetCommitResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v8+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitResponseTopic.
+func (v *OffsetCommitResponseTopic) Default() {
+}
+
+// NewOffsetCommitResponseTopic returns a default OffsetCommitResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitResponseTopic() OffsetCommitResponseTopic {
+ var v OffsetCommitResponseTopic
+ v.Default()
+ return v
+}
+
+// OffsetCommitResponse is returned from an OffsetCommitRequest.
+type OffsetCommitResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 4.
+ ThrottleMillis int32 // v3+
+
+ // Topics contains responses for each topic / partition in the commit request.
+ Topics []OffsetCommitResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v8+
+}
+
+func (*OffsetCommitResponse) Key() int16 { return 8 }
+func (*OffsetCommitResponse) MaxVersion() int16 { return 9 }
+func (v *OffsetCommitResponse) SetVersion(version int16) { v.Version = version }
+func (v *OffsetCommitResponse) GetVersion() int16 { return v.Version }
+func (v *OffsetCommitResponse) IsFlexible() bool { return v.Version >= 8 }
+func (v *OffsetCommitResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 4 }
+func (v *OffsetCommitResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *OffsetCommitResponse) RequestKind() Request { return &OffsetCommitRequest{Version: v.Version} }
+
+func (v *OffsetCommitResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 8
+ _ = isFlexible
+ if version >= 3 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *OffsetCommitResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetCommitResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetCommitResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 8
+ _ = isFlexible
+ s := v
+ if version >= 3 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetCommitResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetCommitResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetCommitResponse returns a pointer to a default OffsetCommitResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetCommitResponse() *OffsetCommitResponse {
+ var v OffsetCommitResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetCommitResponse.
+func (v *OffsetCommitResponse) Default() {
+}
+
+// NewOffsetCommitResponse returns a default OffsetCommitResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetCommitResponse() OffsetCommitResponse {
+ var v OffsetCommitResponse
+ v.Default()
+ return v
+}
+
+type OffsetFetchRequestTopic struct {
+ // Topic is a topic to fetch offsets for.
+ Topic string
+
+ // Partitions in a list of partitions in a group to fetch offsets for.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchRequestTopic.
+func (v *OffsetFetchRequestTopic) Default() {
+}
+
+// NewOffsetFetchRequestTopic returns a default OffsetFetchRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchRequestTopic() OffsetFetchRequestTopic {
+ var v OffsetFetchRequestTopic
+ v.Default()
+ return v
+}
+
+type OffsetFetchRequestGroupTopic struct {
+ Topic string
+
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchRequestGroupTopic.
+func (v *OffsetFetchRequestGroupTopic) Default() {
+}
+
+// NewOffsetFetchRequestGroupTopic returns a default OffsetFetchRequestGroupTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchRequestGroupTopic() OffsetFetchRequestGroupTopic {
+ var v OffsetFetchRequestGroupTopic
+ v.Default()
+ return v
+}
+
+type OffsetFetchRequestGroup struct {
+ Group string
+
+ // The member ID assigned by the group coordinator if using the new consumer protocol (KIP-848).
+ MemberID *string // v9+
+
+ // The member epoch if using the new consumer protocol (KIP-848).
+ //
+ // This field has a default of -1.
+ MemberEpoch int32 // v9+
+
+ Topics []OffsetFetchRequestGroupTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchRequestGroup.
+func (v *OffsetFetchRequestGroup) Default() {
+ v.MemberEpoch = -1
+}
+
+// NewOffsetFetchRequestGroup returns a default OffsetFetchRequestGroup
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchRequestGroup() OffsetFetchRequestGroup {
+ var v OffsetFetchRequestGroup
+ v.Default()
+ return v
+}
+
+// OffsetFetchRequest requests the most recent committed offsets for topic
+// partitions in a group.
+type OffsetFetchRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Group is the group to fetch offsets for.
+ Group string // v0-v7
+
+ // Topics contains topics to fetch offets for. Version 2+ allows this to be
+ // null to return all topics the client is authorized to describe in the group.
+ Topics []OffsetFetchRequestTopic // v0-v7
+
+ // Groups, introduced in v8 (Kafka 3.0), allows for fetching offsets for
+ // multiple groups at a time.
+ //
+ // The fields here mirror the old top level fields on the request, thus they
+ // are left undocumented. Refer to the top level documentation if necessary.
+ Groups []OffsetFetchRequestGroup // v8+
+
+ // RequireStable signifies whether the broker should wait on returning
+ // unstable offsets, instead setting a retryable error on the relevant
+ // unstable partitions (UNSTABLE_OFFSET_COMMIT). See KIP-447 for more
+ // details.
+ RequireStable bool // v7+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*OffsetFetchRequest) Key() int16 { return 9 }
+func (*OffsetFetchRequest) MaxVersion() int16 { return 9 }
+func (v *OffsetFetchRequest) SetVersion(version int16) { v.Version = version }
+func (v *OffsetFetchRequest) GetVersion() int16 { return v.Version }
+func (v *OffsetFetchRequest) IsFlexible() bool { return v.Version >= 6 }
+func (v *OffsetFetchRequest) IsGroupCoordinatorRequest() {}
+func (v *OffsetFetchRequest) ResponseKind() Response {
+ r := &OffsetFetchResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *OffsetFetchRequest) RequestWith(ctx context.Context, r Requestor) (*OffsetFetchResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*OffsetFetchResponse)
+ return resp, err
+}
+
+func (v *OffsetFetchRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ if version >= 0 && version <= 7 {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 0 && version <= 7 {
+ v := v.Topics
+ if version >= 2 {
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ } else {
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 8 {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 9 {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 9 {
+ v := v.MemberEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 7 {
+ v := v.RequireStable
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *OffsetFetchRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetFetchRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetFetchRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ if version >= 0 && version <= 7 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ if version >= 0 && version <= 7 {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 2 || l == 0 {
+ a = []OffsetFetchRequestTopic{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 8 {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchRequestGroup, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ if version >= 9 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 9 {
+ v := b.Int32()
+ s.MemberEpoch = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []OffsetFetchRequestGroupTopic{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchRequestGroupTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Groups = v
+ }
+ if version >= 7 {
+ v := b.Bool()
+ s.RequireStable = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetFetchRequest returns a pointer to a default OffsetFetchRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetFetchRequest() *OffsetFetchRequest {
+ var v OffsetFetchRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchRequest.
+func (v *OffsetFetchRequest) Default() {
+}
+
+// NewOffsetFetchRequest returns a default OffsetFetchRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchRequest() OffsetFetchRequest {
+ var v OffsetFetchRequest
+ v.Default()
+ return v
+}
+
+type OffsetFetchResponseTopicPartition struct {
+ // Partition is the partition in a topic this array slot corresponds to.
+ Partition int32
+
+ // Offset is the most recently committed offset for this topic partition
+ // in a group.
+ Offset int64
+
+ // LeaderEpoch is the leader epoch of the last consumed record.
+ //
+ // This was proposed in KIP-320 and introduced in Kafka 2.1.0 and allows
+ // clients to detect log truncation. See the KIP for more details.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32 // v5+
+
+ // Metadata is client provided metadata corresponding to the offset commit.
+ // This can be useful for adding who made the commit, etc.
+ Metadata *string
+
+ // ErrorCode is the error for this partition response.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to the group.
+ //
+ // INVALID_GROUP_ID is returned in the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available
+ // (due to the requested broker shutting down or it has not completed startup).
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group is loading.
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the coordinator
+ // for the requested group.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the requested topic or partition
+ // is unknown.
+ //
+ // UNSTABLE_OFFSET_COMMIT is returned for v7+ if the request set RequireStable.
+ // See KIP-447 for more details.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchResponseTopicPartition.
+func (v *OffsetFetchResponseTopicPartition) Default() {
+ v.LeaderEpoch = -1
+}
+
+// NewOffsetFetchResponseTopicPartition returns a default OffsetFetchResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchResponseTopicPartition() OffsetFetchResponseTopicPartition {
+ var v OffsetFetchResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetFetchResponseTopic struct {
+ // Topic is the topic this offset fetch response corresponds to.
+ Topic string
+
+ // Partitions contains responses for each requested partition in
+ // a topic.
+ Partitions []OffsetFetchResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchResponseTopic.
+func (v *OffsetFetchResponseTopic) Default() {
+}
+
+// NewOffsetFetchResponseTopic returns a default OffsetFetchResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchResponseTopic() OffsetFetchResponseTopic {
+ var v OffsetFetchResponseTopic
+ v.Default()
+ return v
+}
+
+type OffsetFetchResponseGroupTopicPartition struct {
+ Partition int32
+
+ Offset int64
+
+ // This field has a default of -1.
+ LeaderEpoch int32
+
+ Metadata *string
+
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchResponseGroupTopicPartition.
+func (v *OffsetFetchResponseGroupTopicPartition) Default() {
+ v.LeaderEpoch = -1
+}
+
+// NewOffsetFetchResponseGroupTopicPartition returns a default OffsetFetchResponseGroupTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchResponseGroupTopicPartition() OffsetFetchResponseGroupTopicPartition {
+ var v OffsetFetchResponseGroupTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetFetchResponseGroupTopic struct {
+ Topic string
+
+ Partitions []OffsetFetchResponseGroupTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchResponseGroupTopic.
+func (v *OffsetFetchResponseGroupTopic) Default() {
+}
+
+// NewOffsetFetchResponseGroupTopic returns a default OffsetFetchResponseGroupTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchResponseGroupTopic() OffsetFetchResponseGroupTopic {
+ var v OffsetFetchResponseGroupTopic
+ v.Default()
+ return v
+}
+
+type OffsetFetchResponseGroup struct {
+ Group string
+
+ Topics []OffsetFetchResponseGroupTopic
+
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchResponseGroup.
+func (v *OffsetFetchResponseGroup) Default() {
+}
+
+// NewOffsetFetchResponseGroup returns a default OffsetFetchResponseGroup
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchResponseGroup() OffsetFetchResponseGroup {
+ var v OffsetFetchResponseGroup
+ v.Default()
+ return v
+}
+
+// OffsetFetchResponse is returned from an OffsetFetchRequest.
+type OffsetFetchResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 4.
+ ThrottleMillis int32 // v3+
+
+ // Topics contains responses for each requested topic/partition.
+ Topics []OffsetFetchResponseTopic // v0-v7
+
+ // ErrorCode is a top level error code that applies to all topic/partitions.
+ // This will be any group error.
+ ErrorCode int16 // v2-v7
+
+ // Groups is the response for all groups. Each field mirrors the fields in the
+ // top level request, thus they are left undocumented. Refer to the top level
+ // documentation if necessary.
+ Groups []OffsetFetchResponseGroup // v8+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*OffsetFetchResponse) Key() int16 { return 9 }
+func (*OffsetFetchResponse) MaxVersion() int16 { return 9 }
+func (v *OffsetFetchResponse) SetVersion(version int16) { v.Version = version }
+func (v *OffsetFetchResponse) GetVersion() int16 { return v.Version }
+func (v *OffsetFetchResponse) IsFlexible() bool { return v.Version >= 6 }
+func (v *OffsetFetchResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 4 }
+func (v *OffsetFetchResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *OffsetFetchResponse) RequestKind() Request { return &OffsetFetchRequest{Version: v.Version} }
+
+func (v *OffsetFetchResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ if version >= 3 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 0 && version <= 7 {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 5 {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Metadata
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 2 && version <= 7 {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 8 {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Metadata
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *OffsetFetchResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetFetchResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetFetchResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ if version >= 3 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if version >= 0 && version <= 7 {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ if version >= 5 {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Metadata = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 2 && version <= 7 {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 8 {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchResponseGroup, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchResponseGroupTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetFetchResponseGroupTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Metadata = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Groups = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetFetchResponse returns a pointer to a default OffsetFetchResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetFetchResponse() *OffsetFetchResponse {
+ var v OffsetFetchResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetFetchResponse.
+func (v *OffsetFetchResponse) Default() {
+}
+
+// NewOffsetFetchResponse returns a default OffsetFetchResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetFetchResponse() OffsetFetchResponse {
+ var v OffsetFetchResponse
+ v.Default()
+ return v
+}
+
+// FindCoordinatorRequest requests the coordinator for a group or transaction.
+//
+// This coordinator is different from the broker leader coordinator. This
+// coordinator is the partition leader for the partition that is storing
+// the group or transaction ID.
+type FindCoordinatorRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // CoordinatorKey is the ID to use for finding the coordinator. For groups,
+ // this is the group name, for transactional producer, this is the
+ // transactional ID.
+ CoordinatorKey string // v0-v3
+
+ // CoordinatorType is the type that key is. Groups are type 0,
+ // transactional IDs are type 1.
+ CoordinatorType int8 // v1+
+
+ // CoordinatorKeys contains all keys to find the coordinator for.
+ CoordinatorKeys []string // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*FindCoordinatorRequest) Key() int16 { return 10 }
+func (*FindCoordinatorRequest) MaxVersion() int16 { return 4 }
+func (v *FindCoordinatorRequest) SetVersion(version int16) { v.Version = version }
+func (v *FindCoordinatorRequest) GetVersion() int16 { return v.Version }
+func (v *FindCoordinatorRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *FindCoordinatorRequest) ResponseKind() Response {
+ r := &FindCoordinatorResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *FindCoordinatorRequest) RequestWith(ctx context.Context, r Requestor) (*FindCoordinatorResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*FindCoordinatorResponse)
+ return resp, err
+}
+
+func (v *FindCoordinatorRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ if version >= 0 && version <= 3 {
+ v := v.CoordinatorKey
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.CoordinatorType
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if version >= 4 {
+ v := v.CoordinatorKeys
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *FindCoordinatorRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *FindCoordinatorRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *FindCoordinatorRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ if version >= 0 && version <= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.CoordinatorKey = v
+ }
+ if version >= 1 {
+ v := b.Int8()
+ s.CoordinatorType = v
+ }
+ if version >= 4 {
+ v := s.CoordinatorKeys
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.CoordinatorKeys = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrFindCoordinatorRequest returns a pointer to a default FindCoordinatorRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrFindCoordinatorRequest() *FindCoordinatorRequest {
+ var v FindCoordinatorRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FindCoordinatorRequest.
+func (v *FindCoordinatorRequest) Default() {
+}
+
+// NewFindCoordinatorRequest returns a default FindCoordinatorRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFindCoordinatorRequest() FindCoordinatorRequest {
+ var v FindCoordinatorRequest
+ v.Default()
+ return v
+}
+
+type FindCoordinatorResponseCoordinator struct {
+ Key string
+
+ NodeID int32
+
+ Host string
+
+ Port int32
+
+ ErrorCode int16
+
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FindCoordinatorResponseCoordinator.
+func (v *FindCoordinatorResponseCoordinator) Default() {
+}
+
+// NewFindCoordinatorResponseCoordinator returns a default FindCoordinatorResponseCoordinator
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFindCoordinatorResponseCoordinator() FindCoordinatorResponseCoordinator {
+ var v FindCoordinatorResponseCoordinator
+ v.Default()
+ return v
+}
+
+// FindCoordinatorResponse is returned from a FindCoordinatorRequest.
+type FindCoordinatorResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // ErrorCode is the error returned for the request.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if for a group ID request and the
+ // client is not authorized to describe groups.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned for a transactional ID
+ // request and the client is not authorized to describe transactional IDs.
+ //
+ // INVALID_REQUEST is returned if not asking for a known type (group,
+ // or transaction).
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available
+ // for the requested ID, which would be if the group or transactional topic
+ // does not exist or the partition the requested key maps to is not available.
+ ErrorCode int16 // v0-v3
+
+ // ErrorMessage is an informative message if the request errored.
+ ErrorMessage *string // v1-v3
+
+ // NodeID is the broker ID of the coordinator.
+ NodeID int32 // v0-v3
+
+ // Host is the host of the coordinator.
+ Host string // v0-v3
+
+ // Port is the port of the coordinator.
+ Port int32 // v0-v3
+
+ // Coordinators, introduced for KIP-699, is the bulk response for
+ // coordinators. The fields in the struct exactly match the original fields
+ // in the FindCoordinatorResponse, thus they are left undocumented.
+ Coordinators []FindCoordinatorResponseCoordinator // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*FindCoordinatorResponse) Key() int16 { return 10 }
+func (*FindCoordinatorResponse) MaxVersion() int16 { return 4 }
+func (v *FindCoordinatorResponse) SetVersion(version int16) { v.Version = version }
+func (v *FindCoordinatorResponse) GetVersion() int16 { return v.Version }
+func (v *FindCoordinatorResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *FindCoordinatorResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *FindCoordinatorResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *FindCoordinatorResponse) RequestKind() Request {
+ return &FindCoordinatorRequest{Version: v.Version}
+}
+
+func (v *FindCoordinatorResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 0 && version <= 3 {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 1 && version <= 3 {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 0 && version <= 3 {
+ v := v.NodeID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 0 && version <= 3 {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 0 && version <= 3 {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 4 {
+ v := v.Coordinators
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Key
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.NodeID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *FindCoordinatorResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *FindCoordinatorResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *FindCoordinatorResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if version >= 0 && version <= 3 {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 1 && version <= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if version >= 0 && version <= 3 {
+ v := b.Int32()
+ s.NodeID = v
+ }
+ if version >= 0 && version <= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ if version >= 0 && version <= 3 {
+ v := b.Int32()
+ s.Port = v
+ }
+ if version >= 4 {
+ v := s.Coordinators
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FindCoordinatorResponseCoordinator, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Key = v
+ }
+ {
+ v := b.Int32()
+ s.NodeID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ v := b.Int32()
+ s.Port = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Coordinators = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrFindCoordinatorResponse returns a pointer to a default FindCoordinatorResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrFindCoordinatorResponse() *FindCoordinatorResponse {
+ var v FindCoordinatorResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FindCoordinatorResponse.
+func (v *FindCoordinatorResponse) Default() {
+}
+
+// NewFindCoordinatorResponse returns a default FindCoordinatorResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFindCoordinatorResponse() FindCoordinatorResponse {
+ var v FindCoordinatorResponse
+ v.Default()
+ return v
+}
+
+type JoinGroupRequestProtocol struct {
+ // Name is a name of a protocol. This is arbitrary, but is used
+ // in the official client to agree on a partition balancing strategy.
+ //
+ // The official client uses range, roundrobin, or sticky (which was
+ // introduced in KIP-54).
+ Name string
+
+ // Metadata is arbitrary information to pass along with this
+ // protocol name for this member.
+ //
+ // Note that while this is not documented in any protocol page,
+ // this is usually a serialized GroupMemberMetadata as described in
+ // https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Client-side+Assignment+Proposal.
+ //
+ // The protocol metadata is where group members will communicate which
+ // topics they collectively as a group want to consume.
+ Metadata []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to JoinGroupRequestProtocol.
+func (v *JoinGroupRequestProtocol) Default() {
+}
+
+// NewJoinGroupRequestProtocol returns a default JoinGroupRequestProtocol
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewJoinGroupRequestProtocol() JoinGroupRequestProtocol {
+ var v JoinGroupRequestProtocol
+ v.Default()
+ return v
+}
+
+// JoinGroupRequest issues a request to join a Kafka group. This will create a
+// group if one does not exist. If joining an existing group, this may trigger
+// a group rebalance.
+//
+// This will trigger a group rebalance if the request is from the group leader,
+// or if the request is from a group member with different metadata, or if the
+// request is with a new group member.
+//
+// Version 4 introduced replying to joins of existing groups with
+// MEMBER_ID_REQUIRED, which requires re-issuing the join group with the
+// returned member ID. See KIP-394 for more details.
+//
+// Version 5 introduced InstanceID, allowing for more "static" membership.
+// See KIP-345 for more details.
+type JoinGroupRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Group is the group to join.
+ Group string
+
+ // SessionTimeoutMillis is how long a member in the group can go between
+ // heartbeats. If a member does not send a heartbeat within this timeout,
+ // the broker will remove the member from the group and initiate a rebalance.
+ SessionTimeoutMillis int32
+
+ // RebalanceTimeoutMillis is how long the broker waits for members to join a group
+ // once a rebalance begins. Kafka waits for the longest rebalance of all
+ // members in the group. Member sessions are still alive; heartbeats will be
+ // replied to with REBALANCE_IN_PROGRESS. Those members must transition to
+ // joining within this rebalance timeout. Members that do not rejoin within
+ // this timeout will be removed from the group. Members must commit offsets
+ // within this timeout.
+ //
+ // The first join for a new group has a 3 second grace period for other
+ // members to join; this grace period is extended until the RebalanceTimeoutMillis
+ // is up or until 3 seconds lapse with no new members.
+ //
+ // This field has a default of -1.
+ RebalanceTimeoutMillis int32 // v1+
+
+ // MemberID is the member ID to join the group with. When joining a group for
+ // the first time, use the empty string. The response will contain the member
+ // ID that should be used going forward.
+ MemberID string
+
+ // InstanceID is a user configured ID that is used for making a group
+ // member "static", allowing many rebalances to be avoided.
+ InstanceID *string // v5+
+
+ // ProtocolType is the "type" of protocol being used for the join group.
+ // The initial group creation sets the type; all additional members must
+ // have the same type or they will be rejected.
+ //
+ // This is completely arbitrary, but the Java client and everything else
+ // uses "consumer" as the protocol type.
+ ProtocolType string
+
+ // Protocols contains arbitrary information that group members use
+ // for rebalancing. All group members must agree on at least one protocol
+ // name.
+ Protocols []JoinGroupRequestProtocol
+
+ // Reason is an optional reason the member is joining (or rejoining) the
+ // group (KIP-800, Kafka 3.2+).
+ Reason *string // v8+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*JoinGroupRequest) Key() int16 { return 11 }
+func (*JoinGroupRequest) MaxVersion() int16 { return 9 }
+func (v *JoinGroupRequest) SetVersion(version int16) { v.Version = version }
+func (v *JoinGroupRequest) GetVersion() int16 { return v.Version }
+func (v *JoinGroupRequest) IsFlexible() bool { return v.Version >= 6 }
+func (v *JoinGroupRequest) IsGroupCoordinatorRequest() {}
+func (v *JoinGroupRequest) ResponseKind() Response {
+ r := &JoinGroupResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *JoinGroupRequest) RequestWith(ctx context.Context, r Requestor) (*JoinGroupResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*JoinGroupResponse)
+ return resp, err
+}
+
+func (v *JoinGroupRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.SessionTimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 {
+ v := v.RebalanceTimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ProtocolType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Protocols
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Metadata
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 8 {
+ v := v.Reason
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *JoinGroupRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *JoinGroupRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *JoinGroupRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ v := b.Int32()
+ s.SessionTimeoutMillis = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.RebalanceTimeoutMillis = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ProtocolType = v
+ }
+ {
+ v := s.Protocols
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]JoinGroupRequestProtocol, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.Metadata = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Protocols = v
+ }
+ if version >= 8 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Reason = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrJoinGroupRequest returns a pointer to a default JoinGroupRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrJoinGroupRequest() *JoinGroupRequest {
+ var v JoinGroupRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to JoinGroupRequest.
+func (v *JoinGroupRequest) Default() {
+ v.RebalanceTimeoutMillis = -1
+}
+
+// NewJoinGroupRequest returns a default JoinGroupRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewJoinGroupRequest() JoinGroupRequest {
+ var v JoinGroupRequest
+ v.Default()
+ return v
+}
+
+type JoinGroupResponseMember struct {
+ // MemberID is a member in this group.
+ MemberID string
+
+ // InstanceID is an instance ID of a member in this group (KIP-345).
+ InstanceID *string // v5+
+
+ // ProtocolMetadata is the metadata for this member for this protocol.
+ // This is usually of type GroupMemberMetadata.
+ ProtocolMetadata []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to JoinGroupResponseMember.
+func (v *JoinGroupResponseMember) Default() {
+}
+
+// NewJoinGroupResponseMember returns a default JoinGroupResponseMember
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewJoinGroupResponseMember() JoinGroupResponseMember {
+ var v JoinGroupResponseMember
+ v.Default()
+ return v
+}
+
+// JoinGroupResponse is returned from a JoinGroupRequest.
+type JoinGroupResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 3.
+ ThrottleMillis int32 // v2+
+
+ // ErrorCode is the error for the join group request.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to the group (no read perms).
+ //
+ // INVALID_GROUP_ID is returned in the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available
+ // (due to the requested broker shutting down or it has not completed startup).
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group is loading.
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the coordinator
+ // for the requested group.
+ //
+ // INVALID_SESSION_TIMEOUT is returned if the requested SessionTimeout is
+ // not within the broker's group.{min,max}.session.timeout.ms.
+ //
+ // INCONSISTENT_GROUP_PROTOCOL is returned if the requested protocols are
+ // incompatible with the existing group member's protocols, or if the join
+ // was for a new group but contained no protocols.
+ //
+ // UNKNOWN_MEMBER_ID is returned is the requested group is dead (likely
+ // just migrated to another coordinator or the group is temporarily unstable),
+ // or if the request was for a new group but contained a non-empty member ID,
+ // or if the group does not have the requested member ID (and the client must
+ // do the new-join-group dance).
+ //
+ // MEMBER_ID_REQUIRED is returned on the initial join of an existing group.
+ // This error was proposed in KIP-394 and introduced in Kafka 2.2.0 to
+ // prevent flaky clients from continually triggering rebalances and prevent
+ // these clients from consuming RAM with metadata. If a client sees
+ // this error, it should re-issue the join with the MemberID in the response.
+ // Non-flaky clients will join with this new member ID, but flaky clients
+ // will not join quickly enough before the pending member ID is rotated out
+ // due to hitting the session.timeout.ms.
+ //
+ // GROUP_MAX_SIZE_REACHED is returned as of Kafka 2.2.0 if the group has
+ // reached a broker's group.max.size.
+ ErrorCode int16
+
+ // Generation is the current "generation" of this group.
+ //
+ // This field has a default of -1.
+ Generation int32
+
+ // ProtocolType is the "type" of protocol being used for this group.
+ ProtocolType *string // v7+
+
+ // Protocol is the agreed upon protocol name (i.e. "sticky", "range").
+ //
+ // v7 of this response changed this field to be nullable.
+ Protocol *string
+
+ // LeaderID is the leader member.
+ LeaderID string
+
+ // True if the leader must skip running the assignment (KIP-814, Kafka 3.2+).
+ SkipAssignment bool // v9+
+
+ // MemberID is the member of the receiving client.
+ MemberID string
+
+ // Members contains all other members of this group. Only the group leader
+ // receives the members. The leader is responsible for balancing subscribed
+ // topic partitions and replying appropriately in a SyncGroup request.
+ Members []JoinGroupResponseMember
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v6+
+}
+
+func (*JoinGroupResponse) Key() int16 { return 11 }
+func (*JoinGroupResponse) MaxVersion() int16 { return 9 }
+func (v *JoinGroupResponse) SetVersion(version int16) { v.Version = version }
+func (v *JoinGroupResponse) GetVersion() int16 { return v.Version }
+func (v *JoinGroupResponse) IsFlexible() bool { return v.Version >= 6 }
+func (v *JoinGroupResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 3 }
+func (v *JoinGroupResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *JoinGroupResponse) RequestKind() Request { return &JoinGroupRequest{Version: v.Version} }
+
+func (v *JoinGroupResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ if version >= 2 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Generation
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 7 {
+ v := v.ProtocolType
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Protocol
+ if version < 7 {
+ var vv string
+ if v != nil {
+ vv = *v
+ }
+ {
+ v := vv
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ } else {
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ }
+ {
+ v := v.LeaderID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 9 {
+ v := v.SkipAssignment
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Members
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ProtocolMetadata
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *JoinGroupResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *JoinGroupResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *JoinGroupResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 6
+ _ = isFlexible
+ s := v
+ if version >= 2 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.Generation = v
+ }
+ if version >= 7 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ProtocolType = v
+ }
+ {
+ var v *string
+ if version < 7 {
+ var vv string
+ if isFlexible {
+ if unsafe {
+ vv = b.UnsafeCompactString()
+ } else {
+ vv = b.CompactString()
+ }
+ } else {
+ if unsafe {
+ vv = b.UnsafeString()
+ } else {
+ vv = b.String()
+ }
+ }
+ v = &vv
+ } else {
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ }
+ s.Protocol = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.LeaderID = v
+ }
+ if version >= 9 {
+ v := b.Bool()
+ s.SkipAssignment = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ {
+ v := s.Members
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]JoinGroupResponseMember, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.ProtocolMetadata = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Members = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrJoinGroupResponse returns a pointer to a default JoinGroupResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrJoinGroupResponse() *JoinGroupResponse {
+ var v JoinGroupResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to JoinGroupResponse.
+func (v *JoinGroupResponse) Default() {
+ v.Generation = -1
+}
+
+// NewJoinGroupResponse returns a default JoinGroupResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewJoinGroupResponse() JoinGroupResponse {
+ var v JoinGroupResponse
+ v.Default()
+ return v
+}
+
+// HeartbeatRequest issues a heartbeat for a member in a group, ensuring that
+// Kafka does not expire the member from the group.
+type HeartbeatRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Group is the group ID this heartbeat is for.
+ Group string
+
+ // Generation is the group generation this heartbeat is for.
+ Generation int32
+
+ // MemberID is the member ID this member is for.
+ MemberID string
+
+ // InstanceID is the instance ID of this member in the group (KIP-345).
+ InstanceID *string // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*HeartbeatRequest) Key() int16 { return 12 }
+func (*HeartbeatRequest) MaxVersion() int16 { return 4 }
+func (v *HeartbeatRequest) SetVersion(version int16) { v.Version = version }
+func (v *HeartbeatRequest) GetVersion() int16 { return v.Version }
+func (v *HeartbeatRequest) IsFlexible() bool { return v.Version >= 4 }
+func (v *HeartbeatRequest) IsGroupCoordinatorRequest() {}
+func (v *HeartbeatRequest) ResponseKind() Response {
+ r := &HeartbeatResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *HeartbeatRequest) RequestWith(ctx context.Context, r Requestor) (*HeartbeatResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*HeartbeatResponse)
+ return resp, err
+}
+
+func (v *HeartbeatRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Generation
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *HeartbeatRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *HeartbeatRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *HeartbeatRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ v := b.Int32()
+ s.Generation = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrHeartbeatRequest returns a pointer to a default HeartbeatRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrHeartbeatRequest() *HeartbeatRequest {
+ var v HeartbeatRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to HeartbeatRequest.
+func (v *HeartbeatRequest) Default() {
+}
+
+// NewHeartbeatRequest returns a default HeartbeatRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewHeartbeatRequest() HeartbeatRequest {
+ var v HeartbeatRequest
+ v.Default()
+ return v
+}
+
+// HeartbeatResponse is returned from a HeartbeatRequest.
+type HeartbeatResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // ErrorCode is the error for the heartbeat request.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to the group (no read perms).
+ //
+ // INVALID_GROUP_ID is returned in the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available
+ // (due to the requested broker shutting down or it has not completed startup).
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the coordinator
+ // for the requested group.
+ //
+ // UNKNOWN_MEMBER_ID is returned if the member ID is not a part of the group,
+ // or if the group is empty or dead.
+ //
+ // ILLEGAL_GENERATION is returned if the request's generation ID is invalid.
+ //
+ // REBALANCE_IN_PROGRESS is returned if the group is currently rebalancing.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*HeartbeatResponse) Key() int16 { return 12 }
+func (*HeartbeatResponse) MaxVersion() int16 { return 4 }
+func (v *HeartbeatResponse) SetVersion(version int16) { v.Version = version }
+func (v *HeartbeatResponse) GetVersion() int16 { return v.Version }
+func (v *HeartbeatResponse) IsFlexible() bool { return v.Version >= 4 }
+func (v *HeartbeatResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *HeartbeatResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *HeartbeatResponse) RequestKind() Request { return &HeartbeatRequest{Version: v.Version} }
+
+func (v *HeartbeatResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *HeartbeatResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *HeartbeatResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *HeartbeatResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrHeartbeatResponse returns a pointer to a default HeartbeatResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrHeartbeatResponse() *HeartbeatResponse {
+ var v HeartbeatResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to HeartbeatResponse.
+func (v *HeartbeatResponse) Default() {
+}
+
+// NewHeartbeatResponse returns a default HeartbeatResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewHeartbeatResponse() HeartbeatResponse {
+ var v HeartbeatResponse
+ v.Default()
+ return v
+}
+
+type LeaveGroupRequestMember struct {
+ MemberID string
+
+ InstanceID *string
+
+ // Reason is an optional reason why this member is leaving the group
+ // (KIP-800, Kafka 3.2+).
+ Reason *string // v5+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaveGroupRequestMember.
+func (v *LeaveGroupRequestMember) Default() {
+}
+
+// NewLeaveGroupRequestMember returns a default LeaveGroupRequestMember
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaveGroupRequestMember() LeaveGroupRequestMember {
+ var v LeaveGroupRequestMember
+ v.Default()
+ return v
+}
+
+// LeaveGroupRequest issues a request for a group member to leave the group,
+// triggering a group rebalance.
+//
+// Version 3 changed removed MemberID and added a batch instance+member ID
+// way of leaving a group.
+type LeaveGroupRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Group is the group to leave.
+ Group string
+
+ // MemberID is the member that is leaving.
+ MemberID string // v0-v2
+
+ // Members are member and group instance IDs to cause to leave a group.
+ Members []LeaveGroupRequestMember // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*LeaveGroupRequest) Key() int16 { return 13 }
+func (*LeaveGroupRequest) MaxVersion() int16 { return 5 }
+func (v *LeaveGroupRequest) SetVersion(version int16) { v.Version = version }
+func (v *LeaveGroupRequest) GetVersion() int16 { return v.Version }
+func (v *LeaveGroupRequest) IsFlexible() bool { return v.Version >= 4 }
+func (v *LeaveGroupRequest) IsGroupCoordinatorRequest() {}
+func (v *LeaveGroupRequest) ResponseKind() Response {
+ r := &LeaveGroupResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *LeaveGroupRequest) RequestWith(ctx context.Context, r Requestor) (*LeaveGroupResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*LeaveGroupResponse)
+ return resp, err
+}
+
+func (v *LeaveGroupRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 0 && version <= 2 {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.Members
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.Reason
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *LeaveGroupRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *LeaveGroupRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *LeaveGroupRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ if version >= 0 && version <= 2 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 3 {
+ v := s.Members
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaveGroupRequestMember, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Reason = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Members = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrLeaveGroupRequest returns a pointer to a default LeaveGroupRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrLeaveGroupRequest() *LeaveGroupRequest {
+ var v LeaveGroupRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaveGroupRequest.
+func (v *LeaveGroupRequest) Default() {
+}
+
+// NewLeaveGroupRequest returns a default LeaveGroupRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaveGroupRequest() LeaveGroupRequest {
+ var v LeaveGroupRequest
+ v.Default()
+ return v
+}
+
+type LeaveGroupResponseMember struct {
+ MemberID string
+
+ InstanceID *string
+
+ // An individual member's leave error code.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaveGroupResponseMember.
+func (v *LeaveGroupResponseMember) Default() {
+}
+
+// NewLeaveGroupResponseMember returns a default LeaveGroupResponseMember
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaveGroupResponseMember() LeaveGroupResponseMember {
+ var v LeaveGroupResponseMember
+ v.Default()
+ return v
+}
+
+// LeaveGroupResponse is returned from a LeaveGroupRequest.
+type LeaveGroupResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // ErrorCode is the error for the leave group request.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to the group (no read perms).
+ //
+ // INVALID_GROUP_ID is returned in the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available
+ // (due to the requested broker shutting down or it has not completed startup).
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group is loading.
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the coordinator
+ // for the requested group.
+ //
+ // UNKNOWN_MEMBER_ID is returned if the member ID is not a part of the group,
+ // or if the group is empty or dead.
+ ErrorCode int16
+
+ // Members are the list of members and group instance IDs that left the group.
+ Members []LeaveGroupResponseMember // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*LeaveGroupResponse) Key() int16 { return 13 }
+func (*LeaveGroupResponse) MaxVersion() int16 { return 5 }
+func (v *LeaveGroupResponse) SetVersion(version int16) { v.Version = version }
+func (v *LeaveGroupResponse) GetVersion() int16 { return v.Version }
+func (v *LeaveGroupResponse) IsFlexible() bool { return v.Version >= 4 }
+func (v *LeaveGroupResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *LeaveGroupResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *LeaveGroupResponse) RequestKind() Request { return &LeaveGroupRequest{Version: v.Version} }
+
+func (v *LeaveGroupResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 3 {
+ v := v.Members
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *LeaveGroupResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *LeaveGroupResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *LeaveGroupResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 3 {
+ v := s.Members
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]LeaveGroupResponseMember, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Members = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrLeaveGroupResponse returns a pointer to a default LeaveGroupResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrLeaveGroupResponse() *LeaveGroupResponse {
+ var v LeaveGroupResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to LeaveGroupResponse.
+func (v *LeaveGroupResponse) Default() {
+}
+
+// NewLeaveGroupResponse returns a default LeaveGroupResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewLeaveGroupResponse() LeaveGroupResponse {
+ var v LeaveGroupResponse
+ v.Default()
+ return v
+}
+
+type SyncGroupRequestGroupAssignment struct {
+ // MemberID is the member this assignment is for.
+ MemberID string
+
+ // MemberAssignment is the assignment for this member. This is typically
+ // of type GroupMemberAssignment.
+ MemberAssignment []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to SyncGroupRequestGroupAssignment.
+func (v *SyncGroupRequestGroupAssignment) Default() {
+}
+
+// NewSyncGroupRequestGroupAssignment returns a default SyncGroupRequestGroupAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewSyncGroupRequestGroupAssignment() SyncGroupRequestGroupAssignment {
+ var v SyncGroupRequestGroupAssignment
+ v.Default()
+ return v
+}
+
+// SyncGroupRequest is issued by all group members after they receive a a
+// response for JoinGroup. The group leader is responsible for sending member
+// assignments with the request; all other members do not.
+//
+// Once the leader sends the group assignment, all members will be replied to.
+type SyncGroupRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Group is the group ID this sync group is for.
+ Group string
+
+ // Generation is the group generation this sync is for.
+ Generation int32
+
+ // MemberID is the member ID this member is.
+ MemberID string
+
+ // InstanceID is the instance ID of this member in the group (KIP-345).
+ InstanceID *string // v3+
+
+ // ProtocolType is the "type" of protocol being used for this group.
+ ProtocolType *string // v5+
+
+ // Protocol is the agreed upon protocol name (i.e. "sticky", "range").
+ Protocol *string // v5+
+
+ // GroupAssignment, sent only from the group leader, is the topic partition
+ // assignment it has decided on for all members.
+ GroupAssignment []SyncGroupRequestGroupAssignment
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*SyncGroupRequest) Key() int16 { return 14 }
+func (*SyncGroupRequest) MaxVersion() int16 { return 5 }
+func (v *SyncGroupRequest) SetVersion(version int16) { v.Version = version }
+func (v *SyncGroupRequest) GetVersion() int16 { return v.Version }
+func (v *SyncGroupRequest) IsFlexible() bool { return v.Version >= 4 }
+func (v *SyncGroupRequest) IsGroupCoordinatorRequest() {}
+func (v *SyncGroupRequest) ResponseKind() Response {
+ r := &SyncGroupResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *SyncGroupRequest) RequestWith(ctx context.Context, r Requestor) (*SyncGroupResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*SyncGroupResponse)
+ return resp, err
+}
+
+func (v *SyncGroupRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Generation
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.ProtocolType
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.Protocol
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.GroupAssignment
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MemberAssignment
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *SyncGroupRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *SyncGroupRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *SyncGroupRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ v := b.Int32()
+ s.Generation = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ProtocolType = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Protocol = v
+ }
+ {
+ v := s.GroupAssignment
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]SyncGroupRequestGroupAssignment, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.MemberAssignment = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.GroupAssignment = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrSyncGroupRequest returns a pointer to a default SyncGroupRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrSyncGroupRequest() *SyncGroupRequest {
+ var v SyncGroupRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to SyncGroupRequest.
+func (v *SyncGroupRequest) Default() {
+}
+
+// NewSyncGroupRequest returns a default SyncGroupRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewSyncGroupRequest() SyncGroupRequest {
+ var v SyncGroupRequest
+ v.Default()
+ return v
+}
+
+// SyncGroupResponse is returned from a SyncGroupRequest.
+type SyncGroupResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // ErrorCode is the error for the sync group request.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to the group (no read perms).
+ //
+ // INVALID_GROUP_ID is returned in the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available.
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the coordinator
+ // for the requested group.
+ //
+ // UNKNOWN_MEMBER_ID is returned if the member ID is not a part of the group,
+ // or if the group is empty or dead.
+ //
+ // ILLEGAL_GENERATION is returned if the request's generation ID is invalid.
+ //
+ // REBALANCE_IN_PROGRESS is returned if the group switched back to rebalancing.
+ //
+ // UNKNOWN_SERVER_ERROR is returned if the store of the group assignment
+ // resulted in a too large message.
+ ErrorCode int16
+
+ // ProtocolType is the "type" of protocol being used for this group.
+ ProtocolType *string // v5+
+
+ // Protocol is the agreed upon protocol name (i.e. "sticky", "range").
+ Protocol *string // v5+
+
+ // MemberAssignment is the assignment for this member that the leader
+ // determined.
+ MemberAssignment []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*SyncGroupResponse) Key() int16 { return 14 }
+func (*SyncGroupResponse) MaxVersion() int16 { return 5 }
+func (v *SyncGroupResponse) SetVersion(version int16) { v.Version = version }
+func (v *SyncGroupResponse) GetVersion() int16 { return v.Version }
+func (v *SyncGroupResponse) IsFlexible() bool { return v.Version >= 4 }
+func (v *SyncGroupResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *SyncGroupResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *SyncGroupResponse) RequestKind() Request { return &SyncGroupRequest{Version: v.Version} }
+
+func (v *SyncGroupResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 5 {
+ v := v.ProtocolType
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.Protocol
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.MemberAssignment
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *SyncGroupResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *SyncGroupResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *SyncGroupResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ProtocolType = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Protocol = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.MemberAssignment = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrSyncGroupResponse returns a pointer to a default SyncGroupResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrSyncGroupResponse() *SyncGroupResponse {
+ var v SyncGroupResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to SyncGroupResponse.
+func (v *SyncGroupResponse) Default() {
+}
+
+// NewSyncGroupResponse returns a default SyncGroupResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewSyncGroupResponse() SyncGroupResponse {
+ var v SyncGroupResponse
+ v.Default()
+ return v
+}
+
+// DescribeGroupsRequest requests metadata for group IDs.
+type DescribeGroupsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Groups is an array of group IDs to request metadata for.
+ Groups []string
+
+ // IncludeAuthorizedOperations, introduced in Kafka 2.3.0, specifies
+ // whether to include a bitfield of AclOperations this client can perform
+ // on the groups. See KIP-430 for more details.
+ IncludeAuthorizedOperations bool // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+func (*DescribeGroupsRequest) Key() int16 { return 15 }
+func (*DescribeGroupsRequest) MaxVersion() int16 { return 5 }
+func (v *DescribeGroupsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeGroupsRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeGroupsRequest) IsFlexible() bool { return v.Version >= 5 }
+func (v *DescribeGroupsRequest) IsGroupCoordinatorRequest() {}
+func (v *DescribeGroupsRequest) ResponseKind() Response {
+ r := &DescribeGroupsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeGroupsRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeGroupsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeGroupsResponse)
+ return resp, err
+}
+
+func (v *DescribeGroupsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ if version >= 3 {
+ v := v.IncludeAuthorizedOperations
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeGroupsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeGroupsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeGroupsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ s := v
+ {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.Groups = v
+ }
+ if version >= 3 {
+ v := b.Bool()
+ s.IncludeAuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeGroupsRequest returns a pointer to a default DescribeGroupsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeGroupsRequest() *DescribeGroupsRequest {
+ var v DescribeGroupsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeGroupsRequest.
+func (v *DescribeGroupsRequest) Default() {
+}
+
+// NewDescribeGroupsRequest returns a default DescribeGroupsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeGroupsRequest() DescribeGroupsRequest {
+ var v DescribeGroupsRequest
+ v.Default()
+ return v
+}
+
+type DescribeGroupsResponseGroupMember struct {
+ // MemberID is the member ID of a member in this group.
+ MemberID string
+
+ // InstanceID is the instance ID of this member in the group (KIP-345).
+ InstanceID *string // v4+
+
+ // ClientID is the client ID used by this member.
+ ClientID string
+
+ // ClientHost is the host this client is running on.
+ ClientHost string
+
+ // ProtocolMetadata is the metadata this member included when joining
+ // the group. If using normal (Java-like) consumers, this will be of
+ // type GroupMemberMetadata.
+ ProtocolMetadata []byte
+
+ // MemberAssignment is the assignment for this member in the group.
+ // If using normal (Java-like) consumers, this will be of type
+ // GroupMemberAssignment.
+ MemberAssignment []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeGroupsResponseGroupMember.
+func (v *DescribeGroupsResponseGroupMember) Default() {
+}
+
+// NewDescribeGroupsResponseGroupMember returns a default DescribeGroupsResponseGroupMember
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeGroupsResponseGroupMember() DescribeGroupsResponseGroupMember {
+ var v DescribeGroupsResponseGroupMember
+ v.Default()
+ return v
+}
+
+type DescribeGroupsResponseGroup struct {
+ // ErrorCode is the error code for an individual group in a request.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to describe a group.
+ //
+ // INVALID_GROUP_ID is returned if the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator for this
+ // group is not yet active.
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group is loading.
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the
+ // coordinator for this group.
+ ErrorCode int16
+
+ // Group is the id of this group.
+ Group string
+
+ // State is the state this group is in.
+ State string
+
+ // ProtocolType is the "type" of protocol being used for this group.
+ ProtocolType string
+
+ // Protocol is the agreed upon protocol for all members in this group.
+ Protocol string
+
+ // Members contains members in this group.
+ Members []DescribeGroupsResponseGroupMember
+
+ // AuthorizedOperations is a bitfield containing which operations the
+ // the client is allowed to perform on this group.
+ // This is only returned if requested.
+ //
+ // This field has a default of -2147483648.
+ AuthorizedOperations int32 // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeGroupsResponseGroup.
+func (v *DescribeGroupsResponseGroup) Default() {
+ v.AuthorizedOperations = -2147483648
+}
+
+// NewDescribeGroupsResponseGroup returns a default DescribeGroupsResponseGroup
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeGroupsResponseGroup() DescribeGroupsResponseGroup {
+ var v DescribeGroupsResponseGroup
+ v.Default()
+ return v
+}
+
+// DescribeGroupsResponse is returned from a DescribeGroupsRequest.
+type DescribeGroupsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // Groups is an array of group metadata.
+ Groups []DescribeGroupsResponseGroup
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+func (*DescribeGroupsResponse) Key() int16 { return 15 }
+func (*DescribeGroupsResponse) MaxVersion() int16 { return 5 }
+func (v *DescribeGroupsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeGroupsResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeGroupsResponse) IsFlexible() bool { return v.Version >= 5 }
+func (v *DescribeGroupsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *DescribeGroupsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *DescribeGroupsResponse) RequestKind() Request {
+ return &DescribeGroupsRequest{Version: v.Version}
+}
+
+func (v *DescribeGroupsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.State
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProtocolType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Protocol
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Members
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 4 {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ClientID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ClientHost
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProtocolMetadata
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ {
+ v := v.MemberAssignment
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 3 {
+ v := v.AuthorizedOperations
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeGroupsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeGroupsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeGroupsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeGroupsResponseGroup, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.State = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ProtocolType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Protocol = v
+ }
+ {
+ v := s.Members
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeGroupsResponseGroupMember, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 4 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClientID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClientHost = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.ProtocolMetadata = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.MemberAssignment = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Members = v
+ }
+ if version >= 3 {
+ v := b.Int32()
+ s.AuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Groups = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeGroupsResponse returns a pointer to a default DescribeGroupsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeGroupsResponse() *DescribeGroupsResponse {
+ var v DescribeGroupsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeGroupsResponse.
+func (v *DescribeGroupsResponse) Default() {
+}
+
+// NewDescribeGroupsResponse returns a default DescribeGroupsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeGroupsResponse() DescribeGroupsResponse {
+ var v DescribeGroupsResponse
+ v.Default()
+ return v
+}
+
+// ListGroupsRequest issues a request to list all groups.
+//
+// To list all groups in a cluster, this must be issued to every broker.
+type ListGroupsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // StatesFilter, proposed in KIP-518 and introduced in Kafka 2.6.0,
+ // allows filtering groups by state, where a state is any of
+ // "Preparing", "PreparingRebalance", "CompletingRebalance", "Stable",
+ // "Dead", or "Empty". If empty, all groups are returned.
+ StatesFilter []string // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*ListGroupsRequest) Key() int16 { return 16 }
+func (*ListGroupsRequest) MaxVersion() int16 { return 4 }
+func (v *ListGroupsRequest) SetVersion(version int16) { v.Version = version }
+func (v *ListGroupsRequest) GetVersion() int16 { return v.Version }
+func (v *ListGroupsRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *ListGroupsRequest) ResponseKind() Response {
+ r := &ListGroupsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ListGroupsRequest) RequestWith(ctx context.Context, r Requestor) (*ListGroupsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ListGroupsResponse)
+ return resp, err
+}
+
+func (v *ListGroupsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ if version >= 4 {
+ v := v.StatesFilter
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListGroupsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListGroupsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListGroupsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ if version >= 4 {
+ v := s.StatesFilter
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.StatesFilter = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListGroupsRequest returns a pointer to a default ListGroupsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListGroupsRequest() *ListGroupsRequest {
+ var v ListGroupsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListGroupsRequest.
+func (v *ListGroupsRequest) Default() {
+}
+
+// NewListGroupsRequest returns a default ListGroupsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListGroupsRequest() ListGroupsRequest {
+ var v ListGroupsRequest
+ v.Default()
+ return v
+}
+
+type ListGroupsResponseGroup struct {
+ // Group is a Kafka group.
+ Group string
+
+ // ProtocolType is the protocol type in use by the group.
+ ProtocolType string
+
+ // The group state.
+ GroupState string // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListGroupsResponseGroup.
+func (v *ListGroupsResponseGroup) Default() {
+}
+
+// NewListGroupsResponseGroup returns a default ListGroupsResponseGroup
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListGroupsResponseGroup() ListGroupsResponseGroup {
+ var v ListGroupsResponseGroup
+ v.Default()
+ return v
+}
+
+// ListGroupsResponse is returned from a ListGroupsRequest.
+type ListGroupsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // ErrorCode is the error returned for the list groups request.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not yet active.
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group manager is loading.
+ ErrorCode int16
+
+ // Groups is the list of groups Kafka knows of.
+ Groups []ListGroupsResponseGroup
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*ListGroupsResponse) Key() int16 { return 16 }
+func (*ListGroupsResponse) MaxVersion() int16 { return 4 }
+func (v *ListGroupsResponse) SetVersion(version int16) { v.Version = version }
+func (v *ListGroupsResponse) GetVersion() int16 { return v.Version }
+func (v *ListGroupsResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *ListGroupsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *ListGroupsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *ListGroupsResponse) RequestKind() Request { return &ListGroupsRequest{Version: v.Version} }
+
+func (v *ListGroupsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProtocolType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 4 {
+ v := v.GroupState
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListGroupsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListGroupsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListGroupsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListGroupsResponseGroup, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ProtocolType = v
+ }
+ if version >= 4 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.GroupState = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Groups = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListGroupsResponse returns a pointer to a default ListGroupsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListGroupsResponse() *ListGroupsResponse {
+ var v ListGroupsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListGroupsResponse.
+func (v *ListGroupsResponse) Default() {
+}
+
+// NewListGroupsResponse returns a default ListGroupsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListGroupsResponse() ListGroupsResponse {
+ var v ListGroupsResponse
+ v.Default()
+ return v
+}
+
+// SASLHandshakeRequest begins the sasl authentication flow. Note that Kerberos
+// GSSAPI authentication has its own unique flow.
+type SASLHandshakeRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Mechanism is the mechanism to use for the sasl handshake (e.g., "PLAIN").
+ //
+ // For version 0, if this mechanism is supported, it is expected that the
+ // client immediately authenticates using this mechanism. Note that the
+ // only mechanism exclusive to v0 is PLAIN.
+ //
+ // For version 1, if the mechanism is supported, the next request to issue
+ // is SASLHandshakeRequest.
+ Mechanism string
+}
+
+func (*SASLHandshakeRequest) Key() int16 { return 17 }
+func (*SASLHandshakeRequest) MaxVersion() int16 { return 1 }
+func (v *SASLHandshakeRequest) SetVersion(version int16) { v.Version = version }
+func (v *SASLHandshakeRequest) GetVersion() int16 { return v.Version }
+func (v *SASLHandshakeRequest) IsFlexible() bool { return false }
+func (v *SASLHandshakeRequest) ResponseKind() Response {
+ r := &SASLHandshakeResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *SASLHandshakeRequest) RequestWith(ctx context.Context, r Requestor) (*SASLHandshakeResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*SASLHandshakeResponse)
+ return resp, err
+}
+
+func (v *SASLHandshakeRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Mechanism
+ dst = kbin.AppendString(dst, v)
+ }
+ return dst
+}
+
+func (v *SASLHandshakeRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *SASLHandshakeRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *SASLHandshakeRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Mechanism = v
+ }
+ return b.Complete()
+}
+
+// NewPtrSASLHandshakeRequest returns a pointer to a default SASLHandshakeRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrSASLHandshakeRequest() *SASLHandshakeRequest {
+ var v SASLHandshakeRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to SASLHandshakeRequest.
+func (v *SASLHandshakeRequest) Default() {
+}
+
+// NewSASLHandshakeRequest returns a default SASLHandshakeRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewSASLHandshakeRequest() SASLHandshakeRequest {
+ var v SASLHandshakeRequest
+ v.Default()
+ return v
+}
+
+// SASLHandshakeResponse is returned for a SASLHandshakeRequest.
+type SASLHandshakeResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is non-zero for ILLEGAL_SASL_STATE, meaning a sasl handshake
+ // is not expected at this point in the connection, or UNSUPPORTED_SASL_MECHANISM,
+ // meaning the requested mechanism is not supported.
+ ErrorCode int16
+
+ // SupportedMechanisms is the list of mechanisms supported if this request
+ // errored.
+ SupportedMechanisms []string
+}
+
+func (*SASLHandshakeResponse) Key() int16 { return 17 }
+func (*SASLHandshakeResponse) MaxVersion() int16 { return 1 }
+func (v *SASLHandshakeResponse) SetVersion(version int16) { v.Version = version }
+func (v *SASLHandshakeResponse) GetVersion() int16 { return v.Version }
+func (v *SASLHandshakeResponse) IsFlexible() bool { return false }
+func (v *SASLHandshakeResponse) RequestKind() Request {
+ return &SASLHandshakeRequest{Version: v.Version}
+}
+
+func (v *SASLHandshakeResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.SupportedMechanisms
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ return dst
+}
+
+func (v *SASLHandshakeResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *SASLHandshakeResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *SASLHandshakeResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.SupportedMechanisms
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ a[i] = v
+ }
+ v = a
+ s.SupportedMechanisms = v
+ }
+ return b.Complete()
+}
+
+// NewPtrSASLHandshakeResponse returns a pointer to a default SASLHandshakeResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrSASLHandshakeResponse() *SASLHandshakeResponse {
+ var v SASLHandshakeResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to SASLHandshakeResponse.
+func (v *SASLHandshakeResponse) Default() {
+}
+
+// NewSASLHandshakeResponse returns a default SASLHandshakeResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewSASLHandshakeResponse() SASLHandshakeResponse {
+ var v SASLHandshakeResponse
+ v.Default()
+ return v
+}
+
+// ApiVersionsRequest requests what API versions a Kafka broker supports.
+//
+// Note that the client does not know the version a broker supports before
+// sending this request.
+//
+// Before Kafka 2.4.0, if the client used a version larger than the broker
+// understands, the broker would reply with an UNSUPPORTED_VERSION error using
+// the version 0 message format (i.e., 6 bytes long!). The client should retry
+// with a lower version.
+//
+// After Kafka 2.4.0, if the client uses a version larger than the broker
+// understands, the broker replies with UNSUPPORTED_VERSIONS using the version
+// 0 message format but additionally includes the api versions the broker does
+// support.
+type ApiVersionsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ClientSoftwareName, added for KIP-511 with Kafka 2.4.0, is the name of the
+ // client issuing this request. The broker can use this to enrich its own
+ // debugging information of which version of what clients are connected.
+ //
+ // If using v3, this field is required and must match the following pattern:
+ //
+ // [a-zA-Z0-9](?:[a-zA-Z0-9\\-.]*[a-zA-Z0-9])?
+ ClientSoftwareName string // v3+
+
+ // ClientSoftwareVersion is the version of the software name in the prior
+ // field. It must match the same regex (thus, this is also required).
+ ClientSoftwareVersion string // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*ApiVersionsRequest) Key() int16 { return 18 }
+func (*ApiVersionsRequest) MaxVersion() int16 { return 3 }
+func (v *ApiVersionsRequest) SetVersion(version int16) { v.Version = version }
+func (v *ApiVersionsRequest) GetVersion() int16 { return v.Version }
+func (v *ApiVersionsRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *ApiVersionsRequest) ResponseKind() Response {
+ r := &ApiVersionsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ApiVersionsRequest) RequestWith(ctx context.Context, r Requestor) (*ApiVersionsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ApiVersionsResponse)
+ return resp, err
+}
+
+func (v *ApiVersionsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ if version >= 3 {
+ v := v.ClientSoftwareName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.ClientSoftwareVersion
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ApiVersionsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ApiVersionsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ApiVersionsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClientSoftwareName = v
+ }
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClientSoftwareVersion = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrApiVersionsRequest returns a pointer to a default ApiVersionsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrApiVersionsRequest() *ApiVersionsRequest {
+ var v ApiVersionsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ApiVersionsRequest.
+func (v *ApiVersionsRequest) Default() {
+}
+
+// NewApiVersionsRequest returns a default ApiVersionsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewApiVersionsRequest() ApiVersionsRequest {
+ var v ApiVersionsRequest
+ v.Default()
+ return v
+}
+
+type ApiVersionsResponseApiKey struct {
+ // ApiKey is the key of a message request.
+ ApiKey int16
+
+ // MinVersion is the min version a broker supports for an API key.
+ MinVersion int16
+
+ // MaxVersion is the max version a broker supports for an API key.
+ MaxVersion int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ApiVersionsResponseApiKey.
+func (v *ApiVersionsResponseApiKey) Default() {
+}
+
+// NewApiVersionsResponseApiKey returns a default ApiVersionsResponseApiKey
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewApiVersionsResponseApiKey() ApiVersionsResponseApiKey {
+ var v ApiVersionsResponseApiKey
+ v.Default()
+ return v
+}
+
+type ApiVersionsResponseSupportedFeature struct {
+ // The name of the feature.
+ Name string
+
+ // The minimum supported version for the feature.
+ MinVersion int16
+
+ // The maximum supported version for the feature.
+ MaxVersion int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ApiVersionsResponseSupportedFeature.
+func (v *ApiVersionsResponseSupportedFeature) Default() {
+}
+
+// NewApiVersionsResponseSupportedFeature returns a default ApiVersionsResponseSupportedFeature
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewApiVersionsResponseSupportedFeature() ApiVersionsResponseSupportedFeature {
+ var v ApiVersionsResponseSupportedFeature
+ v.Default()
+ return v
+}
+
+type ApiVersionsResponseFinalizedFeature struct {
+ // The name of the feature.
+ Name string
+
+ // The cluster-wide finalized max version level for the feature.
+ MaxVersionLevel int16
+
+ // The cluster-wide finalized min version level for the feature.
+ MinVersionLevel int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ApiVersionsResponseFinalizedFeature.
+func (v *ApiVersionsResponseFinalizedFeature) Default() {
+}
+
+// NewApiVersionsResponseFinalizedFeature returns a default ApiVersionsResponseFinalizedFeature
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewApiVersionsResponseFinalizedFeature() ApiVersionsResponseFinalizedFeature {
+ var v ApiVersionsResponseFinalizedFeature
+ v.Default()
+ return v
+}
+
+// ApiVersionsResponse is returned from an ApiVersionsRequest.
+type ApiVersionsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is UNSUPPORTED_VERSION if the request was issued with a higher
+ // version than the broker supports. Before Kafka 2.4.0, if this error is
+ // returned, the rest of this struct will be empty.
+ //
+ // Starting in Kafka 2.4.0 (with version 3), even with an UNSUPPORTED_VERSION
+ // error, the broker still replies with the ApiKeys it supports.
+ ErrorCode int16
+
+ // ApiKeys is an array corresponding to API keys the broker supports
+ // and the range of supported versions for each key.
+ ApiKeys []ApiVersionsResponseApiKey
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // Features supported by the broker (see KIP-584).
+ SupportedFeatures []ApiVersionsResponseSupportedFeature // tag 0
+
+ // The monotonically increasing epoch for the finalized features information,
+ // where -1 indicates an unknown epoch.
+ //
+ // This field has a default of -1.
+ FinalizedFeaturesEpoch int64 // tag 1
+
+ // The list of cluster-wide finalized features (only valid if
+ // FinalizedFeaturesEpoch is >= 0).
+ FinalizedFeatures []ApiVersionsResponseFinalizedFeature // tag 2
+
+ // Set by a KRaft controller if the required configurations for ZK migration
+ // are present
+ ZkMigrationReady bool // tag 3
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*ApiVersionsResponse) Key() int16 { return 18 }
+func (*ApiVersionsResponse) MaxVersion() int16 { return 3 }
+func (v *ApiVersionsResponse) SetVersion(version int16) { v.Version = version }
+func (v *ApiVersionsResponse) GetVersion() int16 { return v.Version }
+func (v *ApiVersionsResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *ApiVersionsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *ApiVersionsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *ApiVersionsResponse) RequestKind() Request { return &ApiVersionsRequest{Version: v.Version} }
+
+func (v *ApiVersionsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ApiKeys
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ApiKey
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.MinVersion
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.MaxVersion
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if len(v.SupportedFeatures) > 0 {
+ toEncode = append(toEncode, 0)
+ }
+ if v.FinalizedFeaturesEpoch != -1 {
+ toEncode = append(toEncode, 1)
+ }
+ if len(v.FinalizedFeatures) > 0 {
+ toEncode = append(toEncode, 2)
+ }
+ if v.ZkMigrationReady != false {
+ toEncode = append(toEncode, 3)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.SupportedFeatures
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fSupportedFeatures:
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MinVersion
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.MaxVersion
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fSupportedFeatures
+ }
+ }
+ case 1:
+ {
+ v := v.FinalizedFeaturesEpoch
+ dst = kbin.AppendUvarint(dst, 1)
+ dst = kbin.AppendUvarint(dst, 8)
+ dst = kbin.AppendInt64(dst, v)
+ }
+ case 2:
+ {
+ v := v.FinalizedFeatures
+ dst = kbin.AppendUvarint(dst, 2)
+ sized := false
+ lenAt := len(dst)
+ fFinalizedFeatures:
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MaxVersionLevel
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.MinVersionLevel
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fFinalizedFeatures
+ }
+ }
+ case 3:
+ {
+ v := v.ZkMigrationReady
+ dst = kbin.AppendUvarint(dst, 3)
+ dst = kbin.AppendUvarint(dst, 1)
+ dst = kbin.AppendBool(dst, v)
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ApiVersionsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ApiVersionsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ApiVersionsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.ApiKeys
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ApiVersionsResponseApiKey, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ApiKey = v
+ }
+ {
+ v := b.Int16()
+ s.MinVersion = v
+ }
+ {
+ v := b.Int16()
+ s.MaxVersion = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.ApiKeys = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := s.SupportedFeatures
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ApiVersionsResponseSupportedFeature, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ v := b.Int16()
+ s.MinVersion = v
+ }
+ {
+ v := b.Int16()
+ s.MaxVersion = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.SupportedFeatures = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ case 1:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := b.Int64()
+ s.FinalizedFeaturesEpoch = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ case 2:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := s.FinalizedFeatures
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ApiVersionsResponseFinalizedFeature, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ v := b.Int16()
+ s.MaxVersionLevel = v
+ }
+ {
+ v := b.Int16()
+ s.MinVersionLevel = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.FinalizedFeatures = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ case 3:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := b.Bool()
+ s.ZkMigrationReady = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ return b.Complete()
+}
+
+// NewPtrApiVersionsResponse returns a pointer to a default ApiVersionsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrApiVersionsResponse() *ApiVersionsResponse {
+ var v ApiVersionsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ApiVersionsResponse.
+func (v *ApiVersionsResponse) Default() {
+ v.FinalizedFeaturesEpoch = -1
+}
+
+// NewApiVersionsResponse returns a default ApiVersionsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewApiVersionsResponse() ApiVersionsResponse {
+ var v ApiVersionsResponse
+ v.Default()
+ return v
+}
+
+type CreateTopicsRequestTopicReplicaAssignment struct {
+ // Partition is a partition to create.
+ Partition int32
+
+ // Replicas are broker IDs the partition must exist on.
+ Replicas []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateTopicsRequestTopicReplicaAssignment.
+func (v *CreateTopicsRequestTopicReplicaAssignment) Default() {
+}
+
+// NewCreateTopicsRequestTopicReplicaAssignment returns a default CreateTopicsRequestTopicReplicaAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateTopicsRequestTopicReplicaAssignment() CreateTopicsRequestTopicReplicaAssignment {
+ var v CreateTopicsRequestTopicReplicaAssignment
+ v.Default()
+ return v
+}
+
+type CreateTopicsRequestTopicConfig struct {
+ // Name is a topic level config key (e.g. segment.bytes).
+ Name string
+
+ // Value is a topic level config value (e.g. 1073741824)
+ Value *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateTopicsRequestTopicConfig.
+func (v *CreateTopicsRequestTopicConfig) Default() {
+}
+
+// NewCreateTopicsRequestTopicConfig returns a default CreateTopicsRequestTopicConfig
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateTopicsRequestTopicConfig() CreateTopicsRequestTopicConfig {
+ var v CreateTopicsRequestTopicConfig
+ v.Default()
+ return v
+}
+
+type CreateTopicsRequestTopic struct {
+ // Topic is a topic to create.
+ Topic string
+
+ // NumPartitions is how many partitions to give a topic. This must
+ // be -1 if specifying partitions manually (see ReplicaAssignment)
+ // or, starting v4+, to use the broker default partitions.
+ NumPartitions int32
+
+ // ReplicationFactor is how many replicas every partition must have.
+ // This must be -1 if specifying partitions manually (see ReplicaAssignment)
+ // or, starting v4+, to use the broker default replication factor.
+ ReplicationFactor int16
+
+ // ReplicaAssignment is an array to manually dicate replicas and their
+ // partitions for a topic. If using this, both ReplicationFactor and
+ // NumPartitions must be -1.
+ ReplicaAssignment []CreateTopicsRequestTopicReplicaAssignment
+
+ // Configs is an array of key value config pairs for a topic.
+ // These correspond to Kafka Topic-Level Configs: http://kafka.apache.org/documentation/#topicconfigs.
+ Configs []CreateTopicsRequestTopicConfig
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateTopicsRequestTopic.
+func (v *CreateTopicsRequestTopic) Default() {
+}
+
+// NewCreateTopicsRequestTopic returns a default CreateTopicsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateTopicsRequestTopic() CreateTopicsRequestTopic {
+ var v CreateTopicsRequestTopic
+ v.Default()
+ return v
+}
+
+// CreateTopicsRequest creates Kafka topics.
+//
+// Version 4, introduced in Kafka 2.4.0, implies client support for
+// creation defaults. See KIP-464.
+//
+// Version 5, also in 2.4.0, returns topic configs in the response (KIP-525).
+type CreateTopicsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Topics is an array of topics to attempt to create.
+ Topics []CreateTopicsRequestTopic
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 60000.
+ TimeoutMillis int32
+
+ // ValidateOnly is makes this request a dry-run; everything is validated but
+ // no topics are actually created.
+ ValidateOnly bool // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+func (*CreateTopicsRequest) Key() int16 { return 19 }
+func (*CreateTopicsRequest) MaxVersion() int16 { return 7 }
+func (v *CreateTopicsRequest) SetVersion(version int16) { v.Version = version }
+func (v *CreateTopicsRequest) GetVersion() int16 { return v.Version }
+func (v *CreateTopicsRequest) IsFlexible() bool { return v.Version >= 5 }
+func (v *CreateTopicsRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *CreateTopicsRequest) SetTimeout(timeoutMillis int32) { v.TimeoutMillis = timeoutMillis }
+func (v *CreateTopicsRequest) IsAdminRequest() {}
+func (v *CreateTopicsRequest) ResponseKind() Response {
+ r := &CreateTopicsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *CreateTopicsRequest) RequestWith(ctx context.Context, r Requestor) (*CreateTopicsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*CreateTopicsResponse)
+ return resp, err
+}
+
+func (v *CreateTopicsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.NumPartitions
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ReplicationFactor
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ReplicaAssignment
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Configs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 {
+ v := v.ValidateOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreateTopicsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreateTopicsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreateTopicsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateTopicsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.NumPartitions = v
+ }
+ {
+ v := b.Int16()
+ s.ReplicationFactor = v
+ }
+ {
+ v := s.ReplicaAssignment
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateTopicsRequestTopicReplicaAssignment, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.ReplicaAssignment = v
+ }
+ {
+ v := s.Configs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateTopicsRequestTopicConfig, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Value = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Configs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ if version >= 1 {
+ v := b.Bool()
+ s.ValidateOnly = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreateTopicsRequest returns a pointer to a default CreateTopicsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreateTopicsRequest() *CreateTopicsRequest {
+ var v CreateTopicsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateTopicsRequest.
+func (v *CreateTopicsRequest) Default() {
+ v.TimeoutMillis = 60000
+}
+
+// NewCreateTopicsRequest returns a default CreateTopicsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateTopicsRequest() CreateTopicsRequest {
+ var v CreateTopicsRequest
+ v.Default()
+ return v
+}
+
+type CreateTopicsResponseTopicConfig struct {
+ // Name is the configuration name (e.g. segment.bytes).
+ Name string
+
+ // Value is the value for this config key. If the key is sensitive,
+ // the value will be null.
+ Value *string
+
+ // ReadOnly signifies whether this is not a dynamic config option.
+ ReadOnly bool
+
+ // Source is where this config entry is from. See the documentation
+ // on DescribeConfigsRequest's Source for more details.
+ //
+ // This field has a default of -1.
+ Source int8
+
+ // IsSensitive signifies whether this is a sensitive config key, which
+ // is either a password or an unknown type.
+ IsSensitive bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateTopicsResponseTopicConfig.
+func (v *CreateTopicsResponseTopicConfig) Default() {
+ v.Source = -1
+}
+
+// NewCreateTopicsResponseTopicConfig returns a default CreateTopicsResponseTopicConfig
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateTopicsResponseTopicConfig() CreateTopicsResponseTopicConfig {
+ var v CreateTopicsResponseTopicConfig
+ v.Default()
+ return v
+}
+
+type CreateTopicsResponseTopic struct {
+ // Topic is the topic this response corresponds to.
+ Topic string
+
+ // The unique topic ID.
+ TopicID [16]byte // v7+
+
+ // ErrorCode is the error code for an individual topic creation.
+ //
+ // NOT_CONTROLLER is returned if the request was not issued to a Kafka
+ // controller.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not authorized.
+ //
+ // INVALID_REQUEST is returned if the same topic occurred multiple times
+ // in the request.
+ //
+ // POLICY_VIOLATION is returned if the broker is using a
+ // create.topic.policy.class.name that returns a policy violation.
+ //
+ // INVALID_TOPIC_EXCEPTION if the topic collides with another topic when
+ // both topic's names' periods are replaced with underscores (e.g.
+ // topic.foo and topic_foo collide).
+ //
+ // TOPIC_ALREADY_EXISTS is returned if the topic already exists.
+ //
+ // INVALID_PARTITIONS is returned if the requested number of partitions is
+ // <= 0.
+ //
+ // INVALID_REPLICATION_FACTOR is returned if the requested replication
+ // factor is <= 0.
+ //
+ // INVALID_REPLICA_ASSIGNMENT is returned if not all partitions have the same
+ // number of replicas, or duplica replicas are assigned, or the partitions
+ // are not consecutive starting from 0.
+ //
+ // INVALID_CONFIG is returned if the requested topic config is invalid.
+ // to create a topic.
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the topic creation failed.
+ ErrorMessage *string // v1+
+
+ // ConfigErrorCode is non-zero if configs are unable to be returned.
+ //
+ // This is the first tagged field, introduced in version 5. As such, it is
+ // only possible to be present in v5+.
+ ConfigErrorCode int16 // tag 0
+
+ // NumPartitions is how many partitions were created for this topic.
+ //
+ // This field has a default of -1.
+ NumPartitions int32 // v5+
+
+ // ReplicationFactor is how many replicas every partition has for this topic.
+ //
+ // This field has a default of -1.
+ ReplicationFactor int16 // v5+
+
+ // Configs contains this topic's configuration.
+ Configs []CreateTopicsResponseTopicConfig // v5+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateTopicsResponseTopic.
+func (v *CreateTopicsResponseTopic) Default() {
+ v.NumPartitions = -1
+ v.ReplicationFactor = -1
+}
+
+// NewCreateTopicsResponseTopic returns a default CreateTopicsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateTopicsResponseTopic() CreateTopicsResponseTopic {
+ var v CreateTopicsResponseTopic
+ v.Default()
+ return v
+}
+
+// CreateTopicsResponse is returned from a CreateTopicsRequest.
+type CreateTopicsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 3.
+ ThrottleMillis int32 // v2+
+
+ // Topics contains responses to the requested topic creations.
+ Topics []CreateTopicsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v5+
+}
+
+func (*CreateTopicsResponse) Key() int16 { return 19 }
+func (*CreateTopicsResponse) MaxVersion() int16 { return 7 }
+func (v *CreateTopicsResponse) SetVersion(version int16) { v.Version = version }
+func (v *CreateTopicsResponse) GetVersion() int16 { return v.Version }
+func (v *CreateTopicsResponse) IsFlexible() bool { return v.Version >= 5 }
+func (v *CreateTopicsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 3 }
+func (v *CreateTopicsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *CreateTopicsResponse) RequestKind() Request { return &CreateTopicsRequest{Version: v.Version} }
+
+func (v *CreateTopicsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ if version >= 2 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 7 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 1 {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 5 {
+ v := v.NumPartitions
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 5 {
+ v := v.ReplicationFactor
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 5 {
+ v := v.Configs
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ReadOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.Source
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.IsSensitive
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if v.ConfigErrorCode != 0 {
+ toEncode = append(toEncode, 0)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.ConfigErrorCode
+ dst = kbin.AppendUvarint(dst, 0)
+ dst = kbin.AppendUvarint(dst, 2)
+ dst = kbin.AppendInt16(dst, v)
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreateTopicsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreateTopicsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreateTopicsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 5
+ _ = isFlexible
+ s := v
+ if version >= 2 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateTopicsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 7 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 1 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if version >= 5 {
+ v := b.Int32()
+ s.NumPartitions = v
+ }
+ if version >= 5 {
+ v := b.Int16()
+ s.ReplicationFactor = v
+ }
+ if version >= 5 {
+ v := s.Configs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []CreateTopicsResponseTopicConfig{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateTopicsResponseTopicConfig, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Value = v
+ }
+ {
+ v := b.Bool()
+ s.ReadOnly = v
+ }
+ {
+ v := b.Int8()
+ s.Source = v
+ }
+ {
+ v := b.Bool()
+ s.IsSensitive = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Configs = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := b.Int16()
+ s.ConfigErrorCode = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreateTopicsResponse returns a pointer to a default CreateTopicsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreateTopicsResponse() *CreateTopicsResponse {
+ var v CreateTopicsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateTopicsResponse.
+func (v *CreateTopicsResponse) Default() {
+}
+
+// NewCreateTopicsResponse returns a default CreateTopicsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateTopicsResponse() CreateTopicsResponse {
+ var v CreateTopicsResponse
+ v.Default()
+ return v
+}
+
+type DeleteTopicsRequestTopic struct {
+ Topic *string
+
+ TopicID [16]byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteTopicsRequestTopic.
+func (v *DeleteTopicsRequestTopic) Default() {
+}
+
+// NewDeleteTopicsRequestTopic returns a default DeleteTopicsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteTopicsRequestTopic() DeleteTopicsRequestTopic {
+ var v DeleteTopicsRequestTopic
+ v.Default()
+ return v
+}
+
+// DeleteTopicsRequest deletes Kafka topics.
+type DeleteTopicsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Topics is an array of topics to delete.
+ TopicNames []string // v0-v5
+
+ // The name or topic ID of topics to delete.
+ Topics []DeleteTopicsRequestTopic // v6+
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 15000.
+ TimeoutMillis int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*DeleteTopicsRequest) Key() int16 { return 20 }
+func (*DeleteTopicsRequest) MaxVersion() int16 { return 6 }
+func (v *DeleteTopicsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DeleteTopicsRequest) GetVersion() int16 { return v.Version }
+func (v *DeleteTopicsRequest) IsFlexible() bool { return v.Version >= 4 }
+func (v *DeleteTopicsRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *DeleteTopicsRequest) SetTimeout(timeoutMillis int32) { v.TimeoutMillis = timeoutMillis }
+func (v *DeleteTopicsRequest) IsAdminRequest() {}
+func (v *DeleteTopicsRequest) ResponseKind() Response {
+ r := &DeleteTopicsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DeleteTopicsRequest) RequestWith(ctx context.Context, r Requestor) (*DeleteTopicsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DeleteTopicsResponse)
+ return resp, err
+}
+
+func (v *DeleteTopicsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ if version >= 0 && version <= 5 {
+ v := v.TopicNames
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ if version >= 6 {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteTopicsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteTopicsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteTopicsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ if version >= 0 && version <= 5 {
+ v := s.TopicNames
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.TopicNames = v
+ }
+ if version >= 6 {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteTopicsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteTopicsRequest returns a pointer to a default DeleteTopicsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteTopicsRequest() *DeleteTopicsRequest {
+ var v DeleteTopicsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteTopicsRequest.
+func (v *DeleteTopicsRequest) Default() {
+ v.TimeoutMillis = 15000
+}
+
+// NewDeleteTopicsRequest returns a default DeleteTopicsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteTopicsRequest() DeleteTopicsRequest {
+ var v DeleteTopicsRequest
+ v.Default()
+ return v
+}
+
+type DeleteTopicsResponseTopic struct {
+ // Topic is the topic requested for deletion.
+ Topic *string
+
+ // The topic ID requested for deletion.
+ TopicID [16]byte // v6+
+
+ // ErrorCode is the error code returned for an individual topic in
+ // deletion request.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to delete a topic.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of
+ // the topic.
+ //
+ // NOT_CONTROLLER is returned if the request was not issued to a Kafka
+ // controller.
+ //
+ // TOPIC_DELETION_DISABLED is returned for deletion requests version 3+
+ // and brokers >= 2.1.0. INVALID_REQUEST is issued for request versions
+ // 0-2 against brokers >= 2.1.0. Otherwise, the request hangs until it
+ // times out.
+ //
+ // UNSUPPORTED_VERSION is returned when using topic IDs with a cluster
+ // that is not yet Kafka v2.8+.
+ //
+ // UNKNOWN_TOPIC_ID is returned when using topic IDs to a Kafka cluster
+ // v2.8+ and the topic ID is not found.
+ ErrorCode int16
+
+ // ErrorMessage is a message for an error.
+ ErrorMessage *string // v5+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteTopicsResponseTopic.
+func (v *DeleteTopicsResponseTopic) Default() {
+}
+
+// NewDeleteTopicsResponseTopic returns a default DeleteTopicsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteTopicsResponseTopic() DeleteTopicsResponseTopic {
+ var v DeleteTopicsResponseTopic
+ v.Default()
+ return v
+}
+
+// DeleteTopicsResponse is returned from a DeleteTopicsRequest.
+// Version 3 added the TOPIC_DELETION_DISABLED error proposed in KIP-322
+// and introduced in Kafka 2.1.0. Prior, the request timed out.
+type DeleteTopicsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32 // v1+
+
+ // Topics contains responses for each topic requested for deletion.
+ Topics []DeleteTopicsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*DeleteTopicsResponse) Key() int16 { return 20 }
+func (*DeleteTopicsResponse) MaxVersion() int16 { return 6 }
+func (v *DeleteTopicsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DeleteTopicsResponse) GetVersion() int16 { return v.Version }
+func (v *DeleteTopicsResponse) IsFlexible() bool { return v.Version >= 4 }
+func (v *DeleteTopicsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *DeleteTopicsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *DeleteTopicsResponse) RequestKind() Request { return &DeleteTopicsRequest{Version: v.Version} }
+
+func (v *DeleteTopicsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if version < 6 {
+ var vv string
+ if v != nil {
+ vv = *v
+ }
+ {
+ v := vv
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ } else {
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ }
+ if version >= 6 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 5 {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteTopicsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteTopicsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteTopicsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteTopicsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v *string
+ if version < 6 {
+ var vv string
+ if isFlexible {
+ if unsafe {
+ vv = b.UnsafeCompactString()
+ } else {
+ vv = b.CompactString()
+ }
+ } else {
+ if unsafe {
+ vv = b.UnsafeString()
+ } else {
+ vv = b.String()
+ }
+ }
+ v = &vv
+ } else {
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 6 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 5 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteTopicsResponse returns a pointer to a default DeleteTopicsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteTopicsResponse() *DeleteTopicsResponse {
+ var v DeleteTopicsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteTopicsResponse.
+func (v *DeleteTopicsResponse) Default() {
+}
+
+// NewDeleteTopicsResponse returns a default DeleteTopicsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteTopicsResponse() DeleteTopicsResponse {
+ var v DeleteTopicsResponse
+ v.Default()
+ return v
+}
+
+type DeleteRecordsRequestTopicPartition struct {
+ // Partition is a partition to delete records from.
+ Partition int32
+
+ // Offset is the offset to set the partition's low watermark (start
+ // offset) to. After a successful response, all records before this
+ // offset are considered deleted and are no longer readable.
+ //
+ // To delete all records, use -1, which is mapped to the partition's
+ // current high watermark.
+ Offset int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteRecordsRequestTopicPartition.
+func (v *DeleteRecordsRequestTopicPartition) Default() {
+}
+
+// NewDeleteRecordsRequestTopicPartition returns a default DeleteRecordsRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteRecordsRequestTopicPartition() DeleteRecordsRequestTopicPartition {
+ var v DeleteRecordsRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type DeleteRecordsRequestTopic struct {
+ // Topic is a topic to delete records from.
+ Topic string
+
+ // Partitions contains partitions to delete records from.
+ Partitions []DeleteRecordsRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteRecordsRequestTopic.
+func (v *DeleteRecordsRequestTopic) Default() {
+}
+
+// NewDeleteRecordsRequestTopic returns a default DeleteRecordsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteRecordsRequestTopic() DeleteRecordsRequestTopic {
+ var v DeleteRecordsRequestTopic
+ v.Default()
+ return v
+}
+
+// DeleteRecordsRequest is an admin request to delete records from Kafka.
+// This was added for KIP-107.
+//
+// To delete records, Kafka sets the LogStartOffset for partitions to
+// the requested offset. All segments whose max partition is before the
+// requested offset are deleted, and any records within the segment before
+// the requested offset can no longer be read.
+//
+// This request must be issued to the correct brokers that own the partitions
+// you intend to delete records for.
+type DeleteRecordsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Topics contains topics for which to delete records from.
+ Topics []DeleteRecordsRequestTopic
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 15000.
+ TimeoutMillis int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DeleteRecordsRequest) Key() int16 { return 21 }
+func (*DeleteRecordsRequest) MaxVersion() int16 { return 2 }
+func (v *DeleteRecordsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DeleteRecordsRequest) GetVersion() int16 { return v.Version }
+func (v *DeleteRecordsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *DeleteRecordsRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *DeleteRecordsRequest) SetTimeout(timeoutMillis int32) { v.TimeoutMillis = timeoutMillis }
+func (v *DeleteRecordsRequest) ResponseKind() Response {
+ r := &DeleteRecordsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DeleteRecordsRequest) RequestWith(ctx context.Context, r Requestor) (*DeleteRecordsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DeleteRecordsResponse)
+ return resp, err
+}
+
+func (v *DeleteRecordsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteRecordsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteRecordsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteRecordsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteRecordsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteRecordsRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteRecordsRequest returns a pointer to a default DeleteRecordsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteRecordsRequest() *DeleteRecordsRequest {
+ var v DeleteRecordsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteRecordsRequest.
+func (v *DeleteRecordsRequest) Default() {
+ v.TimeoutMillis = 15000
+}
+
+// NewDeleteRecordsRequest returns a default DeleteRecordsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteRecordsRequest() DeleteRecordsRequest {
+ var v DeleteRecordsRequest
+ v.Default()
+ return v
+}
+
+type DeleteRecordsResponseTopicPartition struct {
+ // Partition is the partition this response corresponds to.
+ Partition int32
+
+ // LowWatermark is the new earliest offset for this partition.
+ LowWatermark int64
+
+ // ErrorCode is the error code returned for a given partition in
+ // the delete request.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned for all partitions if the
+ // client is not authorized to delete records.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned for all partitions that
+ // the requested broker does not know of.
+ //
+ // NOT_LEADER_FOR_PARTITION is returned for partitions that the
+ // requested broker is not a leader of.
+ //
+ // OFFSET_OUT_OF_RANGE is returned if the requested offset is
+ // negative or higher than the current high watermark.
+ //
+ // POLICY_VIOLATION is returned if records cannot be deleted due to
+ // broker configuration.
+ //
+ // KAFKA_STORAGE_EXCEPTION is returned if the partition is in an
+ // offline log directory.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteRecordsResponseTopicPartition.
+func (v *DeleteRecordsResponseTopicPartition) Default() {
+}
+
+// NewDeleteRecordsResponseTopicPartition returns a default DeleteRecordsResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteRecordsResponseTopicPartition() DeleteRecordsResponseTopicPartition {
+ var v DeleteRecordsResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type DeleteRecordsResponseTopic struct {
+ // Topic is the topic this response corresponds to.
+ Topic string
+
+ // Partitions contains responses for each partition in a requested topic
+ // in the delete records request.
+ Partitions []DeleteRecordsResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteRecordsResponseTopic.
+func (v *DeleteRecordsResponseTopic) Default() {
+}
+
+// NewDeleteRecordsResponseTopic returns a default DeleteRecordsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteRecordsResponseTopic() DeleteRecordsResponseTopic {
+ var v DeleteRecordsResponseTopic
+ v.Default()
+ return v
+}
+
+// DeleteRecordsResponse is returned from a DeleteRecordsRequest.
+type DeleteRecordsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Topics contains responses for each topic in the delete records request.
+ Topics []DeleteRecordsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DeleteRecordsResponse) Key() int16 { return 21 }
+func (*DeleteRecordsResponse) MaxVersion() int16 { return 2 }
+func (v *DeleteRecordsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DeleteRecordsResponse) GetVersion() int16 { return v.Version }
+func (v *DeleteRecordsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *DeleteRecordsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *DeleteRecordsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *DeleteRecordsResponse) RequestKind() Request {
+ return &DeleteRecordsRequest{Version: v.Version}
+}
+
+func (v *DeleteRecordsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LowWatermark
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteRecordsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteRecordsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteRecordsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteRecordsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteRecordsResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int64()
+ s.LowWatermark = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteRecordsResponse returns a pointer to a default DeleteRecordsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteRecordsResponse() *DeleteRecordsResponse {
+ var v DeleteRecordsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteRecordsResponse.
+func (v *DeleteRecordsResponse) Default() {
+}
+
+// NewDeleteRecordsResponse returns a default DeleteRecordsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteRecordsResponse() DeleteRecordsResponse {
+ var v DeleteRecordsResponse
+ v.Default()
+ return v
+}
+
+// InitProducerIDRequest initializes a producer ID for idempotent transactions,
+// and if using transactions, a producer epoch. This is the first request
+// necessary to begin idempotent producing or transactions.
+//
+// Note that you do not need to go to a txn coordinator if you are initializing
+// a producer id without a transactional id.
+type InitProducerIDRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TransactionalID is the ID to use for transactions if using transactions.
+ TransactionalID *string
+
+ // TransactionTimeoutMillis is how long a transaction is allowed before
+ // EndTxn is required.
+ //
+ // Note that this timeout only begins on the first AddPartitionsToTxn
+ // request.
+ TransactionTimeoutMillis int32
+
+ // ProducerID, added for KIP-360, is the current producer ID. This allows
+ // the client to potentially recover on UNKNOWN_PRODUCER_ID errors.
+ //
+ // This field has a default of -1.
+ ProducerID int64 // v3+
+
+ // The producer's current epoch. This will be checked against the producer
+ // epoch on the broker, and the request will return an error if they do not
+ // match. Also added for KIP-360.
+ //
+ // This field has a default of -1.
+ ProducerEpoch int16 // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*InitProducerIDRequest) Key() int16 { return 22 }
+func (*InitProducerIDRequest) MaxVersion() int16 { return 4 }
+func (v *InitProducerIDRequest) SetVersion(version int16) { v.Version = version }
+func (v *InitProducerIDRequest) GetVersion() int16 { return v.Version }
+func (v *InitProducerIDRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *InitProducerIDRequest) IsTxnCoordinatorRequest() {}
+func (v *InitProducerIDRequest) ResponseKind() Response {
+ r := &InitProducerIDResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *InitProducerIDRequest) RequestWith(ctx context.Context, r Requestor) (*InitProducerIDResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*InitProducerIDResponse)
+ return resp, err
+}
+
+func (v *InitProducerIDRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.TransactionTimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 3 {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 3 {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *InitProducerIDRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *InitProducerIDRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *InitProducerIDRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ v := b.Int32()
+ s.TransactionTimeoutMillis = v
+ }
+ if version >= 3 {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ if version >= 3 {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrInitProducerIDRequest returns a pointer to a default InitProducerIDRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrInitProducerIDRequest() *InitProducerIDRequest {
+ var v InitProducerIDRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to InitProducerIDRequest.
+func (v *InitProducerIDRequest) Default() {
+ v.ProducerID = -1
+ v.ProducerEpoch = -1
+}
+
+// NewInitProducerIDRequest returns a default InitProducerIDRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewInitProducerIDRequest() InitProducerIDRequest {
+ var v InitProducerIDRequest
+ v.Default()
+ return v
+}
+
+// InitProducerIDResponse is returned for an InitProducerIDRequest.
+type InitProducerIDResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // CLUSTER_AUTHORIZATION_FAILED is returned when not using transactions if
+ // the client is not authorized for idempotent_write on cluster.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned when using transactions
+ // if the client is not authorized to write on transactional_id.
+ //
+ // INVALID_REQUEST is returned if using transactions and the transactional id
+ // is an empty, non-null string
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the coordinator for this
+ // transactional ID is still loading.
+ //
+ // NOT_COORDINATOR is returned if the broker is not the coordinator for
+ // this transactional ID.
+ //
+ // INVALID_TRANSACTION_TIMEOUT is returned if using transactions and the timeout
+ // is equal to over over transaction.max.timeout.ms or under 0.
+ //
+ // CONCURRENT_TRANSACTIONS is returned if there is an ongoing transaction
+ // that is completing at the time this init is called.
+ ErrorCode int16
+
+ // ProducerID is the next producer ID that Kafka generated. This ID is used
+ // to ensure repeated produce requests do not result in duplicate records.
+ //
+ // This field has a default of -1.
+ ProducerID int64
+
+ // ProducerEpoch is the producer epoch to use for transactions.
+ ProducerEpoch int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*InitProducerIDResponse) Key() int16 { return 22 }
+func (*InitProducerIDResponse) MaxVersion() int16 { return 4 }
+func (v *InitProducerIDResponse) SetVersion(version int16) { v.Version = version }
+func (v *InitProducerIDResponse) GetVersion() int16 { return v.Version }
+func (v *InitProducerIDResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *InitProducerIDResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *InitProducerIDResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *InitProducerIDResponse) RequestKind() Request {
+ return &InitProducerIDRequest{Version: v.Version}
+}
+
+func (v *InitProducerIDResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *InitProducerIDResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *InitProducerIDResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *InitProducerIDResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrInitProducerIDResponse returns a pointer to a default InitProducerIDResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrInitProducerIDResponse() *InitProducerIDResponse {
+ var v InitProducerIDResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to InitProducerIDResponse.
+func (v *InitProducerIDResponse) Default() {
+ v.ProducerID = -1
+}
+
+// NewInitProducerIDResponse returns a default InitProducerIDResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewInitProducerIDResponse() InitProducerIDResponse {
+ var v InitProducerIDResponse
+ v.Default()
+ return v
+}
+
+type OffsetForLeaderEpochRequestTopicPartition struct {
+ // Partition is the number of a partition.
+ Partition int32
+
+ // CurrentLeaderEpoch, proposed in KIP-320 and introduced in Kafka 2.1.0,
+ // allows brokers to check if the client is fenced (has an out of date
+ // leader) or if the client is ahead of the broker.
+ //
+ // The initial leader epoch can be determined from a MetadataResponse.
+ //
+ // This field has a default of -1.
+ CurrentLeaderEpoch int32 // v2+
+
+ // LeaderEpoch is the epoch to fetch the end offset for.
+ LeaderEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetForLeaderEpochRequestTopicPartition.
+func (v *OffsetForLeaderEpochRequestTopicPartition) Default() {
+ v.CurrentLeaderEpoch = -1
+}
+
+// NewOffsetForLeaderEpochRequestTopicPartition returns a default OffsetForLeaderEpochRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetForLeaderEpochRequestTopicPartition() OffsetForLeaderEpochRequestTopicPartition {
+ var v OffsetForLeaderEpochRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetForLeaderEpochRequestTopic struct {
+ // Topic is the name of a topic.
+ Topic string
+
+ // Partitions are partitions within a topic to fetch leader epoch offsets for.
+ Partitions []OffsetForLeaderEpochRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetForLeaderEpochRequestTopic.
+func (v *OffsetForLeaderEpochRequestTopic) Default() {
+}
+
+// NewOffsetForLeaderEpochRequestTopic returns a default OffsetForLeaderEpochRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetForLeaderEpochRequestTopic() OffsetForLeaderEpochRequestTopic {
+ var v OffsetForLeaderEpochRequestTopic
+ v.Default()
+ return v
+}
+
+// OffsetForLeaderEpochRequest requests log end offsets for partitions.
+//
+// Version 2, proposed in KIP-320 and introduced in Kafka 2.1.0, can be used by
+// consumers to perform more accurate offset resetting in the case of data loss.
+//
+// In support of version 2, this requires DESCRIBE on TOPIC.
+type OffsetForLeaderEpochRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ReplicaID, added in support of KIP-392, is the broker ID of the follower,
+ // or -1 if this request is from a consumer.
+ //
+ // This field has a default of -2.
+ ReplicaID int32 // v3+
+
+ // Topics are topics to fetch leader epoch offsets for.
+ Topics []OffsetForLeaderEpochRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*OffsetForLeaderEpochRequest) Key() int16 { return 23 }
+func (*OffsetForLeaderEpochRequest) MaxVersion() int16 { return 4 }
+func (v *OffsetForLeaderEpochRequest) SetVersion(version int16) { v.Version = version }
+func (v *OffsetForLeaderEpochRequest) GetVersion() int16 { return v.Version }
+func (v *OffsetForLeaderEpochRequest) IsFlexible() bool { return v.Version >= 4 }
+func (v *OffsetForLeaderEpochRequest) ResponseKind() Response {
+ r := &OffsetForLeaderEpochResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *OffsetForLeaderEpochRequest) RequestWith(ctx context.Context, r Requestor) (*OffsetForLeaderEpochResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*OffsetForLeaderEpochResponse)
+ return resp, err
+}
+
+func (v *OffsetForLeaderEpochRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ if version >= 3 {
+ v := v.ReplicaID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 2 {
+ v := v.CurrentLeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *OffsetForLeaderEpochRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetForLeaderEpochRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetForLeaderEpochRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ if version >= 3 {
+ v := b.Int32()
+ s.ReplicaID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetForLeaderEpochRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetForLeaderEpochRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ if version >= 2 {
+ v := b.Int32()
+ s.CurrentLeaderEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetForLeaderEpochRequest returns a pointer to a default OffsetForLeaderEpochRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetForLeaderEpochRequest() *OffsetForLeaderEpochRequest {
+ var v OffsetForLeaderEpochRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetForLeaderEpochRequest.
+func (v *OffsetForLeaderEpochRequest) Default() {
+ v.ReplicaID = -2
+}
+
+// NewOffsetForLeaderEpochRequest returns a default OffsetForLeaderEpochRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetForLeaderEpochRequest() OffsetForLeaderEpochRequest {
+ var v OffsetForLeaderEpochRequest
+ v.Default()
+ return v
+}
+
+type OffsetForLeaderEpochResponseTopicPartition struct {
+ // ErrorCode is the error code returned on request failure.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client does not have
+ // the necessary permissions to issue this request.
+ //
+ // KAFKA_STORAGE_ERROR is returned if the partition is offline.
+ //
+ // NOT_LEADER_FOR_PARTITION is returned if the broker knows of the partition
+ // but does not own it.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of the
+ // partition.
+ //
+ // FENCED_LEADER_EPOCH is returned if the client is using a current leader epoch
+ // older than the actual leader epoch.
+ //
+ // UNKNOWN_LEADER_EPOCH if returned if the client is using a current leader epoch
+ // that the actual leader does not know of. This could occur when the client
+ // has newer metadata than the broker when the broker just became the leader for
+ // a replica.
+ ErrorCode int16
+
+ // Partition is the partition this response is for.
+ Partition int32
+
+ // LeaderEpoch is similar to the requested leader epoch, but pairs with the
+ // next field. If the requested leader epoch is unknown, this is -1. If the
+ // requested epoch had no records produced during the requested epoch, this
+ // is the first prior epoch that had records.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32 // v1+
+
+ // EndOffset is either (1) just past the last recorded offset in the
+ // current partition if the broker leader has the same epoch as the
+ // leader epoch in the request, or (2) the beginning offset of the next
+ // epoch if the leader is past the requested epoch. The second scenario
+ // can be seen as equivalent to the first: the beginning offset of the
+ // next epoch is just past the final offset of the prior epoch.
+ //
+ // (2) allows consumers to detect data loss: if the consumer consumed
+ // past the end offset that is returned, then the consumer should reset
+ // to the returned offset and the consumer knows everything past the end
+ // offset was lost.
+ //
+ // With the prior field, consumers know that at this offset, the broker
+ // either has no more records (consumer is caught up), or the broker
+ // transitioned to a new epoch.
+ //
+ // This field has a default of -1.
+ EndOffset int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetForLeaderEpochResponseTopicPartition.
+func (v *OffsetForLeaderEpochResponseTopicPartition) Default() {
+ v.LeaderEpoch = -1
+ v.EndOffset = -1
+}
+
+// NewOffsetForLeaderEpochResponseTopicPartition returns a default OffsetForLeaderEpochResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetForLeaderEpochResponseTopicPartition() OffsetForLeaderEpochResponseTopicPartition {
+ var v OffsetForLeaderEpochResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetForLeaderEpochResponseTopic struct {
+ // Topic is the topic this response corresponds to.
+ Topic string
+
+ // Partitions are responses to partitions in a topic in the request.
+ Partitions []OffsetForLeaderEpochResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetForLeaderEpochResponseTopic.
+func (v *OffsetForLeaderEpochResponseTopic) Default() {
+}
+
+// NewOffsetForLeaderEpochResponseTopic returns a default OffsetForLeaderEpochResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetForLeaderEpochResponseTopic() OffsetForLeaderEpochResponseTopic {
+ var v OffsetForLeaderEpochResponseTopic
+ v.Default()
+ return v
+}
+
+// OffsetForLeaderEpochResponse is returned from an OffsetForLeaderEpochRequest.
+type OffsetForLeaderEpochResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32 // v2+
+
+ // Topics are responses to topics in the request.
+ Topics []OffsetForLeaderEpochResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*OffsetForLeaderEpochResponse) Key() int16 { return 23 }
+func (*OffsetForLeaderEpochResponse) MaxVersion() int16 { return 4 }
+func (v *OffsetForLeaderEpochResponse) SetVersion(version int16) { v.Version = version }
+func (v *OffsetForLeaderEpochResponse) GetVersion() int16 { return v.Version }
+func (v *OffsetForLeaderEpochResponse) IsFlexible() bool { return v.Version >= 4 }
+func (v *OffsetForLeaderEpochResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *OffsetForLeaderEpochResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *OffsetForLeaderEpochResponse) RequestKind() Request {
+ return &OffsetForLeaderEpochRequest{Version: v.Version}
+}
+
+func (v *OffsetForLeaderEpochResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ if version >= 2 {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.EndOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *OffsetForLeaderEpochResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetForLeaderEpochResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetForLeaderEpochResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ if version >= 2 {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetForLeaderEpochResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetForLeaderEpochResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ if version >= 1 {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := b.Int64()
+ s.EndOffset = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetForLeaderEpochResponse returns a pointer to a default OffsetForLeaderEpochResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetForLeaderEpochResponse() *OffsetForLeaderEpochResponse {
+ var v OffsetForLeaderEpochResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetForLeaderEpochResponse.
+func (v *OffsetForLeaderEpochResponse) Default() {
+}
+
+// NewOffsetForLeaderEpochResponse returns a default OffsetForLeaderEpochResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetForLeaderEpochResponse() OffsetForLeaderEpochResponse {
+ var v OffsetForLeaderEpochResponse
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnRequestTopic struct {
+ // Topic is a topic name.
+ Topic string
+
+ // Partitions are partitions within a topic to add as part of the producer
+ // side of a transaction.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnRequestTopic.
+func (v *AddPartitionsToTxnRequestTopic) Default() {
+}
+
+// NewAddPartitionsToTxnRequestTopic returns a default AddPartitionsToTxnRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnRequestTopic() AddPartitionsToTxnRequestTopic {
+ var v AddPartitionsToTxnRequestTopic
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnRequestTransactionTopic struct {
+ Topic string
+
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnRequestTransactionTopic.
+func (v *AddPartitionsToTxnRequestTransactionTopic) Default() {
+}
+
+// NewAddPartitionsToTxnRequestTransactionTopic returns a default AddPartitionsToTxnRequestTransactionTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnRequestTransactionTopic() AddPartitionsToTxnRequestTransactionTopic {
+ var v AddPartitionsToTxnRequestTransactionTopic
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnRequestTransaction struct {
+ TransactionalID string
+
+ ProducerID int64
+
+ ProducerEpoch int16
+
+ // VerifyOnly signifies if we want to check if the partition is in the
+ // transaction rather than add it.
+ VerifyOnly bool
+
+ Topics []AddPartitionsToTxnRequestTransactionTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnRequestTransaction.
+func (v *AddPartitionsToTxnRequestTransaction) Default() {
+}
+
+// NewAddPartitionsToTxnRequestTransaction returns a default AddPartitionsToTxnRequestTransaction
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnRequestTransaction() AddPartitionsToTxnRequestTransaction {
+ var v AddPartitionsToTxnRequestTransaction
+ v.Default()
+ return v
+}
+
+// AddPartitionsToTxnRequest begins the producer side of a transaction for all
+// partitions in the request. Before producing any records to a partition in
+// the transaction, that partition must have been added to the transaction with
+// this request.
+//
+// Versions 3 and below are exclusively used by clients and versions 4 and
+// above are used by brokers.
+//
+// Version 4 adds VerifyOnly field to check if partitions are already in
+// transaction and adds support to batch multiple transactions.
+type AddPartitionsToTxnRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TransactionalID is the transactional ID to use for this request.
+ TransactionalID string // v0-v3
+
+ // ProducerID is the producer ID of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerID int64 // v0-v3
+
+ // ProducerEpoch is the producer epoch of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerEpoch int16 // v0-v3
+
+ // Topics are topics to add as part of the producer side of a transaction.
+ Topics []AddPartitionsToTxnRequestTopic // v0-v3
+
+ // The list of transactions to add partitions to, for v4+, for brokers only.
+ // The fields in this are batch broker requests that duplicate the above fields
+ // and thus are undocumented (except VerifyOnly, which is new).
+ Transactions []AddPartitionsToTxnRequestTransaction // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*AddPartitionsToTxnRequest) Key() int16 { return 24 }
+func (*AddPartitionsToTxnRequest) MaxVersion() int16 { return 4 }
+func (v *AddPartitionsToTxnRequest) SetVersion(version int16) { v.Version = version }
+func (v *AddPartitionsToTxnRequest) GetVersion() int16 { return v.Version }
+func (v *AddPartitionsToTxnRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *AddPartitionsToTxnRequest) IsTxnCoordinatorRequest() {}
+func (v *AddPartitionsToTxnRequest) ResponseKind() Response {
+ r := &AddPartitionsToTxnResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AddPartitionsToTxnRequest) RequestWith(ctx context.Context, r Requestor) (*AddPartitionsToTxnResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AddPartitionsToTxnResponse)
+ return resp, err
+}
+
+func (v *AddPartitionsToTxnRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ if version >= 0 && version <= 3 {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 0 && version <= 3 {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 0 && version <= 3 {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 0 && version <= 3 {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 4 {
+ v := v.Transactions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.VerifyOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AddPartitionsToTxnRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AddPartitionsToTxnRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AddPartitionsToTxnRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ if version >= 0 && version <= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ if version >= 0 && version <= 3 {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ if version >= 0 && version <= 3 {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ if version >= 0 && version <= 3 {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 4 {
+ v := s.Transactions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnRequestTransaction, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ {
+ v := b.Bool()
+ s.VerifyOnly = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnRequestTransactionTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Transactions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAddPartitionsToTxnRequest returns a pointer to a default AddPartitionsToTxnRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAddPartitionsToTxnRequest() *AddPartitionsToTxnRequest {
+ var v AddPartitionsToTxnRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnRequest.
+func (v *AddPartitionsToTxnRequest) Default() {
+}
+
+// NewAddPartitionsToTxnRequest returns a default AddPartitionsToTxnRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnRequest() AddPartitionsToTxnRequest {
+ var v AddPartitionsToTxnRequest
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnResponseTransactionTopicPartition struct {
+ Partition int32
+
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnResponseTransactionTopicPartition.
+func (v *AddPartitionsToTxnResponseTransactionTopicPartition) Default() {
+}
+
+// NewAddPartitionsToTxnResponseTransactionTopicPartition returns a default AddPartitionsToTxnResponseTransactionTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnResponseTransactionTopicPartition() AddPartitionsToTxnResponseTransactionTopicPartition {
+ var v AddPartitionsToTxnResponseTransactionTopicPartition
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnResponseTransactionTopic struct {
+ Topic string
+
+ Partitions []AddPartitionsToTxnResponseTransactionTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnResponseTransactionTopic.
+func (v *AddPartitionsToTxnResponseTransactionTopic) Default() {
+}
+
+// NewAddPartitionsToTxnResponseTransactionTopic returns a default AddPartitionsToTxnResponseTransactionTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnResponseTransactionTopic() AddPartitionsToTxnResponseTransactionTopic {
+ var v AddPartitionsToTxnResponseTransactionTopic
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnResponseTransaction struct {
+ // The transactional id corresponding to the transaction.
+ TransactionalID string
+
+ Topics []AddPartitionsToTxnResponseTransactionTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnResponseTransaction.
+func (v *AddPartitionsToTxnResponseTransaction) Default() {
+}
+
+// NewAddPartitionsToTxnResponseTransaction returns a default AddPartitionsToTxnResponseTransaction
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnResponseTransaction() AddPartitionsToTxnResponseTransaction {
+ var v AddPartitionsToTxnResponseTransaction
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnResponseTopicPartition struct {
+ // Partition is a partition being responded to.
+ Partition int32
+
+ // ErrorCode is any error for this topic/partition commit.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned if the client is
+ // not authorized for write with transactional IDs with the requested
+ // transactional ID.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned for all topics that the client
+ // is not authorized to write to.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned for all topics or partitions
+ // that the broker does not know of.
+ //
+ // OPERATION_NOT_ATTEMPTED is returned if any of the above errors occur
+ // for all partitions that did not have the above errors.
+ //
+ // INVALID_REQUEST is returned if the transactional ID is invalid.
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the coordinator for this
+ // transactional ID is still loading.
+ //
+ // NOT_COORDINATOR is returned if the broker is not the coordinator for
+ // this transactional ID.
+ //
+ // INVALID_PRODUCER_ID_MAPPING is returned if the produce request used
+ // a producer ID that is not tied to the transactional ID (i.e., mismatch
+ // from what was returned from InitProducerID).
+ //
+ // INVALID_PRODUCER_EPOCH is returned if the requested epoch does not match
+ // the broker epoch for this transactional ID.
+ //
+ // CONCURRENT_TRANSACTIONS is returned if there is an ongoing transaction for
+ // this transactional ID, if the producer ID and epoch matches the broker's.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnResponseTopicPartition.
+func (v *AddPartitionsToTxnResponseTopicPartition) Default() {
+}
+
+// NewAddPartitionsToTxnResponseTopicPartition returns a default AddPartitionsToTxnResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnResponseTopicPartition() AddPartitionsToTxnResponseTopicPartition {
+ var v AddPartitionsToTxnResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type AddPartitionsToTxnResponseTopic struct {
+ // Topic is a topic being responded to.
+ Topic string
+
+ // Partitions are responses to partitions in the request.
+ Partitions []AddPartitionsToTxnResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnResponseTopic.
+func (v *AddPartitionsToTxnResponseTopic) Default() {
+}
+
+// NewAddPartitionsToTxnResponseTopic returns a default AddPartitionsToTxnResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnResponseTopic() AddPartitionsToTxnResponseTopic {
+ var v AddPartitionsToTxnResponseTopic
+ v.Default()
+ return v
+}
+
+// AddPartitionsToTxnResponse is a response to an AddPartitionsToTxnRequest.
+type AddPartitionsToTxnResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // The response top level error code.
+ ErrorCode int16 // v4+
+
+ // Results categorized by transactional ID, v4+ only, for brokers only.
+ // The fields duplicate v3 and below fields (except TransactionalID) and
+ // are left undocumented.
+ Transactions []AddPartitionsToTxnResponseTransaction // v4+
+
+ // Topics are responses to topics in the request.
+ Topics []AddPartitionsToTxnResponseTopic // v0-v3
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*AddPartitionsToTxnResponse) Key() int16 { return 24 }
+func (*AddPartitionsToTxnResponse) MaxVersion() int16 { return 4 }
+func (v *AddPartitionsToTxnResponse) SetVersion(version int16) { v.Version = version }
+func (v *AddPartitionsToTxnResponse) GetVersion() int16 { return v.Version }
+func (v *AddPartitionsToTxnResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *AddPartitionsToTxnResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 1
+}
+
+func (v *AddPartitionsToTxnResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *AddPartitionsToTxnResponse) RequestKind() Request {
+ return &AddPartitionsToTxnRequest{Version: v.Version}
+}
+
+func (v *AddPartitionsToTxnResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 4 {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 4 {
+ v := v.Transactions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 0 && version <= 3 {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AddPartitionsToTxnResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AddPartitionsToTxnResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AddPartitionsToTxnResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if version >= 4 {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if version >= 4 {
+ v := s.Transactions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnResponseTransaction, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnResponseTransactionTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnResponseTransactionTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Transactions = v
+ }
+ if version >= 0 && version <= 3 {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AddPartitionsToTxnResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAddPartitionsToTxnResponse returns a pointer to a default AddPartitionsToTxnResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAddPartitionsToTxnResponse() *AddPartitionsToTxnResponse {
+ var v AddPartitionsToTxnResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddPartitionsToTxnResponse.
+func (v *AddPartitionsToTxnResponse) Default() {
+}
+
+// NewAddPartitionsToTxnResponse returns a default AddPartitionsToTxnResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddPartitionsToTxnResponse() AddPartitionsToTxnResponse {
+ var v AddPartitionsToTxnResponse
+ v.Default()
+ return v
+}
+
+// AddOffsetsToTxnRequest is a request that ties produced records to what group
+// is being consumed for the transaction.
+//
+// This request must be called before TxnOffsetCommitRequest.
+//
+// Internally, this request simply adds the __consumer_offsets topic as a
+// partition for this transaction with AddPartitionsToTxn for the partition
+// in that topic that contains the group.
+type AddOffsetsToTxnRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TransactionalID is the transactional ID to use for this request.
+ TransactionalID string
+
+ // ProducerID is the producer ID of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerID int64
+
+ // ProducerEpoch is the producer epoch of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerEpoch int16
+
+ // Group is the group to tie this transaction to.
+ Group string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*AddOffsetsToTxnRequest) Key() int16 { return 25 }
+func (*AddOffsetsToTxnRequest) MaxVersion() int16 { return 3 }
+func (v *AddOffsetsToTxnRequest) SetVersion(version int16) { v.Version = version }
+func (v *AddOffsetsToTxnRequest) GetVersion() int16 { return v.Version }
+func (v *AddOffsetsToTxnRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *AddOffsetsToTxnRequest) IsTxnCoordinatorRequest() {}
+func (v *AddOffsetsToTxnRequest) ResponseKind() Response {
+ r := &AddOffsetsToTxnResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AddOffsetsToTxnRequest) RequestWith(ctx context.Context, r Requestor) (*AddOffsetsToTxnResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AddOffsetsToTxnResponse)
+ return resp, err
+}
+
+func (v *AddOffsetsToTxnRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AddOffsetsToTxnRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AddOffsetsToTxnRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AddOffsetsToTxnRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAddOffsetsToTxnRequest returns a pointer to a default AddOffsetsToTxnRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAddOffsetsToTxnRequest() *AddOffsetsToTxnRequest {
+ var v AddOffsetsToTxnRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddOffsetsToTxnRequest.
+func (v *AddOffsetsToTxnRequest) Default() {
+}
+
+// NewAddOffsetsToTxnRequest returns a default AddOffsetsToTxnRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddOffsetsToTxnRequest() AddOffsetsToTxnRequest {
+ var v AddOffsetsToTxnRequest
+ v.Default()
+ return v
+}
+
+// AddOffsetsToTxnResponse is a response to an AddOffsetsToTxnRequest.
+type AddOffsetsToTxnResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // ErrorCode is any error for this topic/partition commit.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned if the client is
+ // not authorized for write with transactional IDs with the requested
+ // transactional ID.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to read group with the requested group id.
+ //
+ // This also can return any error that AddPartitionsToTxn returns.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*AddOffsetsToTxnResponse) Key() int16 { return 25 }
+func (*AddOffsetsToTxnResponse) MaxVersion() int16 { return 3 }
+func (v *AddOffsetsToTxnResponse) SetVersion(version int16) { v.Version = version }
+func (v *AddOffsetsToTxnResponse) GetVersion() int16 { return v.Version }
+func (v *AddOffsetsToTxnResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *AddOffsetsToTxnResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *AddOffsetsToTxnResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *AddOffsetsToTxnResponse) RequestKind() Request {
+ return &AddOffsetsToTxnRequest{Version: v.Version}
+}
+
+func (v *AddOffsetsToTxnResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AddOffsetsToTxnResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AddOffsetsToTxnResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AddOffsetsToTxnResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAddOffsetsToTxnResponse returns a pointer to a default AddOffsetsToTxnResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAddOffsetsToTxnResponse() *AddOffsetsToTxnResponse {
+ var v AddOffsetsToTxnResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AddOffsetsToTxnResponse.
+func (v *AddOffsetsToTxnResponse) Default() {
+}
+
+// NewAddOffsetsToTxnResponse returns a default AddOffsetsToTxnResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAddOffsetsToTxnResponse() AddOffsetsToTxnResponse {
+ var v AddOffsetsToTxnResponse
+ v.Default()
+ return v
+}
+
+// EndTxnRequest ends a transaction. This should be called after
+// TxnOffsetCommitRequest.
+type EndTxnRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TransactionalID is the transactional ID to use for this request.
+ TransactionalID string
+
+ // ProducerID is the producer ID of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerID int64
+
+ // ProducerEpoch is the producer epoch of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerEpoch int16
+
+ // Commit is whether to commit this transaction: true for yes, false for abort.
+ Commit bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*EndTxnRequest) Key() int16 { return 26 }
+func (*EndTxnRequest) MaxVersion() int16 { return 3 }
+func (v *EndTxnRequest) SetVersion(version int16) { v.Version = version }
+func (v *EndTxnRequest) GetVersion() int16 { return v.Version }
+func (v *EndTxnRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *EndTxnRequest) IsTxnCoordinatorRequest() {}
+func (v *EndTxnRequest) ResponseKind() Response {
+ r := &EndTxnResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *EndTxnRequest) RequestWith(ctx context.Context, r Requestor) (*EndTxnResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*EndTxnResponse)
+ return resp, err
+}
+
+func (v *EndTxnRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Commit
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *EndTxnRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *EndTxnRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *EndTxnRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ {
+ v := b.Bool()
+ s.Commit = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrEndTxnRequest returns a pointer to a default EndTxnRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrEndTxnRequest() *EndTxnRequest {
+ var v EndTxnRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndTxnRequest.
+func (v *EndTxnRequest) Default() {
+}
+
+// NewEndTxnRequest returns a default EndTxnRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndTxnRequest() EndTxnRequest {
+ var v EndTxnRequest
+ v.Default()
+ return v
+}
+
+// EndTxnResponse is a response for an EndTxnRequest.
+type EndTxnResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // ErrorCode is any error for this topic/partition commit.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned if the client is
+ // not authorized for write with transactional IDs with the requested
+ // transactional ID.
+ //
+ // INVALID_REQUEST is returned if the transactional ID is invalid.
+ //
+ // INVALID_PRODUCER_ID_MAPPING is returned if the produce request used
+ // a producer ID that is not tied to the transactional ID (i.e., mismatch
+ // from what was returned from InitProducerID).
+ //
+ // INVALID_PRODUCER_EPOCH is returned if the requested epoch does not match
+ // the broker epoch for this transactional ID.
+ //
+ // CONCURRENT_TRANSACTIONS is returned if there is an ongoing transaction for
+ // this transactional ID, if the producer ID and epoch matches the broker's.
+ //
+ // INVALID_TXN_STATE is returned if this request is attempted at the wrong
+ // time (given the order of how transaction requests should go).
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*EndTxnResponse) Key() int16 { return 26 }
+func (*EndTxnResponse) MaxVersion() int16 { return 3 }
+func (v *EndTxnResponse) SetVersion(version int16) { v.Version = version }
+func (v *EndTxnResponse) GetVersion() int16 { return v.Version }
+func (v *EndTxnResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *EndTxnResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *EndTxnResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *EndTxnResponse) RequestKind() Request { return &EndTxnRequest{Version: v.Version} }
+
+func (v *EndTxnResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *EndTxnResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *EndTxnResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *EndTxnResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrEndTxnResponse returns a pointer to a default EndTxnResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrEndTxnResponse() *EndTxnResponse {
+ var v EndTxnResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndTxnResponse.
+func (v *EndTxnResponse) Default() {
+}
+
+// NewEndTxnResponse returns a default EndTxnResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndTxnResponse() EndTxnResponse {
+ var v EndTxnResponse
+ v.Default()
+ return v
+}
+
+type WriteTxnMarkersRequestMarkerTopic struct {
+ // Topic is the name of the topic to write markers for.
+ Topic string
+
+ // Partitions contains partitions to write markers for.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to WriteTxnMarkersRequestMarkerTopic.
+func (v *WriteTxnMarkersRequestMarkerTopic) Default() {
+}
+
+// NewWriteTxnMarkersRequestMarkerTopic returns a default WriteTxnMarkersRequestMarkerTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewWriteTxnMarkersRequestMarkerTopic() WriteTxnMarkersRequestMarkerTopic {
+ var v WriteTxnMarkersRequestMarkerTopic
+ v.Default()
+ return v
+}
+
+type WriteTxnMarkersRequestMarker struct {
+ // ProducerID is the current producer ID to use when writing a marker.
+ ProducerID int64
+
+ // ProducerEpoch is the current producer epoch to use when writing a
+ // marker.
+ ProducerEpoch int16
+
+ // Committed is true if this marker is for a committed transaction,
+ // otherwise false if this is for an aborted transaction.
+ Committed bool
+
+ // Topics contains the topics we are writing markers for.
+ Topics []WriteTxnMarkersRequestMarkerTopic
+
+ // CoordinatorEpoch is the current epoch of the transaction coordinator we
+ // are writing a marker to. This is used to detect fenced writers.
+ CoordinatorEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to WriteTxnMarkersRequestMarker.
+func (v *WriteTxnMarkersRequestMarker) Default() {
+}
+
+// NewWriteTxnMarkersRequestMarker returns a default WriteTxnMarkersRequestMarker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewWriteTxnMarkersRequestMarker() WriteTxnMarkersRequestMarker {
+ var v WriteTxnMarkersRequestMarker
+ v.Default()
+ return v
+}
+
+// WriteTxnMarkersRequest is a broker-to-broker request that Kafka uses to
+// finish transactions.
+type WriteTxnMarkersRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Markers contains transactional markers to be written.
+ Markers []WriteTxnMarkersRequestMarker
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*WriteTxnMarkersRequest) Key() int16 { return 27 }
+func (*WriteTxnMarkersRequest) MaxVersion() int16 { return 1 }
+func (v *WriteTxnMarkersRequest) SetVersion(version int16) { v.Version = version }
+func (v *WriteTxnMarkersRequest) GetVersion() int16 { return v.Version }
+func (v *WriteTxnMarkersRequest) IsFlexible() bool { return v.Version >= 1 }
+func (v *WriteTxnMarkersRequest) ResponseKind() Response {
+ r := &WriteTxnMarkersResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *WriteTxnMarkersRequest) RequestWith(ctx context.Context, r Requestor) (*WriteTxnMarkersResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*WriteTxnMarkersResponse)
+ return resp, err
+}
+
+func (v *WriteTxnMarkersRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.Markers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Committed
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.CoordinatorEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *WriteTxnMarkersRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *WriteTxnMarkersRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *WriteTxnMarkersRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := s.Markers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]WriteTxnMarkersRequestMarker, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ {
+ v := b.Bool()
+ s.Committed = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]WriteTxnMarkersRequestMarkerTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int32()
+ s.CoordinatorEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Markers = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrWriteTxnMarkersRequest returns a pointer to a default WriteTxnMarkersRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrWriteTxnMarkersRequest() *WriteTxnMarkersRequest {
+ var v WriteTxnMarkersRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to WriteTxnMarkersRequest.
+func (v *WriteTxnMarkersRequest) Default() {
+}
+
+// NewWriteTxnMarkersRequest returns a default WriteTxnMarkersRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewWriteTxnMarkersRequest() WriteTxnMarkersRequest {
+ var v WriteTxnMarkersRequest
+ v.Default()
+ return v
+}
+
+type WriteTxnMarkersResponseMarkerTopicPartition struct {
+ // Partition is the partition this result is for.
+ Partition int32
+
+ // ErrorCode is non-nil if writing the transansactional marker for this
+ // partition errored.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if the user does not have
+ // CLUSTER_ACTION on CLUSTER.
+ //
+ // NOT_LEADER_OR_FOLLOWER is returned if the broker receiving this
+ // request is not the leader of the partition.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the topic or partition is
+ // not known to exist.
+ //
+ // INVALID_PRODUCER_EPOCH is returned if the cluster epoch is provided
+ // and the provided epoch does not match.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to WriteTxnMarkersResponseMarkerTopicPartition.
+func (v *WriteTxnMarkersResponseMarkerTopicPartition) Default() {
+}
+
+// NewWriteTxnMarkersResponseMarkerTopicPartition returns a default WriteTxnMarkersResponseMarkerTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewWriteTxnMarkersResponseMarkerTopicPartition() WriteTxnMarkersResponseMarkerTopicPartition {
+ var v WriteTxnMarkersResponseMarkerTopicPartition
+ v.Default()
+ return v
+}
+
+type WriteTxnMarkersResponseMarkerTopic struct {
+ // Topic is the topic these results are for.
+ Topic string
+
+ // Partitions contains per-partition results for the write markers
+ // request.
+ Partitions []WriteTxnMarkersResponseMarkerTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to WriteTxnMarkersResponseMarkerTopic.
+func (v *WriteTxnMarkersResponseMarkerTopic) Default() {
+}
+
+// NewWriteTxnMarkersResponseMarkerTopic returns a default WriteTxnMarkersResponseMarkerTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewWriteTxnMarkersResponseMarkerTopic() WriteTxnMarkersResponseMarkerTopic {
+ var v WriteTxnMarkersResponseMarkerTopic
+ v.Default()
+ return v
+}
+
+type WriteTxnMarkersResponseMarker struct {
+ // ProducerID is the producer ID these results are for (from the input
+ // request).
+ ProducerID int64
+
+ // Topics contains the results for the write markers request.
+ Topics []WriteTxnMarkersResponseMarkerTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to WriteTxnMarkersResponseMarker.
+func (v *WriteTxnMarkersResponseMarker) Default() {
+}
+
+// NewWriteTxnMarkersResponseMarker returns a default WriteTxnMarkersResponseMarker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewWriteTxnMarkersResponseMarker() WriteTxnMarkersResponseMarker {
+ var v WriteTxnMarkersResponseMarker
+ v.Default()
+ return v
+}
+
+// WriteTxnMarkersResponse is a response to a WriteTxnMarkersRequest.
+type WriteTxnMarkersResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Markers contains results for writing transactional markers.
+ Markers []WriteTxnMarkersResponseMarker
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*WriteTxnMarkersResponse) Key() int16 { return 27 }
+func (*WriteTxnMarkersResponse) MaxVersion() int16 { return 1 }
+func (v *WriteTxnMarkersResponse) SetVersion(version int16) { v.Version = version }
+func (v *WriteTxnMarkersResponse) GetVersion() int16 { return v.Version }
+func (v *WriteTxnMarkersResponse) IsFlexible() bool { return v.Version >= 1 }
+func (v *WriteTxnMarkersResponse) RequestKind() Request {
+ return &WriteTxnMarkersRequest{Version: v.Version}
+}
+
+func (v *WriteTxnMarkersResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.Markers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *WriteTxnMarkersResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *WriteTxnMarkersResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *WriteTxnMarkersResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := s.Markers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]WriteTxnMarkersResponseMarker, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]WriteTxnMarkersResponseMarkerTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]WriteTxnMarkersResponseMarkerTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Markers = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrWriteTxnMarkersResponse returns a pointer to a default WriteTxnMarkersResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrWriteTxnMarkersResponse() *WriteTxnMarkersResponse {
+ var v WriteTxnMarkersResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to WriteTxnMarkersResponse.
+func (v *WriteTxnMarkersResponse) Default() {
+}
+
+// NewWriteTxnMarkersResponse returns a default WriteTxnMarkersResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewWriteTxnMarkersResponse() WriteTxnMarkersResponse {
+ var v WriteTxnMarkersResponse
+ v.Default()
+ return v
+}
+
+type TxnOffsetCommitRequestTopicPartition struct {
+ // Partition is a partition to add for a pending commit.
+ Partition int32
+
+ // Offset is the offset within partition to commit once EndTxnRequest is
+ // called (with commit; abort obviously aborts).
+ Offset int64
+
+ // LeaderEpoch, proposed in KIP-320 and introduced in Kafka 2.1.0,
+ // allows brokers to check if the client is fenced (has an out of date
+ // leader) or is using an unknown leader.
+ //
+ // The initial leader epoch can be determined from a MetadataResponse.
+ // To skip log truncation checking, use -1.
+ //
+ // This field has a default of -1.
+ LeaderEpoch int32 // v2+
+
+ // Metadata is optional metadata the client wants to include with this
+ // commit.
+ Metadata *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnOffsetCommitRequestTopicPartition.
+func (v *TxnOffsetCommitRequestTopicPartition) Default() {
+ v.LeaderEpoch = -1
+}
+
+// NewTxnOffsetCommitRequestTopicPartition returns a default TxnOffsetCommitRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnOffsetCommitRequestTopicPartition() TxnOffsetCommitRequestTopicPartition {
+ var v TxnOffsetCommitRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type TxnOffsetCommitRequestTopic struct {
+ // Topic is a topic to add for a pending commit.
+ Topic string
+
+ // Partitions are partitions to add for pending commits.
+ Partitions []TxnOffsetCommitRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnOffsetCommitRequestTopic.
+func (v *TxnOffsetCommitRequestTopic) Default() {
+}
+
+// NewTxnOffsetCommitRequestTopic returns a default TxnOffsetCommitRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnOffsetCommitRequestTopic() TxnOffsetCommitRequestTopic {
+ var v TxnOffsetCommitRequestTopic
+ v.Default()
+ return v
+}
+
+// TxnOffsetCommitRequest sends offsets that are a part of this transaction
+// to be committed once the transaction itself finishes. This effectively
+// replaces OffsetCommitRequest for when using transactions.
+type TxnOffsetCommitRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TransactionalID is the transactional ID to use for this request.
+ TransactionalID string
+
+ // Group is the group consumed in this transaction and to be used for
+ // committing.
+ Group string
+
+ // ProducerID is the producer ID of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerID int64
+
+ // ProducerEpoch is the producer epoch of the client for this transactional ID
+ // as received from InitProducerID.
+ ProducerEpoch int16
+
+ // Generation is the group generation this transactional offset commit request is for.
+ //
+ // This field has a default of -1.
+ Generation int32 // v3+
+
+ // MemberID is the member ID this member is for.
+ MemberID string // v3+
+
+ // InstanceID is the instance ID of this member in the group (KIP-345, KIP-447).
+ InstanceID *string // v3+
+
+ // Topics are topics to add for pending commits.
+ Topics []TxnOffsetCommitRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*TxnOffsetCommitRequest) Key() int16 { return 28 }
+func (*TxnOffsetCommitRequest) MaxVersion() int16 { return 3 }
+func (v *TxnOffsetCommitRequest) SetVersion(version int16) { v.Version = version }
+func (v *TxnOffsetCommitRequest) GetVersion() int16 { return v.Version }
+func (v *TxnOffsetCommitRequest) IsFlexible() bool { return v.Version >= 3 }
+func (v *TxnOffsetCommitRequest) IsGroupCoordinatorRequest() {}
+func (v *TxnOffsetCommitRequest) ResponseKind() Response {
+ r := &TxnOffsetCommitResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *TxnOffsetCommitRequest) RequestWith(ctx context.Context, r Requestor) (*TxnOffsetCommitResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*TxnOffsetCommitResponse)
+ return resp, err
+}
+
+func (v *TxnOffsetCommitRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 3 {
+ v := v.Generation
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 3 {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Offset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 2 {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Metadata
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *TxnOffsetCommitRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *TxnOffsetCommitRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *TxnOffsetCommitRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ if version >= 3 {
+ v := b.Int32()
+ s.Generation = v
+ }
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ if version >= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]TxnOffsetCommitRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]TxnOffsetCommitRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int64()
+ s.Offset = v
+ }
+ if version >= 2 {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Metadata = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrTxnOffsetCommitRequest returns a pointer to a default TxnOffsetCommitRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrTxnOffsetCommitRequest() *TxnOffsetCommitRequest {
+ var v TxnOffsetCommitRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnOffsetCommitRequest.
+func (v *TxnOffsetCommitRequest) Default() {
+ v.Generation = -1
+}
+
+// NewTxnOffsetCommitRequest returns a default TxnOffsetCommitRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnOffsetCommitRequest() TxnOffsetCommitRequest {
+ var v TxnOffsetCommitRequest
+ v.Default()
+ return v
+}
+
+type TxnOffsetCommitResponseTopicPartition struct {
+ // Partition is the partition this response is for.
+ Partition int32
+
+ // ErrorCode is any error for this topic/partition commit.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned if the client is
+ // not authorized for write with transactional IDs with the requested
+ // transactional ID.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to read group with the requested group id.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned for all topics that the client
+ // is not authorized to read.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned for all topics or partitions
+ // that the broker does not know of.
+ //
+ // INVALID_GROUP_ID is returned if the requested group does not exist.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the broker is not yet fully
+ // started or is shutting down, or if the group was just deleted or is
+ // migrating to another broker.
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group is still loading.
+ //
+ // NOT_COORDINATOR is returned if the broker is not the coordinator for
+ // the group.
+ //
+ // FENCED_INSTANCE_ID is returned if the member is fenced (another newer
+ // transactional member is using the same instance ID).
+ //
+ // UNKNOWN_MEMBER_ID is returned if the consumer group does not know of
+ // this member.
+ //
+ // ILLEGAL_GENERATION is returned if the consumer group's generation is
+ // different than the requested generation.
+ //
+ // OFFSET_METADATA_TOO_LARGE is returned if the commit metadata is too
+ // large.
+ //
+ // REBALANCE_IN_PROGRESS is returned if the group is completing a rebalance.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnOffsetCommitResponseTopicPartition.
+func (v *TxnOffsetCommitResponseTopicPartition) Default() {
+}
+
+// NewTxnOffsetCommitResponseTopicPartition returns a default TxnOffsetCommitResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnOffsetCommitResponseTopicPartition() TxnOffsetCommitResponseTopicPartition {
+ var v TxnOffsetCommitResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type TxnOffsetCommitResponseTopic struct {
+ // Topic is the topic this response is for.
+ Topic string
+
+ // Partitions contains responses to the partitions in this topic.
+ Partitions []TxnOffsetCommitResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnOffsetCommitResponseTopic.
+func (v *TxnOffsetCommitResponseTopic) Default() {
+}
+
+// NewTxnOffsetCommitResponseTopic returns a default TxnOffsetCommitResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnOffsetCommitResponseTopic() TxnOffsetCommitResponseTopic {
+ var v TxnOffsetCommitResponseTopic
+ v.Default()
+ return v
+}
+
+// TxnOffsetCommitResponse is a response to a TxnOffsetCommitRequest.
+type TxnOffsetCommitResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Topics contains responses to the topics in the request.
+ Topics []TxnOffsetCommitResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v3+
+}
+
+func (*TxnOffsetCommitResponse) Key() int16 { return 28 }
+func (*TxnOffsetCommitResponse) MaxVersion() int16 { return 3 }
+func (v *TxnOffsetCommitResponse) SetVersion(version int16) { v.Version = version }
+func (v *TxnOffsetCommitResponse) GetVersion() int16 { return v.Version }
+func (v *TxnOffsetCommitResponse) IsFlexible() bool { return v.Version >= 3 }
+func (v *TxnOffsetCommitResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *TxnOffsetCommitResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *TxnOffsetCommitResponse) RequestKind() Request {
+ return &TxnOffsetCommitRequest{Version: v.Version}
+}
+
+func (v *TxnOffsetCommitResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *TxnOffsetCommitResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *TxnOffsetCommitResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *TxnOffsetCommitResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 3
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]TxnOffsetCommitResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]TxnOffsetCommitResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrTxnOffsetCommitResponse returns a pointer to a default TxnOffsetCommitResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrTxnOffsetCommitResponse() *TxnOffsetCommitResponse {
+ var v TxnOffsetCommitResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to TxnOffsetCommitResponse.
+func (v *TxnOffsetCommitResponse) Default() {
+}
+
+// NewTxnOffsetCommitResponse returns a default TxnOffsetCommitResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewTxnOffsetCommitResponse() TxnOffsetCommitResponse {
+ var v TxnOffsetCommitResponse
+ v.Default()
+ return v
+}
+
+// DescribeACLsRequest describes ACLs. Describing ACLs works on a filter basis:
+// anything that matches the filter is described. Note that there are two
+// "types" of filters in this request: the resource filter and the entry
+// filter, with entries corresponding to users. The first three fields form the
+// resource filter, the last four the entry filter.
+type DescribeACLsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ResourceType is the type of resource to describe.
+ ResourceType ACLResourceType
+
+ // ResourceName is the name to filter out. For the CLUSTER resource type,
+ // this must be "kafka-cluster".
+ ResourceName *string
+
+ // ResourcePatternType is how ResourceName is understood.
+ //
+ // This field has a default of 3.
+ ResourcePatternType ACLResourcePatternType // v1+
+
+ // Principal is the user to filter for. In Kafka with the simple authorizor,
+ // all principals begin with "User:". Pluggable authorizors are allowed, but
+ // Kafka still expects principals to lead with a principal type ("User") and
+ // have a colon separating the principal name ("bob" in "User:bob").
+ Principal *string
+
+ // Host is a host to filter for.
+ Host *string
+
+ // Operation is an operation to filter for.
+ //
+ // Note that READ, WRITE, DELETE, and ALTER imply DESCRIBE, and ALTER_CONFIGS
+ // implies DESCRIBE_CONFIGS.
+ Operation ACLOperation
+
+ // PermissionType is the permission type to filter for. UNKNOWN is 0.
+ PermissionType ACLPermissionType
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DescribeACLsRequest) Key() int16 { return 29 }
+func (*DescribeACLsRequest) MaxVersion() int16 { return 3 }
+func (v *DescribeACLsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeACLsRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeACLsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *DescribeACLsRequest) ResponseKind() Response {
+ r := &DescribeACLsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeACLsRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeACLsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeACLsResponse)
+ return resp, err
+}
+
+func (v *DescribeACLsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.ResourcePatternType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.Principal
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Operation
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.PermissionType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeACLsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeACLsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeACLsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ var t ACLResourceType
+ {
+ v := b.Int8()
+ t = ACLResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ResourceName = v
+ }
+ if version >= 1 {
+ var t ACLResourcePatternType
+ {
+ v := b.Int8()
+ t = ACLResourcePatternType(v)
+ }
+ v := t
+ s.ResourcePatternType = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Principal = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Host = v
+ }
+ {
+ var t ACLOperation
+ {
+ v := b.Int8()
+ t = ACLOperation(v)
+ }
+ v := t
+ s.Operation = v
+ }
+ {
+ var t ACLPermissionType
+ {
+ v := b.Int8()
+ t = ACLPermissionType(v)
+ }
+ v := t
+ s.PermissionType = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeACLsRequest returns a pointer to a default DescribeACLsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeACLsRequest() *DescribeACLsRequest {
+ var v DescribeACLsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeACLsRequest.
+func (v *DescribeACLsRequest) Default() {
+ v.ResourcePatternType = 3
+}
+
+// NewDescribeACLsRequest returns a default DescribeACLsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeACLsRequest() DescribeACLsRequest {
+ var v DescribeACLsRequest
+ v.Default()
+ return v
+}
+
+type DescribeACLsResponseResourceACL struct {
+ // Principal is who this ACL applies to.
+ Principal string
+
+ // Host is on which host this ACL applies.
+ Host string
+
+ // Operation is the operation being described.
+ Operation ACLOperation
+
+ // PermissionType is the permission being described.
+ PermissionType ACLPermissionType
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeACLsResponseResourceACL.
+func (v *DescribeACLsResponseResourceACL) Default() {
+}
+
+// NewDescribeACLsResponseResourceACL returns a default DescribeACLsResponseResourceACL
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeACLsResponseResourceACL() DescribeACLsResponseResourceACL {
+ var v DescribeACLsResponseResourceACL
+ v.Default()
+ return v
+}
+
+type DescribeACLsResponseResource struct {
+ // ResourceType is the resource type being described.
+ ResourceType ACLResourceType
+
+ // ResourceName is the resource name being described.
+ ResourceName string
+
+ // ResourcePatternType is the pattern type being described.
+ //
+ // This field has a default of 3.
+ ResourcePatternType ACLResourcePatternType // v1+
+
+ // ACLs contains users / entries being described.
+ ACLs []DescribeACLsResponseResourceACL
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeACLsResponseResource.
+func (v *DescribeACLsResponseResource) Default() {
+ v.ResourcePatternType = 3
+}
+
+// NewDescribeACLsResponseResource returns a default DescribeACLsResponseResource
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeACLsResponseResource() DescribeACLsResponseResource {
+ var v DescribeACLsResponseResource
+ v.Default()
+ return v
+}
+
+// DescribeACLsResponse is a response to a describe acls request.
+type DescribeACLsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // ErrorCode is the error code returned on request failure.
+ //
+ // SECURITY_DISABLED is returned if there is no authorizer configured on the
+ // broker.
+ //
+ // There can be other authorization failures.
+ ErrorCode int16
+
+ // ErrorMessage is a message for an error.
+ ErrorMessage *string
+
+ // Resources are the describe resources.
+ Resources []DescribeACLsResponseResource
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DescribeACLsResponse) Key() int16 { return 29 }
+func (*DescribeACLsResponse) MaxVersion() int16 { return 3 }
+func (v *DescribeACLsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeACLsResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeACLsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *DescribeACLsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *DescribeACLsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *DescribeACLsResponse) RequestKind() Request { return &DescribeACLsRequest{Version: v.Version} }
+
+func (v *DescribeACLsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Resources
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.ResourcePatternType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ACLs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Principal
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Operation
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.PermissionType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeACLsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeACLsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeACLsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.Resources
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeACLsResponseResource, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var t ACLResourceType
+ {
+ v := b.Int8()
+ t = ACLResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ if version >= 1 {
+ var t ACLResourcePatternType
+ {
+ v := b.Int8()
+ t = ACLResourcePatternType(v)
+ }
+ v := t
+ s.ResourcePatternType = v
+ }
+ {
+ v := s.ACLs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeACLsResponseResourceACL, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Principal = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ var t ACLOperation
+ {
+ v := b.Int8()
+ t = ACLOperation(v)
+ }
+ v := t
+ s.Operation = v
+ }
+ {
+ var t ACLPermissionType
+ {
+ v := b.Int8()
+ t = ACLPermissionType(v)
+ }
+ v := t
+ s.PermissionType = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.ACLs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Resources = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeACLsResponse returns a pointer to a default DescribeACLsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeACLsResponse() *DescribeACLsResponse {
+ var v DescribeACLsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeACLsResponse.
+func (v *DescribeACLsResponse) Default() {
+}
+
+// NewDescribeACLsResponse returns a default DescribeACLsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeACLsResponse() DescribeACLsResponse {
+ var v DescribeACLsResponse
+ v.Default()
+ return v
+}
+
+type CreateACLsRequestCreation struct {
+ // ResourceType is the type of resource this acl entry will be on.
+ // It is invalid to use UNKNOWN or ANY.
+ ResourceType ACLResourceType
+
+ // ResourceName is the name of the resource this acl entry will be on.
+ // For CLUSTER, this must be "kafka-cluster".
+ ResourceName string
+
+ // ResourcePatternType is the pattern type to use for the resource name.
+ // This cannot be UNKNOWN or MATCH (i.e. this must be LITERAL or PREFIXED).
+ // The default for pre-Kafka 2.0.0 is effectively LITERAL.
+ //
+ // This field has a default of 3.
+ ResourcePatternType ACLResourcePatternType // v1+
+
+ // Principal is the user to apply this acl for. With the Kafka simple
+ // authorizer, this must begin with "User:".
+ Principal string
+
+ // Host is the host address to use for this acl. Each host to allow
+ // the principal access from must be specified as a new creation. KIP-252
+ // might solve this someday. The special wildcard host "*" allows all hosts.
+ Host string
+
+ // Operation is the operation this acl is for. This must not be UNKNOWN or
+ // ANY.
+ Operation ACLOperation
+
+ // PermissionType is the permission of this acl. This must be either ALLOW
+ // or DENY.
+ PermissionType ACLPermissionType
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateACLsRequestCreation.
+func (v *CreateACLsRequestCreation) Default() {
+ v.ResourcePatternType = 3
+}
+
+// NewCreateACLsRequestCreation returns a default CreateACLsRequestCreation
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateACLsRequestCreation() CreateACLsRequestCreation {
+ var v CreateACLsRequestCreation
+ v.Default()
+ return v
+}
+
+// CreateACLsRequest creates acls. Creating acls can be done as a batch; each
+// "creation" will be an acl entry.
+//
+// See the DescribeACLsRequest documentation for more descriptions of what
+// valid values for the fields in this request are.
+type CreateACLsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ Creations []CreateACLsRequestCreation
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*CreateACLsRequest) Key() int16 { return 30 }
+func (*CreateACLsRequest) MaxVersion() int16 { return 3 }
+func (v *CreateACLsRequest) SetVersion(version int16) { v.Version = version }
+func (v *CreateACLsRequest) GetVersion() int16 { return v.Version }
+func (v *CreateACLsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *CreateACLsRequest) ResponseKind() Response {
+ r := &CreateACLsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *CreateACLsRequest) RequestWith(ctx context.Context, r Requestor) (*CreateACLsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*CreateACLsResponse)
+ return resp, err
+}
+
+func (v *CreateACLsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Creations
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.ResourcePatternType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.Principal
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Operation
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.PermissionType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreateACLsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreateACLsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreateACLsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Creations
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateACLsRequestCreation, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var t ACLResourceType
+ {
+ v := b.Int8()
+ t = ACLResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ if version >= 1 {
+ var t ACLResourcePatternType
+ {
+ v := b.Int8()
+ t = ACLResourcePatternType(v)
+ }
+ v := t
+ s.ResourcePatternType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Principal = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ var t ACLOperation
+ {
+ v := b.Int8()
+ t = ACLOperation(v)
+ }
+ v := t
+ s.Operation = v
+ }
+ {
+ var t ACLPermissionType
+ {
+ v := b.Int8()
+ t = ACLPermissionType(v)
+ }
+ v := t
+ s.PermissionType = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Creations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreateACLsRequest returns a pointer to a default CreateACLsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreateACLsRequest() *CreateACLsRequest {
+ var v CreateACLsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateACLsRequest.
+func (v *CreateACLsRequest) Default() {
+}
+
+// NewCreateACLsRequest returns a default CreateACLsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateACLsRequest() CreateACLsRequest {
+ var v CreateACLsRequest
+ v.Default()
+ return v
+}
+
+type CreateACLsResponseResult struct {
+ // ErrorCode is an error for this particular creation (index wise).
+ ErrorCode int16
+
+ // ErrorMessage is a message for this error.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateACLsResponseResult.
+func (v *CreateACLsResponseResult) Default() {
+}
+
+// NewCreateACLsResponseResult returns a default CreateACLsResponseResult
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateACLsResponseResult() CreateACLsResponseResult {
+ var v CreateACLsResponseResult
+ v.Default()
+ return v
+}
+
+// CreateACLsResponse is a response for a CreateACLsRequest.
+type CreateACLsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Results contains responses to each creation request.
+ Results []CreateACLsResponseResult
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*CreateACLsResponse) Key() int16 { return 30 }
+func (*CreateACLsResponse) MaxVersion() int16 { return 3 }
+func (v *CreateACLsResponse) SetVersion(version int16) { v.Version = version }
+func (v *CreateACLsResponse) GetVersion() int16 { return v.Version }
+func (v *CreateACLsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *CreateACLsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *CreateACLsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *CreateACLsResponse) RequestKind() Request { return &CreateACLsRequest{Version: v.Version} }
+
+func (v *CreateACLsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Results
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreateACLsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreateACLsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreateACLsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Results
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateACLsResponseResult, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Results = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreateACLsResponse returns a pointer to a default CreateACLsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreateACLsResponse() *CreateACLsResponse {
+ var v CreateACLsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateACLsResponse.
+func (v *CreateACLsResponse) Default() {
+}
+
+// NewCreateACLsResponse returns a default CreateACLsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateACLsResponse() CreateACLsResponse {
+ var v CreateACLsResponse
+ v.Default()
+ return v
+}
+
+type DeleteACLsRequestFilter struct {
+ ResourceType ACLResourceType
+
+ ResourceName *string
+
+ // This field has a default of 3.
+ ResourcePatternType ACLResourcePatternType // v1+
+
+ Principal *string
+
+ Host *string
+
+ Operation ACLOperation
+
+ PermissionType ACLPermissionType
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteACLsRequestFilter.
+func (v *DeleteACLsRequestFilter) Default() {
+ v.ResourcePatternType = 3
+}
+
+// NewDeleteACLsRequestFilter returns a default DeleteACLsRequestFilter
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteACLsRequestFilter() DeleteACLsRequestFilter {
+ var v DeleteACLsRequestFilter
+ v.Default()
+ return v
+}
+
+// DeleteACLsRequest deletes acls. This request works on filters the same way
+// that DescribeACLsRequest does. See DescribeACLsRequest for documentation of
+// the fields.
+type DeleteACLsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Filters are filters for acls to delete.
+ Filters []DeleteACLsRequestFilter
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DeleteACLsRequest) Key() int16 { return 31 }
+func (*DeleteACLsRequest) MaxVersion() int16 { return 3 }
+func (v *DeleteACLsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DeleteACLsRequest) GetVersion() int16 { return v.Version }
+func (v *DeleteACLsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *DeleteACLsRequest) ResponseKind() Response {
+ r := &DeleteACLsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DeleteACLsRequest) RequestWith(ctx context.Context, r Requestor) (*DeleteACLsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DeleteACLsResponse)
+ return resp, err
+}
+
+func (v *DeleteACLsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Filters
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.ResourcePatternType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.Principal
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Operation
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.PermissionType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteACLsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteACLsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteACLsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Filters
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteACLsRequestFilter, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var t ACLResourceType
+ {
+ v := b.Int8()
+ t = ACLResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ResourceName = v
+ }
+ if version >= 1 {
+ var t ACLResourcePatternType
+ {
+ v := b.Int8()
+ t = ACLResourcePatternType(v)
+ }
+ v := t
+ s.ResourcePatternType = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Principal = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Host = v
+ }
+ {
+ var t ACLOperation
+ {
+ v := b.Int8()
+ t = ACLOperation(v)
+ }
+ v := t
+ s.Operation = v
+ }
+ {
+ var t ACLPermissionType
+ {
+ v := b.Int8()
+ t = ACLPermissionType(v)
+ }
+ v := t
+ s.PermissionType = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Filters = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteACLsRequest returns a pointer to a default DeleteACLsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteACLsRequest() *DeleteACLsRequest {
+ var v DeleteACLsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteACLsRequest.
+func (v *DeleteACLsRequest) Default() {
+}
+
+// NewDeleteACLsRequest returns a default DeleteACLsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteACLsRequest() DeleteACLsRequest {
+ var v DeleteACLsRequest
+ v.Default()
+ return v
+}
+
+type DeleteACLsResponseResultMatchingACL struct {
+ // ErrorCode contains an error for this individual acl for this filter.
+ ErrorCode int16
+
+ // ErrorMessage is a message for this error.
+ ErrorMessage *string
+
+ ResourceType ACLResourceType
+
+ ResourceName string
+
+ // This field has a default of 3.
+ ResourcePatternType ACLResourcePatternType // v1+
+
+ Principal string
+
+ Host string
+
+ Operation ACLOperation
+
+ PermissionType ACLPermissionType
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteACLsResponseResultMatchingACL.
+func (v *DeleteACLsResponseResultMatchingACL) Default() {
+ v.ResourcePatternType = 3
+}
+
+// NewDeleteACLsResponseResultMatchingACL returns a default DeleteACLsResponseResultMatchingACL
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteACLsResponseResultMatchingACL() DeleteACLsResponseResultMatchingACL {
+ var v DeleteACLsResponseResultMatchingACL
+ v.Default()
+ return v
+}
+
+type DeleteACLsResponseResult struct {
+ // ErrorCode is the overall error code for this individual filter.
+ ErrorCode int16
+
+ // ErrorMessage is a message for this error.
+ ErrorMessage *string
+
+ // MatchingACLs contains all acls that were matched for this filter.
+ MatchingACLs []DeleteACLsResponseResultMatchingACL
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteACLsResponseResult.
+func (v *DeleteACLsResponseResult) Default() {
+}
+
+// NewDeleteACLsResponseResult returns a default DeleteACLsResponseResult
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteACLsResponseResult() DeleteACLsResponseResult {
+ var v DeleteACLsResponseResult
+ v.Default()
+ return v
+}
+
+// DeleteACLsResponse is a response for a DeleteACLsRequest.
+type DeleteACLsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Results contains a response to each requested filter.
+ Results []DeleteACLsResponseResult
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DeleteACLsResponse) Key() int16 { return 31 }
+func (*DeleteACLsResponse) MaxVersion() int16 { return 3 }
+func (v *DeleteACLsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DeleteACLsResponse) GetVersion() int16 { return v.Version }
+func (v *DeleteACLsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *DeleteACLsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *DeleteACLsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *DeleteACLsResponse) RequestKind() Request { return &DeleteACLsRequest{Version: v.Version} }
+
+func (v *DeleteACLsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Results
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.MatchingACLs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.ResourcePatternType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.Principal
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Operation
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.PermissionType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteACLsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteACLsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteACLsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Results
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteACLsResponseResult, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.MatchingACLs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteACLsResponseResultMatchingACL, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ var t ACLResourceType
+ {
+ v := b.Int8()
+ t = ACLResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ if version >= 1 {
+ var t ACLResourcePatternType
+ {
+ v := b.Int8()
+ t = ACLResourcePatternType(v)
+ }
+ v := t
+ s.ResourcePatternType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Principal = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ var t ACLOperation
+ {
+ v := b.Int8()
+ t = ACLOperation(v)
+ }
+ v := t
+ s.Operation = v
+ }
+ {
+ var t ACLPermissionType
+ {
+ v := b.Int8()
+ t = ACLPermissionType(v)
+ }
+ v := t
+ s.PermissionType = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.MatchingACLs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Results = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteACLsResponse returns a pointer to a default DeleteACLsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteACLsResponse() *DeleteACLsResponse {
+ var v DeleteACLsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteACLsResponse.
+func (v *DeleteACLsResponse) Default() {
+}
+
+// NewDeleteACLsResponse returns a default DeleteACLsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteACLsResponse() DeleteACLsResponse {
+ var v DeleteACLsResponse
+ v.Default()
+ return v
+}
+
+type DescribeConfigsRequestResource struct {
+ // ResourceType is an enum corresponding to the type of config to describe.
+ ResourceType ConfigResourceType
+
+ // ResourceName is the name of config to describe.
+ //
+ // If the requested type is a topic, this corresponds to a topic name.
+ //
+ // If the requested type if a broker, this should either be empty or be
+ // the ID of the broker this request is issued to. If it is empty, this
+ // returns all broker configs, but only the dynamic configuration values.
+ // If a specific ID, this returns all broker config values.
+ ResourceName string
+
+ // ConfigNames is a list of config entries to return. Null requests all.
+ ConfigNames []string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeConfigsRequestResource.
+func (v *DescribeConfigsRequestResource) Default() {
+}
+
+// NewDescribeConfigsRequestResource returns a default DescribeConfigsRequestResource
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeConfigsRequestResource() DescribeConfigsRequestResource {
+ var v DescribeConfigsRequestResource
+ v.Default()
+ return v
+}
+
+// DescribeConfigsRequest issues a request to describe configs that Kafka
+// currently has. These are the key/value pairs that one uses to configure
+// brokers and topics.
+type DescribeConfigsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Resources is a list of resources to describe.
+ Resources []DescribeConfigsRequestResource
+
+ // IncludeSynonyms signifies whether to return config entry synonyms for
+ // all config entries.
+ IncludeSynonyms bool // v1+
+
+ // IncludeDocumentation signifies whether to return documentation for
+ // config entries.
+ IncludeDocumentation bool // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*DescribeConfigsRequest) Key() int16 { return 32 }
+func (*DescribeConfigsRequest) MaxVersion() int16 { return 4 }
+func (v *DescribeConfigsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeConfigsRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeConfigsRequest) IsFlexible() bool { return v.Version >= 4 }
+func (v *DescribeConfigsRequest) ResponseKind() Response {
+ r := &DescribeConfigsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeConfigsRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeConfigsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeConfigsResponse)
+ return resp, err
+}
+
+func (v *DescribeConfigsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ {
+ v := v.Resources
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ConfigNames
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 1 {
+ v := v.IncludeSynonyms
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 3 {
+ v := v.IncludeDocumentation
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeConfigsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeConfigsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeConfigsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ {
+ v := s.Resources
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeConfigsRequestResource, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var t ConfigResourceType
+ {
+ v := b.Int8()
+ t = ConfigResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ {
+ v := s.ConfigNames
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []string{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.ConfigNames = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Resources = v
+ }
+ if version >= 1 {
+ v := b.Bool()
+ s.IncludeSynonyms = v
+ }
+ if version >= 3 {
+ v := b.Bool()
+ s.IncludeDocumentation = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeConfigsRequest returns a pointer to a default DescribeConfigsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeConfigsRequest() *DescribeConfigsRequest {
+ var v DescribeConfigsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeConfigsRequest.
+func (v *DescribeConfigsRequest) Default() {
+}
+
+// NewDescribeConfigsRequest returns a default DescribeConfigsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeConfigsRequest() DescribeConfigsRequest {
+ var v DescribeConfigsRequest
+ v.Default()
+ return v
+}
+
+type DescribeConfigsResponseResourceConfigConfigSynonym struct {
+ Name string
+
+ Value *string
+
+ Source ConfigSource
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeConfigsResponseResourceConfigConfigSynonym.
+func (v *DescribeConfigsResponseResourceConfigConfigSynonym) Default() {
+}
+
+// NewDescribeConfigsResponseResourceConfigConfigSynonym returns a default DescribeConfigsResponseResourceConfigConfigSynonym
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeConfigsResponseResourceConfigConfigSynonym() DescribeConfigsResponseResourceConfigConfigSynonym {
+ var v DescribeConfigsResponseResourceConfigConfigSynonym
+ v.Default()
+ return v
+}
+
+type DescribeConfigsResponseResourceConfig struct {
+ // Name is a key this entry corresponds to (e.g. segment.bytes).
+ Name string
+
+ // Value is the value for this config key. If the key is sensitive,
+ // the value will be null.
+ Value *string
+
+ // ReadOnly signifies whether this is not a dynamic config option.
+ //
+ // Note that this field is not always correct, and you may need to check
+ // whether the Source is any dynamic enum. See franz-go#91 for more details.
+ ReadOnly bool
+
+ // IsDefault is whether this is a default config option. This has been
+ // replaced in favor of Source.
+ IsDefault bool
+
+ // Source is where this config entry is from.
+ //
+ // This field has a default of -1.
+ Source ConfigSource // v1+
+
+ // IsSensitive signifies whether this is a sensitive config key, which
+ // is either a password or an unknown type.
+ IsSensitive bool
+
+ // ConfigSynonyms contains fallback key/value pairs for this config
+ // entry, in order of preference. That is, if a config entry is both
+ // dynamically configured and has a default, the top level return will be
+ // the dynamic configuration, while its "synonym" will be the default.
+ ConfigSynonyms []DescribeConfigsResponseResourceConfigConfigSynonym // v1+
+
+ // ConfigType specifies the configuration data type.
+ ConfigType ConfigType // v3+
+
+ // Documentation is optional documentation for the config entry.
+ Documentation *string // v3+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeConfigsResponseResourceConfig.
+func (v *DescribeConfigsResponseResourceConfig) Default() {
+ v.Source = -1
+}
+
+// NewDescribeConfigsResponseResourceConfig returns a default DescribeConfigsResponseResourceConfig
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeConfigsResponseResourceConfig() DescribeConfigsResponseResourceConfig {
+ var v DescribeConfigsResponseResourceConfig
+ v.Default()
+ return v
+}
+
+type DescribeConfigsResponseResource struct {
+ // ErrorCode is the error code returned for describing configs.
+ //
+ // INVALID_REQUEST is returned if asking to descibe an invalid resource
+ // type.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if asking to describe broker
+ // configs but the client is not authorized to do so.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if asking to describe topic
+ // configs but the client is not authorized to do so.
+ //
+ // INVALID_TOPIC_EXCEPTION is returned if the requested topic was invalid.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of
+ // the requested topic.
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the describe config failed.
+ ErrorMessage *string
+
+ // ResourceType is the enum corresponding to the type of described config.
+ ResourceType ConfigResourceType
+
+ // ResourceName is the name corresponding to the describe config request.
+ ResourceName string
+
+ // Configs contains information about key/value config pairs for
+ // the requested resource.
+ Configs []DescribeConfigsResponseResourceConfig
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeConfigsResponseResource.
+func (v *DescribeConfigsResponseResource) Default() {
+}
+
+// NewDescribeConfigsResponseResource returns a default DescribeConfigsResponseResource
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeConfigsResponseResource() DescribeConfigsResponseResource {
+ var v DescribeConfigsResponseResource
+ v.Default()
+ return v
+}
+
+// DescribeConfigsResponse is returned from a DescribeConfigsRequest.
+type DescribeConfigsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 2.
+ ThrottleMillis int32
+
+ // Resources are responses for each resource in the describe config request.
+ Resources []DescribeConfigsResponseResource
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
+}
+
+func (*DescribeConfigsResponse) Key() int16 { return 32 }
+func (*DescribeConfigsResponse) MaxVersion() int16 { return 4 }
+func (v *DescribeConfigsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeConfigsResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeConfigsResponse) IsFlexible() bool { return v.Version >= 4 }
+func (v *DescribeConfigsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 2 }
+func (v *DescribeConfigsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeConfigsResponse) RequestKind() Request {
+ return &DescribeConfigsRequest{Version: v.Version}
+}
+
+func (v *DescribeConfigsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Resources
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Configs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ReadOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 0 && version <= 0 {
+ v := v.IsDefault
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 1 {
+ v := v.Source
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.IsSensitive
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 1 {
+ v := v.ConfigSynonyms
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Source
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 3 {
+ v := v.ConfigType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.Documentation
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeConfigsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeConfigsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeConfigsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 4
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Resources
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeConfigsResponseResource, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ var t ConfigResourceType
+ {
+ v := b.Int8()
+ t = ConfigResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ {
+ v := s.Configs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeConfigsResponseResourceConfig, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Value = v
+ }
+ {
+ v := b.Bool()
+ s.ReadOnly = v
+ }
+ if version >= 0 && version <= 0 {
+ v := b.Bool()
+ s.IsDefault = v
+ }
+ if version >= 1 {
+ var t ConfigSource
+ {
+ v := b.Int8()
+ t = ConfigSource(v)
+ }
+ v := t
+ s.Source = v
+ }
+ {
+ v := b.Bool()
+ s.IsSensitive = v
+ }
+ if version >= 1 {
+ v := s.ConfigSynonyms
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeConfigsResponseResourceConfigConfigSynonym, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Value = v
+ }
+ {
+ var t ConfigSource
+ {
+ v := b.Int8()
+ t = ConfigSource(v)
+ }
+ v := t
+ s.Source = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.ConfigSynonyms = v
+ }
+ if version >= 3 {
+ var t ConfigType
+ {
+ v := b.Int8()
+ t = ConfigType(v)
+ }
+ v := t
+ s.ConfigType = v
+ }
+ if version >= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Documentation = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Configs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Resources = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeConfigsResponse returns a pointer to a default DescribeConfigsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeConfigsResponse() *DescribeConfigsResponse {
+ var v DescribeConfigsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeConfigsResponse.
+func (v *DescribeConfigsResponse) Default() {
+}
+
+// NewDescribeConfigsResponse returns a default DescribeConfigsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeConfigsResponse() DescribeConfigsResponse {
+ var v DescribeConfigsResponse
+ v.Default()
+ return v
+}
+
+type AlterConfigsRequestResourceConfig struct {
+ // Name is a key to set (e.g. segment.bytes).
+ Name string
+
+ // Value is a value to set for the key (e.g. 10).
+ Value *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterConfigsRequestResourceConfig.
+func (v *AlterConfigsRequestResourceConfig) Default() {
+}
+
+// NewAlterConfigsRequestResourceConfig returns a default AlterConfigsRequestResourceConfig
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterConfigsRequestResourceConfig() AlterConfigsRequestResourceConfig {
+ var v AlterConfigsRequestResourceConfig
+ v.Default()
+ return v
+}
+
+type AlterConfigsRequestResource struct {
+ // ResourceType is an enum corresponding to the type of config to alter.
+ // The only two valid values are 2 (for topic) and 4 (for broker).
+ ResourceType ConfigResourceType
+
+ // ResourceName is the name of config to alter.
+ //
+ // If the requested type is a topic, this corresponds to a topic name.
+ //
+ // If the requested type if a broker, this should either be empty or be
+ // the ID of the broker this request is issued to. If it is empty, this
+ // updates all broker configs. If a specific ID, this updates just the
+ // broker. Using a specific ID also ensures that brokers reload config
+ // or secret files even if the file path has not changed. Lastly, password
+ // config options can only be defined on a per broker basis.
+ //
+ // If the type is broker logger, this must be a broker ID.
+ ResourceName string
+
+ // Configs contains key/value config pairs to set on the resource.
+ Configs []AlterConfigsRequestResourceConfig
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterConfigsRequestResource.
+func (v *AlterConfigsRequestResource) Default() {
+}
+
+// NewAlterConfigsRequestResource returns a default AlterConfigsRequestResource
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterConfigsRequestResource() AlterConfigsRequestResource {
+ var v AlterConfigsRequestResource
+ v.Default()
+ return v
+}
+
+// AlterConfigsRequest issues a request to alter either topic or broker
+// configs.
+//
+// Note that to alter configs, you must specify the whole config on every
+// request. All existing non-static values will be removed. This means that
+// to add one key/value to a config, you must describe the config and then
+// issue an alter request with the current config with the new key value.
+// This also means that dynamic sensitive values, which are not returned
+// in describe configs, will be lost.
+//
+// To fix this problem, the AlterConfigs request / response was deprecated
+// in Kafka 2.3.0 in favor of the new IncrementalAlterConfigs request / response.
+// See KIP-339 for more details.
+type AlterConfigsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Resources is an array of configs to alter.
+ Resources []AlterConfigsRequestResource
+
+ // ValidateOnly validates the request but does not apply it.
+ ValidateOnly bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*AlterConfigsRequest) Key() int16 { return 33 }
+func (*AlterConfigsRequest) MaxVersion() int16 { return 2 }
+func (v *AlterConfigsRequest) SetVersion(version int16) { v.Version = version }
+func (v *AlterConfigsRequest) GetVersion() int16 { return v.Version }
+func (v *AlterConfigsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *AlterConfigsRequest) ResponseKind() Response {
+ r := &AlterConfigsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AlterConfigsRequest) RequestWith(ctx context.Context, r Requestor) (*AlterConfigsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AlterConfigsResponse)
+ return resp, err
+}
+
+func (v *AlterConfigsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Resources
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Configs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.ValidateOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterConfigsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterConfigsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterConfigsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Resources
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterConfigsRequestResource, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var t ConfigResourceType
+ {
+ v := b.Int8()
+ t = ConfigResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ {
+ v := s.Configs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterConfigsRequestResourceConfig, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Value = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Configs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Resources = v
+ }
+ {
+ v := b.Bool()
+ s.ValidateOnly = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterConfigsRequest returns a pointer to a default AlterConfigsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterConfigsRequest() *AlterConfigsRequest {
+ var v AlterConfigsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterConfigsRequest.
+func (v *AlterConfigsRequest) Default() {
+}
+
+// NewAlterConfigsRequest returns a default AlterConfigsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterConfigsRequest() AlterConfigsRequest {
+ var v AlterConfigsRequest
+ v.Default()
+ return v
+}
+
+type AlterConfigsResponseResource struct {
+ // ErrorCode is the error code returned for altering configs.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if asking to alter broker
+ // configs but the client is not authorized to do so.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if asking to alter topic
+ // configs but the client is not authorized to do so.
+ //
+ // INVALID_TOPIC_EXCEPTION is returned if the requested topic was invalid.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of
+ // the requested topic.
+ //
+ // INVALID_REQUEST is returned if the requested config is invalid or if
+ // asking Kafka to alter an invalid resource.
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the alter config failed.
+ ErrorMessage *string
+
+ // ResourceType is the enum corresponding to the type of altered config.
+ ResourceType ConfigResourceType
+
+ // ResourceName is the name corresponding to the alter config request.
+ ResourceName string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterConfigsResponseResource.
+func (v *AlterConfigsResponseResource) Default() {
+}
+
+// NewAlterConfigsResponseResource returns a default AlterConfigsResponseResource
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterConfigsResponseResource() AlterConfigsResponseResource {
+ var v AlterConfigsResponseResource
+ v.Default()
+ return v
+}
+
+// AlterConfigsResponse is returned from an AlterConfigsRequest.
+type AlterConfigsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Resources are responses for each resource in the alter request.
+ Resources []AlterConfigsResponseResource
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*AlterConfigsResponse) Key() int16 { return 33 }
+func (*AlterConfigsResponse) MaxVersion() int16 { return 2 }
+func (v *AlterConfigsResponse) SetVersion(version int16) { v.Version = version }
+func (v *AlterConfigsResponse) GetVersion() int16 { return v.Version }
+func (v *AlterConfigsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *AlterConfigsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *AlterConfigsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *AlterConfigsResponse) RequestKind() Request { return &AlterConfigsRequest{Version: v.Version} }
+
+func (v *AlterConfigsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Resources
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterConfigsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterConfigsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterConfigsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Resources
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterConfigsResponseResource, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ var t ConfigResourceType
+ {
+ v := b.Int8()
+ t = ConfigResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Resources = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterConfigsResponse returns a pointer to a default AlterConfigsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterConfigsResponse() *AlterConfigsResponse {
+ var v AlterConfigsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterConfigsResponse.
+func (v *AlterConfigsResponse) Default() {
+}
+
+// NewAlterConfigsResponse returns a default AlterConfigsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterConfigsResponse() AlterConfigsResponse {
+ var v AlterConfigsResponse
+ v.Default()
+ return v
+}
+
+type AlterReplicaLogDirsRequestDirTopic struct {
+ // Topic is a topic to move.
+ Topic string
+
+ // Partitions contains partitions for the topic to move.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterReplicaLogDirsRequestDirTopic.
+func (v *AlterReplicaLogDirsRequestDirTopic) Default() {
+}
+
+// NewAlterReplicaLogDirsRequestDirTopic returns a default AlterReplicaLogDirsRequestDirTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterReplicaLogDirsRequestDirTopic() AlterReplicaLogDirsRequestDirTopic {
+ var v AlterReplicaLogDirsRequestDirTopic
+ v.Default()
+ return v
+}
+
+type AlterReplicaLogDirsRequestDir struct {
+ // Dir is an absolute path where everything listed below should
+ // end up.
+ Dir string
+
+ // Topics contains topics to move to the above log directory.
+ Topics []AlterReplicaLogDirsRequestDirTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterReplicaLogDirsRequestDir.
+func (v *AlterReplicaLogDirsRequestDir) Default() {
+}
+
+// NewAlterReplicaLogDirsRequestDir returns a default AlterReplicaLogDirsRequestDir
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterReplicaLogDirsRequestDir() AlterReplicaLogDirsRequestDir {
+ var v AlterReplicaLogDirsRequestDir
+ v.Default()
+ return v
+}
+
+// AlterReplicaLogDirsRequest requests for log directories to be moved
+// within Kafka.
+//
+// This is primarily useful for moving directories between disks.
+type AlterReplicaLogDirsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Dirs contains absolute paths of where you want things to end up.
+ Dirs []AlterReplicaLogDirsRequestDir
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*AlterReplicaLogDirsRequest) Key() int16 { return 34 }
+func (*AlterReplicaLogDirsRequest) MaxVersion() int16 { return 2 }
+func (v *AlterReplicaLogDirsRequest) SetVersion(version int16) { v.Version = version }
+func (v *AlterReplicaLogDirsRequest) GetVersion() int16 { return v.Version }
+func (v *AlterReplicaLogDirsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *AlterReplicaLogDirsRequest) ResponseKind() Response {
+ r := &AlterReplicaLogDirsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AlterReplicaLogDirsRequest) RequestWith(ctx context.Context, r Requestor) (*AlterReplicaLogDirsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AlterReplicaLogDirsResponse)
+ return resp, err
+}
+
+func (v *AlterReplicaLogDirsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Dirs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Dir
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterReplicaLogDirsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterReplicaLogDirsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterReplicaLogDirsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Dirs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterReplicaLogDirsRequestDir, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Dir = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterReplicaLogDirsRequestDirTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Dirs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterReplicaLogDirsRequest returns a pointer to a default AlterReplicaLogDirsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterReplicaLogDirsRequest() *AlterReplicaLogDirsRequest {
+ var v AlterReplicaLogDirsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterReplicaLogDirsRequest.
+func (v *AlterReplicaLogDirsRequest) Default() {
+}
+
+// NewAlterReplicaLogDirsRequest returns a default AlterReplicaLogDirsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterReplicaLogDirsRequest() AlterReplicaLogDirsRequest {
+ var v AlterReplicaLogDirsRequest
+ v.Default()
+ return v
+}
+
+type AlterReplicaLogDirsResponseTopicPartition struct {
+ // Partition is the partition this array slot corresponds to.
+ Partition int32
+
+ // CLUSTER_AUTHORIZATION_FAILED is returned if the client is not
+ // authorized to alter replica dirs.
+ //
+ // LOG_DIR_NOT_FOUND is returned when the requested log directory
+ // is not in the broker config.
+ //
+ // KAFKA_STORAGE_EXCEPTION is returned when destination directory or
+ // requested replica is offline.
+ //
+ // REPLICA_NOT_AVAILABLE is returned if the replica does not exist
+ // yet.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterReplicaLogDirsResponseTopicPartition.
+func (v *AlterReplicaLogDirsResponseTopicPartition) Default() {
+}
+
+// NewAlterReplicaLogDirsResponseTopicPartition returns a default AlterReplicaLogDirsResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterReplicaLogDirsResponseTopicPartition() AlterReplicaLogDirsResponseTopicPartition {
+ var v AlterReplicaLogDirsResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type AlterReplicaLogDirsResponseTopic struct {
+ // Topic is the topic this array slot corresponds to.
+ Topic string
+
+ // Partitions contains responses to each partition that was requested
+ // to move.
+ Partitions []AlterReplicaLogDirsResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterReplicaLogDirsResponseTopic.
+func (v *AlterReplicaLogDirsResponseTopic) Default() {
+}
+
+// NewAlterReplicaLogDirsResponseTopic returns a default AlterReplicaLogDirsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterReplicaLogDirsResponseTopic() AlterReplicaLogDirsResponseTopic {
+ var v AlterReplicaLogDirsResponseTopic
+ v.Default()
+ return v
+}
+
+// AlterReplicaLogDirsResponse is returned from an AlterReplicaLogDirsRequest.
+type AlterReplicaLogDirsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Topics contains responses to each topic that had partitions requested
+ // for moving.
+ Topics []AlterReplicaLogDirsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*AlterReplicaLogDirsResponse) Key() int16 { return 34 }
+func (*AlterReplicaLogDirsResponse) MaxVersion() int16 { return 2 }
+func (v *AlterReplicaLogDirsResponse) SetVersion(version int16) { v.Version = version }
+func (v *AlterReplicaLogDirsResponse) GetVersion() int16 { return v.Version }
+func (v *AlterReplicaLogDirsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *AlterReplicaLogDirsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 1
+}
+
+func (v *AlterReplicaLogDirsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *AlterReplicaLogDirsResponse) RequestKind() Request {
+ return &AlterReplicaLogDirsRequest{Version: v.Version}
+}
+
+func (v *AlterReplicaLogDirsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterReplicaLogDirsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterReplicaLogDirsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterReplicaLogDirsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterReplicaLogDirsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterReplicaLogDirsResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterReplicaLogDirsResponse returns a pointer to a default AlterReplicaLogDirsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterReplicaLogDirsResponse() *AlterReplicaLogDirsResponse {
+ var v AlterReplicaLogDirsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterReplicaLogDirsResponse.
+func (v *AlterReplicaLogDirsResponse) Default() {
+}
+
+// NewAlterReplicaLogDirsResponse returns a default AlterReplicaLogDirsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterReplicaLogDirsResponse() AlterReplicaLogDirsResponse {
+ var v AlterReplicaLogDirsResponse
+ v.Default()
+ return v
+}
+
+type DescribeLogDirsRequestTopic struct {
+ // Topic is a topic to describe the log dir of.
+ Topic string
+
+ // Partitions contains topic partitions to describe the log dirs of.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeLogDirsRequestTopic.
+func (v *DescribeLogDirsRequestTopic) Default() {
+}
+
+// NewDescribeLogDirsRequestTopic returns a default DescribeLogDirsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeLogDirsRequestTopic() DescribeLogDirsRequestTopic {
+ var v DescribeLogDirsRequestTopic
+ v.Default()
+ return v
+}
+
+// DescribeLogDirsRequest requests directory information for topic partitions.
+// This request was added in support of KIP-113.
+type DescribeLogDirsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Topics is an array of topics to describe the log dirs of. If this is
+ // null, the response includes all topics and all of their partitions.
+ Topics []DescribeLogDirsRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DescribeLogDirsRequest) Key() int16 { return 35 }
+func (*DescribeLogDirsRequest) MaxVersion() int16 { return 4 }
+func (v *DescribeLogDirsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeLogDirsRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeLogDirsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *DescribeLogDirsRequest) ResponseKind() Response {
+ r := &DescribeLogDirsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeLogDirsRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeLogDirsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeLogDirsResponse)
+ return resp, err
+}
+
+func (v *DescribeLogDirsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeLogDirsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeLogDirsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeLogDirsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []DescribeLogDirsRequestTopic{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeLogDirsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeLogDirsRequest returns a pointer to a default DescribeLogDirsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeLogDirsRequest() *DescribeLogDirsRequest {
+ var v DescribeLogDirsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeLogDirsRequest.
+func (v *DescribeLogDirsRequest) Default() {
+}
+
+// NewDescribeLogDirsRequest returns a default DescribeLogDirsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeLogDirsRequest() DescribeLogDirsRequest {
+ var v DescribeLogDirsRequest
+ v.Default()
+ return v
+}
+
+type DescribeLogDirsResponseDirTopicPartition struct {
+ // Partition is a partition ID.
+ Partition int32
+
+ // Size is the total size of the log sements of this partition, in bytes.
+ Size int64
+
+ // OffsetLag is how far behind the log end offset is compared to
+ // the partition's high watermark (if this is the current log for
+ // the partition) or compared to the current replica's log end
+ // offset (if this is the future log for the patition).
+ //
+ // The math is,
+ //
+ // if IsFuture, localLogEndOffset - futurelogEndOffset.
+ //
+ // otherwise, max(localHighWatermark - logEndOffset, 0).
+ OffsetLag int64
+
+ // IsFuture is true if this replica was created by an
+ // AlterReplicaLogDirsRequest and will replace the current log of the
+ // replica in the future.
+ IsFuture bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeLogDirsResponseDirTopicPartition.
+func (v *DescribeLogDirsResponseDirTopicPartition) Default() {
+}
+
+// NewDescribeLogDirsResponseDirTopicPartition returns a default DescribeLogDirsResponseDirTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeLogDirsResponseDirTopicPartition() DescribeLogDirsResponseDirTopicPartition {
+ var v DescribeLogDirsResponseDirTopicPartition
+ v.Default()
+ return v
+}
+
+type DescribeLogDirsResponseDirTopic struct {
+ // Topic is the name of a Kafka topic.
+ Topic string
+
+ // Partitions is the set of queried partitions for a topic that are
+ // within a log directory.
+ Partitions []DescribeLogDirsResponseDirTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeLogDirsResponseDirTopic.
+func (v *DescribeLogDirsResponseDirTopic) Default() {
+}
+
+// NewDescribeLogDirsResponseDirTopic returns a default DescribeLogDirsResponseDirTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeLogDirsResponseDirTopic() DescribeLogDirsResponseDirTopic {
+ var v DescribeLogDirsResponseDirTopic
+ v.Default()
+ return v
+}
+
+type DescribeLogDirsResponseDir struct {
+ // ErrorCode is the error code returned for describing log dirs.
+ //
+ // KAFKA_STORAGE_ERROR is returned if the log directory is offline.
+ ErrorCode int16
+
+ // Dir is the absolute path of a log directory.
+ Dir string
+
+ // Topics is an array of topics within a log directory.
+ Topics []DescribeLogDirsResponseDirTopic
+
+ // TotalBytes is the total size in bytes of the volume the log directory is
+ // in.
+ //
+ // This field has a default of -1.
+ TotalBytes int64 // v4+
+
+ // UsableBytes is the usable size in bytes of the volume the log directory
+ // is in.
+ //
+ // This field has a default of -1.
+ UsableBytes int64 // v4+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeLogDirsResponseDir.
+func (v *DescribeLogDirsResponseDir) Default() {
+ v.TotalBytes = -1
+ v.UsableBytes = -1
+}
+
+// NewDescribeLogDirsResponseDir returns a default DescribeLogDirsResponseDir
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeLogDirsResponseDir() DescribeLogDirsResponseDir {
+ var v DescribeLogDirsResponseDir
+ v.Default()
+ return v
+}
+
+// DescribeLogDirsResponse is returned from a DescribeLogDirsRequest.
+type DescribeLogDirsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // The error code, or 0 if there was no error.
+ ErrorCode int16 // v3+
+
+ // Dirs pairs log directories with the topics and partitions that are
+ // stored in those directores.
+ Dirs []DescribeLogDirsResponseDir
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DescribeLogDirsResponse) Key() int16 { return 35 }
+func (*DescribeLogDirsResponse) MaxVersion() int16 { return 4 }
+func (v *DescribeLogDirsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeLogDirsResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeLogDirsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *DescribeLogDirsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *DescribeLogDirsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeLogDirsResponse) RequestKind() Request {
+ return &DescribeLogDirsRequest{Version: v.Version}
+}
+
+func (v *DescribeLogDirsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 3 {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Dirs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Dir
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Size
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.OffsetLag
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.IsFuture
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 4 {
+ v := v.TotalBytes
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 4 {
+ v := v.UsableBytes
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeLogDirsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeLogDirsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeLogDirsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if version >= 3 {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Dirs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeLogDirsResponseDir, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Dir = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeLogDirsResponseDirTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeLogDirsResponseDirTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int64()
+ s.Size = v
+ }
+ {
+ v := b.Int64()
+ s.OffsetLag = v
+ }
+ {
+ v := b.Bool()
+ s.IsFuture = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if version >= 4 {
+ v := b.Int64()
+ s.TotalBytes = v
+ }
+ if version >= 4 {
+ v := b.Int64()
+ s.UsableBytes = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Dirs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeLogDirsResponse returns a pointer to a default DescribeLogDirsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeLogDirsResponse() *DescribeLogDirsResponse {
+ var v DescribeLogDirsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeLogDirsResponse.
+func (v *DescribeLogDirsResponse) Default() {
+}
+
+// NewDescribeLogDirsResponse returns a default DescribeLogDirsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeLogDirsResponse() DescribeLogDirsResponse {
+ var v DescribeLogDirsResponse
+ v.Default()
+ return v
+}
+
+// SASLAuthenticate continues a sasl authentication flow. Prior to Kafka 1.0.0,
+// authenticating with sasl involved sending raw blobs of data back and forth.
+// After, those blobs are wrapped in a SASLAuthenticateRequest The benefit of
+// this wrapping is that Kafka can indicate errors in the response, rather than
+// just closing the connection. Additionally, the response allows for further
+// extension fields.
+type SASLAuthenticateRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // SASLAuthBytes contains bytes for a SASL client request.
+ SASLAuthBytes []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*SASLAuthenticateRequest) Key() int16 { return 36 }
+func (*SASLAuthenticateRequest) MaxVersion() int16 { return 2 }
+func (v *SASLAuthenticateRequest) SetVersion(version int16) { v.Version = version }
+func (v *SASLAuthenticateRequest) GetVersion() int16 { return v.Version }
+func (v *SASLAuthenticateRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *SASLAuthenticateRequest) ResponseKind() Response {
+ r := &SASLAuthenticateResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *SASLAuthenticateRequest) RequestWith(ctx context.Context, r Requestor) (*SASLAuthenticateResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*SASLAuthenticateResponse)
+ return resp, err
+}
+
+func (v *SASLAuthenticateRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.SASLAuthBytes
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *SASLAuthenticateRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *SASLAuthenticateRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *SASLAuthenticateRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.SASLAuthBytes = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrSASLAuthenticateRequest returns a pointer to a default SASLAuthenticateRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrSASLAuthenticateRequest() *SASLAuthenticateRequest {
+ var v SASLAuthenticateRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to SASLAuthenticateRequest.
+func (v *SASLAuthenticateRequest) Default() {
+}
+
+// NewSASLAuthenticateRequest returns a default SASLAuthenticateRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewSASLAuthenticateRequest() SASLAuthenticateRequest {
+ var v SASLAuthenticateRequest
+ v.Default()
+ return v
+}
+
+// SASLAuthenticateResponse is returned for a SASLAuthenticateRequest.
+type SASLAuthenticateResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is a potential error.
+ ErrorCode int16
+
+ // ErrorMessage can contain a message for an error.
+ ErrorMessage *string
+
+ // SASLAuthBytes is the server challenge continuing SASL flow.
+ SASLAuthBytes []byte
+
+ // SessionLifetimeMillis, added in Kafka 2.2.0, is how long the SASL
+ // authentication is valid for. This timeout is only enforced if the request
+ // was v1. After this timeout, Kafka expects the next bytes on the wire to
+ // begin reauthentication. Otherwise, Kafka closes the connection.
+ SessionLifetimeMillis int64 // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*SASLAuthenticateResponse) Key() int16 { return 36 }
+func (*SASLAuthenticateResponse) MaxVersion() int16 { return 2 }
+func (v *SASLAuthenticateResponse) SetVersion(version int16) { v.Version = version }
+func (v *SASLAuthenticateResponse) GetVersion() int16 { return v.Version }
+func (v *SASLAuthenticateResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *SASLAuthenticateResponse) RequestKind() Request {
+ return &SASLAuthenticateRequest{Version: v.Version}
+}
+
+func (v *SASLAuthenticateResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.SASLAuthBytes
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.SessionLifetimeMillis
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *SASLAuthenticateResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *SASLAuthenticateResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *SASLAuthenticateResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.SASLAuthBytes = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.SessionLifetimeMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrSASLAuthenticateResponse returns a pointer to a default SASLAuthenticateResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrSASLAuthenticateResponse() *SASLAuthenticateResponse {
+ var v SASLAuthenticateResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to SASLAuthenticateResponse.
+func (v *SASLAuthenticateResponse) Default() {
+}
+
+// NewSASLAuthenticateResponse returns a default SASLAuthenticateResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewSASLAuthenticateResponse() SASLAuthenticateResponse {
+ var v SASLAuthenticateResponse
+ v.Default()
+ return v
+}
+
+type CreatePartitionsRequestTopicAssignment struct {
+ // Replicas are replicas to assign a new partition to.
+ Replicas []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreatePartitionsRequestTopicAssignment.
+func (v *CreatePartitionsRequestTopicAssignment) Default() {
+}
+
+// NewCreatePartitionsRequestTopicAssignment returns a default CreatePartitionsRequestTopicAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreatePartitionsRequestTopicAssignment() CreatePartitionsRequestTopicAssignment {
+ var v CreatePartitionsRequestTopicAssignment
+ v.Default()
+ return v
+}
+
+type CreatePartitionsRequestTopic struct {
+ // Topic is a topic for which to create additional partitions for.
+ Topic string
+
+ // Count is the final count of partitions this topic must have after this
+ // request. This must be greater than the current number of partitions.
+ Count int32
+
+ // Assignment is a two-level array, the first corresponding to new
+ // partitions, the second contining broker IDs for where new partition
+ // replicas should live.
+ //
+ // The second level, the replicas, cannot have duplicate broker IDs (i.e.
+ // you cannot replicate a single partition twice on the same broker).
+ // Additionally, the number of replicas must match the current number of
+ // replicas per partition on the topic.
+ //
+ // The first level's length must be equal to the delta of Count and the
+ // current number of partitions.
+ Assignment []CreatePartitionsRequestTopicAssignment
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreatePartitionsRequestTopic.
+func (v *CreatePartitionsRequestTopic) Default() {
+}
+
+// NewCreatePartitionsRequestTopic returns a default CreatePartitionsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreatePartitionsRequestTopic() CreatePartitionsRequestTopic {
+ var v CreatePartitionsRequestTopic
+ v.Default()
+ return v
+}
+
+// CreatePartitionsRequest creates additional partitions for topics.
+type CreatePartitionsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Topics contains topics to create partitions for.
+ Topics []CreatePartitionsRequestTopic
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 15000.
+ TimeoutMillis int32
+
+ // ValidateOnly is makes this request a dry-run; everything is validated but
+ // no partitions are actually created.
+ ValidateOnly bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*CreatePartitionsRequest) Key() int16 { return 37 }
+func (*CreatePartitionsRequest) MaxVersion() int16 { return 3 }
+func (v *CreatePartitionsRequest) SetVersion(version int16) { v.Version = version }
+func (v *CreatePartitionsRequest) GetVersion() int16 { return v.Version }
+func (v *CreatePartitionsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *CreatePartitionsRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *CreatePartitionsRequest) SetTimeout(timeoutMillis int32) { v.TimeoutMillis = timeoutMillis }
+func (v *CreatePartitionsRequest) IsAdminRequest() {}
+func (v *CreatePartitionsRequest) ResponseKind() Response {
+ r := &CreatePartitionsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *CreatePartitionsRequest) RequestWith(ctx context.Context, r Requestor) (*CreatePartitionsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*CreatePartitionsResponse)
+ return resp, err
+}
+
+func (v *CreatePartitionsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Count
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Assignment
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ValidateOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreatePartitionsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreatePartitionsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreatePartitionsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreatePartitionsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int32()
+ s.Count = v
+ }
+ {
+ v := s.Assignment
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []CreatePartitionsRequestTopicAssignment{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreatePartitionsRequestTopicAssignment, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Assignment = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ {
+ v := b.Bool()
+ s.ValidateOnly = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreatePartitionsRequest returns a pointer to a default CreatePartitionsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreatePartitionsRequest() *CreatePartitionsRequest {
+ var v CreatePartitionsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreatePartitionsRequest.
+func (v *CreatePartitionsRequest) Default() {
+ v.TimeoutMillis = 15000
+}
+
+// NewCreatePartitionsRequest returns a default CreatePartitionsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreatePartitionsRequest() CreatePartitionsRequest {
+ var v CreatePartitionsRequest
+ v.Default()
+ return v
+}
+
+type CreatePartitionsResponseTopic struct {
+ // Topic is the topic that partitions were requested to be made for.
+ Topic string
+
+ // ErrorCode is the error code returned for each topic in the request.
+ //
+ // NOT_CONTROLLER is returned if the request was not issued to a Kafka
+ // controller.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to create partitions for a topic.
+ //
+ // INVALID_REQUEST is returned for duplicate topics in the request.
+ //
+ // INVALID_TOPIC_EXCEPTION is returned if the topic is queued for deletion.
+ //
+ // REASSIGNMENT_IN_PROGRESS is returned if the request was issued while
+ // partitions were being reassigned.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of
+ // the topic for which to create partitions.
+ //
+ // INVALID_PARTITIONS is returned if the request would drop the total
+ // count of partitions down, or if the request would not add any more
+ // partitions, or if the request uses unknown brokers, or if the request
+ // assigns a different number of brokers than the increase in the
+ // partition count.
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the topic creation failed.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreatePartitionsResponseTopic.
+func (v *CreatePartitionsResponseTopic) Default() {
+}
+
+// NewCreatePartitionsResponseTopic returns a default CreatePartitionsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreatePartitionsResponseTopic() CreatePartitionsResponseTopic {
+ var v CreatePartitionsResponseTopic
+ v.Default()
+ return v
+}
+
+// CreatePartitionsResponse is returned from a CreatePartitionsRequest.
+type CreatePartitionsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Topics is a response to each topic in the creation request.
+ Topics []CreatePartitionsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*CreatePartitionsResponse) Key() int16 { return 37 }
+func (*CreatePartitionsResponse) MaxVersion() int16 { return 3 }
+func (v *CreatePartitionsResponse) SetVersion(version int16) { v.Version = version }
+func (v *CreatePartitionsResponse) GetVersion() int16 { return v.Version }
+func (v *CreatePartitionsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *CreatePartitionsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *CreatePartitionsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *CreatePartitionsResponse) RequestKind() Request {
+ return &CreatePartitionsRequest{Version: v.Version}
+}
+
+func (v *CreatePartitionsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreatePartitionsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreatePartitionsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreatePartitionsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreatePartitionsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreatePartitionsResponse returns a pointer to a default CreatePartitionsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreatePartitionsResponse() *CreatePartitionsResponse {
+ var v CreatePartitionsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreatePartitionsResponse.
+func (v *CreatePartitionsResponse) Default() {
+}
+
+// NewCreatePartitionsResponse returns a default CreatePartitionsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreatePartitionsResponse() CreatePartitionsResponse {
+ var v CreatePartitionsResponse
+ v.Default()
+ return v
+}
+
+type CreateDelegationTokenRequestRenewer struct {
+ // PrincipalType is the "type" this principal is. This must be "User".
+ PrincipalType string
+
+ // PrincipalName is the user name allowed to renew the returned token.
+ PrincipalName string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateDelegationTokenRequestRenewer.
+func (v *CreateDelegationTokenRequestRenewer) Default() {
+}
+
+// NewCreateDelegationTokenRequestRenewer returns a default CreateDelegationTokenRequestRenewer
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateDelegationTokenRequestRenewer() CreateDelegationTokenRequestRenewer {
+ var v CreateDelegationTokenRequestRenewer
+ v.Default()
+ return v
+}
+
+// CreateDelegationTokenRequest issues a request to create a delegation token.
+//
+// Creating delegation tokens allows for an (ideally) quicker and easier method
+// of enabling authorization for a wide array of clients. Rather than having to
+// manage many passwords external to Kafka, you only need to manage a few
+// accounts and use those to create delegation tokens per client.
+//
+// Note that delegation tokens inherit the same ACLs as the user creating the
+// token. Thus, if you want to properly scope ACLs, you should not create
+// delegation tokens with admin accounts.
+//
+// Delegation tokens live inside of Kafka and use SASL SCRAM-SHA-256 for
+// authorization.
+type CreateDelegationTokenRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The principal type of the owner of the token. If null, this defaults
+ // to the token request principal.
+ OwnerPrincipalType *string // v3+
+
+ // Principal name of the owner of the token. If null, this defaults to
+ // the token request principal.
+ OwnerPrincipalName *string // v3+
+
+ // Renewers is a list of who can renew this delegation token. If empty, the
+ // default is the principal (user) who created the token.
+ Renewers []CreateDelegationTokenRequestRenewer
+
+ // MaxLifetimeMillis is how long this delegation token will be valid for.
+ // If -1, the default will be the server's delegation.token.max.lifetime.ms.
+ MaxLifetimeMillis int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*CreateDelegationTokenRequest) Key() int16 { return 38 }
+func (*CreateDelegationTokenRequest) MaxVersion() int16 { return 3 }
+func (v *CreateDelegationTokenRequest) SetVersion(version int16) { v.Version = version }
+func (v *CreateDelegationTokenRequest) GetVersion() int16 { return v.Version }
+func (v *CreateDelegationTokenRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *CreateDelegationTokenRequest) ResponseKind() Response {
+ r := &CreateDelegationTokenResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *CreateDelegationTokenRequest) RequestWith(ctx context.Context, r Requestor) (*CreateDelegationTokenResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*CreateDelegationTokenResponse)
+ return resp, err
+}
+
+func (v *CreateDelegationTokenRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ if version >= 3 {
+ v := v.OwnerPrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.OwnerPrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Renewers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.PrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.PrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.MaxLifetimeMillis
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreateDelegationTokenRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreateDelegationTokenRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreateDelegationTokenRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ if version >= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.OwnerPrincipalType = v
+ }
+ if version >= 3 {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.OwnerPrincipalName = v
+ }
+ {
+ v := s.Renewers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]CreateDelegationTokenRequestRenewer, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalName = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Renewers = v
+ }
+ {
+ v := b.Int64()
+ s.MaxLifetimeMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreateDelegationTokenRequest returns a pointer to a default CreateDelegationTokenRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreateDelegationTokenRequest() *CreateDelegationTokenRequest {
+ var v CreateDelegationTokenRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateDelegationTokenRequest.
+func (v *CreateDelegationTokenRequest) Default() {
+}
+
+// NewCreateDelegationTokenRequest returns a default CreateDelegationTokenRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateDelegationTokenRequest() CreateDelegationTokenRequest {
+ var v CreateDelegationTokenRequest
+ v.Default()
+ return v
+}
+
+// CreateDelegationTokenResponse is a response to a CreateDelegationTokenRequest.
+type CreateDelegationTokenResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is any error that caused the request to fail.
+ ErrorCode int16
+
+ // PrincipalType is the type of principal that granted this delegation token.
+ // This will always be "User" with the simple authorizer.
+ PrincipalType string
+
+ // PrincipalName is the name of the principal that granted this delegation
+ // token.
+ PrincipalName string
+
+ // The principal type of the requester of the token.
+ TokenRequesterPrincipalType string // v3+
+
+ // The principal name of the requester token.
+ TokenRequesterPrincipalName string // v3+
+
+ // IssueTimestamp is the millisecond timestamp this delegation token was
+ // issued.
+ IssueTimestamp int64
+
+ // ExpiryTimestamp is the millisecond timestamp this token will expire. The
+ // token can be renewed up to MaxTimestamp, past which point, it will be
+ // invalid. The Kafka default is 24h.
+ ExpiryTimestamp int64
+
+ // MaxTimestamp is the millisecond timestamp past which this token cannot
+ // be renewed.
+ MaxTimestamp int64
+
+ // TokenID is the ID of this token; this will be used as the username for
+ // scram authentication.
+ TokenID string
+
+ // HMAC is the password of this token; this will be used as the password for
+ // scram authentication.
+ HMAC []byte
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*CreateDelegationTokenResponse) Key() int16 { return 38 }
+func (*CreateDelegationTokenResponse) MaxVersion() int16 { return 3 }
+func (v *CreateDelegationTokenResponse) SetVersion(version int16) { v.Version = version }
+func (v *CreateDelegationTokenResponse) GetVersion() int16 { return v.Version }
+func (v *CreateDelegationTokenResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *CreateDelegationTokenResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 1
+}
+
+func (v *CreateDelegationTokenResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *CreateDelegationTokenResponse) RequestKind() Request {
+ return &CreateDelegationTokenRequest{Version: v.Version}
+}
+
+func (v *CreateDelegationTokenResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.PrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.PrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.TokenRequesterPrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.TokenRequesterPrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.IssueTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ExpiryTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.MaxTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.TokenID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.HMAC
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *CreateDelegationTokenResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *CreateDelegationTokenResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *CreateDelegationTokenResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalName = v
+ }
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TokenRequesterPrincipalType = v
+ }
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TokenRequesterPrincipalName = v
+ }
+ {
+ v := b.Int64()
+ s.IssueTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.ExpiryTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.MaxTimestamp = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TokenID = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.HMAC = v
+ }
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrCreateDelegationTokenResponse returns a pointer to a default CreateDelegationTokenResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrCreateDelegationTokenResponse() *CreateDelegationTokenResponse {
+ var v CreateDelegationTokenResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to CreateDelegationTokenResponse.
+func (v *CreateDelegationTokenResponse) Default() {
+}
+
+// NewCreateDelegationTokenResponse returns a default CreateDelegationTokenResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewCreateDelegationTokenResponse() CreateDelegationTokenResponse {
+ var v CreateDelegationTokenResponse
+ v.Default()
+ return v
+}
+
+// RenewDelegationTokenRequest is a request to renew a delegation token that
+// has not yet hit its max timestamp. Note that a client using a token cannot
+// renew its own token.
+type RenewDelegationTokenRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // HMAC is the HMAC of the token to be renewed.
+ HMAC []byte
+
+ // RenewTimeMillis is how long to renew the token for. If -1, Kafka uses its
+ // delegation.token.max.lifetime.ms.
+ RenewTimeMillis int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*RenewDelegationTokenRequest) Key() int16 { return 39 }
+func (*RenewDelegationTokenRequest) MaxVersion() int16 { return 2 }
+func (v *RenewDelegationTokenRequest) SetVersion(version int16) { v.Version = version }
+func (v *RenewDelegationTokenRequest) GetVersion() int16 { return v.Version }
+func (v *RenewDelegationTokenRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *RenewDelegationTokenRequest) ResponseKind() Response {
+ r := &RenewDelegationTokenResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *RenewDelegationTokenRequest) RequestWith(ctx context.Context, r Requestor) (*RenewDelegationTokenResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*RenewDelegationTokenResponse)
+ return resp, err
+}
+
+func (v *RenewDelegationTokenRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.HMAC
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ {
+ v := v.RenewTimeMillis
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *RenewDelegationTokenRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *RenewDelegationTokenRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *RenewDelegationTokenRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.HMAC = v
+ }
+ {
+ v := b.Int64()
+ s.RenewTimeMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrRenewDelegationTokenRequest returns a pointer to a default RenewDelegationTokenRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrRenewDelegationTokenRequest() *RenewDelegationTokenRequest {
+ var v RenewDelegationTokenRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to RenewDelegationTokenRequest.
+func (v *RenewDelegationTokenRequest) Default() {
+}
+
+// NewRenewDelegationTokenRequest returns a default RenewDelegationTokenRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewRenewDelegationTokenRequest() RenewDelegationTokenRequest {
+ var v RenewDelegationTokenRequest
+ v.Default()
+ return v
+}
+
+// RenewDelegationTokenResponse is a response to a RenewDelegationTokenRequest.
+type RenewDelegationTokenResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is any error that caused the request to fail.
+ ErrorCode int16
+
+ // ExpiryTimestamp is the millisecond timestamp this token will expire. The
+ // token can be renewed up to MaxTimestamp, past which point, it will be
+ // invalid. The Kafka default is 24h.
+ ExpiryTimestamp int64
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*RenewDelegationTokenResponse) Key() int16 { return 39 }
+func (*RenewDelegationTokenResponse) MaxVersion() int16 { return 2 }
+func (v *RenewDelegationTokenResponse) SetVersion(version int16) { v.Version = version }
+func (v *RenewDelegationTokenResponse) GetVersion() int16 { return v.Version }
+func (v *RenewDelegationTokenResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *RenewDelegationTokenResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 1
+}
+
+func (v *RenewDelegationTokenResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *RenewDelegationTokenResponse) RequestKind() Request {
+ return &RenewDelegationTokenRequest{Version: v.Version}
+}
+
+func (v *RenewDelegationTokenResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ExpiryTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *RenewDelegationTokenResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *RenewDelegationTokenResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *RenewDelegationTokenResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int64()
+ s.ExpiryTimestamp = v
+ }
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrRenewDelegationTokenResponse returns a pointer to a default RenewDelegationTokenResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrRenewDelegationTokenResponse() *RenewDelegationTokenResponse {
+ var v RenewDelegationTokenResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to RenewDelegationTokenResponse.
+func (v *RenewDelegationTokenResponse) Default() {
+}
+
+// NewRenewDelegationTokenResponse returns a default RenewDelegationTokenResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewRenewDelegationTokenResponse() RenewDelegationTokenResponse {
+ var v RenewDelegationTokenResponse
+ v.Default()
+ return v
+}
+
+// ExpireDelegationTokenRequest is a request to change the expiry timestamp
+// of a delegation token. Note that a client using a token cannot expire its
+// own token.
+type ExpireDelegationTokenRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // HMAC is the HMAC of the token to change the expiry timestamp of.
+ HMAC []byte
+
+ // ExpiryPeriodMillis changes the delegation token's expiry timestamp to
+ // now + expiry time millis. This can be used to force tokens to expire
+ // quickly, or to allow tokens a grace period before expiry. You cannot
+ // add enough expiry that exceeds the original max timestamp.
+ ExpiryPeriodMillis int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*ExpireDelegationTokenRequest) Key() int16 { return 40 }
+func (*ExpireDelegationTokenRequest) MaxVersion() int16 { return 2 }
+func (v *ExpireDelegationTokenRequest) SetVersion(version int16) { v.Version = version }
+func (v *ExpireDelegationTokenRequest) GetVersion() int16 { return v.Version }
+func (v *ExpireDelegationTokenRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *ExpireDelegationTokenRequest) ResponseKind() Response {
+ r := &ExpireDelegationTokenResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ExpireDelegationTokenRequest) RequestWith(ctx context.Context, r Requestor) (*ExpireDelegationTokenResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ExpireDelegationTokenResponse)
+ return resp, err
+}
+
+func (v *ExpireDelegationTokenRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.HMAC
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ {
+ v := v.ExpiryPeriodMillis
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ExpireDelegationTokenRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ExpireDelegationTokenRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ExpireDelegationTokenRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.HMAC = v
+ }
+ {
+ v := b.Int64()
+ s.ExpiryPeriodMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrExpireDelegationTokenRequest returns a pointer to a default ExpireDelegationTokenRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrExpireDelegationTokenRequest() *ExpireDelegationTokenRequest {
+ var v ExpireDelegationTokenRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ExpireDelegationTokenRequest.
+func (v *ExpireDelegationTokenRequest) Default() {
+}
+
+// NewExpireDelegationTokenRequest returns a default ExpireDelegationTokenRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewExpireDelegationTokenRequest() ExpireDelegationTokenRequest {
+ var v ExpireDelegationTokenRequest
+ v.Default()
+ return v
+}
+
+// ExpireDelegationTokenResponse is a response to an ExpireDelegationTokenRequest.
+type ExpireDelegationTokenResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is any error that caused the request to fail.
+ ErrorCode int16
+
+ // ExpiryTimestamp is the new timestamp at which the delegation token will
+ // expire.
+ ExpiryTimestamp int64
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*ExpireDelegationTokenResponse) Key() int16 { return 40 }
+func (*ExpireDelegationTokenResponse) MaxVersion() int16 { return 2 }
+func (v *ExpireDelegationTokenResponse) SetVersion(version int16) { v.Version = version }
+func (v *ExpireDelegationTokenResponse) GetVersion() int16 { return v.Version }
+func (v *ExpireDelegationTokenResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *ExpireDelegationTokenResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 1
+}
+
+func (v *ExpireDelegationTokenResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *ExpireDelegationTokenResponse) RequestKind() Request {
+ return &ExpireDelegationTokenRequest{Version: v.Version}
+}
+
+func (v *ExpireDelegationTokenResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ExpiryTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ExpireDelegationTokenResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ExpireDelegationTokenResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ExpireDelegationTokenResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int64()
+ s.ExpiryTimestamp = v
+ }
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrExpireDelegationTokenResponse returns a pointer to a default ExpireDelegationTokenResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrExpireDelegationTokenResponse() *ExpireDelegationTokenResponse {
+ var v ExpireDelegationTokenResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ExpireDelegationTokenResponse.
+func (v *ExpireDelegationTokenResponse) Default() {
+}
+
+// NewExpireDelegationTokenResponse returns a default ExpireDelegationTokenResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewExpireDelegationTokenResponse() ExpireDelegationTokenResponse {
+ var v ExpireDelegationTokenResponse
+ v.Default()
+ return v
+}
+
+type DescribeDelegationTokenRequestOwner struct {
+ // PrincipalType is a type to match to describe delegation tokens created
+ // with this principal. This would be "User" with the simple authorizer.
+ PrincipalType string
+
+ // PrincipalName is the name to match to describe delegation tokens created
+ // with this principal.
+ PrincipalName string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeDelegationTokenRequestOwner.
+func (v *DescribeDelegationTokenRequestOwner) Default() {
+}
+
+// NewDescribeDelegationTokenRequestOwner returns a default DescribeDelegationTokenRequestOwner
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeDelegationTokenRequestOwner() DescribeDelegationTokenRequestOwner {
+ var v DescribeDelegationTokenRequestOwner
+ v.Default()
+ return v
+}
+
+// DescribeDelegationTokenRequest is a request to describe delegation tokens.
+type DescribeDelegationTokenRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Owners contains owners to describe delegation tokens for, or null for all.
+ // If non-null, only tokens created from a matching principal type, name
+ // combination are printed.
+ Owners []DescribeDelegationTokenRequestOwner
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DescribeDelegationTokenRequest) Key() int16 { return 41 }
+func (*DescribeDelegationTokenRequest) MaxVersion() int16 { return 3 }
+func (v *DescribeDelegationTokenRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeDelegationTokenRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeDelegationTokenRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *DescribeDelegationTokenRequest) ResponseKind() Response {
+ r := &DescribeDelegationTokenResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeDelegationTokenRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeDelegationTokenResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeDelegationTokenResponse)
+ return resp, err
+}
+
+func (v *DescribeDelegationTokenRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Owners
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.PrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.PrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeDelegationTokenRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeDelegationTokenRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeDelegationTokenRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Owners
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []DescribeDelegationTokenRequestOwner{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeDelegationTokenRequestOwner, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalName = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Owners = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeDelegationTokenRequest returns a pointer to a default DescribeDelegationTokenRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeDelegationTokenRequest() *DescribeDelegationTokenRequest {
+ var v DescribeDelegationTokenRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeDelegationTokenRequest.
+func (v *DescribeDelegationTokenRequest) Default() {
+}
+
+// NewDescribeDelegationTokenRequest returns a default DescribeDelegationTokenRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeDelegationTokenRequest() DescribeDelegationTokenRequest {
+ var v DescribeDelegationTokenRequest
+ v.Default()
+ return v
+}
+
+type DescribeDelegationTokenResponseTokenDetailRenewer struct {
+ PrincipalType string
+
+ PrincipalName string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeDelegationTokenResponseTokenDetailRenewer.
+func (v *DescribeDelegationTokenResponseTokenDetailRenewer) Default() {
+}
+
+// NewDescribeDelegationTokenResponseTokenDetailRenewer returns a default DescribeDelegationTokenResponseTokenDetailRenewer
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeDelegationTokenResponseTokenDetailRenewer() DescribeDelegationTokenResponseTokenDetailRenewer {
+ var v DescribeDelegationTokenResponseTokenDetailRenewer
+ v.Default()
+ return v
+}
+
+type DescribeDelegationTokenResponseTokenDetail struct {
+ // PrincipalType is the principal type of who created this token.
+ PrincipalType string
+
+ // PrincipalName is the principal name of who created this token.
+ PrincipalName string
+
+ // The principal type of the requester of the token.
+ TokenRequesterPrincipalType string // v3+
+
+ // The principal name of the requester token.
+ TokenRequesterPrincipalName string // v3+
+
+ // IssueTimestamp is the millisecond timestamp of when this token was issued.
+ IssueTimestamp int64
+
+ // ExpiryTimestamp is the millisecond timestamp of when this token will expire.
+ ExpiryTimestamp int64
+
+ // MaxTimestamp is the millisecond timestamp past which whis token cannot
+ // be renewed.
+ MaxTimestamp int64
+
+ // TokenID is the ID (scram username) of this token.
+ TokenID string
+
+ // HMAC is the password of this token.
+ HMAC []byte
+
+ // Renewers is a list of users that can renew this token.
+ Renewers []DescribeDelegationTokenResponseTokenDetailRenewer
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeDelegationTokenResponseTokenDetail.
+func (v *DescribeDelegationTokenResponseTokenDetail) Default() {
+}
+
+// NewDescribeDelegationTokenResponseTokenDetail returns a default DescribeDelegationTokenResponseTokenDetail
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeDelegationTokenResponseTokenDetail() DescribeDelegationTokenResponseTokenDetail {
+ var v DescribeDelegationTokenResponseTokenDetail
+ v.Default()
+ return v
+}
+
+// DescribeDelegationTokenResponsee is a response to a DescribeDelegationTokenRequest.
+type DescribeDelegationTokenResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is any error that caused the request to fail.
+ ErrorCode int16
+
+ // TokenDetails shows information about each token created from any principal
+ // in the request.
+ TokenDetails []DescribeDelegationTokenResponseTokenDetail
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DescribeDelegationTokenResponse) Key() int16 { return 41 }
+func (*DescribeDelegationTokenResponse) MaxVersion() int16 { return 3 }
+func (v *DescribeDelegationTokenResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeDelegationTokenResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeDelegationTokenResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *DescribeDelegationTokenResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 1
+}
+
+func (v *DescribeDelegationTokenResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeDelegationTokenResponse) RequestKind() Request {
+ return &DescribeDelegationTokenRequest{Version: v.Version}
+}
+
+func (v *DescribeDelegationTokenResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.TokenDetails
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.PrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.PrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.TokenRequesterPrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.TokenRequesterPrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.IssueTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ExpiryTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.MaxTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.TokenID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.HMAC
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ {
+ v := v.Renewers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.PrincipalType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.PrincipalName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeDelegationTokenResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeDelegationTokenResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeDelegationTokenResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.TokenDetails
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeDelegationTokenResponseTokenDetail, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalName = v
+ }
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TokenRequesterPrincipalType = v
+ }
+ if version >= 3 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TokenRequesterPrincipalName = v
+ }
+ {
+ v := b.Int64()
+ s.IssueTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.ExpiryTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.MaxTimestamp = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TokenID = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.HMAC = v
+ }
+ {
+ v := s.Renewers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeDelegationTokenResponseTokenDetailRenewer, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.PrincipalName = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Renewers = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.TokenDetails = v
+ }
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeDelegationTokenResponse returns a pointer to a default DescribeDelegationTokenResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeDelegationTokenResponse() *DescribeDelegationTokenResponse {
+ var v DescribeDelegationTokenResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeDelegationTokenResponse.
+func (v *DescribeDelegationTokenResponse) Default() {
+}
+
+// NewDescribeDelegationTokenResponse returns a default DescribeDelegationTokenResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeDelegationTokenResponse() DescribeDelegationTokenResponse {
+ var v DescribeDelegationTokenResponse
+ v.Default()
+ return v
+}
+
+// DeleteGroupsRequest deletes consumer groups. This request was added for
+// Kafka 1.1.0 corresponding to the removal of RetentionTimeMillis from
+// OffsetCommitRequest. See KIP-229 for more details.
+type DeleteGroupsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Groups is a list of groups to delete.
+ Groups []string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DeleteGroupsRequest) Key() int16 { return 42 }
+func (*DeleteGroupsRequest) MaxVersion() int16 { return 2 }
+func (v *DeleteGroupsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DeleteGroupsRequest) GetVersion() int16 { return v.Version }
+func (v *DeleteGroupsRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *DeleteGroupsRequest) IsGroupCoordinatorRequest() {}
+func (v *DeleteGroupsRequest) ResponseKind() Response {
+ r := &DeleteGroupsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DeleteGroupsRequest) RequestWith(ctx context.Context, r Requestor) (*DeleteGroupsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DeleteGroupsResponse)
+ return resp, err
+}
+
+func (v *DeleteGroupsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteGroupsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteGroupsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteGroupsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.Groups = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteGroupsRequest returns a pointer to a default DeleteGroupsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteGroupsRequest() *DeleteGroupsRequest {
+ var v DeleteGroupsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteGroupsRequest.
+func (v *DeleteGroupsRequest) Default() {
+}
+
+// NewDeleteGroupsRequest returns a default DeleteGroupsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteGroupsRequest() DeleteGroupsRequest {
+ var v DeleteGroupsRequest
+ v.Default()
+ return v
+}
+
+type DeleteGroupsResponseGroup struct {
+ // Group is a group ID requested for deletion.
+ Group string
+
+ // ErrorCode is the error code returned for this group's deletion request.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // to delete a group.
+ //
+ // INVALID_GROUP_ID is returned if the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator for this
+ // group is not yet active.
+ //
+ // GROUP_ID_NOT_FOUND is returned if the group ID does not exist.
+ //
+ // NON_EMPTY_GROUP is returned if attempting to delete a group that is
+ // not in the empty state.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteGroupsResponseGroup.
+func (v *DeleteGroupsResponseGroup) Default() {
+}
+
+// NewDeleteGroupsResponseGroup returns a default DeleteGroupsResponseGroup
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteGroupsResponseGroup() DeleteGroupsResponseGroup {
+ var v DeleteGroupsResponseGroup
+ v.Default()
+ return v
+}
+
+// DeleteGroupsResponse is returned from a DeleteGroupsRequest.
+type DeleteGroupsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after this request.
+ // For Kafka < 2.0.0, the throttle is applied before issuing a response.
+ // For Kafka >= 2.0.0, the throttle is applied after issuing a response.
+ //
+ // This request switched at version 1.
+ ThrottleMillis int32
+
+ // Groups are the responses to each group requested for deletion.
+ Groups []DeleteGroupsResponseGroup
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*DeleteGroupsResponse) Key() int16 { return 42 }
+func (*DeleteGroupsResponse) MaxVersion() int16 { return 2 }
+func (v *DeleteGroupsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DeleteGroupsResponse) GetVersion() int16 { return v.Version }
+func (v *DeleteGroupsResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *DeleteGroupsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 1 }
+func (v *DeleteGroupsResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *DeleteGroupsResponse) RequestKind() Request { return &DeleteGroupsRequest{Version: v.Version} }
+
+func (v *DeleteGroupsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DeleteGroupsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DeleteGroupsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DeleteGroupsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DeleteGroupsResponseGroup, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Groups = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDeleteGroupsResponse returns a pointer to a default DeleteGroupsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDeleteGroupsResponse() *DeleteGroupsResponse {
+ var v DeleteGroupsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DeleteGroupsResponse.
+func (v *DeleteGroupsResponse) Default() {
+}
+
+// NewDeleteGroupsResponse returns a default DeleteGroupsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDeleteGroupsResponse() DeleteGroupsResponse {
+ var v DeleteGroupsResponse
+ v.Default()
+ return v
+}
+
+type ElectLeadersRequestTopic struct {
+ // Topic is a topic to trigger leader elections for (but only for the
+ // partitions below).
+ Topic string
+
+ // Partitions is an array of partitions in a topic to trigger leader
+ // elections for.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ElectLeadersRequestTopic.
+func (v *ElectLeadersRequestTopic) Default() {
+}
+
+// NewElectLeadersRequestTopic returns a default ElectLeadersRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewElectLeadersRequestTopic() ElectLeadersRequestTopic {
+ var v ElectLeadersRequestTopic
+ v.Default()
+ return v
+}
+
+// ElectLeadersRequest begins a leader election for all given topic
+// partitions. This request was added in Kafka 2.2.0 to replace the zookeeper
+// only option of triggering leader elections before. See KIP-183 for more
+// details. KIP-460 introduced the ElectionType field with Kafka 2.4.0.
+type ElectLeadersRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ElectionType is the type of election to conduct. 0 elects the preferred
+ // replica, 1 elects the first live replica if there are no in-sync replicas
+ // (i.e., unclean leader election).
+ ElectionType int8 // v1+
+
+ // Topics is an array of topics and corresponding partitions to
+ // trigger leader elections for, or null for all.
+ Topics []ElectLeadersRequestTopic
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 60000.
+ TimeoutMillis int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*ElectLeadersRequest) Key() int16 { return 43 }
+func (*ElectLeadersRequest) MaxVersion() int16 { return 2 }
+func (v *ElectLeadersRequest) SetVersion(version int16) { v.Version = version }
+func (v *ElectLeadersRequest) GetVersion() int16 { return v.Version }
+func (v *ElectLeadersRequest) IsFlexible() bool { return v.Version >= 2 }
+func (v *ElectLeadersRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *ElectLeadersRequest) SetTimeout(timeoutMillis int32) { v.TimeoutMillis = timeoutMillis }
+func (v *ElectLeadersRequest) IsAdminRequest() {}
+func (v *ElectLeadersRequest) ResponseKind() Response {
+ r := &ElectLeadersResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ElectLeadersRequest) RequestWith(ctx context.Context, r Requestor) (*ElectLeadersResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ElectLeadersResponse)
+ return resp, err
+}
+
+func (v *ElectLeadersRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ if version >= 1 {
+ v := v.ElectionType
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ElectLeadersRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ElectLeadersRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ElectLeadersRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ if version >= 1 {
+ v := b.Int8()
+ s.ElectionType = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []ElectLeadersRequestTopic{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ElectLeadersRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrElectLeadersRequest returns a pointer to a default ElectLeadersRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrElectLeadersRequest() *ElectLeadersRequest {
+ var v ElectLeadersRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ElectLeadersRequest.
+func (v *ElectLeadersRequest) Default() {
+ v.TimeoutMillis = 60000
+}
+
+// NewElectLeadersRequest returns a default ElectLeadersRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewElectLeadersRequest() ElectLeadersRequest {
+ var v ElectLeadersRequest
+ v.Default()
+ return v
+}
+
+type ElectLeadersResponseTopicPartition struct {
+ // Partition is the partition for this result.
+ Partition int32
+
+ // ErrorCode is the error code returned for this topic/partition leader
+ // election.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if the client is not
+ // authorized to trigger leader elections.
+ //
+ // NOT_CONTROLLER is returned if the request was not issued to a Kafka
+ // controller.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the topic/partition does
+ // not exist on any broker in the cluster (this is slightly different
+ // from the usual meaning of a single broker not knowing of the topic
+ // partition).
+ //
+ // PREFERRED_LEADER_NOT_AVAILABLE is returned if the preferred leader
+ // could not be elected (for example, the preferred leader was not in
+ // the ISR).
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the leader election failed.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ElectLeadersResponseTopicPartition.
+func (v *ElectLeadersResponseTopicPartition) Default() {
+}
+
+// NewElectLeadersResponseTopicPartition returns a default ElectLeadersResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewElectLeadersResponseTopicPartition() ElectLeadersResponseTopicPartition {
+ var v ElectLeadersResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type ElectLeadersResponseTopic struct {
+ // Topic is topic for the given partition results below.
+ Topic string
+
+ // Partitions contains election results for a topic's partitions.
+ Partitions []ElectLeadersResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ElectLeadersResponseTopic.
+func (v *ElectLeadersResponseTopic) Default() {
+}
+
+// NewElectLeadersResponseTopic returns a default ElectLeadersResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewElectLeadersResponseTopic() ElectLeadersResponseTopic {
+ var v ElectLeadersResponseTopic
+ v.Default()
+ return v
+}
+
+// ElectLeadersResponse is a response for an ElectLeadersRequest.
+type ElectLeadersResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // ErrorCode is any error that applies to all partitions.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if the client is not
+ // authorized to reassign partitions.
+ ErrorCode int16 // v1+
+
+ // Topics contains leader election results for each requested topic.
+ Topics []ElectLeadersResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v2+
+}
+
+func (*ElectLeadersResponse) Key() int16 { return 43 }
+func (*ElectLeadersResponse) MaxVersion() int16 { return 2 }
+func (v *ElectLeadersResponse) SetVersion(version int16) { v.Version = version }
+func (v *ElectLeadersResponse) GetVersion() int16 { return v.Version }
+func (v *ElectLeadersResponse) IsFlexible() bool { return v.Version >= 2 }
+func (v *ElectLeadersResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *ElectLeadersResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *ElectLeadersResponse) RequestKind() Request { return &ElectLeadersRequest{Version: v.Version} }
+
+func (v *ElectLeadersResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 1 {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ElectLeadersResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ElectLeadersResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ElectLeadersResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 2
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ if version >= 1 {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ElectLeadersResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ElectLeadersResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrElectLeadersResponse returns a pointer to a default ElectLeadersResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrElectLeadersResponse() *ElectLeadersResponse {
+ var v ElectLeadersResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ElectLeadersResponse.
+func (v *ElectLeadersResponse) Default() {
+}
+
+// NewElectLeadersResponse returns a default ElectLeadersResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewElectLeadersResponse() ElectLeadersResponse {
+ var v ElectLeadersResponse
+ v.Default()
+ return v
+}
+
+type IncrementalAlterConfigsRequestResourceConfig struct {
+ // Name is a key to modify (e.g. segment.bytes).
+ //
+ // For broker loggers, see KIP-412 section "Request/Response Overview"
+ // for details on how to change per logger log levels.
+ Name string
+
+ // Op is the type of operation to perform for this config name.
+ //
+ // SET (0) is to set a configuration value; the value must not be null.
+ //
+ // DELETE (1) is to delete a configuration key.
+ //
+ // APPEND (2) is to add a value to the list of values for a key (if the
+ // key is for a list of values).
+ //
+ // SUBTRACT (3) is to remove a value from a list of values (if the key
+ // is for a list of values).
+ Op IncrementalAlterConfigOp
+
+ // Value is a value to set for the key (e.g. 10).
+ Value *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to IncrementalAlterConfigsRequestResourceConfig.
+func (v *IncrementalAlterConfigsRequestResourceConfig) Default() {
+}
+
+// NewIncrementalAlterConfigsRequestResourceConfig returns a default IncrementalAlterConfigsRequestResourceConfig
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewIncrementalAlterConfigsRequestResourceConfig() IncrementalAlterConfigsRequestResourceConfig {
+ var v IncrementalAlterConfigsRequestResourceConfig
+ v.Default()
+ return v
+}
+
+type IncrementalAlterConfigsRequestResource struct {
+ // ResourceType is an enum corresponding to the type of config to alter.
+ ResourceType ConfigResourceType
+
+ // ResourceName is the name of config to alter.
+ //
+ // If the requested type is a topic, this corresponds to a topic name.
+ //
+ // If the requested type if a broker, this should either be empty or be
+ // the ID of the broker this request is issued to. If it is empty, this
+ // updates all broker configs. If a specific ID, this updates just the
+ // broker. Using a specific ID also ensures that brokers reload config
+ // or secret files even if the file path has not changed. Lastly, password
+ // config options can only be defined on a per broker basis.
+ //
+ // If the type is broker logger, this must be a broker ID.
+ ResourceName string
+
+ // Configs contains key/value config pairs to set on the resource.
+ Configs []IncrementalAlterConfigsRequestResourceConfig
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to IncrementalAlterConfigsRequestResource.
+func (v *IncrementalAlterConfigsRequestResource) Default() {
+}
+
+// NewIncrementalAlterConfigsRequestResource returns a default IncrementalAlterConfigsRequestResource
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewIncrementalAlterConfigsRequestResource() IncrementalAlterConfigsRequestResource {
+ var v IncrementalAlterConfigsRequestResource
+ v.Default()
+ return v
+}
+
+// IncrementalAlterConfigsRequest issues ar equest to alter either topic or
+// broker configs.
+//
+// This API was added in Kafka 2.3.0 to replace AlterConfigs. The key benefit
+// of this API is that consumers do not need to know the full config state
+// to add or remove new config options. See KIP-339 for more details.
+type IncrementalAlterConfigsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Resources is an array of configs to alter.
+ Resources []IncrementalAlterConfigsRequestResource
+
+ // ValidateOnly validates the request but does not apply it.
+ ValidateOnly bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*IncrementalAlterConfigsRequest) Key() int16 { return 44 }
+func (*IncrementalAlterConfigsRequest) MaxVersion() int16 { return 1 }
+func (v *IncrementalAlterConfigsRequest) SetVersion(version int16) { v.Version = version }
+func (v *IncrementalAlterConfigsRequest) GetVersion() int16 { return v.Version }
+func (v *IncrementalAlterConfigsRequest) IsFlexible() bool { return v.Version >= 1 }
+func (v *IncrementalAlterConfigsRequest) ResponseKind() Response {
+ r := &IncrementalAlterConfigsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *IncrementalAlterConfigsRequest) RequestWith(ctx context.Context, r Requestor) (*IncrementalAlterConfigsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*IncrementalAlterConfigsResponse)
+ return resp, err
+}
+
+func (v *IncrementalAlterConfigsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.Resources
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Configs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Op
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.ValidateOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *IncrementalAlterConfigsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *IncrementalAlterConfigsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *IncrementalAlterConfigsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := s.Resources
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]IncrementalAlterConfigsRequestResource, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var t ConfigResourceType
+ {
+ v := b.Int8()
+ t = ConfigResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ {
+ v := s.Configs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]IncrementalAlterConfigsRequestResourceConfig, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var t IncrementalAlterConfigOp
+ {
+ v := b.Int8()
+ t = IncrementalAlterConfigOp(v)
+ }
+ v := t
+ s.Op = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Value = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Configs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Resources = v
+ }
+ {
+ v := b.Bool()
+ s.ValidateOnly = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrIncrementalAlterConfigsRequest returns a pointer to a default IncrementalAlterConfigsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrIncrementalAlterConfigsRequest() *IncrementalAlterConfigsRequest {
+ var v IncrementalAlterConfigsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to IncrementalAlterConfigsRequest.
+func (v *IncrementalAlterConfigsRequest) Default() {
+}
+
+// NewIncrementalAlterConfigsRequest returns a default IncrementalAlterConfigsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewIncrementalAlterConfigsRequest() IncrementalAlterConfigsRequest {
+ var v IncrementalAlterConfigsRequest
+ v.Default()
+ return v
+}
+
+type IncrementalAlterConfigsResponseResource struct {
+ // ErrorCode is the error code returned for incrementally altering configs.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if asking to alter broker
+ // configs but the client is not authorized to do so.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if asking to alter topic
+ // configs but the client is not authorized to do so.
+ //
+ // INVALID_TOPIC_EXCEPTION is returned if the requested topic was invalid.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of
+ // the requested topic.
+ //
+ // INVALID_REQUEST is returned if the requested config is invalid or if
+ // asking Kafka to alter an invalid resource.
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the incremental alter config failed.
+ ErrorMessage *string
+
+ // ResourceType is the enum corresponding to the type of altered config.
+ ResourceType ConfigResourceType
+
+ // ResourceName is the name corresponding to the incremental alter config
+ // request.
+ ResourceName string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to IncrementalAlterConfigsResponseResource.
+func (v *IncrementalAlterConfigsResponseResource) Default() {
+}
+
+// NewIncrementalAlterConfigsResponseResource returns a default IncrementalAlterConfigsResponseResource
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewIncrementalAlterConfigsResponseResource() IncrementalAlterConfigsResponseResource {
+ var v IncrementalAlterConfigsResponseResource
+ v.Default()
+ return v
+}
+
+// IncrementalAlterConfigsResponse is returned from an IncrementalAlterConfigsRequest.
+type IncrementalAlterConfigsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // Resources are responses for each resources in the alter request.
+ Resources []IncrementalAlterConfigsResponseResource
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*IncrementalAlterConfigsResponse) Key() int16 { return 44 }
+func (*IncrementalAlterConfigsResponse) MaxVersion() int16 { return 1 }
+func (v *IncrementalAlterConfigsResponse) SetVersion(version int16) { v.Version = version }
+func (v *IncrementalAlterConfigsResponse) GetVersion() int16 { return v.Version }
+func (v *IncrementalAlterConfigsResponse) IsFlexible() bool { return v.Version >= 1 }
+func (v *IncrementalAlterConfigsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *IncrementalAlterConfigsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *IncrementalAlterConfigsResponse) RequestKind() Request {
+ return &IncrementalAlterConfigsRequest{Version: v.Version}
+}
+
+func (v *IncrementalAlterConfigsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Resources
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ResourceType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.ResourceName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *IncrementalAlterConfigsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *IncrementalAlterConfigsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *IncrementalAlterConfigsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Resources
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]IncrementalAlterConfigsResponseResource, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ var t ConfigResourceType
+ {
+ v := b.Int8()
+ t = ConfigResourceType(v)
+ }
+ v := t
+ s.ResourceType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ResourceName = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Resources = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrIncrementalAlterConfigsResponse returns a pointer to a default IncrementalAlterConfigsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrIncrementalAlterConfigsResponse() *IncrementalAlterConfigsResponse {
+ var v IncrementalAlterConfigsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to IncrementalAlterConfigsResponse.
+func (v *IncrementalAlterConfigsResponse) Default() {
+}
+
+// NewIncrementalAlterConfigsResponse returns a default IncrementalAlterConfigsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewIncrementalAlterConfigsResponse() IncrementalAlterConfigsResponse {
+ var v IncrementalAlterConfigsResponse
+ v.Default()
+ return v
+}
+
+type AlterPartitionAssignmentsRequestTopicPartition struct {
+ // Partition is a partition to reassign.
+ Partition int32
+
+ // Replicas are replicas to place the partition on, or null to
+ // cancel a pending reassignment of this partition.
+ Replicas []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionAssignmentsRequestTopicPartition.
+func (v *AlterPartitionAssignmentsRequestTopicPartition) Default() {
+}
+
+// NewAlterPartitionAssignmentsRequestTopicPartition returns a default AlterPartitionAssignmentsRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionAssignmentsRequestTopicPartition() AlterPartitionAssignmentsRequestTopicPartition {
+ var v AlterPartitionAssignmentsRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type AlterPartitionAssignmentsRequestTopic struct {
+ // Topic is a topic to reassign the partitions of.
+ Topic string
+
+ // Partitions contains partitions to reassign.
+ Partitions []AlterPartitionAssignmentsRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionAssignmentsRequestTopic.
+func (v *AlterPartitionAssignmentsRequestTopic) Default() {
+}
+
+// NewAlterPartitionAssignmentsRequestTopic returns a default AlterPartitionAssignmentsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionAssignmentsRequestTopic() AlterPartitionAssignmentsRequestTopic {
+ var v AlterPartitionAssignmentsRequestTopic
+ v.Default()
+ return v
+}
+
+// AlterPartitionAssignmentsRequest, proposed in KIP-455 and implemented in
+// Kafka 2.4.0, is a request to reassign partitions to certain brokers.
+//
+// ACL wise, this requires ALTER on CLUSTER.
+type AlterPartitionAssignmentsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 60000.
+ TimeoutMillis int32
+
+ // Topics are topics for which to reassign partitions of.
+ Topics []AlterPartitionAssignmentsRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AlterPartitionAssignmentsRequest) Key() int16 { return 45 }
+func (*AlterPartitionAssignmentsRequest) MaxVersion() int16 { return 0 }
+func (v *AlterPartitionAssignmentsRequest) SetVersion(version int16) { v.Version = version }
+func (v *AlterPartitionAssignmentsRequest) GetVersion() int16 { return v.Version }
+func (v *AlterPartitionAssignmentsRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *AlterPartitionAssignmentsRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *AlterPartitionAssignmentsRequest) SetTimeout(timeoutMillis int32) {
+ v.TimeoutMillis = timeoutMillis
+}
+func (v *AlterPartitionAssignmentsRequest) IsAdminRequest() {}
+func (v *AlterPartitionAssignmentsRequest) ResponseKind() Response {
+ r := &AlterPartitionAssignmentsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AlterPartitionAssignmentsRequest) RequestWith(ctx context.Context, r Requestor) (*AlterPartitionAssignmentsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AlterPartitionAssignmentsResponse)
+ return resp, err
+}
+
+func (v *AlterPartitionAssignmentsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterPartitionAssignmentsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterPartitionAssignmentsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterPartitionAssignmentsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionAssignmentsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionAssignmentsRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []int32{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterPartitionAssignmentsRequest returns a pointer to a default AlterPartitionAssignmentsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterPartitionAssignmentsRequest() *AlterPartitionAssignmentsRequest {
+ var v AlterPartitionAssignmentsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionAssignmentsRequest.
+func (v *AlterPartitionAssignmentsRequest) Default() {
+ v.TimeoutMillis = 60000
+}
+
+// NewAlterPartitionAssignmentsRequest returns a default AlterPartitionAssignmentsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionAssignmentsRequest() AlterPartitionAssignmentsRequest {
+ var v AlterPartitionAssignmentsRequest
+ v.Default()
+ return v
+}
+
+type AlterPartitionAssignmentsResponseTopicPartition struct {
+ // Partition is the partition being responded to.
+ Partition int32
+
+ // ErrorCode is the error code returned for partition reassignments.
+ //
+ // REQUEST_TIMED_OUT is returned if the request timed out.
+ //
+ // NOT_CONTROLLER is returned if the request was not issued to a Kafka
+ // controller.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if the client is not
+ // authorized to reassign partitions.
+ //
+ // NO_REASSIGNMENT_IN_PROGRESS is returned for partition reassignment
+ // cancellations when the partition was not being reassigned.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of
+ // the requested topic or the topic is being deleted.
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the partition reassignment failed.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionAssignmentsResponseTopicPartition.
+func (v *AlterPartitionAssignmentsResponseTopicPartition) Default() {
+}
+
+// NewAlterPartitionAssignmentsResponseTopicPartition returns a default AlterPartitionAssignmentsResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionAssignmentsResponseTopicPartition() AlterPartitionAssignmentsResponseTopicPartition {
+ var v AlterPartitionAssignmentsResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type AlterPartitionAssignmentsResponseTopic struct {
+ // Topic is the topic being responded to.
+ Topic string
+
+ // Partitions contains responses for partitions.
+ Partitions []AlterPartitionAssignmentsResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionAssignmentsResponseTopic.
+func (v *AlterPartitionAssignmentsResponseTopic) Default() {
+}
+
+// NewAlterPartitionAssignmentsResponseTopic returns a default AlterPartitionAssignmentsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionAssignmentsResponseTopic() AlterPartitionAssignmentsResponseTopic {
+ var v AlterPartitionAssignmentsResponseTopic
+ v.Default()
+ return v
+}
+
+// AlterPartitionAssignmentsResponse is returned for an AlterPartitionAssignmentsRequest.
+type AlterPartitionAssignmentsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // ErrorCode is any global (applied to all partitions) error code.
+ ErrorCode int16
+
+ // ErrorMessage is any global (applied to all partitions) error message.
+ ErrorMessage *string
+
+ // Topics contains responses for each topic requested.
+ Topics []AlterPartitionAssignmentsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AlterPartitionAssignmentsResponse) Key() int16 { return 45 }
+func (*AlterPartitionAssignmentsResponse) MaxVersion() int16 { return 0 }
+func (v *AlterPartitionAssignmentsResponse) SetVersion(version int16) { v.Version = version }
+func (v *AlterPartitionAssignmentsResponse) GetVersion() int16 { return v.Version }
+func (v *AlterPartitionAssignmentsResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *AlterPartitionAssignmentsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *AlterPartitionAssignmentsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *AlterPartitionAssignmentsResponse) RequestKind() Request {
+ return &AlterPartitionAssignmentsRequest{Version: v.Version}
+}
+
+func (v *AlterPartitionAssignmentsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterPartitionAssignmentsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterPartitionAssignmentsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterPartitionAssignmentsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionAssignmentsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionAssignmentsResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterPartitionAssignmentsResponse returns a pointer to a default AlterPartitionAssignmentsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterPartitionAssignmentsResponse() *AlterPartitionAssignmentsResponse {
+ var v AlterPartitionAssignmentsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionAssignmentsResponse.
+func (v *AlterPartitionAssignmentsResponse) Default() {
+}
+
+// NewAlterPartitionAssignmentsResponse returns a default AlterPartitionAssignmentsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionAssignmentsResponse() AlterPartitionAssignmentsResponse {
+ var v AlterPartitionAssignmentsResponse
+ v.Default()
+ return v
+}
+
+type ListPartitionReassignmentsRequestTopic struct {
+ // Topic is a topic to list in progress partition reassingments of.
+ Topic string
+
+ // Partitions are partitions to list in progress reassignments of.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListPartitionReassignmentsRequestTopic.
+func (v *ListPartitionReassignmentsRequestTopic) Default() {
+}
+
+// NewListPartitionReassignmentsRequestTopic returns a default ListPartitionReassignmentsRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListPartitionReassignmentsRequestTopic() ListPartitionReassignmentsRequestTopic {
+ var v ListPartitionReassignmentsRequestTopic
+ v.Default()
+ return v
+}
+
+// ListPartitionReassignmentsRequest, proposed in KIP-455 and implemented in
+// Kafka 2.4.0, is a request to list in progress partition reassignments.
+//
+// ACL wise, this requires DESCRIBE on CLUSTER.
+type ListPartitionReassignmentsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 60000.
+ TimeoutMillis int32
+
+ // Topics are topics to list in progress partition reassignments of, or null
+ // to list everything.
+ Topics []ListPartitionReassignmentsRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ListPartitionReassignmentsRequest) Key() int16 { return 46 }
+func (*ListPartitionReassignmentsRequest) MaxVersion() int16 { return 0 }
+func (v *ListPartitionReassignmentsRequest) SetVersion(version int16) { v.Version = version }
+func (v *ListPartitionReassignmentsRequest) GetVersion() int16 { return v.Version }
+func (v *ListPartitionReassignmentsRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *ListPartitionReassignmentsRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *ListPartitionReassignmentsRequest) SetTimeout(timeoutMillis int32) {
+ v.TimeoutMillis = timeoutMillis
+}
+func (v *ListPartitionReassignmentsRequest) IsAdminRequest() {}
+func (v *ListPartitionReassignmentsRequest) ResponseKind() Response {
+ r := &ListPartitionReassignmentsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ListPartitionReassignmentsRequest) RequestWith(ctx context.Context, r Requestor) (*ListPartitionReassignmentsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ListPartitionReassignmentsResponse)
+ return resp, err
+}
+
+func (v *ListPartitionReassignmentsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListPartitionReassignmentsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListPartitionReassignmentsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListPartitionReassignmentsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []ListPartitionReassignmentsRequestTopic{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListPartitionReassignmentsRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListPartitionReassignmentsRequest returns a pointer to a default ListPartitionReassignmentsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListPartitionReassignmentsRequest() *ListPartitionReassignmentsRequest {
+ var v ListPartitionReassignmentsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListPartitionReassignmentsRequest.
+func (v *ListPartitionReassignmentsRequest) Default() {
+ v.TimeoutMillis = 60000
+}
+
+// NewListPartitionReassignmentsRequest returns a default ListPartitionReassignmentsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListPartitionReassignmentsRequest() ListPartitionReassignmentsRequest {
+ var v ListPartitionReassignmentsRequest
+ v.Default()
+ return v
+}
+
+type ListPartitionReassignmentsResponseTopicPartition struct {
+ // Partition is the partition being responded to.
+ Partition int32
+
+ // Replicas is the partition's current replicas.
+ Replicas []int32
+
+ // AddingReplicas are replicas currently being added to the partition.
+ AddingReplicas []int32
+
+ // RemovingReplicas are replicas currently being removed from the partition.
+ RemovingReplicas []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListPartitionReassignmentsResponseTopicPartition.
+func (v *ListPartitionReassignmentsResponseTopicPartition) Default() {
+}
+
+// NewListPartitionReassignmentsResponseTopicPartition returns a default ListPartitionReassignmentsResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListPartitionReassignmentsResponseTopicPartition() ListPartitionReassignmentsResponseTopicPartition {
+ var v ListPartitionReassignmentsResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type ListPartitionReassignmentsResponseTopic struct {
+ // Topic is the topic being responded to.
+ Topic string
+
+ // Partitions contains responses for partitions.
+ Partitions []ListPartitionReassignmentsResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListPartitionReassignmentsResponseTopic.
+func (v *ListPartitionReassignmentsResponseTopic) Default() {
+}
+
+// NewListPartitionReassignmentsResponseTopic returns a default ListPartitionReassignmentsResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListPartitionReassignmentsResponseTopic() ListPartitionReassignmentsResponseTopic {
+ var v ListPartitionReassignmentsResponseTopic
+ v.Default()
+ return v
+}
+
+// ListPartitionReassignmentsResponse is returned for a ListPartitionReassignmentsRequest.
+type ListPartitionReassignmentsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // ErrorCode is the error code returned for listing reassignments.
+ //
+ // REQUEST_TIMED_OUT is returned if the request timed out.
+ //
+ // NOT_CONTROLLER is returned if the request was not issued to a Kafka
+ // controller.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if the client is not
+ // authorized to reassign partitions.
+ ErrorCode int16
+
+ // ErrorMessage is any global (applied to all partitions) error message.
+ ErrorMessage *string
+
+ // Topics contains responses for each topic requested.
+ Topics []ListPartitionReassignmentsResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ListPartitionReassignmentsResponse) Key() int16 { return 46 }
+func (*ListPartitionReassignmentsResponse) MaxVersion() int16 { return 0 }
+func (v *ListPartitionReassignmentsResponse) SetVersion(version int16) { v.Version = version }
+func (v *ListPartitionReassignmentsResponse) GetVersion() int16 { return v.Version }
+func (v *ListPartitionReassignmentsResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *ListPartitionReassignmentsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *ListPartitionReassignmentsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *ListPartitionReassignmentsResponse) RequestKind() Request {
+ return &ListPartitionReassignmentsRequest{Version: v.Version}
+}
+
+func (v *ListPartitionReassignmentsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Replicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ {
+ v := v.AddingReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ {
+ v := v.RemovingReplicas
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListPartitionReassignmentsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListPartitionReassignmentsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListPartitionReassignmentsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListPartitionReassignmentsResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListPartitionReassignmentsResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := s.Replicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Replicas = v
+ }
+ {
+ v := s.AddingReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.AddingReplicas = v
+ }
+ {
+ v := s.RemovingReplicas
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.RemovingReplicas = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListPartitionReassignmentsResponse returns a pointer to a default ListPartitionReassignmentsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListPartitionReassignmentsResponse() *ListPartitionReassignmentsResponse {
+ var v ListPartitionReassignmentsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListPartitionReassignmentsResponse.
+func (v *ListPartitionReassignmentsResponse) Default() {
+}
+
+// NewListPartitionReassignmentsResponse returns a default ListPartitionReassignmentsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListPartitionReassignmentsResponse() ListPartitionReassignmentsResponse {
+ var v ListPartitionReassignmentsResponse
+ v.Default()
+ return v
+}
+
+type OffsetDeleteRequestTopicPartition struct {
+ // Partition is a partition to delete offsets for.
+ Partition int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetDeleteRequestTopicPartition.
+func (v *OffsetDeleteRequestTopicPartition) Default() {
+}
+
+// NewOffsetDeleteRequestTopicPartition returns a default OffsetDeleteRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetDeleteRequestTopicPartition() OffsetDeleteRequestTopicPartition {
+ var v OffsetDeleteRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetDeleteRequestTopic struct {
+ // Topic is a topic to delete offsets in.
+ Topic string
+
+ // Partitions are partitions to delete offsets for.
+ Partitions []OffsetDeleteRequestTopicPartition
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetDeleteRequestTopic.
+func (v *OffsetDeleteRequestTopic) Default() {
+}
+
+// NewOffsetDeleteRequestTopic returns a default OffsetDeleteRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetDeleteRequestTopic() OffsetDeleteRequestTopic {
+ var v OffsetDeleteRequestTopic
+ v.Default()
+ return v
+}
+
+// OffsetDeleteRequest, proposed in KIP-496 and implemented in Kafka 2.4.0, is
+// a request to delete group offsets.
+//
+// ACL wise, this requires DELETE on GROUP for the group and READ on TOPIC for
+// each topic.
+type OffsetDeleteRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Group is the group to delete offsets in.
+ Group string
+
+ // Topics are topics to delete offsets in.
+ Topics []OffsetDeleteRequestTopic
+}
+
+func (*OffsetDeleteRequest) Key() int16 { return 47 }
+func (*OffsetDeleteRequest) MaxVersion() int16 { return 0 }
+func (v *OffsetDeleteRequest) SetVersion(version int16) { v.Version = version }
+func (v *OffsetDeleteRequest) GetVersion() int16 { return v.Version }
+func (v *OffsetDeleteRequest) IsFlexible() bool { return false }
+func (v *OffsetDeleteRequest) IsGroupCoordinatorRequest() {}
+func (v *OffsetDeleteRequest) ResponseKind() Response {
+ r := &OffsetDeleteResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *OffsetDeleteRequest) RequestWith(ctx context.Context, r Requestor) (*OffsetDeleteResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*OffsetDeleteResponse)
+ return resp, err
+}
+
+func (v *OffsetDeleteRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.Group
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ }
+ return dst
+}
+
+func (v *OffsetDeleteRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetDeleteRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetDeleteRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Group = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetDeleteRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetDeleteRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetDeleteRequest returns a pointer to a default OffsetDeleteRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetDeleteRequest() *OffsetDeleteRequest {
+ var v OffsetDeleteRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetDeleteRequest.
+func (v *OffsetDeleteRequest) Default() {
+}
+
+// NewOffsetDeleteRequest returns a default OffsetDeleteRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetDeleteRequest() OffsetDeleteRequest {
+ var v OffsetDeleteRequest
+ v.Default()
+ return v
+}
+
+type OffsetDeleteResponseTopicPartition struct {
+ // Partition is the partition being responded to.
+ Partition int32
+
+ // ErrorCode is any per partition error code.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // for the topic / partition.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the broker does not know of
+ // the requested topic.
+ //
+ // GROUP_SUBSCRIBED_TO_TOPIC is returned if the topic is still subscribed to.
+ ErrorCode int16
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetDeleteResponseTopicPartition.
+func (v *OffsetDeleteResponseTopicPartition) Default() {
+}
+
+// NewOffsetDeleteResponseTopicPartition returns a default OffsetDeleteResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetDeleteResponseTopicPartition() OffsetDeleteResponseTopicPartition {
+ var v OffsetDeleteResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type OffsetDeleteResponseTopic struct {
+ // Topic is the topic being responded to.
+ Topic string
+
+ // Partitions are partitions being responded to.
+ Partitions []OffsetDeleteResponseTopicPartition
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetDeleteResponseTopic.
+func (v *OffsetDeleteResponseTopic) Default() {
+}
+
+// NewOffsetDeleteResponseTopic returns a default OffsetDeleteResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetDeleteResponseTopic() OffsetDeleteResponseTopic {
+ var v OffsetDeleteResponseTopic
+ v.Default()
+ return v
+}
+
+// OffsetDeleteResponse is a response to an offset delete request.
+type OffsetDeleteResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ErrorCode is any group wide error.
+ //
+ // GROUP_AUTHORIZATION_FAILED is returned if the client is not authorized
+ // for the group.
+ //
+ // INVALID_GROUP_ID is returned in the requested group ID is invalid.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is not available.
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the group is loading.
+ //
+ // NOT_COORDINATOR is returned if the requested broker is not the coordinator
+ // for the requested group.
+ //
+ // GROUP_ID_NOT_FOUND is returned if the group ID does not exist.
+ ErrorCode int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // Topics are responses to requested topics.
+ Topics []OffsetDeleteResponseTopic
+}
+
+func (*OffsetDeleteResponse) Key() int16 { return 47 }
+func (*OffsetDeleteResponse) MaxVersion() int16 { return 0 }
+func (v *OffsetDeleteResponse) SetVersion(version int16) { v.Version = version }
+func (v *OffsetDeleteResponse) GetVersion() int16 { return v.Version }
+func (v *OffsetDeleteResponse) IsFlexible() bool { return false }
+func (v *OffsetDeleteResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *OffsetDeleteResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *OffsetDeleteResponse) RequestKind() Request { return &OffsetDeleteRequest{Version: v.Version} }
+
+func (v *OffsetDeleteResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ }
+ }
+ }
+ }
+ return dst
+}
+
+func (v *OffsetDeleteResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *OffsetDeleteResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *OffsetDeleteResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetDeleteResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]OffsetDeleteResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ return b.Complete()
+}
+
+// NewPtrOffsetDeleteResponse returns a pointer to a default OffsetDeleteResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrOffsetDeleteResponse() *OffsetDeleteResponse {
+ var v OffsetDeleteResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to OffsetDeleteResponse.
+func (v *OffsetDeleteResponse) Default() {
+}
+
+// NewOffsetDeleteResponse returns a default OffsetDeleteResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewOffsetDeleteResponse() OffsetDeleteResponse {
+ var v OffsetDeleteResponse
+ v.Default()
+ return v
+}
+
+type DescribeClientQuotasRequestComponent struct {
+ // EntityType is the entity component type that this filter component
+ // applies to; some possible values are "user" or "client-id".
+ EntityType string
+
+ // MatchType specifies how to match an entity,
+ // with 0 meaning match on the name exactly,
+ // 1 meaning match on the default name,
+ // and 2 meaning any specified name.
+ MatchType QuotasMatchType
+
+ // Match is the string to match against, or null if unused for the given
+ // match type.
+ Match *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClientQuotasRequestComponent.
+func (v *DescribeClientQuotasRequestComponent) Default() {
+}
+
+// NewDescribeClientQuotasRequestComponent returns a default DescribeClientQuotasRequestComponent
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClientQuotasRequestComponent() DescribeClientQuotasRequestComponent {
+ var v DescribeClientQuotasRequestComponent
+ v.Default()
+ return v
+}
+
+// DescribeClientQuotasRequest, proposed in KIP-546 and introduced with Kafka 2.6.0,
+// provides a way to describe client quotas.
+type DescribeClientQuotasRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Components is a list of match filters to apply for describing quota entities.
+ Components []DescribeClientQuotasRequestComponent
+
+ // Strict signifies whether matches are strict; if true, the response
+ // excludes entities with unspecified entity types.
+ Strict bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*DescribeClientQuotasRequest) Key() int16 { return 48 }
+func (*DescribeClientQuotasRequest) MaxVersion() int16 { return 1 }
+func (v *DescribeClientQuotasRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeClientQuotasRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeClientQuotasRequest) IsFlexible() bool { return v.Version >= 1 }
+func (v *DescribeClientQuotasRequest) ResponseKind() Response {
+ r := &DescribeClientQuotasResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeClientQuotasRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeClientQuotasResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeClientQuotasResponse)
+ return resp, err
+}
+
+func (v *DescribeClientQuotasRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.Components
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.EntityType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MatchType
+ {
+ v := int8(v)
+ dst = kbin.AppendInt8(dst, v)
+ }
+ }
+ {
+ v := v.Match
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Strict
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeClientQuotasRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeClientQuotasRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeClientQuotasRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := s.Components
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeClientQuotasRequestComponent, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.EntityType = v
+ }
+ {
+ var t QuotasMatchType
+ {
+ v := b.Int8()
+ t = QuotasMatchType(v)
+ }
+ v := t
+ s.MatchType = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Match = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Components = v
+ }
+ {
+ v := b.Bool()
+ s.Strict = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeClientQuotasRequest returns a pointer to a default DescribeClientQuotasRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeClientQuotasRequest() *DescribeClientQuotasRequest {
+ var v DescribeClientQuotasRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClientQuotasRequest.
+func (v *DescribeClientQuotasRequest) Default() {
+}
+
+// NewDescribeClientQuotasRequest returns a default DescribeClientQuotasRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClientQuotasRequest() DescribeClientQuotasRequest {
+ var v DescribeClientQuotasRequest
+ v.Default()
+ return v
+}
+
+type DescribeClientQuotasResponseEntryEntity struct {
+ // Type is the entity type.
+ Type string
+
+ // Name is the entity name, or null if the default.
+ Name *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClientQuotasResponseEntryEntity.
+func (v *DescribeClientQuotasResponseEntryEntity) Default() {
+}
+
+// NewDescribeClientQuotasResponseEntryEntity returns a default DescribeClientQuotasResponseEntryEntity
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClientQuotasResponseEntryEntity() DescribeClientQuotasResponseEntryEntity {
+ var v DescribeClientQuotasResponseEntryEntity
+ v.Default()
+ return v
+}
+
+type DescribeClientQuotasResponseEntryValue struct {
+ // Key is the quota configuration key.
+ Key string
+
+ // Value is the quota configuration value.
+ Value float64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClientQuotasResponseEntryValue.
+func (v *DescribeClientQuotasResponseEntryValue) Default() {
+}
+
+// NewDescribeClientQuotasResponseEntryValue returns a default DescribeClientQuotasResponseEntryValue
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClientQuotasResponseEntryValue() DescribeClientQuotasResponseEntryValue {
+ var v DescribeClientQuotasResponseEntryValue
+ v.Default()
+ return v
+}
+
+type DescribeClientQuotasResponseEntry struct {
+ // Entity contains the quota entity components being described.
+ Entity []DescribeClientQuotasResponseEntryEntity
+
+ // Values are quota values for the entity.
+ Values []DescribeClientQuotasResponseEntryValue
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClientQuotasResponseEntry.
+func (v *DescribeClientQuotasResponseEntry) Default() {
+}
+
+// NewDescribeClientQuotasResponseEntry returns a default DescribeClientQuotasResponseEntry
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClientQuotasResponseEntry() DescribeClientQuotasResponseEntry {
+ var v DescribeClientQuotasResponseEntry
+ v.Default()
+ return v
+}
+
+// DescribeClientQuotasResponse is a response for a DescribeClientQuotasRequest.
+type DescribeClientQuotasResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // ErrorCode is any error for the request.
+ ErrorCode int16
+
+ // ErrorMessage is an error message for the request, or null if the request succeeded.
+ ErrorMessage *string
+
+ // Entries contains entities that were matched.
+ Entries []DescribeClientQuotasResponseEntry
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*DescribeClientQuotasResponse) Key() int16 { return 48 }
+func (*DescribeClientQuotasResponse) MaxVersion() int16 { return 1 }
+func (v *DescribeClientQuotasResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeClientQuotasResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeClientQuotasResponse) IsFlexible() bool { return v.Version >= 1 }
+func (v *DescribeClientQuotasResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *DescribeClientQuotasResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeClientQuotasResponse) RequestKind() Request {
+ return &DescribeClientQuotasRequest{Version: v.Version}
+}
+
+func (v *DescribeClientQuotasResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Entries
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Entity
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Type
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Values
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Key
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ dst = kbin.AppendFloat64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeClientQuotasResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeClientQuotasResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeClientQuotasResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.Entries
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []DescribeClientQuotasResponseEntry{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeClientQuotasResponseEntry, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := s.Entity
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeClientQuotasResponseEntryEntity, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Type = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Name = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Entity = v
+ }
+ {
+ v := s.Values
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeClientQuotasResponseEntryValue, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Key = v
+ }
+ {
+ v := b.Float64()
+ s.Value = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Values = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Entries = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeClientQuotasResponse returns a pointer to a default DescribeClientQuotasResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeClientQuotasResponse() *DescribeClientQuotasResponse {
+ var v DescribeClientQuotasResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClientQuotasResponse.
+func (v *DescribeClientQuotasResponse) Default() {
+}
+
+// NewDescribeClientQuotasResponse returns a default DescribeClientQuotasResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClientQuotasResponse() DescribeClientQuotasResponse {
+ var v DescribeClientQuotasResponse
+ v.Default()
+ return v
+}
+
+type AlterClientQuotasRequestEntryEntity struct {
+ // Type is the entity component's type; e.g. "client-id", "user" or "ip".
+ Type string
+
+ // Name is the name of the entity, or null for the default.
+ Name *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterClientQuotasRequestEntryEntity.
+func (v *AlterClientQuotasRequestEntryEntity) Default() {
+}
+
+// NewAlterClientQuotasRequestEntryEntity returns a default AlterClientQuotasRequestEntryEntity
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterClientQuotasRequestEntryEntity() AlterClientQuotasRequestEntryEntity {
+ var v AlterClientQuotasRequestEntryEntity
+ v.Default()
+ return v
+}
+
+type AlterClientQuotasRequestEntryOp struct {
+ // Key is the quota configuration key to alter.
+ Key string
+
+ // Value is the value to set; ignored if remove is true.
+ Value float64
+
+ // Remove is whether the quota configuration value should be removed or set.
+ Remove bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterClientQuotasRequestEntryOp.
+func (v *AlterClientQuotasRequestEntryOp) Default() {
+}
+
+// NewAlterClientQuotasRequestEntryOp returns a default AlterClientQuotasRequestEntryOp
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterClientQuotasRequestEntryOp() AlterClientQuotasRequestEntryOp {
+ var v AlterClientQuotasRequestEntryOp
+ v.Default()
+ return v
+}
+
+type AlterClientQuotasRequestEntry struct {
+ // Entity contains the components of a quota entity to alter.
+ Entity []AlterClientQuotasRequestEntryEntity
+
+ // Ops contains quota configuration entries to alter.
+ Ops []AlterClientQuotasRequestEntryOp
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterClientQuotasRequestEntry.
+func (v *AlterClientQuotasRequestEntry) Default() {
+}
+
+// NewAlterClientQuotasRequestEntry returns a default AlterClientQuotasRequestEntry
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterClientQuotasRequestEntry() AlterClientQuotasRequestEntry {
+ var v AlterClientQuotasRequestEntry
+ v.Default()
+ return v
+}
+
+// AlterClientQuotaRequest, proposed in KIP-546 and introduced with Kafka 2.6.0,
+// provides a way to alter client quotas.
+type AlterClientQuotasRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Entries are quota configuration entries to alter.
+ Entries []AlterClientQuotasRequestEntry
+
+ // ValidateOnly is makes this request a dry-run; the alteration is validated
+ // but not performed.
+ ValidateOnly bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*AlterClientQuotasRequest) Key() int16 { return 49 }
+func (*AlterClientQuotasRequest) MaxVersion() int16 { return 1 }
+func (v *AlterClientQuotasRequest) SetVersion(version int16) { v.Version = version }
+func (v *AlterClientQuotasRequest) GetVersion() int16 { return v.Version }
+func (v *AlterClientQuotasRequest) IsFlexible() bool { return v.Version >= 1 }
+func (v *AlterClientQuotasRequest) ResponseKind() Response {
+ r := &AlterClientQuotasResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AlterClientQuotasRequest) RequestWith(ctx context.Context, r Requestor) (*AlterClientQuotasResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AlterClientQuotasResponse)
+ return resp, err
+}
+
+func (v *AlterClientQuotasRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.Entries
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Entity
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Type
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Ops
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Key
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Value
+ dst = kbin.AppendFloat64(dst, v)
+ }
+ {
+ v := v.Remove
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.ValidateOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterClientQuotasRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterClientQuotasRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterClientQuotasRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := s.Entries
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterClientQuotasRequestEntry, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := s.Entity
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterClientQuotasRequestEntryEntity, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Type = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Name = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Entity = v
+ }
+ {
+ v := s.Ops
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterClientQuotasRequestEntryOp, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Key = v
+ }
+ {
+ v := b.Float64()
+ s.Value = v
+ }
+ {
+ v := b.Bool()
+ s.Remove = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Ops = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Entries = v
+ }
+ {
+ v := b.Bool()
+ s.ValidateOnly = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterClientQuotasRequest returns a pointer to a default AlterClientQuotasRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterClientQuotasRequest() *AlterClientQuotasRequest {
+ var v AlterClientQuotasRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterClientQuotasRequest.
+func (v *AlterClientQuotasRequest) Default() {
+}
+
+// NewAlterClientQuotasRequest returns a default AlterClientQuotasRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterClientQuotasRequest() AlterClientQuotasRequest {
+ var v AlterClientQuotasRequest
+ v.Default()
+ return v
+}
+
+type AlterClientQuotasResponseEntryEntity struct {
+ // Type is the entity component's type; e.g. "client-id" or "user".
+ Type string
+
+ // Name is the name of the entity, or null for the default.
+ Name *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterClientQuotasResponseEntryEntity.
+func (v *AlterClientQuotasResponseEntryEntity) Default() {
+}
+
+// NewAlterClientQuotasResponseEntryEntity returns a default AlterClientQuotasResponseEntryEntity
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterClientQuotasResponseEntryEntity() AlterClientQuotasResponseEntryEntity {
+ var v AlterClientQuotasResponseEntryEntity
+ v.Default()
+ return v
+}
+
+type AlterClientQuotasResponseEntry struct {
+ // ErrorCode is the error code for an alter on a matched entity.
+ ErrorCode int16
+
+ // ErrorMessage is an informative message if the alter on this entity failed.
+ ErrorMessage *string
+
+ // Entity contains the components of a matched entity.
+ Entity []AlterClientQuotasResponseEntryEntity
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterClientQuotasResponseEntry.
+func (v *AlterClientQuotasResponseEntry) Default() {
+}
+
+// NewAlterClientQuotasResponseEntry returns a default AlterClientQuotasResponseEntry
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterClientQuotasResponseEntry() AlterClientQuotasResponseEntry {
+ var v AlterClientQuotasResponseEntry
+ v.Default()
+ return v
+}
+
+// AlterClientQuotasResponse is a response to an AlterClientQuotasRequest.
+type AlterClientQuotasResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // Entries contains results for the alter request.
+ Entries []AlterClientQuotasResponseEntry
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v1+
+}
+
+func (*AlterClientQuotasResponse) Key() int16 { return 49 }
+func (*AlterClientQuotasResponse) MaxVersion() int16 { return 1 }
+func (v *AlterClientQuotasResponse) SetVersion(version int16) { v.Version = version }
+func (v *AlterClientQuotasResponse) GetVersion() int16 { return v.Version }
+func (v *AlterClientQuotasResponse) IsFlexible() bool { return v.Version >= 1 }
+func (v *AlterClientQuotasResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *AlterClientQuotasResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *AlterClientQuotasResponse) RequestKind() Request {
+ return &AlterClientQuotasRequest{Version: v.Version}
+}
+
+func (v *AlterClientQuotasResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Entries
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Entity
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Type
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterClientQuotasResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterClientQuotasResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterClientQuotasResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 1
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Entries
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterClientQuotasResponseEntry, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.Entity
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterClientQuotasResponseEntryEntity, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Type = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Name = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Entity = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Entries = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterClientQuotasResponse returns a pointer to a default AlterClientQuotasResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterClientQuotasResponse() *AlterClientQuotasResponse {
+ var v AlterClientQuotasResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterClientQuotasResponse.
+func (v *AlterClientQuotasResponse) Default() {
+}
+
+// NewAlterClientQuotasResponse returns a default AlterClientQuotasResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterClientQuotasResponse() AlterClientQuotasResponse {
+ var v AlterClientQuotasResponse
+ v.Default()
+ return v
+}
+
+type DescribeUserSCRAMCredentialsRequestUser struct {
+ // The user name.
+ Name string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeUserSCRAMCredentialsRequestUser.
+func (v *DescribeUserSCRAMCredentialsRequestUser) Default() {
+}
+
+// NewDescribeUserSCRAMCredentialsRequestUser returns a default DescribeUserSCRAMCredentialsRequestUser
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeUserSCRAMCredentialsRequestUser() DescribeUserSCRAMCredentialsRequestUser {
+ var v DescribeUserSCRAMCredentialsRequestUser
+ v.Default()
+ return v
+}
+
+// DescribeUserSCRAMCredentialsRequest, proposed in KIP-554 and introduced
+// with Kafka 2.7.0, describes user SCRAM credentials.
+//
+// This request was introduced as part of the overarching KIP-500 initiative,
+// which is to remove Zookeeper as a dependency.
+//
+// This request requires DESCRIBE on CLUSTER.
+type DescribeUserSCRAMCredentialsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The users to describe, or null to describe all.
+ Users []DescribeUserSCRAMCredentialsRequestUser
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeUserSCRAMCredentialsRequest) Key() int16 { return 50 }
+func (*DescribeUserSCRAMCredentialsRequest) MaxVersion() int16 { return 0 }
+func (v *DescribeUserSCRAMCredentialsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeUserSCRAMCredentialsRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeUserSCRAMCredentialsRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeUserSCRAMCredentialsRequest) ResponseKind() Response {
+ r := &DescribeUserSCRAMCredentialsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeUserSCRAMCredentialsRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeUserSCRAMCredentialsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeUserSCRAMCredentialsResponse)
+ return resp, err
+}
+
+func (v *DescribeUserSCRAMCredentialsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Users
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeUserSCRAMCredentialsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeUserSCRAMCredentialsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeUserSCRAMCredentialsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := s.Users
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []DescribeUserSCRAMCredentialsRequestUser{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeUserSCRAMCredentialsRequestUser, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Users = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeUserSCRAMCredentialsRequest returns a pointer to a default DescribeUserSCRAMCredentialsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeUserSCRAMCredentialsRequest() *DescribeUserSCRAMCredentialsRequest {
+ var v DescribeUserSCRAMCredentialsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeUserSCRAMCredentialsRequest.
+func (v *DescribeUserSCRAMCredentialsRequest) Default() {
+}
+
+// NewDescribeUserSCRAMCredentialsRequest returns a default DescribeUserSCRAMCredentialsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeUserSCRAMCredentialsRequest() DescribeUserSCRAMCredentialsRequest {
+ var v DescribeUserSCRAMCredentialsRequest
+ v.Default()
+ return v
+}
+
+type DescribeUserSCRAMCredentialsResponseResultCredentialInfo struct {
+ // The SCRAM mechanism for this user, where 0 is UNKNOWN, 1 is SCRAM-SHA-256,
+ // and 2 is SCRAM-SHA-512.
+ Mechanism int8
+
+ // The number of iterations used in the SCRAM credential.
+ Iterations int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeUserSCRAMCredentialsResponseResultCredentialInfo.
+func (v *DescribeUserSCRAMCredentialsResponseResultCredentialInfo) Default() {
+}
+
+// NewDescribeUserSCRAMCredentialsResponseResultCredentialInfo returns a default DescribeUserSCRAMCredentialsResponseResultCredentialInfo
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeUserSCRAMCredentialsResponseResultCredentialInfo() DescribeUserSCRAMCredentialsResponseResultCredentialInfo {
+ var v DescribeUserSCRAMCredentialsResponseResultCredentialInfo
+ v.Default()
+ return v
+}
+
+type DescribeUserSCRAMCredentialsResponseResult struct {
+ // The name this result corresponds to.
+ User string
+
+ // The user-level error code.
+ //
+ // RESOURCE_NOT_FOUND if the user does not exist or has no credentials.
+ //
+ // DUPLICATE_RESOURCE if the user is requested twice+.
+ ErrorCode int16
+
+ // The user-level error message, if any.
+ ErrorMessage *string
+
+ // Information about the SCRAM credentials for this user.
+ CredentialInfos []DescribeUserSCRAMCredentialsResponseResultCredentialInfo
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeUserSCRAMCredentialsResponseResult.
+func (v *DescribeUserSCRAMCredentialsResponseResult) Default() {
+}
+
+// NewDescribeUserSCRAMCredentialsResponseResult returns a default DescribeUserSCRAMCredentialsResponseResult
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeUserSCRAMCredentialsResponseResult() DescribeUserSCRAMCredentialsResponseResult {
+ var v DescribeUserSCRAMCredentialsResponseResult
+ v.Default()
+ return v
+}
+
+// DescribeUserSCRAMCredentialsResponse is a response for a
+// DescribeUserSCRAMCredentialsRequest.
+type DescribeUserSCRAMCredentialsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // The request-level error code. This is 0 except for auth or infra issues.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED if you do not have DESCRIBE on CLUSTER.
+ ErrorCode int16
+
+ // The request-level error message, if any.
+ ErrorMessage *string
+
+ // Results for descriptions, one per user.
+ Results []DescribeUserSCRAMCredentialsResponseResult
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeUserSCRAMCredentialsResponse) Key() int16 { return 50 }
+func (*DescribeUserSCRAMCredentialsResponse) MaxVersion() int16 { return 0 }
+func (v *DescribeUserSCRAMCredentialsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeUserSCRAMCredentialsResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeUserSCRAMCredentialsResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeUserSCRAMCredentialsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *DescribeUserSCRAMCredentialsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeUserSCRAMCredentialsResponse) RequestKind() Request {
+ return &DescribeUserSCRAMCredentialsRequest{Version: v.Version}
+}
+
+func (v *DescribeUserSCRAMCredentialsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Results
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.User
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.CredentialInfos
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Mechanism
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Iterations
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeUserSCRAMCredentialsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeUserSCRAMCredentialsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeUserSCRAMCredentialsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.Results
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeUserSCRAMCredentialsResponseResult, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.User = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.CredentialInfos
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeUserSCRAMCredentialsResponseResultCredentialInfo, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int8()
+ s.Mechanism = v
+ }
+ {
+ v := b.Int32()
+ s.Iterations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.CredentialInfos = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Results = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeUserSCRAMCredentialsResponse returns a pointer to a default DescribeUserSCRAMCredentialsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeUserSCRAMCredentialsResponse() *DescribeUserSCRAMCredentialsResponse {
+ var v DescribeUserSCRAMCredentialsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeUserSCRAMCredentialsResponse.
+func (v *DescribeUserSCRAMCredentialsResponse) Default() {
+}
+
+// NewDescribeUserSCRAMCredentialsResponse returns a default DescribeUserSCRAMCredentialsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeUserSCRAMCredentialsResponse() DescribeUserSCRAMCredentialsResponse {
+ var v DescribeUserSCRAMCredentialsResponse
+ v.Default()
+ return v
+}
+
+type AlterUserSCRAMCredentialsRequestDeletion struct {
+ // The user name to match for removal.
+ Name string
+
+ // The mechanism for the user name to remove.
+ Mechanism int8
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterUserSCRAMCredentialsRequestDeletion.
+func (v *AlterUserSCRAMCredentialsRequestDeletion) Default() {
+}
+
+// NewAlterUserSCRAMCredentialsRequestDeletion returns a default AlterUserSCRAMCredentialsRequestDeletion
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterUserSCRAMCredentialsRequestDeletion() AlterUserSCRAMCredentialsRequestDeletion {
+ var v AlterUserSCRAMCredentialsRequestDeletion
+ v.Default()
+ return v
+}
+
+type AlterUserSCRAMCredentialsRequestUpsertion struct {
+ // The user name to use.
+ Name string
+
+ // The mechanism to use for creating, where 1 is SCRAM-SHA-256 and 2 is
+ // SCRAM-SHA-512.
+ Mechanism int8
+
+ // The number of iterations to use. This must be more than the minimum for
+ // the mechanism and cannot be more than 16384.
+ Iterations int32
+
+ // A random salt generated by the client.
+ Salt []byte
+
+ // The salted password to use.
+ SaltedPassword []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterUserSCRAMCredentialsRequestUpsertion.
+func (v *AlterUserSCRAMCredentialsRequestUpsertion) Default() {
+}
+
+// NewAlterUserSCRAMCredentialsRequestUpsertion returns a default AlterUserSCRAMCredentialsRequestUpsertion
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterUserSCRAMCredentialsRequestUpsertion() AlterUserSCRAMCredentialsRequestUpsertion {
+ var v AlterUserSCRAMCredentialsRequestUpsertion
+ v.Default()
+ return v
+}
+
+// AlterUserSCRAMCredentialsRequest, proposed in KIP-554 and introduced
+// with Kafka 2.7.0, alters or deletes user SCRAM credentials.
+//
+// This request was introduced as part of the overarching KIP-500 initiative,
+// which is to remove Zookeeper as a dependency.
+//
+// This request requires ALTER on CLUSTER.
+type AlterUserSCRAMCredentialsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The SCRAM credentials to remove.
+ Deletions []AlterUserSCRAMCredentialsRequestDeletion
+
+ // The SCRAM credentials to update or insert.
+ Upsertions []AlterUserSCRAMCredentialsRequestUpsertion
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AlterUserSCRAMCredentialsRequest) Key() int16 { return 51 }
+func (*AlterUserSCRAMCredentialsRequest) MaxVersion() int16 { return 0 }
+func (v *AlterUserSCRAMCredentialsRequest) SetVersion(version int16) { v.Version = version }
+func (v *AlterUserSCRAMCredentialsRequest) GetVersion() int16 { return v.Version }
+func (v *AlterUserSCRAMCredentialsRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *AlterUserSCRAMCredentialsRequest) IsAdminRequest() {}
+func (v *AlterUserSCRAMCredentialsRequest) ResponseKind() Response {
+ r := &AlterUserSCRAMCredentialsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AlterUserSCRAMCredentialsRequest) RequestWith(ctx context.Context, r Requestor) (*AlterUserSCRAMCredentialsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AlterUserSCRAMCredentialsResponse)
+ return resp, err
+}
+
+func (v *AlterUserSCRAMCredentialsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Deletions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Mechanism
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Upsertions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Mechanism
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.Iterations
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Salt
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ {
+ v := v.SaltedPassword
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterUserSCRAMCredentialsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterUserSCRAMCredentialsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterUserSCRAMCredentialsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := s.Deletions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterUserSCRAMCredentialsRequestDeletion, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ v := b.Int8()
+ s.Mechanism = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Deletions = v
+ }
+ {
+ v := s.Upsertions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterUserSCRAMCredentialsRequestUpsertion, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ v := b.Int8()
+ s.Mechanism = v
+ }
+ {
+ v := b.Int32()
+ s.Iterations = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.Salt = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.SaltedPassword = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Upsertions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterUserSCRAMCredentialsRequest returns a pointer to a default AlterUserSCRAMCredentialsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterUserSCRAMCredentialsRequest() *AlterUserSCRAMCredentialsRequest {
+ var v AlterUserSCRAMCredentialsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterUserSCRAMCredentialsRequest.
+func (v *AlterUserSCRAMCredentialsRequest) Default() {
+}
+
+// NewAlterUserSCRAMCredentialsRequest returns a default AlterUserSCRAMCredentialsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterUserSCRAMCredentialsRequest() AlterUserSCRAMCredentialsRequest {
+ var v AlterUserSCRAMCredentialsRequest
+ v.Default()
+ return v
+}
+
+type AlterUserSCRAMCredentialsResponseResult struct {
+ // The name this result corresponds to.
+ User string
+
+ // The user-level error code.
+ ErrorCode int16
+
+ // The user-level error message, if any.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterUserSCRAMCredentialsResponseResult.
+func (v *AlterUserSCRAMCredentialsResponseResult) Default() {
+}
+
+// NewAlterUserSCRAMCredentialsResponseResult returns a default AlterUserSCRAMCredentialsResponseResult
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterUserSCRAMCredentialsResponseResult() AlterUserSCRAMCredentialsResponseResult {
+ var v AlterUserSCRAMCredentialsResponseResult
+ v.Default()
+ return v
+}
+
+// AlterUserSCRAMCredentialsResponse is a response for an
+// AlterUserSCRAMCredentialsRequest.
+type AlterUserSCRAMCredentialsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // The results for deletions and upsertions.
+ Results []AlterUserSCRAMCredentialsResponseResult
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AlterUserSCRAMCredentialsResponse) Key() int16 { return 51 }
+func (*AlterUserSCRAMCredentialsResponse) MaxVersion() int16 { return 0 }
+func (v *AlterUserSCRAMCredentialsResponse) SetVersion(version int16) { v.Version = version }
+func (v *AlterUserSCRAMCredentialsResponse) GetVersion() int16 { return v.Version }
+func (v *AlterUserSCRAMCredentialsResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *AlterUserSCRAMCredentialsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *AlterUserSCRAMCredentialsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *AlterUserSCRAMCredentialsResponse) RequestKind() Request {
+ return &AlterUserSCRAMCredentialsRequest{Version: v.Version}
+}
+
+func (v *AlterUserSCRAMCredentialsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Results
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.User
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterUserSCRAMCredentialsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterUserSCRAMCredentialsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterUserSCRAMCredentialsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Results
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterUserSCRAMCredentialsResponseResult, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.User = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Results = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterUserSCRAMCredentialsResponse returns a pointer to a default AlterUserSCRAMCredentialsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterUserSCRAMCredentialsResponse() *AlterUserSCRAMCredentialsResponse {
+ var v AlterUserSCRAMCredentialsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterUserSCRAMCredentialsResponse.
+func (v *AlterUserSCRAMCredentialsResponse) Default() {
+}
+
+// NewAlterUserSCRAMCredentialsResponse returns a default AlterUserSCRAMCredentialsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterUserSCRAMCredentialsResponse() AlterUserSCRAMCredentialsResponse {
+ var v AlterUserSCRAMCredentialsResponse
+ v.Default()
+ return v
+}
+
+type VoteRequestTopicPartition struct {
+ Partition int32
+
+ // The bumped epoch of the candidate sending the request.
+ CandidateEpoch int32
+
+ // The ID of the voter sending the request.
+ CandidateID int32
+
+ // The epoch of the last record written to the metadata log.
+ LastOffsetEpoch int32
+
+ // The offset of the last record written to the metadata log.
+ LastOffset int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to VoteRequestTopicPartition.
+func (v *VoteRequestTopicPartition) Default() {
+}
+
+// NewVoteRequestTopicPartition returns a default VoteRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewVoteRequestTopicPartition() VoteRequestTopicPartition {
+ var v VoteRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type VoteRequestTopic struct {
+ Topic string
+
+ Partitions []VoteRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to VoteRequestTopic.
+func (v *VoteRequestTopic) Default() {
+}
+
+// NewVoteRequestTopic returns a default VoteRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewVoteRequestTopic() VoteRequestTopic {
+ var v VoteRequestTopic
+ v.Default()
+ return v
+}
+
+// Part of KIP-595 to replace Kafka's dependence on Zookeeper with a
+// Kafka-only raft protocol,
+// VoteRequest is used by voters to hold a leader election.
+//
+// Since this is relatively Kafka internal, most fields are left undocumented.
+type VoteRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ClusterID *string
+
+ Topics []VoteRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*VoteRequest) Key() int16 { return 52 }
+func (*VoteRequest) MaxVersion() int16 { return 0 }
+func (v *VoteRequest) SetVersion(version int16) { v.Version = version }
+func (v *VoteRequest) GetVersion() int16 { return v.Version }
+func (v *VoteRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *VoteRequest) IsAdminRequest() {}
+func (v *VoteRequest) ResponseKind() Response {
+ r := &VoteResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *VoteRequest) RequestWith(ctx context.Context, r Requestor) (*VoteResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*VoteResponse)
+ return resp, err
+}
+
+func (v *VoteRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ClusterID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.CandidateEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.CandidateID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LastOffsetEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LastOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *VoteRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *VoteRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *VoteRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ClusterID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]VoteRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]VoteRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.CandidateEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.CandidateID = v
+ }
+ {
+ v := b.Int32()
+ s.LastOffsetEpoch = v
+ }
+ {
+ v := b.Int64()
+ s.LastOffset = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrVoteRequest returns a pointer to a default VoteRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrVoteRequest() *VoteRequest {
+ var v VoteRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to VoteRequest.
+func (v *VoteRequest) Default() {
+}
+
+// NewVoteRequest returns a default VoteRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewVoteRequest() VoteRequest {
+ var v VoteRequest
+ v.Default()
+ return v
+}
+
+type VoteResponseTopicPartition struct {
+ Partition int32
+
+ ErrorCode int16
+
+ // The ID of the current leader, or -1 if the leader is unknown.
+ LeaderID int32
+
+ // The latest known leader epoch.
+ LeaderEpoch int32
+
+ // Whether the vote was granted.
+ VoteGranted bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to VoteResponseTopicPartition.
+func (v *VoteResponseTopicPartition) Default() {
+}
+
+// NewVoteResponseTopicPartition returns a default VoteResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewVoteResponseTopicPartition() VoteResponseTopicPartition {
+ var v VoteResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type VoteResponseTopic struct {
+ Topic string
+
+ Partitions []VoteResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to VoteResponseTopic.
+func (v *VoteResponseTopic) Default() {
+}
+
+// NewVoteResponseTopic returns a default VoteResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewVoteResponseTopic() VoteResponseTopic {
+ var v VoteResponseTopic
+ v.Default()
+ return v
+}
+
+type VoteResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ErrorCode int16
+
+ Topics []VoteResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*VoteResponse) Key() int16 { return 52 }
+func (*VoteResponse) MaxVersion() int16 { return 0 }
+func (v *VoteResponse) SetVersion(version int16) { v.Version = version }
+func (v *VoteResponse) GetVersion() int16 { return v.Version }
+func (v *VoteResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *VoteResponse) RequestKind() Request { return &VoteRequest{Version: v.Version} }
+
+func (v *VoteResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.VoteGranted
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *VoteResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *VoteResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *VoteResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]VoteResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]VoteResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := b.Bool()
+ s.VoteGranted = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrVoteResponse returns a pointer to a default VoteResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrVoteResponse() *VoteResponse {
+ var v VoteResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to VoteResponse.
+func (v *VoteResponse) Default() {
+}
+
+// NewVoteResponse returns a default VoteResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewVoteResponse() VoteResponse {
+ var v VoteResponse
+ v.Default()
+ return v
+}
+
+type BeginQuorumEpochRequestTopicPartition struct {
+ Partition int32
+
+ // The ID of the newly elected leader.
+ LeaderID int32
+
+ // The epoch of the newly elected leader.
+ LeaderEpoch int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BeginQuorumEpochRequestTopicPartition.
+func (v *BeginQuorumEpochRequestTopicPartition) Default() {
+}
+
+// NewBeginQuorumEpochRequestTopicPartition returns a default BeginQuorumEpochRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBeginQuorumEpochRequestTopicPartition() BeginQuorumEpochRequestTopicPartition {
+ var v BeginQuorumEpochRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type BeginQuorumEpochRequestTopic struct {
+ Topic string
+
+ Partitions []BeginQuorumEpochRequestTopicPartition
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BeginQuorumEpochRequestTopic.
+func (v *BeginQuorumEpochRequestTopic) Default() {
+}
+
+// NewBeginQuorumEpochRequestTopic returns a default BeginQuorumEpochRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBeginQuorumEpochRequestTopic() BeginQuorumEpochRequestTopic {
+ var v BeginQuorumEpochRequestTopic
+ v.Default()
+ return v
+}
+
+// Part of KIP-595 to replace Kafka's dependence on Zookeeper with a
+// Kafka-only raft protocol,
+// BeginQuorumEpochRequest is sent by a leader (once it has enough votes)
+// to all voters in the election.
+//
+// Since this is relatively Kafka internal, most fields are left undocumented.
+type BeginQuorumEpochRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ClusterID *string
+
+ Topics []BeginQuorumEpochRequestTopic
+}
+
+func (*BeginQuorumEpochRequest) Key() int16 { return 53 }
+func (*BeginQuorumEpochRequest) MaxVersion() int16 { return 0 }
+func (v *BeginQuorumEpochRequest) SetVersion(version int16) { v.Version = version }
+func (v *BeginQuorumEpochRequest) GetVersion() int16 { return v.Version }
+func (v *BeginQuorumEpochRequest) IsFlexible() bool { return false }
+func (v *BeginQuorumEpochRequest) IsAdminRequest() {}
+func (v *BeginQuorumEpochRequest) ResponseKind() Response {
+ r := &BeginQuorumEpochResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *BeginQuorumEpochRequest) RequestWith(ctx context.Context, r Requestor) (*BeginQuorumEpochResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*BeginQuorumEpochResponse)
+ return resp, err
+}
+
+func (v *BeginQuorumEpochRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.ClusterID
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ }
+ return dst
+}
+
+func (v *BeginQuorumEpochRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *BeginQuorumEpochRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *BeginQuorumEpochRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v *string
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ s.ClusterID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]BeginQuorumEpochRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]BeginQuorumEpochRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ return b.Complete()
+}
+
+// NewPtrBeginQuorumEpochRequest returns a pointer to a default BeginQuorumEpochRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrBeginQuorumEpochRequest() *BeginQuorumEpochRequest {
+ var v BeginQuorumEpochRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BeginQuorumEpochRequest.
+func (v *BeginQuorumEpochRequest) Default() {
+}
+
+// NewBeginQuorumEpochRequest returns a default BeginQuorumEpochRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBeginQuorumEpochRequest() BeginQuorumEpochRequest {
+ var v BeginQuorumEpochRequest
+ v.Default()
+ return v
+}
+
+type BeginQuorumEpochResponseTopicPartition struct {
+ Partition int32
+
+ ErrorCode int16
+
+ // The ID of the current leader, or -1 if the leader is unknown.
+ LeaderID int32
+
+ // The latest known leader epoch.
+ LeaderEpoch int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BeginQuorumEpochResponseTopicPartition.
+func (v *BeginQuorumEpochResponseTopicPartition) Default() {
+}
+
+// NewBeginQuorumEpochResponseTopicPartition returns a default BeginQuorumEpochResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBeginQuorumEpochResponseTopicPartition() BeginQuorumEpochResponseTopicPartition {
+ var v BeginQuorumEpochResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type BeginQuorumEpochResponseTopic struct {
+ Topic string
+
+ Partitions []BeginQuorumEpochResponseTopicPartition
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BeginQuorumEpochResponseTopic.
+func (v *BeginQuorumEpochResponseTopic) Default() {
+}
+
+// NewBeginQuorumEpochResponseTopic returns a default BeginQuorumEpochResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBeginQuorumEpochResponseTopic() BeginQuorumEpochResponseTopic {
+ var v BeginQuorumEpochResponseTopic
+ v.Default()
+ return v
+}
+
+type BeginQuorumEpochResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ErrorCode int16
+
+ Topics []BeginQuorumEpochResponseTopic
+}
+
+func (*BeginQuorumEpochResponse) Key() int16 { return 53 }
+func (*BeginQuorumEpochResponse) MaxVersion() int16 { return 0 }
+func (v *BeginQuorumEpochResponse) SetVersion(version int16) { v.Version = version }
+func (v *BeginQuorumEpochResponse) GetVersion() int16 { return v.Version }
+func (v *BeginQuorumEpochResponse) IsFlexible() bool { return false }
+func (v *BeginQuorumEpochResponse) RequestKind() Request {
+ return &BeginQuorumEpochRequest{Version: v.Version}
+}
+
+func (v *BeginQuorumEpochResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ }
+ return dst
+}
+
+func (v *BeginQuorumEpochResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *BeginQuorumEpochResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *BeginQuorumEpochResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]BeginQuorumEpochResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]BeginQuorumEpochResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ return b.Complete()
+}
+
+// NewPtrBeginQuorumEpochResponse returns a pointer to a default BeginQuorumEpochResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrBeginQuorumEpochResponse() *BeginQuorumEpochResponse {
+ var v BeginQuorumEpochResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BeginQuorumEpochResponse.
+func (v *BeginQuorumEpochResponse) Default() {
+}
+
+// NewBeginQuorumEpochResponse returns a default BeginQuorumEpochResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBeginQuorumEpochResponse() BeginQuorumEpochResponse {
+ var v BeginQuorumEpochResponse
+ v.Default()
+ return v
+}
+
+type EndQuorumEpochRequestTopicPartition struct {
+ Partition int32
+
+ // The current leader ID that is resigning.
+ LeaderID int32
+
+ // The current epoch.
+ LeaderEpoch int32
+
+ // A sorted list of preferred successors to start the election.
+ PreferredSuccessors []int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndQuorumEpochRequestTopicPartition.
+func (v *EndQuorumEpochRequestTopicPartition) Default() {
+}
+
+// NewEndQuorumEpochRequestTopicPartition returns a default EndQuorumEpochRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndQuorumEpochRequestTopicPartition() EndQuorumEpochRequestTopicPartition {
+ var v EndQuorumEpochRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type EndQuorumEpochRequestTopic struct {
+ Topic string
+
+ Partitions []EndQuorumEpochRequestTopicPartition
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndQuorumEpochRequestTopic.
+func (v *EndQuorumEpochRequestTopic) Default() {
+}
+
+// NewEndQuorumEpochRequestTopic returns a default EndQuorumEpochRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndQuorumEpochRequestTopic() EndQuorumEpochRequestTopic {
+ var v EndQuorumEpochRequestTopic
+ v.Default()
+ return v
+}
+
+// Part of KIP-595 to replace Kafka's dependence on Zookeeper with a
+// Kafka-only raft protocol,
+// EndQuorumEpochRequest is sent by a leader to gracefully step down as leader
+// (i.e. on shutdown). Stepping down begins a new election.
+//
+// Since this is relatively Kafka internal, most fields are left undocumented.
+type EndQuorumEpochRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ClusterID *string
+
+ Topics []EndQuorumEpochRequestTopic
+}
+
+func (*EndQuorumEpochRequest) Key() int16 { return 54 }
+func (*EndQuorumEpochRequest) MaxVersion() int16 { return 0 }
+func (v *EndQuorumEpochRequest) SetVersion(version int16) { v.Version = version }
+func (v *EndQuorumEpochRequest) GetVersion() int16 { return v.Version }
+func (v *EndQuorumEpochRequest) IsFlexible() bool { return false }
+func (v *EndQuorumEpochRequest) IsAdminRequest() {}
+func (v *EndQuorumEpochRequest) ResponseKind() Response {
+ r := &EndQuorumEpochResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *EndQuorumEpochRequest) RequestWith(ctx context.Context, r Requestor) (*EndQuorumEpochResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*EndQuorumEpochResponse)
+ return resp, err
+}
+
+func (v *EndQuorumEpochRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.ClusterID
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.PreferredSuccessors
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ }
+ }
+ return dst
+}
+
+func (v *EndQuorumEpochRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *EndQuorumEpochRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *EndQuorumEpochRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ var v *string
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ s.ClusterID = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]EndQuorumEpochRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]EndQuorumEpochRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := s.PreferredSuccessors
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.PreferredSuccessors = v
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ return b.Complete()
+}
+
+// NewPtrEndQuorumEpochRequest returns a pointer to a default EndQuorumEpochRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrEndQuorumEpochRequest() *EndQuorumEpochRequest {
+ var v EndQuorumEpochRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndQuorumEpochRequest.
+func (v *EndQuorumEpochRequest) Default() {
+}
+
+// NewEndQuorumEpochRequest returns a default EndQuorumEpochRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndQuorumEpochRequest() EndQuorumEpochRequest {
+ var v EndQuorumEpochRequest
+ v.Default()
+ return v
+}
+
+type EndQuorumEpochResponseTopicPartition struct {
+ Partition int32
+
+ ErrorCode int16
+
+ // The ID of the current leader, or -1 if the leader is unknown.
+ LeaderID int32
+
+ // The latest known leader epoch.
+ LeaderEpoch int32
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndQuorumEpochResponseTopicPartition.
+func (v *EndQuorumEpochResponseTopicPartition) Default() {
+}
+
+// NewEndQuorumEpochResponseTopicPartition returns a default EndQuorumEpochResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndQuorumEpochResponseTopicPartition() EndQuorumEpochResponseTopicPartition {
+ var v EndQuorumEpochResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type EndQuorumEpochResponseTopic struct {
+ Topic string
+
+ Partitions []EndQuorumEpochResponseTopicPartition
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndQuorumEpochResponseTopic.
+func (v *EndQuorumEpochResponseTopic) Default() {
+}
+
+// NewEndQuorumEpochResponseTopic returns a default EndQuorumEpochResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndQuorumEpochResponseTopic() EndQuorumEpochResponseTopic {
+ var v EndQuorumEpochResponseTopic
+ v.Default()
+ return v
+}
+
+type EndQuorumEpochResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ErrorCode int16
+
+ Topics []EndQuorumEpochResponseTopic
+}
+
+func (*EndQuorumEpochResponse) Key() int16 { return 54 }
+func (*EndQuorumEpochResponse) MaxVersion() int16 { return 0 }
+func (v *EndQuorumEpochResponse) SetVersion(version int16) { v.Version = version }
+func (v *EndQuorumEpochResponse) GetVersion() int16 { return v.Version }
+func (v *EndQuorumEpochResponse) IsFlexible() bool { return false }
+func (v *EndQuorumEpochResponse) RequestKind() Request {
+ return &EndQuorumEpochRequest{Version: v.Version}
+}
+
+func (v *EndQuorumEpochResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ dst = kbin.AppendString(dst, v)
+ }
+ {
+ v := v.Partitions
+ dst = kbin.AppendArrayLen(dst, len(v))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ }
+ }
+ }
+ return dst
+}
+
+func (v *EndQuorumEpochResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *EndQuorumEpochResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *EndQuorumEpochResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]EndQuorumEpochResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeString()
+ } else {
+ v = b.String()
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ l = b.ArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]EndQuorumEpochResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ return b.Complete()
+}
+
+// NewPtrEndQuorumEpochResponse returns a pointer to a default EndQuorumEpochResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrEndQuorumEpochResponse() *EndQuorumEpochResponse {
+ var v EndQuorumEpochResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EndQuorumEpochResponse.
+func (v *EndQuorumEpochResponse) Default() {
+}
+
+// NewEndQuorumEpochResponse returns a default EndQuorumEpochResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEndQuorumEpochResponse() EndQuorumEpochResponse {
+ var v EndQuorumEpochResponse
+ v.Default()
+ return v
+}
+
+// A common struct used in DescribeQuorumResponse.
+type DescribeQuorumResponseTopicPartitionReplicaState struct {
+ ReplicaID int32
+
+ // The last known log end offset of the follower, or -1 if it is unknown.
+ LogEndOffset int64
+
+ // The last known leader wall clock time when a follower fetched from the
+ // leader, or -1 for the current leader or if unknown for a voter.
+ //
+ // This field has a default of -1.
+ LastFetchTimestamp int64 // v1+
+
+ // The leader wall clock append time of the offset for which the follower
+ // made the most recent fetch request, or -1 for the current leader or if
+ // unknown for a voter.
+ //
+ // This field has a default of -1.
+ LastCaughtUpTimestamp int64 // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeQuorumResponseTopicPartitionReplicaState.
+func (v *DescribeQuorumResponseTopicPartitionReplicaState) Default() {
+ v.LastFetchTimestamp = -1
+ v.LastCaughtUpTimestamp = -1
+}
+
+// NewDescribeQuorumResponseTopicPartitionReplicaState returns a default DescribeQuorumResponseTopicPartitionReplicaState
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeQuorumResponseTopicPartitionReplicaState() DescribeQuorumResponseTopicPartitionReplicaState {
+ var v DescribeQuorumResponseTopicPartitionReplicaState
+ v.Default()
+ return v
+}
+
+type DescribeQuorumRequestTopicPartition struct {
+ Partition int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeQuorumRequestTopicPartition.
+func (v *DescribeQuorumRequestTopicPartition) Default() {
+}
+
+// NewDescribeQuorumRequestTopicPartition returns a default DescribeQuorumRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeQuorumRequestTopicPartition() DescribeQuorumRequestTopicPartition {
+ var v DescribeQuorumRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type DescribeQuorumRequestTopic struct {
+ Topic string
+
+ Partitions []DescribeQuorumRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeQuorumRequestTopic.
+func (v *DescribeQuorumRequestTopic) Default() {
+}
+
+// NewDescribeQuorumRequestTopic returns a default DescribeQuorumRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeQuorumRequestTopic() DescribeQuorumRequestTopic {
+ var v DescribeQuorumRequestTopic
+ v.Default()
+ return v
+}
+
+// Part of KIP-642 (and KIP-595) to replace Kafka's dependence on Zookeeper with a
+// Kafka-only raft protocol,
+// DescribeQuorumRequest is sent by a leader to describe the quorum.
+type DescribeQuorumRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ Topics []DescribeQuorumRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeQuorumRequest) Key() int16 { return 55 }
+func (*DescribeQuorumRequest) MaxVersion() int16 { return 1 }
+func (v *DescribeQuorumRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeQuorumRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeQuorumRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeQuorumRequest) IsAdminRequest() {}
+func (v *DescribeQuorumRequest) ResponseKind() Response {
+ r := &DescribeQuorumResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeQuorumRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeQuorumResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeQuorumResponse)
+ return resp, err
+}
+
+func (v *DescribeQuorumRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeQuorumRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeQuorumRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeQuorumRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeQuorumRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeQuorumRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeQuorumRequest returns a pointer to a default DescribeQuorumRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeQuorumRequest() *DescribeQuorumRequest {
+ var v DescribeQuorumRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeQuorumRequest.
+func (v *DescribeQuorumRequest) Default() {
+}
+
+// NewDescribeQuorumRequest returns a default DescribeQuorumRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeQuorumRequest() DescribeQuorumRequest {
+ var v DescribeQuorumRequest
+ v.Default()
+ return v
+}
+
+type DescribeQuorumResponseTopicPartition struct {
+ Partition int32
+
+ ErrorCode int16
+
+ // The ID of the current leader, or -1 if the leader is unknown.
+ LeaderID int32
+
+ // The latest known leader epoch.
+ LeaderEpoch int32
+
+ HighWatermark int64
+
+ CurrentVoters []DescribeQuorumResponseTopicPartitionReplicaState
+
+ Observers []DescribeQuorumResponseTopicPartitionReplicaState
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeQuorumResponseTopicPartition.
+func (v *DescribeQuorumResponseTopicPartition) Default() {
+}
+
+// NewDescribeQuorumResponseTopicPartition returns a default DescribeQuorumResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeQuorumResponseTopicPartition() DescribeQuorumResponseTopicPartition {
+ var v DescribeQuorumResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type DescribeQuorumResponseTopic struct {
+ Topic string
+
+ Partitions []DescribeQuorumResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeQuorumResponseTopic.
+func (v *DescribeQuorumResponseTopic) Default() {
+}
+
+// NewDescribeQuorumResponseTopic returns a default DescribeQuorumResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeQuorumResponseTopic() DescribeQuorumResponseTopic {
+ var v DescribeQuorumResponseTopic
+ v.Default()
+ return v
+}
+
+type DescribeQuorumResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ ErrorCode int16
+
+ Topics []DescribeQuorumResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeQuorumResponse) Key() int16 { return 55 }
+func (*DescribeQuorumResponse) MaxVersion() int16 { return 1 }
+func (v *DescribeQuorumResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeQuorumResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeQuorumResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeQuorumResponse) RequestKind() Request {
+ return &DescribeQuorumRequest{Version: v.Version}
+}
+
+func (v *DescribeQuorumResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.HighWatermark
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.CurrentVoters
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ReplicaID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LogEndOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 {
+ v := v.LastFetchTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 {
+ v := v.LastCaughtUpTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Observers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ReplicaID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LogEndOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 {
+ v := v.LastFetchTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if version >= 1 {
+ v := v.LastCaughtUpTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeQuorumResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeQuorumResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeQuorumResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeQuorumResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeQuorumResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := b.Int64()
+ s.HighWatermark = v
+ }
+ {
+ v := s.CurrentVoters
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeQuorumResponseTopicPartitionReplicaState, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.ReplicaID = v
+ }
+ {
+ v := b.Int64()
+ s.LogEndOffset = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.LastFetchTimestamp = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.LastCaughtUpTimestamp = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.CurrentVoters = v
+ }
+ {
+ v := s.Observers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeQuorumResponseTopicPartitionReplicaState, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.ReplicaID = v
+ }
+ {
+ v := b.Int64()
+ s.LogEndOffset = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.LastFetchTimestamp = v
+ }
+ if version >= 1 {
+ v := b.Int64()
+ s.LastCaughtUpTimestamp = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Observers = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeQuorumResponse returns a pointer to a default DescribeQuorumResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeQuorumResponse() *DescribeQuorumResponse {
+ var v DescribeQuorumResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeQuorumResponse.
+func (v *DescribeQuorumResponse) Default() {
+}
+
+// NewDescribeQuorumResponse returns a default DescribeQuorumResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeQuorumResponse() DescribeQuorumResponse {
+ var v DescribeQuorumResponse
+ v.Default()
+ return v
+}
+
+type AlterPartitionRequestTopicPartitionNewEpochISR struct {
+ // The broker ID .
+ BrokerID int32
+
+ // The broker's epoch; -1 if the epoch check is not supported.
+ //
+ // This field has a default of -1.
+ BrokerEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionRequestTopicPartitionNewEpochISR.
+func (v *AlterPartitionRequestTopicPartitionNewEpochISR) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewAlterPartitionRequestTopicPartitionNewEpochISR returns a default AlterPartitionRequestTopicPartitionNewEpochISR
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionRequestTopicPartitionNewEpochISR() AlterPartitionRequestTopicPartitionNewEpochISR {
+ var v AlterPartitionRequestTopicPartitionNewEpochISR
+ v.Default()
+ return v
+}
+
+type AlterPartitionRequestTopicPartition struct {
+ Partition int32
+
+ // The leader epoch of this partition.
+ LeaderEpoch int32
+
+ // The ISR for this partition.
+ NewISR []int32 // v0-v2
+
+ NewEpochISR []AlterPartitionRequestTopicPartitionNewEpochISR // v3+
+
+ // 1 if the partition is recovering from unclean leader election; 0 otherwise
+ LeaderRecoveryState int8 // v1+
+
+ // The expected epoch of the partition which is being updated.
+ // For a legacy cluster, this is the ZkVersion in the LeaderAndISR request.
+ PartitionEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionRequestTopicPartition.
+func (v *AlterPartitionRequestTopicPartition) Default() {
+}
+
+// NewAlterPartitionRequestTopicPartition returns a default AlterPartitionRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionRequestTopicPartition() AlterPartitionRequestTopicPartition {
+ var v AlterPartitionRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type AlterPartitionRequestTopic struct {
+ Topic string // v0-v1
+
+ TopicID [16]byte // v2+
+
+ Partitions []AlterPartitionRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionRequestTopic.
+func (v *AlterPartitionRequestTopic) Default() {
+}
+
+// NewAlterPartitionRequestTopic returns a default AlterPartitionRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionRequestTopic() AlterPartitionRequestTopic {
+ var v AlterPartitionRequestTopic
+ v.Default()
+ return v
+}
+
+// AlterPartitionRequest, proposed in KIP-497 and introduced in Kafka 2.7.0,
+// is an admin request to modify ISR.
+//
+// Version 3 was added for KIP-903 and replaced NewISR.
+type AlterPartitionRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The ID of the requesting broker.
+ BrokerID int32
+
+ // The epoch of the requesting broker.
+ //
+ // This field has a default of -1.
+ BrokerEpoch int64
+
+ Topics []AlterPartitionRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AlterPartitionRequest) Key() int16 { return 56 }
+func (*AlterPartitionRequest) MaxVersion() int16 { return 3 }
+func (v *AlterPartitionRequest) SetVersion(version int16) { v.Version = version }
+func (v *AlterPartitionRequest) GetVersion() int16 { return v.Version }
+func (v *AlterPartitionRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *AlterPartitionRequest) IsAdminRequest() {}
+func (v *AlterPartitionRequest) ResponseKind() Response {
+ r := &AlterPartitionResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AlterPartitionRequest) RequestWith(ctx context.Context, r Requestor) (*AlterPartitionResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AlterPartitionResponse)
+ return resp, err
+}
+
+func (v *AlterPartitionRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 1 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 2 {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if version >= 0 && version <= 2 {
+ v := v.NewISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 3 {
+ v := v.NewEpochISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 1 {
+ v := v.LeaderRecoveryState
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.PartitionEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterPartitionRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterPartitionRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterPartitionRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 1 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 2 {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ if version >= 0 && version <= 2 {
+ v := s.NewISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.NewISR = v
+ }
+ if version >= 3 {
+ v := s.NewEpochISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionRequestTopicPartitionNewEpochISR, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ {
+ v := b.Int32()
+ s.BrokerEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.NewEpochISR = v
+ }
+ if version >= 1 {
+ v := b.Int8()
+ s.LeaderRecoveryState = v
+ }
+ {
+ v := b.Int32()
+ s.PartitionEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterPartitionRequest returns a pointer to a default AlterPartitionRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterPartitionRequest() *AlterPartitionRequest {
+ var v AlterPartitionRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionRequest.
+func (v *AlterPartitionRequest) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewAlterPartitionRequest returns a default AlterPartitionRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionRequest() AlterPartitionRequest {
+ var v AlterPartitionRequest
+ v.Default()
+ return v
+}
+
+type AlterPartitionResponseTopicPartition struct {
+ Partition int32
+
+ ErrorCode int16
+
+ // The broker ID of the leader.
+ LeaderID int32
+
+ // The leader epoch of this partition.
+ LeaderEpoch int32
+
+ // The in-sync replica ids.
+ ISR []int32
+
+ // 1 if the partition is recovering from unclean leader election; 0 otherwise
+ LeaderRecoveryState int8 // v1+
+
+ // The current epoch of the partition for KRaft controllers.
+ // The current ZK version for legacy controllers.
+ PartitionEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionResponseTopicPartition.
+func (v *AlterPartitionResponseTopicPartition) Default() {
+}
+
+// NewAlterPartitionResponseTopicPartition returns a default AlterPartitionResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionResponseTopicPartition() AlterPartitionResponseTopicPartition {
+ var v AlterPartitionResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type AlterPartitionResponseTopic struct {
+ Topic string // v0-v1
+
+ TopidID [16]byte // v2+
+
+ Partitions []AlterPartitionResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionResponseTopic.
+func (v *AlterPartitionResponseTopic) Default() {
+}
+
+// NewAlterPartitionResponseTopic returns a default AlterPartitionResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionResponseTopic() AlterPartitionResponseTopic {
+ var v AlterPartitionResponseTopic
+ v.Default()
+ return v
+}
+
+type AlterPartitionResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ ErrorCode int16
+
+ Topics []AlterPartitionResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AlterPartitionResponse) Key() int16 { return 56 }
+func (*AlterPartitionResponse) MaxVersion() int16 { return 3 }
+func (v *AlterPartitionResponse) SetVersion(version int16) { v.Version = version }
+func (v *AlterPartitionResponse) GetVersion() int16 { return v.Version }
+func (v *AlterPartitionResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *AlterPartitionResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *AlterPartitionResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *AlterPartitionResponse) RequestKind() Request {
+ return &AlterPartitionRequest{Version: v.Version}
+}
+
+func (v *AlterPartitionResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ if version >= 0 && version <= 1 {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if version >= 2 {
+ v := v.TopidID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ISR
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.LeaderRecoveryState
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.PartitionEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AlterPartitionResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AlterPartitionResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AlterPartitionResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ if version >= 0 && version <= 1 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ if version >= 2 {
+ v := b.Uuid()
+ s.TopidID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AlterPartitionResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ {
+ v := s.ISR
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.ISR = v
+ }
+ if version >= 1 {
+ v := b.Int8()
+ s.LeaderRecoveryState = v
+ }
+ {
+ v := b.Int32()
+ s.PartitionEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAlterPartitionResponse returns a pointer to a default AlterPartitionResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAlterPartitionResponse() *AlterPartitionResponse {
+ var v AlterPartitionResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AlterPartitionResponse.
+func (v *AlterPartitionResponse) Default() {
+}
+
+// NewAlterPartitionResponse returns a default AlterPartitionResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAlterPartitionResponse() AlterPartitionResponse {
+ var v AlterPartitionResponse
+ v.Default()
+ return v
+}
+
+type UpdateFeaturesRequestFeatureUpdate struct {
+ // The name of the finalized feature to update.
+ Feature string
+
+ // The new maximum version level for the finalized feature. A value >= 1 is
+ // valid. A value < 1, is special, and can be used to request the deletion
+ // of the finalized feature.
+ MaxVersionLevel int16
+
+ // When set to true, the finalized feature version level is allowed to be
+ // downgraded/deleted. The downgrade request will fail if the new maximum
+ // version level is a value that's not lower than the existing maximum
+ // finalized version level.
+ //
+ // Replaced in v1 with ValidateOnly.
+ AllowDowngrade bool
+
+ // Determine which type of upgrade will be performed: 1 will perform an
+ // upgrade only (default), 2 is safe downgrades only (lossless), 3 is
+ // unsafe downgrades (lossy).
+ UpgradeType int8 // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateFeaturesRequestFeatureUpdate.
+func (v *UpdateFeaturesRequestFeatureUpdate) Default() {
+}
+
+// NewUpdateFeaturesRequestFeatureUpdate returns a default UpdateFeaturesRequestFeatureUpdate
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateFeaturesRequestFeatureUpdate() UpdateFeaturesRequestFeatureUpdate {
+ var v UpdateFeaturesRequestFeatureUpdate
+ v.Default()
+ return v
+}
+
+// From KIP-584 and introduced in 2.7.0, this request updates broker-wide features.
+type UpdateFeaturesRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // TimeoutMillis is how long Kafka can wait before responding to this request.
+ // This field has no effect on Kafka's processing of the request; the request
+ // will continue to be processed if the timeout is reached. If the timeout is
+ // reached, Kafka will reply with a REQUEST_TIMED_OUT error.
+ //
+ // This field has a default of 60000.
+ TimeoutMillis int32
+
+ // The list of updates to finalized features.
+ FeatureUpdates []UpdateFeaturesRequestFeatureUpdate
+
+ // True if we should validate the request, but not perform the upgrade or
+ // downgrade.
+ ValidateOnly bool // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*UpdateFeaturesRequest) Key() int16 { return 57 }
+func (*UpdateFeaturesRequest) MaxVersion() int16 { return 1 }
+func (v *UpdateFeaturesRequest) SetVersion(version int16) { v.Version = version }
+func (v *UpdateFeaturesRequest) GetVersion() int16 { return v.Version }
+func (v *UpdateFeaturesRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *UpdateFeaturesRequest) Timeout() int32 { return v.TimeoutMillis }
+func (v *UpdateFeaturesRequest) SetTimeout(timeoutMillis int32) { v.TimeoutMillis = timeoutMillis }
+func (v *UpdateFeaturesRequest) IsAdminRequest() {}
+func (v *UpdateFeaturesRequest) ResponseKind() Response {
+ r := &UpdateFeaturesResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *UpdateFeaturesRequest) RequestWith(ctx context.Context, r Requestor) (*UpdateFeaturesResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*UpdateFeaturesResponse)
+ return resp, err
+}
+
+func (v *UpdateFeaturesRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.FeatureUpdates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Feature
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MaxVersionLevel
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if version >= 0 && version <= 0 {
+ v := v.AllowDowngrade
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 1 {
+ v := v.UpgradeType
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if version >= 1 {
+ v := v.ValidateOnly
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *UpdateFeaturesRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *UpdateFeaturesRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *UpdateFeaturesRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ {
+ v := s.FeatureUpdates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]UpdateFeaturesRequestFeatureUpdate, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Feature = v
+ }
+ {
+ v := b.Int16()
+ s.MaxVersionLevel = v
+ }
+ if version >= 0 && version <= 0 {
+ v := b.Bool()
+ s.AllowDowngrade = v
+ }
+ if version >= 1 {
+ v := b.Int8()
+ s.UpgradeType = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.FeatureUpdates = v
+ }
+ if version >= 1 {
+ v := b.Bool()
+ s.ValidateOnly = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrUpdateFeaturesRequest returns a pointer to a default UpdateFeaturesRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrUpdateFeaturesRequest() *UpdateFeaturesRequest {
+ var v UpdateFeaturesRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateFeaturesRequest.
+func (v *UpdateFeaturesRequest) Default() {
+ v.TimeoutMillis = 60000
+}
+
+// NewUpdateFeaturesRequest returns a default UpdateFeaturesRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateFeaturesRequest() UpdateFeaturesRequest {
+ var v UpdateFeaturesRequest
+ v.Default()
+ return v
+}
+
+type UpdateFeaturesResponseResult struct {
+ // The name of the finalized feature.
+ Feature string
+
+ // The feature update error code, if any.
+ ErrorCode int16
+
+ // The feature update error, if any.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateFeaturesResponseResult.
+func (v *UpdateFeaturesResponseResult) Default() {
+}
+
+// NewUpdateFeaturesResponseResult returns a default UpdateFeaturesResponseResult
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateFeaturesResponseResult() UpdateFeaturesResponseResult {
+ var v UpdateFeaturesResponseResult
+ v.Default()
+ return v
+}
+
+type UpdateFeaturesResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // The top level error code, if any.
+ ErrorCode int16
+
+ // An informative message if the request errored, if any.
+ ErrorMessage *string
+
+ // The results for each feature update request.
+ Results []UpdateFeaturesResponseResult
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*UpdateFeaturesResponse) Key() int16 { return 57 }
+func (*UpdateFeaturesResponse) MaxVersion() int16 { return 1 }
+func (v *UpdateFeaturesResponse) SetVersion(version int16) { v.Version = version }
+func (v *UpdateFeaturesResponse) GetVersion() int16 { return v.Version }
+func (v *UpdateFeaturesResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *UpdateFeaturesResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *UpdateFeaturesResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *UpdateFeaturesResponse) RequestKind() Request {
+ return &UpdateFeaturesRequest{Version: v.Version}
+}
+
+func (v *UpdateFeaturesResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Results
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Feature
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *UpdateFeaturesResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *UpdateFeaturesResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *UpdateFeaturesResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.Results
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]UpdateFeaturesResponseResult, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Feature = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Results = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrUpdateFeaturesResponse returns a pointer to a default UpdateFeaturesResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrUpdateFeaturesResponse() *UpdateFeaturesResponse {
+ var v UpdateFeaturesResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UpdateFeaturesResponse.
+func (v *UpdateFeaturesResponse) Default() {
+}
+
+// NewUpdateFeaturesResponse returns a default UpdateFeaturesResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUpdateFeaturesResponse() UpdateFeaturesResponse {
+ var v UpdateFeaturesResponse
+ v.Default()
+ return v
+}
+
+// Introduced for KIP-590, EnvelopeRequest is what brokers use to wrap an
+// incoming request before forwarding it to another broker.
+type EnvelopeRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The embedded request header and data.
+ RequestData []byte
+
+ // Value of the initial client principal when the request is redirected by a broker.
+ RequestPrincipal []byte
+
+ // The original client's address in bytes.
+ ClientHostAddress []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*EnvelopeRequest) Key() int16 { return 58 }
+func (*EnvelopeRequest) MaxVersion() int16 { return 0 }
+func (v *EnvelopeRequest) SetVersion(version int16) { v.Version = version }
+func (v *EnvelopeRequest) GetVersion() int16 { return v.Version }
+func (v *EnvelopeRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *EnvelopeRequest) IsAdminRequest() {}
+func (v *EnvelopeRequest) ResponseKind() Response {
+ r := &EnvelopeResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *EnvelopeRequest) RequestWith(ctx context.Context, r Requestor) (*EnvelopeResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*EnvelopeResponse)
+ return resp, err
+}
+
+func (v *EnvelopeRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.RequestData
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ {
+ v := v.RequestPrincipal
+ if isFlexible {
+ dst = kbin.AppendCompactNullableBytes(dst, v)
+ } else {
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ }
+ {
+ v := v.ClientHostAddress
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *EnvelopeRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *EnvelopeRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *EnvelopeRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.RequestData = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactNullableBytes()
+ } else {
+ v = b.NullableBytes()
+ }
+ s.RequestPrincipal = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.ClientHostAddress = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrEnvelopeRequest returns a pointer to a default EnvelopeRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrEnvelopeRequest() *EnvelopeRequest {
+ var v EnvelopeRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EnvelopeRequest.
+func (v *EnvelopeRequest) Default() {
+}
+
+// NewEnvelopeRequest returns a default EnvelopeRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEnvelopeRequest() EnvelopeRequest {
+ var v EnvelopeRequest
+ v.Default()
+ return v
+}
+
+type EnvelopeResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The embedded response header and data.
+ ResponseData []byte
+
+ // The error code, or 0 if there was no error.
+ //
+ // NOT_CONTROLLER is returned when the request is not sent to the controller.
+ //
+ // CLUSTER_AUTHORIZATION_FAILED is returned if inter-broker authorization failed.
+ ErrorCode int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*EnvelopeResponse) Key() int16 { return 58 }
+func (*EnvelopeResponse) MaxVersion() int16 { return 0 }
+func (v *EnvelopeResponse) SetVersion(version int16) { v.Version = version }
+func (v *EnvelopeResponse) GetVersion() int16 { return v.Version }
+func (v *EnvelopeResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *EnvelopeResponse) RequestKind() Request { return &EnvelopeRequest{Version: v.Version} }
+
+func (v *EnvelopeResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ResponseData
+ if isFlexible {
+ dst = kbin.AppendCompactNullableBytes(dst, v)
+ } else {
+ dst = kbin.AppendNullableBytes(dst, v)
+ }
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *EnvelopeResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *EnvelopeResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *EnvelopeResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactNullableBytes()
+ } else {
+ v = b.NullableBytes()
+ }
+ s.ResponseData = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrEnvelopeResponse returns a pointer to a default EnvelopeResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrEnvelopeResponse() *EnvelopeResponse {
+ var v EnvelopeResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to EnvelopeResponse.
+func (v *EnvelopeResponse) Default() {
+}
+
+// NewEnvelopeResponse returns a default EnvelopeResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewEnvelopeResponse() EnvelopeResponse {
+ var v EnvelopeResponse
+ v.Default()
+ return v
+}
+
+type FetchSnapshotRequestTopicPartitionSnapshotID struct {
+ EndOffset int64
+
+ Epoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotRequestTopicPartitionSnapshotID.
+func (v *FetchSnapshotRequestTopicPartitionSnapshotID) Default() {
+}
+
+// NewFetchSnapshotRequestTopicPartitionSnapshotID returns a default FetchSnapshotRequestTopicPartitionSnapshotID
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotRequestTopicPartitionSnapshotID() FetchSnapshotRequestTopicPartitionSnapshotID {
+ var v FetchSnapshotRequestTopicPartitionSnapshotID
+ v.Default()
+ return v
+}
+
+type FetchSnapshotRequestTopicPartition struct {
+ // The partition to fetch.
+ Partition int32
+
+ // The current leader epoch of the partition, or -1 for an unknown leader epoch.
+ CurrentLeaderEpoch int32
+
+ // The snapshot end offset and epoch to fetch.
+ SnapshotID FetchSnapshotRequestTopicPartitionSnapshotID
+
+ // The byte position within the snapshot to start fetching from.
+ Position int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotRequestTopicPartition.
+func (v *FetchSnapshotRequestTopicPartition) Default() {
+ {
+ v := &v.SnapshotID
+ _ = v
+ }
+}
+
+// NewFetchSnapshotRequestTopicPartition returns a default FetchSnapshotRequestTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotRequestTopicPartition() FetchSnapshotRequestTopicPartition {
+ var v FetchSnapshotRequestTopicPartition
+ v.Default()
+ return v
+}
+
+type FetchSnapshotRequestTopic struct {
+ // The name of the topic to fetch.
+ Topic string
+
+ // The partitions to fetch.
+ Partitions []FetchSnapshotRequestTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotRequestTopic.
+func (v *FetchSnapshotRequestTopic) Default() {
+}
+
+// NewFetchSnapshotRequestTopic returns a default FetchSnapshotRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotRequestTopic() FetchSnapshotRequestTopic {
+ var v FetchSnapshotRequestTopic
+ v.Default()
+ return v
+}
+
+// Introduced for KIP-630, FetchSnapshotRequest is a part of the inter-Kafka
+// raft protocol to remove the dependency on Zookeeper.
+type FetchSnapshotRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The ClusterID if known, this is used to validate metadata fetches prior to
+ // broker registration.
+ ClusterID *string // tag 0
+
+ // The broker ID of the follower.
+ //
+ // This field has a default of -1.
+ ReplicaID int32
+
+ // The maximum bytes to fetch from all of the snapshots.
+ //
+ // This field has a default of 0x7fffffff.
+ MaxBytes int32
+
+ // The topics to fetch.
+ Topics []FetchSnapshotRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*FetchSnapshotRequest) Key() int16 { return 59 }
+func (*FetchSnapshotRequest) MaxVersion() int16 { return 0 }
+func (v *FetchSnapshotRequest) SetVersion(version int16) { v.Version = version }
+func (v *FetchSnapshotRequest) GetVersion() int16 { return v.Version }
+func (v *FetchSnapshotRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *FetchSnapshotRequest) ResponseKind() Response {
+ r := &FetchSnapshotResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *FetchSnapshotRequest) RequestWith(ctx context.Context, r Requestor) (*FetchSnapshotResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*FetchSnapshotResponse)
+ return resp, err
+}
+
+func (v *FetchSnapshotRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ReplicaID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.MaxBytes
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.CurrentLeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := &v.SnapshotID
+ {
+ v := v.EndOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Epoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ {
+ v := v.Position
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if v.ClusterID != nil {
+ toEncode = append(toEncode, 0)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.ClusterID
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fClusterID:
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fClusterID
+ }
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *FetchSnapshotRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *FetchSnapshotRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *FetchSnapshotRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ReplicaID = v
+ }
+ {
+ v := b.Int32()
+ s.MaxBytes = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchSnapshotRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchSnapshotRequestTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int32()
+ s.CurrentLeaderEpoch = v
+ }
+ {
+ v := &s.SnapshotID
+ v.Default()
+ s := v
+ {
+ v := b.Int64()
+ s.EndOffset = v
+ }
+ {
+ v := b.Int32()
+ s.Epoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ {
+ v := b.Int64()
+ s.Position = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ClusterID = v
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ return b.Complete()
+}
+
+// NewPtrFetchSnapshotRequest returns a pointer to a default FetchSnapshotRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrFetchSnapshotRequest() *FetchSnapshotRequest {
+ var v FetchSnapshotRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotRequest.
+func (v *FetchSnapshotRequest) Default() {
+ v.ReplicaID = -1
+ v.MaxBytes = 2147483647
+}
+
+// NewFetchSnapshotRequest returns a default FetchSnapshotRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotRequest() FetchSnapshotRequest {
+ var v FetchSnapshotRequest
+ v.Default()
+ return v
+}
+
+type FetchSnapshotResponseTopicPartitionSnapshotID struct {
+ EndOffset int64
+
+ Epoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotResponseTopicPartitionSnapshotID.
+func (v *FetchSnapshotResponseTopicPartitionSnapshotID) Default() {
+}
+
+// NewFetchSnapshotResponseTopicPartitionSnapshotID returns a default FetchSnapshotResponseTopicPartitionSnapshotID
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotResponseTopicPartitionSnapshotID() FetchSnapshotResponseTopicPartitionSnapshotID {
+ var v FetchSnapshotResponseTopicPartitionSnapshotID
+ v.Default()
+ return v
+}
+
+type FetchSnapshotResponseTopicPartitionCurrentLeader struct {
+ LeaderID int32
+
+ LeaderEpoch int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotResponseTopicPartitionCurrentLeader.
+func (v *FetchSnapshotResponseTopicPartitionCurrentLeader) Default() {
+}
+
+// NewFetchSnapshotResponseTopicPartitionCurrentLeader returns a default FetchSnapshotResponseTopicPartitionCurrentLeader
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotResponseTopicPartitionCurrentLeader() FetchSnapshotResponseTopicPartitionCurrentLeader {
+ var v FetchSnapshotResponseTopicPartitionCurrentLeader
+ v.Default()
+ return v
+}
+
+type FetchSnapshotResponseTopicPartition struct {
+ // The partition.
+ Partition int32
+
+ // An error code, or 0 if there was no fetch error.
+ ErrorCode int16
+
+ // The snapshot end offset and epoch to fetch.
+ SnapshotID FetchSnapshotResponseTopicPartitionSnapshotID
+
+ // The ID of the current leader (or -1 if unknown) and the latest known
+ // leader epoch.
+ CurrentLeader FetchSnapshotResponseTopicPartitionCurrentLeader // tag 0
+
+ // The total size of the snapshot.
+ Size int64
+
+ // The starting byte position within the snapshot included in the Bytes
+ // field.
+ Position int64
+
+ // Snapshot data.
+ Bytes []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotResponseTopicPartition.
+func (v *FetchSnapshotResponseTopicPartition) Default() {
+ {
+ v := &v.SnapshotID
+ _ = v
+ }
+ {
+ v := &v.CurrentLeader
+ _ = v
+ }
+}
+
+// NewFetchSnapshotResponseTopicPartition returns a default FetchSnapshotResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotResponseTopicPartition() FetchSnapshotResponseTopicPartition {
+ var v FetchSnapshotResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type FetchSnapshotResponseTopic struct {
+ // The name of the topic to fetch.
+ Topic string
+
+ // The partitions to fetch.
+ Partitions []FetchSnapshotResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotResponseTopic.
+func (v *FetchSnapshotResponseTopic) Default() {
+}
+
+// NewFetchSnapshotResponseTopic returns a default FetchSnapshotResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotResponseTopic() FetchSnapshotResponseTopic {
+ var v FetchSnapshotResponseTopic
+ v.Default()
+ return v
+}
+
+// FetchSnapshotResponse is a response for a FetchSnapshotRequest.
+type FetchSnapshotResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // The top level response error code.
+ ErrorCode int16
+
+ // The topics to fetch.
+ Topics []FetchSnapshotResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*FetchSnapshotResponse) Key() int16 { return 59 }
+func (*FetchSnapshotResponse) MaxVersion() int16 { return 0 }
+func (v *FetchSnapshotResponse) SetVersion(version int16) { v.Version = version }
+func (v *FetchSnapshotResponse) GetVersion() int16 { return v.Version }
+func (v *FetchSnapshotResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *FetchSnapshotResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *FetchSnapshotResponse) SetThrottle(throttleMillis int32) { v.ThrottleMillis = throttleMillis }
+func (v *FetchSnapshotResponse) RequestKind() Request {
+ return &FetchSnapshotRequest{Version: v.Version}
+}
+
+func (v *FetchSnapshotResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := &v.SnapshotID
+ {
+ v := v.EndOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Epoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ {
+ v := v.Size
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Position
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.Bytes
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ var toEncode []uint32
+ if !reflect.DeepEqual(v.CurrentLeader, (func() FetchSnapshotResponseTopicPartitionCurrentLeader {
+ var v FetchSnapshotResponseTopicPartitionCurrentLeader
+ v.Default()
+ return v
+ })()) {
+ toEncode = append(toEncode, 0)
+ }
+ dst = kbin.AppendUvarint(dst, uint32(len(toEncode)+v.UnknownTags.Len()))
+ for _, tag := range toEncode {
+ switch tag {
+ case 0:
+ {
+ v := v.CurrentLeader
+ dst = kbin.AppendUvarint(dst, 0)
+ sized := false
+ lenAt := len(dst)
+ fCurrentLeader:
+ {
+ v := v.LeaderID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LeaderEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ if !sized {
+ dst = kbin.AppendUvarint(dst[:lenAt], uint32(len(dst[lenAt:])))
+ sized = true
+ goto fCurrentLeader
+ }
+ }
+ }
+ }
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *FetchSnapshotResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *FetchSnapshotResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *FetchSnapshotResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchSnapshotResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]FetchSnapshotResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := &s.SnapshotID
+ v.Default()
+ s := v
+ {
+ v := b.Int64()
+ s.EndOffset = v
+ }
+ {
+ v := b.Int32()
+ s.Epoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ {
+ v := b.Int64()
+ s.Size = v
+ }
+ {
+ v := b.Int64()
+ s.Position = v
+ }
+ {
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
+ s.Bytes = v
+ }
+ if isFlexible {
+ for i := b.Uvarint(); i > 0; i-- {
+ switch key := b.Uvarint(); key {
+ default:
+ s.UnknownTags.Set(key, b.Span(int(b.Uvarint())))
+ case 0:
+ b := kbin.Reader{Src: b.Span(int(b.Uvarint()))}
+ v := &s.CurrentLeader
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.LeaderID = v
+ }
+ {
+ v := b.Int32()
+ s.LeaderEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ if err := b.Complete(); err != nil {
+ return err
+ }
+ }
+ }
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrFetchSnapshotResponse returns a pointer to a default FetchSnapshotResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrFetchSnapshotResponse() *FetchSnapshotResponse {
+ var v FetchSnapshotResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to FetchSnapshotResponse.
+func (v *FetchSnapshotResponse) Default() {
+}
+
+// NewFetchSnapshotResponse returns a default FetchSnapshotResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewFetchSnapshotResponse() FetchSnapshotResponse {
+ var v FetchSnapshotResponse
+ v.Default()
+ return v
+}
+
+// Introduced for KIP-700, DescribeClusterRequest is effectively an "admin"
+// type metadata request for information that producers or consumers do not
+// need to care about.
+type DescribeClusterRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Whether to include cluster authorized operations. This requires DESCRIBE
+ // on CLUSTER.
+ IncludeClusterAuthorizedOperations bool
+
+ // The endpoint type to describe. 1=brokers, 2=controllers.
+ //
+ // This field has a default of 1.
+ EndpointType int8 // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeClusterRequest) Key() int16 { return 60 }
+func (*DescribeClusterRequest) MaxVersion() int16 { return 1 }
+func (v *DescribeClusterRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeClusterRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeClusterRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeClusterRequest) ResponseKind() Response {
+ r := &DescribeClusterResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeClusterRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeClusterResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeClusterResponse)
+ return resp, err
+}
+
+func (v *DescribeClusterRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.IncludeClusterAuthorizedOperations
+ dst = kbin.AppendBool(dst, v)
+ }
+ if version >= 1 {
+ v := v.EndpointType
+ dst = kbin.AppendInt8(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeClusterRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeClusterRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeClusterRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Bool()
+ s.IncludeClusterAuthorizedOperations = v
+ }
+ if version >= 1 {
+ v := b.Int8()
+ s.EndpointType = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeClusterRequest returns a pointer to a default DescribeClusterRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeClusterRequest() *DescribeClusterRequest {
+ var v DescribeClusterRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClusterRequest.
+func (v *DescribeClusterRequest) Default() {
+ v.EndpointType = 1
+}
+
+// NewDescribeClusterRequest returns a default DescribeClusterRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClusterRequest() DescribeClusterRequest {
+ var v DescribeClusterRequest
+ v.Default()
+ return v
+}
+
+type DescribeClusterResponseBroker struct {
+ // NodeID is the node ID of a Kafka broker.
+ NodeID int32
+
+ // Host is the hostname of a Kafka broker.
+ Host string
+
+ // Port is the port of a Kafka broker.
+ Port int32
+
+ // Rack is the rack this Kafka broker is in, if any.
+ Rack *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClusterResponseBroker.
+func (v *DescribeClusterResponseBroker) Default() {
+}
+
+// NewDescribeClusterResponseBroker returns a default DescribeClusterResponseBroker
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClusterResponseBroker() DescribeClusterResponseBroker {
+ var v DescribeClusterResponseBroker
+ v.Default()
+ return v
+}
+
+// DescribeClusterResponse is a response to a DescribeClusterRequest.
+type DescribeClusterResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // The top level response error code.
+ ErrorCode int16
+
+ // The top level error message, if any.
+ ErrorMessage *string
+
+ // The endpoint type that was described. 1=brokers, 2=controllers.
+ //
+ // This field has a default of 1.
+ EndpointType int8 // v1+
+
+ // The cluster ID that responding broker belongs to.
+ ClusterID string
+
+ // The ID of the controller broker.
+ //
+ // This field has a default of -1.
+ ControllerID int32
+
+ // Brokers is a set of alive Kafka brokers (this mirrors MetadataResponse.Brokers).
+ Brokers []DescribeClusterResponseBroker
+
+ // 32-bit bitfield to represent authorized operations for this cluster.
+ //
+ // This field has a default of -2147483648.
+ ClusterAuthorizedOperations int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeClusterResponse) Key() int16 { return 60 }
+func (*DescribeClusterResponse) MaxVersion() int16 { return 1 }
+func (v *DescribeClusterResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeClusterResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeClusterResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeClusterResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *DescribeClusterResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeClusterResponse) RequestKind() Request {
+ return &DescribeClusterRequest{Version: v.Version}
+}
+
+func (v *DescribeClusterResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.EndpointType
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ v := v.ClusterID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ControllerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Brokers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.NodeID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Port
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Rack
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.ClusterAuthorizedOperations
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeClusterResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeClusterResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeClusterResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if version >= 1 {
+ v := b.Int8()
+ s.EndpointType = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClusterID = v
+ }
+ {
+ v := b.Int32()
+ s.ControllerID = v
+ }
+ {
+ v := s.Brokers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeClusterResponseBroker, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.NodeID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ v := b.Int32()
+ s.Port = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Rack = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Brokers = v
+ }
+ {
+ v := b.Int32()
+ s.ClusterAuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeClusterResponse returns a pointer to a default DescribeClusterResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeClusterResponse() *DescribeClusterResponse {
+ var v DescribeClusterResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeClusterResponse.
+func (v *DescribeClusterResponse) Default() {
+ v.EndpointType = 1
+ v.ControllerID = -1
+ v.ClusterAuthorizedOperations = -2147483648
+}
+
+// NewDescribeClusterResponse returns a default DescribeClusterResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeClusterResponse() DescribeClusterResponse {
+ var v DescribeClusterResponse
+ v.Default()
+ return v
+}
+
+type DescribeProducersRequestTopic struct {
+ Topic string
+
+ // The partitions to list producers for for the given topic.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeProducersRequestTopic.
+func (v *DescribeProducersRequestTopic) Default() {
+}
+
+// NewDescribeProducersRequestTopic returns a default DescribeProducersRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeProducersRequestTopic() DescribeProducersRequestTopic {
+ var v DescribeProducersRequestTopic
+ v.Default()
+ return v
+}
+
+// Introduced for KIP-664, DescribeProducersRequest allows for introspecting
+// the state of the transaction coordinator. This request can be used to detect
+// hanging transactions or other EOS-related problems.
+//
+// This request allows for describing the state of the active
+// idempotent/transactional producers.
+type DescribeProducersRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The topics to describe producers for.
+ Topics []DescribeProducersRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeProducersRequest) Key() int16 { return 61 }
+func (*DescribeProducersRequest) MaxVersion() int16 { return 0 }
+func (v *DescribeProducersRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeProducersRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeProducersRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeProducersRequest) ResponseKind() Response {
+ r := &DescribeProducersResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeProducersRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeProducersResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeProducersResponse)
+ return resp, err
+}
+
+func (v *DescribeProducersRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeProducersRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeProducersRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeProducersRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeProducersRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeProducersRequest returns a pointer to a default DescribeProducersRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeProducersRequest() *DescribeProducersRequest {
+ var v DescribeProducersRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeProducersRequest.
+func (v *DescribeProducersRequest) Default() {
+}
+
+// NewDescribeProducersRequest returns a default DescribeProducersRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeProducersRequest() DescribeProducersRequest {
+ var v DescribeProducersRequest
+ v.Default()
+ return v
+}
+
+type DescribeProducersResponseTopicPartitionActiveProducer struct {
+ ProducerID int64
+
+ ProducerEpoch int32
+
+ // The last sequence produced.
+ //
+ // This field has a default of -1.
+ LastSequence int32
+
+ // The last timestamp produced.
+ //
+ // This field has a default of -1.
+ LastTimestamp int64
+
+ // The epoch of the transactional coordinator for this last produce.
+ CoordinatorEpoch int32
+
+ // The first offset of the transaction.
+ //
+ // This field has a default of -1.
+ CurrentTxnStartOffset int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeProducersResponseTopicPartitionActiveProducer.
+func (v *DescribeProducersResponseTopicPartitionActiveProducer) Default() {
+ v.LastSequence = -1
+ v.LastTimestamp = -1
+ v.CurrentTxnStartOffset = -1
+}
+
+// NewDescribeProducersResponseTopicPartitionActiveProducer returns a default DescribeProducersResponseTopicPartitionActiveProducer
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeProducersResponseTopicPartitionActiveProducer() DescribeProducersResponseTopicPartitionActiveProducer {
+ var v DescribeProducersResponseTopicPartitionActiveProducer
+ v.Default()
+ return v
+}
+
+type DescribeProducersResponseTopicPartition struct {
+ Partition int32
+
+ // The partition error code, or 0 if there was no error.
+ //
+ // NOT_LEADER_OR_FOLLOWER is returned if the broker receiving this request
+ // is not the leader of the partition.
+ //
+ // TOPIC_AUTHORIZATION_FAILED is returned if the user does not have Describe
+ // permissions on the topic.
+ //
+ // UNKNOWN_TOPIC_OR_PARTITION is returned if the partition is not known to exist.
+ //
+ // Other errors may be returned corresponding to the partition being offline, etc.
+ ErrorCode int16
+
+ // The partition error message, which may be null if no additional details are available.
+ ErrorMessage *string
+
+ // The current idempotent or transactional producers producing to this partition,
+ // and the metadata related to their produce requests.
+ ActiveProducers []DescribeProducersResponseTopicPartitionActiveProducer
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeProducersResponseTopicPartition.
+func (v *DescribeProducersResponseTopicPartition) Default() {
+}
+
+// NewDescribeProducersResponseTopicPartition returns a default DescribeProducersResponseTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeProducersResponseTopicPartition() DescribeProducersResponseTopicPartition {
+ var v DescribeProducersResponseTopicPartition
+ v.Default()
+ return v
+}
+
+type DescribeProducersResponseTopic struct {
+ Topic string
+
+ Partitions []DescribeProducersResponseTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeProducersResponseTopic.
+func (v *DescribeProducersResponseTopic) Default() {
+}
+
+// NewDescribeProducersResponseTopic returns a default DescribeProducersResponseTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeProducersResponseTopic() DescribeProducersResponseTopic {
+ var v DescribeProducersResponseTopic
+ v.Default()
+ return v
+}
+
+// DescribeProducersResponse is a response to a DescribeProducersRequest.
+type DescribeProducersResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ Topics []DescribeProducersResponseTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeProducersResponse) Key() int16 { return 61 }
+func (*DescribeProducersResponse) MaxVersion() int16 { return 0 }
+func (v *DescribeProducersResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeProducersResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeProducersResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeProducersResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *DescribeProducersResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeProducersResponse) RequestKind() Request {
+ return &DescribeProducersRequest{Version: v.Version}
+}
+
+func (v *DescribeProducersResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Partition
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.ActiveProducers
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LastSequence
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.LastTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.CoordinatorEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.CurrentTxnStartOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeProducersResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeProducersResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeProducersResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeProducersResponseTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeProducersResponseTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int32()
+ s.Partition = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ v := s.ActiveProducers
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeProducersResponseTopicPartitionActiveProducer, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int32()
+ s.ProducerEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.LastSequence = v
+ }
+ {
+ v := b.Int64()
+ s.LastTimestamp = v
+ }
+ {
+ v := b.Int32()
+ s.CoordinatorEpoch = v
+ }
+ {
+ v := b.Int64()
+ s.CurrentTxnStartOffset = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.ActiveProducers = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeProducersResponse returns a pointer to a default DescribeProducersResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeProducersResponse() *DescribeProducersResponse {
+ var v DescribeProducersResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeProducersResponse.
+func (v *DescribeProducersResponse) Default() {
+}
+
+// NewDescribeProducersResponse returns a default DescribeProducersResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeProducersResponse() DescribeProducersResponse {
+ var v DescribeProducersResponse
+ v.Default()
+ return v
+}
+
+type BrokerRegistrationRequestListener struct {
+ // The name of this endpoint.
+ Name string
+
+ // The hostname.
+ Host string
+
+ // The port.
+ Port uint16
+
+ // The security protocol.
+ SecurityProtocol int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BrokerRegistrationRequestListener.
+func (v *BrokerRegistrationRequestListener) Default() {
+}
+
+// NewBrokerRegistrationRequestListener returns a default BrokerRegistrationRequestListener
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBrokerRegistrationRequestListener() BrokerRegistrationRequestListener {
+ var v BrokerRegistrationRequestListener
+ v.Default()
+ return v
+}
+
+type BrokerRegistrationRequestFeature struct {
+ // The name of the feature.
+ Name string
+
+ // The minimum supported feature level.
+ MinSupportedVersion int16
+
+ // The maximum supported feature level.
+ MaxSupportedVersion int16
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BrokerRegistrationRequestFeature.
+func (v *BrokerRegistrationRequestFeature) Default() {
+}
+
+// NewBrokerRegistrationRequestFeature returns a default BrokerRegistrationRequestFeature
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBrokerRegistrationRequestFeature() BrokerRegistrationRequestFeature {
+ var v BrokerRegistrationRequestFeature
+ v.Default()
+ return v
+}
+
+// For KIP-500 / KIP-631, BrokerRegistrationRequest is an internal
+// broker-to-broker only request.
+type BrokerRegistrationRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The broker ID.
+ BrokerID int32
+
+ // The cluster ID of the broker process.
+ ClusterID string
+
+ // The incarnation ID of the broker process.
+ IncarnationID [16]byte
+
+ // The listeners for this broker.
+ Listeners []BrokerRegistrationRequestListener
+
+ // Features on this broker.
+ Features []BrokerRegistrationRequestFeature
+
+ // The rack that this broker is in, if any.
+ Rack *string
+
+ // If the required configurations for ZK migration are present, this value is
+ // set to true.
+ IsMigratingZkBroker bool // v1+
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*BrokerRegistrationRequest) Key() int16 { return 62 }
+func (*BrokerRegistrationRequest) MaxVersion() int16 { return 1 }
+func (v *BrokerRegistrationRequest) SetVersion(version int16) { v.Version = version }
+func (v *BrokerRegistrationRequest) GetVersion() int16 { return v.Version }
+func (v *BrokerRegistrationRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *BrokerRegistrationRequest) ResponseKind() Response {
+ r := &BrokerRegistrationResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *BrokerRegistrationRequest) RequestWith(ctx context.Context, r Requestor) (*BrokerRegistrationResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*BrokerRegistrationResponse)
+ return resp, err
+}
+
+func (v *BrokerRegistrationRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ClusterID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.IncarnationID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Listeners
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Host
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Port
+ dst = kbin.AppendUint16(dst, v)
+ }
+ {
+ v := v.SecurityProtocol
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Features
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Name
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MinSupportedVersion
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.MaxSupportedVersion
+ dst = kbin.AppendInt16(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.Rack
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if version >= 1 {
+ v := v.IsMigratingZkBroker
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *BrokerRegistrationRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *BrokerRegistrationRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *BrokerRegistrationRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClusterID = v
+ }
+ {
+ v := b.Uuid()
+ s.IncarnationID = v
+ }
+ {
+ v := s.Listeners
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]BrokerRegistrationRequestListener, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Host = v
+ }
+ {
+ v := b.Uint16()
+ s.Port = v
+ }
+ {
+ v := b.Int16()
+ s.SecurityProtocol = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Listeners = v
+ }
+ {
+ v := s.Features
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]BrokerRegistrationRequestFeature, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Name = v
+ }
+ {
+ v := b.Int16()
+ s.MinSupportedVersion = v
+ }
+ {
+ v := b.Int16()
+ s.MaxSupportedVersion = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Features = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.Rack = v
+ }
+ if version >= 1 {
+ v := b.Bool()
+ s.IsMigratingZkBroker = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrBrokerRegistrationRequest returns a pointer to a default BrokerRegistrationRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrBrokerRegistrationRequest() *BrokerRegistrationRequest {
+ var v BrokerRegistrationRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BrokerRegistrationRequest.
+func (v *BrokerRegistrationRequest) Default() {
+}
+
+// NewBrokerRegistrationRequest returns a default BrokerRegistrationRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBrokerRegistrationRequest() BrokerRegistrationRequest {
+ var v BrokerRegistrationRequest
+ v.Default()
+ return v
+}
+
+// BrokerRegistrationResponse is a response to a BrokerRegistrationRequest.
+type BrokerRegistrationResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // Any error code, or 0.
+ ErrorCode int16
+
+ // The broker's assigned epoch, or -1 if none was assigned.
+ //
+ // This field has a default of -1.
+ BrokerEpoch int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*BrokerRegistrationResponse) Key() int16 { return 62 }
+func (*BrokerRegistrationResponse) MaxVersion() int16 { return 1 }
+func (v *BrokerRegistrationResponse) SetVersion(version int16) { v.Version = version }
+func (v *BrokerRegistrationResponse) GetVersion() int16 { return v.Version }
+func (v *BrokerRegistrationResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *BrokerRegistrationResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *BrokerRegistrationResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *BrokerRegistrationResponse) RequestKind() Request {
+ return &BrokerRegistrationRequest{Version: v.Version}
+}
+
+func (v *BrokerRegistrationResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *BrokerRegistrationResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *BrokerRegistrationResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *BrokerRegistrationResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrBrokerRegistrationResponse returns a pointer to a default BrokerRegistrationResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrBrokerRegistrationResponse() *BrokerRegistrationResponse {
+ var v BrokerRegistrationResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BrokerRegistrationResponse.
+func (v *BrokerRegistrationResponse) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewBrokerRegistrationResponse returns a default BrokerRegistrationResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBrokerRegistrationResponse() BrokerRegistrationResponse {
+ var v BrokerRegistrationResponse
+ v.Default()
+ return v
+}
+
+// For KIP-500 / KIP-631, BrokerHeartbeatRequest is an internal
+// broker-to-broker only request.
+type BrokerHeartbeatRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The broker ID.
+ BrokerID int32
+
+ // The broker's epoch.
+ //
+ // This field has a default of -1.
+ BrokerEpoch int64
+
+ // The highest metadata offset that the broker has reached.
+ CurrentMetadataOffset int64
+
+ // True if the broker wants to be fenced.
+ WantFence bool
+
+ // True if the broker wants to be shutdown.
+ WantShutdown bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*BrokerHeartbeatRequest) Key() int16 { return 63 }
+func (*BrokerHeartbeatRequest) MaxVersion() int16 { return 0 }
+func (v *BrokerHeartbeatRequest) SetVersion(version int16) { v.Version = version }
+func (v *BrokerHeartbeatRequest) GetVersion() int16 { return v.Version }
+func (v *BrokerHeartbeatRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *BrokerHeartbeatRequest) ResponseKind() Response {
+ r := &BrokerHeartbeatResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *BrokerHeartbeatRequest) RequestWith(ctx context.Context, r Requestor) (*BrokerHeartbeatResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*BrokerHeartbeatResponse)
+ return resp, err
+}
+
+func (v *BrokerHeartbeatRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.CurrentMetadataOffset
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.WantFence
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.WantShutdown
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *BrokerHeartbeatRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *BrokerHeartbeatRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *BrokerHeartbeatRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ {
+ v := b.Int64()
+ s.CurrentMetadataOffset = v
+ }
+ {
+ v := b.Bool()
+ s.WantFence = v
+ }
+ {
+ v := b.Bool()
+ s.WantShutdown = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrBrokerHeartbeatRequest returns a pointer to a default BrokerHeartbeatRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrBrokerHeartbeatRequest() *BrokerHeartbeatRequest {
+ var v BrokerHeartbeatRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BrokerHeartbeatRequest.
+func (v *BrokerHeartbeatRequest) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewBrokerHeartbeatRequest returns a default BrokerHeartbeatRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBrokerHeartbeatRequest() BrokerHeartbeatRequest {
+ var v BrokerHeartbeatRequest
+ v.Default()
+ return v
+}
+
+// BrokerHeartbeatResponse is a response to a BrokerHeartbeatRequest.
+type BrokerHeartbeatResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // Any error code, or 0.
+ ErrorCode int16
+
+ // True if the broker has approximately caught up with the latest metadata.
+ IsCaughtUp bool
+
+ // True if the broker is fenced.
+ //
+ // This field has a default of true.
+ IsFenced bool
+
+ // True if the broker should proceed with its shutdown.
+ ShouldShutdown bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*BrokerHeartbeatResponse) Key() int16 { return 63 }
+func (*BrokerHeartbeatResponse) MaxVersion() int16 { return 0 }
+func (v *BrokerHeartbeatResponse) SetVersion(version int16) { v.Version = version }
+func (v *BrokerHeartbeatResponse) GetVersion() int16 { return v.Version }
+func (v *BrokerHeartbeatResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *BrokerHeartbeatResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *BrokerHeartbeatResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *BrokerHeartbeatResponse) RequestKind() Request {
+ return &BrokerHeartbeatRequest{Version: v.Version}
+}
+
+func (v *BrokerHeartbeatResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.IsCaughtUp
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.IsFenced
+ dst = kbin.AppendBool(dst, v)
+ }
+ {
+ v := v.ShouldShutdown
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *BrokerHeartbeatResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *BrokerHeartbeatResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *BrokerHeartbeatResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Bool()
+ s.IsCaughtUp = v
+ }
+ {
+ v := b.Bool()
+ s.IsFenced = v
+ }
+ {
+ v := b.Bool()
+ s.ShouldShutdown = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrBrokerHeartbeatResponse returns a pointer to a default BrokerHeartbeatResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrBrokerHeartbeatResponse() *BrokerHeartbeatResponse {
+ var v BrokerHeartbeatResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to BrokerHeartbeatResponse.
+func (v *BrokerHeartbeatResponse) Default() {
+ v.IsFenced = true
+}
+
+// NewBrokerHeartbeatResponse returns a default BrokerHeartbeatResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewBrokerHeartbeatResponse() BrokerHeartbeatResponse {
+ var v BrokerHeartbeatResponse
+ v.Default()
+ return v
+}
+
+// For KIP-500 / KIP-631, UnregisterBrokerRequest is an admin request to
+// remove registration of a broker from the cluster.
+type UnregisterBrokerRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The broker ID to unregister.
+ BrokerID int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*UnregisterBrokerRequest) Key() int16 { return 64 }
+func (*UnregisterBrokerRequest) MaxVersion() int16 { return 0 }
+func (v *UnregisterBrokerRequest) SetVersion(version int16) { v.Version = version }
+func (v *UnregisterBrokerRequest) GetVersion() int16 { return v.Version }
+func (v *UnregisterBrokerRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *UnregisterBrokerRequest) ResponseKind() Response {
+ r := &UnregisterBrokerResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *UnregisterBrokerRequest) RequestWith(ctx context.Context, r Requestor) (*UnregisterBrokerResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*UnregisterBrokerResponse)
+ return resp, err
+}
+
+func (v *UnregisterBrokerRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *UnregisterBrokerRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *UnregisterBrokerRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *UnregisterBrokerRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrUnregisterBrokerRequest returns a pointer to a default UnregisterBrokerRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrUnregisterBrokerRequest() *UnregisterBrokerRequest {
+ var v UnregisterBrokerRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UnregisterBrokerRequest.
+func (v *UnregisterBrokerRequest) Default() {
+}
+
+// NewUnregisterBrokerRequest returns a default UnregisterBrokerRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUnregisterBrokerRequest() UnregisterBrokerRequest {
+ var v UnregisterBrokerRequest
+ v.Default()
+ return v
+}
+
+// UnregisterBrokerResponse is a response to a UnregisterBrokerRequest.
+type UnregisterBrokerResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // Any error code, or 0.
+ ErrorCode int16
+
+ // The error message, if any.
+ ErrorMessage *string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*UnregisterBrokerResponse) Key() int16 { return 64 }
+func (*UnregisterBrokerResponse) MaxVersion() int16 { return 0 }
+func (v *UnregisterBrokerResponse) SetVersion(version int16) { v.Version = version }
+func (v *UnregisterBrokerResponse) GetVersion() int16 { return v.Version }
+func (v *UnregisterBrokerResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *UnregisterBrokerResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *UnregisterBrokerResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *UnregisterBrokerResponse) RequestKind() Request {
+ return &UnregisterBrokerRequest{Version: v.Version}
+}
+
+func (v *UnregisterBrokerResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *UnregisterBrokerResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *UnregisterBrokerResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *UnregisterBrokerResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrUnregisterBrokerResponse returns a pointer to a default UnregisterBrokerResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrUnregisterBrokerResponse() *UnregisterBrokerResponse {
+ var v UnregisterBrokerResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to UnregisterBrokerResponse.
+func (v *UnregisterBrokerResponse) Default() {
+}
+
+// NewUnregisterBrokerResponse returns a default UnregisterBrokerResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewUnregisterBrokerResponse() UnregisterBrokerResponse {
+ var v UnregisterBrokerResponse
+ v.Default()
+ return v
+}
+
+// For KIP-664, DescribeTransactionsRequest describes the state of transactions.
+type DescribeTransactionsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // Array of transactionalIds to include in describe results. If empty, then
+ // no results will be returned.
+ TransactionalIDs []string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeTransactionsRequest) Key() int16 { return 65 }
+func (*DescribeTransactionsRequest) MaxVersion() int16 { return 0 }
+func (v *DescribeTransactionsRequest) SetVersion(version int16) { v.Version = version }
+func (v *DescribeTransactionsRequest) GetVersion() int16 { return v.Version }
+func (v *DescribeTransactionsRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeTransactionsRequest) ResponseKind() Response {
+ r := &DescribeTransactionsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *DescribeTransactionsRequest) RequestWith(ctx context.Context, r Requestor) (*DescribeTransactionsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*DescribeTransactionsResponse)
+ return resp, err
+}
+
+func (v *DescribeTransactionsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.TransactionalIDs
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeTransactionsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeTransactionsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeTransactionsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := s.TransactionalIDs
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.TransactionalIDs = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeTransactionsRequest returns a pointer to a default DescribeTransactionsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeTransactionsRequest() *DescribeTransactionsRequest {
+ var v DescribeTransactionsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeTransactionsRequest.
+func (v *DescribeTransactionsRequest) Default() {
+}
+
+// NewDescribeTransactionsRequest returns a default DescribeTransactionsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeTransactionsRequest() DescribeTransactionsRequest {
+ var v DescribeTransactionsRequest
+ v.Default()
+ return v
+}
+
+type DescribeTransactionsResponseTransactionStateTopic struct {
+ Topic string
+
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeTransactionsResponseTransactionStateTopic.
+func (v *DescribeTransactionsResponseTransactionStateTopic) Default() {
+}
+
+// NewDescribeTransactionsResponseTransactionStateTopic returns a default DescribeTransactionsResponseTransactionStateTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeTransactionsResponseTransactionStateTopic() DescribeTransactionsResponseTransactionStateTopic {
+ var v DescribeTransactionsResponseTransactionStateTopic
+ v.Default()
+ return v
+}
+
+type DescribeTransactionsResponseTransactionState struct {
+ // A potential error code for describing this transaction.
+ //
+ // NOT_COORDINATOR is returned if the broker receiving this transactional
+ // ID does not own the ID.
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the coordiantor is laoding.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator is being shutdown.
+ //
+ // TRANSACTIONAL_ID_NOT_FOUND is returned if the transactional ID could not be found.
+ //
+ // TRANSACTIONAL_ID_AUTHORIZATION_FAILED is returned if the user does not have
+ // Describe permissions on the transactional ID.
+ ErrorCode int16
+
+ // TransactionalID is the transactional ID this record is for.
+ TransactionalID string
+
+ // State is the state the transaction is in.
+ State string
+
+ // TimeoutMillis is the timeout of this transaction in milliseconds.
+ TimeoutMillis int32
+
+ // StartTimestamp is the timestamp in millis of when this transaction started.
+ StartTimestamp int64
+
+ // ProducerID is the ID in use by the transactional ID.
+ ProducerID int64
+
+ // ProducerEpoch is the epoch associated with the producer ID.
+ ProducerEpoch int16
+
+ // The set of partitions included in the current transaction (if active).
+ // When a transaction is preparing to commit or abort, this will include
+ // only partitions which do not have markers.
+ //
+ // This does not include topics the user is not authorized to describe.
+ Topics []DescribeTransactionsResponseTransactionStateTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeTransactionsResponseTransactionState.
+func (v *DescribeTransactionsResponseTransactionState) Default() {
+}
+
+// NewDescribeTransactionsResponseTransactionState returns a default DescribeTransactionsResponseTransactionState
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeTransactionsResponseTransactionState() DescribeTransactionsResponseTransactionState {
+ var v DescribeTransactionsResponseTransactionState
+ v.Default()
+ return v
+}
+
+// DescribeTransactionsResponse is a response to a DescribeTransactionsRequest.
+type DescribeTransactionsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ TransactionStates []DescribeTransactionsResponseTransactionState
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*DescribeTransactionsResponse) Key() int16 { return 65 }
+func (*DescribeTransactionsResponse) MaxVersion() int16 { return 0 }
+func (v *DescribeTransactionsResponse) SetVersion(version int16) { v.Version = version }
+func (v *DescribeTransactionsResponse) GetVersion() int16 { return v.Version }
+func (v *DescribeTransactionsResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *DescribeTransactionsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *DescribeTransactionsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *DescribeTransactionsResponse) RequestKind() Request {
+ return &DescribeTransactionsRequest{Version: v.Version}
+}
+
+func (v *DescribeTransactionsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.TransactionStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.State
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.TimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.StartTimestamp
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerEpoch
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *DescribeTransactionsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *DescribeTransactionsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *DescribeTransactionsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.TransactionStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeTransactionsResponseTransactionState, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.State = v
+ }
+ {
+ v := b.Int32()
+ s.TimeoutMillis = v
+ }
+ {
+ v := b.Int64()
+ s.StartTimestamp = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ v := b.Int16()
+ s.ProducerEpoch = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]DescribeTransactionsResponseTransactionStateTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.TransactionStates = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrDescribeTransactionsResponse returns a pointer to a default DescribeTransactionsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrDescribeTransactionsResponse() *DescribeTransactionsResponse {
+ var v DescribeTransactionsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to DescribeTransactionsResponse.
+func (v *DescribeTransactionsResponse) Default() {
+}
+
+// NewDescribeTransactionsResponse returns a default DescribeTransactionsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewDescribeTransactionsResponse() DescribeTransactionsResponse {
+ var v DescribeTransactionsResponse
+ v.Default()
+ return v
+}
+
+// For KIP-664, ListTransactionsRequest lists transactions.
+type ListTransactionsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The transaction states to filter by: if empty, all transactions are
+ // returned; if non-empty, then only transactions matching one of the
+ // filtered states will be returned.
+ //
+ // For a list of valid states, see the TransactionState enum.
+ StateFilters []string
+
+ // The producer IDs to filter by: if empty, all transactions will be
+ // returned; if non-empty, only transactions which match one of the filtered
+ // producer IDs will be returned
+ ProducerIDFilters []int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ListTransactionsRequest) Key() int16 { return 66 }
+func (*ListTransactionsRequest) MaxVersion() int16 { return 0 }
+func (v *ListTransactionsRequest) SetVersion(version int16) { v.Version = version }
+func (v *ListTransactionsRequest) GetVersion() int16 { return v.Version }
+func (v *ListTransactionsRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *ListTransactionsRequest) ResponseKind() Response {
+ r := &ListTransactionsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ListTransactionsRequest) RequestWith(ctx context.Context, r Requestor) (*ListTransactionsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ListTransactionsResponse)
+ return resp, err
+}
+
+func (v *ListTransactionsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.StateFilters
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ {
+ v := v.ProducerIDFilters
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt64(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListTransactionsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListTransactionsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListTransactionsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := s.StateFilters
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.StateFilters = v
+ }
+ {
+ v := s.ProducerIDFilters
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int64, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int64()
+ a[i] = v
+ }
+ v = a
+ s.ProducerIDFilters = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListTransactionsRequest returns a pointer to a default ListTransactionsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListTransactionsRequest() *ListTransactionsRequest {
+ var v ListTransactionsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListTransactionsRequest.
+func (v *ListTransactionsRequest) Default() {
+}
+
+// NewListTransactionsRequest returns a default ListTransactionsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListTransactionsRequest() ListTransactionsRequest {
+ var v ListTransactionsRequest
+ v.Default()
+ return v
+}
+
+type ListTransactionsResponseTransactionState struct {
+ // The transactional ID being used.
+ TransactionalID string
+
+ // The producer ID of the producer.
+ ProducerID int64
+
+ // The current transaction state of the producer.
+ TransactionState string
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListTransactionsResponseTransactionState.
+func (v *ListTransactionsResponseTransactionState) Default() {
+}
+
+// NewListTransactionsResponseTransactionState returns a default ListTransactionsResponseTransactionState
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListTransactionsResponseTransactionState() ListTransactionsResponseTransactionState {
+ var v ListTransactionsResponseTransactionState
+ v.Default()
+ return v
+}
+
+// ListTransactionsResponse is a response to a ListTransactionsRequest.
+type ListTransactionsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // A potential error code for the listing,
+ //
+ // COORDINATOR_LOAD_IN_PROGRESS is returned if the coordinator is loading.
+ //
+ // COORDINATOR_NOT_AVAILABLE is returned if the coordinator receiving this
+ // request is shutting down.
+ ErrorCode int16
+
+ // Set of state filters provided in the request which were unknown to the
+ // transaction coordinator.
+ UnknownStateFilters []string
+
+ // TransactionStates contains all transactions that were matched for listing
+ // in the request. The response elides transactions that the user does not have
+ // permission to describe (DESCRIBE on TRANSACTIONAL_ID for the transaction).
+ TransactionStates []ListTransactionsResponseTransactionState
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ListTransactionsResponse) Key() int16 { return 66 }
+func (*ListTransactionsResponse) MaxVersion() int16 { return 0 }
+func (v *ListTransactionsResponse) SetVersion(version int16) { v.Version = version }
+func (v *ListTransactionsResponse) GetVersion() int16 { return v.Version }
+func (v *ListTransactionsResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *ListTransactionsResponse) Throttle() (int32, bool) { return v.ThrottleMillis, v.Version >= 0 }
+func (v *ListTransactionsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *ListTransactionsResponse) RequestKind() Request {
+ return &ListTransactionsRequest{Version: v.Version}
+}
+
+func (v *ListTransactionsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.UnknownStateFilters
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ {
+ v := v.TransactionStates
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TransactionalID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ProducerID
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.TransactionState
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ListTransactionsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ListTransactionsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ListTransactionsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := s.UnknownStateFilters
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.UnknownStateFilters = v
+ }
+ {
+ v := s.TransactionStates
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ListTransactionsResponseTransactionState, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionalID = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.TransactionState = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.TransactionStates = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrListTransactionsResponse returns a pointer to a default ListTransactionsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrListTransactionsResponse() *ListTransactionsResponse {
+ var v ListTransactionsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ListTransactionsResponse.
+func (v *ListTransactionsResponse) Default() {
+}
+
+// NewListTransactionsResponse returns a default ListTransactionsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewListTransactionsResponse() ListTransactionsResponse {
+ var v ListTransactionsResponse
+ v.Default()
+ return v
+}
+
+// For KIP-730, AllocateProducerIDsRequest is a broker-to-broker request that
+// requests a block of producer IDs from the controller broker. This is more
+// specifically introduced for raft, but allows for one more request to avoid
+// zookeeper in the non-raft world as well.
+type AllocateProducerIDsRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The ID of the requesting broker.
+ BrokerID int32
+
+ // The epoch of the requesting broker.
+ //
+ // This field has a default of -1.
+ BrokerEpoch int64
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AllocateProducerIDsRequest) Key() int16 { return 67 }
+func (*AllocateProducerIDsRequest) MaxVersion() int16 { return 0 }
+func (v *AllocateProducerIDsRequest) SetVersion(version int16) { v.Version = version }
+func (v *AllocateProducerIDsRequest) GetVersion() int16 { return v.Version }
+func (v *AllocateProducerIDsRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *AllocateProducerIDsRequest) ResponseKind() Response {
+ r := &AllocateProducerIDsResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *AllocateProducerIDsRequest) RequestWith(ctx context.Context, r Requestor) (*AllocateProducerIDsResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*AllocateProducerIDsResponse)
+ return resp, err
+}
+
+func (v *AllocateProducerIDsRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.BrokerID
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.BrokerEpoch
+ dst = kbin.AppendInt64(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AllocateProducerIDsRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AllocateProducerIDsRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AllocateProducerIDsRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.BrokerID = v
+ }
+ {
+ v := b.Int64()
+ s.BrokerEpoch = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAllocateProducerIDsRequest returns a pointer to a default AllocateProducerIDsRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAllocateProducerIDsRequest() *AllocateProducerIDsRequest {
+ var v AllocateProducerIDsRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AllocateProducerIDsRequest.
+func (v *AllocateProducerIDsRequest) Default() {
+ v.BrokerEpoch = -1
+}
+
+// NewAllocateProducerIDsRequest returns a default AllocateProducerIDsRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAllocateProducerIDsRequest() AllocateProducerIDsRequest {
+ var v AllocateProducerIDsRequest
+ v.Default()
+ return v
+}
+
+// AllocateProducerIDsResponse is a response to an AllocateProducerIDsRequest.
+type AllocateProducerIDsResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // An error code, if any.
+ ErrorCode int16
+
+ // The first producer ID in this range, inclusive.
+ ProducerIDStart int64
+
+ // The number of producer IDs in this range.
+ ProducerIDLen int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*AllocateProducerIDsResponse) Key() int16 { return 67 }
+func (*AllocateProducerIDsResponse) MaxVersion() int16 { return 0 }
+func (v *AllocateProducerIDsResponse) SetVersion(version int16) { v.Version = version }
+func (v *AllocateProducerIDsResponse) GetVersion() int16 { return v.Version }
+func (v *AllocateProducerIDsResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *AllocateProducerIDsResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *AllocateProducerIDsResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *AllocateProducerIDsResponse) RequestKind() Request {
+ return &AllocateProducerIDsRequest{Version: v.Version}
+}
+
+func (v *AllocateProducerIDsResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ProducerIDStart
+ dst = kbin.AppendInt64(dst, v)
+ }
+ {
+ v := v.ProducerIDLen
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *AllocateProducerIDsResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *AllocateProducerIDsResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *AllocateProducerIDsResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ v := b.Int64()
+ s.ProducerIDStart = v
+ }
+ {
+ v := b.Int32()
+ s.ProducerIDLen = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrAllocateProducerIDsResponse returns a pointer to a default AllocateProducerIDsResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrAllocateProducerIDsResponse() *AllocateProducerIDsResponse {
+ var v AllocateProducerIDsResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AllocateProducerIDsResponse.
+func (v *AllocateProducerIDsResponse) Default() {
+}
+
+// NewAllocateProducerIDsResponse returns a default AllocateProducerIDsResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAllocateProducerIDsResponse() AllocateProducerIDsResponse {
+ var v AllocateProducerIDsResponse
+ v.Default()
+ return v
+}
+
+type ConsumerGroupHeartbeatRequestTopic struct {
+ // The topic ID.
+ TopicID [16]byte
+
+ // The partitions.
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupHeartbeatRequestTopic.
+func (v *ConsumerGroupHeartbeatRequestTopic) Default() {
+}
+
+// NewConsumerGroupHeartbeatRequestTopic returns a default ConsumerGroupHeartbeatRequestTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupHeartbeatRequestTopic() ConsumerGroupHeartbeatRequestTopic {
+ var v ConsumerGroupHeartbeatRequestTopic
+ v.Default()
+ return v
+}
+
+// ConsumerGroupHeartbeat is a part of KIP-848; there are a lot of details
+// to this request so documentation is left to the KIP itself.
+type ConsumerGroupHeartbeatRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The group ID.
+ Group string
+
+ // The member ID generated by the coordinator. This must be kept during
+ // the entire lifetime of the member.
+ MemberID string
+
+ // The current member epoch; 0 to join the group, -1 to leave, -2 to
+ // indicate that the static member will rejoin.
+ MemberEpoch int32
+
+ // Instance ID of the member; null if not provided or if unchanging.
+ InstanceID *string
+
+ // The rack ID of the member; null if not provided or if unchanging.
+ RackID *string
+
+ // RebalanceTimeoutMillis is how long the coordinator will wait on a member
+ // to revoke its partitions. -1 if unchanging.
+ //
+ // This field has a default of -1.
+ RebalanceTimeoutMillis int32
+
+ // Subscribed topics; null if unchanging.
+ SubscribedTopicNames []string
+
+ // The server side assignor to use; null if unchanging.
+ ServerAssignor *string
+
+ // Topic partitions owned by the member; null if unchanging.
+ Topics []ConsumerGroupHeartbeatRequestTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ConsumerGroupHeartbeatRequest) Key() int16 { return 68 }
+func (*ConsumerGroupHeartbeatRequest) MaxVersion() int16 { return 0 }
+func (v *ConsumerGroupHeartbeatRequest) SetVersion(version int16) { v.Version = version }
+func (v *ConsumerGroupHeartbeatRequest) GetVersion() int16 { return v.Version }
+func (v *ConsumerGroupHeartbeatRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *ConsumerGroupHeartbeatRequest) ResponseKind() Response {
+ r := &ConsumerGroupHeartbeatResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ConsumerGroupHeartbeatRequest) RequestWith(ctx context.Context, r Requestor) (*ConsumerGroupHeartbeatResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ConsumerGroupHeartbeatResponse)
+ return resp, err
+}
+
+func (v *ConsumerGroupHeartbeatRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.MemberEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.RackID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.RebalanceTimeoutMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.SubscribedTopicNames
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ {
+ v := v.ServerAssignor
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactNullableArrayLen(dst, len(v), v == nil)
+ } else {
+ dst = kbin.AppendNullableArrayLen(dst, len(v), v == nil)
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ConsumerGroupHeartbeatRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConsumerGroupHeartbeatRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConsumerGroupHeartbeatRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ {
+ v := b.Int32()
+ s.MemberEpoch = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.RackID = v
+ }
+ {
+ v := b.Int32()
+ s.RebalanceTimeoutMillis = v
+ }
+ {
+ v := s.SubscribedTopicNames
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []string{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.SubscribedTopicNames = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ServerAssignor = v
+ }
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if version < 0 || l == 0 {
+ a = []ConsumerGroupHeartbeatRequestTopic{}
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConsumerGroupHeartbeatRequestTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrConsumerGroupHeartbeatRequest returns a pointer to a default ConsumerGroupHeartbeatRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrConsumerGroupHeartbeatRequest() *ConsumerGroupHeartbeatRequest {
+ var v ConsumerGroupHeartbeatRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupHeartbeatRequest.
+func (v *ConsumerGroupHeartbeatRequest) Default() {
+ v.RebalanceTimeoutMillis = -1
+}
+
+// NewConsumerGroupHeartbeatRequest returns a default ConsumerGroupHeartbeatRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupHeartbeatRequest() ConsumerGroupHeartbeatRequest {
+ var v ConsumerGroupHeartbeatRequest
+ v.Default()
+ return v
+}
+
+type ConsumerGroupHeartbeatResponseAssignmentTopic struct {
+ TopicID [16]byte
+
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupHeartbeatResponseAssignmentTopic.
+func (v *ConsumerGroupHeartbeatResponseAssignmentTopic) Default() {
+}
+
+// NewConsumerGroupHeartbeatResponseAssignmentTopic returns a default ConsumerGroupHeartbeatResponseAssignmentTopic
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupHeartbeatResponseAssignmentTopic() ConsumerGroupHeartbeatResponseAssignmentTopic {
+ var v ConsumerGroupHeartbeatResponseAssignmentTopic
+ v.Default()
+ return v
+}
+
+type ConsumerGroupHeartbeatResponseAssignment struct {
+ // The topics partitions that can be used immediately.
+ Topics []ConsumerGroupHeartbeatResponseAssignmentTopic
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupHeartbeatResponseAssignment.
+func (v *ConsumerGroupHeartbeatResponseAssignment) Default() {
+}
+
+// NewConsumerGroupHeartbeatResponseAssignment returns a default ConsumerGroupHeartbeatResponseAssignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupHeartbeatResponseAssignment() ConsumerGroupHeartbeatResponseAssignment {
+ var v ConsumerGroupHeartbeatResponseAssignment
+ v.Default()
+ return v
+}
+
+// ConsumerGroupHeartbeatResponse is returned from a ConsumerGroupHeartbeatRequest.
+type ConsumerGroupHeartbeatResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ // ErrorCode is the error for this response.
+ //
+ // Supported errors:
+ // - GROUP_AUTHORIZATION_FAILED (version 0+)
+ // - NOT_COORDINATOR (version 0+)
+ // - COORDINATOR_NOT_AVAILABLE (version 0+)
+ // - COORDINATOR_LOAD_IN_PROGRESS (version 0+)
+ // - INVALID_REQUEST (version 0+)
+ // - UNKNOWN_MEMBER_ID (version 0+)
+ // - FENCED_MEMBER_EPOCH (version 0+)
+ // - UNSUPPORTED_ASSIGNOR (version 0+)
+ // - UNRELEASED_INSTANCE_ID (version 0+)
+ // - GROUP_MAX_SIZE_REACHED (version 0+)
+ ErrorCode int16
+
+ // A supplementary message if this errored.
+ ErrorMessage *string
+
+ // The member ID generated by the coordinator; provided when joining
+ // with MemberEpoch=0.
+ MemberID *string
+
+ // The member epoch.
+ MemberEpoch int32
+
+ // The heartbeat interval, in milliseconds.
+ HeartbeatIntervalMillis int32
+
+ // The assignment; null if not provided.
+ Assignment *ConsumerGroupHeartbeatResponseAssignment
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ConsumerGroupHeartbeatResponse) Key() int16 { return 68 }
+func (*ConsumerGroupHeartbeatResponse) MaxVersion() int16 { return 0 }
+func (v *ConsumerGroupHeartbeatResponse) SetVersion(version int16) { v.Version = version }
+func (v *ConsumerGroupHeartbeatResponse) GetVersion() int16 { return v.Version }
+func (v *ConsumerGroupHeartbeatResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *ConsumerGroupHeartbeatResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *ConsumerGroupHeartbeatResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *ConsumerGroupHeartbeatResponse) RequestKind() Request {
+ return &ConsumerGroupHeartbeatRequest{Version: v.Version}
+}
+
+func (v *ConsumerGroupHeartbeatResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.MemberEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.HeartbeatIntervalMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Assignment
+ if v == nil {
+ dst = append(dst, 255)
+ } else {
+ dst = append(dst, 1)
+ {
+ v := v.Topics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ConsumerGroupHeartbeatResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConsumerGroupHeartbeatResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConsumerGroupHeartbeatResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.MemberID = v
+ }
+ {
+ v := b.Int32()
+ s.MemberEpoch = v
+ }
+ {
+ v := b.Int32()
+ s.HeartbeatIntervalMillis = v
+ }
+ {
+ if present := b.Int8(); present != -1 && b.Ok() {
+ s.Assignment = new(ConsumerGroupHeartbeatResponseAssignment)
+ v := s.Assignment
+ v.Default()
+ s := v
+ {
+ v := s.Topics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConsumerGroupHeartbeatResponseAssignmentTopic, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Topics = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrConsumerGroupHeartbeatResponse returns a pointer to a default ConsumerGroupHeartbeatResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrConsumerGroupHeartbeatResponse() *ConsumerGroupHeartbeatResponse {
+ var v ConsumerGroupHeartbeatResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupHeartbeatResponse.
+func (v *ConsumerGroupHeartbeatResponse) Default() {
+ {
+ v := &v.Assignment
+ _ = v
+ }
+}
+
+// NewConsumerGroupHeartbeatResponse returns a default ConsumerGroupHeartbeatResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupHeartbeatResponse() ConsumerGroupHeartbeatResponse {
+ var v ConsumerGroupHeartbeatResponse
+ v.Default()
+ return v
+}
+
+// RequestForKey returns the request corresponding to the given request key
+// or nil if the key is unknown.
+func RequestForKey(key int16) Request {
+ switch key {
+ default:
+ return nil
+ case 0:
+ return NewPtrProduceRequest()
+ case 1:
+ return NewPtrFetchRequest()
+ case 2:
+ return NewPtrListOffsetsRequest()
+ case 3:
+ return NewPtrMetadataRequest()
+ case 4:
+ return NewPtrLeaderAndISRRequest()
+ case 5:
+ return NewPtrStopReplicaRequest()
+ case 6:
+ return NewPtrUpdateMetadataRequest()
+ case 7:
+ return NewPtrControlledShutdownRequest()
+ case 8:
+ return NewPtrOffsetCommitRequest()
+ case 9:
+ return NewPtrOffsetFetchRequest()
+ case 10:
+ return NewPtrFindCoordinatorRequest()
+ case 11:
+ return NewPtrJoinGroupRequest()
+ case 12:
+ return NewPtrHeartbeatRequest()
+ case 13:
+ return NewPtrLeaveGroupRequest()
+ case 14:
+ return NewPtrSyncGroupRequest()
+ case 15:
+ return NewPtrDescribeGroupsRequest()
+ case 16:
+ return NewPtrListGroupsRequest()
+ case 17:
+ return NewPtrSASLHandshakeRequest()
+ case 18:
+ return NewPtrApiVersionsRequest()
+ case 19:
+ return NewPtrCreateTopicsRequest()
+ case 20:
+ return NewPtrDeleteTopicsRequest()
+ case 21:
+ return NewPtrDeleteRecordsRequest()
+ case 22:
+ return NewPtrInitProducerIDRequest()
+ case 23:
+ return NewPtrOffsetForLeaderEpochRequest()
+ case 24:
+ return NewPtrAddPartitionsToTxnRequest()
+ case 25:
+ return NewPtrAddOffsetsToTxnRequest()
+ case 26:
+ return NewPtrEndTxnRequest()
+ case 27:
+ return NewPtrWriteTxnMarkersRequest()
+ case 28:
+ return NewPtrTxnOffsetCommitRequest()
+ case 29:
+ return NewPtrDescribeACLsRequest()
+ case 30:
+ return NewPtrCreateACLsRequest()
+ case 31:
+ return NewPtrDeleteACLsRequest()
+ case 32:
+ return NewPtrDescribeConfigsRequest()
+ case 33:
+ return NewPtrAlterConfigsRequest()
+ case 34:
+ return NewPtrAlterReplicaLogDirsRequest()
+ case 35:
+ return NewPtrDescribeLogDirsRequest()
+ case 36:
+ return NewPtrSASLAuthenticateRequest()
+ case 37:
+ return NewPtrCreatePartitionsRequest()
+ case 38:
+ return NewPtrCreateDelegationTokenRequest()
+ case 39:
+ return NewPtrRenewDelegationTokenRequest()
+ case 40:
+ return NewPtrExpireDelegationTokenRequest()
+ case 41:
+ return NewPtrDescribeDelegationTokenRequest()
+ case 42:
+ return NewPtrDeleteGroupsRequest()
+ case 43:
+ return NewPtrElectLeadersRequest()
+ case 44:
+ return NewPtrIncrementalAlterConfigsRequest()
+ case 45:
+ return NewPtrAlterPartitionAssignmentsRequest()
+ case 46:
+ return NewPtrListPartitionReassignmentsRequest()
+ case 47:
+ return NewPtrOffsetDeleteRequest()
+ case 48:
+ return NewPtrDescribeClientQuotasRequest()
+ case 49:
+ return NewPtrAlterClientQuotasRequest()
+ case 50:
+ return NewPtrDescribeUserSCRAMCredentialsRequest()
+ case 51:
+ return NewPtrAlterUserSCRAMCredentialsRequest()
+ case 52:
+ return NewPtrVoteRequest()
+ case 53:
+ return NewPtrBeginQuorumEpochRequest()
+ case 54:
+ return NewPtrEndQuorumEpochRequest()
+ case 55:
+ return NewPtrDescribeQuorumRequest()
+ case 56:
+ return NewPtrAlterPartitionRequest()
+ case 57:
+ return NewPtrUpdateFeaturesRequest()
+ case 58:
+ return NewPtrEnvelopeRequest()
+ case 59:
+ return NewPtrFetchSnapshotRequest()
+ case 60:
+ return NewPtrDescribeClusterRequest()
+ case 61:
+ return NewPtrDescribeProducersRequest()
+ case 62:
+ return NewPtrBrokerRegistrationRequest()
+ case 63:
+ return NewPtrBrokerHeartbeatRequest()
+ case 64:
+ return NewPtrUnregisterBrokerRequest()
+ case 65:
+ return NewPtrDescribeTransactionsRequest()
+ case 66:
+ return NewPtrListTransactionsRequest()
+ case 67:
+ return NewPtrAllocateProducerIDsRequest()
+ case 68:
+ return NewPtrConsumerGroupHeartbeatRequest()
+ }
+}
+
+// ResponseForKey returns the response corresponding to the given request key
+// or nil if the key is unknown.
+func ResponseForKey(key int16) Response {
+ switch key {
+ default:
+ return nil
+ case 0:
+ return NewPtrProduceResponse()
+ case 1:
+ return NewPtrFetchResponse()
+ case 2:
+ return NewPtrListOffsetsResponse()
+ case 3:
+ return NewPtrMetadataResponse()
+ case 4:
+ return NewPtrLeaderAndISRResponse()
+ case 5:
+ return NewPtrStopReplicaResponse()
+ case 6:
+ return NewPtrUpdateMetadataResponse()
+ case 7:
+ return NewPtrControlledShutdownResponse()
+ case 8:
+ return NewPtrOffsetCommitResponse()
+ case 9:
+ return NewPtrOffsetFetchResponse()
+ case 10:
+ return NewPtrFindCoordinatorResponse()
+ case 11:
+ return NewPtrJoinGroupResponse()
+ case 12:
+ return NewPtrHeartbeatResponse()
+ case 13:
+ return NewPtrLeaveGroupResponse()
+ case 14:
+ return NewPtrSyncGroupResponse()
+ case 15:
+ return NewPtrDescribeGroupsResponse()
+ case 16:
+ return NewPtrListGroupsResponse()
+ case 17:
+ return NewPtrSASLHandshakeResponse()
+ case 18:
+ return NewPtrApiVersionsResponse()
+ case 19:
+ return NewPtrCreateTopicsResponse()
+ case 20:
+ return NewPtrDeleteTopicsResponse()
+ case 21:
+ return NewPtrDeleteRecordsResponse()
+ case 22:
+ return NewPtrInitProducerIDResponse()
+ case 23:
+ return NewPtrOffsetForLeaderEpochResponse()
+ case 24:
+ return NewPtrAddPartitionsToTxnResponse()
+ case 25:
+ return NewPtrAddOffsetsToTxnResponse()
+ case 26:
+ return NewPtrEndTxnResponse()
+ case 27:
+ return NewPtrWriteTxnMarkersResponse()
+ case 28:
+ return NewPtrTxnOffsetCommitResponse()
+ case 29:
+ return NewPtrDescribeACLsResponse()
+ case 30:
+ return NewPtrCreateACLsResponse()
+ case 31:
+ return NewPtrDeleteACLsResponse()
+ case 32:
+ return NewPtrDescribeConfigsResponse()
+ case 33:
+ return NewPtrAlterConfigsResponse()
+ case 34:
+ return NewPtrAlterReplicaLogDirsResponse()
+ case 35:
+ return NewPtrDescribeLogDirsResponse()
+ case 36:
+ return NewPtrSASLAuthenticateResponse()
+ case 37:
+ return NewPtrCreatePartitionsResponse()
+ case 38:
+ return NewPtrCreateDelegationTokenResponse()
+ case 39:
+ return NewPtrRenewDelegationTokenResponse()
+ case 40:
+ return NewPtrExpireDelegationTokenResponse()
+ case 41:
+ return NewPtrDescribeDelegationTokenResponse()
+ case 42:
+ return NewPtrDeleteGroupsResponse()
+ case 43:
+ return NewPtrElectLeadersResponse()
+ case 44:
+ return NewPtrIncrementalAlterConfigsResponse()
+ case 45:
+ return NewPtrAlterPartitionAssignmentsResponse()
+ case 46:
+ return NewPtrListPartitionReassignmentsResponse()
+ case 47:
+ return NewPtrOffsetDeleteResponse()
+ case 48:
+ return NewPtrDescribeClientQuotasResponse()
+ case 49:
+ return NewPtrAlterClientQuotasResponse()
+ case 50:
+ return NewPtrDescribeUserSCRAMCredentialsResponse()
+ case 51:
+ return NewPtrAlterUserSCRAMCredentialsResponse()
+ case 52:
+ return NewPtrVoteResponse()
+ case 53:
+ return NewPtrBeginQuorumEpochResponse()
+ case 54:
+ return NewPtrEndQuorumEpochResponse()
+ case 55:
+ return NewPtrDescribeQuorumResponse()
+ case 56:
+ return NewPtrAlterPartitionResponse()
+ case 57:
+ return NewPtrUpdateFeaturesResponse()
+ case 58:
+ return NewPtrEnvelopeResponse()
+ case 59:
+ return NewPtrFetchSnapshotResponse()
+ case 60:
+ return NewPtrDescribeClusterResponse()
+ case 61:
+ return NewPtrDescribeProducersResponse()
+ case 62:
+ return NewPtrBrokerRegistrationResponse()
+ case 63:
+ return NewPtrBrokerHeartbeatResponse()
+ case 64:
+ return NewPtrUnregisterBrokerResponse()
+ case 65:
+ return NewPtrDescribeTransactionsResponse()
+ case 66:
+ return NewPtrListTransactionsResponse()
+ case 67:
+ return NewPtrAllocateProducerIDsResponse()
+ case 68:
+ return NewPtrConsumerGroupHeartbeatResponse()
+ }
+}
+
+// NameForKey returns the name (e.g., "Fetch") corresponding to a given request key
+// or "" if the key is unknown.
+func NameForKey(key int16) string {
+ switch key {
+ default:
+ return "Unknown"
+ case 0:
+ return "Produce"
+ case 1:
+ return "Fetch"
+ case 2:
+ return "ListOffsets"
+ case 3:
+ return "Metadata"
+ case 4:
+ return "LeaderAndISR"
+ case 5:
+ return "StopReplica"
+ case 6:
+ return "UpdateMetadata"
+ case 7:
+ return "ControlledShutdown"
+ case 8:
+ return "OffsetCommit"
+ case 9:
+ return "OffsetFetch"
+ case 10:
+ return "FindCoordinator"
+ case 11:
+ return "JoinGroup"
+ case 12:
+ return "Heartbeat"
+ case 13:
+ return "LeaveGroup"
+ case 14:
+ return "SyncGroup"
+ case 15:
+ return "DescribeGroups"
+ case 16:
+ return "ListGroups"
+ case 17:
+ return "SASLHandshake"
+ case 18:
+ return "ApiVersions"
+ case 19:
+ return "CreateTopics"
+ case 20:
+ return "DeleteTopics"
+ case 21:
+ return "DeleteRecords"
+ case 22:
+ return "InitProducerID"
+ case 23:
+ return "OffsetForLeaderEpoch"
+ case 24:
+ return "AddPartitionsToTxn"
+ case 25:
+ return "AddOffsetsToTxn"
+ case 26:
+ return "EndTxn"
+ case 27:
+ return "WriteTxnMarkers"
+ case 28:
+ return "TxnOffsetCommit"
+ case 29:
+ return "DescribeACLs"
+ case 30:
+ return "CreateACLs"
+ case 31:
+ return "DeleteACLs"
+ case 32:
+ return "DescribeConfigs"
+ case 33:
+ return "AlterConfigs"
+ case 34:
+ return "AlterReplicaLogDirs"
+ case 35:
+ return "DescribeLogDirs"
+ case 36:
+ return "SASLAuthenticate"
+ case 37:
+ return "CreatePartitions"
+ case 38:
+ return "CreateDelegationToken"
+ case 39:
+ return "RenewDelegationToken"
+ case 40:
+ return "ExpireDelegationToken"
+ case 41:
+ return "DescribeDelegationToken"
+ case 42:
+ return "DeleteGroups"
+ case 43:
+ return "ElectLeaders"
+ case 44:
+ return "IncrementalAlterConfigs"
+ case 45:
+ return "AlterPartitionAssignments"
+ case 46:
+ return "ListPartitionReassignments"
+ case 47:
+ return "OffsetDelete"
+ case 48:
+ return "DescribeClientQuotas"
+ case 49:
+ return "AlterClientQuotas"
+ case 50:
+ return "DescribeUserSCRAMCredentials"
+ case 51:
+ return "AlterUserSCRAMCredentials"
+ case 52:
+ return "Vote"
+ case 53:
+ return "BeginQuorumEpoch"
+ case 54:
+ return "EndQuorumEpoch"
+ case 55:
+ return "DescribeQuorum"
+ case 56:
+ return "AlterPartition"
+ case 57:
+ return "UpdateFeatures"
+ case 58:
+ return "Envelope"
+ case 59:
+ return "FetchSnapshot"
+ case 60:
+ return "DescribeCluster"
+ case 61:
+ return "DescribeProducers"
+ case 62:
+ return "BrokerRegistration"
+ case 63:
+ return "BrokerHeartbeat"
+ case 64:
+ return "UnregisterBroker"
+ case 65:
+ return "DescribeTransactions"
+ case 66:
+ return "ListTransactions"
+ case 67:
+ return "AllocateProducerIDs"
+ case 68:
+ return "ConsumerGroupHeartbeat"
+ }
+}
+
+// Key is a typed representation of a request key, with helper functions.
+type Key int16
+
+const (
+ Produce Key = 0
+ Fetch Key = 1
+ ListOffsets Key = 2
+ Metadata Key = 3
+ LeaderAndISR Key = 4
+ StopReplica Key = 5
+ UpdateMetadata Key = 6
+ ControlledShutdown Key = 7
+ OffsetCommit Key = 8
+ OffsetFetch Key = 9
+ FindCoordinator Key = 10
+ JoinGroup Key = 11
+ Heartbeat Key = 12
+ LeaveGroup Key = 13
+ SyncGroup Key = 14
+ DescribeGroups Key = 15
+ ListGroups Key = 16
+ SASLHandshake Key = 17
+ ApiVersions Key = 18
+ CreateTopics Key = 19
+ DeleteTopics Key = 20
+ DeleteRecords Key = 21
+ InitProducerID Key = 22
+ OffsetForLeaderEpoch Key = 23
+ AddPartitionsToTxn Key = 24
+ AddOffsetsToTxn Key = 25
+ EndTxn Key = 26
+ WriteTxnMarkers Key = 27
+ TxnOffsetCommit Key = 28
+ DescribeACLs Key = 29
+ CreateACLs Key = 30
+ DeleteACLs Key = 31
+ DescribeConfigs Key = 32
+ AlterConfigs Key = 33
+ AlterReplicaLogDirs Key = 34
+ DescribeLogDirs Key = 35
+ SASLAuthenticate Key = 36
+ CreatePartitions Key = 37
+ CreateDelegationToken Key = 38
+ RenewDelegationToken Key = 39
+ ExpireDelegationToken Key = 40
+ DescribeDelegationToken Key = 41
+ DeleteGroups Key = 42
+ ElectLeaders Key = 43
+ IncrementalAlterConfigs Key = 44
+ AlterPartitionAssignments Key = 45
+ ListPartitionReassignments Key = 46
+ OffsetDelete Key = 47
+ DescribeClientQuotas Key = 48
+ AlterClientQuotas Key = 49
+ DescribeUserSCRAMCredentials Key = 50
+ AlterUserSCRAMCredentials Key = 51
+ Vote Key = 52
+ BeginQuorumEpoch Key = 53
+ EndQuorumEpoch Key = 54
+ DescribeQuorum Key = 55
+ AlterPartition Key = 56
+ UpdateFeatures Key = 57
+ Envelope Key = 58
+ FetchSnapshot Key = 59
+ DescribeCluster Key = 60
+ DescribeProducers Key = 61
+ BrokerRegistration Key = 62
+ BrokerHeartbeat Key = 63
+ UnregisterBroker Key = 64
+ DescribeTransactions Key = 65
+ ListTransactions Key = 66
+ AllocateProducerIDs Key = 67
+ ConsumerGroupHeartbeat Key = 68
+)
+
+// Name returns the name for this key.
+func (k Key) Name() string { return NameForKey(int16(k)) }
+
+// Request returns a new request for this key if the key is known.
+func (k Key) Request() Request { return RequestForKey(int16(k)) }
+
+// Response returns a new response for this key if the key is known.
+func (k Key) Response() Response { return ResponseForKey(int16(k)) }
+
+// Int16 is an alias for int16(k).
+func (k Key) Int16() int16 { return int16(k) }
+
+// A type of config.
+//
+// Possible values and their meanings:
+//
+// * 2 (TOPIC)
+//
+// * 4 (BROKER)
+//
+// * 8 (BROKER_LOGGER)
+type ConfigResourceType int8
+
+func (v ConfigResourceType) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 2:
+ return "TOPIC"
+ case 4:
+ return "BROKER"
+ case 8:
+ return "BROKER_LOGGER"
+ }
+}
+
+func ConfigResourceTypeStrings() []string {
+ return []string{
+ "TOPIC",
+ "BROKER",
+ "BROKER_LOGGER",
+ }
+}
+
+// ParseConfigResourceType normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseConfigResourceType(s string) (ConfigResourceType, error) {
+ switch strnorm(s) {
+ case "topic":
+ return 2, nil
+ case "broker":
+ return 4, nil
+ case "brokerlogger":
+ return 8, nil
+ default:
+ return 0, fmt.Errorf("ConfigResourceType: unable to parse %q", s)
+ }
+}
+
+const (
+ ConfigResourceTypeUnknown ConfigResourceType = 0
+ ConfigResourceTypeTopic ConfigResourceType = 2
+ ConfigResourceTypeBroker ConfigResourceType = 4
+ ConfigResourceTypeBrokerLogger ConfigResourceType = 8
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ConfigResourceType) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ConfigResourceType) UnmarshalText(text []byte) error {
+ v, err := ParseConfigResourceType(string(text))
+ *e = v
+ return err
+}
+
+// Where a config entry is from. If there are no config synonyms,
+// the source is DEFAULT_CONFIG.
+//
+// Possible values and their meanings:
+//
+// * 1 (DYNAMIC_TOPIC_CONFIG)
+// Dynamic topic config for a specific topic.
+//
+// * 2 (DYNAMIC_BROKER_CONFIG)
+// Dynamic broker config for a specific broker.
+//
+// * 3 (DYNAMIC_DEFAULT_BROKER_CONFIG)
+// Dynamic broker config used as the default for all brokers in a cluster.
+//
+// * 4 (STATIC_BROKER_CONFIG)
+// Static broker config provided at start up.
+//
+// * 5 (DEFAULT_CONFIG)
+// Built-in default configuration for those that have defaults.
+//
+// * 6 (DYNAMIC_BROKER_LOGGER_CONFIG)
+// Broker logger; see KIP-412.
+type ConfigSource int8
+
+func (v ConfigSource) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 1:
+ return "DYNAMIC_TOPIC_CONFIG"
+ case 2:
+ return "DYNAMIC_BROKER_CONFIG"
+ case 3:
+ return "DYNAMIC_DEFAULT_BROKER_CONFIG"
+ case 4:
+ return "STATIC_BROKER_CONFIG"
+ case 5:
+ return "DEFAULT_CONFIG"
+ case 6:
+ return "DYNAMIC_BROKER_LOGGER_CONFIG"
+ }
+}
+
+func ConfigSourceStrings() []string {
+ return []string{
+ "DYNAMIC_TOPIC_CONFIG",
+ "DYNAMIC_BROKER_CONFIG",
+ "DYNAMIC_DEFAULT_BROKER_CONFIG",
+ "STATIC_BROKER_CONFIG",
+ "DEFAULT_CONFIG",
+ "DYNAMIC_BROKER_LOGGER_CONFIG",
+ }
+}
+
+// ParseConfigSource normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseConfigSource(s string) (ConfigSource, error) {
+ switch strnorm(s) {
+ case "dynamictopicconfig":
+ return 1, nil
+ case "dynamicbrokerconfig":
+ return 2, nil
+ case "dynamicdefaultbrokerconfig":
+ return 3, nil
+ case "staticbrokerconfig":
+ return 4, nil
+ case "defaultconfig":
+ return 5, nil
+ case "dynamicbrokerloggerconfig":
+ return 6, nil
+ default:
+ return 0, fmt.Errorf("ConfigSource: unable to parse %q", s)
+ }
+}
+
+const (
+ ConfigSourceUnknown ConfigSource = 0
+ ConfigSourceDynamicTopicConfig ConfigSource = 1
+ ConfigSourceDynamicBrokerConfig ConfigSource = 2
+ ConfigSourceDynamicDefaultBrokerConfig ConfigSource = 3
+ ConfigSourceStaticBrokerConfig ConfigSource = 4
+ ConfigSourceDefaultConfig ConfigSource = 5
+ ConfigSourceDynamicBrokerLoggerConfig ConfigSource = 6
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ConfigSource) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ConfigSource) UnmarshalText(text []byte) error {
+ v, err := ParseConfigSource(string(text))
+ *e = v
+ return err
+}
+
+// A configuration data type.
+//
+// Possible values and their meanings:
+//
+// * 1 (BOOLEAN)
+//
+// * 2 (STRING)
+//
+// * 3 (INT)
+//
+// * 4 (SHORT)
+//
+// * 5 (LONG)
+//
+// * 6 (DOUBLE)
+//
+// * 7 (LIST)
+//
+// * 8 (CLASS)
+//
+// * 9 (PASSWORD)
+type ConfigType int8
+
+func (v ConfigType) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 1:
+ return "BOOLEAN"
+ case 2:
+ return "STRING"
+ case 3:
+ return "INT"
+ case 4:
+ return "SHORT"
+ case 5:
+ return "LONG"
+ case 6:
+ return "DOUBLE"
+ case 7:
+ return "LIST"
+ case 8:
+ return "CLASS"
+ case 9:
+ return "PASSWORD"
+ }
+}
+
+func ConfigTypeStrings() []string {
+ return []string{
+ "BOOLEAN",
+ "STRING",
+ "INT",
+ "SHORT",
+ "LONG",
+ "DOUBLE",
+ "LIST",
+ "CLASS",
+ "PASSWORD",
+ }
+}
+
+// ParseConfigType normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseConfigType(s string) (ConfigType, error) {
+ switch strnorm(s) {
+ case "boolean":
+ return 1, nil
+ case "string":
+ return 2, nil
+ case "int":
+ return 3, nil
+ case "short":
+ return 4, nil
+ case "long":
+ return 5, nil
+ case "double":
+ return 6, nil
+ case "list":
+ return 7, nil
+ case "class":
+ return 8, nil
+ case "password":
+ return 9, nil
+ default:
+ return 0, fmt.Errorf("ConfigType: unable to parse %q", s)
+ }
+}
+
+const (
+ ConfigTypeUnknown ConfigType = 0
+ ConfigTypeBoolean ConfigType = 1
+ ConfigTypeString ConfigType = 2
+ ConfigTypeInt ConfigType = 3
+ ConfigTypeShort ConfigType = 4
+ ConfigTypeLong ConfigType = 5
+ ConfigTypeDouble ConfigType = 6
+ ConfigTypeList ConfigType = 7
+ ConfigTypeClass ConfigType = 8
+ ConfigTypePassword ConfigType = 9
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ConfigType) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ConfigType) UnmarshalText(text []byte) error {
+ v, err := ParseConfigType(string(text))
+ *e = v
+ return err
+}
+
+// An incremental configuration operation.
+//
+// Possible values and their meanings:
+//
+// * 0 (SET)
+//
+// * 1 (DELETE)
+//
+// * 2 (APPEND)
+//
+// * 3 (SUBTRACT)
+type IncrementalAlterConfigOp int8
+
+func (v IncrementalAlterConfigOp) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 0:
+ return "SET"
+ case 1:
+ return "DELETE"
+ case 2:
+ return "APPEND"
+ case 3:
+ return "SUBTRACT"
+ }
+}
+
+func IncrementalAlterConfigOpStrings() []string {
+ return []string{
+ "SET",
+ "DELETE",
+ "APPEND",
+ "SUBTRACT",
+ }
+}
+
+// ParseIncrementalAlterConfigOp normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseIncrementalAlterConfigOp(s string) (IncrementalAlterConfigOp, error) {
+ switch strnorm(s) {
+ case "set":
+ return 0, nil
+ case "delete":
+ return 1, nil
+ case "append":
+ return 2, nil
+ case "subtract":
+ return 3, nil
+ default:
+ return 0, fmt.Errorf("IncrementalAlterConfigOp: unable to parse %q", s)
+ }
+}
+
+const (
+ IncrementalAlterConfigOpSet IncrementalAlterConfigOp = 0
+ IncrementalAlterConfigOpDelete IncrementalAlterConfigOp = 1
+ IncrementalAlterConfigOpAppend IncrementalAlterConfigOp = 2
+ IncrementalAlterConfigOpSubtract IncrementalAlterConfigOp = 3
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e IncrementalAlterConfigOp) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *IncrementalAlterConfigOp) UnmarshalText(text []byte) error {
+ v, err := ParseIncrementalAlterConfigOp(string(text))
+ *e = v
+ return err
+}
+
+// ACLResourceType is a type of resource to use for ACLs.
+//
+// Possible values and their meanings:
+//
+// * 1 (ANY)
+//
+// * 2 (TOPIC)
+//
+// * 3 (GROUP)
+//
+// * 4 (CLUSTER)
+//
+// * 5 (TRANSACTIONAL_ID)
+//
+// * 6 (DELEGATION_TOKEN)
+//
+// * 7 (USER)
+type ACLResourceType int8
+
+func (v ACLResourceType) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 1:
+ return "ANY"
+ case 2:
+ return "TOPIC"
+ case 3:
+ return "GROUP"
+ case 4:
+ return "CLUSTER"
+ case 5:
+ return "TRANSACTIONAL_ID"
+ case 6:
+ return "DELEGATION_TOKEN"
+ case 7:
+ return "USER"
+ }
+}
+
+func ACLResourceTypeStrings() []string {
+ return []string{
+ "ANY",
+ "TOPIC",
+ "GROUP",
+ "CLUSTER",
+ "TRANSACTIONAL_ID",
+ "DELEGATION_TOKEN",
+ "USER",
+ }
+}
+
+// ParseACLResourceType normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseACLResourceType(s string) (ACLResourceType, error) {
+ switch strnorm(s) {
+ case "any":
+ return 1, nil
+ case "topic":
+ return 2, nil
+ case "group":
+ return 3, nil
+ case "cluster":
+ return 4, nil
+ case "transactionalid":
+ return 5, nil
+ case "delegationtoken":
+ return 6, nil
+ case "user":
+ return 7, nil
+ default:
+ return 0, fmt.Errorf("ACLResourceType: unable to parse %q", s)
+ }
+}
+
+const (
+ ACLResourceTypeUnknown ACLResourceType = 0
+ ACLResourceTypeAny ACLResourceType = 1
+ ACLResourceTypeTopic ACLResourceType = 2
+ ACLResourceTypeGroup ACLResourceType = 3
+ ACLResourceTypeCluster ACLResourceType = 4
+ ACLResourceTypeTransactionalId ACLResourceType = 5
+ ACLResourceTypeDelegationToken ACLResourceType = 6
+ ACLResourceTypeUser ACLResourceType = 7
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ACLResourceType) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ACLResourceType) UnmarshalText(text []byte) error {
+ v, err := ParseACLResourceType(string(text))
+ *e = v
+ return err
+}
+
+// ACLResourcePatternType is how an acl's ResourceName is understood.
+//
+// This field was added with Kafka 2.0.0 for KIP-290.
+//
+// Possible values and their meanings:
+//
+// * 1 (ANY)
+// Matches anything.
+//
+// * 2 (MATCH)
+// Performs pattern matching; i.e., a literal match, or a prefix match, or wildcard.
+//
+// * 3 (LITERAL)
+// The name must be an exact match.
+//
+// * 4 (PREFIXED)
+// The name must have our requested name as a prefix (that is, "foo" will match on "foobar").
+type ACLResourcePatternType int8
+
+func (v ACLResourcePatternType) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 1:
+ return "ANY"
+ case 2:
+ return "MATCH"
+ case 3:
+ return "LITERAL"
+ case 4:
+ return "PREFIXED"
+ }
+}
+
+func ACLResourcePatternTypeStrings() []string {
+ return []string{
+ "ANY",
+ "MATCH",
+ "LITERAL",
+ "PREFIXED",
+ }
+}
+
+// ParseACLResourcePatternType normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseACLResourcePatternType(s string) (ACLResourcePatternType, error) {
+ switch strnorm(s) {
+ case "any":
+ return 1, nil
+ case "match":
+ return 2, nil
+ case "literal":
+ return 3, nil
+ case "prefixed":
+ return 4, nil
+ default:
+ return 0, fmt.Errorf("ACLResourcePatternType: unable to parse %q", s)
+ }
+}
+
+const (
+ ACLResourcePatternTypeUnknown ACLResourcePatternType = 0
+ ACLResourcePatternTypeAny ACLResourcePatternType = 1
+ ACLResourcePatternTypeMatch ACLResourcePatternType = 2
+ ACLResourcePatternTypeLiteral ACLResourcePatternType = 3
+ ACLResourcePatternTypePrefixed ACLResourcePatternType = 4
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ACLResourcePatternType) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ACLResourcePatternType) UnmarshalText(text []byte) error {
+ v, err := ParseACLResourcePatternType(string(text))
+ *e = v
+ return err
+}
+
+// An ACL permission type.
+//
+// Possible values and their meanings:
+//
+// * 1 (ANY)
+// Any permission.
+//
+// * 2 (DENY)
+// Any deny permission.
+//
+// * 3 (ALLOW)
+// Any allow permission.
+type ACLPermissionType int8
+
+func (v ACLPermissionType) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 1:
+ return "ANY"
+ case 2:
+ return "DENY"
+ case 3:
+ return "ALLOW"
+ }
+}
+
+func ACLPermissionTypeStrings() []string {
+ return []string{
+ "ANY",
+ "DENY",
+ "ALLOW",
+ }
+}
+
+// ParseACLPermissionType normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseACLPermissionType(s string) (ACLPermissionType, error) {
+ switch strnorm(s) {
+ case "any":
+ return 1, nil
+ case "deny":
+ return 2, nil
+ case "allow":
+ return 3, nil
+ default:
+ return 0, fmt.Errorf("ACLPermissionType: unable to parse %q", s)
+ }
+}
+
+const (
+ ACLPermissionTypeUnknown ACLPermissionType = 0
+ ACLPermissionTypeAny ACLPermissionType = 1
+ ACLPermissionTypeDeny ACLPermissionType = 2
+ ACLPermissionTypeAllow ACLPermissionType = 3
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ACLPermissionType) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ACLPermissionType) UnmarshalText(text []byte) error {
+ v, err := ParseACLPermissionType(string(text))
+ *e = v
+ return err
+}
+
+// An ACL operation.
+//
+// Possible values and their meanings:
+//
+// * 1 (ANY)
+// Matches anything.
+//
+// * 2 (ALL)
+// Matches anything granted all permissions.
+//
+// * 3 (READ)
+//
+// * 4 (WRITE)
+//
+// * 5 (CREATE)
+//
+// * 6 (DELETE)
+//
+// * 7 (ALTER)
+//
+// * 8 (DESCRIBE)
+//
+// * 9 (CLUSTER_ACTION)
+//
+// * 10 (DESCRIBE_CONFIGS)
+//
+// * 11 (ALTER_CONFIGS)
+//
+// * 12 (IDEMPOTENT_WRITE)
+//
+// * 13 (CREATE_TOKENS)
+//
+// * 14 (DESCRIBE_TOKENS)
+type ACLOperation int8
+
+func (v ACLOperation) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 1:
+ return "ANY"
+ case 2:
+ return "ALL"
+ case 3:
+ return "READ"
+ case 4:
+ return "WRITE"
+ case 5:
+ return "CREATE"
+ case 6:
+ return "DELETE"
+ case 7:
+ return "ALTER"
+ case 8:
+ return "DESCRIBE"
+ case 9:
+ return "CLUSTER_ACTION"
+ case 10:
+ return "DESCRIBE_CONFIGS"
+ case 11:
+ return "ALTER_CONFIGS"
+ case 12:
+ return "IDEMPOTENT_WRITE"
+ case 13:
+ return "CREATE_TOKENS"
+ case 14:
+ return "DESCRIBE_TOKENS"
+ }
+}
+
+func ACLOperationStrings() []string {
+ return []string{
+ "ANY",
+ "ALL",
+ "READ",
+ "WRITE",
+ "CREATE",
+ "DELETE",
+ "ALTER",
+ "DESCRIBE",
+ "CLUSTER_ACTION",
+ "DESCRIBE_CONFIGS",
+ "ALTER_CONFIGS",
+ "IDEMPOTENT_WRITE",
+ "CREATE_TOKENS",
+ "DESCRIBE_TOKENS",
+ }
+}
+
+// ParseACLOperation normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseACLOperation(s string) (ACLOperation, error) {
+ switch strnorm(s) {
+ case "any":
+ return 1, nil
+ case "all":
+ return 2, nil
+ case "read":
+ return 3, nil
+ case "write":
+ return 4, nil
+ case "create":
+ return 5, nil
+ case "delete":
+ return 6, nil
+ case "alter":
+ return 7, nil
+ case "describe":
+ return 8, nil
+ case "clusteraction":
+ return 9, nil
+ case "describeconfigs":
+ return 10, nil
+ case "alterconfigs":
+ return 11, nil
+ case "idempotentwrite":
+ return 12, nil
+ case "createtokens":
+ return 13, nil
+ case "describetokens":
+ return 14, nil
+ default:
+ return 0, fmt.Errorf("ACLOperation: unable to parse %q", s)
+ }
+}
+
+const (
+ ACLOperationUnknown ACLOperation = 0
+ ACLOperationAny ACLOperation = 1
+ ACLOperationAll ACLOperation = 2
+ ACLOperationRead ACLOperation = 3
+ ACLOperationWrite ACLOperation = 4
+ ACLOperationCreate ACLOperation = 5
+ ACLOperationDelete ACLOperation = 6
+ ACLOperationAlter ACLOperation = 7
+ ACLOperationDescribe ACLOperation = 8
+ ACLOperationClusterAction ACLOperation = 9
+ ACLOperationDescribeConfigs ACLOperation = 10
+ ACLOperationAlterConfigs ACLOperation = 11
+ ACLOperationIdempotentWrite ACLOperation = 12
+ ACLOperationCreateTokens ACLOperation = 13
+ ACLOperationDescribeTokens ACLOperation = 14
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ACLOperation) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ACLOperation) UnmarshalText(text []byte) error {
+ v, err := ParseACLOperation(string(text))
+ *e = v
+ return err
+}
+
+// TransactionState is the state of a transaction.
+//
+// Possible values and their meanings:
+//
+// * 0 (Empty)
+//
+// * 1 (Ongoing)
+//
+// * 2 (PrepareCommit)
+//
+// * 3 (PrepareAbort)
+//
+// * 4 (CompleteCommit)
+//
+// * 5 (CompleteAbort)
+//
+// * 6 (Dead)
+//
+// * 7 (PrepareEpochFence)
+type TransactionState int8
+
+func (v TransactionState) String() string {
+ switch v {
+ default:
+ return "Unknown"
+ case 0:
+ return "Empty"
+ case 1:
+ return "Ongoing"
+ case 2:
+ return "PrepareCommit"
+ case 3:
+ return "PrepareAbort"
+ case 4:
+ return "CompleteCommit"
+ case 5:
+ return "CompleteAbort"
+ case 6:
+ return "Dead"
+ case 7:
+ return "PrepareEpochFence"
+ }
+}
+
+func TransactionStateStrings() []string {
+ return []string{
+ "Empty",
+ "Ongoing",
+ "PrepareCommit",
+ "PrepareAbort",
+ "CompleteCommit",
+ "CompleteAbort",
+ "Dead",
+ "PrepareEpochFence",
+ }
+}
+
+// ParseTransactionState normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseTransactionState(s string) (TransactionState, error) {
+ switch strnorm(s) {
+ case "empty":
+ return 0, nil
+ case "ongoing":
+ return 1, nil
+ case "preparecommit":
+ return 2, nil
+ case "prepareabort":
+ return 3, nil
+ case "completecommit":
+ return 4, nil
+ case "completeabort":
+ return 5, nil
+ case "dead":
+ return 6, nil
+ case "prepareepochfence":
+ return 7, nil
+ default:
+ return 0, fmt.Errorf("TransactionState: unable to parse %q", s)
+ }
+}
+
+const (
+ TransactionStateEmpty TransactionState = 0
+ TransactionStateOngoing TransactionState = 1
+ TransactionStatePrepareCommit TransactionState = 2
+ TransactionStatePrepareAbort TransactionState = 3
+ TransactionStateCompleteCommit TransactionState = 4
+ TransactionStateCompleteAbort TransactionState = 5
+ TransactionStateDead TransactionState = 6
+ TransactionStatePrepareEpochFence TransactionState = 7
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e TransactionState) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *TransactionState) UnmarshalText(text []byte) error {
+ v, err := ParseTransactionState(string(text))
+ *e = v
+ return err
+}
+
+// QuotasMatchType specifies how to match a Quota entity as part of the DescribeClientQuotasRequestComponent.
+//
+// Possible values and their meanings:
+//
+// * 0 (EXACT)
+// Matches all quotas for the given EntityType with names equal to the Match field.
+//
+// * 1 (DEFAULT)
+// Matches the default for the given EntityType.
+//
+// * 2 (ANY)
+// Matches all named quotas and default quotas for the given EntityType.
+type QuotasMatchType int8
+
+func (v QuotasMatchType) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 0:
+ return "EXACT"
+ case 1:
+ return "DEFAULT"
+ case 2:
+ return "ANY"
+ }
+}
+
+func QuotasMatchTypeStrings() []string {
+ return []string{
+ "EXACT",
+ "DEFAULT",
+ "ANY",
+ }
+}
+
+// ParseQuotasMatchType normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseQuotasMatchType(s string) (QuotasMatchType, error) {
+ switch strnorm(s) {
+ case "exact":
+ return 0, nil
+ case "default":
+ return 1, nil
+ case "any":
+ return 2, nil
+ default:
+ return 0, fmt.Errorf("QuotasMatchType: unable to parse %q", s)
+ }
+}
+
+const (
+ QuotasMatchTypeExact QuotasMatchType = 0
+ QuotasMatchTypeDefault QuotasMatchType = 1
+ QuotasMatchTypeAny QuotasMatchType = 2
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e QuotasMatchType) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *QuotasMatchType) UnmarshalText(text []byte) error {
+ v, err := ParseQuotasMatchType(string(text))
+ *e = v
+ return err
+}
+
+// Possible values and their meanings:
+//
+// * 0 (ABORT)
+//
+// * 1 (COMMIT)
+//
+// * 2 (QUORUM_REASSIGNMENT)
+//
+// * 3 (LEADER_CHANGE)
+type ControlRecordKeyType int8
+
+func (v ControlRecordKeyType) String() string {
+ switch v {
+ default:
+ return "UNKNOWN"
+ case 0:
+ return "ABORT"
+ case 1:
+ return "COMMIT"
+ case 2:
+ return "QUORUM_REASSIGNMENT"
+ case 3:
+ return "LEADER_CHANGE"
+ }
+}
+
+func ControlRecordKeyTypeStrings() []string {
+ return []string{
+ "ABORT",
+ "COMMIT",
+ "QUORUM_REASSIGNMENT",
+ "LEADER_CHANGE",
+ }
+}
+
+// ParseControlRecordKeyType normalizes the input s and returns
+// the value represented by the string.
+//
+// Normalizing works by stripping all dots, underscores, and dashes,
+// trimming spaces, and lowercasing.
+func ParseControlRecordKeyType(s string) (ControlRecordKeyType, error) {
+ switch strnorm(s) {
+ case "abort":
+ return 0, nil
+ case "commit":
+ return 1, nil
+ case "quorumreassignment":
+ return 2, nil
+ case "leaderchange":
+ return 3, nil
+ default:
+ return 0, fmt.Errorf("ControlRecordKeyType: unable to parse %q", s)
+ }
+}
+
+const (
+ ControlRecordKeyTypeAbort ControlRecordKeyType = 0
+ ControlRecordKeyTypeCommit ControlRecordKeyType = 1
+ ControlRecordKeyTypeQuorumReassignment ControlRecordKeyType = 2
+ ControlRecordKeyTypeLeaderChange ControlRecordKeyType = 3
+)
+
+// MarshalText implements encoding.TextMarshaler.
+func (e ControlRecordKeyType) MarshalText() (text []byte, err error) {
+ return []byte(e.String()), nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (e *ControlRecordKeyType) UnmarshalText(text []byte) error {
+ v, err := ParseControlRecordKeyType(string(text))
+ *e = v
+ return err
+}
+
+func strnorm(s string) string {
+ s = strings.ReplaceAll(s, ".", "")
+ s = strings.ReplaceAll(s, "_", "")
+ s = strings.ReplaceAll(s, "-", "")
+ s = strings.TrimSpace(s)
+ s = strings.ToLower(s)
+ return s
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kmsg/internal/kbin/primitives.go b/vendor/github.com/twmb/franz-go/pkg/kmsg/internal/kbin/primitives.go
new file mode 100644
index 0000000000000..2c5990d06a212
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kmsg/internal/kbin/primitives.go
@@ -0,0 +1,850 @@
+// Package kbin contains Kafka primitive reading and writing functions.
+package kbin
+
+import (
+ "encoding/binary"
+ "errors"
+ "math"
+ "math/bits"
+ "reflect"
+ "unsafe"
+)
+
+// This file contains primitive type encoding and decoding.
+//
+// The Reader helper can be used even when content runs out
+// or an error is hit; all other number requests will return
+// zero so a decode will basically no-op.
+
+// ErrNotEnoughData is returned when a type could not fully decode
+// from a slice because the slice did not have enough data.
+var ErrNotEnoughData = errors.New("response did not contain enough data to be valid")
+
+// AppendBool appends 1 for true or 0 for false to dst.
+func AppendBool(dst []byte, v bool) []byte {
+ if v {
+ return append(dst, 1)
+ }
+ return append(dst, 0)
+}
+
+// AppendInt8 appends an int8 to dst.
+func AppendInt8(dst []byte, i int8) []byte {
+ return append(dst, byte(i))
+}
+
+// AppendInt16 appends a big endian int16 to dst.
+func AppendInt16(dst []byte, i int16) []byte {
+ return AppendUint16(dst, uint16(i))
+}
+
+// AppendUint16 appends a big endian uint16 to dst.
+func AppendUint16(dst []byte, u uint16) []byte {
+ return append(dst, byte(u>>8), byte(u))
+}
+
+// AppendInt32 appends a big endian int32 to dst.
+func AppendInt32(dst []byte, i int32) []byte {
+ return AppendUint32(dst, uint32(i))
+}
+
+// AppendInt64 appends a big endian int64 to dst.
+func AppendInt64(dst []byte, i int64) []byte {
+ return appendUint64(dst, uint64(i))
+}
+
+// AppendFloat64 appends a big endian float64 to dst.
+func AppendFloat64(dst []byte, f float64) []byte {
+ return appendUint64(dst, math.Float64bits(f))
+}
+
+// AppendUuid appends the 16 uuid bytes to dst.
+func AppendUuid(dst []byte, uuid [16]byte) []byte {
+ return append(dst, uuid[:]...)
+}
+
+func appendUint64(dst []byte, u uint64) []byte {
+ return append(dst, byte(u>>56), byte(u>>48), byte(u>>40), byte(u>>32),
+ byte(u>>24), byte(u>>16), byte(u>>8), byte(u))
+}
+
+// AppendUint32 appends a big endian uint32 to dst.
+func AppendUint32(dst []byte, u uint32) []byte {
+ return append(dst, byte(u>>24), byte(u>>16), byte(u>>8), byte(u))
+}
+
+// uvarintLens could only be length 65, but using 256 allows bounds check
+// elimination on lookup.
+const uvarintLens = "\x01\x01\x01\x01\x01\x01\x01\x01\x02\x02\x02\x02\x02\x02\x02\x03\x03\x03\x03\x03\x03\x03\x04\x04\x04\x04\x04\x04\x04\x05\x05\x05\x05\x05\x05\x05\x06\x06\x06\x06\x06\x06\x06\x07\x07\x07\x07\x07\x07\x07\x08\x08\x08\x08\x08\x08\x08\x09\x09\x09\x09\x09\x09\x09\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
+
+// VarintLen returns how long i would be if it were varint encoded.
+func VarintLen(i int32) int {
+ u := uint32(i)<<1 ^ uint32(i>>31)
+ return UvarintLen(u)
+}
+
+// UvarintLen returns how long u would be if it were uvarint encoded.
+func UvarintLen(u uint32) int {
+ return int(uvarintLens[byte(bits.Len32(u))])
+}
+
+func uvarlongLen(u uint64) int {
+ return int(uvarintLens[byte(bits.Len64(u))])
+}
+
+// Varint is a loop unrolled 32 bit varint decoder. The return semantics
+// are the same as binary.Varint, with the added benefit that overflows
+// in 5 byte encodings are handled rather than left to the user.
+func Varint(in []byte) (int32, int) {
+ x, n := Uvarint(in)
+ return int32((x >> 1) ^ -(x & 1)), n
+}
+
+// Uvarint is a loop unrolled 32 bit uvarint decoder. The return semantics
+// are the same as binary.Uvarint, with the added benefit that overflows
+// in 5 byte encodings are handled rather than left to the user.
+func Uvarint(in []byte) (uint32, int) {
+ var x uint32
+ var overflow int
+
+ if len(in) < 1 {
+ goto fail
+ }
+
+ x = uint32(in[0] & 0x7f)
+ if in[0]&0x80 == 0 {
+ return x, 1
+ } else if len(in) < 2 {
+ goto fail
+ }
+
+ x |= uint32(in[1]&0x7f) << 7
+ if in[1]&0x80 == 0 {
+ return x, 2
+ } else if len(in) < 3 {
+ goto fail
+ }
+
+ x |= uint32(in[2]&0x7f) << 14
+ if in[2]&0x80 == 0 {
+ return x, 3
+ } else if len(in) < 4 {
+ goto fail
+ }
+
+ x |= uint32(in[3]&0x7f) << 21
+ if in[3]&0x80 == 0 {
+ return x, 4
+ } else if len(in) < 5 {
+ goto fail
+ }
+
+ x |= uint32(in[4]) << 28
+ if in[4] <= 0x0f {
+ return x, 5
+ }
+
+ overflow = -5
+
+fail:
+ return 0, overflow
+}
+
+// Varlong is a loop unrolled 64 bit varint decoder. The return semantics
+// are the same as binary.Varint, with the added benefit that overflows
+// in 10 byte encodings are handled rather than left to the user.
+func Varlong(in []byte) (int64, int) {
+ x, n := uvarlong(in)
+ return int64((x >> 1) ^ -(x & 1)), n
+}
+
+func uvarlong(in []byte) (uint64, int) {
+ var x uint64
+ var overflow int
+
+ if len(in) < 1 {
+ goto fail
+ }
+
+ x = uint64(in[0] & 0x7f)
+ if in[0]&0x80 == 0 {
+ return x, 1
+ } else if len(in) < 2 {
+ goto fail
+ }
+
+ x |= uint64(in[1]&0x7f) << 7
+ if in[1]&0x80 == 0 {
+ return x, 2
+ } else if len(in) < 3 {
+ goto fail
+ }
+
+ x |= uint64(in[2]&0x7f) << 14
+ if in[2]&0x80 == 0 {
+ return x, 3
+ } else if len(in) < 4 {
+ goto fail
+ }
+
+ x |= uint64(in[3]&0x7f) << 21
+ if in[3]&0x80 == 0 {
+ return x, 4
+ } else if len(in) < 5 {
+ goto fail
+ }
+
+ x |= uint64(in[4]&0x7f) << 28
+ if in[4]&0x80 == 0 {
+ return x, 5
+ } else if len(in) < 6 {
+ goto fail
+ }
+
+ x |= uint64(in[5]&0x7f) << 35
+ if in[5]&0x80 == 0 {
+ return x, 6
+ } else if len(in) < 7 {
+ goto fail
+ }
+
+ x |= uint64(in[6]&0x7f) << 42
+ if in[6]&0x80 == 0 {
+ return x, 7
+ } else if len(in) < 8 {
+ goto fail
+ }
+
+ x |= uint64(in[7]&0x7f) << 49
+ if in[7]&0x80 == 0 {
+ return x, 8
+ } else if len(in) < 9 {
+ goto fail
+ }
+
+ x |= uint64(in[8]&0x7f) << 56
+ if in[8]&0x80 == 0 {
+ return x, 9
+ } else if len(in) < 10 {
+ goto fail
+ }
+
+ x |= uint64(in[9]) << 63
+ if in[9] <= 0x01 {
+ return x, 10
+ }
+
+ overflow = -10
+
+fail:
+ return 0, overflow
+}
+
+// AppendVarint appends a varint encoded i to dst.
+func AppendVarint(dst []byte, i int32) []byte {
+ return AppendUvarint(dst, uint32(i)<<1^uint32(i>>31))
+}
+
+// AppendUvarint appends a uvarint encoded u to dst.
+func AppendUvarint(dst []byte, u uint32) []byte {
+ switch UvarintLen(u) {
+ case 5:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte(u>>28))
+ case 4:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte(u>>21))
+ case 3:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte(u>>14))
+ case 2:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte(u>>7))
+ case 1:
+ return append(dst, byte(u))
+ }
+ return dst
+}
+
+// AppendVarlong appends a varint encoded i to dst.
+func AppendVarlong(dst []byte, i int64) []byte {
+ return appendUvarlong(dst, uint64(i)<<1^uint64(i>>63))
+}
+
+func appendUvarlong(dst []byte, u uint64) []byte {
+ switch uvarlongLen(u) {
+ case 10:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte((u>>42)&0x7f|0x80),
+ byte((u>>49)&0x7f|0x80),
+ byte((u>>56)&0x7f|0x80),
+ byte(u>>63))
+ case 9:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte((u>>42)&0x7f|0x80),
+ byte((u>>49)&0x7f|0x80),
+ byte(u>>56))
+ case 8:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte((u>>42)&0x7f|0x80),
+ byte(u>>49))
+ case 7:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte((u>>35)&0x7f|0x80),
+ byte(u>>42))
+ case 6:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte((u>>28)&0x7f|0x80),
+ byte(u>>35))
+ case 5:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte((u>>21)&0x7f|0x80),
+ byte(u>>28))
+ case 4:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte((u>>14)&0x7f|0x80),
+ byte(u>>21))
+ case 3:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte((u>>7)&0x7f|0x80),
+ byte(u>>14))
+ case 2:
+ return append(dst,
+ byte(u&0x7f|0x80),
+ byte(u>>7))
+ case 1:
+ return append(dst, byte(u))
+ }
+ return dst
+}
+
+// AppendString appends a string to dst prefixed with its int16 length.
+func AppendString(dst []byte, s string) []byte {
+ dst = AppendInt16(dst, int16(len(s)))
+ return append(dst, s...)
+}
+
+// AppendCompactString appends a string to dst prefixed with its uvarint length
+// starting at 1; 0 is reserved for null, which compact strings are not
+// (nullable compact ones are!). Thus, the length is the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactString(dst []byte, s string) []byte {
+ dst = AppendUvarint(dst, 1+uint32(len(s)))
+ return append(dst, s...)
+}
+
+// AppendNullableString appends potentially nil string to dst prefixed with its
+// int16 length or int16(-1) if nil.
+func AppendNullableString(dst []byte, s *string) []byte {
+ if s == nil {
+ return AppendInt16(dst, -1)
+ }
+ return AppendString(dst, *s)
+}
+
+// AppendCompactNullableString appends a potentially nil string to dst with its
+// uvarint length starting at 1, with 0 indicating null. Thus, the length is
+// the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactNullableString(dst []byte, s *string) []byte {
+ if s == nil {
+ return AppendUvarint(dst, 0)
+ }
+ return AppendCompactString(dst, *s)
+}
+
+// AppendBytes appends bytes to dst prefixed with its int32 length.
+func AppendBytes(dst, b []byte) []byte {
+ dst = AppendInt32(dst, int32(len(b)))
+ return append(dst, b...)
+}
+
+// AppendCompactBytes appends bytes to dst prefixed with a its uvarint length
+// starting at 1; 0 is reserved for null, which compact bytes are not (nullable
+// compact ones are!). Thus, the length is the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactBytes(dst, b []byte) []byte {
+ dst = AppendUvarint(dst, 1+uint32(len(b)))
+ return append(dst, b...)
+}
+
+// AppendNullableBytes appends a potentially nil slice to dst prefixed with its
+// int32 length or int32(-1) if nil.
+func AppendNullableBytes(dst, b []byte) []byte {
+ if b == nil {
+ return AppendInt32(dst, -1)
+ }
+ return AppendBytes(dst, b)
+}
+
+// AppendCompactNullableBytes appends a potentially nil slice to dst with its
+// uvarint length starting at 1, with 0 indicating null. Thus, the length is
+// the decoded uvarint - 1.
+//
+// For KIP-482.
+func AppendCompactNullableBytes(dst, b []byte) []byte {
+ if b == nil {
+ return AppendUvarint(dst, 0)
+ }
+ return AppendCompactBytes(dst, b)
+}
+
+// AppendVarintString appends a string to dst prefixed with its length encoded
+// as a varint.
+func AppendVarintString(dst []byte, s string) []byte {
+ dst = AppendVarint(dst, int32(len(s)))
+ return append(dst, s...)
+}
+
+// AppendVarintBytes appends a slice to dst prefixed with its length encoded as
+// a varint.
+func AppendVarintBytes(dst, b []byte) []byte {
+ if b == nil {
+ return AppendVarint(dst, -1)
+ }
+ dst = AppendVarint(dst, int32(len(b)))
+ return append(dst, b...)
+}
+
+// AppendArrayLen appends the length of an array as an int32 to dst.
+func AppendArrayLen(dst []byte, l int) []byte {
+ return AppendInt32(dst, int32(l))
+}
+
+// AppendCompactArrayLen appends the length of an array as a uvarint to dst
+// as the length + 1.
+//
+// For KIP-482.
+func AppendCompactArrayLen(dst []byte, l int) []byte {
+ return AppendUvarint(dst, 1+uint32(l))
+}
+
+// AppendNullableArrayLen appends the length of an array as an int32 to dst,
+// or -1 if isNil is true.
+func AppendNullableArrayLen(dst []byte, l int, isNil bool) []byte {
+ if isNil {
+ return AppendInt32(dst, -1)
+ }
+ return AppendInt32(dst, int32(l))
+}
+
+// AppendCompactNullableArrayLen appends the length of an array as a uvarint to
+// dst as the length + 1; if isNil is true, this appends 0 as a uvarint.
+//
+// For KIP-482.
+func AppendCompactNullableArrayLen(dst []byte, l int, isNil bool) []byte {
+ if isNil {
+ return AppendUvarint(dst, 0)
+ }
+ return AppendUvarint(dst, 1+uint32(l))
+}
+
+// Reader is used to decode Kafka messages.
+//
+// For all functions on Reader, if the reader has been invalidated, functions
+// return defaults (false, 0, nil, ""). Use Complete to detect if the reader
+// was invalidated or if the reader has remaining data.
+type Reader struct {
+ Src []byte
+ bad bool
+}
+
+// Bool returns a bool from the reader.
+func (b *Reader) Bool() bool {
+ if len(b.Src) < 1 {
+ b.bad = true
+ b.Src = nil
+ return false
+ }
+ t := b.Src[0] != 0 // if '0', false
+ b.Src = b.Src[1:]
+ return t
+}
+
+// Int8 returns an int8 from the reader.
+func (b *Reader) Int8() int8 {
+ if len(b.Src) < 1 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := b.Src[0]
+ b.Src = b.Src[1:]
+ return int8(r)
+}
+
+// Int16 returns an int16 from the reader.
+func (b *Reader) Int16() int16 {
+ if len(b.Src) < 2 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := int16(binary.BigEndian.Uint16(b.Src))
+ b.Src = b.Src[2:]
+ return r
+}
+
+// Uint16 returns an uint16 from the reader.
+func (b *Reader) Uint16() uint16 {
+ if len(b.Src) < 2 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := binary.BigEndian.Uint16(b.Src)
+ b.Src = b.Src[2:]
+ return r
+}
+
+// Int32 returns an int32 from the reader.
+func (b *Reader) Int32() int32 {
+ if len(b.Src) < 4 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := int32(binary.BigEndian.Uint32(b.Src))
+ b.Src = b.Src[4:]
+ return r
+}
+
+// Int64 returns an int64 from the reader.
+func (b *Reader) Int64() int64 {
+ return int64(b.readUint64())
+}
+
+// Uuid returns a uuid from the reader.
+func (b *Reader) Uuid() [16]byte {
+ var r [16]byte
+ copy(r[:], b.Span(16))
+ return r
+}
+
+// Float64 returns a float64 from the reader.
+func (b *Reader) Float64() float64 {
+ return math.Float64frombits(b.readUint64())
+}
+
+func (b *Reader) readUint64() uint64 {
+ if len(b.Src) < 8 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := binary.BigEndian.Uint64(b.Src)
+ b.Src = b.Src[8:]
+ return r
+}
+
+// Uint32 returns a uint32 from the reader.
+func (b *Reader) Uint32() uint32 {
+ if len(b.Src) < 4 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ r := binary.BigEndian.Uint32(b.Src)
+ b.Src = b.Src[4:]
+ return r
+}
+
+// Varint returns a varint int32 from the reader.
+func (b *Reader) Varint() int32 {
+ val, n := Varint(b.Src)
+ if n <= 0 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ b.Src = b.Src[n:]
+ return val
+}
+
+// Varlong returns a varlong int64 from the reader.
+func (b *Reader) Varlong() int64 {
+ val, n := Varlong(b.Src)
+ if n <= 0 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ b.Src = b.Src[n:]
+ return val
+}
+
+// Uvarint returns a uvarint encoded uint32 from the reader.
+func (b *Reader) Uvarint() uint32 {
+ val, n := Uvarint(b.Src)
+ if n <= 0 {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ b.Src = b.Src[n:]
+ return val
+}
+
+// Span returns l bytes from the reader.
+func (b *Reader) Span(l int) []byte {
+ if len(b.Src) < l || l < 0 {
+ b.bad = true
+ b.Src = nil
+ return nil
+ }
+ r := b.Src[:l:l]
+ b.Src = b.Src[l:]
+ return r
+}
+
+// UnsafeString returns a Kafka string from the reader without allocating using
+// the unsafe package. This must be used with care; note the string holds a
+// reference to the original slice.
+func (b *Reader) UnsafeString() string {
+ l := b.Int16()
+ return UnsafeString(b.Span(int(l)))
+}
+
+// String returns a Kafka string from the reader.
+func (b *Reader) String() string {
+ l := b.Int16()
+ return string(b.Span(int(l)))
+}
+
+// UnsafeCompactString returns a Kafka compact string from the reader without
+// allocating using the unsafe package. This must be used with care; note the
+// string holds a reference to the original slice.
+func (b *Reader) UnsafeCompactString() string {
+ l := int(b.Uvarint()) - 1
+ return UnsafeString(b.Span(l))
+}
+
+// CompactString returns a Kafka compact string from the reader.
+func (b *Reader) CompactString() string {
+ l := int(b.Uvarint()) - 1
+ return string(b.Span(l))
+}
+
+// UnsafeNullableString returns a Kafka nullable string from the reader without
+// allocating using the unsafe package. This must be used with care; note the
+// string holds a reference to the original slice.
+func (b *Reader) UnsafeNullableString() *string {
+ l := b.Int16()
+ if l < 0 {
+ return nil
+ }
+ s := UnsafeString(b.Span(int(l)))
+ return &s
+}
+
+// NullableString returns a Kafka nullable string from the reader.
+func (b *Reader) NullableString() *string {
+ l := b.Int16()
+ if l < 0 {
+ return nil
+ }
+ s := string(b.Span(int(l)))
+ return &s
+}
+
+// UnsafeCompactNullableString returns a Kafka compact nullable string from the
+// reader without allocating using the unsafe package. This must be used with
+// care; note the string holds a reference to the original slice.
+func (b *Reader) UnsafeCompactNullableString() *string {
+ l := int(b.Uvarint()) - 1
+ if l < 0 {
+ return nil
+ }
+ s := UnsafeString(b.Span(l))
+ return &s
+}
+
+// CompactNullableString returns a Kafka compact nullable string from the
+// reader.
+func (b *Reader) CompactNullableString() *string {
+ l := int(b.Uvarint()) - 1
+ if l < 0 {
+ return nil
+ }
+ s := string(b.Span(l))
+ return &s
+}
+
+// Bytes returns a Kafka byte array from the reader.
+//
+// This never returns nil.
+func (b *Reader) Bytes() []byte {
+ l := b.Int32()
+ // This is not to spec, but it is not clearly documented and Microsoft
+ // EventHubs fails here. -1 means null, which should throw an
+ // exception. EventHubs uses -1 to mean "does not exist" on some
+ // non-nullable fields.
+ //
+ // Until EventHubs is fixed, we return an empty byte slice for null.
+ if l == -1 {
+ return []byte{}
+ }
+ return b.Span(int(l))
+}
+
+// CompactBytes returns a Kafka compact byte array from the reader.
+//
+// This never returns nil.
+func (b *Reader) CompactBytes() []byte {
+ l := int(b.Uvarint()) - 1
+ if l == -1 { // same as above: -1 should not be allowed here
+ return []byte{}
+ }
+ return b.Span(l)
+}
+
+// NullableBytes returns a Kafka nullable byte array from the reader, returning
+// nil as appropriate.
+func (b *Reader) NullableBytes() []byte {
+ l := b.Int32()
+ if l < 0 {
+ return nil
+ }
+ r := b.Span(int(l))
+ return r
+}
+
+// CompactNullableBytes returns a Kafka compact nullable byte array from the
+// reader, returning nil as appropriate.
+func (b *Reader) CompactNullableBytes() []byte {
+ l := int(b.Uvarint()) - 1
+ if l < 0 {
+ return nil
+ }
+ r := b.Span(l)
+ return r
+}
+
+// ArrayLen returns a Kafka array length from the reader.
+func (b *Reader) ArrayLen() int32 {
+ r := b.Int32()
+ // The min size of a Kafka type is a byte, so if we do not have
+ // at least the array length of bytes left, it is bad.
+ if len(b.Src) < int(r) {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ return r
+}
+
+// VarintArrayLen returns a Kafka array length from the reader.
+func (b *Reader) VarintArrayLen() int32 {
+ r := b.Varint()
+ // The min size of a Kafka type is a byte, so if we do not have
+ // at least the array length of bytes left, it is bad.
+ if len(b.Src) < int(r) {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ return r
+}
+
+// CompactArrayLen returns a Kafka compact array length from the reader.
+func (b *Reader) CompactArrayLen() int32 {
+ r := int32(b.Uvarint()) - 1
+ // The min size of a Kafka type is a byte, so if we do not have
+ // at least the array length of bytes left, it is bad.
+ if len(b.Src) < int(r) {
+ b.bad = true
+ b.Src = nil
+ return 0
+ }
+ return r
+}
+
+// VarintBytes returns a Kafka encoded varint array from the reader, returning
+// nil as appropriate.
+func (b *Reader) VarintBytes() []byte {
+ l := b.Varint()
+ if l < 0 {
+ return nil
+ }
+ return b.Span(int(l))
+}
+
+// UnsafeVarintString returns a Kafka encoded varint string from the reader
+// without allocating using the unsafe package. This must be used with care;
+// note the string holds a reference to the original slice.
+func (b *Reader) UnsafeVarintString() string {
+ return UnsafeString(b.VarintBytes())
+}
+
+// VarintString returns a Kafka encoded varint string from the reader.
+func (b *Reader) VarintString() string {
+ return string(b.VarintBytes())
+}
+
+// Complete returns ErrNotEnoughData if the source ran out while decoding.
+func (b *Reader) Complete() error {
+ if b.bad {
+ return ErrNotEnoughData
+ }
+ return nil
+}
+
+// Ok returns true if the reader is still ok.
+func (b *Reader) Ok() bool {
+ return !b.bad
+}
+
+// UnsafeString returns the slice as a string using unsafe rule (6).
+func UnsafeString(slice []byte) string {
+ var str string
+ strhdr := (*reflect.StringHeader)(unsafe.Pointer(&str)) //nolint:gosec // known way to convert slice to string
+ strhdr.Data = ((*reflect.SliceHeader)(unsafe.Pointer(&slice))).Data //nolint:gosec // known way to convert slice to string
+ strhdr.Len = len(slice)
+ return str
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kmsg/record.go b/vendor/github.com/twmb/franz-go/pkg/kmsg/record.go
new file mode 100644
index 0000000000000..86499fd79660f
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kmsg/record.go
@@ -0,0 +1,174 @@
+package kmsg
+
+import "github.com/twmb/franz-go/pkg/kmsg/internal/kbin"
+
+// A Record is a Kafka v0.11.0.0 record. It corresponds to an individual
+// message as it is written on the wire.
+type Record struct {
+ // Length is the length of this record on the wire of everything that
+ // follows this field. It is an int32 encoded as a varint.
+ Length int32
+
+ // Attributes are record level attributes. This field currently is unused.
+ Attributes int8
+
+ // TimestampDelta is the millisecond delta of this record's timestamp
+ // from the record's RecordBatch's FirstTimestamp.
+ //
+ // NOTE: this is actually an int64 but we cannot change the type for
+ // backwards compatibility. Use TimestampDelta64.
+ TimestampDelta int32
+ TimestampDelta64 int64
+
+ // OffsetDelta is the delta of this record's offset from the record's
+ // RecordBatch's FirstOffset.
+ //
+ // For producing, this is usually equal to the index of the record in
+ // the record batch.
+ OffsetDelta int32
+
+ // Key is an blob of data for a record.
+ //
+ // Key's are usually used for hashing the record to specific Kafka partitions.
+ Key []byte
+
+ // Value is a blob of data. This field is the main "message" portion of a
+ // record.
+ Value []byte
+
+ // Headers are optional user provided metadata for records. Unlike normal
+ // arrays, the number of headers is encoded as a varint.
+ Headers []Header
+}
+
+func (v *Record) AppendTo(dst []byte) []byte {
+ {
+ v := v.Length
+ dst = kbin.AppendVarint(dst, v)
+ }
+ {
+ v := v.Attributes
+ dst = kbin.AppendInt8(dst, v)
+ }
+ {
+ d := v.TimestampDelta64
+ if d == 0 {
+ d = int64(v.TimestampDelta)
+ }
+ dst = kbin.AppendVarlong(dst, d)
+ }
+ {
+ v := v.OffsetDelta
+ dst = kbin.AppendVarint(dst, v)
+ }
+ {
+ v := v.Key
+ dst = kbin.AppendVarintBytes(dst, v)
+ }
+ {
+ v := v.Value
+ dst = kbin.AppendVarintBytes(dst, v)
+ }
+ {
+ v := v.Headers
+ dst = kbin.AppendVarint(dst, int32(len(v)))
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.Key
+ dst = kbin.AppendVarintString(dst, v)
+ }
+ {
+ v := v.Value
+ dst = kbin.AppendVarintBytes(dst, v)
+ }
+ }
+ }
+ return dst
+}
+
+func (v *Record) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *Record) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *Record) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ s := v
+ {
+ v := b.Varint()
+ s.Length = v
+ }
+ {
+ v := b.Int8()
+ s.Attributes = v
+ }
+ {
+ v := b.Varlong()
+ s.TimestampDelta64 = v
+ s.TimestampDelta = int32(v)
+ }
+ {
+ v := b.Varint()
+ s.OffsetDelta = v
+ }
+ {
+ v := b.VarintBytes()
+ s.Key = v
+ }
+ {
+ v := b.VarintBytes()
+ s.Value = v
+ }
+ {
+ v := s.Headers
+ a := v
+ var l int32
+ l = b.VarintArrayLen()
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]Header, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ v = b.UnsafeVarintString()
+ } else {
+ v = b.VarintString()
+ }
+ s.Key = v
+ }
+ {
+ v := b.VarintBytes()
+ s.Value = v
+ }
+ }
+ v = a
+ s.Headers = v
+ }
+ return b.Complete()
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to Record.
+func (v *Record) Default() {
+}
+
+// NewRecord returns a default Record
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewRecord() Record {
+ var v Record
+ v.Default()
+ return v
+}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kversion/kversion.go b/vendor/github.com/twmb/franz-go/pkg/kversion/kversion.go
new file mode 100644
index 0000000000000..3081c346f5aae
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/kversion/kversion.go
@@ -0,0 +1,1166 @@
+// Package kversion specifies versions for Kafka request keys.
+//
+// Kafka technically has internal broker versions that bump multiple times per
+// release. This package only defines releases and tip.
+package kversion
+
+import (
+ "bytes"
+ "fmt"
+ "regexp"
+ "sync"
+ "text/tabwriter"
+
+ "github.com/twmb/franz-go/pkg/kmsg"
+)
+
+// Versions is a list of versions, with each item corresponding to a Kafka key
+// and each item's value corresponding to the max version supported.
+//
+// Minimum versions are not currently tracked because all keys have a minimum
+// version of zero. The internals of a Versions may change in the future to
+// support minimum versions; the outward facing API of Versions should not
+// change to support this.
+//
+// As well, supported features may be added in the future.
+type Versions struct {
+ // If any version is -1, then it is left out in that version.
+ // This was first done in version 2.7.0, where Kafka added support
+ // for 52, 53, 54, 55, but it was not a part of the 2.7.0 release,
+ // so ApiVersionsResponse goes from 51 to 56.
+ k2v []int16
+}
+
+var (
+ reFromString *regexp.Regexp
+ reFromStringOnce sync.Once
+)
+
+var versions = []struct {
+ name string
+ v *Versions
+}{
+ {"v0.8.0", V0_8_0()},
+ {"v0.8.1", V0_8_1()},
+ {"v0.8.2", V0_8_2()},
+ {"v0.9.0", V0_9_0()},
+ {"v0.10.0", V0_10_0()},
+ {"v0.10.1", V0_10_1()},
+ {"v0.10.2", V0_10_2()},
+ {"v0.11.0", V0_11_0()},
+ {"v1.0", V1_0_0()},
+ {"v1.1", V1_1_0()},
+ {"v2.0", V2_0_0()},
+ {"v2.1", V2_1_0()},
+ {"v2.2", V2_2_0()},
+ {"v2.3", V2_3_0()},
+ {"v2.4", V2_4_0()},
+ {"v2.5", V2_5_0()},
+ {"v2.6", V2_6_0()},
+ {"v2.7", V2_7_0()},
+ {"v2.8", V2_8_0()},
+ {"v3.0", V3_0_0()},
+ {"v3.1", V3_1_0()},
+ {"v3.2", V3_2_0()},
+ {"v3.3", V3_3_0()},
+ {"v3.4", V3_4_0()},
+ {"v3.5", V3_5_0()},
+ {"v3.6", V3_6_0()},
+ {"v3.7", V3_7_0()},
+}
+
+// VersionStrings returns all recognized versions, minus any patch, that can be
+// used as input to FromString.
+func VersionStrings() []string {
+ var vs []string
+ for _, v := range versions {
+ vs = append(vs, v.name)
+ }
+ return vs
+}
+
+// FromString returns a Versions from v.
+// The expected input is:
+// - for v0, v0.#.# or v0.#.#.#
+// - for v1, v1.# or v1.#.#
+//
+// The "v" is optional.
+func FromString(v string) *Versions {
+ reFromStringOnce.Do(func() {
+ // 0: entire string
+ // 1: v1+ match, minus patch
+ // 2: v0 match, minus subpatch
+ reFromString = regexp.MustCompile(`^(?:(v?[1-9]+\.\d+)(?:\.\d+)?|(v?0\.\d+\.\d+)(?:\.\d+)?)$`)
+ })
+ m := reFromString.FindStringSubmatch(v)
+ if m == nil {
+ return nil
+ }
+ v = m[1]
+ if m[2] != "" {
+ v = m[2]
+ }
+ withv := "v" + v
+ for _, v2 := range versions {
+ if v2.name == v || v2.name == withv {
+ return v2.v
+ }
+ }
+ return nil
+}
+
+// FromApiVersionsResponse returns a Versions from a kmsg.ApiVersionsResponse.
+func FromApiVersionsResponse(r *kmsg.ApiVersionsResponse) *Versions {
+ var v Versions
+ for _, key := range r.ApiKeys {
+ v.SetMaxKeyVersion(key.ApiKey, key.MaxVersion)
+ }
+ return &v
+}
+
+// HasKey returns true if the versions contains the given key.
+func (vs *Versions) HasKey(k int16) bool {
+ _, has := vs.LookupMaxKeyVersion(k)
+ return has
+}
+
+// LookupMaxKeyVersion returns the version for the given key and whether the
+// key exists. If the key does not exist, this returns (-1, false).
+func (vs *Versions) LookupMaxKeyVersion(k int16) (int16, bool) {
+ if k < 0 {
+ return -1, false
+ }
+ if int(k) >= len(vs.k2v) {
+ return -1, false
+ }
+ version := vs.k2v[k]
+ if version < 0 {
+ return -1, false
+ }
+ return version, true
+}
+
+// SetMaxKeyVersion sets the max version for the given key.
+//
+// Setting a version to -1 unsets the key.
+//
+// Versions are backed by a slice; if the slice is not long enough, it is
+// extended to fit the key.
+func (vs *Versions) SetMaxKeyVersion(k, v int16) {
+ if v < 0 {
+ v = -1
+ }
+ // If the version is < 0, we are unsetting a version. If we are
+ // unsetting a version that is more than the amount of keys we already
+ // have, we have no reason to unset.
+ if k < 0 || v < 0 && int(k) >= len(vs.k2v)+1 {
+ return
+ }
+ needLen := int(k) + 1
+ for len(vs.k2v) < needLen {
+ vs.k2v = append(vs.k2v, -1)
+ }
+ vs.k2v[k] = v
+}
+
+// Equal returns whether two versions are equal.
+func (vs *Versions) Equal(other *Versions) bool {
+ // We allow the version slices to be of different lengths, so long as
+ // the versions for keys in one and not the other are -1.
+ //
+ // Basically, all non-negative-one keys must be equal.
+ long, short := vs.k2v, other.k2v
+ if len(short) > len(long) {
+ long, short = short, long
+ }
+ for i, v := range short {
+ if v != long[i] {
+ return false
+ }
+ }
+ for _, v := range long[len(short):] {
+ if v >= 0 {
+ return false
+ }
+ }
+ return true
+}
+
+// EachMaxKeyVersion calls fn for each key and max version
+func (vs *Versions) EachMaxKeyVersion(fn func(k, v int16)) {
+ for k, v := range vs.k2v {
+ if v >= 0 {
+ fn(int16(k), v)
+ }
+ }
+}
+
+// VersionGuessOpt is an option to change how version guessing is done.
+type VersionGuessOpt interface {
+ apply(*guessCfg)
+}
+
+type guessOpt struct{ fn func(*guessCfg) }
+
+func (opt guessOpt) apply(cfg *guessCfg) { opt.fn(cfg) }
+
+// SkipKeys skips the given keys while guessing versions.
+func SkipKeys(keys ...int16) VersionGuessOpt {
+ return guessOpt{func(cfg *guessCfg) { cfg.skipKeys = keys }}
+}
+
+// TryRaftBroker changes from guessing the version for a classical ZooKeeper
+// based broker to guessing for a raft based broker (v2.8+).
+//
+// Note that with raft, there can be a TryRaftController attempt as well.
+func TryRaftBroker() VersionGuessOpt {
+ return guessOpt{func(cfg *guessCfg) { cfg.listener = rBroker }}
+}
+
+// TryRaftController changes from guessing the version for a classical
+// ZooKeeper based broker to guessing for a raft based controller broker
+// (v2.8+).
+//
+// Note that with raft, there can be a TryRaftBroker attempt as well. Odds are
+// that if you are an end user speaking to a raft based Kafka cluster, you are
+// speaking to a raft broker. The controller is specifically for broker to
+// broker communication.
+func TryRaftController() VersionGuessOpt {
+ return guessOpt{func(cfg *guessCfg) { cfg.listener = rController }}
+}
+
+type guessCfg struct {
+ skipKeys []int16
+ listener listener
+}
+
+// VersionGuess attempts to guess which version of Kafka these versions belong
+// to. If an exact match can be determined, this returns a string in the format
+// v0.#.# or v#.# (depending on whether Kafka is pre-1.0 or post). For
+// example, v0.8.0 or v2.7.
+//
+// Patch numbers are not included in the guess as it is not possible to
+// determine the Kafka patch version being used as a client.
+//
+// If the version is determined to be higher than kversion knows of or is tip,
+// this package returns "at least v#.#".
+//
+// Custom versions, or in-between versions, are detected and return slightly
+// more verbose strings.
+//
+// Options can be specified to change how version guessing is performed, for
+// example, certain keys can be skipped, or the guessing can try evaluating the
+// versions as Raft broker based versions.
+//
+// Internally, this function tries guessing the version against both KRaft and
+// Kafka APIs. The more exact match is returned.
+func (vs *Versions) VersionGuess(opts ...VersionGuessOpt) string {
+ standard := vs.versionGuess(opts...)
+ raftBroker := vs.versionGuess(append(opts, TryRaftBroker())...)
+ raftController := vs.versionGuess(append(opts, TryRaftController())...)
+
+ // If any of these are exact, return the exact guess.
+ for _, g := range []guess{
+ standard,
+ raftBroker,
+ raftController,
+ } {
+ if g.how == guessExact {
+ return g.String()
+ }
+ }
+
+ // If any are atLeast, that means it is newer than we can guess and we
+ // return the highest version.
+ for _, g := range []guess{
+ standard,
+ raftBroker,
+ raftController,
+ } {
+ if g.how == guessAtLeast {
+ return g.String()
+ }
+ }
+
+ // This is a custom version. We could do some advanced logic to try to
+ // return highest of all three guesses, but that may be inaccurate:
+ // KRaft may detect a higher guess because not all requests exist in
+ // KRaft. Instead, we just return our standard guess.
+ return standard.String()
+}
+
+type guess struct {
+ v1 string
+ v2 string // for between
+ how int8
+}
+
+const (
+ guessExact = iota
+ guessAtLeast
+ guessCustomUnknown
+ guessCustomAtLeast
+ guessBetween
+ guessNotEven
+)
+
+func (g guess) String() string {
+ switch g.how {
+ case guessExact:
+ return g.v1
+ case guessAtLeast:
+ return "at least " + g.v1
+ case guessCustomUnknown:
+ return "unknown custom version"
+ case guessCustomAtLeast:
+ return "unknown custom version at least " + g.v1
+ case guessBetween:
+ return "between " + g.v1 + " and " + g.v2
+ case guessNotEven:
+ return "not even " + g.v1
+ }
+ return g.v1
+}
+
+func (vs *Versions) versionGuess(opts ...VersionGuessOpt) guess {
+ cfg := guessCfg{
+ listener: zkBroker,
+ // Envelope was added in 2.7 for kraft and zkBroker in 3.4; we
+ // need to skip it for 2.7 through 3.4 otherwise the version
+ // detection fails. We can just skip it generally since there
+ // are enough differentiating factors that accurately detecting
+ // envelope doesn't matter.
+ //
+ // TODO: add introduced-version to differentiate some specific
+ // keys.
+ skipKeys: []int16{4, 5, 6, 7, 27, 52, 53, 54, 55, 56, 57, 58, 59, 62, 63, 64, 67},
+ }
+ for _, opt := range opts {
+ opt.apply(&cfg)
+ }
+
+ skip := make(map[int16]bool, len(cfg.skipKeys))
+ for _, k := range cfg.skipKeys {
+ skip[k] = true
+ }
+
+ var last string
+ cmp := make(map[int16]int16, len(maxTip))
+ cmpskip := make(map[int16]int16)
+ for _, comparison := range []struct {
+ cmp listenerKeys
+ name string
+ }{
+ {max080, "v0.8.0"},
+ {max081, "v0.8.1"},
+ {max082, "v0.8.2"},
+ {max090, "v0.9.0"},
+ {max0100, "v0.10.0"},
+ {max0101, "v0.10.1"},
+ {max0102, "v0.10.2"},
+ {max0110, "v0.11.0"},
+ {max100, "v1.0"},
+ {max110, "v1.1"},
+ {max200, "v2.0"},
+ {max210, "v2.1"},
+ {max220, "v2.2"},
+ {max230, "v2.3"},
+ {max240, "v2.4"},
+ {max250, "v2.5"},
+ {max260, "v2.6"},
+ {max270, "v2.7"},
+ {max280, "v2.8"},
+ {max300, "v3.0"},
+ {max310, "v3.1"},
+ {max320, "v3.2"},
+ {max330, "v3.3"},
+ {max340, "v3.4"},
+ {max350, "v3.5"},
+ {max360, "v3.6"},
+ {max370, "v3.7"},
+ } {
+ for k, v := range comparison.cmp.filter(cfg.listener) {
+ if v == -1 {
+ continue
+ }
+ k16 := int16(k)
+ if skip[k16] {
+ cmpskip[k16] = v
+ } else {
+ cmp[k16] = v
+ }
+ }
+
+ var under, equal, over bool
+
+ for k, v := range vs.k2v {
+ k16 := int16(k)
+ if skip[k16] {
+ skipv, ok := cmpskip[k16]
+ if v == -1 || !ok {
+ continue
+ }
+ cmp[k16] = skipv
+ }
+ cmpv, has := cmp[k16]
+ if has {
+ // If our version for this key is less than the
+ // comparison versions, then we are less than what we
+ // are comparing.
+ if v < cmpv {
+ under = true
+ } else if v > cmpv {
+ // Similarly, if our version is more, then we
+ // are over what we are comparing.
+ over = true
+ } else {
+ equal = true
+ }
+ delete(cmp, k16)
+ } else if v >= 0 {
+ // If what we are comparing to does not even have this
+ // key **and** our version is larger non-zero, then our
+ // version is larger than what we are comparing to.
+ //
+ // We can have a negative version if a key was manually
+ // unset.
+ over = true
+ }
+ // If the version is < 0, the key is unset.
+ }
+
+ // If our versions did not clear out what we are comparing against, we
+ // do not have all keys that we need for this version.
+ if len(cmp) > 0 {
+ under = true
+ }
+
+ current := comparison.name
+ switch {
+ case under && over:
+ // Regardless of equal being true or not, this is a custom version.
+ if last != "" {
+ return guess{v1: last, how: guessCustomAtLeast}
+ }
+ return guess{v1: last, how: guessCustomUnknown}
+
+ case under:
+ // Regardless of equal being true or not, we have not yet hit
+ // this version.
+ if last != "" {
+ return guess{v1: last, v2: current, how: guessBetween}
+ }
+ return guess{v1: current, how: guessNotEven}
+
+ case over:
+ // Regardless of equal being true or not, we try again.
+ last = current
+
+ case equal:
+ return guess{v1: current, how: guessExact}
+ }
+ // At least one of under, equal, or over must be true, so there
+ // is no default case.
+ }
+
+ return guess{v1: last, how: guessAtLeast}
+}
+
+// String returns a string representation of the versions; the format may
+// change.
+func (vs *Versions) String() string {
+ var buf bytes.Buffer
+ w := tabwriter.NewWriter(&buf, 0, 0, 2, ' ', 0)
+ for k, v := range vs.k2v {
+ if v < 0 {
+ continue
+ }
+ name := kmsg.NameForKey(int16(k))
+ if name == "" {
+ name = "Unknown"
+ }
+ fmt.Fprintf(w, "%s\t%d\n", name, v)
+ }
+ w.Flush()
+ return buf.String()
+}
+
+// Stable is a shortcut for the latest _released_ Kafka versions.
+//
+// This is the default version used in kgo to avoid breaking tip changes.
+func Stable() *Versions { return zkBrokerOf(maxStable) }
+
+// Tip is the latest defined Kafka key versions; this may be slightly out of date.
+func Tip() *Versions { return zkBrokerOf(maxTip) }
+
+func V0_8_0() *Versions { return zkBrokerOf(max080) }
+func V0_8_1() *Versions { return zkBrokerOf(max081) }
+func V0_8_2() *Versions { return zkBrokerOf(max082) }
+func V0_9_0() *Versions { return zkBrokerOf(max090) }
+func V0_10_0() *Versions { return zkBrokerOf(max0100) }
+func V0_10_1() *Versions { return zkBrokerOf(max0101) }
+func V0_10_2() *Versions { return zkBrokerOf(max0102) }
+func V0_11_0() *Versions { return zkBrokerOf(max0110) }
+func V1_0_0() *Versions { return zkBrokerOf(max100) }
+func V1_1_0() *Versions { return zkBrokerOf(max110) }
+func V2_0_0() *Versions { return zkBrokerOf(max200) }
+func V2_1_0() *Versions { return zkBrokerOf(max210) }
+func V2_2_0() *Versions { return zkBrokerOf(max220) }
+func V2_3_0() *Versions { return zkBrokerOf(max230) }
+func V2_4_0() *Versions { return zkBrokerOf(max240) }
+func V2_5_0() *Versions { return zkBrokerOf(max250) }
+func V2_6_0() *Versions { return zkBrokerOf(max260) }
+func V2_7_0() *Versions { return zkBrokerOf(max270) }
+func V2_8_0() *Versions { return zkBrokerOf(max280) }
+func V3_0_0() *Versions { return zkBrokerOf(max300) }
+func V3_1_0() *Versions { return zkBrokerOf(max310) }
+func V3_2_0() *Versions { return zkBrokerOf(max320) }
+func V3_3_0() *Versions { return zkBrokerOf(max330) }
+func V3_4_0() *Versions { return zkBrokerOf(max340) }
+func V3_5_0() *Versions { return zkBrokerOf(max350) }
+func V3_6_0() *Versions { return zkBrokerOf(max360) }
+func V3_7_0() *Versions { return zkBrokerOf(max370) }
+
+func zkBrokerOf(lks listenerKeys) *Versions {
+ return &Versions{lks.filter(zkBroker)}
+}
+
+type listener uint8
+
+func (l listener) has(target listener) bool {
+ return l&target != 0
+}
+
+const (
+ zkBroker listener = 1 << iota
+ rBroker
+ rController
+)
+
+type listenerKey struct {
+ listener listener
+ version int16
+}
+
+type listenerKeys []listenerKey
+
+func (lks listenerKeys) filter(listener listener) []int16 {
+ r := make([]int16, 0, len(lks))
+ for _, lk := range lks {
+ if lk.listener.has(listener) {
+ r = append(r, lk.version)
+ } else {
+ r = append(r, -1)
+ }
+ }
+ return r
+}
+
+// All requests before KRaft started being introduced support the zkBroker, but
+// KRaft changed that. Kafka commit 698319b8e2c1f6cb574f339eede6f2a5b1919b55
+// added which listeners support which API keys.
+func k(listeners ...listener) listenerKey {
+ var k listenerKey
+ for _, listener := range listeners {
+ k.listener |= listener
+ }
+ return k
+}
+
+func (l *listenerKey) inc() {
+ l.version++
+}
+
+// For the comments below, appends are annotated with the key being introduced,
+// while incs are annotated with the version the inc results in.
+
+func nextMax(prev listenerKeys, do func(listenerKeys) listenerKeys) listenerKeys {
+ return do(append(listenerKeys(nil), prev...))
+}
+
+var max080 = nextMax(nil, func(listenerKeys) listenerKeys {
+ return listenerKeys{
+ k(zkBroker, rBroker), // 0 produce
+ k(zkBroker, rBroker, rController), // 1 fetch
+ k(zkBroker, rBroker), // 2 list offset
+ k(zkBroker, rBroker), // 3 metadata
+ k(zkBroker), // 4 leader and isr
+ k(zkBroker), // 5 stop replica
+ k(zkBroker), // 6 update metadata, actually not supported for a bit
+ k(zkBroker, rController), // 7 controlled shutdown, actually not supported for a bit
+ }
+})
+
+var max081 = nextMax(max080, func(v listenerKeys) listenerKeys {
+ return append(v,
+ k(zkBroker, rBroker), // 8 offset commit KAFKA-965 db37ed0054
+ k(zkBroker, rBroker), // 9 offset fetch (same)
+ )
+})
+
+var max082 = nextMax(max081, func(v listenerKeys) listenerKeys {
+ v[8].inc() // 1 offset commit KAFKA-1462
+ v[9].inc() // 1 offset fetch KAFKA-1841 161b1aa16e I think?
+ return append(v,
+ k(zkBroker, rBroker), // 10 find coordinator KAFKA-1012 a670537aa3
+ k(zkBroker, rBroker), // 11 join group (same)
+ k(zkBroker, rBroker), // 12 heartbeat (same)
+ )
+})
+
+var max090 = nextMax(max082, func(v listenerKeys) listenerKeys {
+ v[0].inc() // 1 produce KAFKA-2136 436b7ddc38; KAFKA-2083 ?? KIP-13
+ v[1].inc() // 1 fetch (same)
+ v[6].inc() // 1 update metadata KAFKA-2411 d02ca36ca1
+ v[7].inc() // 1 controlled shutdown (same)
+ v[8].inc() // 2 offset commit KAFKA-1634
+ return append(v,
+ k(zkBroker, rBroker), // 13 leave group KAFKA-2397 636e14a991
+ k(zkBroker, rBroker), // 14 sync group KAFKA-2464 86eb74d923
+ k(zkBroker, rBroker), // 15 describe groups KAFKA-2687 596c203af1
+ k(zkBroker, rBroker), // 16 list groups KAFKA-2687 596c203af1
+ )
+})
+
+var max0100 = nextMax(max090, func(v listenerKeys) listenerKeys {
+ v[0].inc() // 2 produce KAFKA-3025 45c8195fa1 KIP-31 KIP-32
+ v[1].inc() // 2 fetch (same)
+ v[3].inc() // 1 metadata KAFKA-3306 33d745e2dc
+ v[6].inc() // 2 update metadata KAFKA-1215 951e30adc6
+ return append(v,
+ k(zkBroker, rBroker, rController), // 17 sasl handshake KAFKA-3149 5b375d7bf9
+ k(zkBroker, rBroker, rController), // 18 api versions KAFKA-3307 8407dac6ee
+ )
+})
+
+var max0101 = nextMax(max0100, func(v listenerKeys) listenerKeys {
+ v[1].inc() // 3 fetch KAFKA-2063 d04b0998c0 KIP-74
+ v[2].inc() // 1 list offset KAFKA-4148 eaaa433fc9 KIP-79
+ v[3].inc() // 2 metadata KAFKA-4093 ecc1fb10fa KIP-78
+ v[11].inc() // 1 join group KAFKA-3888 40b1dd3f49 KIP-62
+ return append(v,
+ k(zkBroker, rBroker, rController), // 19 create topics KAFKA-2945 fc47b9fa6b
+ k(zkBroker, rBroker, rController), // 20 delete topics KAFKA-2946 539633ba0e
+ )
+})
+
+var max0102 = nextMax(max0101, func(v listenerKeys) listenerKeys {
+ v[6].inc() // 3 update metadata KAFKA-4565 d25671884b KIP-103
+ v[19].inc() // 1 create topics KAFKA-4591 da57bc27e7 KIP-108
+ return v
+})
+
+var max0110 = nextMax(max0102, func(v listenerKeys) listenerKeys {
+ v[0].inc() // 3 produce KAFKA-4816 5bd06f1d54 KIP-98
+ v[1].inc() // 4 fetch (same)
+ v[1].inc() // 5 fetch KAFKA-4586 8b05ad406d KIP-107
+ v[3].inc() // 4 metadata KAFKA-5291 7311dcbc53 (3 below)
+ v[9].inc() // 2 offset fetch KAFKA-3853 c2d9b95f36 KIP-98
+ v[10].inc() // 1 find coordinator KAFKA-5043 d0e7c6b930 KIP-98
+ v = append(v,
+ k(zkBroker, rBroker), // 21 delete records KAFKA-4586 see above
+ k(zkBroker, rBroker), // 22 init producer id KAFKA-4817 bdf4cba047 KIP-98 (raft added in KAFKA-12620 e97cff2702b6ba836c7925caa36ab18066a7c95d KIP-730)
+ k(zkBroker, rBroker), // 23 offset for leader epoch KAFKA-1211 0baea2ac13 KIP-101
+
+ k(zkBroker, rBroker), // 24 add partitions to txn KAFKA-4990 865d82af2c KIP-98 (raft 3.0 6e857c531f14d07d5b05f174e6063a124c917324)
+ k(zkBroker, rBroker), // 25 add offsets to txn (same, same raft)
+ k(zkBroker, rBroker), // 26 end txn (same, same raft)
+ k(zkBroker, rBroker), // 27 write txn markers (same)
+ k(zkBroker, rBroker), // 28 txn offset commit (same, same raft)
+
+ // raft broker / controller added in 5b0c58ed53c420e93957369516f34346580dac95
+ k(zkBroker, rBroker, rController), // 29 describe acls KAFKA-3266 9815e18fef KIP-140
+ k(zkBroker, rBroker, rController), // 30 create acls (same)
+ k(zkBroker, rBroker, rController), // 31 delete acls (same)
+
+ k(zkBroker, rBroker), // 32 describe configs KAFKA-3267 972b754536 KIP-133
+ k(zkBroker, rBroker, rController), // 33 alter configs (same) (raft broker 3.0 6e857c531f14d07d5b05f174e6063a124c917324, controller 273d66479dbee2398b09e478ffaf996498d1ab34)
+ )
+
+ // KAFKA-4954 0104b657a1 KIP-124
+ v[2].inc() // 2 list offset (reused in e71dce89c0 KIP-98)
+ v[3].inc() // 3 metadata
+ v[8].inc() // 3 offset commit
+ v[9].inc() // 3 offset fetch
+ v[11].inc() // 2 join group
+ v[12].inc() // 1 heartbeat
+ v[13].inc() // 1 leave group
+ v[14].inc() // 1 sync group
+ v[15].inc() // 1 describe groups
+ v[16].inc() // 1 list group
+ v[18].inc() // 1 api versions
+ v[19].inc() // 2 create topics
+ v[20].inc() // 1 delete topics
+
+ return v
+})
+
+var max100 = nextMax(max0110, func(v listenerKeys) listenerKeys {
+ v[0].inc() // 4 produce KAFKA-4763 fc93fb4b61 KIP-112
+ v[1].inc() // 6 fetch (same)
+ v[3].inc() // 5 metadata (same)
+ v[4].inc() // 1 leader and isr (same)
+ v[6].inc() // 4 update metadata (same)
+
+ v[0].inc() // 5 produce KAFKA-5793 94692288be
+ v[17].inc() // 1 sasl handshake KAFKA-4764 8fca432223 KIP-152
+
+ return append(v,
+ k(zkBroker, rBroker), // 34 alter replica log dirs KAFKA-5694 adefc8ea07 KIP-113
+ k(zkBroker, rBroker), // 35 describe log dirs (same)
+ k(zkBroker, rBroker, rController), // 36 sasl authenticate KAFKA-4764 (see above)
+ k(zkBroker, rBroker, rController), // 37 create partitions KAFKA-5856 5f6393f9b1 KIP-195 (raft 3.0 6e857c531f14d07d5b05f174e6063a124c917324)
+ )
+})
+
+var max110 = nextMax(max100, func(v listenerKeys) listenerKeys {
+ v = append(v,
+ k(zkBroker), // 38 create delegation token KAFKA-4541 27a8d0f9e7 under KAFKA-1696 KIP-48
+ k(zkBroker), // 39 renew delegation token (same)
+ k(zkBroker), // 40 expire delegation token (same)
+ k(zkBroker), // 41 describe delegation token (same)
+ k(zkBroker, rBroker), // 42 delete groups KAFKA-6275 1ed6da7cc8 KIP-229
+ )
+
+ v[1].inc() // 7 fetch KAFKA-6254 7fe1c2b3d3 KIP-227
+ v[32].inc() // 1 describe configs KAFKA-6241 b814a16b96 KIP-226
+
+ return v
+})
+
+var max200 = nextMax(max110, func(v listenerKeys) listenerKeys {
+ v[0].inc() // 6 produce KAFKA-6028 1facab387f KIP-219
+ v[1].inc() // 8 fetch (same)
+ v[2].inc() // 3 list offset (same)
+ v[3].inc() // 6 metadata (same)
+ v[8].inc() // 4 offset commit (same)
+ v[9].inc() // 4 offset fetch (same)
+ v[10].inc() // 2 find coordinator (same)
+ v[11].inc() // 3 join group (same)
+ v[12].inc() // 2 heartbeat (same)
+ v[13].inc() // 2 leave group (same)
+ v[14].inc() // 2 sync group (same)
+ v[15].inc() // 2 describe groups (same)
+ v[16].inc() // 2 list group (same)
+ v[18].inc() // 2 api versions (same)
+ v[19].inc() // 3 create topics (same)
+ v[20].inc() // 2 delete topics (same)
+ v[21].inc() // 1 delete records (same)
+ v[22].inc() // 1 init producer id (same)
+ v[24].inc() // 1 add partitions to txn (same)
+ v[25].inc() // 1 add offsets to txn (same)
+ v[26].inc() // 1 end txn (same)
+ v[28].inc() // 1 txn offset commit (same)
+ // 29, 30, 31 bumped below, but also had throttle changes
+ v[32].inc() // 2 describe configs (same)
+ v[33].inc() // 1 alter configs (same)
+ v[34].inc() // 1 alter replica log dirs (same)
+ v[35].inc() // 1 describe log dirs (same)
+ v[37].inc() // 1 create partitions (same)
+ v[38].inc() // 1 create delegation token (same)
+ v[39].inc() // 1 renew delegation token (same)
+ v[40].inc() // 1 expire delegation token (same)
+ v[41].inc() // 1 describe delegation token (same)
+ v[42].inc() // 1 delete groups (same)
+
+ v[29].inc() // 1 describe acls KAFKA-6841 b3aa655a70 KIP-290
+ v[30].inc() // 1 create acls (same)
+ v[31].inc() // 1 delete acls (same)
+
+ v[23].inc() // 1 offset for leader epoch KAFKA-6361 9679c44d2b KIP-279
+ return v
+})
+
+var max210 = nextMax(max200, func(v listenerKeys) listenerKeys {
+ v[8].inc() // 5 offset commit KAFKA-4682 418a91b5d4 KIP-211
+
+ v[20].inc() // 3 delete topics KAFKA-5975 04770916a7 KIP-322
+
+ v[1].inc() // 9 fetch KAFKA-7333 05ba5aa008 KIP-320
+ v[2].inc() // 4 list offset (same)
+ v[3].inc() // 7 metadata (same)
+ v[8].inc() // 6 offset commit (same)
+ v[9].inc() // 5 offset fetch (same)
+ v[23].inc() // 2 offset for leader epoch (same, also in Kafka PR #5635 79ad9026a6)
+ v[28].inc() // 2 txn offset commit (same)
+
+ v[0].inc() // 7 produce KAFKA-4514 741cb761c5 KIP-110
+ v[1].inc() // 10 fetch (same)
+ return v
+})
+
+var max220 = nextMax(max210, func(v listenerKeys) listenerKeys {
+ v[2].inc() // 5 list offset KAFKA-2334 152292994e KIP-207
+ v[11].inc() // 4 join group KAFKA-7824 9a9310d074 KIP-394
+ v[36].inc() // 1 sasl authenticate KAFKA-7352 e8a3bc7425 KIP-368
+
+ v[4].inc() // 2 leader and isr KAFKA-7235 2155c6d54b KIP-380
+ v[5].inc() // 1 stop replica (same)
+ v[6].inc() // 5 update metadata (same)
+ v[7].inc() // 2 controlled shutdown (same)
+
+ return append(v,
+ k(zkBroker, rBroker, rController), // 43 elect preferred leaders KAFKA-5692 269b65279c KIP-183 (raft 3.0 6e857c531f14d07d5b05f174e6063a124c917324)
+ )
+})
+
+var max230 = nextMax(max220, func(v listenerKeys) listenerKeys {
+ v[3].inc() // 8 metadata KAFKA-7922 a42f16f980 KIP-430
+ v[15].inc() // 3 describe groups KAFKA-7922 f11fa5ef40 KIP-430
+
+ v[1].inc() // 11 fetch KAFKA-8365 e2847e8603 KIP-392
+ v[23].inc() // 3 offset for leader epoch (same)
+
+ v[11].inc() // 5 join group KAFKA-7862 0f995ba6be KIP-345
+ v[8].inc() // 7 offset commit KAFKA-8225 9fa331b811 KIP-345
+ v[12].inc() // 3 heartbeat (same)
+ v[14].inc() // 3 sync group (same)
+
+ return append(v,
+ k(zkBroker, rBroker, rController), // 44 incremental alter configs KAFKA-7466 3b1524c5df KIP-339
+ )
+})
+
+var max240 = nextMax(max230, func(v listenerKeys) listenerKeys {
+ v[4].inc() // 3 leader and isr KAFKA-8345 81900d0ba0 KIP-455
+ v[15].inc() // 4 describe groups KAFKA-8538 f8db022b08 KIP-345
+ v[19].inc() // 4 create topics KAFKA-8305 8e161580b8 KIP-464
+ v[43].inc() // 1 elect preferred leaders KAFKA-8286 121308cc7a KIP-460
+ v = append(v,
+ // raft added in e07de97a4ce730a2755db7eeacb9b3e1f69a12c8 for the following two
+ k(zkBroker, rBroker, rController), // 45 alter partition reassignments KAFKA-8345 81900d0ba0 KIP-455
+ k(zkBroker, rBroker, rController), // 46 list partition reassignments (same)
+
+ k(zkBroker, rBroker), // 47 offset delete KAFKA-8730 e24d0e22ab KIP-496
+ )
+
+ v[13].inc() // 3 leave group KAFKA-8221 74c90f46c3 KIP-345
+
+ // introducing flexible versions; 24 were bumped
+ v[3].inc() // 9 metadata KAFKA-8885 apache/kafka#7325 KIP-482
+ v[4].inc() // 4 leader and isr (same)
+ v[5].inc() // 2 stop replica (same)
+ v[6].inc() // 6 update metadata (same)
+ v[7].inc() // 3 controlled shutdown (same)
+ v[8].inc() // 8 offset commit (same)
+ v[9].inc() // 6 offset fetch (same)
+ v[10].inc() // 3 find coordinator (same)
+ v[11].inc() // 6 join group (same)
+ v[12].inc() // 4 heartbeat (same)
+ v[13].inc() // 4 leave group (same)
+ v[14].inc() // 4 sync group (same)
+ v[15].inc() // 5 describe groups (same)
+ v[16].inc() // 3 list group (same)
+ v[18].inc() // 3 api versions (same, also KIP-511 [non-flexible fields added])
+ v[19].inc() // 5 create topics (same)
+ v[20].inc() // 4 delete topics (same)
+ v[22].inc() // 2 init producer id (same)
+ v[38].inc() // 2 create delegation token (same)
+ v[42].inc() // 2 delete groups (same)
+ v[43].inc() // 2 elect preferred leaders (same)
+ v[44].inc() // 1 incremental alter configs (same)
+ // also 45, 46; not bumped since in same release
+
+ // Create topics (19) was bumped up to 5 in KAFKA-8907 5d0052fe00
+ // KIP-525, then 6 in the above bump, then back down to 5 once the
+ // tagged PR was merged (KAFKA-8932 1f1179ea64 for the bump down).
+
+ v[0].inc() // 8 produce KAFKA-8729 f6f24c4700 KIP-467
+
+ return v
+})
+
+var max250 = nextMax(max240, func(v listenerKeys) listenerKeys {
+ v[22].inc() // 3 init producer id KAFKA-8710 fecb977b25 KIP-360
+ v[9].inc() // 7 offset fetch KAFKA-9346 6da70f9b95 KIP-447
+
+ // more flexible versions, KAFKA-9420 0a2569e2b99 KIP-482
+ // 6 bumped, then sasl handshake reverted later in 1a8dcffe4
+ v[36].inc() // 2 sasl authenticate
+ v[37].inc() // 2 create partitions
+ v[39].inc() // 2 renew delegation token
+ v[40].inc() // 2 expire delegation token
+ v[41].inc() // 2 describe delegation token
+
+ v[28].inc() // 3 txn offset commit KAFKA-9365 ed7c071e07f KIP-447
+
+ v[29].inc() // 2 describe acls KAFKA-9026 40b35178e5 KIP-482 (for flexible versions)
+ v[30].inc() // 2 create acls KAFKA-9027 738e14edb KIP-482 (flexible)
+ v[31].inc() // 2 delete acls KAFKA-9028 738e14edb KIP-482 (flexible)
+
+ v[11].inc() // 7 join group KAFKA-9437 96c4ce480 KIP-559
+ v[14].inc() // 5 sync group (same)
+
+ return v
+})
+
+var max260 = nextMax(max250, func(v listenerKeys) listenerKeys {
+ v[21].inc() // 2 delete records KAFKA-8768 f869e33ab KIP-482 (opportunistic bump for flexible versions)
+ v[35].inc() // 2 describe log dirs KAFKA-9435 4f1e8331ff9 KIP-482 (same)
+
+ v = append(v,
+ k(zkBroker, rBroker), // 48 describe client quotas KAFKA-7740 227a7322b KIP-546 (raft in 5964401bf9aab611bd4a072941bd1c927e044258)
+ k(zkBroker, rBroker, rController), // 49 alter client quotas (same)
+ )
+
+ v[5].inc() // 3 stop replica KAFKA-9539 7c7d55dbd KIP-570
+
+ v[16].inc() // 4 list group KAFKA-9130 fe948d39e KIP-518
+ v[32].inc() // 3 describe configs KAFKA-9494 af3b8b50f2 KIP-569
+
+ return v
+})
+
+var max270 = nextMax(max260, func(v listenerKeys) listenerKeys {
+ // KAFKA-10163 a5ffd1ca44c KIP-599
+ v[37].inc() // 3 create partitions
+ v[19].inc() // 6 create topics (same)
+ v[20].inc() // 5 delete topics (same)
+
+ // KAFKA-9911 b937ec7567 KIP-588
+ v[22].inc() // 4 init producer id
+ v[24].inc() // 2 add partitions to txn
+ v[25].inc() // 2 add offsets to txn
+ v[26].inc() // 2 end txn
+
+ v = append(v,
+ k(zkBroker, rBroker, rController), // 50 describe user scram creds, KAFKA-10259 e8524ccd8fca0caac79b844d87e98e9c055f76fb KIP-554; 38c409cf33c kraft
+ k(zkBroker, rBroker, rController), // 51 alter user scram creds, same
+ )
+
+ // KAFKA-10435 634c9175054cc69d10b6da22ea1e95edff6a4747 KIP-595
+ // This opted in fetch request to flexible versions.
+ //
+ // KAFKA-10487: further change in aa5263fba903c85812c0c31443f7d49ee371e9db
+ v[1].inc() // 12 fetch
+
+ // KAFKA-10492 b7c8490cf47b0c18253d6a776b2b35c76c71c65d KIP-595
+ //
+ // These are the first requests that are raft only.
+ v = append(v,
+ k(rController), // 52 vote
+ k(rController), // 53 begin quorum epoch
+ k(rController), // 54 end quorum epoch
+ k(rBroker, rController), // 55 describe quorum
+ )
+
+ // KAFKA-8836 57de67db22eb373f92ec5dd449d317ed2bc8b8d1 KIP-497
+ v = append(v,
+ k(zkBroker, rController), // 56 alter isr
+ )
+
+ // KAFKA-10028 fb4f297207ef62f71e4a6d2d0dac75752933043d KIP-584
+ return append(v,
+ k(zkBroker, rBroker, rController), // 57 update features (rbroker 3.0 6e857c531f14d07d5b05f174e6063a124c917324; rcontroller 3.2 55ff5d360381af370fe5b3a215831beac49571a4 KIP-778 KAFKA-13823)
+ )
+})
+
+var max280 = nextMax(max270, func(v listenerKeys) listenerKeys {
+ // KAFKA-10181 KAFKA-10181 KIP-590
+ v = append(v,
+ k(zkBroker, rController), // 58 envelope, controller first, zk in KAFKA-14446 8b045dcbf6b89e1a9594ff95642d4882765e4b0d KIP-866 Kafka 3.4
+ )
+
+ // KAFKA-10729 85f94d50271c952c3e9ee49c4fc814c0da411618 KIP-482
+ // (flexible bumps)
+ v[0].inc() // 9 produce
+ v[2].inc() // 6 list offsets
+ v[23].inc() // 4 offset for leader epoch
+ v[24].inc() // 3 add partitions to txn
+ v[25].inc() // 3 add offsets to txn
+ v[26].inc() // 3 end txn
+ v[27].inc() // 1 write txn markers
+ v[32].inc() // 4 describe configs
+ v[33].inc() // 2 alter configs
+ v[34].inc() // 2 alter replica log dirs
+ v[48].inc() // 1 describe client quotas
+ v[49].inc() // 1 alter client quotas
+
+ // KAFKA-10547 5c921afa4a593478f7d1c49e5db9d787558d0d5e KIP-516
+ v[3].inc() // 10 metadata
+ v[6].inc() // 7 update metadata
+
+ // KAFKA-10545 1dd1e7f945d7a8c1dc177223cd88800680f1ff46 KIP-516
+ v[4].inc() // 5 leader and isr
+
+ // KAFKA-10427 2023aed59d863278a6302e03066d387f994f085c KIP-630
+ v = append(v,
+ k(rController), // 59 fetch snapshot
+ )
+
+ // KAFKA-12204 / KAFKA-10851 302eee63c479fd4b955c44f1058a5e5d111acb57 KIP-700
+ v = append(v,
+ k(zkBroker, rBroker, rController), // 60 describe cluster; rController in KAFKA-15396 41b695b6e30baa4243d9ca4f359b833e17ed0e77 KIP-919
+ )
+
+ // KAFKA-12212 7a1d1d9a69a241efd68e572badee999229b3942f KIP-700
+ v[3].inc() // 11 metadata
+
+ // KAFKA-10764 4f588f7ca2a1c5e8dd845863da81425ac69bac92 KIP-516
+ v[19].inc() // 7 create topics
+ v[20].inc() // 6 delete topics
+
+ // KAFKA-12238 e9edf104866822d9e6c3b637ffbf338767b5bf27 KIP-664
+ v = append(v,
+ k(zkBroker, rBroker), // 61 describe producers
+ )
+
+ // KAFKA-12248 a022072df3c8175950c03263d2bbf2e3ea7a7a5d KIP-500
+ // (commit mentions KIP-500, these are actually described in KIP-631)
+ // Broker registration was later updated in d9bb2ef596343da9402bff4903b129cff1f7c22b
+ v = append(v,
+ k(rController), // 62 broker registration
+ k(rController), // 63 broker heartbeat
+ )
+
+ // KAFKA-12249 3f36f9a7ca153a9d221f6bedeb7d1503aa18eff1 KIP-500 / KIP-631
+ // Renamed from Decommission to Unregister in 06dce721ec0185d49fac37775dbf191d0e80e687
+ v = append(v,
+ // kraft broker added in 7143267f71ca0c14957d8560fbc42a5f8aac564d
+ k(rBroker, rController), // 64 unregister broker
+ )
+ return v
+})
+
+var max300 = nextMax(max280, func(v listenerKeys) listenerKeys {
+ // KAFKA-12267 3f09fb97b6943c0612488dfa8e5eab8078fd7ca0 KIP-664
+ v = append(v,
+ k(zkBroker, rBroker), // 65 describe transactions
+ )
+
+ // KAFKA-12369 3708a7c6c1ecf1304f091dda1e79ae53ba2df489 KIP-664
+ v = append(v,
+ k(zkBroker, rBroker), // 66 list transactions
+ )
+
+ // KAFKA-12620 72d108274c98dca44514007254552481c731c958 KIP-730
+ // raft broker added in e97cff2702b6ba836c7925caa36ab18066a7c95d
+ v = append(v,
+ k(zkBroker, rController), // 67 allocate producer ids
+ )
+
+ // KAFKA-12541 bd72ef1bf1e40feb3bc17349a385b479fa5fa530 KIP-734
+ v[2].inc() // 7 list offsets
+
+ // KAFKA-12663 f5d5f654db359af077088685e29fbe5ea69616cf KIP-699
+ v[10].inc() // 4 find coordinator
+
+ // KAFKA-12234 e00c0f3719ad0803620752159ef8315d668735d6 KIP-709
+ v[9].inc() // 8 offset fetch
+
+ return v
+})
+
+var max310 = nextMax(max300, func(v listenerKeys) listenerKeys {
+ // KAFKA-10580 2b8aff58b575c199ee8372e5689420c9d77357a5 KIP-516
+ v[1].inc() // 13 fetch
+
+ // KAFKA-10744 1d22b0d70686aef5689b775ea2ea7610a37f3e8c KIP-516
+ v[3].inc() // 12 metadata
+
+ return v
+})
+
+var max320 = nextMax(max310, func(v listenerKeys) listenerKeys {
+ // KAFKA-13495 69645f1fe5103adb00de6fa43152e7df989f3aea KIP-800
+ v[11].inc() // 8 join group
+
+ // KAFKA-13496 bf609694f83931990ce63e0123f811e6475820c5 KIP-800
+ v[13].inc() // 5 leave group
+
+ // KAFKA-13527 31fca1611a6780e8a8aa3ac21618135201718e32 KIP-784
+ v[35].inc() // 3 describe log dirs
+
+ // KAFKA-13435 c8fbe26f3bd3a7c018e7619deba002ee454208b9 KIP-814
+ v[11].inc() // 9 join group
+
+ // KAFKA-13587 52621613fd386203773ba93903abd50b46fa093a KIP-704
+ v[4].inc() // 6 leader and isr
+ v[56].inc() // 1 alter isr => alter partition
+
+ return v
+})
+
+var max330 = nextMax(max320, func(v listenerKeys) listenerKeys {
+ // KAFKA-13823 55ff5d360381af370fe5b3a215831beac49571a4 KIP-778
+ v[57].inc() // 1 update features
+
+ // KAFKA-13958 4fcfd9ddc4a8da3d4cfbb69268c06763352e29a9 KIP-827
+ v[35].inc() // 4 describe log dirs
+
+ // KAFKA-841 f83d95d9a28 KIP-841
+ v[56].inc() // 2 alter partition
+
+ // KAFKA-13888 a126e3a622f KIP-836
+ v[55].inc() // 1 describe quorum
+
+ // KAFKA-6945 d65d8867983 KIP-373
+ v[29].inc() // 3 describe acls
+ v[30].inc() // 3 create acls
+ v[31].inc() // 3 delete acls
+ v[38].inc() // 3 create delegation token
+ v[41].inc() // 3 describe delegation token
+
+ return v
+})
+
+var max340 = nextMax(max330, func(v listenerKeys) listenerKeys {
+ // KAFKA-14304 7b7e40a536a79cebf35cc278b9375c8352d342b9 KIP-866
+ // KAFKA-14448 67c72596afe58363eceeb32084c5c04637a33831 added BrokerRegistration
+ // KAFKA-14493 db490707606855c265bc938e1b236070e0e2eba5 changed BrokerRegistration
+ // KAFKA-14304 0bb05d8679b684ad8fbb2eb40dfc00066186a75a changed BrokerRegistration back to a bool...
+ // 5b521031edea8ea7cbcca7dc24a58429423740ff added tag to ApiVersions
+ v[4].inc() // 7 leader and isr
+ v[5].inc() // 4 stop replica
+ v[6].inc() // 8 update metadata
+ v[62].inc() // 1 broker registration
+ return v
+})
+
+var max350 = nextMax(max340, func(v listenerKeys) listenerKeys {
+ // KAFKA-13369 7146ac57ba9ddd035dac992b9f188a8e7677c08d KIP-405
+ v[1].inc() // 14 fetch
+ v[2].inc() // 8 list offsets
+
+ v[1].inc() // 15 fetch // KAFKA-14617 79b5f7f1ce2 KIP-903
+ v[56].inc() // 3 alter partition // KAFKA-14617 8c88cdb7186b1d594f991eb324356dcfcabdf18a KIP-903
+ return v
+})
+
+var max360 = nextMax(max350, func(v listenerKeys) listenerKeys {
+ // KAFKA-14402 29a1a16668d76a1cc04ec9e39ea13026f2dce1de KIP-890
+ // Later commit swapped to stable
+ v[24].inc() // 4 add partitions to txn
+ return v
+})
+
+var max370 = nextMax(max360, func(v listenerKeys) listenerKeys {
+ // KAFKA-15661 c8f687ac1505456cb568de2b60df235eb1ceb5f0 KIP-951
+ v[0].inc() // 10 produce
+ v[1].inc() // 16 fetch
+
+ // 7826d5fc8ab695a5ad927338469ddc01b435a298 KIP-848
+ // (change introduced in 3.6 but was marked unstable and not visible)
+ v[8].inc() // 9 offset commit
+ // KAFKA-14499 7054625c45dc6edb3c07271fe4a6c24b4638424f KIP-848 (and prior)
+ v[9].inc() // 9 offset fetch
+
+ // KAFKA-15368 41b695b6e30baa4243d9ca4f359b833e17ed0e77 KIP-919
+ // (added rController as well, see above)
+ v[60].inc() // 1 describe cluster
+
+ // KAFKA-14391 3be7f7d611d0786f2f98159d5c7492b0d94a2bb7 KIP-848
+ // as well as some patches following
+ v = append(v,
+ k(zkBroker, rBroker), // 68 consumer group heartbeat
+ )
+
+ return v
+})
+
+var (
+ maxStable = max370
+ maxTip = nextMax(maxStable, func(v listenerKeys) listenerKeys {
+ return v
+ })
+)
diff --git a/vendor/github.com/twmb/franz-go/pkg/sasl/sasl.go b/vendor/github.com/twmb/franz-go/pkg/sasl/sasl.go
new file mode 100644
index 0000000000000..dd85a02a188d4
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/pkg/sasl/sasl.go
@@ -0,0 +1,41 @@
+// Package sasl specifies interfaces that any SASL authentication must provide
+// to interop with Kafka SASL.
+package sasl
+
+import "context"
+
+// Session is an authentication session.
+type Session interface {
+ // Challenge is called with a server response. This must return
+ // if the authentication is done, or, if not, the next message
+ // to send. If the authentication is done, this can return an
+ // additional last message to be written (for which we will not
+ // read a response).
+ //
+ // Returning an error stops the authentication flow.
+ Challenge([]byte) (bool, []byte, error)
+}
+
+// Mechanism authenticates with SASL.
+type Mechanism interface {
+ // Name is the name of this SASL authentication mechanism.
+ Name() string
+
+ // Authenticate initializes an authentication session to the provided
+ // host:port. If the mechanism is a client-first authentication
+ // mechanism, this also returns the first message to write.
+ //
+ // If initializing a session fails, this can return an error to stop
+ // the authentication flow.
+ //
+ // The provided context can be used through the duration of the session.
+ Authenticate(ctx context.Context, host string) (Session, []byte, error)
+}
+
+// ClosingMechanism is an optional interface for SASL mechanisms. Implementing
+// this interface signals that the mechanism should be closed if it will never
+// be used again.
+type ClosingMechanism interface {
+ // Close permanently closes a mechanism.
+ Close()
+}
diff --git a/vendor/github.com/twmb/franz-go/plugin/kotel/LICENSE b/vendor/github.com/twmb/franz-go/plugin/kotel/LICENSE
new file mode 100644
index 0000000000000..36e18034325d5
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kotel/LICENSE
@@ -0,0 +1,24 @@
+Copyright 2020, Travis Bischel.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the library nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/twmb/franz-go/plugin/kotel/README.md b/vendor/github.com/twmb/franz-go/plugin/kotel/README.md
new file mode 100644
index 0000000000000..4ed1b5ae781cb
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kotel/README.md
@@ -0,0 +1,194 @@
+kotel
+===
+
+Kotel is an OpenTelemetry instrumentation plug-in package for franz-go. It
+provides [tracing](https://pkg.go.dev/go.opentelemetry.io/otel/trace)
+and [metrics](https://pkg.go.dev/go.opentelemetry.io/otel/metric) options
+through
+a [`kgo.Hook`](https://pkg.go.dev/github.com/twmb/franz-go/pkg/kgo#Hook). With
+kotel, you can trace records produced or consumed with franz-go. You can pass
+parent traces into records and extract parent traces from records. It also
+tracks metrics related to connections, errors, and bytes transferred.
+
+To learn more about how to use kotel, see the usage sections in the README and
+refer to the [OpenTelemetry documentation](https://opentelemetry.io/docs) for
+additional information about OpenTelemetry and how it can be used in your
+franz-go projects.
+
+## Tracing
+
+kotel provides tracing capabilities for Kafka using OpenTelemetry
+specifications. It allows for the creation of three different span
+operations: "publish", "receive", and "process". Additionally, it also provides
+a set of attributes to use with these spans.
+
+### How it works
+
+The kotel tracer module uses hooks to automatically create and close "publish"
+and "receive" spans as a `kgo.Record` flows through the application. However,
+for the "process" span, it uses a convenience method that must be manually
+invoked and closed in the consumer code to capture processing.
+
+The following table provides a visual representation of the lineage of the
+span operations:
+
+| Order | Hook/Method | Operation | State |
+|-------|---------------------------------|-----------|-------|
+| 1 | kgo.HookProduceRecordBuffered | Publish | Start |
+| 2 | kgo.HookProduceRecordUnbuffered | Publish | End |
+| 3 | kgo.HookFetchRecordBuffered | Receive | Start |
+| 4 | kgo.HookFetchRecordUnbuffered | Receive | End |
+| 5 | kotel.Tracer.WithProcessSpan | Process | Start |
+
+### Getting started
+
+To start using kotel for tracing, you will need to:
+
+1. Set up a tracer provider
+2. Configure any desired tracer options
+3. Create a new kotel tracer
+4. Create a new kotel service hook
+5. Create a new Kafka client and pass in the kotel hook
+
+Here's an example of how you might do this:
+
+```go
+// Initialize tracer provider.
+tracerProvider, err := initTracerProvider()
+
+// Create a new kotel tracer.
+tracerOpts := []kotel.TracerOpt{
+ kotel.TracerProvider(tracerProvider),
+ kotel.TracerPropagator(propagation.NewCompositeTextMapPropagator(propagation.TraceContext{})),
+}
+tracer := kotel.NewTracer(tracerOpts...)
+
+// Create a new kotel service.
+kotelOps := []kotel.Opt{
+ kotel.WithTracer(tracer),
+}
+kotelService := kotel.NewKotel(kotelOps...)
+
+// Create a new Kafka client.
+cl, err := kgo.NewClient(
+ // Pass in the kotel hook.
+ kgo.WithHooks(kotelService.Hooks()...),
+ // ...other opts.
+)
+```
+
+### Sending records
+
+When producing a record with franz-go, it will traced by kotel. To include
+parent traces, pass in an instrumented context.
+
+Here's an example of how to do this:
+
+```go
+func httpHandler(w http.ResponseWriter, r *http.Request) {
+ // Start a new span with options.
+ opts := []trace.SpanStartOption{
+ trace.WithSpanKind(trace.SpanKindServer),
+ trace.WithAttributes([]attribute.KeyValue{attribute.String("some-key", "foo")}...),
+ }
+ ctx, span := tracer.Start(r.Context(), "request", opts...)
+ // End the span when function exits.
+ defer span.End()
+
+ var wg sync.WaitGroup
+ wg.Add(1)
+ record := &kgo.Record{Topic: "topic", Value: []byte("foo")}
+ // Pass in the context from the tracer.Start() call to ensure that the span
+ // created is linked to the parent span.
+ cl.Produce(ctx, record, func(_ *kgo.Record, err error) {
+ defer wg.Done()
+ if err != nil {
+ fmt.Printf("record had a produce error: %v\n", err)
+ span.SetStatus(codes.Error, err.Error())
+ span.RecordError(err)
+ }
+ })
+ wg.Wait()
+}
+```
+
+### Processing Records
+
+Use the `kotel.Tracer.WithProcessSpan` method to start a "process" span. Make
+sure to end the span after you finish processing the record. The trace can be
+continued to the next processing step if desired.
+
+Here is an example of how you might do this:
+
+```go
+func processRecord(record *kgo.Record, tracer *kotel.Tracer) {
+ ctx, span := tracer.WithProcessSpan(record)
+ // Process the record here.
+ // End the span when function exits.
+ defer span.End()
+ // optionally pass the context to the next processing step.
+ fmt.Printf(
+ "processed offset '%s' with key '%s' and value '%s'\n",
+ strconv.FormatInt(record.Offset, 10),
+ string(record.Key),
+ string(record.Value),
+ )
+}
+```
+
+## Metrics
+
+The kotel meter module tracks various metrics related to the processing of
+records, such as the number of successful and unsuccessful connections, bytes
+written and read, and the number of buffered records. These metrics are all
+counters and are tracked under the following names:
+
+```
+messaging.kafka.connects.count{node_id = "#{node}"}
+messaging.kafka.connect_errors.count{node_id = "#{node}"}
+messaging.kafka.disconnects.count{node_id = "#{node}"}
+messaging.kafka.write_errors.count{node_id = "#{node}"}
+messaging.kafka.write_bytes{node_id = "#{node}"}
+messaging.kafka.read_errors.count{node_id = "#{node}"}
+messaging.kafka.read_bytes.count{node_id = "#{node}"}
+messaging.kafka.produce_bytes.count{node_id = "#{node}", topic = "#{topic}"}
+messaging.kafka.produce_records.count{node_id = "#{node}", topic = "#{topic}"}
+messaging.kafka.fetch_bytes.count{node_id = "#{node}", topic = "#{topic}"}
+messaging.kafka.fetch_records.count{node_id = "#{node}", topic = "#{topic}"}
+```
+
+### Getting started
+
+To start using kotel for metrics, you will need to:
+
+1. Set up a meter provider
+2. Configure any desired meter options
+3. Create a new kotel meter
+4. Create a new kotel service hook
+5. Create a new Kafka client and pass in the kotel hook
+
+Here's an example of how you might do this:
+
+```go
+// Initialize meter provider.
+meterProvider, err := initMeterProvider()
+
+// Create a new kotel meter.
+meterOpts := []kotel.MeterOpt{kotel.MeterProvider(meterProvider)}
+meter := kotel.NewMeter(meterOpts...)
+
+// Pass the meter to NewKotel hook.
+kotelOps := []kotel.Opt{
+ kotel.WithMeter(meter),
+}
+
+// Create a new kotel service.
+kotelService := kotel.NewKotel(kotelOps...)
+
+// Create a new Kafka client.
+cl, err := kgo.NewClient(
+ // Pass in the kotel hook.
+ kgo.WithHooks(kotelService.Hooks()...),
+ // ...other opts.
+)
+```
diff --git a/vendor/github.com/twmb/franz-go/plugin/kotel/carrier.go b/vendor/github.com/twmb/franz-go/plugin/kotel/carrier.go
new file mode 100644
index 0000000000000..e851ab6f9fa7e
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kotel/carrier.go
@@ -0,0 +1,53 @@
+package kotel
+
+import (
+ "github.com/twmb/franz-go/pkg/kgo"
+)
+
+// RecordCarrier injects and extracts traces from a kgo.Record.
+//
+// This type exists to satisfy the otel/propagation.TextMapCarrier interface.
+type RecordCarrier struct {
+ record *kgo.Record
+}
+
+// NewRecordCarrier creates a new RecordCarrier.
+func NewRecordCarrier(record *kgo.Record) RecordCarrier {
+ return RecordCarrier{record: record}
+}
+
+// Get retrieves a single value for a given key if it exists.
+func (c RecordCarrier) Get(key string) string {
+ for _, h := range c.record.Headers {
+ if h.Key == key {
+ return string(h.Value)
+ }
+ }
+ return ""
+}
+
+// Set sets a header.
+func (c RecordCarrier) Set(key, val string) {
+ // Check if key already exists.
+ for i, h := range c.record.Headers {
+ if h.Key == key {
+ // Key exist, update the value.
+ c.record.Headers[i].Value = []byte(val)
+ return
+ }
+ }
+ // Key does not exist, append new header.
+ c.record.Headers = append(c.record.Headers, kgo.RecordHeader{
+ Key: key,
+ Value: []byte(val),
+ })
+}
+
+// Keys returns a slice of all key identifiers in the carrier.
+func (c RecordCarrier) Keys() []string {
+ out := make([]string, len(c.record.Headers))
+ for i, h := range c.record.Headers {
+ out[i] = h.Key
+ }
+ return out
+}
diff --git a/vendor/github.com/twmb/franz-go/plugin/kotel/kotel.go b/vendor/github.com/twmb/franz-go/plugin/kotel/kotel.go
new file mode 100644
index 0000000000000..47e443b8ea42a
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kotel/kotel.go
@@ -0,0 +1,61 @@
+package kotel
+
+import (
+ "github.com/twmb/franz-go/pkg/kgo"
+)
+
+const (
+ instrumentationName = "github.com/twmb/franz-go/plugin/kotel"
+)
+
+// Kotel represents the configuration options available for the kotel plugin.
+type Kotel struct {
+ meter *Meter
+ tracer *Tracer
+}
+
+// Opt interface used for setting optional kotel properties.
+type Opt interface{ apply(*Kotel) }
+
+type optFunc func(*Kotel)
+
+func (o optFunc) apply(c *Kotel) { o(c) }
+
+// WithTracer configures Kotel with a Tracer.
+func WithTracer(t *Tracer) Opt {
+ return optFunc(func(k *Kotel) {
+ if t != nil {
+ k.tracer = t
+ }
+ })
+}
+
+// WithMeter configures Kotel with a Meter.
+func WithMeter(m *Meter) Opt {
+ return optFunc(func(k *Kotel) {
+ if m != nil {
+ k.meter = m
+ }
+ })
+}
+
+// Hooks return a list of kgo.hooks compatible with its interface.
+func (k *Kotel) Hooks() []kgo.Hook {
+ var hooks []kgo.Hook
+ if k.tracer != nil {
+ hooks = append(hooks, k.tracer)
+ }
+ if k.meter != nil {
+ hooks = append(hooks, k.meter)
+ }
+ return hooks
+}
+
+// NewKotel creates a new Kotel struct and applies opts to it.
+func NewKotel(opts ...Opt) *Kotel {
+ k := &Kotel{}
+ for _, opt := range opts {
+ opt.apply(k)
+ }
+ return k
+}
diff --git a/vendor/github.com/twmb/franz-go/plugin/kotel/meter.go b/vendor/github.com/twmb/franz-go/plugin/kotel/meter.go
new file mode 100644
index 0000000000000..8c56a8c7002d1
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kotel/meter.go
@@ -0,0 +1,375 @@
+package kotel
+
+import (
+ "context"
+ "log"
+ "math"
+ "net"
+ "strconv"
+ "time"
+
+ "github.com/twmb/franz-go/pkg/kgo"
+ "go.opentelemetry.io/otel"
+ "go.opentelemetry.io/otel/attribute"
+ "go.opentelemetry.io/otel/metric"
+ semconv "go.opentelemetry.io/otel/semconv/v1.18.0"
+)
+
+var ( // interface checks to ensure we implement the hooks properly
+ _ kgo.HookBrokerConnect = new(Meter)
+ _ kgo.HookBrokerDisconnect = new(Meter)
+ _ kgo.HookBrokerWrite = new(Meter)
+ _ kgo.HookBrokerRead = new(Meter)
+ _ kgo.HookProduceBatchWritten = new(Meter)
+ _ kgo.HookFetchBatchRead = new(Meter)
+)
+
+const (
+ dimensionless = "1"
+ bytes = "by"
+)
+
+type Meter struct {
+ provider metric.MeterProvider
+ meter metric.Meter
+ instruments instruments
+
+ mergeConnectsMeter bool
+}
+
+// MeterOpt interface used for setting optional config properties.
+type MeterOpt interface {
+ apply(*Meter)
+}
+
+type meterOptFunc func(*Meter)
+
+// MeterProvider takes a metric.MeterProvider and applies it to the Meter
+// If none is specified, the global provider is used.
+func MeterProvider(provider metric.MeterProvider) MeterOpt {
+ return meterOptFunc(func(m *Meter) {
+ if provider != nil {
+ m.provider = provider
+ }
+ })
+}
+
+// WithMergedConnectsMeter merges the `messaging.kafka.connect_errors.count`
+// counter into the `messaging.kafka.connects.count` counter, adding an
+// attribute "outcome" with the values "success" or "failure". This option
+// shall be used when a single metric with different dimensions is preferred
+// over two separate metrics that produce data at alternating intervals.
+// For example, it becomes possible to alert on the metric no longer
+// producing data.
+func WithMergedConnectsMeter() MeterOpt {
+ return meterOptFunc(func(m *Meter) {
+ m.mergeConnectsMeter = true
+ })
+
+}
+
+func (o meterOptFunc) apply(m *Meter) {
+ o(m)
+}
+
+// NewMeter returns a Meter, used as option for kotel to instrument franz-go
+// with instruments.
+func NewMeter(opts ...MeterOpt) *Meter {
+ m := &Meter{}
+ for _, opt := range opts {
+ opt.apply(m)
+ }
+ if m.provider == nil {
+ m.provider = otel.GetMeterProvider()
+ }
+ m.meter = m.provider.Meter(
+ instrumentationName,
+ metric.WithInstrumentationVersion(semVersion()),
+ metric.WithSchemaURL(semconv.SchemaURL),
+ )
+ m.instruments = m.newInstruments()
+ return m
+}
+
+// instruments ---------------------------------------------------------------
+
+type instruments struct {
+ connects metric.Int64Counter
+ connectErrs metric.Int64Counter
+ disconnects metric.Int64Counter
+
+ writeErrs metric.Int64Counter
+ writeBytes metric.Int64Counter
+
+ readErrs metric.Int64Counter
+ readBytes metric.Int64Counter
+
+ produceBytes metric.Int64Counter
+ produceRecords metric.Int64Counter
+ fetchBytes metric.Int64Counter
+ fetchRecords metric.Int64Counter
+}
+
+func (m *Meter) newInstruments() instruments {
+ // connects and disconnects
+ connects, err := m.meter.Int64Counter(
+ "messaging.kafka.connects.count",
+ metric.WithUnit(dimensionless),
+ metric.WithDescription("Total number of connections opened, by broker"),
+ )
+ if err != nil {
+ log.Printf("failed to create connects instrument, %v", err)
+ }
+
+ var connectErrs metric.Int64Counter
+ if !m.mergeConnectsMeter {
+ var err error
+ connectErrs, err = m.meter.Int64Counter(
+ "messaging.kafka.connect_errors.count",
+ metric.WithUnit(dimensionless),
+ metric.WithDescription("Total number of connection errors, by broker"),
+ )
+ if err != nil {
+ log.Printf("failed to create connectErrs instrument, %v", err)
+ }
+ }
+
+ disconnects, err := m.meter.Int64Counter(
+ "messaging.kafka.disconnects.count",
+ metric.WithUnit(dimensionless),
+ metric.WithDescription("Total number of connections closed, by broker"),
+ )
+ if err != nil {
+ log.Printf("failed to create disconnects instrument, %v", err)
+ }
+
+ // write
+
+ writeErrs, err := m.meter.Int64Counter(
+ "messaging.kafka.write_errors.count",
+ metric.WithUnit(dimensionless),
+ metric.WithDescription("Total number of write errors, by broker"),
+ )
+ if err != nil {
+ log.Printf("failed to create writeErrs instrument, %v", err)
+ }
+
+ writeBytes, err := m.meter.Int64Counter(
+ "messaging.kafka.write_bytes",
+ metric.WithUnit(bytes),
+ metric.WithDescription("Total number of bytes written, by broker"),
+ )
+ if err != nil {
+ log.Printf("failed to create writeBytes instrument, %v", err)
+ }
+
+ // read
+
+ readErrs, err := m.meter.Int64Counter(
+ "messaging.kafka.read_errors.count",
+ metric.WithUnit(dimensionless),
+ metric.WithDescription("Total number of read errors, by broker"),
+ )
+ if err != nil {
+ log.Printf("failed to create readErrs instrument, %v", err)
+ }
+
+ readBytes, err := m.meter.Int64Counter(
+ "messaging.kafka.read_bytes.count",
+ metric.WithUnit(bytes),
+ metric.WithDescription("Total number of bytes read, by broker"),
+ )
+ if err != nil {
+ log.Printf("failed to create readBytes instrument, %v", err)
+ }
+
+ // produce & consume
+
+ produceBytes, err := m.meter.Int64Counter(
+ "messaging.kafka.produce_bytes.count",
+ metric.WithUnit(bytes),
+ metric.WithDescription("Total number of uncompressed bytes produced, by broker and topic"),
+ )
+ if err != nil {
+ log.Printf("failed to create produceBytes instrument, %v", err)
+ }
+
+ produceRecords, err := m.meter.Int64Counter(
+ "messaging.kafka.produce_records.count",
+ metric.WithUnit(dimensionless),
+ metric.WithDescription("Total number of produced records, by broker and topic"),
+ )
+ if err != nil {
+ log.Printf("failed to create produceRecords instrument, %v", err)
+ }
+
+ fetchBytes, err := m.meter.Int64Counter(
+ "messaging.kafka.fetch_bytes.count",
+ metric.WithUnit(bytes),
+ metric.WithDescription("Total number of uncompressed bytes fetched, by broker and topic"),
+ )
+ if err != nil {
+ log.Printf("failed to create fetchBytes instrument, %v", err)
+ }
+
+ fetchRecords, err := m.meter.Int64Counter(
+ "messaging.kafka.fetch_records.count",
+ metric.WithUnit(dimensionless),
+ metric.WithDescription("Total number of fetched records, by broker and topic"),
+ )
+ if err != nil {
+ log.Printf("failed to create fetchRecords instrument, %v", err)
+ }
+
+ return instruments{
+ connects: connects,
+ connectErrs: connectErrs,
+ disconnects: disconnects,
+
+ writeErrs: writeErrs,
+ writeBytes: writeBytes,
+
+ readErrs: readErrs,
+ readBytes: readBytes,
+
+ produceBytes: produceBytes,
+ produceRecords: produceRecords,
+ fetchBytes: fetchBytes,
+ fetchRecords: fetchRecords,
+ }
+}
+
+// Helpers -------------------------------------------------------------------
+
+func strnode(node int32) string {
+ if node < 0 {
+ return "seed_" + strconv.Itoa(int(node)-math.MinInt32)
+ }
+ return strconv.Itoa(int(node))
+}
+
+// Hooks ---------------------------------------------------------------------
+
+func (m *Meter) OnBrokerConnect(meta kgo.BrokerMetadata, _ time.Duration, _ net.Conn, err error) {
+ node := strnode(meta.NodeID)
+
+ if m.mergeConnectsMeter {
+ if err != nil {
+ m.instruments.connects.Add(
+ context.Background(),
+ 1,
+ metric.WithAttributeSet(attribute.NewSet(
+ attribute.String("node_id", node),
+ attribute.String("outcome", "failure"),
+ )),
+ )
+ return
+ }
+ m.instruments.connects.Add(
+ context.Background(),
+ 1,
+ metric.WithAttributeSet(attribute.NewSet(
+ attribute.String("node_id", node),
+ attribute.String("outcome", "success"),
+ )),
+ )
+ return
+ }
+
+ attributes := attribute.NewSet(attribute.String("node_id", node))
+ if err != nil {
+ m.instruments.connectErrs.Add(
+ context.Background(),
+ 1,
+ metric.WithAttributeSet(attributes),
+ )
+ return
+ }
+ m.instruments.connects.Add(
+ context.Background(),
+ 1,
+ metric.WithAttributeSet(attributes),
+ )
+}
+
+func (m *Meter) OnBrokerDisconnect(meta kgo.BrokerMetadata, _ net.Conn) {
+ node := strnode(meta.NodeID)
+ attributes := attribute.NewSet(attribute.String("node_id", node))
+ m.instruments.disconnects.Add(
+ context.Background(),
+ 1,
+ metric.WithAttributeSet(attributes),
+ )
+}
+
+func (m *Meter) OnBrokerWrite(meta kgo.BrokerMetadata, _ int16, bytesWritten int, _, _ time.Duration, err error) {
+ node := strnode(meta.NodeID)
+ attributes := attribute.NewSet(attribute.String("node_id", node))
+ if err != nil {
+ m.instruments.writeErrs.Add(
+ context.Background(),
+ 1,
+ metric.WithAttributeSet(attributes),
+ )
+ return
+ }
+ m.instruments.writeBytes.Add(
+ context.Background(),
+ int64(bytesWritten),
+ metric.WithAttributeSet(attributes),
+ )
+}
+
+func (m *Meter) OnBrokerRead(meta kgo.BrokerMetadata, _ int16, bytesRead int, _, _ time.Duration, err error) {
+ node := strnode(meta.NodeID)
+ attributes := attribute.NewSet(attribute.String("node_id", node))
+ if err != nil {
+ m.instruments.readErrs.Add(
+ context.Background(),
+ 1,
+ metric.WithAttributeSet(attributes),
+ )
+ return
+ }
+ m.instruments.readBytes.Add(
+ context.Background(),
+ int64(bytesRead),
+ metric.WithAttributeSet(attributes),
+ )
+}
+
+func (m *Meter) OnProduceBatchWritten(meta kgo.BrokerMetadata, topic string, _ int32, pbm kgo.ProduceBatchMetrics) {
+ node := strnode(meta.NodeID)
+ attributes := attribute.NewSet(
+ attribute.String("node_id", node),
+ attribute.String("topic", topic),
+ )
+ m.instruments.produceBytes.Add(
+ context.Background(),
+ int64(pbm.UncompressedBytes),
+ metric.WithAttributeSet(attributes),
+ )
+ m.instruments.produceRecords.Add(
+ context.Background(),
+ int64(pbm.NumRecords),
+ metric.WithAttributeSet(attributes),
+ )
+}
+
+func (m *Meter) OnFetchBatchRead(meta kgo.BrokerMetadata, topic string, _ int32, fbm kgo.FetchBatchMetrics) {
+ node := strnode(meta.NodeID)
+ attributes := attribute.NewSet(
+ attribute.String("node_id", node),
+ attribute.String("topic", topic),
+ )
+ m.instruments.fetchBytes.Add(
+ context.Background(),
+ int64(fbm.UncompressedBytes),
+ metric.WithAttributeSet(attributes),
+ )
+ m.instruments.fetchRecords.Add(
+ context.Background(),
+ int64(fbm.NumRecords),
+ metric.WithAttributeSet(attributes),
+ )
+}
diff --git a/vendor/github.com/twmb/franz-go/plugin/kotel/tracer.go b/vendor/github.com/twmb/franz-go/plugin/kotel/tracer.go
new file mode 100644
index 0000000000000..0e3f6cf518b09
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kotel/tracer.go
@@ -0,0 +1,254 @@
+package kotel
+
+import (
+ "context"
+ "unicode/utf8"
+
+ "github.com/twmb/franz-go/pkg/kgo"
+ "go.opentelemetry.io/otel"
+ "go.opentelemetry.io/otel/attribute"
+ "go.opentelemetry.io/otel/codes"
+ "go.opentelemetry.io/otel/propagation"
+ semconv "go.opentelemetry.io/otel/semconv/v1.18.0"
+ "go.opentelemetry.io/otel/trace"
+)
+
+var ( // interface checks to ensure we implement the hooks properly.
+ _ kgo.HookProduceRecordBuffered = new(Tracer)
+ _ kgo.HookProduceRecordUnbuffered = new(Tracer)
+ _ kgo.HookFetchRecordBuffered = new(Tracer)
+ _ kgo.HookFetchRecordUnbuffered = new(Tracer)
+)
+
+type Tracer struct {
+ tracerProvider trace.TracerProvider
+ propagators propagation.TextMapPropagator
+ tracer trace.Tracer
+ clientID string
+ consumerGroup string
+ keyFormatter func(*kgo.Record) (string, error)
+}
+
+// TracerOpt interface used for setting optional config properties.
+type TracerOpt interface{ apply(*Tracer) }
+
+type tracerOptFunc func(*Tracer)
+
+func (o tracerOptFunc) apply(t *Tracer) { o(t) }
+
+// TracerProvider takes a trace.TracerProvider and applies it to the Tracer.
+// If none is specified, the global provider is used.
+func TracerProvider(provider trace.TracerProvider) TracerOpt {
+ return tracerOptFunc(func(t *Tracer) { t.tracerProvider = provider })
+}
+
+// TracerPropagator takes a propagation.TextMapPropagator and applies it to the
+// Tracer.
+//
+// If none is specified, the global Propagator is used.
+func TracerPropagator(propagator propagation.TextMapPropagator) TracerOpt {
+ return tracerOptFunc(func(t *Tracer) { t.propagators = propagator })
+}
+
+// ClientID sets the optional client_id attribute value.
+func ClientID(id string) TracerOpt {
+ return tracerOptFunc(func(t *Tracer) { t.clientID = id })
+}
+
+// ConsumerGroup sets the optional group attribute value.
+func ConsumerGroup(group string) TracerOpt {
+ return tracerOptFunc(func(t *Tracer) { t.consumerGroup = group })
+}
+
+// KeyFormatter formats a Record's key for use in a span's attributes,
+// overriding the default of string(Record.Key).
+//
+// This option can be used to parse binary data and return a canonical string
+// representation. If the returned string is not valid UTF-8 or if the
+// formatter returns an error, the key is not attached to the span.
+func KeyFormatter(fn func(*kgo.Record) (string, error)) TracerOpt {
+ return tracerOptFunc(func(t *Tracer) { t.keyFormatter = fn })
+}
+
+// NewTracer returns a Tracer, used as option for kotel to instrument franz-go
+// with tracing.
+func NewTracer(opts ...TracerOpt) *Tracer {
+ t := &Tracer{}
+ for _, opt := range opts {
+ opt.apply(t)
+ }
+ if t.tracerProvider == nil {
+ t.tracerProvider = otel.GetTracerProvider()
+ }
+ if t.propagators == nil {
+ t.propagators = otel.GetTextMapPropagator()
+ }
+ t.tracer = t.tracerProvider.Tracer(
+ instrumentationName,
+ trace.WithInstrumentationVersion(semVersion()),
+ trace.WithSchemaURL(semconv.SchemaURL),
+ )
+ return t
+}
+
+func (t *Tracer) maybeKeyAttr(attrs []attribute.KeyValue, r *kgo.Record) []attribute.KeyValue {
+ if r.Key == nil {
+ return attrs
+ }
+ var keykey string
+ if t.keyFormatter != nil {
+ k, err := t.keyFormatter(r)
+ if err != nil || !utf8.ValidString(k) {
+ return attrs
+ }
+ keykey = k
+ } else {
+ if !utf8.Valid(r.Key) {
+ return attrs
+ }
+ keykey = string(r.Key)
+ }
+ return append(attrs, semconv.MessagingKafkaMessageKeyKey.String(keykey))
+}
+
+// WithProcessSpan starts a new span for the "process" operation on a consumer
+// record.
+//
+// It sets up the span options. The user's application code is responsible for
+// ending the span.
+//
+// This should only ever be called within a polling loop of a consumed record and
+// not a record which has been created for producing, so call this at the start of each
+// iteration of your processing for the record.
+func (t *Tracer) WithProcessSpan(r *kgo.Record) (context.Context, trace.Span) {
+ // Set up the span options.
+ attrs := []attribute.KeyValue{
+ semconv.MessagingSystemKey.String("kafka"),
+ semconv.MessagingSourceKindTopic,
+ semconv.MessagingSourceName(r.Topic),
+ semconv.MessagingOperationProcess,
+ semconv.MessagingKafkaSourcePartition(int(r.Partition)),
+ semconv.MessagingKafkaMessageOffset(int(r.Offset)),
+ }
+ attrs = t.maybeKeyAttr(attrs, r)
+ if t.clientID != "" {
+ attrs = append(attrs, semconv.MessagingKafkaClientIDKey.String(t.clientID))
+ }
+ if t.consumerGroup != "" {
+ attrs = append(attrs, semconv.MessagingKafkaConsumerGroupKey.String(t.consumerGroup))
+ }
+ if r.Key != nil && r.Value == nil {
+ attrs = append(attrs, semconv.MessagingKafkaMessageTombstoneKey.Bool(true))
+ }
+ opts := []trace.SpanStartOption{
+ trace.WithAttributes(attrs...),
+ trace.WithSpanKind(trace.SpanKindConsumer),
+ }
+
+ if r.Context == nil {
+ r.Context = context.Background()
+ }
+ // Start a new span using the provided context and options.
+ return t.tracer.Start(r.Context, r.Topic+" process", opts...)
+}
+
+// Hooks ----------------------------------------------------------------------
+
+// OnProduceRecordBuffered starts a new span for the "publish" operation on a
+// buffered record.
+//
+// It sets span options and injects the span context into record and updates
+// the record's context, so it can be ended in the OnProduceRecordUnbuffered
+// hook.
+func (t *Tracer) OnProduceRecordBuffered(r *kgo.Record) {
+ // Set up span options.
+ attrs := []attribute.KeyValue{
+ semconv.MessagingSystemKey.String("kafka"),
+ semconv.MessagingDestinationKindTopic,
+ semconv.MessagingDestinationName(r.Topic),
+ semconv.MessagingOperationPublish,
+ }
+ attrs = t.maybeKeyAttr(attrs, r)
+ if t.clientID != "" {
+ attrs = append(attrs, semconv.MessagingKafkaClientIDKey.String(t.clientID))
+ }
+ if r.Key != nil && r.Value == nil {
+ attrs = append(attrs, semconv.MessagingKafkaMessageTombstoneKey.Bool(true))
+ }
+ opts := []trace.SpanStartOption{
+ trace.WithAttributes(attrs...),
+ trace.WithSpanKind(trace.SpanKindProducer),
+ }
+ // Start the "publish" span.
+ ctx, _ := t.tracer.Start(r.Context, r.Topic+" publish", opts...)
+ // Inject the span context into the record.
+ t.propagators.Inject(ctx, NewRecordCarrier(r))
+ // Update the record context.
+ r.Context = ctx
+}
+
+// OnProduceRecordUnbuffered continues and ends the "publish" span for an
+// unbuffered record.
+//
+// It sets attributes with values unset when producing and records any error
+// that occurred during the publish operation.
+func (t *Tracer) OnProduceRecordUnbuffered(r *kgo.Record, err error) {
+ span := trace.SpanFromContext(r.Context)
+ defer span.End()
+ span.SetAttributes(
+ semconv.MessagingKafkaDestinationPartition(int(r.Partition)),
+ )
+ if err != nil {
+ span.SetStatus(codes.Error, err.Error())
+ span.RecordError(err)
+ }
+}
+
+// OnFetchRecordBuffered starts a new span for the "receive" operation on a
+// buffered record.
+//
+// It sets the span options and extracts the span context from the record,
+// updates the record's context to ensure it can be ended in the
+// OnFetchRecordUnbuffered hook and can be used in downstream consumer
+// processing.
+func (t *Tracer) OnFetchRecordBuffered(r *kgo.Record) {
+ // Set up the span options.
+ attrs := []attribute.KeyValue{
+ semconv.MessagingSystemKey.String("kafka"),
+ semconv.MessagingSourceKindTopic,
+ semconv.MessagingSourceName(r.Topic),
+ semconv.MessagingOperationReceive,
+ semconv.MessagingKafkaSourcePartition(int(r.Partition)),
+ }
+ attrs = t.maybeKeyAttr(attrs, r)
+ if t.clientID != "" {
+ attrs = append(attrs, semconv.MessagingKafkaClientIDKey.String(t.clientID))
+ }
+ if t.consumerGroup != "" {
+ attrs = append(attrs, semconv.MessagingKafkaConsumerGroupKey.String(t.consumerGroup))
+ }
+ if r.Key != nil && r.Value == nil {
+ attrs = append(attrs, semconv.MessagingKafkaMessageTombstoneKey.Bool(true))
+ }
+ opts := []trace.SpanStartOption{
+ trace.WithAttributes(attrs...),
+ trace.WithSpanKind(trace.SpanKindConsumer),
+ }
+
+ if r.Context == nil {
+ r.Context = context.Background()
+ }
+ // Extract the span context from the record.
+ ctx := t.propagators.Extract(r.Context, NewRecordCarrier(r))
+ // Start the "receive" span.
+ newCtx, _ := t.tracer.Start(ctx, r.Topic+" receive", opts...)
+ // Update the record context.
+ r.Context = newCtx
+}
+
+// OnFetchRecordUnbuffered continues and ends the "receive" span for an
+// unbuffered record.
+func (t *Tracer) OnFetchRecordUnbuffered(r *kgo.Record, _ bool) {
+ span := trace.SpanFromContext(r.Context)
+ defer span.End()
+}
diff --git a/vendor/github.com/twmb/franz-go/plugin/kotel/version.go b/vendor/github.com/twmb/franz-go/plugin/kotel/version.go
new file mode 100644
index 0000000000000..0152aa7012c61
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kotel/version.go
@@ -0,0 +1,21 @@
+package kotel
+
+import "runtime/debug"
+
+// version is the current release version of the kotel instrumentation.
+func version() string {
+ info, ok := debug.ReadBuildInfo()
+ if ok {
+ for _, dep := range info.Deps {
+ if dep.Path == instrumentationName {
+ return dep.Version
+ }
+ }
+ }
+ return "unknown"
+}
+
+// semVersion is the semantic version to be supplied to tracer/meter creation.
+func semVersion() string {
+ return "semver:" + version()
+}
diff --git a/vendor/github.com/twmb/franz-go/plugin/kprom/LICENSE b/vendor/github.com/twmb/franz-go/plugin/kprom/LICENSE
new file mode 100644
index 0000000000000..36e18034325d5
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kprom/LICENSE
@@ -0,0 +1,24 @@
+Copyright 2020, Travis Bischel.
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in the
+ documentation and/or other materials provided with the distribution.
+ * Neither the name of the library nor the
+ names of its contributors may be used to endorse or promote products
+ derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/twmb/franz-go/plugin/kprom/README.md b/vendor/github.com/twmb/franz-go/plugin/kprom/README.md
new file mode 100644
index 0000000000000..5c0db3b04475a
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kprom/README.md
@@ -0,0 +1,42 @@
+kprom
+===
+
+kprom is a plug-in package to provide prometheus
+[metrics](https://pkg.go.dev/github.com/prometheus/client_golang/prometheus)
+through a
+[`kgo.Hook`](https://pkg.go.dev/github.com/twmb/franz-go/pkg/kgo#Hook).
+
+This package tracks the following metrics under the following names, all
+metrics being counter vecs:
+
+```go
+#{ns}_connects_total{node_id="#{node}"}
+#{ns}_connect_errors_total{node_id="#{node}"}
+#{ns}_write_errors_total{node_id="#{node}"}
+#{ns}_write_bytes_total{node_id="#{node}"}
+#{ns}_read_errors_total{node_id="#{node}"}
+#{ns}_read_bytes_total{node_id="#{node}"}
+#{ns}_produce_bytes_total{node_id="#{node}",topic="#{topic}"}
+#{ns}_fetch_bytes_total{node_id="#{node}",topic="#{topic}"}
+#{ns}_buffered_produce_records_total
+#{ns}_buffered_fetch_records_total
+```
+
+The above metrics can be expanded considerably with options in this package,
+allowing timings, uncompressed and compressed bytes, and different labels.
+
+Note that seed brokers use broker IDs prefixed with "seed_", with the number
+corresponding to which seed it is.
+
+To use,
+
+```go
+metrics := kprom.NewMetrics("namespace")
+cl, err := kgo.NewClient(
+ kgo.WithHooks(metrics),
+ // ...other opts
+)
+```
+
+You can use your own prometheus registry, as well as a few other options.
+See the package [documentation](https://pkg.go.dev/github.com/twmb/franz-go/plugin/kprom) for more info!
diff --git a/vendor/github.com/twmb/franz-go/plugin/kprom/config.go b/vendor/github.com/twmb/franz-go/plugin/kprom/config.go
new file mode 100644
index 0000000000000..d907bebe52284
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kprom/config.go
@@ -0,0 +1,233 @@
+package kprom
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+)
+
+type cfg struct {
+ namespace string
+ subsystem string
+
+ reg prometheus.Registerer
+ gatherer prometheus.Gatherer
+
+ withClientLabel bool
+ histograms map[Histogram][]float64
+ defBuckets []float64
+ fetchProduceOpts fetchProduceOpts
+
+ handlerOpts promhttp.HandlerOpts
+ goCollectors bool
+}
+
+func newCfg(namespace string, opts ...Opt) cfg {
+ regGatherer := RegistererGatherer(prometheus.NewRegistry())
+ cfg := cfg{
+ namespace: namespace,
+ reg: regGatherer,
+ gatherer: regGatherer,
+
+ defBuckets: DefBuckets,
+ fetchProduceOpts: fetchProduceOpts{
+ uncompressedBytes: true,
+ labels: []string{"node_id", "topic"},
+ },
+ }
+
+ for _, opt := range opts {
+ opt.apply(&cfg)
+ }
+
+ if cfg.goCollectors {
+ cfg.reg.MustRegister(prometheus.NewProcessCollector(prometheus.ProcessCollectorOpts{}))
+ cfg.reg.MustRegister(prometheus.NewGoCollector())
+ }
+
+ return cfg
+}
+
+// Opt is an option to configure Metrics.
+type Opt interface {
+ apply(*cfg)
+}
+
+type opt struct{ fn func(*cfg) }
+
+func (o opt) apply(c *cfg) { o.fn(c) }
+
+type RegistererGatherer interface {
+ prometheus.Registerer
+ prometheus.Gatherer
+}
+
+// Registry sets the registerer and gatherer to add metrics to, rather than a
+// new registry. Use this option if you want to configure both Gatherer and
+// Registerer with the same object.
+func Registry(rg RegistererGatherer) Opt {
+ return opt{func(c *cfg) {
+ c.reg = rg
+ c.gatherer = rg
+ }}
+}
+
+// Registerer sets the registerer to add register to, rather than a new registry.
+func Registerer(reg prometheus.Registerer) Opt {
+ return opt{func(c *cfg) { c.reg = reg }}
+}
+
+// Gatherer sets the gatherer to add gather to, rather than a new registry.
+func Gatherer(gatherer prometheus.Gatherer) Opt {
+ return opt{func(c *cfg) { c.gatherer = gatherer }}
+}
+
+// GoCollectors adds the prometheus.NewProcessCollector and
+// prometheus.NewGoCollector collectors the the Metric's registry.
+func GoCollectors() Opt {
+ return opt{func(c *cfg) { c.goCollectors = true }}
+}
+
+// HandlerOpts sets handler options to use if you wish you use the
+// Metrics.Handler function.
+//
+// This is only useful if you both (a) do not want to provide your own registry
+// and (b) want to override the default handler options.
+func HandlerOpts(opts promhttp.HandlerOpts) Opt {
+ return opt{func(c *cfg) { c.handlerOpts = opts }}
+}
+
+// WithClientLabel adds a "cliend_id" label to all metrics.
+func WithClientLabel() Opt {
+ return opt{func(c *cfg) { c.withClientLabel = true }}
+}
+
+// Subsystem sets the subsystem for the kprom metrics, overriding the default
+// empty string.
+func Subsystem(ss string) Opt {
+ return opt{func(c *cfg) { c.subsystem = ss }}
+}
+
+// Buckets sets the buckets to be used with Histograms, overriding the default
+// of [kprom.DefBuckets]. If custom buckets per histogram is needed,
+// HistogramOpts can be used.
+func Buckets(buckets []float64) Opt {
+ return opt{func(c *cfg) { c.defBuckets = buckets }}
+}
+
+// DefBuckets are the default Histogram buckets. The default buckets are
+// tailored to broadly measure the kafka timings (in seconds).
+var DefBuckets = []float64{0.001, 0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, 0.256, 0.512, 1.024, 2.048}
+
+// A Histogram is an identifier for a kprom histogram that can be enabled
+type Histogram uint8
+
+const (
+ ReadWait Histogram = iota // Enables {ns}_{ss}_read_wait_seconds.
+ ReadTime // Enables {ns}_{ss}_read_time_seconds.
+ WriteWait // Enables {ns}_{ss}_write_wait_seconds.
+ WriteTime // Enables {ns}_{ss}_write_time_seconds.
+ RequestDurationE2E // Enables {ns}_{ss}_request_durationE2E_seconds.
+ RequestThrottled // Enables {ns}_{ss}_request_throttled_seconds.
+)
+
+// HistogramOpts allows histograms to be enabled with custom buckets
+type HistogramOpts struct {
+ Enable Histogram
+ Buckets []float64
+}
+
+// HistogramsFromOpts allows the user full control of what histograms to enable
+// and define buckets to be used with each histogram.
+//
+// metrics, _ := kprom.NewMetrics(
+// kprom.HistogramsFromOpts(
+// kprom.HistogramOpts{
+// Enable: kprom.ReadWait,
+// Buckets: prometheus.LinearBuckets(10, 10, 8),
+// },
+// kprom.HistogramOpts{
+// Enable: kprom.ReadeTime,
+// // kprom default bucket will be used
+// },
+// ),
+// )
+func HistogramsFromOpts(hs ...HistogramOpts) Opt {
+ return opt{func(c *cfg) {
+ c.histograms = make(map[Histogram][]float64)
+ for _, h := range hs {
+ c.histograms[h.Enable] = h.Buckets
+ }
+ }}
+}
+
+// Histograms sets the histograms to be enabled for kprom, overiding the
+// default of disabling all histograms.
+//
+// metrics, _ := kprom.NewMetrics(
+// kprom.Histograms(
+// kprom.RequestDurationE2E,
+// ),
+// )
+func Histograms(hs ...Histogram) Opt {
+ hos := make([]HistogramOpts, 0)
+ for _, h := range hs {
+ hos = append(hos, HistogramOpts{Enable: h})
+ }
+ return HistogramsFromOpts(hos...)
+}
+
+// A Detail is a label that can be set on fetch/produce metrics
+type Detail uint8
+
+const (
+ ByNode Detail = iota // Include label "node_id" for fetch and produce metrics.
+ ByTopic // Include label "topic" for fetch and produce metrics.
+ Batches // Report number of fetched and produced batches.
+ Records // Report the number of fetched and produced records.
+ CompressedBytes // Report the number of fetched and produced compressed bytes.
+ UncompressedBytes // Report the number of fetched and produced uncompressed bytes.
+ ConsistentNaming // Renames {fetch,produce}_bytes_total to {fetch,produce}_uncompressed_bytes_total, making the names consistent with the CompressedBytes detail.
+)
+
+type fetchProduceOpts struct {
+ labels []string
+ batches bool
+ records bool
+ compressedBytes bool
+ uncompressedBytes bool
+ consistentNaming bool
+}
+
+// FetchAndProduceDetail determines details for fetch/produce metrics,
+// overriding the default of (UncompressedBytes, ByTopic, ByNode).
+func FetchAndProduceDetail(details ...Detail) Opt {
+ return opt{
+ func(c *cfg) {
+ labelsDeduped := make(map[Detail]string)
+ c.fetchProduceOpts = fetchProduceOpts{}
+ for _, l := range details {
+ switch l {
+ case ByTopic:
+ labelsDeduped[ByTopic] = "topic"
+ case ByNode:
+ labelsDeduped[ByNode] = "node_id"
+ case Batches:
+ c.fetchProduceOpts.batches = true
+ case Records:
+ c.fetchProduceOpts.records = true
+ case UncompressedBytes:
+ c.fetchProduceOpts.uncompressedBytes = true
+ case CompressedBytes:
+ c.fetchProduceOpts.compressedBytes = true
+ case ConsistentNaming:
+ c.fetchProduceOpts.consistentNaming = true
+ }
+ }
+ var labels []string
+ for _, l := range labelsDeduped {
+ labels = append(labels, l)
+ }
+ c.fetchProduceOpts.labels = labels
+ },
+ }
+}
diff --git a/vendor/github.com/twmb/franz-go/plugin/kprom/kprom.go b/vendor/github.com/twmb/franz-go/plugin/kprom/kprom.go
new file mode 100644
index 0000000000000..9d6a470048726
--- /dev/null
+++ b/vendor/github.com/twmb/franz-go/plugin/kprom/kprom.go
@@ -0,0 +1,510 @@
+// Package kprom provides prometheus plug-in metrics for a kgo client.
+//
+// This package tracks the following metrics under the following names,
+// all metrics being counter vecs:
+//
+// #{ns}_connects_total{node_id="#{node}"}
+// #{ns}_connect_errors_total{node_id="#{node}"}
+// #{ns}_write_errors_total{node_id="#{node}"}
+// #{ns}_write_bytes_total{node_id="#{node}"}
+// #{ns}_read_errors_total{node_id="#{node}"}
+// #{ns}_read_bytes_total{node_id="#{node}"}
+// #{ns}_produce_bytes_total{node_id="#{node}",topic="#{topic}"}
+// #{ns}_fetch_bytes_total{node_id="#{node}",topic="#{topic}"}
+// #{ns}_buffered_produce_records_total
+// #{ns}_buffered_fetch_records_total
+//
+// The above metrics can be expanded considerably with options in this package,
+// allowing timings, uncompressed and compressed bytes, and different labels.
+//
+// This can be used in a client like so:
+//
+// m := kprom.NewMetrics("my_namespace")
+// cl, err := kgo.NewClient(
+// kgo.WithHooks(m),
+// // ...other opts
+// )
+//
+// More examples are linked in the main project readme: https://github.com/twmb/franz-go/#metrics--logging
+//
+// By default, metrics are installed under the a new prometheus registry, but
+// this can be overridden with the Registry option.
+//
+// Note that seed brokers use broker IDs prefixed with "seed_", with the number
+// corresponding to which seed it is.
+package kprom
+
+import (
+ "net"
+ "net/http"
+ "time"
+
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/prometheus/client_golang/prometheus/promhttp"
+
+ "github.com/twmb/franz-go/pkg/kgo"
+)
+
+var ( // interface checks to ensure we implement the hooks properly
+ _ kgo.HookBrokerConnect = new(Metrics)
+ _ kgo.HookBrokerDisconnect = new(Metrics)
+ _ kgo.HookBrokerWrite = new(Metrics)
+ _ kgo.HookBrokerRead = new(Metrics)
+ _ kgo.HookProduceBatchWritten = new(Metrics)
+ _ kgo.HookFetchBatchRead = new(Metrics)
+ _ kgo.HookBrokerE2E = new(Metrics)
+ _ kgo.HookBrokerThrottle = new(Metrics)
+ _ kgo.HookNewClient = new(Metrics)
+ _ kgo.HookClientClosed = new(Metrics)
+)
+
+// Metrics provides prometheus metrics
+type Metrics struct {
+ cfg cfg
+
+ // Connection
+ connConnectsTotal *prometheus.CounterVec
+ connConnectErrorsTotal *prometheus.CounterVec
+ connDisconnectsTotal *prometheus.CounterVec
+
+ // Write
+ writeBytesTotal *prometheus.CounterVec
+ writeErrorsTotal *prometheus.CounterVec
+ writeWaitSeconds *prometheus.HistogramVec
+ writeTimeSeconds *prometheus.HistogramVec
+
+ // Read
+ readBytesTotal *prometheus.CounterVec
+ readErrorsTotal *prometheus.CounterVec
+ readWaitSeconds *prometheus.HistogramVec
+ readTimeSeconds *prometheus.HistogramVec
+
+ // Request E2E & Throttle
+ requestDurationE2ESeconds *prometheus.HistogramVec
+ requestThrottledSeconds *prometheus.HistogramVec
+
+ // Produce
+ produceCompressedBytes *prometheus.CounterVec
+ produceUncompressedBytes *prometheus.CounterVec
+ produceBatchesTotal *prometheus.CounterVec
+ produceRecordsTotal *prometheus.CounterVec
+
+ // Fetch
+ fetchCompressedBytes *prometheus.CounterVec
+ fetchUncompressedBytes *prometheus.CounterVec
+ fetchBatchesTotal *prometheus.CounterVec
+ fetchRecordsTotal *prometheus.CounterVec
+
+ // Buffered
+ bufferedFetchRecords prometheus.GaugeFunc
+ bufferedProduceRecords prometheus.GaugeFunc
+}
+
+// NewMetrics returns a new Metrics that adds prometheus metrics to the
+// registry under the given namespace.
+func NewMetrics(namespace string, opts ...Opt) *Metrics {
+ return &Metrics{cfg: newCfg(namespace, opts...)}
+}
+
+// Registry returns the prometheus registry that metrics were added to.
+//
+// This is useful if you want the Metrics type to create its own registry for
+// you to add additional metrics to.
+func (m *Metrics) Registry() prometheus.Registerer {
+ return m.cfg.reg
+}
+
+// Handler returns an http.Handler providing prometheus metrics.
+func (m *Metrics) Handler() http.Handler {
+ return promhttp.HandlerFor(m.cfg.gatherer, m.cfg.handlerOpts)
+}
+
+// OnNewClient implements the HookNewClient interface for metrics
+// gathering.
+// This method is meant to be called by the hook system and not by the user
+func (m *Metrics) OnNewClient(client *kgo.Client) {
+ var (
+ factory = promauto.With(m.cfg.reg)
+ namespace = m.cfg.namespace
+ subsystem = m.cfg.subsystem
+ constLabels prometheus.Labels
+ )
+ if m.cfg.withClientLabel {
+ constLabels = make(prometheus.Labels)
+ constLabels["client_id"] = client.OptValue(kgo.ClientID).(string)
+ }
+
+ // returns Hist buckets if set, otherwise defBucket
+ getHistogramBuckets := func(h Histogram) []float64 {
+ if buckets, ok := m.cfg.histograms[h]; ok && len(buckets) != 0 {
+ return buckets
+ }
+ return m.cfg.defBuckets
+ }
+
+ // Connection
+
+ m.connConnectsTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "connects_total",
+ Help: "Total number of connections opened",
+ }, []string{"node_id"})
+
+ m.connConnectErrorsTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "connect_errors_total",
+ Help: "Total number of connection errors",
+ }, []string{"node_id"})
+
+ m.connDisconnectsTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "disconnects_total",
+ Help: "Total number of connections closed",
+ }, []string{"node_id"})
+
+ // Write
+
+ m.writeBytesTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "write_bytes_total",
+ Help: "Total number of bytes written",
+ }, []string{"node_id"})
+
+ m.writeErrorsTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "write_errors_total",
+ Help: "Total number of write errors",
+ }, []string{"node_id"})
+
+ m.writeWaitSeconds = factory.NewHistogramVec(prometheus.HistogramOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "write_wait_seconds",
+ Help: "Time spent waiting to write to Kafka",
+ Buckets: getHistogramBuckets(WriteWait),
+ }, []string{"node_id"})
+
+ m.writeTimeSeconds = factory.NewHistogramVec(prometheus.HistogramOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "write_time_seconds",
+ Help: "Time spent writing to Kafka",
+ Buckets: getHistogramBuckets(WriteTime),
+ }, []string{"node_id"})
+
+ // Read
+
+ m.readBytesTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "read_bytes_total",
+ Help: "Total number of bytes read",
+ }, []string{"node_id"})
+
+ m.readErrorsTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "read_errors_total",
+ Help: "Total number of read errors",
+ }, []string{"node_id"})
+
+ m.readWaitSeconds = factory.NewHistogramVec(prometheus.HistogramOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "read_wait_seconds",
+ Help: "Time spent waiting to read from Kafka",
+ Buckets: getHistogramBuckets(ReadWait),
+ }, []string{"node_id"})
+
+ m.readTimeSeconds = factory.NewHistogramVec(prometheus.HistogramOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "read_time_seconds",
+ Help: "Time spent reading from Kafka",
+ Buckets: getHistogramBuckets(ReadTime),
+ }, []string{"node_id"})
+
+ // Request E2E duration & Throttle
+
+ m.requestDurationE2ESeconds = factory.NewHistogramVec(prometheus.HistogramOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "request_duration_e2e_seconds",
+ Help: "Time from the start of when a request is written to the end of when the response for that request was fully read",
+ Buckets: getHistogramBuckets(RequestDurationE2E),
+ }, []string{"node_id"})
+
+ m.requestThrottledSeconds = factory.NewHistogramVec(prometheus.HistogramOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "request_throttled_seconds",
+ Help: "Time the request was throttled",
+ Buckets: getHistogramBuckets(RequestThrottled),
+ }, []string{"node_id"})
+
+ // Produce
+
+ m.produceCompressedBytes = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "produce_compressed_bytes_total",
+ Help: "Total number of compressed bytes produced",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ produceUncompressedBytesName := "produce_bytes_total"
+ if m.cfg.fetchProduceOpts.consistentNaming {
+ produceUncompressedBytesName = "produce_uncompressed_bytes_total"
+ }
+ m.produceUncompressedBytes = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: produceUncompressedBytesName,
+ Help: "Total number of uncompressed bytes produced",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ m.produceBatchesTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "produce_batches_total",
+ Help: "Total number of batches produced",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ m.produceRecordsTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "produce_records_total",
+ Help: "Total number of records produced",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ // Fetch
+
+ m.fetchCompressedBytes = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "fetch_compressed_bytes_total",
+ Help: "Total number of compressed bytes fetched",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ fetchUncompressedBytesName := "fetch_bytes_total"
+ if m.cfg.fetchProduceOpts.consistentNaming {
+ fetchUncompressedBytesName = "fetch_uncompressed_bytes_total"
+ }
+ m.fetchUncompressedBytes = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: fetchUncompressedBytesName,
+ Help: "Total number of uncompressed bytes fetched",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ m.fetchBatchesTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "fetch_batches_total",
+ Help: "Total number of batches fetched",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ m.fetchRecordsTotal = factory.NewCounterVec(prometheus.CounterOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "fetch_records_total",
+ Help: "Total number of records fetched",
+ }, m.cfg.fetchProduceOpts.labels)
+
+ // Buffers
+
+ m.bufferedProduceRecords = factory.NewGaugeFunc(
+ prometheus.GaugeOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "buffered_produce_records_total",
+ Help: "Total number of records buffered within the client ready to be produced",
+ },
+ func() float64 { return float64(client.BufferedProduceRecords()) },
+ )
+
+ m.bufferedFetchRecords = factory.NewGaugeFunc(
+ prometheus.GaugeOpts{
+ Namespace: namespace,
+ Subsystem: subsystem,
+ ConstLabels: constLabels,
+ Name: "buffered_fetch_records_total",
+ Help: "Total number of records buffered within the client ready to be consumed",
+ },
+ func() float64 { return float64(client.BufferedFetchRecords()) },
+ )
+}
+
+// OnClientClosed will unregister kprom metrics from kprom registerer
+func (m *Metrics) OnClientClosed(*kgo.Client) {
+ _ = m.cfg.reg.Unregister(m.connConnectsTotal)
+ _ = m.cfg.reg.Unregister(m.connConnectErrorsTotal)
+ _ = m.cfg.reg.Unregister(m.connDisconnectsTotal)
+ _ = m.cfg.reg.Unregister(m.writeBytesTotal)
+ _ = m.cfg.reg.Unregister(m.writeErrorsTotal)
+ _ = m.cfg.reg.Unregister(m.writeWaitSeconds)
+ _ = m.cfg.reg.Unregister(m.writeTimeSeconds)
+ _ = m.cfg.reg.Unregister(m.readBytesTotal)
+ _ = m.cfg.reg.Unregister(m.readErrorsTotal)
+ _ = m.cfg.reg.Unregister(m.readWaitSeconds)
+ _ = m.cfg.reg.Unregister(m.readTimeSeconds)
+ _ = m.cfg.reg.Unregister(m.requestDurationE2ESeconds)
+ _ = m.cfg.reg.Unregister(m.requestThrottledSeconds)
+ _ = m.cfg.reg.Unregister(m.produceCompressedBytes)
+ _ = m.cfg.reg.Unregister(m.produceUncompressedBytes)
+ _ = m.cfg.reg.Unregister(m.produceBatchesTotal)
+ _ = m.cfg.reg.Unregister(m.produceRecordsTotal)
+ _ = m.cfg.reg.Unregister(m.fetchCompressedBytes)
+ _ = m.cfg.reg.Unregister(m.fetchUncompressedBytes)
+ _ = m.cfg.reg.Unregister(m.fetchBatchesTotal)
+ _ = m.cfg.reg.Unregister(m.fetchRecordsTotal)
+ _ = m.cfg.reg.Unregister(m.bufferedFetchRecords)
+ _ = m.cfg.reg.Unregister(m.bufferedProduceRecords)
+}
+
+// OnBrokerConnect implements the HookBrokerConnect interface for metrics
+// gathering.
+// This method is meant to be called by the hook system and not by the user
+func (m *Metrics) OnBrokerConnect(meta kgo.BrokerMetadata, _ time.Duration, _ net.Conn, err error) {
+ nodeId := kgo.NodeName(meta.NodeID)
+ if err != nil {
+ m.connConnectErrorsTotal.WithLabelValues(nodeId).Inc()
+ return
+ }
+ m.connConnectsTotal.WithLabelValues(nodeId).Inc()
+}
+
+// OnBrokerDisconnect implements the HookBrokerDisconnect interface for metrics
+// gathering.
+// This method is meant to be called by the hook system and not by the user
+func (m *Metrics) OnBrokerDisconnect(meta kgo.BrokerMetadata, _ net.Conn) {
+ nodeId := kgo.NodeName(meta.NodeID)
+ m.connDisconnectsTotal.WithLabelValues(nodeId).Inc()
+}
+
+// OnBrokerThrottle implements the HookBrokerThrottle interface for metrics
+// gathering.
+// This method is meant to be called by the hook system and not by the user
+func (m *Metrics) OnBrokerThrottle(meta kgo.BrokerMetadata, throttleInterval time.Duration, _ bool) {
+ if _, ok := m.cfg.histograms[RequestThrottled]; ok {
+ nodeId := kgo.NodeName(meta.NodeID)
+ m.requestThrottledSeconds.WithLabelValues(nodeId).Observe(throttleInterval.Seconds())
+ }
+}
+
+// OnProduceBatchWritten implements the HookProduceBatchWritten interface for
+// metrics gathering.
+// This method is meant to be called by the hook system and not by the user
+func (m *Metrics) OnProduceBatchWritten(meta kgo.BrokerMetadata, topic string, _ int32, metrics kgo.ProduceBatchMetrics) {
+ labels := m.fetchProducerLabels(kgo.NodeName(meta.NodeID), topic)
+ if m.cfg.fetchProduceOpts.uncompressedBytes {
+ m.produceUncompressedBytes.With(labels).Add(float64(metrics.UncompressedBytes))
+ }
+ if m.cfg.fetchProduceOpts.compressedBytes {
+ m.produceCompressedBytes.With(labels).Add(float64(metrics.CompressedBytes))
+ }
+ if m.cfg.fetchProduceOpts.batches {
+ m.produceBatchesTotal.With(labels).Inc()
+ }
+ if m.cfg.fetchProduceOpts.records {
+ m.produceRecordsTotal.With(labels).Add(float64(metrics.NumRecords))
+ }
+}
+
+// OnFetchBatchRead implements the HookFetchBatchRead interface for metrics
+// gathering.
+// This method is meant to be called by the hook system and not by the user
+func (m *Metrics) OnFetchBatchRead(meta kgo.BrokerMetadata, topic string, _ int32, metrics kgo.FetchBatchMetrics) {
+ labels := m.fetchProducerLabels(kgo.NodeName(meta.NodeID), topic)
+ if m.cfg.fetchProduceOpts.uncompressedBytes {
+ m.fetchUncompressedBytes.With(labels).Add(float64(metrics.UncompressedBytes))
+ }
+ if m.cfg.fetchProduceOpts.compressedBytes {
+ m.fetchCompressedBytes.With(labels).Add(float64(metrics.CompressedBytes))
+ }
+ if m.cfg.fetchProduceOpts.batches {
+ m.fetchBatchesTotal.With(labels).Inc()
+ }
+ if m.cfg.fetchProduceOpts.records {
+ m.fetchRecordsTotal.With(labels).Add(float64(metrics.NumRecords))
+ }
+}
+
+// // Nop hook for compat, logic moved to OnBrokerE2E
+func (m *Metrics) OnBrokerRead(meta kgo.BrokerMetadata, _ int16, bytesRead int, _, _ time.Duration, err error) {
+}
+
+// Nop hook for compat, logic moved to OnBrokerE2E
+func (m *Metrics) OnBrokerWrite(meta kgo.BrokerMetadata, _ int16, bytesWritten int, _, _ time.Duration, err error) {
+}
+
+// OnBrokerE2E implements the HookBrokerE2E interface for metrics gathering
+// This method is meant to be called by the hook system and not by the user
+func (m *Metrics) OnBrokerE2E(meta kgo.BrokerMetadata, _ int16, e2e kgo.BrokerE2E) {
+ nodeId := kgo.NodeName(meta.NodeID)
+ if e2e.WriteErr != nil {
+ m.writeErrorsTotal.WithLabelValues(nodeId).Inc()
+ return
+ }
+ m.writeBytesTotal.WithLabelValues(nodeId).Add(float64(e2e.BytesWritten))
+ if _, ok := m.cfg.histograms[WriteWait]; ok {
+ m.writeWaitSeconds.WithLabelValues(nodeId).Observe(e2e.WriteWait.Seconds())
+ }
+ if _, ok := m.cfg.histograms[WriteTime]; ok {
+ m.writeTimeSeconds.WithLabelValues(nodeId).Observe(e2e.TimeToWrite.Seconds())
+ }
+ if e2e.ReadErr != nil {
+ m.readErrorsTotal.WithLabelValues(nodeId).Inc()
+ return
+ }
+ m.readBytesTotal.WithLabelValues(nodeId).Add(float64(e2e.BytesRead))
+ if _, ok := m.cfg.histograms[ReadWait]; ok {
+ m.readWaitSeconds.WithLabelValues(nodeId).Observe(e2e.ReadWait.Seconds())
+ }
+ if _, ok := m.cfg.histograms[ReadTime]; ok {
+ m.readTimeSeconds.WithLabelValues(nodeId).Observe(e2e.TimeToRead.Seconds())
+ }
+ if _, ok := m.cfg.histograms[RequestDurationE2E]; ok {
+ m.requestDurationE2ESeconds.WithLabelValues(nodeId).Observe(e2e.DurationE2E().Seconds())
+ }
+}
+
+func (m *Metrics) fetchProducerLabels(nodeId, topic string) prometheus.Labels {
+ labels := make(prometheus.Labels, 2)
+ for _, l := range m.cfg.fetchProduceOpts.labels {
+ switch l {
+ case "topic":
+ labels[l] = topic
+ case "node_id":
+ labels[l] = nodeId
+ }
+ }
+ return labels
+}
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/README.md b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/README.md
new file mode 100644
index 0000000000000..5f03e01386f23
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/README.md
@@ -0,0 +1,3 @@
+# Semconv v1.18.0
+
+[![PkgGoDev](https://pkg.go.dev/badge/go.opentelemetry.io/otel/semconv/v1.18.0)](https://pkg.go.dev/go.opentelemetry.io/otel/semconv/v1.18.0)
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/doc.go b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/doc.go
new file mode 100644
index 0000000000000..ff55fe79b5885
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/doc.go
@@ -0,0 +1,9 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+// Package semconv implements OpenTelemetry semantic conventions.
+//
+// OpenTelemetry semantic conventions are agreed standardized naming
+// patterns for OpenTelemetry things. This package represents the conventions
+// as of the v1.18.0 version of the OpenTelemetry specification.
+package semconv // import "go.opentelemetry.io/otel/semconv/v1.18.0"
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/event.go b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/event.go
new file mode 100644
index 0000000000000..60ef182ffcd0d
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/event.go
@@ -0,0 +1,188 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+// Code generated from semantic convention specification. DO NOT EDIT.
+
+package semconv // import "go.opentelemetry.io/otel/semconv/v1.18.0"
+
+import "go.opentelemetry.io/otel/attribute"
+
+// This semantic convention defines the attributes used to represent a feature
+// flag evaluation as an event.
+const (
+ // FeatureFlagKeyKey is the attribute Key conforming to the
+ // "feature_flag.key" semantic conventions. It represents the unique
+ // identifier of the feature flag.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'logo-color'
+ FeatureFlagKeyKey = attribute.Key("feature_flag.key")
+
+ // FeatureFlagProviderNameKey is the attribute Key conforming to the
+ // "feature_flag.provider_name" semantic conventions. It represents the
+ // name of the service provider that performs the flag evaluation.
+ //
+ // Type: string
+ // RequirementLevel: Recommended
+ // Stability: stable
+ // Examples: 'Flag Manager'
+ FeatureFlagProviderNameKey = attribute.Key("feature_flag.provider_name")
+
+ // FeatureFlagVariantKey is the attribute Key conforming to the
+ // "feature_flag.variant" semantic conventions. It represents the sHOULD be
+ // a semantic identifier for a value. If one is unavailable, a stringified
+ // version of the value can be used.
+ //
+ // Type: string
+ // RequirementLevel: Recommended
+ // Stability: stable
+ // Examples: 'red', 'true', 'on'
+ // Note: A semantic identifier, commonly referred to as a variant, provides
+ // a means
+ // for referring to a value without including the value itself. This can
+ // provide additional context for understanding the meaning behind a value.
+ // For example, the variant `red` maybe be used for the value `#c05543`.
+ //
+ // A stringified version of the value can be used in situations where a
+ // semantic identifier is unavailable. String representation of the value
+ // should be determined by the implementer.
+ FeatureFlagVariantKey = attribute.Key("feature_flag.variant")
+)
+
+// FeatureFlagKey returns an attribute KeyValue conforming to the
+// "feature_flag.key" semantic conventions. It represents the unique identifier
+// of the feature flag.
+func FeatureFlagKey(val string) attribute.KeyValue {
+ return FeatureFlagKeyKey.String(val)
+}
+
+// FeatureFlagProviderName returns an attribute KeyValue conforming to the
+// "feature_flag.provider_name" semantic conventions. It represents the name of
+// the service provider that performs the flag evaluation.
+func FeatureFlagProviderName(val string) attribute.KeyValue {
+ return FeatureFlagProviderNameKey.String(val)
+}
+
+// FeatureFlagVariant returns an attribute KeyValue conforming to the
+// "feature_flag.variant" semantic conventions. It represents the sHOULD be a
+// semantic identifier for a value. If one is unavailable, a stringified
+// version of the value can be used.
+func FeatureFlagVariant(val string) attribute.KeyValue {
+ return FeatureFlagVariantKey.String(val)
+}
+
+// RPC received/sent message.
+const (
+ // MessageTypeKey is the attribute Key conforming to the "message.type"
+ // semantic conventions. It represents the whether this is a received or
+ // sent message.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessageTypeKey = attribute.Key("message.type")
+
+ // MessageIDKey is the attribute Key conforming to the "message.id"
+ // semantic conventions. It represents the mUST be calculated as two
+ // different counters starting from `1` one for sent messages and one for
+ // received message.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Note: This way we guarantee that the values will be consistent between
+ // different implementations.
+ MessageIDKey = attribute.Key("message.id")
+
+ // MessageCompressedSizeKey is the attribute Key conforming to the
+ // "message.compressed_size" semantic conventions. It represents the
+ // compressed size of the message in bytes.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessageCompressedSizeKey = attribute.Key("message.compressed_size")
+
+ // MessageUncompressedSizeKey is the attribute Key conforming to the
+ // "message.uncompressed_size" semantic conventions. It represents the
+ // uncompressed size of the message in bytes.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessageUncompressedSizeKey = attribute.Key("message.uncompressed_size")
+)
+
+var (
+ // sent
+ MessageTypeSent = MessageTypeKey.String("SENT")
+ // received
+ MessageTypeReceived = MessageTypeKey.String("RECEIVED")
+)
+
+// MessageID returns an attribute KeyValue conforming to the "message.id"
+// semantic conventions. It represents the mUST be calculated as two different
+// counters starting from `1` one for sent messages and one for received
+// message.
+func MessageID(val int) attribute.KeyValue {
+ return MessageIDKey.Int(val)
+}
+
+// MessageCompressedSize returns an attribute KeyValue conforming to the
+// "message.compressed_size" semantic conventions. It represents the compressed
+// size of the message in bytes.
+func MessageCompressedSize(val int) attribute.KeyValue {
+ return MessageCompressedSizeKey.Int(val)
+}
+
+// MessageUncompressedSize returns an attribute KeyValue conforming to the
+// "message.uncompressed_size" semantic conventions. It represents the
+// uncompressed size of the message in bytes.
+func MessageUncompressedSize(val int) attribute.KeyValue {
+ return MessageUncompressedSizeKey.Int(val)
+}
+
+// The attributes used to report a single exception associated with a span.
+const (
+ // ExceptionEscapedKey is the attribute Key conforming to the
+ // "exception.escaped" semantic conventions. It represents the sHOULD be
+ // set to true if the exception event is recorded at a point where it is
+ // known that the exception is escaping the scope of the span.
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Note: An exception is considered to have escaped (or left) the scope of
+ // a span,
+ // if that span is ended while the exception is still logically "in
+ // flight".
+ // This may be actually "in flight" in some languages (e.g. if the
+ // exception
+ // is passed to a Context manager's `__exit__` method in Python) but will
+ // usually be caught at the point of recording the exception in most
+ // languages.
+ //
+ // It is usually not possible to determine at the point where an exception
+ // is thrown
+ // whether it will escape the scope of a span.
+ // However, it is trivial to know that an exception
+ // will escape, if one checks for an active exception just before ending
+ // the span,
+ // as done in the [example above](#recording-an-exception).
+ //
+ // It follows that an exception may still escape the scope of the span
+ // even if the `exception.escaped` attribute was not set or set to false,
+ // since the event might have been recorded at a time where it was not
+ // clear whether the exception will escape.
+ ExceptionEscapedKey = attribute.Key("exception.escaped")
+)
+
+// ExceptionEscaped returns an attribute KeyValue conforming to the
+// "exception.escaped" semantic conventions. It represents the sHOULD be set to
+// true if the exception event is recorded at a point where it is known that
+// the exception is escaping the scope of the span.
+func ExceptionEscaped(val bool) attribute.KeyValue {
+ return ExceptionEscapedKey.Bool(val)
+}
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/exception.go b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/exception.go
new file mode 100644
index 0000000000000..d7b2de12475d3
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/exception.go
@@ -0,0 +1,9 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+package semconv // import "go.opentelemetry.io/otel/semconv/v1.18.0"
+
+const (
+ // ExceptionEventName is the name of the Span event representing an exception.
+ ExceptionEventName = "exception"
+)
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/http.go b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/http.go
new file mode 100644
index 0000000000000..9c5d10fe5562f
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/http.go
@@ -0,0 +1,10 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+package semconv // import "go.opentelemetry.io/otel/semconv/v1.18.0"
+
+// HTTP scheme attributes.
+var (
+ HTTPSchemeHTTP = HTTPSchemeKey.String("http")
+ HTTPSchemeHTTPS = HTTPSchemeKey.String("https")
+)
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/resource.go b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/resource.go
new file mode 100644
index 0000000000000..5f8c8fd4c5deb
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/resource.go
@@ -0,0 +1,1999 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+// Code generated from semantic convention specification. DO NOT EDIT.
+
+package semconv // import "go.opentelemetry.io/otel/semconv/v1.18.0"
+
+import "go.opentelemetry.io/otel/attribute"
+
+// The web browser in which the application represented by the resource is
+// running. The `browser.*` attributes MUST be used only for resources that
+// represent applications running in a web browser (regardless of whether
+// running on a mobile or desktop device).
+const (
+ // BrowserBrandsKey is the attribute Key conforming to the "browser.brands"
+ // semantic conventions. It represents the array of brand name and version
+ // separated by a space
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: ' Not A;Brand 99', 'Chromium 99', 'Chrome 99'
+ // Note: This value is intended to be taken from the [UA client hints
+ // API](https://wicg.github.io/ua-client-hints/#interface)
+ // (`navigator.userAgentData.brands`).
+ BrowserBrandsKey = attribute.Key("browser.brands")
+
+ // BrowserPlatformKey is the attribute Key conforming to the
+ // "browser.platform" semantic conventions. It represents the platform on
+ // which the browser is running
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Windows', 'macOS', 'Android'
+ // Note: This value is intended to be taken from the [UA client hints
+ // API](https://wicg.github.io/ua-client-hints/#interface)
+ // (`navigator.userAgentData.platform`). If unavailable, the legacy
+ // `navigator.platform` API SHOULD NOT be used instead and this attribute
+ // SHOULD be left unset in order for the values to be consistent.
+ // The list of possible values is defined in the [W3C User-Agent Client
+ // Hints
+ // specification](https://wicg.github.io/ua-client-hints/#sec-ch-ua-platform).
+ // Note that some (but not all) of these values can overlap with values in
+ // the [`os.type` and `os.name` attributes](./os.md). However, for
+ // consistency, the values in the `browser.platform` attribute should
+ // capture the exact value that the user agent provides.
+ BrowserPlatformKey = attribute.Key("browser.platform")
+
+ // BrowserMobileKey is the attribute Key conforming to the "browser.mobile"
+ // semantic conventions. It represents a boolean that is true if the
+ // browser is running on a mobile device
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Note: This value is intended to be taken from the [UA client hints
+ // API](https://wicg.github.io/ua-client-hints/#interface)
+ // (`navigator.userAgentData.mobile`). If unavailable, this attribute
+ // SHOULD be left unset.
+ BrowserMobileKey = attribute.Key("browser.mobile")
+
+ // BrowserUserAgentKey is the attribute Key conforming to the
+ // "browser.user_agent" semantic conventions. It represents the full
+ // user-agent string provided by the browser
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
+ // AppleWebKit/537.36 (KHTML, '
+ // 'like Gecko) Chrome/95.0.4638.54 Safari/537.36'
+ // Note: The user-agent value SHOULD be provided only from browsers that do
+ // not have a mechanism to retrieve brands and platform individually from
+ // the User-Agent Client Hints API. To retrieve the value, the legacy
+ // `navigator.userAgent` API can be used.
+ BrowserUserAgentKey = attribute.Key("browser.user_agent")
+
+ // BrowserLanguageKey is the attribute Key conforming to the
+ // "browser.language" semantic conventions. It represents the preferred
+ // language of the user using the browser
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'en', 'en-US', 'fr', 'fr-FR'
+ // Note: This value is intended to be taken from the Navigator API
+ // `navigator.language`.
+ BrowserLanguageKey = attribute.Key("browser.language")
+)
+
+// BrowserBrands returns an attribute KeyValue conforming to the
+// "browser.brands" semantic conventions. It represents the array of brand name
+// and version separated by a space
+func BrowserBrands(val ...string) attribute.KeyValue {
+ return BrowserBrandsKey.StringSlice(val)
+}
+
+// BrowserPlatform returns an attribute KeyValue conforming to the
+// "browser.platform" semantic conventions. It represents the platform on which
+// the browser is running
+func BrowserPlatform(val string) attribute.KeyValue {
+ return BrowserPlatformKey.String(val)
+}
+
+// BrowserMobile returns an attribute KeyValue conforming to the
+// "browser.mobile" semantic conventions. It represents a boolean that is true
+// if the browser is running on a mobile device
+func BrowserMobile(val bool) attribute.KeyValue {
+ return BrowserMobileKey.Bool(val)
+}
+
+// BrowserUserAgent returns an attribute KeyValue conforming to the
+// "browser.user_agent" semantic conventions. It represents the full user-agent
+// string provided by the browser
+func BrowserUserAgent(val string) attribute.KeyValue {
+ return BrowserUserAgentKey.String(val)
+}
+
+// BrowserLanguage returns an attribute KeyValue conforming to the
+// "browser.language" semantic conventions. It represents the preferred
+// language of the user using the browser
+func BrowserLanguage(val string) attribute.KeyValue {
+ return BrowserLanguageKey.String(val)
+}
+
+// A cloud environment (e.g. GCP, Azure, AWS)
+const (
+ // CloudProviderKey is the attribute Key conforming to the "cloud.provider"
+ // semantic conventions. It represents the name of the cloud provider.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ CloudProviderKey = attribute.Key("cloud.provider")
+
+ // CloudAccountIDKey is the attribute Key conforming to the
+ // "cloud.account.id" semantic conventions. It represents the cloud account
+ // ID the resource is assigned to.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '111111111111', 'opentelemetry'
+ CloudAccountIDKey = attribute.Key("cloud.account.id")
+
+ // CloudRegionKey is the attribute Key conforming to the "cloud.region"
+ // semantic conventions. It represents the geographical region the resource
+ // is running.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'us-central1', 'us-east-1'
+ // Note: Refer to your provider's docs to see the available regions, for
+ // example [Alibaba Cloud
+ // regions](https://www.alibabacloud.com/help/doc-detail/40654.htm), [AWS
+ // regions](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/),
+ // [Azure
+ // regions](https://azure.microsoft.com/en-us/global-infrastructure/geographies/),
+ // [Google Cloud regions](https://cloud.google.com/about/locations), or
+ // [Tencent Cloud
+ // regions](https://intl.cloud.tencent.com/document/product/213/6091).
+ CloudRegionKey = attribute.Key("cloud.region")
+
+ // CloudAvailabilityZoneKey is the attribute Key conforming to the
+ // "cloud.availability_zone" semantic conventions. It represents the cloud
+ // regions often have multiple, isolated locations known as zones to
+ // increase availability. Availability zone represents the zone where the
+ // resource is running.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'us-east-1c'
+ // Note: Availability zones are called "zones" on Alibaba Cloud and Google
+ // Cloud.
+ CloudAvailabilityZoneKey = attribute.Key("cloud.availability_zone")
+
+ // CloudPlatformKey is the attribute Key conforming to the "cloud.platform"
+ // semantic conventions. It represents the cloud platform in use.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Note: The prefix of the service SHOULD match the one specified in
+ // `cloud.provider`.
+ CloudPlatformKey = attribute.Key("cloud.platform")
+)
+
+var (
+ // Alibaba Cloud
+ CloudProviderAlibabaCloud = CloudProviderKey.String("alibaba_cloud")
+ // Amazon Web Services
+ CloudProviderAWS = CloudProviderKey.String("aws")
+ // Microsoft Azure
+ CloudProviderAzure = CloudProviderKey.String("azure")
+ // Google Cloud Platform
+ CloudProviderGCP = CloudProviderKey.String("gcp")
+ // IBM Cloud
+ CloudProviderIbmCloud = CloudProviderKey.String("ibm_cloud")
+ // Tencent Cloud
+ CloudProviderTencentCloud = CloudProviderKey.String("tencent_cloud")
+)
+
+var (
+ // Alibaba Cloud Elastic Compute Service
+ CloudPlatformAlibabaCloudECS = CloudPlatformKey.String("alibaba_cloud_ecs")
+ // Alibaba Cloud Function Compute
+ CloudPlatformAlibabaCloudFc = CloudPlatformKey.String("alibaba_cloud_fc")
+ // Red Hat OpenShift on Alibaba Cloud
+ CloudPlatformAlibabaCloudOpenshift = CloudPlatformKey.String("alibaba_cloud_openshift")
+ // AWS Elastic Compute Cloud
+ CloudPlatformAWSEC2 = CloudPlatformKey.String("aws_ec2")
+ // AWS Elastic Container Service
+ CloudPlatformAWSECS = CloudPlatformKey.String("aws_ecs")
+ // AWS Elastic Kubernetes Service
+ CloudPlatformAWSEKS = CloudPlatformKey.String("aws_eks")
+ // AWS Lambda
+ CloudPlatformAWSLambda = CloudPlatformKey.String("aws_lambda")
+ // AWS Elastic Beanstalk
+ CloudPlatformAWSElasticBeanstalk = CloudPlatformKey.String("aws_elastic_beanstalk")
+ // AWS App Runner
+ CloudPlatformAWSAppRunner = CloudPlatformKey.String("aws_app_runner")
+ // Red Hat OpenShift on AWS (ROSA)
+ CloudPlatformAWSOpenshift = CloudPlatformKey.String("aws_openshift")
+ // Azure Virtual Machines
+ CloudPlatformAzureVM = CloudPlatformKey.String("azure_vm")
+ // Azure Container Instances
+ CloudPlatformAzureContainerInstances = CloudPlatformKey.String("azure_container_instances")
+ // Azure Kubernetes Service
+ CloudPlatformAzureAKS = CloudPlatformKey.String("azure_aks")
+ // Azure Functions
+ CloudPlatformAzureFunctions = CloudPlatformKey.String("azure_functions")
+ // Azure App Service
+ CloudPlatformAzureAppService = CloudPlatformKey.String("azure_app_service")
+ // Azure Red Hat OpenShift
+ CloudPlatformAzureOpenshift = CloudPlatformKey.String("azure_openshift")
+ // Google Cloud Compute Engine (GCE)
+ CloudPlatformGCPComputeEngine = CloudPlatformKey.String("gcp_compute_engine")
+ // Google Cloud Run
+ CloudPlatformGCPCloudRun = CloudPlatformKey.String("gcp_cloud_run")
+ // Google Cloud Kubernetes Engine (GKE)
+ CloudPlatformGCPKubernetesEngine = CloudPlatformKey.String("gcp_kubernetes_engine")
+ // Google Cloud Functions (GCF)
+ CloudPlatformGCPCloudFunctions = CloudPlatformKey.String("gcp_cloud_functions")
+ // Google Cloud App Engine (GAE)
+ CloudPlatformGCPAppEngine = CloudPlatformKey.String("gcp_app_engine")
+ // Red Hat OpenShift on Google Cloud
+ CloudPlatformGCPOpenshift = CloudPlatformKey.String("gcp_openshift")
+ // Red Hat OpenShift on IBM Cloud
+ CloudPlatformIbmCloudOpenshift = CloudPlatformKey.String("ibm_cloud_openshift")
+ // Tencent Cloud Cloud Virtual Machine (CVM)
+ CloudPlatformTencentCloudCvm = CloudPlatformKey.String("tencent_cloud_cvm")
+ // Tencent Cloud Elastic Kubernetes Service (EKS)
+ CloudPlatformTencentCloudEKS = CloudPlatformKey.String("tencent_cloud_eks")
+ // Tencent Cloud Serverless Cloud Function (SCF)
+ CloudPlatformTencentCloudScf = CloudPlatformKey.String("tencent_cloud_scf")
+)
+
+// CloudAccountID returns an attribute KeyValue conforming to the
+// "cloud.account.id" semantic conventions. It represents the cloud account ID
+// the resource is assigned to.
+func CloudAccountID(val string) attribute.KeyValue {
+ return CloudAccountIDKey.String(val)
+}
+
+// CloudRegion returns an attribute KeyValue conforming to the
+// "cloud.region" semantic conventions. It represents the geographical region
+// the resource is running.
+func CloudRegion(val string) attribute.KeyValue {
+ return CloudRegionKey.String(val)
+}
+
+// CloudAvailabilityZone returns an attribute KeyValue conforming to the
+// "cloud.availability_zone" semantic conventions. It represents the cloud
+// regions often have multiple, isolated locations known as zones to increase
+// availability. Availability zone represents the zone where the resource is
+// running.
+func CloudAvailabilityZone(val string) attribute.KeyValue {
+ return CloudAvailabilityZoneKey.String(val)
+}
+
+// Resources used by AWS Elastic Container Service (ECS).
+const (
+ // AWSECSContainerARNKey is the attribute Key conforming to the
+ // "aws.ecs.container.arn" semantic conventions. It represents the Amazon
+ // Resource Name (ARN) of an [ECS container
+ // instance](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples:
+ // 'arn:aws:ecs:us-west-1:123456789123:container/32624152-9086-4f0e-acae-1a75b14fe4d9'
+ AWSECSContainerARNKey = attribute.Key("aws.ecs.container.arn")
+
+ // AWSECSClusterARNKey is the attribute Key conforming to the
+ // "aws.ecs.cluster.arn" semantic conventions. It represents the ARN of an
+ // [ECS
+ // cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'arn:aws:ecs:us-west-2:123456789123:cluster/my-cluster'
+ AWSECSClusterARNKey = attribute.Key("aws.ecs.cluster.arn")
+
+ // AWSECSLaunchtypeKey is the attribute Key conforming to the
+ // "aws.ecs.launchtype" semantic conventions. It represents the [launch
+ // type](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html)
+ // for an ECS task.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ AWSECSLaunchtypeKey = attribute.Key("aws.ecs.launchtype")
+
+ // AWSECSTaskARNKey is the attribute Key conforming to the
+ // "aws.ecs.task.arn" semantic conventions. It represents the ARN of an
+ // [ECS task
+ // definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples:
+ // 'arn:aws:ecs:us-west-1:123456789123:task/10838bed-421f-43ef-870a-f43feacbbb5b'
+ AWSECSTaskARNKey = attribute.Key("aws.ecs.task.arn")
+
+ // AWSECSTaskFamilyKey is the attribute Key conforming to the
+ // "aws.ecs.task.family" semantic conventions. It represents the task
+ // definition family this task definition is a member of.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry-family'
+ AWSECSTaskFamilyKey = attribute.Key("aws.ecs.task.family")
+
+ // AWSECSTaskRevisionKey is the attribute Key conforming to the
+ // "aws.ecs.task.revision" semantic conventions. It represents the revision
+ // for this task definition.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '8', '26'
+ AWSECSTaskRevisionKey = attribute.Key("aws.ecs.task.revision")
+)
+
+var (
+ // ec2
+ AWSECSLaunchtypeEC2 = AWSECSLaunchtypeKey.String("ec2")
+ // fargate
+ AWSECSLaunchtypeFargate = AWSECSLaunchtypeKey.String("fargate")
+)
+
+// AWSECSContainerARN returns an attribute KeyValue conforming to the
+// "aws.ecs.container.arn" semantic conventions. It represents the Amazon
+// Resource Name (ARN) of an [ECS container
+// instance](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html).
+func AWSECSContainerARN(val string) attribute.KeyValue {
+ return AWSECSContainerARNKey.String(val)
+}
+
+// AWSECSClusterARN returns an attribute KeyValue conforming to the
+// "aws.ecs.cluster.arn" semantic conventions. It represents the ARN of an [ECS
+// cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/clusters.html).
+func AWSECSClusterARN(val string) attribute.KeyValue {
+ return AWSECSClusterARNKey.String(val)
+}
+
+// AWSECSTaskARN returns an attribute KeyValue conforming to the
+// "aws.ecs.task.arn" semantic conventions. It represents the ARN of an [ECS
+// task
+// definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html).
+func AWSECSTaskARN(val string) attribute.KeyValue {
+ return AWSECSTaskARNKey.String(val)
+}
+
+// AWSECSTaskFamily returns an attribute KeyValue conforming to the
+// "aws.ecs.task.family" semantic conventions. It represents the task
+// definition family this task definition is a member of.
+func AWSECSTaskFamily(val string) attribute.KeyValue {
+ return AWSECSTaskFamilyKey.String(val)
+}
+
+// AWSECSTaskRevision returns an attribute KeyValue conforming to the
+// "aws.ecs.task.revision" semantic conventions. It represents the revision for
+// this task definition.
+func AWSECSTaskRevision(val string) attribute.KeyValue {
+ return AWSECSTaskRevisionKey.String(val)
+}
+
+// Resources used by AWS Elastic Kubernetes Service (EKS).
+const (
+ // AWSEKSClusterARNKey is the attribute Key conforming to the
+ // "aws.eks.cluster.arn" semantic conventions. It represents the ARN of an
+ // EKS cluster.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'arn:aws:ecs:us-west-2:123456789123:cluster/my-cluster'
+ AWSEKSClusterARNKey = attribute.Key("aws.eks.cluster.arn")
+)
+
+// AWSEKSClusterARN returns an attribute KeyValue conforming to the
+// "aws.eks.cluster.arn" semantic conventions. It represents the ARN of an EKS
+// cluster.
+func AWSEKSClusterARN(val string) attribute.KeyValue {
+ return AWSEKSClusterARNKey.String(val)
+}
+
+// Resources specific to Amazon Web Services.
+const (
+ // AWSLogGroupNamesKey is the attribute Key conforming to the
+ // "aws.log.group.names" semantic conventions. It represents the name(s) of
+ // the AWS log group(s) an application is writing to.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '/aws/lambda/my-function', 'opentelemetry-service'
+ // Note: Multiple log groups must be supported for cases like
+ // multi-container applications, where a single application has sidecar
+ // containers, and each write to their own log group.
+ AWSLogGroupNamesKey = attribute.Key("aws.log.group.names")
+
+ // AWSLogGroupARNsKey is the attribute Key conforming to the
+ // "aws.log.group.arns" semantic conventions. It represents the Amazon
+ // Resource Name(s) (ARN) of the AWS log group(s).
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples:
+ // 'arn:aws:logs:us-west-1:123456789012:log-group:/aws/my/group:*'
+ // Note: See the [log group ARN format
+ // documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html#CWL_ARN_Format).
+ AWSLogGroupARNsKey = attribute.Key("aws.log.group.arns")
+
+ // AWSLogStreamNamesKey is the attribute Key conforming to the
+ // "aws.log.stream.names" semantic conventions. It represents the name(s)
+ // of the AWS log stream(s) an application is writing to.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'logs/main/10838bed-421f-43ef-870a-f43feacbbb5b'
+ AWSLogStreamNamesKey = attribute.Key("aws.log.stream.names")
+
+ // AWSLogStreamARNsKey is the attribute Key conforming to the
+ // "aws.log.stream.arns" semantic conventions. It represents the ARN(s) of
+ // the AWS log stream(s).
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples:
+ // 'arn:aws:logs:us-west-1:123456789012:log-group:/aws/my/group:log-stream:logs/main/10838bed-421f-43ef-870a-f43feacbbb5b'
+ // Note: See the [log stream ARN format
+ // documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/iam-access-control-overview-cwl.html#CWL_ARN_Format).
+ // One log group can contain several log streams, so these ARNs necessarily
+ // identify both a log group and a log stream.
+ AWSLogStreamARNsKey = attribute.Key("aws.log.stream.arns")
+)
+
+// AWSLogGroupNames returns an attribute KeyValue conforming to the
+// "aws.log.group.names" semantic conventions. It represents the name(s) of the
+// AWS log group(s) an application is writing to.
+func AWSLogGroupNames(val ...string) attribute.KeyValue {
+ return AWSLogGroupNamesKey.StringSlice(val)
+}
+
+// AWSLogGroupARNs returns an attribute KeyValue conforming to the
+// "aws.log.group.arns" semantic conventions. It represents the Amazon Resource
+// Name(s) (ARN) of the AWS log group(s).
+func AWSLogGroupARNs(val ...string) attribute.KeyValue {
+ return AWSLogGroupARNsKey.StringSlice(val)
+}
+
+// AWSLogStreamNames returns an attribute KeyValue conforming to the
+// "aws.log.stream.names" semantic conventions. It represents the name(s) of
+// the AWS log stream(s) an application is writing to.
+func AWSLogStreamNames(val ...string) attribute.KeyValue {
+ return AWSLogStreamNamesKey.StringSlice(val)
+}
+
+// AWSLogStreamARNs returns an attribute KeyValue conforming to the
+// "aws.log.stream.arns" semantic conventions. It represents the ARN(s) of the
+// AWS log stream(s).
+func AWSLogStreamARNs(val ...string) attribute.KeyValue {
+ return AWSLogStreamARNsKey.StringSlice(val)
+}
+
+// A container instance.
+const (
+ // ContainerNameKey is the attribute Key conforming to the "container.name"
+ // semantic conventions. It represents the container name used by container
+ // runtime.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry-autoconf'
+ ContainerNameKey = attribute.Key("container.name")
+
+ // ContainerIDKey is the attribute Key conforming to the "container.id"
+ // semantic conventions. It represents the container ID. Usually a UUID, as
+ // for example used to [identify Docker
+ // containers](https://docs.docker.com/engine/reference/run/#container-identification).
+ // The UUID might be abbreviated.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'a3bf90e006b2'
+ ContainerIDKey = attribute.Key("container.id")
+
+ // ContainerRuntimeKey is the attribute Key conforming to the
+ // "container.runtime" semantic conventions. It represents the container
+ // runtime managing this container.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'docker', 'containerd', 'rkt'
+ ContainerRuntimeKey = attribute.Key("container.runtime")
+
+ // ContainerImageNameKey is the attribute Key conforming to the
+ // "container.image.name" semantic conventions. It represents the name of
+ // the image the container was built on.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'gcr.io/opentelemetry/operator'
+ ContainerImageNameKey = attribute.Key("container.image.name")
+
+ // ContainerImageTagKey is the attribute Key conforming to the
+ // "container.image.tag" semantic conventions. It represents the container
+ // image tag.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '0.1'
+ ContainerImageTagKey = attribute.Key("container.image.tag")
+)
+
+// ContainerName returns an attribute KeyValue conforming to the
+// "container.name" semantic conventions. It represents the container name used
+// by container runtime.
+func ContainerName(val string) attribute.KeyValue {
+ return ContainerNameKey.String(val)
+}
+
+// ContainerID returns an attribute KeyValue conforming to the
+// "container.id" semantic conventions. It represents the container ID. Usually
+// a UUID, as for example used to [identify Docker
+// containers](https://docs.docker.com/engine/reference/run/#container-identification).
+// The UUID might be abbreviated.
+func ContainerID(val string) attribute.KeyValue {
+ return ContainerIDKey.String(val)
+}
+
+// ContainerRuntime returns an attribute KeyValue conforming to the
+// "container.runtime" semantic conventions. It represents the container
+// runtime managing this container.
+func ContainerRuntime(val string) attribute.KeyValue {
+ return ContainerRuntimeKey.String(val)
+}
+
+// ContainerImageName returns an attribute KeyValue conforming to the
+// "container.image.name" semantic conventions. It represents the name of the
+// image the container was built on.
+func ContainerImageName(val string) attribute.KeyValue {
+ return ContainerImageNameKey.String(val)
+}
+
+// ContainerImageTag returns an attribute KeyValue conforming to the
+// "container.image.tag" semantic conventions. It represents the container
+// image tag.
+func ContainerImageTag(val string) attribute.KeyValue {
+ return ContainerImageTagKey.String(val)
+}
+
+// The software deployment.
+const (
+ // DeploymentEnvironmentKey is the attribute Key conforming to the
+ // "deployment.environment" semantic conventions. It represents the name of
+ // the [deployment
+ // environment](https://en.wikipedia.org/wiki/Deployment_environment) (aka
+ // deployment tier).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'staging', 'production'
+ DeploymentEnvironmentKey = attribute.Key("deployment.environment")
+)
+
+// DeploymentEnvironment returns an attribute KeyValue conforming to the
+// "deployment.environment" semantic conventions. It represents the name of the
+// [deployment
+// environment](https://en.wikipedia.org/wiki/Deployment_environment) (aka
+// deployment tier).
+func DeploymentEnvironment(val string) attribute.KeyValue {
+ return DeploymentEnvironmentKey.String(val)
+}
+
+// The device on which the process represented by this resource is running.
+const (
+ // DeviceIDKey is the attribute Key conforming to the "device.id" semantic
+ // conventions. It represents a unique identifier representing the device
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '2ab2916d-a51f-4ac8-80ee-45ac31a28092'
+ // Note: The device identifier MUST only be defined using the values
+ // outlined below. This value is not an advertising identifier and MUST NOT
+ // be used as such. On iOS (Swift or Objective-C), this value MUST be equal
+ // to the [vendor
+ // identifier](https://developer.apple.com/documentation/uikit/uidevice/1620059-identifierforvendor).
+ // On Android (Java or Kotlin), this value MUST be equal to the Firebase
+ // Installation ID or a globally unique UUID which is persisted across
+ // sessions in your application. More information can be found
+ // [here](https://developer.android.com/training/articles/user-data-ids) on
+ // best practices and exact implementation details. Caution should be taken
+ // when storing personal data or anything which can identify a user. GDPR
+ // and data protection laws may apply, ensure you do your own due
+ // diligence.
+ DeviceIDKey = attribute.Key("device.id")
+
+ // DeviceModelIdentifierKey is the attribute Key conforming to the
+ // "device.model.identifier" semantic conventions. It represents the model
+ // identifier for the device
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'iPhone3,4', 'SM-G920F'
+ // Note: It's recommended this value represents a machine readable version
+ // of the model identifier rather than the market or consumer-friendly name
+ // of the device.
+ DeviceModelIdentifierKey = attribute.Key("device.model.identifier")
+
+ // DeviceModelNameKey is the attribute Key conforming to the
+ // "device.model.name" semantic conventions. It represents the marketing
+ // name for the device model
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'iPhone 6s Plus', 'Samsung Galaxy S6'
+ // Note: It's recommended this value represents a human readable version of
+ // the device model rather than a machine readable alternative.
+ DeviceModelNameKey = attribute.Key("device.model.name")
+
+ // DeviceManufacturerKey is the attribute Key conforming to the
+ // "device.manufacturer" semantic conventions. It represents the name of
+ // the device manufacturer
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Apple', 'Samsung'
+ // Note: The Android OS provides this field via
+ // [Build](https://developer.android.com/reference/android/os/Build#MANUFACTURER).
+ // iOS apps SHOULD hardcode the value `Apple`.
+ DeviceManufacturerKey = attribute.Key("device.manufacturer")
+)
+
+// DeviceID returns an attribute KeyValue conforming to the "device.id"
+// semantic conventions. It represents a unique identifier representing the
+// device
+func DeviceID(val string) attribute.KeyValue {
+ return DeviceIDKey.String(val)
+}
+
+// DeviceModelIdentifier returns an attribute KeyValue conforming to the
+// "device.model.identifier" semantic conventions. It represents the model
+// identifier for the device
+func DeviceModelIdentifier(val string) attribute.KeyValue {
+ return DeviceModelIdentifierKey.String(val)
+}
+
+// DeviceModelName returns an attribute KeyValue conforming to the
+// "device.model.name" semantic conventions. It represents the marketing name
+// for the device model
+func DeviceModelName(val string) attribute.KeyValue {
+ return DeviceModelNameKey.String(val)
+}
+
+// DeviceManufacturer returns an attribute KeyValue conforming to the
+// "device.manufacturer" semantic conventions. It represents the name of the
+// device manufacturer
+func DeviceManufacturer(val string) attribute.KeyValue {
+ return DeviceManufacturerKey.String(val)
+}
+
+// A serverless instance.
+const (
+ // FaaSNameKey is the attribute Key conforming to the "faas.name" semantic
+ // conventions. It represents the name of the single function that this
+ // runtime instance executes.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'my-function', 'myazurefunctionapp/some-function-name'
+ // Note: This is the name of the function as configured/deployed on the
+ // FaaS
+ // platform and is usually different from the name of the callback
+ // function (which may be stored in the
+ // [`code.namespace`/`code.function`](../../trace/semantic_conventions/span-general.md#source-code-attributes)
+ // span attributes).
+ //
+ // For some cloud providers, the above definition is ambiguous. The
+ // following
+ // definition of function name MUST be used for this attribute
+ // (and consequently the span name) for the listed cloud
+ // providers/products:
+ //
+ // * **Azure:** The full name `/`, i.e., function app name
+ // followed by a forward slash followed by the function name (this form
+ // can also be seen in the resource JSON for the function).
+ // This means that a span attribute MUST be used, as an Azure function
+ // app can host multiple functions that would usually share
+ // a TracerProvider (see also the `faas.id` attribute).
+ FaaSNameKey = attribute.Key("faas.name")
+
+ // FaaSIDKey is the attribute Key conforming to the "faas.id" semantic
+ // conventions. It represents the unique ID of the single function that
+ // this runtime instance executes.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'arn:aws:lambda:us-west-2:123456789012:function:my-function'
+ // Note: On some cloud providers, it may not be possible to determine the
+ // full ID at startup,
+ // so consider setting `faas.id` as a span attribute instead.
+ //
+ // The exact value to use for `faas.id` depends on the cloud provider:
+ //
+ // * **AWS Lambda:** The function
+ // [ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
+ // Take care not to use the "invoked ARN" directly but replace any
+ // [alias
+ // suffix](https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html)
+ // with the resolved function version, as the same runtime instance may
+ // be invokable with
+ // multiple different aliases.
+ // * **GCP:** The [URI of the
+ // resource](https://cloud.google.com/iam/docs/full-resource-names)
+ // * **Azure:** The [Fully Qualified Resource
+ // ID](https://docs.microsoft.com/en-us/rest/api/resources/resources/get-by-id)
+ // of the invoked function,
+ // *not* the function app, having the form
+ // `/subscriptions//resourceGroups//providers/Microsoft.Web/sites//functions/`.
+ // This means that a span attribute MUST be used, as an Azure function
+ // app can host multiple functions that would usually share
+ // a TracerProvider.
+ FaaSIDKey = attribute.Key("faas.id")
+
+ // FaaSVersionKey is the attribute Key conforming to the "faas.version"
+ // semantic conventions. It represents the immutable version of the
+ // function being executed.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '26', 'pinkfroid-00002'
+ // Note: Depending on the cloud provider and platform, use:
+ //
+ // * **AWS Lambda:** The [function
+ // version](https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html)
+ // (an integer represented as a decimal string).
+ // * **Google Cloud Run:** The
+ // [revision](https://cloud.google.com/run/docs/managing/revisions)
+ // (i.e., the function name plus the revision suffix).
+ // * **Google Cloud Functions:** The value of the
+ // [`K_REVISION` environment
+ // variable](https://cloud.google.com/functions/docs/env-var#runtime_environment_variables_set_automatically).
+ // * **Azure Functions:** Not applicable. Do not set this attribute.
+ FaaSVersionKey = attribute.Key("faas.version")
+
+ // FaaSInstanceKey is the attribute Key conforming to the "faas.instance"
+ // semantic conventions. It represents the execution environment ID as a
+ // string, that will be potentially reused for other invocations to the
+ // same function/function version.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '2021/06/28/[$LATEST]2f399eb14537447da05ab2a2e39309de'
+ // Note: * **AWS Lambda:** Use the (full) log stream name.
+ FaaSInstanceKey = attribute.Key("faas.instance")
+
+ // FaaSMaxMemoryKey is the attribute Key conforming to the
+ // "faas.max_memory" semantic conventions. It represents the amount of
+ // memory available to the serverless function in MiB.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 128
+ // Note: It's recommended to set this attribute since e.g. too little
+ // memory can easily stop a Java AWS Lambda function from working
+ // correctly. On AWS Lambda, the environment variable
+ // `AWS_LAMBDA_FUNCTION_MEMORY_SIZE` provides this information.
+ FaaSMaxMemoryKey = attribute.Key("faas.max_memory")
+)
+
+// FaaSName returns an attribute KeyValue conforming to the "faas.name"
+// semantic conventions. It represents the name of the single function that
+// this runtime instance executes.
+func FaaSName(val string) attribute.KeyValue {
+ return FaaSNameKey.String(val)
+}
+
+// FaaSID returns an attribute KeyValue conforming to the "faas.id" semantic
+// conventions. It represents the unique ID of the single function that this
+// runtime instance executes.
+func FaaSID(val string) attribute.KeyValue {
+ return FaaSIDKey.String(val)
+}
+
+// FaaSVersion returns an attribute KeyValue conforming to the
+// "faas.version" semantic conventions. It represents the immutable version of
+// the function being executed.
+func FaaSVersion(val string) attribute.KeyValue {
+ return FaaSVersionKey.String(val)
+}
+
+// FaaSInstance returns an attribute KeyValue conforming to the
+// "faas.instance" semantic conventions. It represents the execution
+// environment ID as a string, that will be potentially reused for other
+// invocations to the same function/function version.
+func FaaSInstance(val string) attribute.KeyValue {
+ return FaaSInstanceKey.String(val)
+}
+
+// FaaSMaxMemory returns an attribute KeyValue conforming to the
+// "faas.max_memory" semantic conventions. It represents the amount of memory
+// available to the serverless function in MiB.
+func FaaSMaxMemory(val int) attribute.KeyValue {
+ return FaaSMaxMemoryKey.Int(val)
+}
+
+// A host is defined as a general computing instance.
+const (
+ // HostIDKey is the attribute Key conforming to the "host.id" semantic
+ // conventions. It represents the unique host ID. For Cloud, this must be
+ // the instance_id assigned by the cloud provider. For non-containerized
+ // Linux systems, the `machine-id` located in `/etc/machine-id` or
+ // `/var/lib/dbus/machine-id` may be used.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'fdbf79e8af94cb7f9e8df36789187052'
+ HostIDKey = attribute.Key("host.id")
+
+ // HostNameKey is the attribute Key conforming to the "host.name" semantic
+ // conventions. It represents the name of the host. On Unix systems, it may
+ // contain what the hostname command returns, or the fully qualified
+ // hostname, or another name specified by the user.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry-test'
+ HostNameKey = attribute.Key("host.name")
+
+ // HostTypeKey is the attribute Key conforming to the "host.type" semantic
+ // conventions. It represents the type of host. For Cloud, this must be the
+ // machine type.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'n1-standard-1'
+ HostTypeKey = attribute.Key("host.type")
+
+ // HostArchKey is the attribute Key conforming to the "host.arch" semantic
+ // conventions. It represents the CPU architecture the host system is
+ // running on.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ HostArchKey = attribute.Key("host.arch")
+
+ // HostImageNameKey is the attribute Key conforming to the
+ // "host.image.name" semantic conventions. It represents the name of the VM
+ // image or OS install the host was instantiated from.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'infra-ami-eks-worker-node-7d4ec78312', 'CentOS-8-x86_64-1905'
+ HostImageNameKey = attribute.Key("host.image.name")
+
+ // HostImageIDKey is the attribute Key conforming to the "host.image.id"
+ // semantic conventions. It represents the vM image ID. For Cloud, this
+ // value is from the provider.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'ami-07b06b442921831e5'
+ HostImageIDKey = attribute.Key("host.image.id")
+
+ // HostImageVersionKey is the attribute Key conforming to the
+ // "host.image.version" semantic conventions. It represents the version
+ // string of the VM image as defined in [Version
+ // Attributes](README.md#version-attributes).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '0.1'
+ HostImageVersionKey = attribute.Key("host.image.version")
+)
+
+var (
+ // AMD64
+ HostArchAMD64 = HostArchKey.String("amd64")
+ // ARM32
+ HostArchARM32 = HostArchKey.String("arm32")
+ // ARM64
+ HostArchARM64 = HostArchKey.String("arm64")
+ // Itanium
+ HostArchIA64 = HostArchKey.String("ia64")
+ // 32-bit PowerPC
+ HostArchPPC32 = HostArchKey.String("ppc32")
+ // 64-bit PowerPC
+ HostArchPPC64 = HostArchKey.String("ppc64")
+ // IBM z/Architecture
+ HostArchS390x = HostArchKey.String("s390x")
+ // 32-bit x86
+ HostArchX86 = HostArchKey.String("x86")
+)
+
+// HostID returns an attribute KeyValue conforming to the "host.id" semantic
+// conventions. It represents the unique host ID. For Cloud, this must be the
+// instance_id assigned by the cloud provider. For non-containerized Linux
+// systems, the `machine-id` located in `/etc/machine-id` or
+// `/var/lib/dbus/machine-id` may be used.
+func HostID(val string) attribute.KeyValue {
+ return HostIDKey.String(val)
+}
+
+// HostName returns an attribute KeyValue conforming to the "host.name"
+// semantic conventions. It represents the name of the host. On Unix systems,
+// it may contain what the hostname command returns, or the fully qualified
+// hostname, or another name specified by the user.
+func HostName(val string) attribute.KeyValue {
+ return HostNameKey.String(val)
+}
+
+// HostType returns an attribute KeyValue conforming to the "host.type"
+// semantic conventions. It represents the type of host. For Cloud, this must
+// be the machine type.
+func HostType(val string) attribute.KeyValue {
+ return HostTypeKey.String(val)
+}
+
+// HostImageName returns an attribute KeyValue conforming to the
+// "host.image.name" semantic conventions. It represents the name of the VM
+// image or OS install the host was instantiated from.
+func HostImageName(val string) attribute.KeyValue {
+ return HostImageNameKey.String(val)
+}
+
+// HostImageID returns an attribute KeyValue conforming to the
+// "host.image.id" semantic conventions. It represents the vM image ID. For
+// Cloud, this value is from the provider.
+func HostImageID(val string) attribute.KeyValue {
+ return HostImageIDKey.String(val)
+}
+
+// HostImageVersion returns an attribute KeyValue conforming to the
+// "host.image.version" semantic conventions. It represents the version string
+// of the VM image as defined in [Version
+// Attributes](README.md#version-attributes).
+func HostImageVersion(val string) attribute.KeyValue {
+ return HostImageVersionKey.String(val)
+}
+
+// A Kubernetes Cluster.
+const (
+ // K8SClusterNameKey is the attribute Key conforming to the
+ // "k8s.cluster.name" semantic conventions. It represents the name of the
+ // cluster.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry-cluster'
+ K8SClusterNameKey = attribute.Key("k8s.cluster.name")
+)
+
+// K8SClusterName returns an attribute KeyValue conforming to the
+// "k8s.cluster.name" semantic conventions. It represents the name of the
+// cluster.
+func K8SClusterName(val string) attribute.KeyValue {
+ return K8SClusterNameKey.String(val)
+}
+
+// A Kubernetes Node object.
+const (
+ // K8SNodeNameKey is the attribute Key conforming to the "k8s.node.name"
+ // semantic conventions. It represents the name of the Node.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'node-1'
+ K8SNodeNameKey = attribute.Key("k8s.node.name")
+
+ // K8SNodeUIDKey is the attribute Key conforming to the "k8s.node.uid"
+ // semantic conventions. It represents the UID of the Node.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '1eb3a0c6-0477-4080-a9cb-0cb7db65c6a2'
+ K8SNodeUIDKey = attribute.Key("k8s.node.uid")
+)
+
+// K8SNodeName returns an attribute KeyValue conforming to the
+// "k8s.node.name" semantic conventions. It represents the name of the Node.
+func K8SNodeName(val string) attribute.KeyValue {
+ return K8SNodeNameKey.String(val)
+}
+
+// K8SNodeUID returns an attribute KeyValue conforming to the "k8s.node.uid"
+// semantic conventions. It represents the UID of the Node.
+func K8SNodeUID(val string) attribute.KeyValue {
+ return K8SNodeUIDKey.String(val)
+}
+
+// A Kubernetes Namespace.
+const (
+ // K8SNamespaceNameKey is the attribute Key conforming to the
+ // "k8s.namespace.name" semantic conventions. It represents the name of the
+ // namespace that the pod is running in.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'default'
+ K8SNamespaceNameKey = attribute.Key("k8s.namespace.name")
+)
+
+// K8SNamespaceName returns an attribute KeyValue conforming to the
+// "k8s.namespace.name" semantic conventions. It represents the name of the
+// namespace that the pod is running in.
+func K8SNamespaceName(val string) attribute.KeyValue {
+ return K8SNamespaceNameKey.String(val)
+}
+
+// A Kubernetes Pod object.
+const (
+ // K8SPodUIDKey is the attribute Key conforming to the "k8s.pod.uid"
+ // semantic conventions. It represents the UID of the Pod.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '275ecb36-5aa8-4c2a-9c47-d8bb681b9aff'
+ K8SPodUIDKey = attribute.Key("k8s.pod.uid")
+
+ // K8SPodNameKey is the attribute Key conforming to the "k8s.pod.name"
+ // semantic conventions. It represents the name of the Pod.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry-pod-autoconf'
+ K8SPodNameKey = attribute.Key("k8s.pod.name")
+)
+
+// K8SPodUID returns an attribute KeyValue conforming to the "k8s.pod.uid"
+// semantic conventions. It represents the UID of the Pod.
+func K8SPodUID(val string) attribute.KeyValue {
+ return K8SPodUIDKey.String(val)
+}
+
+// K8SPodName returns an attribute KeyValue conforming to the "k8s.pod.name"
+// semantic conventions. It represents the name of the Pod.
+func K8SPodName(val string) attribute.KeyValue {
+ return K8SPodNameKey.String(val)
+}
+
+// A container in a
+// [PodTemplate](https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates).
+const (
+ // K8SContainerNameKey is the attribute Key conforming to the
+ // "k8s.container.name" semantic conventions. It represents the name of the
+ // Container from Pod specification, must be unique within a Pod. Container
+ // runtime usually uses different globally unique name (`container.name`).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'redis'
+ K8SContainerNameKey = attribute.Key("k8s.container.name")
+
+ // K8SContainerRestartCountKey is the attribute Key conforming to the
+ // "k8s.container.restart_count" semantic conventions. It represents the
+ // number of times the container was restarted. This attribute can be used
+ // to identify a particular container (running or stopped) within a
+ // container spec.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 0, 2
+ K8SContainerRestartCountKey = attribute.Key("k8s.container.restart_count")
+)
+
+// K8SContainerName returns an attribute KeyValue conforming to the
+// "k8s.container.name" semantic conventions. It represents the name of the
+// Container from Pod specification, must be unique within a Pod. Container
+// runtime usually uses different globally unique name (`container.name`).
+func K8SContainerName(val string) attribute.KeyValue {
+ return K8SContainerNameKey.String(val)
+}
+
+// K8SContainerRestartCount returns an attribute KeyValue conforming to the
+// "k8s.container.restart_count" semantic conventions. It represents the number
+// of times the container was restarted. This attribute can be used to identify
+// a particular container (running or stopped) within a container spec.
+func K8SContainerRestartCount(val int) attribute.KeyValue {
+ return K8SContainerRestartCountKey.Int(val)
+}
+
+// A Kubernetes ReplicaSet object.
+const (
+ // K8SReplicaSetUIDKey is the attribute Key conforming to the
+ // "k8s.replicaset.uid" semantic conventions. It represents the UID of the
+ // ReplicaSet.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '275ecb36-5aa8-4c2a-9c47-d8bb681b9aff'
+ K8SReplicaSetUIDKey = attribute.Key("k8s.replicaset.uid")
+
+ // K8SReplicaSetNameKey is the attribute Key conforming to the
+ // "k8s.replicaset.name" semantic conventions. It represents the name of
+ // the ReplicaSet.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry'
+ K8SReplicaSetNameKey = attribute.Key("k8s.replicaset.name")
+)
+
+// K8SReplicaSetUID returns an attribute KeyValue conforming to the
+// "k8s.replicaset.uid" semantic conventions. It represents the UID of the
+// ReplicaSet.
+func K8SReplicaSetUID(val string) attribute.KeyValue {
+ return K8SReplicaSetUIDKey.String(val)
+}
+
+// K8SReplicaSetName returns an attribute KeyValue conforming to the
+// "k8s.replicaset.name" semantic conventions. It represents the name of the
+// ReplicaSet.
+func K8SReplicaSetName(val string) attribute.KeyValue {
+ return K8SReplicaSetNameKey.String(val)
+}
+
+// A Kubernetes Deployment object.
+const (
+ // K8SDeploymentUIDKey is the attribute Key conforming to the
+ // "k8s.deployment.uid" semantic conventions. It represents the UID of the
+ // Deployment.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '275ecb36-5aa8-4c2a-9c47-d8bb681b9aff'
+ K8SDeploymentUIDKey = attribute.Key("k8s.deployment.uid")
+
+ // K8SDeploymentNameKey is the attribute Key conforming to the
+ // "k8s.deployment.name" semantic conventions. It represents the name of
+ // the Deployment.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry'
+ K8SDeploymentNameKey = attribute.Key("k8s.deployment.name")
+)
+
+// K8SDeploymentUID returns an attribute KeyValue conforming to the
+// "k8s.deployment.uid" semantic conventions. It represents the UID of the
+// Deployment.
+func K8SDeploymentUID(val string) attribute.KeyValue {
+ return K8SDeploymentUIDKey.String(val)
+}
+
+// K8SDeploymentName returns an attribute KeyValue conforming to the
+// "k8s.deployment.name" semantic conventions. It represents the name of the
+// Deployment.
+func K8SDeploymentName(val string) attribute.KeyValue {
+ return K8SDeploymentNameKey.String(val)
+}
+
+// A Kubernetes StatefulSet object.
+const (
+ // K8SStatefulSetUIDKey is the attribute Key conforming to the
+ // "k8s.statefulset.uid" semantic conventions. It represents the UID of the
+ // StatefulSet.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '275ecb36-5aa8-4c2a-9c47-d8bb681b9aff'
+ K8SStatefulSetUIDKey = attribute.Key("k8s.statefulset.uid")
+
+ // K8SStatefulSetNameKey is the attribute Key conforming to the
+ // "k8s.statefulset.name" semantic conventions. It represents the name of
+ // the StatefulSet.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry'
+ K8SStatefulSetNameKey = attribute.Key("k8s.statefulset.name")
+)
+
+// K8SStatefulSetUID returns an attribute KeyValue conforming to the
+// "k8s.statefulset.uid" semantic conventions. It represents the UID of the
+// StatefulSet.
+func K8SStatefulSetUID(val string) attribute.KeyValue {
+ return K8SStatefulSetUIDKey.String(val)
+}
+
+// K8SStatefulSetName returns an attribute KeyValue conforming to the
+// "k8s.statefulset.name" semantic conventions. It represents the name of the
+// StatefulSet.
+func K8SStatefulSetName(val string) attribute.KeyValue {
+ return K8SStatefulSetNameKey.String(val)
+}
+
+// A Kubernetes DaemonSet object.
+const (
+ // K8SDaemonSetUIDKey is the attribute Key conforming to the
+ // "k8s.daemonset.uid" semantic conventions. It represents the UID of the
+ // DaemonSet.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '275ecb36-5aa8-4c2a-9c47-d8bb681b9aff'
+ K8SDaemonSetUIDKey = attribute.Key("k8s.daemonset.uid")
+
+ // K8SDaemonSetNameKey is the attribute Key conforming to the
+ // "k8s.daemonset.name" semantic conventions. It represents the name of the
+ // DaemonSet.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry'
+ K8SDaemonSetNameKey = attribute.Key("k8s.daemonset.name")
+)
+
+// K8SDaemonSetUID returns an attribute KeyValue conforming to the
+// "k8s.daemonset.uid" semantic conventions. It represents the UID of the
+// DaemonSet.
+func K8SDaemonSetUID(val string) attribute.KeyValue {
+ return K8SDaemonSetUIDKey.String(val)
+}
+
+// K8SDaemonSetName returns an attribute KeyValue conforming to the
+// "k8s.daemonset.name" semantic conventions. It represents the name of the
+// DaemonSet.
+func K8SDaemonSetName(val string) attribute.KeyValue {
+ return K8SDaemonSetNameKey.String(val)
+}
+
+// A Kubernetes Job object.
+const (
+ // K8SJobUIDKey is the attribute Key conforming to the "k8s.job.uid"
+ // semantic conventions. It represents the UID of the Job.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '275ecb36-5aa8-4c2a-9c47-d8bb681b9aff'
+ K8SJobUIDKey = attribute.Key("k8s.job.uid")
+
+ // K8SJobNameKey is the attribute Key conforming to the "k8s.job.name"
+ // semantic conventions. It represents the name of the Job.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry'
+ K8SJobNameKey = attribute.Key("k8s.job.name")
+)
+
+// K8SJobUID returns an attribute KeyValue conforming to the "k8s.job.uid"
+// semantic conventions. It represents the UID of the Job.
+func K8SJobUID(val string) attribute.KeyValue {
+ return K8SJobUIDKey.String(val)
+}
+
+// K8SJobName returns an attribute KeyValue conforming to the "k8s.job.name"
+// semantic conventions. It represents the name of the Job.
+func K8SJobName(val string) attribute.KeyValue {
+ return K8SJobNameKey.String(val)
+}
+
+// A Kubernetes CronJob object.
+const (
+ // K8SCronJobUIDKey is the attribute Key conforming to the
+ // "k8s.cronjob.uid" semantic conventions. It represents the UID of the
+ // CronJob.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '275ecb36-5aa8-4c2a-9c47-d8bb681b9aff'
+ K8SCronJobUIDKey = attribute.Key("k8s.cronjob.uid")
+
+ // K8SCronJobNameKey is the attribute Key conforming to the
+ // "k8s.cronjob.name" semantic conventions. It represents the name of the
+ // CronJob.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry'
+ K8SCronJobNameKey = attribute.Key("k8s.cronjob.name")
+)
+
+// K8SCronJobUID returns an attribute KeyValue conforming to the
+// "k8s.cronjob.uid" semantic conventions. It represents the UID of the
+// CronJob.
+func K8SCronJobUID(val string) attribute.KeyValue {
+ return K8SCronJobUIDKey.String(val)
+}
+
+// K8SCronJobName returns an attribute KeyValue conforming to the
+// "k8s.cronjob.name" semantic conventions. It represents the name of the
+// CronJob.
+func K8SCronJobName(val string) attribute.KeyValue {
+ return K8SCronJobNameKey.String(val)
+}
+
+// The operating system (OS) on which the process represented by this resource
+// is running.
+const (
+ // OSTypeKey is the attribute Key conforming to the "os.type" semantic
+ // conventions. It represents the operating system type.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ OSTypeKey = attribute.Key("os.type")
+
+ // OSDescriptionKey is the attribute Key conforming to the "os.description"
+ // semantic conventions. It represents the human readable (not intended to
+ // be parsed) OS version information, like e.g. reported by `ver` or
+ // `lsb_release -a` commands.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Microsoft Windows [Version 10.0.18363.778]', 'Ubuntu 18.04.1
+ // LTS'
+ OSDescriptionKey = attribute.Key("os.description")
+
+ // OSNameKey is the attribute Key conforming to the "os.name" semantic
+ // conventions. It represents the human readable operating system name.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'iOS', 'Android', 'Ubuntu'
+ OSNameKey = attribute.Key("os.name")
+
+ // OSVersionKey is the attribute Key conforming to the "os.version"
+ // semantic conventions. It represents the version string of the operating
+ // system as defined in [Version
+ // Attributes](../../resource/semantic_conventions/README.md#version-attributes).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '14.2.1', '18.04.1'
+ OSVersionKey = attribute.Key("os.version")
+)
+
+var (
+ // Microsoft Windows
+ OSTypeWindows = OSTypeKey.String("windows")
+ // Linux
+ OSTypeLinux = OSTypeKey.String("linux")
+ // Apple Darwin
+ OSTypeDarwin = OSTypeKey.String("darwin")
+ // FreeBSD
+ OSTypeFreeBSD = OSTypeKey.String("freebsd")
+ // NetBSD
+ OSTypeNetBSD = OSTypeKey.String("netbsd")
+ // OpenBSD
+ OSTypeOpenBSD = OSTypeKey.String("openbsd")
+ // DragonFly BSD
+ OSTypeDragonflyBSD = OSTypeKey.String("dragonflybsd")
+ // HP-UX (Hewlett Packard Unix)
+ OSTypeHPUX = OSTypeKey.String("hpux")
+ // AIX (Advanced Interactive eXecutive)
+ OSTypeAIX = OSTypeKey.String("aix")
+ // SunOS, Oracle Solaris
+ OSTypeSolaris = OSTypeKey.String("solaris")
+ // IBM z/OS
+ OSTypeZOS = OSTypeKey.String("z_os")
+)
+
+// OSDescription returns an attribute KeyValue conforming to the
+// "os.description" semantic conventions. It represents the human readable (not
+// intended to be parsed) OS version information, like e.g. reported by `ver`
+// or `lsb_release -a` commands.
+func OSDescription(val string) attribute.KeyValue {
+ return OSDescriptionKey.String(val)
+}
+
+// OSName returns an attribute KeyValue conforming to the "os.name" semantic
+// conventions. It represents the human readable operating system name.
+func OSName(val string) attribute.KeyValue {
+ return OSNameKey.String(val)
+}
+
+// OSVersion returns an attribute KeyValue conforming to the "os.version"
+// semantic conventions. It represents the version string of the operating
+// system as defined in [Version
+// Attributes](../../resource/semantic_conventions/README.md#version-attributes).
+func OSVersion(val string) attribute.KeyValue {
+ return OSVersionKey.String(val)
+}
+
+// An operating system process.
+const (
+ // ProcessPIDKey is the attribute Key conforming to the "process.pid"
+ // semantic conventions. It represents the process identifier (PID).
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 1234
+ ProcessPIDKey = attribute.Key("process.pid")
+
+ // ProcessParentPIDKey is the attribute Key conforming to the
+ // "process.parent_pid" semantic conventions. It represents the parent
+ // Process identifier (PID).
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 111
+ ProcessParentPIDKey = attribute.Key("process.parent_pid")
+
+ // ProcessExecutableNameKey is the attribute Key conforming to the
+ // "process.executable.name" semantic conventions. It represents the name
+ // of the process executable. On Linux based systems, can be set to the
+ // `Name` in `proc/[pid]/status`. On Windows, can be set to the base name
+ // of `GetProcessImageFileNameW`.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (See alternative attributes
+ // below.)
+ // Stability: stable
+ // Examples: 'otelcol'
+ ProcessExecutableNameKey = attribute.Key("process.executable.name")
+
+ // ProcessExecutablePathKey is the attribute Key conforming to the
+ // "process.executable.path" semantic conventions. It represents the full
+ // path to the process executable. On Linux based systems, can be set to
+ // the target of `proc/[pid]/exe`. On Windows, can be set to the result of
+ // `GetProcessImageFileNameW`.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (See alternative attributes
+ // below.)
+ // Stability: stable
+ // Examples: '/usr/bin/cmd/otelcol'
+ ProcessExecutablePathKey = attribute.Key("process.executable.path")
+
+ // ProcessCommandKey is the attribute Key conforming to the
+ // "process.command" semantic conventions. It represents the command used
+ // to launch the process (i.e. the command name). On Linux based systems,
+ // can be set to the zeroth string in `proc/[pid]/cmdline`. On Windows, can
+ // be set to the first parameter extracted from `GetCommandLineW`.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (See alternative attributes
+ // below.)
+ // Stability: stable
+ // Examples: 'cmd/otelcol'
+ ProcessCommandKey = attribute.Key("process.command")
+
+ // ProcessCommandLineKey is the attribute Key conforming to the
+ // "process.command_line" semantic conventions. It represents the full
+ // command used to launch the process as a single string representing the
+ // full command. On Windows, can be set to the result of `GetCommandLineW`.
+ // Do not set this if you have to assemble it just for monitoring; use
+ // `process.command_args` instead.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (See alternative attributes
+ // below.)
+ // Stability: stable
+ // Examples: 'C:\\cmd\\otecol --config="my directory\\config.yaml"'
+ ProcessCommandLineKey = attribute.Key("process.command_line")
+
+ // ProcessCommandArgsKey is the attribute Key conforming to the
+ // "process.command_args" semantic conventions. It represents the all the
+ // command arguments (including the command/executable itself) as received
+ // by the process. On Linux-based systems (and some other Unixoid systems
+ // supporting procfs), can be set according to the list of null-delimited
+ // strings extracted from `proc/[pid]/cmdline`. For libc-based executables,
+ // this would be the full argv vector passed to `main`.
+ //
+ // Type: string[]
+ // RequirementLevel: ConditionallyRequired (See alternative attributes
+ // below.)
+ // Stability: stable
+ // Examples: 'cmd/otecol', '--config=config.yaml'
+ ProcessCommandArgsKey = attribute.Key("process.command_args")
+
+ // ProcessOwnerKey is the attribute Key conforming to the "process.owner"
+ // semantic conventions. It represents the username of the user that owns
+ // the process.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'root'
+ ProcessOwnerKey = attribute.Key("process.owner")
+)
+
+// ProcessPID returns an attribute KeyValue conforming to the "process.pid"
+// semantic conventions. It represents the process identifier (PID).
+func ProcessPID(val int) attribute.KeyValue {
+ return ProcessPIDKey.Int(val)
+}
+
+// ProcessParentPID returns an attribute KeyValue conforming to the
+// "process.parent_pid" semantic conventions. It represents the parent Process
+// identifier (PID).
+func ProcessParentPID(val int) attribute.KeyValue {
+ return ProcessParentPIDKey.Int(val)
+}
+
+// ProcessExecutableName returns an attribute KeyValue conforming to the
+// "process.executable.name" semantic conventions. It represents the name of
+// the process executable. On Linux based systems, can be set to the `Name` in
+// `proc/[pid]/status`. On Windows, can be set to the base name of
+// `GetProcessImageFileNameW`.
+func ProcessExecutableName(val string) attribute.KeyValue {
+ return ProcessExecutableNameKey.String(val)
+}
+
+// ProcessExecutablePath returns an attribute KeyValue conforming to the
+// "process.executable.path" semantic conventions. It represents the full path
+// to the process executable. On Linux based systems, can be set to the target
+// of `proc/[pid]/exe`. On Windows, can be set to the result of
+// `GetProcessImageFileNameW`.
+func ProcessExecutablePath(val string) attribute.KeyValue {
+ return ProcessExecutablePathKey.String(val)
+}
+
+// ProcessCommand returns an attribute KeyValue conforming to the
+// "process.command" semantic conventions. It represents the command used to
+// launch the process (i.e. the command name). On Linux based systems, can be
+// set to the zeroth string in `proc/[pid]/cmdline`. On Windows, can be set to
+// the first parameter extracted from `GetCommandLineW`.
+func ProcessCommand(val string) attribute.KeyValue {
+ return ProcessCommandKey.String(val)
+}
+
+// ProcessCommandLine returns an attribute KeyValue conforming to the
+// "process.command_line" semantic conventions. It represents the full command
+// used to launch the process as a single string representing the full command.
+// On Windows, can be set to the result of `GetCommandLineW`. Do not set this
+// if you have to assemble it just for monitoring; use `process.command_args`
+// instead.
+func ProcessCommandLine(val string) attribute.KeyValue {
+ return ProcessCommandLineKey.String(val)
+}
+
+// ProcessCommandArgs returns an attribute KeyValue conforming to the
+// "process.command_args" semantic conventions. It represents the all the
+// command arguments (including the command/executable itself) as received by
+// the process. On Linux-based systems (and some other Unixoid systems
+// supporting procfs), can be set according to the list of null-delimited
+// strings extracted from `proc/[pid]/cmdline`. For libc-based executables,
+// this would be the full argv vector passed to `main`.
+func ProcessCommandArgs(val ...string) attribute.KeyValue {
+ return ProcessCommandArgsKey.StringSlice(val)
+}
+
+// ProcessOwner returns an attribute KeyValue conforming to the
+// "process.owner" semantic conventions. It represents the username of the user
+// that owns the process.
+func ProcessOwner(val string) attribute.KeyValue {
+ return ProcessOwnerKey.String(val)
+}
+
+// The single (language) runtime instance which is monitored.
+const (
+ // ProcessRuntimeNameKey is the attribute Key conforming to the
+ // "process.runtime.name" semantic conventions. It represents the name of
+ // the runtime of this process. For compiled native binaries, this SHOULD
+ // be the name of the compiler.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'OpenJDK Runtime Environment'
+ ProcessRuntimeNameKey = attribute.Key("process.runtime.name")
+
+ // ProcessRuntimeVersionKey is the attribute Key conforming to the
+ // "process.runtime.version" semantic conventions. It represents the
+ // version of the runtime of this process, as returned by the runtime
+ // without modification.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '14.0.2'
+ ProcessRuntimeVersionKey = attribute.Key("process.runtime.version")
+
+ // ProcessRuntimeDescriptionKey is the attribute Key conforming to the
+ // "process.runtime.description" semantic conventions. It represents an
+ // additional description about the runtime of the process, for example a
+ // specific vendor customization of the runtime environment.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Eclipse OpenJ9 Eclipse OpenJ9 VM openj9-0.21.0'
+ ProcessRuntimeDescriptionKey = attribute.Key("process.runtime.description")
+)
+
+// ProcessRuntimeName returns an attribute KeyValue conforming to the
+// "process.runtime.name" semantic conventions. It represents the name of the
+// runtime of this process. For compiled native binaries, this SHOULD be the
+// name of the compiler.
+func ProcessRuntimeName(val string) attribute.KeyValue {
+ return ProcessRuntimeNameKey.String(val)
+}
+
+// ProcessRuntimeVersion returns an attribute KeyValue conforming to the
+// "process.runtime.version" semantic conventions. It represents the version of
+// the runtime of this process, as returned by the runtime without
+// modification.
+func ProcessRuntimeVersion(val string) attribute.KeyValue {
+ return ProcessRuntimeVersionKey.String(val)
+}
+
+// ProcessRuntimeDescription returns an attribute KeyValue conforming to the
+// "process.runtime.description" semantic conventions. It represents an
+// additional description about the runtime of the process, for example a
+// specific vendor customization of the runtime environment.
+func ProcessRuntimeDescription(val string) attribute.KeyValue {
+ return ProcessRuntimeDescriptionKey.String(val)
+}
+
+// A service instance.
+const (
+ // ServiceNameKey is the attribute Key conforming to the "service.name"
+ // semantic conventions. It represents the logical name of the service.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'shoppingcart'
+ // Note: MUST be the same for all instances of horizontally scaled
+ // services. If the value was not specified, SDKs MUST fallback to
+ // `unknown_service:` concatenated with
+ // [`process.executable.name`](process.md#process), e.g.
+ // `unknown_service:bash`. If `process.executable.name` is not available,
+ // the value MUST be set to `unknown_service`.
+ ServiceNameKey = attribute.Key("service.name")
+
+ // ServiceNamespaceKey is the attribute Key conforming to the
+ // "service.namespace" semantic conventions. It represents a namespace for
+ // `service.name`.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Shop'
+ // Note: A string value having a meaning that helps to distinguish a group
+ // of services, for example the team name that owns a group of services.
+ // `service.name` is expected to be unique within the same namespace. If
+ // `service.namespace` is not specified in the Resource then `service.name`
+ // is expected to be unique for all services that have no explicit
+ // namespace defined (so the empty/unspecified namespace is simply one more
+ // valid namespace). Zero-length namespace string is assumed equal to
+ // unspecified namespace.
+ ServiceNamespaceKey = attribute.Key("service.namespace")
+
+ // ServiceInstanceIDKey is the attribute Key conforming to the
+ // "service.instance.id" semantic conventions. It represents the string ID
+ // of the service instance.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '627cc493-f310-47de-96bd-71410b7dec09'
+ // Note: MUST be unique for each instance of the same
+ // `service.namespace,service.name` pair (in other words
+ // `service.namespace,service.name,service.instance.id` triplet MUST be
+ // globally unique). The ID helps to distinguish instances of the same
+ // service that exist at the same time (e.g. instances of a horizontally
+ // scaled service). It is preferable for the ID to be persistent and stay
+ // the same for the lifetime of the service instance, however it is
+ // acceptable that the ID is ephemeral and changes during important
+ // lifetime events for the service (e.g. service restarts). If the service
+ // has no inherent unique ID that can be used as the value of this
+ // attribute it is recommended to generate a random Version 1 or Version 4
+ // RFC 4122 UUID (services aiming for reproducible UUIDs may also use
+ // Version 5, see RFC 4122 for more recommendations).
+ ServiceInstanceIDKey = attribute.Key("service.instance.id")
+
+ // ServiceVersionKey is the attribute Key conforming to the
+ // "service.version" semantic conventions. It represents the version string
+ // of the service API or implementation.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '2.0.0'
+ ServiceVersionKey = attribute.Key("service.version")
+)
+
+// ServiceName returns an attribute KeyValue conforming to the
+// "service.name" semantic conventions. It represents the logical name of the
+// service.
+func ServiceName(val string) attribute.KeyValue {
+ return ServiceNameKey.String(val)
+}
+
+// ServiceNamespace returns an attribute KeyValue conforming to the
+// "service.namespace" semantic conventions. It represents a namespace for
+// `service.name`.
+func ServiceNamespace(val string) attribute.KeyValue {
+ return ServiceNamespaceKey.String(val)
+}
+
+// ServiceInstanceID returns an attribute KeyValue conforming to the
+// "service.instance.id" semantic conventions. It represents the string ID of
+// the service instance.
+func ServiceInstanceID(val string) attribute.KeyValue {
+ return ServiceInstanceIDKey.String(val)
+}
+
+// ServiceVersion returns an attribute KeyValue conforming to the
+// "service.version" semantic conventions. It represents the version string of
+// the service API or implementation.
+func ServiceVersion(val string) attribute.KeyValue {
+ return ServiceVersionKey.String(val)
+}
+
+// The telemetry SDK used to capture data recorded by the instrumentation
+// libraries.
+const (
+ // TelemetrySDKNameKey is the attribute Key conforming to the
+ // "telemetry.sdk.name" semantic conventions. It represents the name of the
+ // telemetry SDK as defined above.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'opentelemetry'
+ TelemetrySDKNameKey = attribute.Key("telemetry.sdk.name")
+
+ // TelemetrySDKLanguageKey is the attribute Key conforming to the
+ // "telemetry.sdk.language" semantic conventions. It represents the
+ // language of the telemetry SDK.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ TelemetrySDKLanguageKey = attribute.Key("telemetry.sdk.language")
+
+ // TelemetrySDKVersionKey is the attribute Key conforming to the
+ // "telemetry.sdk.version" semantic conventions. It represents the version
+ // string of the telemetry SDK.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '1.2.3'
+ TelemetrySDKVersionKey = attribute.Key("telemetry.sdk.version")
+
+ // TelemetryAutoVersionKey is the attribute Key conforming to the
+ // "telemetry.auto.version" semantic conventions. It represents the version
+ // string of the auto instrumentation agent, if used.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '1.2.3'
+ TelemetryAutoVersionKey = attribute.Key("telemetry.auto.version")
+)
+
+var (
+ // cpp
+ TelemetrySDKLanguageCPP = TelemetrySDKLanguageKey.String("cpp")
+ // dotnet
+ TelemetrySDKLanguageDotnet = TelemetrySDKLanguageKey.String("dotnet")
+ // erlang
+ TelemetrySDKLanguageErlang = TelemetrySDKLanguageKey.String("erlang")
+ // go
+ TelemetrySDKLanguageGo = TelemetrySDKLanguageKey.String("go")
+ // java
+ TelemetrySDKLanguageJava = TelemetrySDKLanguageKey.String("java")
+ // nodejs
+ TelemetrySDKLanguageNodejs = TelemetrySDKLanguageKey.String("nodejs")
+ // php
+ TelemetrySDKLanguagePHP = TelemetrySDKLanguageKey.String("php")
+ // python
+ TelemetrySDKLanguagePython = TelemetrySDKLanguageKey.String("python")
+ // ruby
+ TelemetrySDKLanguageRuby = TelemetrySDKLanguageKey.String("ruby")
+ // webjs
+ TelemetrySDKLanguageWebjs = TelemetrySDKLanguageKey.String("webjs")
+ // swift
+ TelemetrySDKLanguageSwift = TelemetrySDKLanguageKey.String("swift")
+)
+
+// TelemetrySDKName returns an attribute KeyValue conforming to the
+// "telemetry.sdk.name" semantic conventions. It represents the name of the
+// telemetry SDK as defined above.
+func TelemetrySDKName(val string) attribute.KeyValue {
+ return TelemetrySDKNameKey.String(val)
+}
+
+// TelemetrySDKVersion returns an attribute KeyValue conforming to the
+// "telemetry.sdk.version" semantic conventions. It represents the version
+// string of the telemetry SDK.
+func TelemetrySDKVersion(val string) attribute.KeyValue {
+ return TelemetrySDKVersionKey.String(val)
+}
+
+// TelemetryAutoVersion returns an attribute KeyValue conforming to the
+// "telemetry.auto.version" semantic conventions. It represents the version
+// string of the auto instrumentation agent, if used.
+func TelemetryAutoVersion(val string) attribute.KeyValue {
+ return TelemetryAutoVersionKey.String(val)
+}
+
+// Resource describing the packaged software running the application code. Web
+// engines are typically executed using process.runtime.
+const (
+ // WebEngineNameKey is the attribute Key conforming to the "webengine.name"
+ // semantic conventions. It represents the name of the web engine.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'WildFly'
+ WebEngineNameKey = attribute.Key("webengine.name")
+
+ // WebEngineVersionKey is the attribute Key conforming to the
+ // "webengine.version" semantic conventions. It represents the version of
+ // the web engine.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '21.0.0'
+ WebEngineVersionKey = attribute.Key("webengine.version")
+
+ // WebEngineDescriptionKey is the attribute Key conforming to the
+ // "webengine.description" semantic conventions. It represents the
+ // additional description of the web engine (e.g. detailed version and
+ // edition information).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'WildFly Full 21.0.0.Final (WildFly Core 13.0.1.Final) -
+ // 2.2.2.Final'
+ WebEngineDescriptionKey = attribute.Key("webengine.description")
+)
+
+// WebEngineName returns an attribute KeyValue conforming to the
+// "webengine.name" semantic conventions. It represents the name of the web
+// engine.
+func WebEngineName(val string) attribute.KeyValue {
+ return WebEngineNameKey.String(val)
+}
+
+// WebEngineVersion returns an attribute KeyValue conforming to the
+// "webengine.version" semantic conventions. It represents the version of the
+// web engine.
+func WebEngineVersion(val string) attribute.KeyValue {
+ return WebEngineVersionKey.String(val)
+}
+
+// WebEngineDescription returns an attribute KeyValue conforming to the
+// "webengine.description" semantic conventions. It represents the additional
+// description of the web engine (e.g. detailed version and edition
+// information).
+func WebEngineDescription(val string) attribute.KeyValue {
+ return WebEngineDescriptionKey.String(val)
+}
+
+// Attributes used by non-OTLP exporters to represent OpenTelemetry Scope's
+// concepts.
+const (
+ // OTelScopeNameKey is the attribute Key conforming to the
+ // "otel.scope.name" semantic conventions. It represents the name of the
+ // instrumentation scope - (`InstrumentationScope.Name` in OTLP).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'io.opentelemetry.contrib.mongodb'
+ OTelScopeNameKey = attribute.Key("otel.scope.name")
+
+ // OTelScopeVersionKey is the attribute Key conforming to the
+ // "otel.scope.version" semantic conventions. It represents the version of
+ // the instrumentation scope - (`InstrumentationScope.Version` in OTLP).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '1.0.0'
+ OTelScopeVersionKey = attribute.Key("otel.scope.version")
+)
+
+// OTelScopeName returns an attribute KeyValue conforming to the
+// "otel.scope.name" semantic conventions. It represents the name of the
+// instrumentation scope - (`InstrumentationScope.Name` in OTLP).
+func OTelScopeName(val string) attribute.KeyValue {
+ return OTelScopeNameKey.String(val)
+}
+
+// OTelScopeVersion returns an attribute KeyValue conforming to the
+// "otel.scope.version" semantic conventions. It represents the version of the
+// instrumentation scope - (`InstrumentationScope.Version` in OTLP).
+func OTelScopeVersion(val string) attribute.KeyValue {
+ return OTelScopeVersionKey.String(val)
+}
+
+// Span attributes used by non-OTLP exporters to represent OpenTelemetry
+// Scope's concepts.
+const (
+ // OTelLibraryNameKey is the attribute Key conforming to the
+ // "otel.library.name" semantic conventions. It represents the deprecated,
+ // use the `otel.scope.name` attribute.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: deprecated
+ // Examples: 'io.opentelemetry.contrib.mongodb'
+ OTelLibraryNameKey = attribute.Key("otel.library.name")
+
+ // OTelLibraryVersionKey is the attribute Key conforming to the
+ // "otel.library.version" semantic conventions. It represents the
+ // deprecated, use the `otel.scope.version` attribute.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: deprecated
+ // Examples: '1.0.0'
+ OTelLibraryVersionKey = attribute.Key("otel.library.version")
+)
+
+// OTelLibraryName returns an attribute KeyValue conforming to the
+// "otel.library.name" semantic conventions. It represents the deprecated, use
+// the `otel.scope.name` attribute.
+func OTelLibraryName(val string) attribute.KeyValue {
+ return OTelLibraryNameKey.String(val)
+}
+
+// OTelLibraryVersion returns an attribute KeyValue conforming to the
+// "otel.library.version" semantic conventions. It represents the deprecated,
+// use the `otel.scope.version` attribute.
+func OTelLibraryVersion(val string) attribute.KeyValue {
+ return OTelLibraryVersionKey.String(val)
+}
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/schema.go b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/schema.go
new file mode 100644
index 0000000000000..70ad4d1b6cce2
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/schema.go
@@ -0,0 +1,9 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+package semconv // import "go.opentelemetry.io/otel/semconv/v1.18.0"
+
+// SchemaURL is the schema URL that matches the version of the semantic conventions
+// that this package defines. Semconv packages starting from v1.4.0 must declare
+// non-empty schema URL in the form https://opentelemetry.io/schemas/
+const SchemaURL = "https://opentelemetry.io/schemas/1.18.0"
diff --git a/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/trace.go b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/trace.go
new file mode 100644
index 0000000000000..03a64d90e420a
--- /dev/null
+++ b/vendor/go.opentelemetry.io/otel/semconv/v1.18.0/trace.go
@@ -0,0 +1,3370 @@
+// Copyright The OpenTelemetry Authors
+// SPDX-License-Identifier: Apache-2.0
+
+// Code generated from semantic convention specification. DO NOT EDIT.
+
+package semconv // import "go.opentelemetry.io/otel/semconv/v1.18.0"
+
+import "go.opentelemetry.io/otel/attribute"
+
+// The shared attributes used to report a single exception associated with a
+// span or log.
+const (
+ // ExceptionTypeKey is the attribute Key conforming to the "exception.type"
+ // semantic conventions. It represents the type of the exception (its
+ // fully-qualified class name, if applicable). The dynamic type of the
+ // exception should be preferred over the static type in languages that
+ // support it.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'java.net.ConnectException', 'OSError'
+ ExceptionTypeKey = attribute.Key("exception.type")
+
+ // ExceptionMessageKey is the attribute Key conforming to the
+ // "exception.message" semantic conventions. It represents the exception
+ // message.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Division by zero', "Can't convert 'int' object to str
+ // implicitly"
+ ExceptionMessageKey = attribute.Key("exception.message")
+
+ // ExceptionStacktraceKey is the attribute Key conforming to the
+ // "exception.stacktrace" semantic conventions. It represents a stacktrace
+ // as a string in the natural representation for the language runtime. The
+ // representation is to be determined and documented by each language SIG.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Exception in thread "main" java.lang.RuntimeException: Test
+ // exception\\n at '
+ // 'com.example.GenerateTrace.methodB(GenerateTrace.java:13)\\n at '
+ // 'com.example.GenerateTrace.methodA(GenerateTrace.java:9)\\n at '
+ // 'com.example.GenerateTrace.main(GenerateTrace.java:5)'
+ ExceptionStacktraceKey = attribute.Key("exception.stacktrace")
+)
+
+// ExceptionType returns an attribute KeyValue conforming to the
+// "exception.type" semantic conventions. It represents the type of the
+// exception (its fully-qualified class name, if applicable). The dynamic type
+// of the exception should be preferred over the static type in languages that
+// support it.
+func ExceptionType(val string) attribute.KeyValue {
+ return ExceptionTypeKey.String(val)
+}
+
+// ExceptionMessage returns an attribute KeyValue conforming to the
+// "exception.message" semantic conventions. It represents the exception
+// message.
+func ExceptionMessage(val string) attribute.KeyValue {
+ return ExceptionMessageKey.String(val)
+}
+
+// ExceptionStacktrace returns an attribute KeyValue conforming to the
+// "exception.stacktrace" semantic conventions. It represents a stacktrace as a
+// string in the natural representation for the language runtime. The
+// representation is to be determined and documented by each language SIG.
+func ExceptionStacktrace(val string) attribute.KeyValue {
+ return ExceptionStacktraceKey.String(val)
+}
+
+// Attributes for Events represented using Log Records.
+const (
+ // EventNameKey is the attribute Key conforming to the "event.name"
+ // semantic conventions. It represents the name identifies the event.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'click', 'exception'
+ EventNameKey = attribute.Key("event.name")
+
+ // EventDomainKey is the attribute Key conforming to the "event.domain"
+ // semantic conventions. It represents the domain identifies the business
+ // context for the events.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ // Note: Events across different domains may have same `event.name`, yet be
+ // unrelated events.
+ EventDomainKey = attribute.Key("event.domain")
+)
+
+var (
+ // Events from browser apps
+ EventDomainBrowser = EventDomainKey.String("browser")
+ // Events from mobile apps
+ EventDomainDevice = EventDomainKey.String("device")
+ // Events from Kubernetes
+ EventDomainK8S = EventDomainKey.String("k8s")
+)
+
+// EventName returns an attribute KeyValue conforming to the "event.name"
+// semantic conventions. It represents the name identifies the event.
+func EventName(val string) attribute.KeyValue {
+ return EventNameKey.String(val)
+}
+
+// Span attributes used by AWS Lambda (in addition to general `faas`
+// attributes).
+const (
+ // AWSLambdaInvokedARNKey is the attribute Key conforming to the
+ // "aws.lambda.invoked_arn" semantic conventions. It represents the full
+ // invoked ARN as provided on the `Context` passed to the function
+ // (`Lambda-Runtime-Invoked-Function-ARN` header on the
+ // `/runtime/invocation/next` applicable).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'arn:aws:lambda:us-east-1:123456:function:myfunction:myalias'
+ // Note: This may be different from `faas.id` if an alias is involved.
+ AWSLambdaInvokedARNKey = attribute.Key("aws.lambda.invoked_arn")
+)
+
+// AWSLambdaInvokedARN returns an attribute KeyValue conforming to the
+// "aws.lambda.invoked_arn" semantic conventions. It represents the full
+// invoked ARN as provided on the `Context` passed to the function
+// (`Lambda-Runtime-Invoked-Function-ARN` header on the
+// `/runtime/invocation/next` applicable).
+func AWSLambdaInvokedARN(val string) attribute.KeyValue {
+ return AWSLambdaInvokedARNKey.String(val)
+}
+
+// Attributes for CloudEvents. CloudEvents is a specification on how to define
+// event data in a standard way. These attributes can be attached to spans when
+// performing operations with CloudEvents, regardless of the protocol being
+// used.
+const (
+ // CloudeventsEventIDKey is the attribute Key conforming to the
+ // "cloudevents.event_id" semantic conventions. It represents the
+ // [event_id](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#id)
+ // uniquely identifies the event.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: '123e4567-e89b-12d3-a456-426614174000', '0001'
+ CloudeventsEventIDKey = attribute.Key("cloudevents.event_id")
+
+ // CloudeventsEventSourceKey is the attribute Key conforming to the
+ // "cloudevents.event_source" semantic conventions. It represents the
+ // [source](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#source-1)
+ // identifies the context in which an event happened.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'https://github.com/cloudevents',
+ // '/cloudevents/spec/pull/123', 'my-service'
+ CloudeventsEventSourceKey = attribute.Key("cloudevents.event_source")
+
+ // CloudeventsEventSpecVersionKey is the attribute Key conforming to the
+ // "cloudevents.event_spec_version" semantic conventions. It represents the
+ // [version of the CloudEvents
+ // specification](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#specversion)
+ // which the event uses.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '1.0'
+ CloudeventsEventSpecVersionKey = attribute.Key("cloudevents.event_spec_version")
+
+ // CloudeventsEventTypeKey is the attribute Key conforming to the
+ // "cloudevents.event_type" semantic conventions. It represents the
+ // [event_type](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#type)
+ // contains a value describing the type of event related to the originating
+ // occurrence.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'com.github.pull_request.opened',
+ // 'com.example.object.deleted.v2'
+ CloudeventsEventTypeKey = attribute.Key("cloudevents.event_type")
+
+ // CloudeventsEventSubjectKey is the attribute Key conforming to the
+ // "cloudevents.event_subject" semantic conventions. It represents the
+ // [subject](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#subject)
+ // of the event in the context of the event producer (identified by
+ // source).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'mynewfile.jpg'
+ CloudeventsEventSubjectKey = attribute.Key("cloudevents.event_subject")
+)
+
+// CloudeventsEventID returns an attribute KeyValue conforming to the
+// "cloudevents.event_id" semantic conventions. It represents the
+// [event_id](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#id)
+// uniquely identifies the event.
+func CloudeventsEventID(val string) attribute.KeyValue {
+ return CloudeventsEventIDKey.String(val)
+}
+
+// CloudeventsEventSource returns an attribute KeyValue conforming to the
+// "cloudevents.event_source" semantic conventions. It represents the
+// [source](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#source-1)
+// identifies the context in which an event happened.
+func CloudeventsEventSource(val string) attribute.KeyValue {
+ return CloudeventsEventSourceKey.String(val)
+}
+
+// CloudeventsEventSpecVersion returns an attribute KeyValue conforming to
+// the "cloudevents.event_spec_version" semantic conventions. It represents the
+// [version of the CloudEvents
+// specification](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#specversion)
+// which the event uses.
+func CloudeventsEventSpecVersion(val string) attribute.KeyValue {
+ return CloudeventsEventSpecVersionKey.String(val)
+}
+
+// CloudeventsEventType returns an attribute KeyValue conforming to the
+// "cloudevents.event_type" semantic conventions. It represents the
+// [event_type](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#type)
+// contains a value describing the type of event related to the originating
+// occurrence.
+func CloudeventsEventType(val string) attribute.KeyValue {
+ return CloudeventsEventTypeKey.String(val)
+}
+
+// CloudeventsEventSubject returns an attribute KeyValue conforming to the
+// "cloudevents.event_subject" semantic conventions. It represents the
+// [subject](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#subject)
+// of the event in the context of the event producer (identified by source).
+func CloudeventsEventSubject(val string) attribute.KeyValue {
+ return CloudeventsEventSubjectKey.String(val)
+}
+
+// Semantic conventions for the OpenTracing Shim
+const (
+ // OpentracingRefTypeKey is the attribute Key conforming to the
+ // "opentracing.ref_type" semantic conventions. It represents the
+ // parent-child Reference type
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Note: The causal relationship between a child Span and a parent Span.
+ OpentracingRefTypeKey = attribute.Key("opentracing.ref_type")
+)
+
+var (
+ // The parent Span depends on the child Span in some capacity
+ OpentracingRefTypeChildOf = OpentracingRefTypeKey.String("child_of")
+ // The parent Span does not depend in any way on the result of the child Span
+ OpentracingRefTypeFollowsFrom = OpentracingRefTypeKey.String("follows_from")
+)
+
+// The attributes used to perform database client calls.
+const (
+ // DBSystemKey is the attribute Key conforming to the "db.system" semantic
+ // conventions. It represents an identifier for the database management
+ // system (DBMS) product being used. See below for a list of well-known
+ // identifiers.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ DBSystemKey = attribute.Key("db.system")
+
+ // DBConnectionStringKey is the attribute Key conforming to the
+ // "db.connection_string" semantic conventions. It represents the
+ // connection string used to connect to the database. It is recommended to
+ // remove embedded credentials.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Server=(localdb)\\v11.0;Integrated Security=true;'
+ DBConnectionStringKey = attribute.Key("db.connection_string")
+
+ // DBUserKey is the attribute Key conforming to the "db.user" semantic
+ // conventions. It represents the username for accessing the database.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'readonly_user', 'reporting_user'
+ DBUserKey = attribute.Key("db.user")
+
+ // DBJDBCDriverClassnameKey is the attribute Key conforming to the
+ // "db.jdbc.driver_classname" semantic conventions. It represents the
+ // fully-qualified class name of the [Java Database Connectivity
+ // (JDBC)](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/)
+ // driver used to connect.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'org.postgresql.Driver',
+ // 'com.microsoft.sqlserver.jdbc.SQLServerDriver'
+ DBJDBCDriverClassnameKey = attribute.Key("db.jdbc.driver_classname")
+
+ // DBNameKey is the attribute Key conforming to the "db.name" semantic
+ // conventions. It represents the this attribute is used to report the name
+ // of the database being accessed. For commands that switch the database,
+ // this should be set to the target database (even if the command fails).
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (If applicable.)
+ // Stability: stable
+ // Examples: 'customers', 'main'
+ // Note: In some SQL databases, the database name to be used is called
+ // "schema name". In case there are multiple layers that could be
+ // considered for database name (e.g. Oracle instance name and schema
+ // name), the database name to be used is the more specific layer (e.g.
+ // Oracle schema name).
+ DBNameKey = attribute.Key("db.name")
+
+ // DBStatementKey is the attribute Key conforming to the "db.statement"
+ // semantic conventions. It represents the database statement being
+ // executed.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (If applicable and not
+ // explicitly disabled via instrumentation configuration.)
+ // Stability: stable
+ // Examples: 'SELECT * FROM wuser_table', 'SET mykey "WuValue"'
+ // Note: The value may be sanitized to exclude sensitive information.
+ DBStatementKey = attribute.Key("db.statement")
+
+ // DBOperationKey is the attribute Key conforming to the "db.operation"
+ // semantic conventions. It represents the name of the operation being
+ // executed, e.g. the [MongoDB command
+ // name](https://docs.mongodb.com/manual/reference/command/#database-operations)
+ // such as `findAndModify`, or the SQL keyword.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (If `db.statement` is not
+ // applicable.)
+ // Stability: stable
+ // Examples: 'findAndModify', 'HMSET', 'SELECT'
+ // Note: When setting this to an SQL keyword, it is not recommended to
+ // attempt any client-side parsing of `db.statement` just to get this
+ // property, but it should be set if the operation name is provided by the
+ // library being instrumented. If the SQL statement has an ambiguous
+ // operation, or performs more than one operation, this value may be
+ // omitted.
+ DBOperationKey = attribute.Key("db.operation")
+)
+
+var (
+ // Some other SQL database. Fallback only. See notes
+ DBSystemOtherSQL = DBSystemKey.String("other_sql")
+ // Microsoft SQL Server
+ DBSystemMSSQL = DBSystemKey.String("mssql")
+ // Microsoft SQL Server Compact
+ DBSystemMssqlcompact = DBSystemKey.String("mssqlcompact")
+ // MySQL
+ DBSystemMySQL = DBSystemKey.String("mysql")
+ // Oracle Database
+ DBSystemOracle = DBSystemKey.String("oracle")
+ // IBM DB2
+ DBSystemDB2 = DBSystemKey.String("db2")
+ // PostgreSQL
+ DBSystemPostgreSQL = DBSystemKey.String("postgresql")
+ // Amazon Redshift
+ DBSystemRedshift = DBSystemKey.String("redshift")
+ // Apache Hive
+ DBSystemHive = DBSystemKey.String("hive")
+ // Cloudscape
+ DBSystemCloudscape = DBSystemKey.String("cloudscape")
+ // HyperSQL DataBase
+ DBSystemHSQLDB = DBSystemKey.String("hsqldb")
+ // Progress Database
+ DBSystemProgress = DBSystemKey.String("progress")
+ // SAP MaxDB
+ DBSystemMaxDB = DBSystemKey.String("maxdb")
+ // SAP HANA
+ DBSystemHanaDB = DBSystemKey.String("hanadb")
+ // Ingres
+ DBSystemIngres = DBSystemKey.String("ingres")
+ // FirstSQL
+ DBSystemFirstSQL = DBSystemKey.String("firstsql")
+ // EnterpriseDB
+ DBSystemEDB = DBSystemKey.String("edb")
+ // InterSystems Caché
+ DBSystemCache = DBSystemKey.String("cache")
+ // Adabas (Adaptable Database System)
+ DBSystemAdabas = DBSystemKey.String("adabas")
+ // Firebird
+ DBSystemFirebird = DBSystemKey.String("firebird")
+ // Apache Derby
+ DBSystemDerby = DBSystemKey.String("derby")
+ // FileMaker
+ DBSystemFilemaker = DBSystemKey.String("filemaker")
+ // Informix
+ DBSystemInformix = DBSystemKey.String("informix")
+ // InstantDB
+ DBSystemInstantDB = DBSystemKey.String("instantdb")
+ // InterBase
+ DBSystemInterbase = DBSystemKey.String("interbase")
+ // MariaDB
+ DBSystemMariaDB = DBSystemKey.String("mariadb")
+ // Netezza
+ DBSystemNetezza = DBSystemKey.String("netezza")
+ // Pervasive PSQL
+ DBSystemPervasive = DBSystemKey.String("pervasive")
+ // PointBase
+ DBSystemPointbase = DBSystemKey.String("pointbase")
+ // SQLite
+ DBSystemSqlite = DBSystemKey.String("sqlite")
+ // Sybase
+ DBSystemSybase = DBSystemKey.String("sybase")
+ // Teradata
+ DBSystemTeradata = DBSystemKey.String("teradata")
+ // Vertica
+ DBSystemVertica = DBSystemKey.String("vertica")
+ // H2
+ DBSystemH2 = DBSystemKey.String("h2")
+ // ColdFusion IMQ
+ DBSystemColdfusion = DBSystemKey.String("coldfusion")
+ // Apache Cassandra
+ DBSystemCassandra = DBSystemKey.String("cassandra")
+ // Apache HBase
+ DBSystemHBase = DBSystemKey.String("hbase")
+ // MongoDB
+ DBSystemMongoDB = DBSystemKey.String("mongodb")
+ // Redis
+ DBSystemRedis = DBSystemKey.String("redis")
+ // Couchbase
+ DBSystemCouchbase = DBSystemKey.String("couchbase")
+ // CouchDB
+ DBSystemCouchDB = DBSystemKey.String("couchdb")
+ // Microsoft Azure Cosmos DB
+ DBSystemCosmosDB = DBSystemKey.String("cosmosdb")
+ // Amazon DynamoDB
+ DBSystemDynamoDB = DBSystemKey.String("dynamodb")
+ // Neo4j
+ DBSystemNeo4j = DBSystemKey.String("neo4j")
+ // Apache Geode
+ DBSystemGeode = DBSystemKey.String("geode")
+ // Elasticsearch
+ DBSystemElasticsearch = DBSystemKey.String("elasticsearch")
+ // Memcached
+ DBSystemMemcached = DBSystemKey.String("memcached")
+ // CockroachDB
+ DBSystemCockroachdb = DBSystemKey.String("cockroachdb")
+ // OpenSearch
+ DBSystemOpensearch = DBSystemKey.String("opensearch")
+ // ClickHouse
+ DBSystemClickhouse = DBSystemKey.String("clickhouse")
+ // Cloud Spanner
+ DBSystemSpanner = DBSystemKey.String("spanner")
+)
+
+// DBConnectionString returns an attribute KeyValue conforming to the
+// "db.connection_string" semantic conventions. It represents the connection
+// string used to connect to the database. It is recommended to remove embedded
+// credentials.
+func DBConnectionString(val string) attribute.KeyValue {
+ return DBConnectionStringKey.String(val)
+}
+
+// DBUser returns an attribute KeyValue conforming to the "db.user" semantic
+// conventions. It represents the username for accessing the database.
+func DBUser(val string) attribute.KeyValue {
+ return DBUserKey.String(val)
+}
+
+// DBJDBCDriverClassname returns an attribute KeyValue conforming to the
+// "db.jdbc.driver_classname" semantic conventions. It represents the
+// fully-qualified class name of the [Java Database Connectivity
+// (JDBC)](https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/) driver
+// used to connect.
+func DBJDBCDriverClassname(val string) attribute.KeyValue {
+ return DBJDBCDriverClassnameKey.String(val)
+}
+
+// DBName returns an attribute KeyValue conforming to the "db.name" semantic
+// conventions. It represents the this attribute is used to report the name of
+// the database being accessed. For commands that switch the database, this
+// should be set to the target database (even if the command fails).
+func DBName(val string) attribute.KeyValue {
+ return DBNameKey.String(val)
+}
+
+// DBStatement returns an attribute KeyValue conforming to the
+// "db.statement" semantic conventions. It represents the database statement
+// being executed.
+func DBStatement(val string) attribute.KeyValue {
+ return DBStatementKey.String(val)
+}
+
+// DBOperation returns an attribute KeyValue conforming to the
+// "db.operation" semantic conventions. It represents the name of the operation
+// being executed, e.g. the [MongoDB command
+// name](https://docs.mongodb.com/manual/reference/command/#database-operations)
+// such as `findAndModify`, or the SQL keyword.
+func DBOperation(val string) attribute.KeyValue {
+ return DBOperationKey.String(val)
+}
+
+// Connection-level attributes for Microsoft SQL Server
+const (
+ // DBMSSQLInstanceNameKey is the attribute Key conforming to the
+ // "db.mssql.instance_name" semantic conventions. It represents the
+ // Microsoft SQL Server [instance
+ // name](https://docs.microsoft.com/en-us/sql/connect/jdbc/building-the-connection-url?view=sql-server-ver15)
+ // connecting to. This name is used to determine the port of a named
+ // instance.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'MSSQLSERVER'
+ // Note: If setting a `db.mssql.instance_name`, `net.peer.port` is no
+ // longer required (but still recommended if non-standard).
+ DBMSSQLInstanceNameKey = attribute.Key("db.mssql.instance_name")
+)
+
+// DBMSSQLInstanceName returns an attribute KeyValue conforming to the
+// "db.mssql.instance_name" semantic conventions. It represents the Microsoft
+// SQL Server [instance
+// name](https://docs.microsoft.com/en-us/sql/connect/jdbc/building-the-connection-url?view=sql-server-ver15)
+// connecting to. This name is used to determine the port of a named instance.
+func DBMSSQLInstanceName(val string) attribute.KeyValue {
+ return DBMSSQLInstanceNameKey.String(val)
+}
+
+// Call-level attributes for Cassandra
+const (
+ // DBCassandraPageSizeKey is the attribute Key conforming to the
+ // "db.cassandra.page_size" semantic conventions. It represents the fetch
+ // size used for paging, i.e. how many rows will be returned at once.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 5000
+ DBCassandraPageSizeKey = attribute.Key("db.cassandra.page_size")
+
+ // DBCassandraConsistencyLevelKey is the attribute Key conforming to the
+ // "db.cassandra.consistency_level" semantic conventions. It represents the
+ // consistency level of the query. Based on consistency values from
+ // [CQL](https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/dml/dmlConfigConsistency.html).
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ DBCassandraConsistencyLevelKey = attribute.Key("db.cassandra.consistency_level")
+
+ // DBCassandraTableKey is the attribute Key conforming to the
+ // "db.cassandra.table" semantic conventions. It represents the name of the
+ // primary table that the operation is acting upon, including the keyspace
+ // name (if applicable).
+ //
+ // Type: string
+ // RequirementLevel: Recommended
+ // Stability: stable
+ // Examples: 'mytable'
+ // Note: This mirrors the db.sql.table attribute but references cassandra
+ // rather than sql. It is not recommended to attempt any client-side
+ // parsing of `db.statement` just to get this property, but it should be
+ // set if it is provided by the library being instrumented. If the
+ // operation is acting upon an anonymous table, or more than one table,
+ // this value MUST NOT be set.
+ DBCassandraTableKey = attribute.Key("db.cassandra.table")
+
+ // DBCassandraIdempotenceKey is the attribute Key conforming to the
+ // "db.cassandra.idempotence" semantic conventions. It represents the
+ // whether or not the query is idempotent.
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ DBCassandraIdempotenceKey = attribute.Key("db.cassandra.idempotence")
+
+ // DBCassandraSpeculativeExecutionCountKey is the attribute Key conforming
+ // to the "db.cassandra.speculative_execution_count" semantic conventions.
+ // It represents the number of times a query was speculatively executed.
+ // Not set or `0` if the query was not executed speculatively.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 0, 2
+ DBCassandraSpeculativeExecutionCountKey = attribute.Key("db.cassandra.speculative_execution_count")
+
+ // DBCassandraCoordinatorIDKey is the attribute Key conforming to the
+ // "db.cassandra.coordinator.id" semantic conventions. It represents the ID
+ // of the coordinating node for a query.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'be13faa2-8574-4d71-926d-27f16cf8a7af'
+ DBCassandraCoordinatorIDKey = attribute.Key("db.cassandra.coordinator.id")
+
+ // DBCassandraCoordinatorDCKey is the attribute Key conforming to the
+ // "db.cassandra.coordinator.dc" semantic conventions. It represents the
+ // data center of the coordinating node for a query.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'us-west-2'
+ DBCassandraCoordinatorDCKey = attribute.Key("db.cassandra.coordinator.dc")
+)
+
+var (
+ // all
+ DBCassandraConsistencyLevelAll = DBCassandraConsistencyLevelKey.String("all")
+ // each_quorum
+ DBCassandraConsistencyLevelEachQuorum = DBCassandraConsistencyLevelKey.String("each_quorum")
+ // quorum
+ DBCassandraConsistencyLevelQuorum = DBCassandraConsistencyLevelKey.String("quorum")
+ // local_quorum
+ DBCassandraConsistencyLevelLocalQuorum = DBCassandraConsistencyLevelKey.String("local_quorum")
+ // one
+ DBCassandraConsistencyLevelOne = DBCassandraConsistencyLevelKey.String("one")
+ // two
+ DBCassandraConsistencyLevelTwo = DBCassandraConsistencyLevelKey.String("two")
+ // three
+ DBCassandraConsistencyLevelThree = DBCassandraConsistencyLevelKey.String("three")
+ // local_one
+ DBCassandraConsistencyLevelLocalOne = DBCassandraConsistencyLevelKey.String("local_one")
+ // any
+ DBCassandraConsistencyLevelAny = DBCassandraConsistencyLevelKey.String("any")
+ // serial
+ DBCassandraConsistencyLevelSerial = DBCassandraConsistencyLevelKey.String("serial")
+ // local_serial
+ DBCassandraConsistencyLevelLocalSerial = DBCassandraConsistencyLevelKey.String("local_serial")
+)
+
+// DBCassandraPageSize returns an attribute KeyValue conforming to the
+// "db.cassandra.page_size" semantic conventions. It represents the fetch size
+// used for paging, i.e. how many rows will be returned at once.
+func DBCassandraPageSize(val int) attribute.KeyValue {
+ return DBCassandraPageSizeKey.Int(val)
+}
+
+// DBCassandraTable returns an attribute KeyValue conforming to the
+// "db.cassandra.table" semantic conventions. It represents the name of the
+// primary table that the operation is acting upon, including the keyspace name
+// (if applicable).
+func DBCassandraTable(val string) attribute.KeyValue {
+ return DBCassandraTableKey.String(val)
+}
+
+// DBCassandraIdempotence returns an attribute KeyValue conforming to the
+// "db.cassandra.idempotence" semantic conventions. It represents the whether
+// or not the query is idempotent.
+func DBCassandraIdempotence(val bool) attribute.KeyValue {
+ return DBCassandraIdempotenceKey.Bool(val)
+}
+
+// DBCassandraSpeculativeExecutionCount returns an attribute KeyValue
+// conforming to the "db.cassandra.speculative_execution_count" semantic
+// conventions. It represents the number of times a query was speculatively
+// executed. Not set or `0` if the query was not executed speculatively.
+func DBCassandraSpeculativeExecutionCount(val int) attribute.KeyValue {
+ return DBCassandraSpeculativeExecutionCountKey.Int(val)
+}
+
+// DBCassandraCoordinatorID returns an attribute KeyValue conforming to the
+// "db.cassandra.coordinator.id" semantic conventions. It represents the ID of
+// the coordinating node for a query.
+func DBCassandraCoordinatorID(val string) attribute.KeyValue {
+ return DBCassandraCoordinatorIDKey.String(val)
+}
+
+// DBCassandraCoordinatorDC returns an attribute KeyValue conforming to the
+// "db.cassandra.coordinator.dc" semantic conventions. It represents the data
+// center of the coordinating node for a query.
+func DBCassandraCoordinatorDC(val string) attribute.KeyValue {
+ return DBCassandraCoordinatorDCKey.String(val)
+}
+
+// Call-level attributes for Redis
+const (
+ // DBRedisDBIndexKey is the attribute Key conforming to the
+ // "db.redis.database_index" semantic conventions. It represents the index
+ // of the database being accessed as used in the [`SELECT`
+ // command](https://redis.io/commands/select), provided as an integer. To
+ // be used instead of the generic `db.name` attribute.
+ //
+ // Type: int
+ // RequirementLevel: ConditionallyRequired (If other than the default
+ // database (`0`).)
+ // Stability: stable
+ // Examples: 0, 1, 15
+ DBRedisDBIndexKey = attribute.Key("db.redis.database_index")
+)
+
+// DBRedisDBIndex returns an attribute KeyValue conforming to the
+// "db.redis.database_index" semantic conventions. It represents the index of
+// the database being accessed as used in the [`SELECT`
+// command](https://redis.io/commands/select), provided as an integer. To be
+// used instead of the generic `db.name` attribute.
+func DBRedisDBIndex(val int) attribute.KeyValue {
+ return DBRedisDBIndexKey.Int(val)
+}
+
+// Call-level attributes for MongoDB
+const (
+ // DBMongoDBCollectionKey is the attribute Key conforming to the
+ // "db.mongodb.collection" semantic conventions. It represents the
+ // collection being accessed within the database stated in `db.name`.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'customers', 'products'
+ DBMongoDBCollectionKey = attribute.Key("db.mongodb.collection")
+)
+
+// DBMongoDBCollection returns an attribute KeyValue conforming to the
+// "db.mongodb.collection" semantic conventions. It represents the collection
+// being accessed within the database stated in `db.name`.
+func DBMongoDBCollection(val string) attribute.KeyValue {
+ return DBMongoDBCollectionKey.String(val)
+}
+
+// Call-level attributes for SQL databases
+const (
+ // DBSQLTableKey is the attribute Key conforming to the "db.sql.table"
+ // semantic conventions. It represents the name of the primary table that
+ // the operation is acting upon, including the database name (if
+ // applicable).
+ //
+ // Type: string
+ // RequirementLevel: Recommended
+ // Stability: stable
+ // Examples: 'public.users', 'customers'
+ // Note: It is not recommended to attempt any client-side parsing of
+ // `db.statement` just to get this property, but it should be set if it is
+ // provided by the library being instrumented. If the operation is acting
+ // upon an anonymous table, or more than one table, this value MUST NOT be
+ // set.
+ DBSQLTableKey = attribute.Key("db.sql.table")
+)
+
+// DBSQLTable returns an attribute KeyValue conforming to the "db.sql.table"
+// semantic conventions. It represents the name of the primary table that the
+// operation is acting upon, including the database name (if applicable).
+func DBSQLTable(val string) attribute.KeyValue {
+ return DBSQLTableKey.String(val)
+}
+
+// Span attributes used by non-OTLP exporters to represent OpenTelemetry Span's
+// concepts.
+const (
+ // OTelStatusCodeKey is the attribute Key conforming to the
+ // "otel.status_code" semantic conventions. It represents the name of the
+ // code, either "OK" or "ERROR". MUST NOT be set if the status code is
+ // UNSET.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ OTelStatusCodeKey = attribute.Key("otel.status_code")
+
+ // OTelStatusDescriptionKey is the attribute Key conforming to the
+ // "otel.status_description" semantic conventions. It represents the
+ // description of the Status if it has a value, otherwise not set.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'resource not found'
+ OTelStatusDescriptionKey = attribute.Key("otel.status_description")
+)
+
+var (
+ // The operation has been validated by an Application developer or Operator to have completed successfully
+ OTelStatusCodeOk = OTelStatusCodeKey.String("OK")
+ // The operation contains an error
+ OTelStatusCodeError = OTelStatusCodeKey.String("ERROR")
+)
+
+// OTelStatusDescription returns an attribute KeyValue conforming to the
+// "otel.status_description" semantic conventions. It represents the
+// description of the Status if it has a value, otherwise not set.
+func OTelStatusDescription(val string) attribute.KeyValue {
+ return OTelStatusDescriptionKey.String(val)
+}
+
+// This semantic convention describes an instance of a function that runs
+// without provisioning or managing of servers (also known as serverless
+// functions or Function as a Service (FaaS)) with spans.
+const (
+ // FaaSTriggerKey is the attribute Key conforming to the "faas.trigger"
+ // semantic conventions. It represents the type of the trigger which caused
+ // this function execution.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Note: For the server/consumer span on the incoming side,
+ // `faas.trigger` MUST be set.
+ //
+ // Clients invoking FaaS instances usually cannot set `faas.trigger`,
+ // since they would typically need to look in the payload to determine
+ // the event type. If clients set it, it should be the same as the
+ // trigger that corresponding incoming would have (i.e., this has
+ // nothing to do with the underlying transport used to make the API
+ // call to invoke the lambda, which is often HTTP).
+ FaaSTriggerKey = attribute.Key("faas.trigger")
+
+ // FaaSExecutionKey is the attribute Key conforming to the "faas.execution"
+ // semantic conventions. It represents the execution ID of the current
+ // function execution.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'af9d5aa4-a685-4c5f-a22b-444f80b3cc28'
+ FaaSExecutionKey = attribute.Key("faas.execution")
+)
+
+var (
+ // A response to some data source operation such as a database or filesystem read/write
+ FaaSTriggerDatasource = FaaSTriggerKey.String("datasource")
+ // To provide an answer to an inbound HTTP request
+ FaaSTriggerHTTP = FaaSTriggerKey.String("http")
+ // A function is set to be executed when messages are sent to a messaging system
+ FaaSTriggerPubsub = FaaSTriggerKey.String("pubsub")
+ // A function is scheduled to be executed regularly
+ FaaSTriggerTimer = FaaSTriggerKey.String("timer")
+ // If none of the others apply
+ FaaSTriggerOther = FaaSTriggerKey.String("other")
+)
+
+// FaaSExecution returns an attribute KeyValue conforming to the
+// "faas.execution" semantic conventions. It represents the execution ID of the
+// current function execution.
+func FaaSExecution(val string) attribute.KeyValue {
+ return FaaSExecutionKey.String(val)
+}
+
+// Semantic Convention for FaaS triggered as a response to some data source
+// operation such as a database or filesystem read/write.
+const (
+ // FaaSDocumentCollectionKey is the attribute Key conforming to the
+ // "faas.document.collection" semantic conventions. It represents the name
+ // of the source on which the triggering operation was performed. For
+ // example, in Cloud Storage or S3 corresponds to the bucket name, and in
+ // Cosmos DB to the database name.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'myBucketName', 'myDBName'
+ FaaSDocumentCollectionKey = attribute.Key("faas.document.collection")
+
+ // FaaSDocumentOperationKey is the attribute Key conforming to the
+ // "faas.document.operation" semantic conventions. It represents the
+ // describes the type of the operation that was performed on the data.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ FaaSDocumentOperationKey = attribute.Key("faas.document.operation")
+
+ // FaaSDocumentTimeKey is the attribute Key conforming to the
+ // "faas.document.time" semantic conventions. It represents a string
+ // containing the time when the data was accessed in the [ISO
+ // 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format
+ // expressed in [UTC](https://www.w3.org/TR/NOTE-datetime).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '2020-01-23T13:47:06Z'
+ FaaSDocumentTimeKey = attribute.Key("faas.document.time")
+
+ // FaaSDocumentNameKey is the attribute Key conforming to the
+ // "faas.document.name" semantic conventions. It represents the document
+ // name/table subjected to the operation. For example, in Cloud Storage or
+ // S3 is the name of the file, and in Cosmos DB the table name.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'myFile.txt', 'myTableName'
+ FaaSDocumentNameKey = attribute.Key("faas.document.name")
+)
+
+var (
+ // When a new object is created
+ FaaSDocumentOperationInsert = FaaSDocumentOperationKey.String("insert")
+ // When an object is modified
+ FaaSDocumentOperationEdit = FaaSDocumentOperationKey.String("edit")
+ // When an object is deleted
+ FaaSDocumentOperationDelete = FaaSDocumentOperationKey.String("delete")
+)
+
+// FaaSDocumentCollection returns an attribute KeyValue conforming to the
+// "faas.document.collection" semantic conventions. It represents the name of
+// the source on which the triggering operation was performed. For example, in
+// Cloud Storage or S3 corresponds to the bucket name, and in Cosmos DB to the
+// database name.
+func FaaSDocumentCollection(val string) attribute.KeyValue {
+ return FaaSDocumentCollectionKey.String(val)
+}
+
+// FaaSDocumentTime returns an attribute KeyValue conforming to the
+// "faas.document.time" semantic conventions. It represents a string containing
+// the time when the data was accessed in the [ISO
+// 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format
+// expressed in [UTC](https://www.w3.org/TR/NOTE-datetime).
+func FaaSDocumentTime(val string) attribute.KeyValue {
+ return FaaSDocumentTimeKey.String(val)
+}
+
+// FaaSDocumentName returns an attribute KeyValue conforming to the
+// "faas.document.name" semantic conventions. It represents the document
+// name/table subjected to the operation. For example, in Cloud Storage or S3
+// is the name of the file, and in Cosmos DB the table name.
+func FaaSDocumentName(val string) attribute.KeyValue {
+ return FaaSDocumentNameKey.String(val)
+}
+
+// Semantic Convention for FaaS scheduled to be executed regularly.
+const (
+ // FaaSTimeKey is the attribute Key conforming to the "faas.time" semantic
+ // conventions. It represents a string containing the function invocation
+ // time in the [ISO
+ // 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format
+ // expressed in [UTC](https://www.w3.org/TR/NOTE-datetime).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '2020-01-23T13:47:06Z'
+ FaaSTimeKey = attribute.Key("faas.time")
+
+ // FaaSCronKey is the attribute Key conforming to the "faas.cron" semantic
+ // conventions. It represents a string containing the schedule period as
+ // [Cron
+ // Expression](https://docs.oracle.com/cd/E12058_01/doc/doc.1014/e12030/cron_expressions.htm).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '0/5 * * * ? *'
+ FaaSCronKey = attribute.Key("faas.cron")
+)
+
+// FaaSTime returns an attribute KeyValue conforming to the "faas.time"
+// semantic conventions. It represents a string containing the function
+// invocation time in the [ISO
+// 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format
+// expressed in [UTC](https://www.w3.org/TR/NOTE-datetime).
+func FaaSTime(val string) attribute.KeyValue {
+ return FaaSTimeKey.String(val)
+}
+
+// FaaSCron returns an attribute KeyValue conforming to the "faas.cron"
+// semantic conventions. It represents a string containing the schedule period
+// as [Cron
+// Expression](https://docs.oracle.com/cd/E12058_01/doc/doc.1014/e12030/cron_expressions.htm).
+func FaaSCron(val string) attribute.KeyValue {
+ return FaaSCronKey.String(val)
+}
+
+// Contains additional attributes for incoming FaaS spans.
+const (
+ // FaaSColdstartKey is the attribute Key conforming to the "faas.coldstart"
+ // semantic conventions. It represents a boolean that is true if the
+ // serverless function is executed for the first time (aka cold-start).
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ FaaSColdstartKey = attribute.Key("faas.coldstart")
+)
+
+// FaaSColdstart returns an attribute KeyValue conforming to the
+// "faas.coldstart" semantic conventions. It represents a boolean that is true
+// if the serverless function is executed for the first time (aka cold-start).
+func FaaSColdstart(val bool) attribute.KeyValue {
+ return FaaSColdstartKey.Bool(val)
+}
+
+// Contains additional attributes for outgoing FaaS spans.
+const (
+ // FaaSInvokedNameKey is the attribute Key conforming to the
+ // "faas.invoked_name" semantic conventions. It represents the name of the
+ // invoked function.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'my-function'
+ // Note: SHOULD be equal to the `faas.name` resource attribute of the
+ // invoked function.
+ FaaSInvokedNameKey = attribute.Key("faas.invoked_name")
+
+ // FaaSInvokedProviderKey is the attribute Key conforming to the
+ // "faas.invoked_provider" semantic conventions. It represents the cloud
+ // provider of the invoked function.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ // Note: SHOULD be equal to the `cloud.provider` resource attribute of the
+ // invoked function.
+ FaaSInvokedProviderKey = attribute.Key("faas.invoked_provider")
+
+ // FaaSInvokedRegionKey is the attribute Key conforming to the
+ // "faas.invoked_region" semantic conventions. It represents the cloud
+ // region of the invoked function.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (For some cloud providers, like
+ // AWS or GCP, the region in which a function is hosted is essential to
+ // uniquely identify the function and also part of its endpoint. Since it's
+ // part of the endpoint being called, the region is always known to
+ // clients. In these cases, `faas.invoked_region` MUST be set accordingly.
+ // If the region is unknown to the client or not required for identifying
+ // the invoked function, setting `faas.invoked_region` is optional.)
+ // Stability: stable
+ // Examples: 'eu-central-1'
+ // Note: SHOULD be equal to the `cloud.region` resource attribute of the
+ // invoked function.
+ FaaSInvokedRegionKey = attribute.Key("faas.invoked_region")
+)
+
+var (
+ // Alibaba Cloud
+ FaaSInvokedProviderAlibabaCloud = FaaSInvokedProviderKey.String("alibaba_cloud")
+ // Amazon Web Services
+ FaaSInvokedProviderAWS = FaaSInvokedProviderKey.String("aws")
+ // Microsoft Azure
+ FaaSInvokedProviderAzure = FaaSInvokedProviderKey.String("azure")
+ // Google Cloud Platform
+ FaaSInvokedProviderGCP = FaaSInvokedProviderKey.String("gcp")
+ // Tencent Cloud
+ FaaSInvokedProviderTencentCloud = FaaSInvokedProviderKey.String("tencent_cloud")
+)
+
+// FaaSInvokedName returns an attribute KeyValue conforming to the
+// "faas.invoked_name" semantic conventions. It represents the name of the
+// invoked function.
+func FaaSInvokedName(val string) attribute.KeyValue {
+ return FaaSInvokedNameKey.String(val)
+}
+
+// FaaSInvokedRegion returns an attribute KeyValue conforming to the
+// "faas.invoked_region" semantic conventions. It represents the cloud region
+// of the invoked function.
+func FaaSInvokedRegion(val string) attribute.KeyValue {
+ return FaaSInvokedRegionKey.String(val)
+}
+
+// These attributes may be used for any network related operation.
+const (
+ // NetTransportKey is the attribute Key conforming to the "net.transport"
+ // semantic conventions. It represents the transport protocol used. See
+ // note below.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ NetTransportKey = attribute.Key("net.transport")
+
+ // NetAppProtocolNameKey is the attribute Key conforming to the
+ // "net.app.protocol.name" semantic conventions. It represents the
+ // application layer protocol used. The value SHOULD be normalized to
+ // lowercase.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'amqp', 'http', 'mqtt'
+ NetAppProtocolNameKey = attribute.Key("net.app.protocol.name")
+
+ // NetAppProtocolVersionKey is the attribute Key conforming to the
+ // "net.app.protocol.version" semantic conventions. It represents the
+ // version of the application layer protocol used. See note below.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '3.1.1'
+ // Note: `net.app.protocol.version` refers to the version of the protocol
+ // used and might be different from the protocol client's version. If the
+ // HTTP client used has a version of `0.27.2`, but sends HTTP version
+ // `1.1`, this attribute should be set to `1.1`.
+ NetAppProtocolVersionKey = attribute.Key("net.app.protocol.version")
+
+ // NetSockPeerNameKey is the attribute Key conforming to the
+ // "net.sock.peer.name" semantic conventions. It represents the remote
+ // socket peer name.
+ //
+ // Type: string
+ // RequirementLevel: Recommended (If available and different from
+ // `net.peer.name` and if `net.sock.peer.addr` is set.)
+ // Stability: stable
+ // Examples: 'proxy.example.com'
+ NetSockPeerNameKey = attribute.Key("net.sock.peer.name")
+
+ // NetSockPeerAddrKey is the attribute Key conforming to the
+ // "net.sock.peer.addr" semantic conventions. It represents the remote
+ // socket peer address: IPv4 or IPv6 for internet protocols, path for local
+ // communication,
+ // [etc](https://man7.org/linux/man-pages/man7/address_families.7.html).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '127.0.0.1', '/tmp/mysql.sock'
+ NetSockPeerAddrKey = attribute.Key("net.sock.peer.addr")
+
+ // NetSockPeerPortKey is the attribute Key conforming to the
+ // "net.sock.peer.port" semantic conventions. It represents the remote
+ // socket peer port.
+ //
+ // Type: int
+ // RequirementLevel: Recommended (If defined for the address family and if
+ // different than `net.peer.port` and if `net.sock.peer.addr` is set.)
+ // Stability: stable
+ // Examples: 16456
+ NetSockPeerPortKey = attribute.Key("net.sock.peer.port")
+
+ // NetSockFamilyKey is the attribute Key conforming to the
+ // "net.sock.family" semantic conventions. It represents the protocol
+ // [address
+ // family](https://man7.org/linux/man-pages/man7/address_families.7.html)
+ // which is used for communication.
+ //
+ // Type: Enum
+ // RequirementLevel: ConditionallyRequired (If different than `inet` and if
+ // any of `net.sock.peer.addr` or `net.sock.host.addr` are set. Consumers
+ // of telemetry SHOULD accept both IPv4 and IPv6 formats for the address in
+ // `net.sock.peer.addr` if `net.sock.family` is not set. This is to support
+ // instrumentations that follow previous versions of this document.)
+ // Stability: stable
+ // Examples: 'inet6', 'bluetooth'
+ NetSockFamilyKey = attribute.Key("net.sock.family")
+
+ // NetPeerNameKey is the attribute Key conforming to the "net.peer.name"
+ // semantic conventions. It represents the logical remote hostname, see
+ // note below.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'example.com'
+ // Note: `net.peer.name` SHOULD NOT be set if capturing it would require an
+ // extra DNS lookup.
+ NetPeerNameKey = attribute.Key("net.peer.name")
+
+ // NetPeerPortKey is the attribute Key conforming to the "net.peer.port"
+ // semantic conventions. It represents the logical remote port number
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 80, 8080, 443
+ NetPeerPortKey = attribute.Key("net.peer.port")
+
+ // NetHostNameKey is the attribute Key conforming to the "net.host.name"
+ // semantic conventions. It represents the logical local hostname or
+ // similar, see note below.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'localhost'
+ NetHostNameKey = attribute.Key("net.host.name")
+
+ // NetHostPortKey is the attribute Key conforming to the "net.host.port"
+ // semantic conventions. It represents the logical local port number,
+ // preferably the one that the peer used to connect
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 8080
+ NetHostPortKey = attribute.Key("net.host.port")
+
+ // NetSockHostAddrKey is the attribute Key conforming to the
+ // "net.sock.host.addr" semantic conventions. It represents the local
+ // socket address. Useful in case of a multi-IP host.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '192.168.0.1'
+ NetSockHostAddrKey = attribute.Key("net.sock.host.addr")
+
+ // NetSockHostPortKey is the attribute Key conforming to the
+ // "net.sock.host.port" semantic conventions. It represents the local
+ // socket port number.
+ //
+ // Type: int
+ // RequirementLevel: Recommended (If defined for the address family and if
+ // different than `net.host.port` and if `net.sock.host.addr` is set.)
+ // Stability: stable
+ // Examples: 35555
+ NetSockHostPortKey = attribute.Key("net.sock.host.port")
+
+ // NetHostConnectionTypeKey is the attribute Key conforming to the
+ // "net.host.connection.type" semantic conventions. It represents the
+ // internet connection type currently being used by the host.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'wifi'
+ NetHostConnectionTypeKey = attribute.Key("net.host.connection.type")
+
+ // NetHostConnectionSubtypeKey is the attribute Key conforming to the
+ // "net.host.connection.subtype" semantic conventions. It represents the
+ // this describes more details regarding the connection.type. It may be the
+ // type of cell technology connection, but it could be used for describing
+ // details about a wifi connection.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'LTE'
+ NetHostConnectionSubtypeKey = attribute.Key("net.host.connection.subtype")
+
+ // NetHostCarrierNameKey is the attribute Key conforming to the
+ // "net.host.carrier.name" semantic conventions. It represents the name of
+ // the mobile carrier.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'sprint'
+ NetHostCarrierNameKey = attribute.Key("net.host.carrier.name")
+
+ // NetHostCarrierMccKey is the attribute Key conforming to the
+ // "net.host.carrier.mcc" semantic conventions. It represents the mobile
+ // carrier country code.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '310'
+ NetHostCarrierMccKey = attribute.Key("net.host.carrier.mcc")
+
+ // NetHostCarrierMncKey is the attribute Key conforming to the
+ // "net.host.carrier.mnc" semantic conventions. It represents the mobile
+ // carrier network code.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '001'
+ NetHostCarrierMncKey = attribute.Key("net.host.carrier.mnc")
+
+ // NetHostCarrierIccKey is the attribute Key conforming to the
+ // "net.host.carrier.icc" semantic conventions. It represents the ISO
+ // 3166-1 alpha-2 2-character country code associated with the mobile
+ // carrier network.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'DE'
+ NetHostCarrierIccKey = attribute.Key("net.host.carrier.icc")
+)
+
+var (
+ // ip_tcp
+ NetTransportTCP = NetTransportKey.String("ip_tcp")
+ // ip_udp
+ NetTransportUDP = NetTransportKey.String("ip_udp")
+ // Named or anonymous pipe. See note below
+ NetTransportPipe = NetTransportKey.String("pipe")
+ // In-process communication
+ NetTransportInProc = NetTransportKey.String("inproc")
+ // Something else (non IP-based)
+ NetTransportOther = NetTransportKey.String("other")
+)
+
+var (
+ // IPv4 address
+ NetSockFamilyInet = NetSockFamilyKey.String("inet")
+ // IPv6 address
+ NetSockFamilyInet6 = NetSockFamilyKey.String("inet6")
+ // Unix domain socket path
+ NetSockFamilyUnix = NetSockFamilyKey.String("unix")
+)
+
+var (
+ // wifi
+ NetHostConnectionTypeWifi = NetHostConnectionTypeKey.String("wifi")
+ // wired
+ NetHostConnectionTypeWired = NetHostConnectionTypeKey.String("wired")
+ // cell
+ NetHostConnectionTypeCell = NetHostConnectionTypeKey.String("cell")
+ // unavailable
+ NetHostConnectionTypeUnavailable = NetHostConnectionTypeKey.String("unavailable")
+ // unknown
+ NetHostConnectionTypeUnknown = NetHostConnectionTypeKey.String("unknown")
+)
+
+var (
+ // GPRS
+ NetHostConnectionSubtypeGprs = NetHostConnectionSubtypeKey.String("gprs")
+ // EDGE
+ NetHostConnectionSubtypeEdge = NetHostConnectionSubtypeKey.String("edge")
+ // UMTS
+ NetHostConnectionSubtypeUmts = NetHostConnectionSubtypeKey.String("umts")
+ // CDMA
+ NetHostConnectionSubtypeCdma = NetHostConnectionSubtypeKey.String("cdma")
+ // EVDO Rel. 0
+ NetHostConnectionSubtypeEvdo0 = NetHostConnectionSubtypeKey.String("evdo_0")
+ // EVDO Rev. A
+ NetHostConnectionSubtypeEvdoA = NetHostConnectionSubtypeKey.String("evdo_a")
+ // CDMA2000 1XRTT
+ NetHostConnectionSubtypeCdma20001xrtt = NetHostConnectionSubtypeKey.String("cdma2000_1xrtt")
+ // HSDPA
+ NetHostConnectionSubtypeHsdpa = NetHostConnectionSubtypeKey.String("hsdpa")
+ // HSUPA
+ NetHostConnectionSubtypeHsupa = NetHostConnectionSubtypeKey.String("hsupa")
+ // HSPA
+ NetHostConnectionSubtypeHspa = NetHostConnectionSubtypeKey.String("hspa")
+ // IDEN
+ NetHostConnectionSubtypeIden = NetHostConnectionSubtypeKey.String("iden")
+ // EVDO Rev. B
+ NetHostConnectionSubtypeEvdoB = NetHostConnectionSubtypeKey.String("evdo_b")
+ // LTE
+ NetHostConnectionSubtypeLte = NetHostConnectionSubtypeKey.String("lte")
+ // EHRPD
+ NetHostConnectionSubtypeEhrpd = NetHostConnectionSubtypeKey.String("ehrpd")
+ // HSPAP
+ NetHostConnectionSubtypeHspap = NetHostConnectionSubtypeKey.String("hspap")
+ // GSM
+ NetHostConnectionSubtypeGsm = NetHostConnectionSubtypeKey.String("gsm")
+ // TD-SCDMA
+ NetHostConnectionSubtypeTdScdma = NetHostConnectionSubtypeKey.String("td_scdma")
+ // IWLAN
+ NetHostConnectionSubtypeIwlan = NetHostConnectionSubtypeKey.String("iwlan")
+ // 5G NR (New Radio)
+ NetHostConnectionSubtypeNr = NetHostConnectionSubtypeKey.String("nr")
+ // 5G NRNSA (New Radio Non-Standalone)
+ NetHostConnectionSubtypeNrnsa = NetHostConnectionSubtypeKey.String("nrnsa")
+ // LTE CA
+ NetHostConnectionSubtypeLteCa = NetHostConnectionSubtypeKey.String("lte_ca")
+)
+
+// NetAppProtocolName returns an attribute KeyValue conforming to the
+// "net.app.protocol.name" semantic conventions. It represents the application
+// layer protocol used. The value SHOULD be normalized to lowercase.
+func NetAppProtocolName(val string) attribute.KeyValue {
+ return NetAppProtocolNameKey.String(val)
+}
+
+// NetAppProtocolVersion returns an attribute KeyValue conforming to the
+// "net.app.protocol.version" semantic conventions. It represents the version
+// of the application layer protocol used. See note below.
+func NetAppProtocolVersion(val string) attribute.KeyValue {
+ return NetAppProtocolVersionKey.String(val)
+}
+
+// NetSockPeerName returns an attribute KeyValue conforming to the
+// "net.sock.peer.name" semantic conventions. It represents the remote socket
+// peer name.
+func NetSockPeerName(val string) attribute.KeyValue {
+ return NetSockPeerNameKey.String(val)
+}
+
+// NetSockPeerAddr returns an attribute KeyValue conforming to the
+// "net.sock.peer.addr" semantic conventions. It represents the remote socket
+// peer address: IPv4 or IPv6 for internet protocols, path for local
+// communication,
+// [etc](https://man7.org/linux/man-pages/man7/address_families.7.html).
+func NetSockPeerAddr(val string) attribute.KeyValue {
+ return NetSockPeerAddrKey.String(val)
+}
+
+// NetSockPeerPort returns an attribute KeyValue conforming to the
+// "net.sock.peer.port" semantic conventions. It represents the remote socket
+// peer port.
+func NetSockPeerPort(val int) attribute.KeyValue {
+ return NetSockPeerPortKey.Int(val)
+}
+
+// NetPeerName returns an attribute KeyValue conforming to the
+// "net.peer.name" semantic conventions. It represents the logical remote
+// hostname, see note below.
+func NetPeerName(val string) attribute.KeyValue {
+ return NetPeerNameKey.String(val)
+}
+
+// NetPeerPort returns an attribute KeyValue conforming to the
+// "net.peer.port" semantic conventions. It represents the logical remote port
+// number
+func NetPeerPort(val int) attribute.KeyValue {
+ return NetPeerPortKey.Int(val)
+}
+
+// NetHostName returns an attribute KeyValue conforming to the
+// "net.host.name" semantic conventions. It represents the logical local
+// hostname or similar, see note below.
+func NetHostName(val string) attribute.KeyValue {
+ return NetHostNameKey.String(val)
+}
+
+// NetHostPort returns an attribute KeyValue conforming to the
+// "net.host.port" semantic conventions. It represents the logical local port
+// number, preferably the one that the peer used to connect
+func NetHostPort(val int) attribute.KeyValue {
+ return NetHostPortKey.Int(val)
+}
+
+// NetSockHostAddr returns an attribute KeyValue conforming to the
+// "net.sock.host.addr" semantic conventions. It represents the local socket
+// address. Useful in case of a multi-IP host.
+func NetSockHostAddr(val string) attribute.KeyValue {
+ return NetSockHostAddrKey.String(val)
+}
+
+// NetSockHostPort returns an attribute KeyValue conforming to the
+// "net.sock.host.port" semantic conventions. It represents the local socket
+// port number.
+func NetSockHostPort(val int) attribute.KeyValue {
+ return NetSockHostPortKey.Int(val)
+}
+
+// NetHostCarrierName returns an attribute KeyValue conforming to the
+// "net.host.carrier.name" semantic conventions. It represents the name of the
+// mobile carrier.
+func NetHostCarrierName(val string) attribute.KeyValue {
+ return NetHostCarrierNameKey.String(val)
+}
+
+// NetHostCarrierMcc returns an attribute KeyValue conforming to the
+// "net.host.carrier.mcc" semantic conventions. It represents the mobile
+// carrier country code.
+func NetHostCarrierMcc(val string) attribute.KeyValue {
+ return NetHostCarrierMccKey.String(val)
+}
+
+// NetHostCarrierMnc returns an attribute KeyValue conforming to the
+// "net.host.carrier.mnc" semantic conventions. It represents the mobile
+// carrier network code.
+func NetHostCarrierMnc(val string) attribute.KeyValue {
+ return NetHostCarrierMncKey.String(val)
+}
+
+// NetHostCarrierIcc returns an attribute KeyValue conforming to the
+// "net.host.carrier.icc" semantic conventions. It represents the ISO 3166-1
+// alpha-2 2-character country code associated with the mobile carrier network.
+func NetHostCarrierIcc(val string) attribute.KeyValue {
+ return NetHostCarrierIccKey.String(val)
+}
+
+// Operations that access some remote service.
+const (
+ // PeerServiceKey is the attribute Key conforming to the "peer.service"
+ // semantic conventions. It represents the
+ // [`service.name`](../../resource/semantic_conventions/README.md#service)
+ // of the remote service. SHOULD be equal to the actual `service.name`
+ // resource attribute of the remote service if any.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'AuthTokenCache'
+ PeerServiceKey = attribute.Key("peer.service")
+)
+
+// PeerService returns an attribute KeyValue conforming to the
+// "peer.service" semantic conventions. It represents the
+// [`service.name`](../../resource/semantic_conventions/README.md#service) of
+// the remote service. SHOULD be equal to the actual `service.name` resource
+// attribute of the remote service if any.
+func PeerService(val string) attribute.KeyValue {
+ return PeerServiceKey.String(val)
+}
+
+// These attributes may be used for any operation with an authenticated and/or
+// authorized enduser.
+const (
+ // EnduserIDKey is the attribute Key conforming to the "enduser.id"
+ // semantic conventions. It represents the username or client_id extracted
+ // from the access token or
+ // [Authorization](https://tools.ietf.org/html/rfc7235#section-4.2) header
+ // in the inbound request from outside the system.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'username'
+ EnduserIDKey = attribute.Key("enduser.id")
+
+ // EnduserRoleKey is the attribute Key conforming to the "enduser.role"
+ // semantic conventions. It represents the actual/assumed role the client
+ // is making the request under extracted from token or application security
+ // context.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'admin'
+ EnduserRoleKey = attribute.Key("enduser.role")
+
+ // EnduserScopeKey is the attribute Key conforming to the "enduser.scope"
+ // semantic conventions. It represents the scopes or granted authorities
+ // the client currently possesses extracted from token or application
+ // security context. The value would come from the scope associated with an
+ // [OAuth 2.0 Access
+ // Token](https://tools.ietf.org/html/rfc6749#section-3.3) or an attribute
+ // value in a [SAML 2.0
+ // Assertion](http://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0.html).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'read:message, write:files'
+ EnduserScopeKey = attribute.Key("enduser.scope")
+)
+
+// EnduserID returns an attribute KeyValue conforming to the "enduser.id"
+// semantic conventions. It represents the username or client_id extracted from
+// the access token or
+// [Authorization](https://tools.ietf.org/html/rfc7235#section-4.2) header in
+// the inbound request from outside the system.
+func EnduserID(val string) attribute.KeyValue {
+ return EnduserIDKey.String(val)
+}
+
+// EnduserRole returns an attribute KeyValue conforming to the
+// "enduser.role" semantic conventions. It represents the actual/assumed role
+// the client is making the request under extracted from token or application
+// security context.
+func EnduserRole(val string) attribute.KeyValue {
+ return EnduserRoleKey.String(val)
+}
+
+// EnduserScope returns an attribute KeyValue conforming to the
+// "enduser.scope" semantic conventions. It represents the scopes or granted
+// authorities the client currently possesses extracted from token or
+// application security context. The value would come from the scope associated
+// with an [OAuth 2.0 Access
+// Token](https://tools.ietf.org/html/rfc6749#section-3.3) or an attribute
+// value in a [SAML 2.0
+// Assertion](http://docs.oasis-open.org/security/saml/Post2.0/sstc-saml-tech-overview-2.0.html).
+func EnduserScope(val string) attribute.KeyValue {
+ return EnduserScopeKey.String(val)
+}
+
+// These attributes may be used for any operation to store information about a
+// thread that started a span.
+const (
+ // ThreadIDKey is the attribute Key conforming to the "thread.id" semantic
+ // conventions. It represents the current "managed" thread ID (as opposed
+ // to OS thread ID).
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 42
+ ThreadIDKey = attribute.Key("thread.id")
+
+ // ThreadNameKey is the attribute Key conforming to the "thread.name"
+ // semantic conventions. It represents the current thread name.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'main'
+ ThreadNameKey = attribute.Key("thread.name")
+)
+
+// ThreadID returns an attribute KeyValue conforming to the "thread.id"
+// semantic conventions. It represents the current "managed" thread ID (as
+// opposed to OS thread ID).
+func ThreadID(val int) attribute.KeyValue {
+ return ThreadIDKey.Int(val)
+}
+
+// ThreadName returns an attribute KeyValue conforming to the "thread.name"
+// semantic conventions. It represents the current thread name.
+func ThreadName(val string) attribute.KeyValue {
+ return ThreadNameKey.String(val)
+}
+
+// These attributes allow to report this unit of code and therefore to provide
+// more context about the span.
+const (
+ // CodeFunctionKey is the attribute Key conforming to the "code.function"
+ // semantic conventions. It represents the method or function name, or
+ // equivalent (usually rightmost part of the code unit's name).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'serveRequest'
+ CodeFunctionKey = attribute.Key("code.function")
+
+ // CodeNamespaceKey is the attribute Key conforming to the "code.namespace"
+ // semantic conventions. It represents the "namespace" within which
+ // `code.function` is defined. Usually the qualified class or module name,
+ // such that `code.namespace` + some separator + `code.function` form a
+ // unique identifier for the code unit.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'com.example.MyHTTPService'
+ CodeNamespaceKey = attribute.Key("code.namespace")
+
+ // CodeFilepathKey is the attribute Key conforming to the "code.filepath"
+ // semantic conventions. It represents the source code file name that
+ // identifies the code unit as uniquely as possible (preferably an absolute
+ // file path).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '/usr/local/MyApplication/content_root/app/index.php'
+ CodeFilepathKey = attribute.Key("code.filepath")
+
+ // CodeLineNumberKey is the attribute Key conforming to the "code.lineno"
+ // semantic conventions. It represents the line number in `code.filepath`
+ // best representing the operation. It SHOULD point within the code unit
+ // named in `code.function`.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 42
+ CodeLineNumberKey = attribute.Key("code.lineno")
+
+ // CodeColumnKey is the attribute Key conforming to the "code.column"
+ // semantic conventions. It represents the column number in `code.filepath`
+ // best representing the operation. It SHOULD point within the code unit
+ // named in `code.function`.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 16
+ CodeColumnKey = attribute.Key("code.column")
+)
+
+// CodeFunction returns an attribute KeyValue conforming to the
+// "code.function" semantic conventions. It represents the method or function
+// name, or equivalent (usually rightmost part of the code unit's name).
+func CodeFunction(val string) attribute.KeyValue {
+ return CodeFunctionKey.String(val)
+}
+
+// CodeNamespace returns an attribute KeyValue conforming to the
+// "code.namespace" semantic conventions. It represents the "namespace" within
+// which `code.function` is defined. Usually the qualified class or module
+// name, such that `code.namespace` + some separator + `code.function` form a
+// unique identifier for the code unit.
+func CodeNamespace(val string) attribute.KeyValue {
+ return CodeNamespaceKey.String(val)
+}
+
+// CodeFilepath returns an attribute KeyValue conforming to the
+// "code.filepath" semantic conventions. It represents the source code file
+// name that identifies the code unit as uniquely as possible (preferably an
+// absolute file path).
+func CodeFilepath(val string) attribute.KeyValue {
+ return CodeFilepathKey.String(val)
+}
+
+// CodeLineNumber returns an attribute KeyValue conforming to the "code.lineno"
+// semantic conventions. It represents the line number in `code.filepath` best
+// representing the operation. It SHOULD point within the code unit named in
+// `code.function`.
+func CodeLineNumber(val int) attribute.KeyValue {
+ return CodeLineNumberKey.Int(val)
+}
+
+// CodeColumn returns an attribute KeyValue conforming to the "code.column"
+// semantic conventions. It represents the column number in `code.filepath`
+// best representing the operation. It SHOULD point within the code unit named
+// in `code.function`.
+func CodeColumn(val int) attribute.KeyValue {
+ return CodeColumnKey.Int(val)
+}
+
+// Semantic conventions for HTTP client and server Spans.
+const (
+ // HTTPMethodKey is the attribute Key conforming to the "http.method"
+ // semantic conventions. It represents the hTTP request method.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'GET', 'POST', 'HEAD'
+ HTTPMethodKey = attribute.Key("http.method")
+
+ // HTTPStatusCodeKey is the attribute Key conforming to the
+ // "http.status_code" semantic conventions. It represents the [HTTP
+ // response status code](https://tools.ietf.org/html/rfc7231#section-6).
+ //
+ // Type: int
+ // RequirementLevel: ConditionallyRequired (If and only if one was
+ // received/sent.)
+ // Stability: stable
+ // Examples: 200
+ HTTPStatusCodeKey = attribute.Key("http.status_code")
+
+ // HTTPFlavorKey is the attribute Key conforming to the "http.flavor"
+ // semantic conventions. It represents the kind of HTTP protocol used.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Note: If `net.transport` is not specified, it can be assumed to be
+ // `IP.TCP` except if `http.flavor` is `QUIC`, in which case `IP.UDP` is
+ // assumed.
+ HTTPFlavorKey = attribute.Key("http.flavor")
+
+ // HTTPUserAgentKey is the attribute Key conforming to the
+ // "http.user_agent" semantic conventions. It represents the value of the
+ // [HTTP
+ // User-Agent](https://www.rfc-editor.org/rfc/rfc9110.html#field.user-agent)
+ // header sent by the client.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'CERN-LineMode/2.15 libwww/2.17b3'
+ HTTPUserAgentKey = attribute.Key("http.user_agent")
+
+ // HTTPRequestContentLengthKey is the attribute Key conforming to the
+ // "http.request_content_length" semantic conventions. It represents the
+ // size of the request payload body in bytes. This is the number of bytes
+ // transferred excluding headers and is often, but not always, present as
+ // the
+ // [Content-Length](https://www.rfc-editor.org/rfc/rfc9110.html#field.content-length)
+ // header. For requests using transport encoding, this should be the
+ // compressed size.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 3495
+ HTTPRequestContentLengthKey = attribute.Key("http.request_content_length")
+
+ // HTTPResponseContentLengthKey is the attribute Key conforming to the
+ // "http.response_content_length" semantic conventions. It represents the
+ // size of the response payload body in bytes. This is the number of bytes
+ // transferred excluding headers and is often, but not always, present as
+ // the
+ // [Content-Length](https://www.rfc-editor.org/rfc/rfc9110.html#field.content-length)
+ // header. For requests using transport encoding, this should be the
+ // compressed size.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 3495
+ HTTPResponseContentLengthKey = attribute.Key("http.response_content_length")
+)
+
+var (
+ // HTTP/1.0
+ HTTPFlavorHTTP10 = HTTPFlavorKey.String("1.0")
+ // HTTP/1.1
+ HTTPFlavorHTTP11 = HTTPFlavorKey.String("1.1")
+ // HTTP/2
+ HTTPFlavorHTTP20 = HTTPFlavorKey.String("2.0")
+ // HTTP/3
+ HTTPFlavorHTTP30 = HTTPFlavorKey.String("3.0")
+ // SPDY protocol
+ HTTPFlavorSPDY = HTTPFlavorKey.String("SPDY")
+ // QUIC protocol
+ HTTPFlavorQUIC = HTTPFlavorKey.String("QUIC")
+)
+
+// HTTPMethod returns an attribute KeyValue conforming to the "http.method"
+// semantic conventions. It represents the hTTP request method.
+func HTTPMethod(val string) attribute.KeyValue {
+ return HTTPMethodKey.String(val)
+}
+
+// HTTPStatusCode returns an attribute KeyValue conforming to the
+// "http.status_code" semantic conventions. It represents the [HTTP response
+// status code](https://tools.ietf.org/html/rfc7231#section-6).
+func HTTPStatusCode(val int) attribute.KeyValue {
+ return HTTPStatusCodeKey.Int(val)
+}
+
+// HTTPUserAgent returns an attribute KeyValue conforming to the
+// "http.user_agent" semantic conventions. It represents the value of the [HTTP
+// User-Agent](https://www.rfc-editor.org/rfc/rfc9110.html#field.user-agent)
+// header sent by the client.
+func HTTPUserAgent(val string) attribute.KeyValue {
+ return HTTPUserAgentKey.String(val)
+}
+
+// HTTPRequestContentLength returns an attribute KeyValue conforming to the
+// "http.request_content_length" semantic conventions. It represents the size
+// of the request payload body in bytes. This is the number of bytes
+// transferred excluding headers and is often, but not always, present as the
+// [Content-Length](https://www.rfc-editor.org/rfc/rfc9110.html#field.content-length)
+// header. For requests using transport encoding, this should be the compressed
+// size.
+func HTTPRequestContentLength(val int) attribute.KeyValue {
+ return HTTPRequestContentLengthKey.Int(val)
+}
+
+// HTTPResponseContentLength returns an attribute KeyValue conforming to the
+// "http.response_content_length" semantic conventions. It represents the size
+// of the response payload body in bytes. This is the number of bytes
+// transferred excluding headers and is often, but not always, present as the
+// [Content-Length](https://www.rfc-editor.org/rfc/rfc9110.html#field.content-length)
+// header. For requests using transport encoding, this should be the compressed
+// size.
+func HTTPResponseContentLength(val int) attribute.KeyValue {
+ return HTTPResponseContentLengthKey.Int(val)
+}
+
+// Semantic Convention for HTTP Client
+const (
+ // HTTPURLKey is the attribute Key conforming to the "http.url" semantic
+ // conventions. It represents the full HTTP request URL in the form
+ // `scheme://host[:port]/path?query[#fragment]`. Usually the fragment is
+ // not transmitted over HTTP, but if it is known, it should be included
+ // nevertheless.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'https://www.foo.bar/search?q=OpenTelemetry#SemConv'
+ // Note: `http.url` MUST NOT contain credentials passed via URL in form of
+ // `https://username:password@www.example.com/`. In such case the
+ // attribute's value should be `https://www.example.com/`.
+ HTTPURLKey = attribute.Key("http.url")
+
+ // HTTPResendCountKey is the attribute Key conforming to the
+ // "http.resend_count" semantic conventions. It represents the ordinal
+ // number of request resending attempt (for any reason, including
+ // redirects).
+ //
+ // Type: int
+ // RequirementLevel: Recommended (if and only if request was retried.)
+ // Stability: stable
+ // Examples: 3
+ // Note: The resend count SHOULD be updated each time an HTTP request gets
+ // resent by the client, regardless of what was the cause of the resending
+ // (e.g. redirection, authorization failure, 503 Server Unavailable,
+ // network issues, or any other).
+ HTTPResendCountKey = attribute.Key("http.resend_count")
+)
+
+// HTTPURL returns an attribute KeyValue conforming to the "http.url"
+// semantic conventions. It represents the full HTTP request URL in the form
+// `scheme://host[:port]/path?query[#fragment]`. Usually the fragment is not
+// transmitted over HTTP, but if it is known, it should be included
+// nevertheless.
+func HTTPURL(val string) attribute.KeyValue {
+ return HTTPURLKey.String(val)
+}
+
+// HTTPResendCount returns an attribute KeyValue conforming to the
+// "http.resend_count" semantic conventions. It represents the ordinal number
+// of request resending attempt (for any reason, including redirects).
+func HTTPResendCount(val int) attribute.KeyValue {
+ return HTTPResendCountKey.Int(val)
+}
+
+// Semantic Convention for HTTP Server
+const (
+ // HTTPSchemeKey is the attribute Key conforming to the "http.scheme"
+ // semantic conventions. It represents the URI scheme identifying the used
+ // protocol.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'http', 'https'
+ HTTPSchemeKey = attribute.Key("http.scheme")
+
+ // HTTPTargetKey is the attribute Key conforming to the "http.target"
+ // semantic conventions. It represents the full request target as passed in
+ // a HTTP request line or equivalent.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: '/path/12314/?q=ddds'
+ HTTPTargetKey = attribute.Key("http.target")
+
+ // HTTPRouteKey is the attribute Key conforming to the "http.route"
+ // semantic conventions. It represents the matched route (path template in
+ // the format used by the respective server framework). See note below
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (If and only if it's available)
+ // Stability: stable
+ // Examples: '/users/:userID?', '{controller}/{action}/{id?}'
+ // Note: MUST NOT be populated when this is not supported by the HTTP
+ // server framework as the route attribute should have low-cardinality and
+ // the URI path can NOT substitute it.
+ // SHOULD include the [application root](#http-server-definitions) if there
+ // is one.
+ HTTPRouteKey = attribute.Key("http.route")
+
+ // HTTPClientIPKey is the attribute Key conforming to the "http.client_ip"
+ // semantic conventions. It represents the IP address of the original
+ // client behind all proxies, if known (e.g. from
+ // [X-Forwarded-For](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For)).
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '83.164.160.102'
+ // Note: This is not necessarily the same as `net.sock.peer.addr`, which
+ // would
+ // identify the network-level peer, which may be a proxy.
+ //
+ // This attribute should be set when a source of information different
+ // from the one used for `net.sock.peer.addr`, is available even if that
+ // other
+ // source just confirms the same value as `net.sock.peer.addr`.
+ // Rationale: For `net.sock.peer.addr`, one typically does not know if it
+ // comes from a proxy, reverse proxy, or the actual client. Setting
+ // `http.client_ip` when it's the same as `net.sock.peer.addr` means that
+ // one is at least somewhat confident that the address is not that of
+ // the closest proxy.
+ HTTPClientIPKey = attribute.Key("http.client_ip")
+)
+
+// HTTPScheme returns an attribute KeyValue conforming to the "http.scheme"
+// semantic conventions. It represents the URI scheme identifying the used
+// protocol.
+func HTTPScheme(val string) attribute.KeyValue {
+ return HTTPSchemeKey.String(val)
+}
+
+// HTTPTarget returns an attribute KeyValue conforming to the "http.target"
+// semantic conventions. It represents the full request target as passed in a
+// HTTP request line or equivalent.
+func HTTPTarget(val string) attribute.KeyValue {
+ return HTTPTargetKey.String(val)
+}
+
+// HTTPRoute returns an attribute KeyValue conforming to the "http.route"
+// semantic conventions. It represents the matched route (path template in the
+// format used by the respective server framework). See note below
+func HTTPRoute(val string) attribute.KeyValue {
+ return HTTPRouteKey.String(val)
+}
+
+// HTTPClientIP returns an attribute KeyValue conforming to the
+// "http.client_ip" semantic conventions. It represents the IP address of the
+// original client behind all proxies, if known (e.g. from
+// [X-Forwarded-For](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For)).
+func HTTPClientIP(val string) attribute.KeyValue {
+ return HTTPClientIPKey.String(val)
+}
+
+// Attributes that exist for multiple DynamoDB request types.
+const (
+ // AWSDynamoDBTableNamesKey is the attribute Key conforming to the
+ // "aws.dynamodb.table_names" semantic conventions. It represents the keys
+ // in the `RequestItems` object field.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Users', 'Cats'
+ AWSDynamoDBTableNamesKey = attribute.Key("aws.dynamodb.table_names")
+
+ // AWSDynamoDBConsumedCapacityKey is the attribute Key conforming to the
+ // "aws.dynamodb.consumed_capacity" semantic conventions. It represents the
+ // JSON-serialized value of each item in the `ConsumedCapacity` response
+ // field.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '{ "CapacityUnits": number, "GlobalSecondaryIndexes": {
+ // "string" : { "CapacityUnits": number, "ReadCapacityUnits": number,
+ // "WriteCapacityUnits": number } }, "LocalSecondaryIndexes": { "string" :
+ // { "CapacityUnits": number, "ReadCapacityUnits": number,
+ // "WriteCapacityUnits": number } }, "ReadCapacityUnits": number, "Table":
+ // { "CapacityUnits": number, "ReadCapacityUnits": number,
+ // "WriteCapacityUnits": number }, "TableName": "string",
+ // "WriteCapacityUnits": number }'
+ AWSDynamoDBConsumedCapacityKey = attribute.Key("aws.dynamodb.consumed_capacity")
+
+ // AWSDynamoDBItemCollectionMetricsKey is the attribute Key conforming to
+ // the "aws.dynamodb.item_collection_metrics" semantic conventions. It
+ // represents the JSON-serialized value of the `ItemCollectionMetrics`
+ // response field.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '{ "string" : [ { "ItemCollectionKey": { "string" : { "B":
+ // blob, "BOOL": boolean, "BS": [ blob ], "L": [ "AttributeValue" ], "M": {
+ // "string" : "AttributeValue" }, "N": "string", "NS": [ "string" ],
+ // "NULL": boolean, "S": "string", "SS": [ "string" ] } },
+ // "SizeEstimateRangeGB": [ number ] } ] }'
+ AWSDynamoDBItemCollectionMetricsKey = attribute.Key("aws.dynamodb.item_collection_metrics")
+
+ // AWSDynamoDBProvisionedReadCapacityKey is the attribute Key conforming to
+ // the "aws.dynamodb.provisioned_read_capacity" semantic conventions. It
+ // represents the value of the `ProvisionedThroughput.ReadCapacityUnits`
+ // request parameter.
+ //
+ // Type: double
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 1.0, 2.0
+ AWSDynamoDBProvisionedReadCapacityKey = attribute.Key("aws.dynamodb.provisioned_read_capacity")
+
+ // AWSDynamoDBProvisionedWriteCapacityKey is the attribute Key conforming
+ // to the "aws.dynamodb.provisioned_write_capacity" semantic conventions.
+ // It represents the value of the
+ // `ProvisionedThroughput.WriteCapacityUnits` request parameter.
+ //
+ // Type: double
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 1.0, 2.0
+ AWSDynamoDBProvisionedWriteCapacityKey = attribute.Key("aws.dynamodb.provisioned_write_capacity")
+
+ // AWSDynamoDBConsistentReadKey is the attribute Key conforming to the
+ // "aws.dynamodb.consistent_read" semantic conventions. It represents the
+ // value of the `ConsistentRead` request parameter.
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ AWSDynamoDBConsistentReadKey = attribute.Key("aws.dynamodb.consistent_read")
+
+ // AWSDynamoDBProjectionKey is the attribute Key conforming to the
+ // "aws.dynamodb.projection" semantic conventions. It represents the value
+ // of the `ProjectionExpression` request parameter.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Title', 'Title, Price, Color', 'Title, Description,
+ // RelatedItems, ProductReviews'
+ AWSDynamoDBProjectionKey = attribute.Key("aws.dynamodb.projection")
+
+ // AWSDynamoDBLimitKey is the attribute Key conforming to the
+ // "aws.dynamodb.limit" semantic conventions. It represents the value of
+ // the `Limit` request parameter.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 10
+ AWSDynamoDBLimitKey = attribute.Key("aws.dynamodb.limit")
+
+ // AWSDynamoDBAttributesToGetKey is the attribute Key conforming to the
+ // "aws.dynamodb.attributes_to_get" semantic conventions. It represents the
+ // value of the `AttributesToGet` request parameter.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'lives', 'id'
+ AWSDynamoDBAttributesToGetKey = attribute.Key("aws.dynamodb.attributes_to_get")
+
+ // AWSDynamoDBIndexNameKey is the attribute Key conforming to the
+ // "aws.dynamodb.index_name" semantic conventions. It represents the value
+ // of the `IndexName` request parameter.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'name_to_group'
+ AWSDynamoDBIndexNameKey = attribute.Key("aws.dynamodb.index_name")
+
+ // AWSDynamoDBSelectKey is the attribute Key conforming to the
+ // "aws.dynamodb.select" semantic conventions. It represents the value of
+ // the `Select` request parameter.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'ALL_ATTRIBUTES', 'COUNT'
+ AWSDynamoDBSelectKey = attribute.Key("aws.dynamodb.select")
+)
+
+// AWSDynamoDBTableNames returns an attribute KeyValue conforming to the
+// "aws.dynamodb.table_names" semantic conventions. It represents the keys in
+// the `RequestItems` object field.
+func AWSDynamoDBTableNames(val ...string) attribute.KeyValue {
+ return AWSDynamoDBTableNamesKey.StringSlice(val)
+}
+
+// AWSDynamoDBConsumedCapacity returns an attribute KeyValue conforming to
+// the "aws.dynamodb.consumed_capacity" semantic conventions. It represents the
+// JSON-serialized value of each item in the `ConsumedCapacity` response field.
+func AWSDynamoDBConsumedCapacity(val ...string) attribute.KeyValue {
+ return AWSDynamoDBConsumedCapacityKey.StringSlice(val)
+}
+
+// AWSDynamoDBItemCollectionMetrics returns an attribute KeyValue conforming
+// to the "aws.dynamodb.item_collection_metrics" semantic conventions. It
+// represents the JSON-serialized value of the `ItemCollectionMetrics` response
+// field.
+func AWSDynamoDBItemCollectionMetrics(val string) attribute.KeyValue {
+ return AWSDynamoDBItemCollectionMetricsKey.String(val)
+}
+
+// AWSDynamoDBProvisionedReadCapacity returns an attribute KeyValue
+// conforming to the "aws.dynamodb.provisioned_read_capacity" semantic
+// conventions. It represents the value of the
+// `ProvisionedThroughput.ReadCapacityUnits` request parameter.
+func AWSDynamoDBProvisionedReadCapacity(val float64) attribute.KeyValue {
+ return AWSDynamoDBProvisionedReadCapacityKey.Float64(val)
+}
+
+// AWSDynamoDBProvisionedWriteCapacity returns an attribute KeyValue
+// conforming to the "aws.dynamodb.provisioned_write_capacity" semantic
+// conventions. It represents the value of the
+// `ProvisionedThroughput.WriteCapacityUnits` request parameter.
+func AWSDynamoDBProvisionedWriteCapacity(val float64) attribute.KeyValue {
+ return AWSDynamoDBProvisionedWriteCapacityKey.Float64(val)
+}
+
+// AWSDynamoDBConsistentRead returns an attribute KeyValue conforming to the
+// "aws.dynamodb.consistent_read" semantic conventions. It represents the value
+// of the `ConsistentRead` request parameter.
+func AWSDynamoDBConsistentRead(val bool) attribute.KeyValue {
+ return AWSDynamoDBConsistentReadKey.Bool(val)
+}
+
+// AWSDynamoDBProjection returns an attribute KeyValue conforming to the
+// "aws.dynamodb.projection" semantic conventions. It represents the value of
+// the `ProjectionExpression` request parameter.
+func AWSDynamoDBProjection(val string) attribute.KeyValue {
+ return AWSDynamoDBProjectionKey.String(val)
+}
+
+// AWSDynamoDBLimit returns an attribute KeyValue conforming to the
+// "aws.dynamodb.limit" semantic conventions. It represents the value of the
+// `Limit` request parameter.
+func AWSDynamoDBLimit(val int) attribute.KeyValue {
+ return AWSDynamoDBLimitKey.Int(val)
+}
+
+// AWSDynamoDBAttributesToGet returns an attribute KeyValue conforming to
+// the "aws.dynamodb.attributes_to_get" semantic conventions. It represents the
+// value of the `AttributesToGet` request parameter.
+func AWSDynamoDBAttributesToGet(val ...string) attribute.KeyValue {
+ return AWSDynamoDBAttributesToGetKey.StringSlice(val)
+}
+
+// AWSDynamoDBIndexName returns an attribute KeyValue conforming to the
+// "aws.dynamodb.index_name" semantic conventions. It represents the value of
+// the `IndexName` request parameter.
+func AWSDynamoDBIndexName(val string) attribute.KeyValue {
+ return AWSDynamoDBIndexNameKey.String(val)
+}
+
+// AWSDynamoDBSelect returns an attribute KeyValue conforming to the
+// "aws.dynamodb.select" semantic conventions. It represents the value of the
+// `Select` request parameter.
+func AWSDynamoDBSelect(val string) attribute.KeyValue {
+ return AWSDynamoDBSelectKey.String(val)
+}
+
+// DynamoDB.CreateTable
+const (
+ // AWSDynamoDBGlobalSecondaryIndexesKey is the attribute Key conforming to
+ // the "aws.dynamodb.global_secondary_indexes" semantic conventions. It
+ // represents the JSON-serialized value of each item of the
+ // `GlobalSecondaryIndexes` request field
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '{ "IndexName": "string", "KeySchema": [ { "AttributeName":
+ // "string", "KeyType": "string" } ], "Projection": { "NonKeyAttributes": [
+ // "string" ], "ProjectionType": "string" }, "ProvisionedThroughput": {
+ // "ReadCapacityUnits": number, "WriteCapacityUnits": number } }'
+ AWSDynamoDBGlobalSecondaryIndexesKey = attribute.Key("aws.dynamodb.global_secondary_indexes")
+
+ // AWSDynamoDBLocalSecondaryIndexesKey is the attribute Key conforming to
+ // the "aws.dynamodb.local_secondary_indexes" semantic conventions. It
+ // represents the JSON-serialized value of each item of the
+ // `LocalSecondaryIndexes` request field.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '{ "IndexARN": "string", "IndexName": "string",
+ // "IndexSizeBytes": number, "ItemCount": number, "KeySchema": [ {
+ // "AttributeName": "string", "KeyType": "string" } ], "Projection": {
+ // "NonKeyAttributes": [ "string" ], "ProjectionType": "string" } }'
+ AWSDynamoDBLocalSecondaryIndexesKey = attribute.Key("aws.dynamodb.local_secondary_indexes")
+)
+
+// AWSDynamoDBGlobalSecondaryIndexes returns an attribute KeyValue
+// conforming to the "aws.dynamodb.global_secondary_indexes" semantic
+// conventions. It represents the JSON-serialized value of each item of the
+// `GlobalSecondaryIndexes` request field
+func AWSDynamoDBGlobalSecondaryIndexes(val ...string) attribute.KeyValue {
+ return AWSDynamoDBGlobalSecondaryIndexesKey.StringSlice(val)
+}
+
+// AWSDynamoDBLocalSecondaryIndexes returns an attribute KeyValue conforming
+// to the "aws.dynamodb.local_secondary_indexes" semantic conventions. It
+// represents the JSON-serialized value of each item of the
+// `LocalSecondaryIndexes` request field.
+func AWSDynamoDBLocalSecondaryIndexes(val ...string) attribute.KeyValue {
+ return AWSDynamoDBLocalSecondaryIndexesKey.StringSlice(val)
+}
+
+// DynamoDB.ListTables
+const (
+ // AWSDynamoDBExclusiveStartTableKey is the attribute Key conforming to the
+ // "aws.dynamodb.exclusive_start_table" semantic conventions. It represents
+ // the value of the `ExclusiveStartTableName` request parameter.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Users', 'CatsTable'
+ AWSDynamoDBExclusiveStartTableKey = attribute.Key("aws.dynamodb.exclusive_start_table")
+
+ // AWSDynamoDBTableCountKey is the attribute Key conforming to the
+ // "aws.dynamodb.table_count" semantic conventions. It represents the the
+ // number of items in the `TableNames` response parameter.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 20
+ AWSDynamoDBTableCountKey = attribute.Key("aws.dynamodb.table_count")
+)
+
+// AWSDynamoDBExclusiveStartTable returns an attribute KeyValue conforming
+// to the "aws.dynamodb.exclusive_start_table" semantic conventions. It
+// represents the value of the `ExclusiveStartTableName` request parameter.
+func AWSDynamoDBExclusiveStartTable(val string) attribute.KeyValue {
+ return AWSDynamoDBExclusiveStartTableKey.String(val)
+}
+
+// AWSDynamoDBTableCount returns an attribute KeyValue conforming to the
+// "aws.dynamodb.table_count" semantic conventions. It represents the the
+// number of items in the `TableNames` response parameter.
+func AWSDynamoDBTableCount(val int) attribute.KeyValue {
+ return AWSDynamoDBTableCountKey.Int(val)
+}
+
+// DynamoDB.Query
+const (
+ // AWSDynamoDBScanForwardKey is the attribute Key conforming to the
+ // "aws.dynamodb.scan_forward" semantic conventions. It represents the
+ // value of the `ScanIndexForward` request parameter.
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ AWSDynamoDBScanForwardKey = attribute.Key("aws.dynamodb.scan_forward")
+)
+
+// AWSDynamoDBScanForward returns an attribute KeyValue conforming to the
+// "aws.dynamodb.scan_forward" semantic conventions. It represents the value of
+// the `ScanIndexForward` request parameter.
+func AWSDynamoDBScanForward(val bool) attribute.KeyValue {
+ return AWSDynamoDBScanForwardKey.Bool(val)
+}
+
+// DynamoDB.Scan
+const (
+ // AWSDynamoDBSegmentKey is the attribute Key conforming to the
+ // "aws.dynamodb.segment" semantic conventions. It represents the value of
+ // the `Segment` request parameter.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 10
+ AWSDynamoDBSegmentKey = attribute.Key("aws.dynamodb.segment")
+
+ // AWSDynamoDBTotalSegmentsKey is the attribute Key conforming to the
+ // "aws.dynamodb.total_segments" semantic conventions. It represents the
+ // value of the `TotalSegments` request parameter.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 100
+ AWSDynamoDBTotalSegmentsKey = attribute.Key("aws.dynamodb.total_segments")
+
+ // AWSDynamoDBCountKey is the attribute Key conforming to the
+ // "aws.dynamodb.count" semantic conventions. It represents the value of
+ // the `Count` response parameter.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 10
+ AWSDynamoDBCountKey = attribute.Key("aws.dynamodb.count")
+
+ // AWSDynamoDBScannedCountKey is the attribute Key conforming to the
+ // "aws.dynamodb.scanned_count" semantic conventions. It represents the
+ // value of the `ScannedCount` response parameter.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 50
+ AWSDynamoDBScannedCountKey = attribute.Key("aws.dynamodb.scanned_count")
+)
+
+// AWSDynamoDBSegment returns an attribute KeyValue conforming to the
+// "aws.dynamodb.segment" semantic conventions. It represents the value of the
+// `Segment` request parameter.
+func AWSDynamoDBSegment(val int) attribute.KeyValue {
+ return AWSDynamoDBSegmentKey.Int(val)
+}
+
+// AWSDynamoDBTotalSegments returns an attribute KeyValue conforming to the
+// "aws.dynamodb.total_segments" semantic conventions. It represents the value
+// of the `TotalSegments` request parameter.
+func AWSDynamoDBTotalSegments(val int) attribute.KeyValue {
+ return AWSDynamoDBTotalSegmentsKey.Int(val)
+}
+
+// AWSDynamoDBCount returns an attribute KeyValue conforming to the
+// "aws.dynamodb.count" semantic conventions. It represents the value of the
+// `Count` response parameter.
+func AWSDynamoDBCount(val int) attribute.KeyValue {
+ return AWSDynamoDBCountKey.Int(val)
+}
+
+// AWSDynamoDBScannedCount returns an attribute KeyValue conforming to the
+// "aws.dynamodb.scanned_count" semantic conventions. It represents the value
+// of the `ScannedCount` response parameter.
+func AWSDynamoDBScannedCount(val int) attribute.KeyValue {
+ return AWSDynamoDBScannedCountKey.Int(val)
+}
+
+// DynamoDB.UpdateTable
+const (
+ // AWSDynamoDBAttributeDefinitionsKey is the attribute Key conforming to
+ // the "aws.dynamodb.attribute_definitions" semantic conventions. It
+ // represents the JSON-serialized value of each item in the
+ // `AttributeDefinitions` request field.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '{ "AttributeName": "string", "AttributeType": "string" }'
+ AWSDynamoDBAttributeDefinitionsKey = attribute.Key("aws.dynamodb.attribute_definitions")
+
+ // AWSDynamoDBGlobalSecondaryIndexUpdatesKey is the attribute Key
+ // conforming to the "aws.dynamodb.global_secondary_index_updates" semantic
+ // conventions. It represents the JSON-serialized value of each item in the
+ // the `GlobalSecondaryIndexUpdates` request field.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '{ "Create": { "IndexName": "string", "KeySchema": [ {
+ // "AttributeName": "string", "KeyType": "string" } ], "Projection": {
+ // "NonKeyAttributes": [ "string" ], "ProjectionType": "string" },
+ // "ProvisionedThroughput": { "ReadCapacityUnits": number,
+ // "WriteCapacityUnits": number } }'
+ AWSDynamoDBGlobalSecondaryIndexUpdatesKey = attribute.Key("aws.dynamodb.global_secondary_index_updates")
+)
+
+// AWSDynamoDBAttributeDefinitions returns an attribute KeyValue conforming
+// to the "aws.dynamodb.attribute_definitions" semantic conventions. It
+// represents the JSON-serialized value of each item in the
+// `AttributeDefinitions` request field.
+func AWSDynamoDBAttributeDefinitions(val ...string) attribute.KeyValue {
+ return AWSDynamoDBAttributeDefinitionsKey.StringSlice(val)
+}
+
+// AWSDynamoDBGlobalSecondaryIndexUpdates returns an attribute KeyValue
+// conforming to the "aws.dynamodb.global_secondary_index_updates" semantic
+// conventions. It represents the JSON-serialized value of each item in the the
+// `GlobalSecondaryIndexUpdates` request field.
+func AWSDynamoDBGlobalSecondaryIndexUpdates(val ...string) attribute.KeyValue {
+ return AWSDynamoDBGlobalSecondaryIndexUpdatesKey.StringSlice(val)
+}
+
+// Semantic conventions to apply when instrumenting the GraphQL implementation.
+// They map GraphQL operations to attributes on a Span.
+const (
+ // GraphqlOperationNameKey is the attribute Key conforming to the
+ // "graphql.operation.name" semantic conventions. It represents the name of
+ // the operation being executed.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'findBookByID'
+ GraphqlOperationNameKey = attribute.Key("graphql.operation.name")
+
+ // GraphqlOperationTypeKey is the attribute Key conforming to the
+ // "graphql.operation.type" semantic conventions. It represents the type of
+ // the operation being executed.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'query', 'mutation', 'subscription'
+ GraphqlOperationTypeKey = attribute.Key("graphql.operation.type")
+
+ // GraphqlDocumentKey is the attribute Key conforming to the
+ // "graphql.document" semantic conventions. It represents the GraphQL
+ // document being executed.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'query findBookByID { bookByID(id: ?) { name } }'
+ // Note: The value may be sanitized to exclude sensitive information.
+ GraphqlDocumentKey = attribute.Key("graphql.document")
+)
+
+var (
+ // GraphQL query
+ GraphqlOperationTypeQuery = GraphqlOperationTypeKey.String("query")
+ // GraphQL mutation
+ GraphqlOperationTypeMutation = GraphqlOperationTypeKey.String("mutation")
+ // GraphQL subscription
+ GraphqlOperationTypeSubscription = GraphqlOperationTypeKey.String("subscription")
+)
+
+// GraphqlOperationName returns an attribute KeyValue conforming to the
+// "graphql.operation.name" semantic conventions. It represents the name of the
+// operation being executed.
+func GraphqlOperationName(val string) attribute.KeyValue {
+ return GraphqlOperationNameKey.String(val)
+}
+
+// GraphqlDocument returns an attribute KeyValue conforming to the
+// "graphql.document" semantic conventions. It represents the GraphQL document
+// being executed.
+func GraphqlDocument(val string) attribute.KeyValue {
+ return GraphqlDocumentKey.String(val)
+}
+
+// Semantic convention describing per-message attributes populated on messaging
+// spans or links.
+const (
+ // MessagingMessageIDKey is the attribute Key conforming to the
+ // "messaging.message.id" semantic conventions. It represents a value used
+ // by the messaging system as an identifier for the message, represented as
+ // a string.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '452a7c7c7c7048c2f887f61572b18fc2'
+ MessagingMessageIDKey = attribute.Key("messaging.message.id")
+
+ // MessagingMessageConversationIDKey is the attribute Key conforming to the
+ // "messaging.message.conversation_id" semantic conventions. It represents
+ // the [conversation ID](#conversations) identifying the conversation to
+ // which the message belongs, represented as a string. Sometimes called
+ // "Correlation ID".
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'MyConversationID'
+ MessagingMessageConversationIDKey = attribute.Key("messaging.message.conversation_id")
+
+ // MessagingMessagePayloadSizeBytesKey is the attribute Key conforming to
+ // the "messaging.message.payload_size_bytes" semantic conventions. It
+ // represents the (uncompressed) size of the message payload in bytes. Also
+ // use this attribute if it is unknown whether the compressed or
+ // uncompressed payload size is reported.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 2738
+ MessagingMessagePayloadSizeBytesKey = attribute.Key("messaging.message.payload_size_bytes")
+
+ // MessagingMessagePayloadCompressedSizeBytesKey is the attribute Key
+ // conforming to the "messaging.message.payload_compressed_size_bytes"
+ // semantic conventions. It represents the compressed size of the message
+ // payload in bytes.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 2048
+ MessagingMessagePayloadCompressedSizeBytesKey = attribute.Key("messaging.message.payload_compressed_size_bytes")
+)
+
+// MessagingMessageID returns an attribute KeyValue conforming to the
+// "messaging.message.id" semantic conventions. It represents a value used by
+// the messaging system as an identifier for the message, represented as a
+// string.
+func MessagingMessageID(val string) attribute.KeyValue {
+ return MessagingMessageIDKey.String(val)
+}
+
+// MessagingMessageConversationID returns an attribute KeyValue conforming
+// to the "messaging.message.conversation_id" semantic conventions. It
+// represents the [conversation ID](#conversations) identifying the
+// conversation to which the message belongs, represented as a string.
+// Sometimes called "Correlation ID".
+func MessagingMessageConversationID(val string) attribute.KeyValue {
+ return MessagingMessageConversationIDKey.String(val)
+}
+
+// MessagingMessagePayloadSizeBytes returns an attribute KeyValue conforming
+// to the "messaging.message.payload_size_bytes" semantic conventions. It
+// represents the (uncompressed) size of the message payload in bytes. Also use
+// this attribute if it is unknown whether the compressed or uncompressed
+// payload size is reported.
+func MessagingMessagePayloadSizeBytes(val int) attribute.KeyValue {
+ return MessagingMessagePayloadSizeBytesKey.Int(val)
+}
+
+// MessagingMessagePayloadCompressedSizeBytes returns an attribute KeyValue
+// conforming to the "messaging.message.payload_compressed_size_bytes" semantic
+// conventions. It represents the compressed size of the message payload in
+// bytes.
+func MessagingMessagePayloadCompressedSizeBytes(val int) attribute.KeyValue {
+ return MessagingMessagePayloadCompressedSizeBytesKey.Int(val)
+}
+
+// Semantic convention for attributes that describe messaging destination on
+// broker
+const (
+ // MessagingDestinationNameKey is the attribute Key conforming to the
+ // "messaging.destination.name" semantic conventions. It represents the
+ // message destination name
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'MyQueue', 'MyTopic'
+ // Note: Destination name SHOULD uniquely identify a specific queue, topic
+ // or other entity within the broker. If
+ // the broker does not have such notion, the destination name SHOULD
+ // uniquely identify the broker.
+ MessagingDestinationNameKey = attribute.Key("messaging.destination.name")
+
+ // MessagingDestinationKindKey is the attribute Key conforming to the
+ // "messaging.destination.kind" semantic conventions. It represents the
+ // kind of message destination
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingDestinationKindKey = attribute.Key("messaging.destination.kind")
+
+ // MessagingDestinationTemplateKey is the attribute Key conforming to the
+ // "messaging.destination.template" semantic conventions. It represents the
+ // low cardinality representation of the messaging destination name
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '/customers/{customerID}'
+ // Note: Destination names could be constructed from templates. An example
+ // would be a destination name involving a user name or product id.
+ // Although the destination name in this case is of high cardinality, the
+ // underlying template is of low cardinality and can be effectively used
+ // for grouping and aggregation.
+ MessagingDestinationTemplateKey = attribute.Key("messaging.destination.template")
+
+ // MessagingDestinationTemporaryKey is the attribute Key conforming to the
+ // "messaging.destination.temporary" semantic conventions. It represents a
+ // boolean that is true if the message destination is temporary and might
+ // not exist anymore after messages are processed.
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingDestinationTemporaryKey = attribute.Key("messaging.destination.temporary")
+
+ // MessagingDestinationAnonymousKey is the attribute Key conforming to the
+ // "messaging.destination.anonymous" semantic conventions. It represents a
+ // boolean that is true if the message destination is anonymous (could be
+ // unnamed or have auto-generated name).
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingDestinationAnonymousKey = attribute.Key("messaging.destination.anonymous")
+)
+
+var (
+ // A message sent to a queue
+ MessagingDestinationKindQueue = MessagingDestinationKindKey.String("queue")
+ // A message sent to a topic
+ MessagingDestinationKindTopic = MessagingDestinationKindKey.String("topic")
+)
+
+// MessagingDestinationName returns an attribute KeyValue conforming to the
+// "messaging.destination.name" semantic conventions. It represents the message
+// destination name
+func MessagingDestinationName(val string) attribute.KeyValue {
+ return MessagingDestinationNameKey.String(val)
+}
+
+// MessagingDestinationTemplate returns an attribute KeyValue conforming to
+// the "messaging.destination.template" semantic conventions. It represents the
+// low cardinality representation of the messaging destination name
+func MessagingDestinationTemplate(val string) attribute.KeyValue {
+ return MessagingDestinationTemplateKey.String(val)
+}
+
+// MessagingDestinationTemporary returns an attribute KeyValue conforming to
+// the "messaging.destination.temporary" semantic conventions. It represents a
+// boolean that is true if the message destination is temporary and might not
+// exist anymore after messages are processed.
+func MessagingDestinationTemporary(val bool) attribute.KeyValue {
+ return MessagingDestinationTemporaryKey.Bool(val)
+}
+
+// MessagingDestinationAnonymous returns an attribute KeyValue conforming to
+// the "messaging.destination.anonymous" semantic conventions. It represents a
+// boolean that is true if the message destination is anonymous (could be
+// unnamed or have auto-generated name).
+func MessagingDestinationAnonymous(val bool) attribute.KeyValue {
+ return MessagingDestinationAnonymousKey.Bool(val)
+}
+
+// Semantic convention for attributes that describe messaging source on broker
+const (
+ // MessagingSourceNameKey is the attribute Key conforming to the
+ // "messaging.source.name" semantic conventions. It represents the message
+ // source name
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'MyQueue', 'MyTopic'
+ // Note: Source name SHOULD uniquely identify a specific queue, topic, or
+ // other entity within the broker. If
+ // the broker does not have such notion, the source name SHOULD uniquely
+ // identify the broker.
+ MessagingSourceNameKey = attribute.Key("messaging.source.name")
+
+ // MessagingSourceKindKey is the attribute Key conforming to the
+ // "messaging.source.kind" semantic conventions. It represents the kind of
+ // message source
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingSourceKindKey = attribute.Key("messaging.source.kind")
+
+ // MessagingSourceTemplateKey is the attribute Key conforming to the
+ // "messaging.source.template" semantic conventions. It represents the low
+ // cardinality representation of the messaging source name
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '/customers/{customerID}'
+ // Note: Source names could be constructed from templates. An example would
+ // be a source name involving a user name or product id. Although the
+ // source name in this case is of high cardinality, the underlying template
+ // is of low cardinality and can be effectively used for grouping and
+ // aggregation.
+ MessagingSourceTemplateKey = attribute.Key("messaging.source.template")
+
+ // MessagingSourceTemporaryKey is the attribute Key conforming to the
+ // "messaging.source.temporary" semantic conventions. It represents a
+ // boolean that is true if the message source is temporary and might not
+ // exist anymore after messages are processed.
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingSourceTemporaryKey = attribute.Key("messaging.source.temporary")
+
+ // MessagingSourceAnonymousKey is the attribute Key conforming to the
+ // "messaging.source.anonymous" semantic conventions. It represents a
+ // boolean that is true if the message source is anonymous (could be
+ // unnamed or have auto-generated name).
+ //
+ // Type: boolean
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingSourceAnonymousKey = attribute.Key("messaging.source.anonymous")
+)
+
+var (
+ // A message received from a queue
+ MessagingSourceKindQueue = MessagingSourceKindKey.String("queue")
+ // A message received from a topic
+ MessagingSourceKindTopic = MessagingSourceKindKey.String("topic")
+)
+
+// MessagingSourceName returns an attribute KeyValue conforming to the
+// "messaging.source.name" semantic conventions. It represents the message
+// source name
+func MessagingSourceName(val string) attribute.KeyValue {
+ return MessagingSourceNameKey.String(val)
+}
+
+// MessagingSourceTemplate returns an attribute KeyValue conforming to the
+// "messaging.source.template" semantic conventions. It represents the low
+// cardinality representation of the messaging source name
+func MessagingSourceTemplate(val string) attribute.KeyValue {
+ return MessagingSourceTemplateKey.String(val)
+}
+
+// MessagingSourceTemporary returns an attribute KeyValue conforming to the
+// "messaging.source.temporary" semantic conventions. It represents a boolean
+// that is true if the message source is temporary and might not exist anymore
+// after messages are processed.
+func MessagingSourceTemporary(val bool) attribute.KeyValue {
+ return MessagingSourceTemporaryKey.Bool(val)
+}
+
+// MessagingSourceAnonymous returns an attribute KeyValue conforming to the
+// "messaging.source.anonymous" semantic conventions. It represents a boolean
+// that is true if the message source is anonymous (could be unnamed or have
+// auto-generated name).
+func MessagingSourceAnonymous(val bool) attribute.KeyValue {
+ return MessagingSourceAnonymousKey.Bool(val)
+}
+
+// General attributes used in messaging systems.
+const (
+ // MessagingSystemKey is the attribute Key conforming to the
+ // "messaging.system" semantic conventions. It represents a string
+ // identifying the messaging system.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'kafka', 'rabbitmq', 'rocketmq', 'activemq', 'AmazonSQS'
+ MessagingSystemKey = attribute.Key("messaging.system")
+
+ // MessagingOperationKey is the attribute Key conforming to the
+ // "messaging.operation" semantic conventions. It represents a string
+ // identifying the kind of messaging operation as defined in the [Operation
+ // names](#operation-names) section above.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ // Note: If a custom value is used, it MUST be of low cardinality.
+ MessagingOperationKey = attribute.Key("messaging.operation")
+
+ // MessagingBatchMessageCountKey is the attribute Key conforming to the
+ // "messaging.batch.message_count" semantic conventions. It represents the
+ // number of messages sent, received, or processed in the scope of the
+ // batching operation.
+ //
+ // Type: int
+ // RequirementLevel: ConditionallyRequired (If the span describes an
+ // operation on a batch of messages.)
+ // Stability: stable
+ // Examples: 0, 1, 2
+ // Note: Instrumentations SHOULD NOT set `messaging.batch.message_count` on
+ // spans that operate with a single message. When a messaging client
+ // library supports both batch and single-message API for the same
+ // operation, instrumentations SHOULD use `messaging.batch.message_count`
+ // for batching APIs and SHOULD NOT use it for single-message APIs.
+ MessagingBatchMessageCountKey = attribute.Key("messaging.batch.message_count")
+)
+
+var (
+ // publish
+ MessagingOperationPublish = MessagingOperationKey.String("publish")
+ // receive
+ MessagingOperationReceive = MessagingOperationKey.String("receive")
+ // process
+ MessagingOperationProcess = MessagingOperationKey.String("process")
+)
+
+// MessagingSystem returns an attribute KeyValue conforming to the
+// "messaging.system" semantic conventions. It represents a string identifying
+// the messaging system.
+func MessagingSystem(val string) attribute.KeyValue {
+ return MessagingSystemKey.String(val)
+}
+
+// MessagingBatchMessageCount returns an attribute KeyValue conforming to
+// the "messaging.batch.message_count" semantic conventions. It represents the
+// number of messages sent, received, or processed in the scope of the batching
+// operation.
+func MessagingBatchMessageCount(val int) attribute.KeyValue {
+ return MessagingBatchMessageCountKey.Int(val)
+}
+
+// Semantic convention for a consumer of messages received from a messaging
+// system
+const (
+ // MessagingConsumerIDKey is the attribute Key conforming to the
+ // "messaging.consumer.id" semantic conventions. It represents the
+ // identifier for the consumer receiving a message. For Kafka, set it to
+ // `{messaging.kafka.consumer.group} - {messaging.kafka.client_id}`, if
+ // both are present, or only `messaging.kafka.consumer.group`. For brokers,
+ // such as RabbitMQ and Artemis, set it to the `client_id` of the client
+ // consuming the message.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'mygroup - client-6'
+ MessagingConsumerIDKey = attribute.Key("messaging.consumer.id")
+)
+
+// MessagingConsumerID returns an attribute KeyValue conforming to the
+// "messaging.consumer.id" semantic conventions. It represents the identifier
+// for the consumer receiving a message. For Kafka, set it to
+// `{messaging.kafka.consumer.group} - {messaging.kafka.client_id}`, if both
+// are present, or only `messaging.kafka.consumer.group`. For brokers, such as
+// RabbitMQ and Artemis, set it to the `client_id` of the client consuming the
+// message.
+func MessagingConsumerID(val string) attribute.KeyValue {
+ return MessagingConsumerIDKey.String(val)
+}
+
+// Attributes for RabbitMQ
+const (
+ // MessagingRabbitmqDestinationRoutingKeyKey is the attribute Key
+ // conforming to the "messaging.rabbitmq.destination.routing_key" semantic
+ // conventions. It represents the rabbitMQ message routing key.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (If not empty.)
+ // Stability: stable
+ // Examples: 'myKey'
+ MessagingRabbitmqDestinationRoutingKeyKey = attribute.Key("messaging.rabbitmq.destination.routing_key")
+)
+
+// MessagingRabbitmqDestinationRoutingKey returns an attribute KeyValue
+// conforming to the "messaging.rabbitmq.destination.routing_key" semantic
+// conventions. It represents the rabbitMQ message routing key.
+func MessagingRabbitmqDestinationRoutingKey(val string) attribute.KeyValue {
+ return MessagingRabbitmqDestinationRoutingKeyKey.String(val)
+}
+
+// Attributes for Apache Kafka
+const (
+ // MessagingKafkaMessageKeyKey is the attribute Key conforming to the
+ // "messaging.kafka.message.key" semantic conventions. It represents the
+ // message keys in Kafka are used for grouping alike messages to ensure
+ // they're processed on the same partition. They differ from
+ // `messaging.message.id` in that they're not unique. If the key is `null`,
+ // the attribute MUST NOT be set.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'myKey'
+ // Note: If the key type is not string, it's string representation has to
+ // be supplied for the attribute. If the key has no unambiguous, canonical
+ // string form, don't include its value.
+ MessagingKafkaMessageKeyKey = attribute.Key("messaging.kafka.message.key")
+
+ // MessagingKafkaConsumerGroupKey is the attribute Key conforming to the
+ // "messaging.kafka.consumer.group" semantic conventions. It represents the
+ // name of the Kafka Consumer Group that is handling the message. Only
+ // applies to consumers, not producers.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'my-group'
+ MessagingKafkaConsumerGroupKey = attribute.Key("messaging.kafka.consumer.group")
+
+ // MessagingKafkaClientIDKey is the attribute Key conforming to the
+ // "messaging.kafka.client_id" semantic conventions. It represents the
+ // client ID for the Consumer or Producer that is handling the message.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'client-5'
+ MessagingKafkaClientIDKey = attribute.Key("messaging.kafka.client_id")
+
+ // MessagingKafkaDestinationPartitionKey is the attribute Key conforming to
+ // the "messaging.kafka.destination.partition" semantic conventions. It
+ // represents the partition the message is sent to.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 2
+ MessagingKafkaDestinationPartitionKey = attribute.Key("messaging.kafka.destination.partition")
+
+ // MessagingKafkaSourcePartitionKey is the attribute Key conforming to the
+ // "messaging.kafka.source.partition" semantic conventions. It represents
+ // the partition the message is received from.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 2
+ MessagingKafkaSourcePartitionKey = attribute.Key("messaging.kafka.source.partition")
+
+ // MessagingKafkaMessageOffsetKey is the attribute Key conforming to the
+ // "messaging.kafka.message.offset" semantic conventions. It represents the
+ // offset of a record in the corresponding Kafka partition.
+ //
+ // Type: int
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 42
+ MessagingKafkaMessageOffsetKey = attribute.Key("messaging.kafka.message.offset")
+
+ // MessagingKafkaMessageTombstoneKey is the attribute Key conforming to the
+ // "messaging.kafka.message.tombstone" semantic conventions. It represents
+ // a boolean that is true if the message is a tombstone.
+ //
+ // Type: boolean
+ // RequirementLevel: ConditionallyRequired (If value is `true`. When
+ // missing, the value is assumed to be `false`.)
+ // Stability: stable
+ MessagingKafkaMessageTombstoneKey = attribute.Key("messaging.kafka.message.tombstone")
+)
+
+// MessagingKafkaMessageKey returns an attribute KeyValue conforming to the
+// "messaging.kafka.message.key" semantic conventions. It represents the
+// message keys in Kafka are used for grouping alike messages to ensure they're
+// processed on the same partition. They differ from `messaging.message.id` in
+// that they're not unique. If the key is `null`, the attribute MUST NOT be
+// set.
+func MessagingKafkaMessageKey(val string) attribute.KeyValue {
+ return MessagingKafkaMessageKeyKey.String(val)
+}
+
+// MessagingKafkaConsumerGroup returns an attribute KeyValue conforming to
+// the "messaging.kafka.consumer.group" semantic conventions. It represents the
+// name of the Kafka Consumer Group that is handling the message. Only applies
+// to consumers, not producers.
+func MessagingKafkaConsumerGroup(val string) attribute.KeyValue {
+ return MessagingKafkaConsumerGroupKey.String(val)
+}
+
+// MessagingKafkaClientID returns an attribute KeyValue conforming to the
+// "messaging.kafka.client_id" semantic conventions. It represents the client
+// ID for the Consumer or Producer that is handling the message.
+func MessagingKafkaClientID(val string) attribute.KeyValue {
+ return MessagingKafkaClientIDKey.String(val)
+}
+
+// MessagingKafkaDestinationPartition returns an attribute KeyValue
+// conforming to the "messaging.kafka.destination.partition" semantic
+// conventions. It represents the partition the message is sent to.
+func MessagingKafkaDestinationPartition(val int) attribute.KeyValue {
+ return MessagingKafkaDestinationPartitionKey.Int(val)
+}
+
+// MessagingKafkaSourcePartition returns an attribute KeyValue conforming to
+// the "messaging.kafka.source.partition" semantic conventions. It represents
+// the partition the message is received from.
+func MessagingKafkaSourcePartition(val int) attribute.KeyValue {
+ return MessagingKafkaSourcePartitionKey.Int(val)
+}
+
+// MessagingKafkaMessageOffset returns an attribute KeyValue conforming to
+// the "messaging.kafka.message.offset" semantic conventions. It represents the
+// offset of a record in the corresponding Kafka partition.
+func MessagingKafkaMessageOffset(val int) attribute.KeyValue {
+ return MessagingKafkaMessageOffsetKey.Int(val)
+}
+
+// MessagingKafkaMessageTombstone returns an attribute KeyValue conforming
+// to the "messaging.kafka.message.tombstone" semantic conventions. It
+// represents a boolean that is true if the message is a tombstone.
+func MessagingKafkaMessageTombstone(val bool) attribute.KeyValue {
+ return MessagingKafkaMessageTombstoneKey.Bool(val)
+}
+
+// Attributes for Apache RocketMQ
+const (
+ // MessagingRocketmqNamespaceKey is the attribute Key conforming to the
+ // "messaging.rocketmq.namespace" semantic conventions. It represents the
+ // namespace of RocketMQ resources, resources in different namespaces are
+ // individual.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'myNamespace'
+ MessagingRocketmqNamespaceKey = attribute.Key("messaging.rocketmq.namespace")
+
+ // MessagingRocketmqClientGroupKey is the attribute Key conforming to the
+ // "messaging.rocketmq.client_group" semantic conventions. It represents
+ // the name of the RocketMQ producer/consumer group that is handling the
+ // message. The client type is identified by the SpanKind.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'myConsumerGroup'
+ MessagingRocketmqClientGroupKey = attribute.Key("messaging.rocketmq.client_group")
+
+ // MessagingRocketmqClientIDKey is the attribute Key conforming to the
+ // "messaging.rocketmq.client_id" semantic conventions. It represents the
+ // unique identifier for each client.
+ //
+ // Type: string
+ // RequirementLevel: Required
+ // Stability: stable
+ // Examples: 'myhost@8742@s8083jm'
+ MessagingRocketmqClientIDKey = attribute.Key("messaging.rocketmq.client_id")
+
+ // MessagingRocketmqMessageDeliveryTimestampKey is the attribute Key
+ // conforming to the "messaging.rocketmq.message.delivery_timestamp"
+ // semantic conventions. It represents the timestamp in milliseconds that
+ // the delay message is expected to be delivered to consumer.
+ //
+ // Type: int
+ // RequirementLevel: ConditionallyRequired (If the message type is delay
+ // and delay time level is not specified.)
+ // Stability: stable
+ // Examples: 1665987217045
+ MessagingRocketmqMessageDeliveryTimestampKey = attribute.Key("messaging.rocketmq.message.delivery_timestamp")
+
+ // MessagingRocketmqMessageDelayTimeLevelKey is the attribute Key
+ // conforming to the "messaging.rocketmq.message.delay_time_level" semantic
+ // conventions. It represents the delay time level for delay message, which
+ // determines the message delay time.
+ //
+ // Type: int
+ // RequirementLevel: ConditionallyRequired (If the message type is delay
+ // and delivery timestamp is not specified.)
+ // Stability: stable
+ // Examples: 3
+ MessagingRocketmqMessageDelayTimeLevelKey = attribute.Key("messaging.rocketmq.message.delay_time_level")
+
+ // MessagingRocketmqMessageGroupKey is the attribute Key conforming to the
+ // "messaging.rocketmq.message.group" semantic conventions. It represents
+ // the it is essential for FIFO message. Messages that belong to the same
+ // message group are always processed one by one within the same consumer
+ // group.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (If the message type is FIFO.)
+ // Stability: stable
+ // Examples: 'myMessageGroup'
+ MessagingRocketmqMessageGroupKey = attribute.Key("messaging.rocketmq.message.group")
+
+ // MessagingRocketmqMessageTypeKey is the attribute Key conforming to the
+ // "messaging.rocketmq.message.type" semantic conventions. It represents
+ // the type of message.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingRocketmqMessageTypeKey = attribute.Key("messaging.rocketmq.message.type")
+
+ // MessagingRocketmqMessageTagKey is the attribute Key conforming to the
+ // "messaging.rocketmq.message.tag" semantic conventions. It represents the
+ // secondary classifier of message besides topic.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'tagA'
+ MessagingRocketmqMessageTagKey = attribute.Key("messaging.rocketmq.message.tag")
+
+ // MessagingRocketmqMessageKeysKey is the attribute Key conforming to the
+ // "messaging.rocketmq.message.keys" semantic conventions. It represents
+ // the key(s) of message, another way to mark message besides message id.
+ //
+ // Type: string[]
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'keyA', 'keyB'
+ MessagingRocketmqMessageKeysKey = attribute.Key("messaging.rocketmq.message.keys")
+
+ // MessagingRocketmqConsumptionModelKey is the attribute Key conforming to
+ // the "messaging.rocketmq.consumption_model" semantic conventions. It
+ // represents the model of message consumption. This only applies to
+ // consumer spans.
+ //
+ // Type: Enum
+ // RequirementLevel: Optional
+ // Stability: stable
+ MessagingRocketmqConsumptionModelKey = attribute.Key("messaging.rocketmq.consumption_model")
+)
+
+var (
+ // Normal message
+ MessagingRocketmqMessageTypeNormal = MessagingRocketmqMessageTypeKey.String("normal")
+ // FIFO message
+ MessagingRocketmqMessageTypeFifo = MessagingRocketmqMessageTypeKey.String("fifo")
+ // Delay message
+ MessagingRocketmqMessageTypeDelay = MessagingRocketmqMessageTypeKey.String("delay")
+ // Transaction message
+ MessagingRocketmqMessageTypeTransaction = MessagingRocketmqMessageTypeKey.String("transaction")
+)
+
+var (
+ // Clustering consumption model
+ MessagingRocketmqConsumptionModelClustering = MessagingRocketmqConsumptionModelKey.String("clustering")
+ // Broadcasting consumption model
+ MessagingRocketmqConsumptionModelBroadcasting = MessagingRocketmqConsumptionModelKey.String("broadcasting")
+)
+
+// MessagingRocketmqNamespace returns an attribute KeyValue conforming to
+// the "messaging.rocketmq.namespace" semantic conventions. It represents the
+// namespace of RocketMQ resources, resources in different namespaces are
+// individual.
+func MessagingRocketmqNamespace(val string) attribute.KeyValue {
+ return MessagingRocketmqNamespaceKey.String(val)
+}
+
+// MessagingRocketmqClientGroup returns an attribute KeyValue conforming to
+// the "messaging.rocketmq.client_group" semantic conventions. It represents
+// the name of the RocketMQ producer/consumer group that is handling the
+// message. The client type is identified by the SpanKind.
+func MessagingRocketmqClientGroup(val string) attribute.KeyValue {
+ return MessagingRocketmqClientGroupKey.String(val)
+}
+
+// MessagingRocketmqClientID returns an attribute KeyValue conforming to the
+// "messaging.rocketmq.client_id" semantic conventions. It represents the
+// unique identifier for each client.
+func MessagingRocketmqClientID(val string) attribute.KeyValue {
+ return MessagingRocketmqClientIDKey.String(val)
+}
+
+// MessagingRocketmqMessageDeliveryTimestamp returns an attribute KeyValue
+// conforming to the "messaging.rocketmq.message.delivery_timestamp" semantic
+// conventions. It represents the timestamp in milliseconds that the delay
+// message is expected to be delivered to consumer.
+func MessagingRocketmqMessageDeliveryTimestamp(val int) attribute.KeyValue {
+ return MessagingRocketmqMessageDeliveryTimestampKey.Int(val)
+}
+
+// MessagingRocketmqMessageDelayTimeLevel returns an attribute KeyValue
+// conforming to the "messaging.rocketmq.message.delay_time_level" semantic
+// conventions. It represents the delay time level for delay message, which
+// determines the message delay time.
+func MessagingRocketmqMessageDelayTimeLevel(val int) attribute.KeyValue {
+ return MessagingRocketmqMessageDelayTimeLevelKey.Int(val)
+}
+
+// MessagingRocketmqMessageGroup returns an attribute KeyValue conforming to
+// the "messaging.rocketmq.message.group" semantic conventions. It represents
+// the it is essential for FIFO message. Messages that belong to the same
+// message group are always processed one by one within the same consumer
+// group.
+func MessagingRocketmqMessageGroup(val string) attribute.KeyValue {
+ return MessagingRocketmqMessageGroupKey.String(val)
+}
+
+// MessagingRocketmqMessageTag returns an attribute KeyValue conforming to
+// the "messaging.rocketmq.message.tag" semantic conventions. It represents the
+// secondary classifier of message besides topic.
+func MessagingRocketmqMessageTag(val string) attribute.KeyValue {
+ return MessagingRocketmqMessageTagKey.String(val)
+}
+
+// MessagingRocketmqMessageKeys returns an attribute KeyValue conforming to
+// the "messaging.rocketmq.message.keys" semantic conventions. It represents
+// the key(s) of message, another way to mark message besides message id.
+func MessagingRocketmqMessageKeys(val ...string) attribute.KeyValue {
+ return MessagingRocketmqMessageKeysKey.StringSlice(val)
+}
+
+// Semantic conventions for remote procedure calls.
+const (
+ // RPCSystemKey is the attribute Key conforming to the "rpc.system"
+ // semantic conventions. It represents a string identifying the remoting
+ // system. See below for a list of well-known identifiers.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ RPCSystemKey = attribute.Key("rpc.system")
+
+ // RPCServiceKey is the attribute Key conforming to the "rpc.service"
+ // semantic conventions. It represents the full (logical) name of the
+ // service being called, including its package name, if applicable.
+ //
+ // Type: string
+ // RequirementLevel: Recommended
+ // Stability: stable
+ // Examples: 'myservice.EchoService'
+ // Note: This is the logical name of the service from the RPC interface
+ // perspective, which can be different from the name of any implementing
+ // class. The `code.namespace` attribute may be used to store the latter
+ // (despite the attribute name, it may include a class name; e.g., class
+ // with method actually executing the call on the server side, RPC client
+ // stub class on the client side).
+ RPCServiceKey = attribute.Key("rpc.service")
+
+ // RPCMethodKey is the attribute Key conforming to the "rpc.method"
+ // semantic conventions. It represents the name of the (logical) method
+ // being called, must be equal to the $method part in the span name.
+ //
+ // Type: string
+ // RequirementLevel: Recommended
+ // Stability: stable
+ // Examples: 'exampleMethod'
+ // Note: This is the logical name of the method from the RPC interface
+ // perspective, which can be different from the name of any implementing
+ // method/function. The `code.function` attribute may be used to store the
+ // latter (e.g., method actually executing the call on the server side, RPC
+ // client stub method on the client side).
+ RPCMethodKey = attribute.Key("rpc.method")
+)
+
+var (
+ // gRPC
+ RPCSystemGRPC = RPCSystemKey.String("grpc")
+ // Java RMI
+ RPCSystemJavaRmi = RPCSystemKey.String("java_rmi")
+ // .NET WCF
+ RPCSystemDotnetWcf = RPCSystemKey.String("dotnet_wcf")
+ // Apache Dubbo
+ RPCSystemApacheDubbo = RPCSystemKey.String("apache_dubbo")
+)
+
+// RPCService returns an attribute KeyValue conforming to the "rpc.service"
+// semantic conventions. It represents the full (logical) name of the service
+// being called, including its package name, if applicable.
+func RPCService(val string) attribute.KeyValue {
+ return RPCServiceKey.String(val)
+}
+
+// RPCMethod returns an attribute KeyValue conforming to the "rpc.method"
+// semantic conventions. It represents the name of the (logical) method being
+// called, must be equal to the $method part in the span name.
+func RPCMethod(val string) attribute.KeyValue {
+ return RPCMethodKey.String(val)
+}
+
+// Tech-specific attributes for gRPC.
+const (
+ // RPCGRPCStatusCodeKey is the attribute Key conforming to the
+ // "rpc.grpc.status_code" semantic conventions. It represents the [numeric
+ // status
+ // code](https://github.com/grpc/grpc/blob/v1.33.2/doc/statuscodes.md) of
+ // the gRPC request.
+ //
+ // Type: Enum
+ // RequirementLevel: Required
+ // Stability: stable
+ RPCGRPCStatusCodeKey = attribute.Key("rpc.grpc.status_code")
+)
+
+var (
+ // OK
+ RPCGRPCStatusCodeOk = RPCGRPCStatusCodeKey.Int(0)
+ // CANCELLED
+ RPCGRPCStatusCodeCancelled = RPCGRPCStatusCodeKey.Int(1)
+ // UNKNOWN
+ RPCGRPCStatusCodeUnknown = RPCGRPCStatusCodeKey.Int(2)
+ // INVALID_ARGUMENT
+ RPCGRPCStatusCodeInvalidArgument = RPCGRPCStatusCodeKey.Int(3)
+ // DEADLINE_EXCEEDED
+ RPCGRPCStatusCodeDeadlineExceeded = RPCGRPCStatusCodeKey.Int(4)
+ // NOT_FOUND
+ RPCGRPCStatusCodeNotFound = RPCGRPCStatusCodeKey.Int(5)
+ // ALREADY_EXISTS
+ RPCGRPCStatusCodeAlreadyExists = RPCGRPCStatusCodeKey.Int(6)
+ // PERMISSION_DENIED
+ RPCGRPCStatusCodePermissionDenied = RPCGRPCStatusCodeKey.Int(7)
+ // RESOURCE_EXHAUSTED
+ RPCGRPCStatusCodeResourceExhausted = RPCGRPCStatusCodeKey.Int(8)
+ // FAILED_PRECONDITION
+ RPCGRPCStatusCodeFailedPrecondition = RPCGRPCStatusCodeKey.Int(9)
+ // ABORTED
+ RPCGRPCStatusCodeAborted = RPCGRPCStatusCodeKey.Int(10)
+ // OUT_OF_RANGE
+ RPCGRPCStatusCodeOutOfRange = RPCGRPCStatusCodeKey.Int(11)
+ // UNIMPLEMENTED
+ RPCGRPCStatusCodeUnimplemented = RPCGRPCStatusCodeKey.Int(12)
+ // INTERNAL
+ RPCGRPCStatusCodeInternal = RPCGRPCStatusCodeKey.Int(13)
+ // UNAVAILABLE
+ RPCGRPCStatusCodeUnavailable = RPCGRPCStatusCodeKey.Int(14)
+ // DATA_LOSS
+ RPCGRPCStatusCodeDataLoss = RPCGRPCStatusCodeKey.Int(15)
+ // UNAUTHENTICATED
+ RPCGRPCStatusCodeUnauthenticated = RPCGRPCStatusCodeKey.Int(16)
+)
+
+// Tech-specific attributes for [JSON RPC](https://www.jsonrpc.org/).
+const (
+ // RPCJsonrpcVersionKey is the attribute Key conforming to the
+ // "rpc.jsonrpc.version" semantic conventions. It represents the protocol
+ // version as in `jsonrpc` property of request/response. Since JSON-RPC 1.0
+ // does not specify this, the value can be omitted.
+ //
+ // Type: string
+ // RequirementLevel: ConditionallyRequired (If other than the default
+ // version (`1.0`))
+ // Stability: stable
+ // Examples: '2.0', '1.0'
+ RPCJsonrpcVersionKey = attribute.Key("rpc.jsonrpc.version")
+
+ // RPCJsonrpcRequestIDKey is the attribute Key conforming to the
+ // "rpc.jsonrpc.request_id" semantic conventions. It represents the `id`
+ // property of request or response. Since protocol allows id to be int,
+ // string, `null` or missing (for notifications), value is expected to be
+ // cast to string for simplicity. Use empty string in case of `null` value.
+ // Omit entirely if this is a notification.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: '10', 'request-7', ''
+ RPCJsonrpcRequestIDKey = attribute.Key("rpc.jsonrpc.request_id")
+
+ // RPCJsonrpcErrorCodeKey is the attribute Key conforming to the
+ // "rpc.jsonrpc.error_code" semantic conventions. It represents the
+ // `error.code` property of response if it is an error response.
+ //
+ // Type: int
+ // RequirementLevel: ConditionallyRequired (If response is not successful.)
+ // Stability: stable
+ // Examples: -32700, 100
+ RPCJsonrpcErrorCodeKey = attribute.Key("rpc.jsonrpc.error_code")
+
+ // RPCJsonrpcErrorMessageKey is the attribute Key conforming to the
+ // "rpc.jsonrpc.error_message" semantic conventions. It represents the
+ // `error.message` property of response if it is an error response.
+ //
+ // Type: string
+ // RequirementLevel: Optional
+ // Stability: stable
+ // Examples: 'Parse error', 'User already exists'
+ RPCJsonrpcErrorMessageKey = attribute.Key("rpc.jsonrpc.error_message")
+)
+
+// RPCJsonrpcVersion returns an attribute KeyValue conforming to the
+// "rpc.jsonrpc.version" semantic conventions. It represents the protocol
+// version as in `jsonrpc` property of request/response. Since JSON-RPC 1.0
+// does not specify this, the value can be omitted.
+func RPCJsonrpcVersion(val string) attribute.KeyValue {
+ return RPCJsonrpcVersionKey.String(val)
+}
+
+// RPCJsonrpcRequestID returns an attribute KeyValue conforming to the
+// "rpc.jsonrpc.request_id" semantic conventions. It represents the `id`
+// property of request or response. Since protocol allows id to be int, string,
+// `null` or missing (for notifications), value is expected to be cast to
+// string for simplicity. Use empty string in case of `null` value. Omit
+// entirely if this is a notification.
+func RPCJsonrpcRequestID(val string) attribute.KeyValue {
+ return RPCJsonrpcRequestIDKey.String(val)
+}
+
+// RPCJsonrpcErrorCode returns an attribute KeyValue conforming to the
+// "rpc.jsonrpc.error_code" semantic conventions. It represents the
+// `error.code` property of response if it is an error response.
+func RPCJsonrpcErrorCode(val int) attribute.KeyValue {
+ return RPCJsonrpcErrorCodeKey.Int(val)
+}
+
+// RPCJsonrpcErrorMessage returns an attribute KeyValue conforming to the
+// "rpc.jsonrpc.error_message" semantic conventions. It represents the
+// `error.message` property of response if it is an error response.
+func RPCJsonrpcErrorMessage(val string) attribute.KeyValue {
+ return RPCJsonrpcErrorMessageKey.String(val)
+}
diff --git a/vendor/k8s.io/utils/integer/integer.go b/vendor/k8s.io/utils/integer/integer.go
index e4e740cad4c82..e0811e8344c52 100644
--- a/vendor/k8s.io/utils/integer/integer.go
+++ b/vendor/k8s.io/utils/integer/integer.go
@@ -16,6 +16,8 @@ limitations under the License.
package integer
+import "math"
+
// IntMax returns the maximum of the params
func IntMax(a, b int) int {
if b > a {
@@ -65,9 +67,7 @@ func Int64Min(a, b int64) int64 {
}
// RoundToInt32 rounds floats into integer numbers.
+// Deprecated: use math.Round() and a cast directly.
func RoundToInt32(a float64) int32 {
- if a < 0 {
- return int32(a - 0.5)
- }
- return int32(a + 0.5)
+ return int32(math.Round(a))
}
diff --git a/vendor/k8s.io/utils/net/multi_listen.go b/vendor/k8s.io/utils/net/multi_listen.go
new file mode 100644
index 0000000000000..7cb7795beca7f
--- /dev/null
+++ b/vendor/k8s.io/utils/net/multi_listen.go
@@ -0,0 +1,195 @@
+/*
+Copyright 2024 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package net
+
+import (
+ "context"
+ "fmt"
+ "net"
+ "sync"
+)
+
+// connErrPair pairs conn and error which is returned by accept on sub-listeners.
+type connErrPair struct {
+ conn net.Conn
+ err error
+}
+
+// multiListener implements net.Listener
+type multiListener struct {
+ listeners []net.Listener
+ wg sync.WaitGroup
+
+ // connCh passes accepted connections, from child listeners to parent.
+ connCh chan connErrPair
+ // stopCh communicates from parent to child listeners.
+ stopCh chan struct{}
+}
+
+// compile time check to ensure *multiListener implements net.Listener
+var _ net.Listener = &multiListener{}
+
+// MultiListen returns net.Listener which can listen on and accept connections for
+// the given network on multiple addresses. Internally it uses stdlib to create
+// sub-listener and multiplexes connection requests using go-routines.
+// The network must be "tcp", "tcp4" or "tcp6".
+// It follows the semantics of net.Listen that primarily means:
+// 1. If the host is an unspecified/zero IP address with "tcp" network, MultiListen
+// listens on all available unicast and anycast IP addresses of the local system.
+// 2. Use "tcp4" or "tcp6" to exclusively listen on IPv4 or IPv6 family, respectively.
+// 3. The host can accept names (e.g, localhost) and it will create a listener for at
+// most one of the host's IP.
+func MultiListen(ctx context.Context, network string, addrs ...string) (net.Listener, error) {
+ var lc net.ListenConfig
+ return multiListen(
+ ctx,
+ network,
+ addrs,
+ func(ctx context.Context, network, address string) (net.Listener, error) {
+ return lc.Listen(ctx, network, address)
+ })
+}
+
+// multiListen implements MultiListen by consuming stdlib functions as dependency allowing
+// mocking for unit-testing.
+func multiListen(
+ ctx context.Context,
+ network string,
+ addrs []string,
+ listenFunc func(ctx context.Context, network, address string) (net.Listener, error),
+) (net.Listener, error) {
+ if !(network == "tcp" || network == "tcp4" || network == "tcp6") {
+ return nil, fmt.Errorf("network %q not supported", network)
+ }
+ if len(addrs) == 0 {
+ return nil, fmt.Errorf("no address provided to listen on")
+ }
+
+ ml := &multiListener{
+ connCh: make(chan connErrPair),
+ stopCh: make(chan struct{}),
+ }
+ for _, addr := range addrs {
+ l, err := listenFunc(ctx, network, addr)
+ if err != nil {
+ // close all the sub-listeners and exit
+ _ = ml.Close()
+ return nil, err
+ }
+ ml.listeners = append(ml.listeners, l)
+ }
+
+ for _, l := range ml.listeners {
+ ml.wg.Add(1)
+ go func(l net.Listener) {
+ defer ml.wg.Done()
+ for {
+ // Accept() is blocking, unless ml.Close() is called, in which
+ // case it will return immediately with an error.
+ conn, err := l.Accept()
+ // This assumes that ANY error from Accept() will terminate the
+ // sub-listener. We could maybe be more precise, but it
+ // doesn't seem necessary.
+ terminate := err != nil
+
+ select {
+ case ml.connCh <- connErrPair{conn: conn, err: err}:
+ case <-ml.stopCh:
+ // In case we accepted a connection AND were stopped, and
+ // this select-case was chosen, just throw away the
+ // connection. This avoids potentially blocking on connCh
+ // or leaking a connection.
+ if conn != nil {
+ _ = conn.Close()
+ }
+ terminate = true
+ }
+ // Make sure we don't loop on Accept() returning an error and
+ // the select choosing the channel case.
+ if terminate {
+ return
+ }
+ }
+ }(l)
+ }
+ return ml, nil
+}
+
+// Accept implements net.Listener. It waits for and returns a connection from
+// any of the sub-listener.
+func (ml *multiListener) Accept() (net.Conn, error) {
+ // wait for any sub-listener to enqueue an accepted connection
+ connErr, ok := <-ml.connCh
+ if !ok {
+ // The channel will be closed only when Close() is called on the
+ // multiListener. Closing of this channel implies that all
+ // sub-listeners are also closed, which causes a "use of closed
+ // network connection" error on their Accept() calls. We return the
+ // same error for multiListener.Accept() if multiListener.Close()
+ // has already been called.
+ return nil, fmt.Errorf("use of closed network connection")
+ }
+ return connErr.conn, connErr.err
+}
+
+// Close implements net.Listener. It will close all sub-listeners and wait for
+// the go-routines to exit.
+func (ml *multiListener) Close() error {
+ // Make sure this can be called repeatedly without explosions.
+ select {
+ case <-ml.stopCh:
+ return fmt.Errorf("use of closed network connection")
+ default:
+ }
+
+ // Tell all sub-listeners to stop.
+ close(ml.stopCh)
+
+ // Closing the listeners causes Accept() to immediately return an error in
+ // the sub-listener go-routines.
+ for _, l := range ml.listeners {
+ _ = l.Close()
+ }
+
+ // Wait for all the sub-listener go-routines to exit.
+ ml.wg.Wait()
+ close(ml.connCh)
+
+ // Drain any already-queued connections.
+ for connErr := range ml.connCh {
+ if connErr.conn != nil {
+ _ = connErr.conn.Close()
+ }
+ }
+ return nil
+}
+
+// Addr is an implementation of the net.Listener interface. It always returns
+// the address of the first listener. Callers should use conn.LocalAddr() to
+// obtain the actual local address of the sub-listener.
+func (ml *multiListener) Addr() net.Addr {
+ return ml.listeners[0].Addr()
+}
+
+// Addrs is like Addr, but returns the address for all registered listeners.
+func (ml *multiListener) Addrs() []net.Addr {
+ var ret []net.Addr
+ for _, l := range ml.listeners {
+ ret = append(ret, l.Addr())
+ }
+ return ret
+}
diff --git a/vendor/k8s.io/utils/trace/trace.go b/vendor/k8s.io/utils/trace/trace.go
index 187eb5d8c5e9f..559aebb59a545 100644
--- a/vendor/k8s.io/utils/trace/trace.go
+++ b/vendor/k8s.io/utils/trace/trace.go
@@ -192,7 +192,7 @@ func (t *Trace) Log() {
t.endTime = &endTime
t.lock.Unlock()
// an explicit logging request should dump all the steps out at the higher level
- if t.parentTrace == nil { // We don't start logging until Log or LogIfLong is called on the root trace
+ if t.parentTrace == nil && klogV(2) { // We don't start logging until Log or LogIfLong is called on the root trace
t.logTrace()
}
}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index b4e59d5e240b8..9705d54b3fb34 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -206,7 +206,7 @@ github.com/DataDog/sketches-go/ddsketch/store
# github.com/DmitriyVTitov/size v1.5.0
## explicit; go 1.14
github.com/DmitriyVTitov/size
-# github.com/IBM/go-sdk-core/v5 v5.17.4
+# github.com/IBM/go-sdk-core/v5 v5.17.5
## explicit; go 1.20
github.com/IBM/go-sdk-core/v5/core
# github.com/IBM/ibm-cos-sdk-go v1.11.0
@@ -476,7 +476,7 @@ github.com/aws/smithy-go/transport/http/internal/io
# github.com/axiomhq/hyperloglog v0.0.0-20240507144631-af9851f82b27
## explicit; go 1.12
github.com/axiomhq/hyperloglog
-# github.com/baidubce/bce-sdk-go v0.9.187
+# github.com/baidubce/bce-sdk-go v0.9.189
## explicit; go 1.11
github.com/baidubce/bce-sdk-go/auth
github.com/baidubce/bce-sdk-go/bce
@@ -669,7 +669,7 @@ github.com/eapache/queue
# github.com/edsrzf/mmap-go v1.1.0
## explicit; go 1.17
github.com/edsrzf/mmap-go
-# github.com/efficientgo/core v1.0.0-rc.2
+# github.com/efficientgo/core v1.0.0-rc.3
## explicit; go 1.17
github.com/efficientgo/core/errcapture
github.com/efficientgo/core/errors
@@ -742,12 +742,10 @@ github.com/fluent/fluent-bit-go/output
# github.com/fsnotify/fsnotify v1.7.0
## explicit; go 1.17
github.com/fsnotify/fsnotify
-# github.com/fsouza/fake-gcs-server v1.47.7
-## explicit; go 1.20
+# github.com/fsouza/fake-gcs-server v1.7.0
+## explicit
github.com/fsouza/fake-gcs-server/fakestorage
github.com/fsouza/fake-gcs-server/internal/backend
-github.com/fsouza/fake-gcs-server/internal/checksum
-github.com/fsouza/fake-gcs-server/internal/notification
# github.com/gabriel-vasile/mimetype v1.4.3
## explicit; go 1.20
github.com/gabriel-vasile/mimetype
@@ -973,9 +971,6 @@ github.com/gophercloud/gophercloud/openstack/identity/v3/extensions/oauth1
github.com/gophercloud/gophercloud/openstack/identity/v3/tokens
github.com/gophercloud/gophercloud/openstack/utils
github.com/gophercloud/gophercloud/pagination
-# github.com/gorilla/handlers v1.5.2
-## explicit; go 1.20
-github.com/gorilla/handlers
# github.com/gorilla/mux v1.8.1
## explicit; go 1.20
github.com/gorilla/mux
@@ -985,7 +980,7 @@ github.com/gorilla/websocket
# github.com/grafana/cloudflare-go v0.0.0-20230110200409-c627cf6792f2
## explicit; go 1.17
github.com/grafana/cloudflare-go
-# github.com/grafana/dskit v0.0.0-20240819131358-463219e80ea0
+# github.com/grafana/dskit v0.0.0-20240905221822-931a021fb06b
## explicit; go 1.21
github.com/grafana/dskit/aws
github.com/grafana/dskit/backoff
@@ -1068,7 +1063,7 @@ github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc
# github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed
## explicit
github.com/hailocab/go-hostpool
-# github.com/hashicorp/consul/api v1.29.2
+# github.com/hashicorp/consul/api v1.29.4
## explicit; go 1.19
github.com/hashicorp/consul/api
# github.com/hashicorp/errwrap v1.1.0
@@ -1086,7 +1081,7 @@ github.com/hashicorp/go-immutable-radix
# github.com/hashicorp/go-msgpack v1.1.5
## explicit; go 1.13
github.com/hashicorp/go-msgpack/codec
-# github.com/hashicorp/go-msgpack/v2 v2.1.1
+# github.com/hashicorp/go-msgpack/v2 v2.1.2
## explicit; go 1.19
github.com/hashicorp/go-msgpack/v2/codec
# github.com/hashicorp/go-multierror v1.1.1
@@ -1115,7 +1110,7 @@ github.com/hashicorp/golang-lru/v2/simplelru
# github.com/hashicorp/memberlist v0.5.0 => github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe
## explicit; go 1.12
github.com/hashicorp/memberlist
-# github.com/hashicorp/raft v1.7.0
+# github.com/hashicorp/raft v1.7.1
## explicit; go 1.20
github.com/hashicorp/raft
# github.com/hashicorp/raft-wal v0.4.1
@@ -1272,7 +1267,7 @@ github.com/miekg/dns
# github.com/minio/md5-simd v1.1.2
## explicit; go 1.14
github.com/minio/md5-simd
-# github.com/minio/minio-go/v7 v7.0.75
+# github.com/minio/minio-go/v7 v7.0.76
## explicit; go 1.21
github.com/minio/minio-go/v7
github.com/minio/minio-go/v7/pkg/cors
@@ -1366,7 +1361,7 @@ github.com/oschwald/geoip2-golang
# github.com/oschwald/maxminddb-golang v1.13.0
## explicit; go 1.21
github.com/oschwald/maxminddb-golang
-# github.com/pierrec/lz4/v4 v4.1.18
+# github.com/pierrec/lz4/v4 v4.1.21
## explicit; go 1.14
github.com/pierrec/lz4/v4
github.com/pierrec/lz4/v4/internal/lz4block
@@ -1382,9 +1377,6 @@ github.com/pkg/browser
# github.com/pkg/errors v0.9.1
## explicit
github.com/pkg/errors
-# github.com/pkg/xattr v0.4.10
-## explicit; go 1.14
-github.com/pkg/xattr
# github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10
## explicit; go 1.20
github.com/planetscale/vtprotobuf/protohelpers
@@ -1520,8 +1512,8 @@ github.com/richardartoul/molecule/src/protowire
# github.com/rivo/uniseg v0.4.7
## explicit; go 1.18
github.com/rivo/uniseg
-# github.com/rs/xid v1.5.0
-## explicit; go 1.12
+# github.com/rs/xid v1.6.0
+## explicit; go 1.16
github.com/rs/xid
# github.com/schollz/progressbar/v3 v3.14.6
## explicit; go 1.13
@@ -1566,8 +1558,8 @@ github.com/sony/gobreaker
# github.com/spaolacci/murmur3 v1.1.0
## explicit
github.com/spaolacci/murmur3
-# github.com/spf13/afero v1.10.0
-## explicit; go 1.16
+# github.com/spf13/afero v1.11.0
+## explicit; go 1.19
github.com/spf13/afero
github.com/spf13/afero/internal/common
github.com/spf13/afero/mem
@@ -1586,7 +1578,7 @@ github.com/stretchr/testify/assert
github.com/stretchr/testify/mock
github.com/stretchr/testify/require
github.com/stretchr/testify/suite
-# github.com/thanos-io/objstore v0.0.0-20240722162417-19b0c0f0ffd8
+# github.com/thanos-io/objstore v0.0.0-20240818203309-0363dadfdfb1
## explicit; go 1.21
github.com/thanos-io/objstore
github.com/thanos-io/objstore/exthttp
@@ -1602,6 +1594,30 @@ github.com/tklauser/go-sysconf
# github.com/tklauser/numcpus v0.6.1
## explicit; go 1.13
github.com/tklauser/numcpus
+# github.com/twmb/franz-go v1.17.1
+## explicit; go 1.21
+github.com/twmb/franz-go/pkg/kbin
+github.com/twmb/franz-go/pkg/kerr
+github.com/twmb/franz-go/pkg/kgo
+github.com/twmb/franz-go/pkg/kgo/internal/sticky
+github.com/twmb/franz-go/pkg/kversion
+github.com/twmb/franz-go/pkg/sasl
+# github.com/twmb/franz-go/pkg/kadm v1.13.0
+## explicit; go 1.21
+github.com/twmb/franz-go/pkg/kadm
+# github.com/twmb/franz-go/pkg/kfake v0.0.0-20240821035758-b77dd13e2bfa
+## explicit; go 1.21
+github.com/twmb/franz-go/pkg/kfake
+# github.com/twmb/franz-go/pkg/kmsg v1.8.0
+## explicit; go 1.19
+github.com/twmb/franz-go/pkg/kmsg
+github.com/twmb/franz-go/pkg/kmsg/internal/kbin
+# github.com/twmb/franz-go/plugin/kotel v1.5.0
+## explicit; go 1.21
+github.com/twmb/franz-go/plugin/kotel
+# github.com/twmb/franz-go/plugin/kprom v1.1.0
+## explicit; go 1.18
+github.com/twmb/franz-go/plugin/kprom
# github.com/uber/jaeger-client-go v2.30.0+incompatible
## explicit
github.com/uber/jaeger-client-go
@@ -1751,6 +1767,7 @@ go.opentelemetry.io/otel/internal/baggage
go.opentelemetry.io/otel/internal/global
go.opentelemetry.io/otel/propagation
go.opentelemetry.io/otel/semconv/v1.17.0
+go.opentelemetry.io/otel/semconv/v1.18.0
go.opentelemetry.io/otel/semconv/v1.20.0
go.opentelemetry.io/otel/semconv/v1.21.0
go.opentelemetry.io/otel/semconv/v1.24.0
@@ -2422,7 +2439,7 @@ k8s.io/kube-openapi/pkg/schemaconv
k8s.io/kube-openapi/pkg/spec3
k8s.io/kube-openapi/pkg/util/proto
k8s.io/kube-openapi/pkg/validation/spec
-# k8s.io/utils v0.0.0-20230726121419-3b25d923346b
+# k8s.io/utils v0.0.0-20240902221715-702e33fdd3c3
## explicit; go 1.18
k8s.io/utils/buffer
k8s.io/utils/clock