RFD 0179: Web API backward compatibility guidelines #44668

Merged

merged 1 commit into master from rfd/0179-api-versioning on Aug 21, 2024

Conversation

Contributor

@avatus avatus commented Jul 25, 2024

rendered

This RFD includes the API guidelines for keeping our web clients/API backward compatible.

Contributor

@bl-nero bl-nero left a comment

My opinion is that incrementing a version number would be a perfectly good solution for some one-off breaking changes. There is definitely a big appeal in converting to Connect RPC, I can't deny, and my inner self striving for nice, complete solutions, is all in favor. But we need to do it for valid reasons, and openly acknowledge it.

As a solution for the problem of manual bookkeeping of the client-side and server-side data structure definitions, this is definitely something that I would support. Hell, this really triggered me sometimes. However, I don't think I've seen enough evidence that this actually helps to solve the backwards compatibility problem. In my eyes, we would still need to use similar techniques to tell old from new; only now, we would get some guardrails in the form of both "new" and "old" being defined as part of the same type-safe structure. Which we could achieve anyway with a bit of manual work, without involving Connect RPC.

Perhaps I don't see it? Maybe it would be easier to understand if you elaborated exactly how leveraging Connect RPC would help in the case that you mentioned?

### Creating a new RPC service with Connect
(Buf's Connect RPC, not Teleport Connect)

Connect is a library for building gRPC-compatible HTTP APIs. It supports [multiple protocols](https://connectrpc.com/docs/introduction#seamless-multi-protocol-support) and can generate Go service code and TypeScript client code from the proto schema. We do something similar in Teleport Connect, where we generate a client for `tsh` to make requests to the server and use the generated TypeScript code. This would be doing the same for the web client.
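For context, a minimal sketch of what mounting a Connect service looks like on the Go side, assuming a hypothetical `FooService` proto and the package names `protoc-gen-connect-go` would generate for it (none of these identifiers come from the RFD itself):

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"

	// Hypothetical package generated by protoc-gen-connect-go from foo/v1/foo_service.proto.
	"example.com/gen/proto/go/foo/v1/foov1connect"
)

func main() {
	mux := http.NewServeMux()

	// newFooServer (not shown) returns a type implementing the generated
	// foov1connect.FooServiceHandler interface. The generated constructor
	// returns the route prefix and an http.Handler that serves the Connect,
	// gRPC, and gRPC-Web protocols on the same endpoint.
	path, handler := foov1connect.NewFooServiceHandler(newFooServer())
	mux.Handle(path, handler)

	// h2c allows gRPC (which requires HTTP/2) without TLS; a plain http.Server
	// is enough if only the Connect and gRPC-Web protocols are used.
	log.Fatal(http.ListenAndServe("localhost:8080", h2c.NewHandler(mux, &http2.Server{})))
}
```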
Contributor

Support for multiple protocols sounds great, but can we disable unused ones to narrow down attack vectors?

Contributor Author

Yes, you can select your transport.
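As an illustration of what "select your transport" means in connect-go: the protocol is chosen per client via options (the generated client names below are hypothetical, while `connect.WithGRPC`/`connect.WithGRPCWeb` are standard connect-go client options; with no option, the Connect protocol is used):

```go
package webclient

import (
	"net/http"

	"connectrpc.com/connect"

	// Same hypothetical generated package as in the server sketch above.
	"example.com/gen/proto/go/foo/v1/foov1connect"
)

func newFooClients() (foov1connect.FooServiceClient, foov1connect.FooServiceClient) {
	// No option: the Connect protocol (plain HTTP with JSON or binary proto
	// payloads), which keeps browser traffic easy to inspect.
	connectClient := foov1connect.NewFooServiceClient(http.DefaultClient, "https://proxy.example.com")

	// Explicitly opting into gRPC-Web instead (connect.WithGRPC() for full gRPC).
	grpcWebClient := foov1connect.NewFooServiceClient(
		http.DefaultClient,
		"https://proxy.example.com",
		connect.WithGRPCWeb(),
	)
	return connectClient, grpcWebClient
}
```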

Contributor

bl-nero commented Jul 30, 2024

One more thing: if we decide to use Connect RPC, can we make sure that we are prepared for debugging the network traffic? (I've seen solutions that make it really difficult to see what's happening between the browser and the server, so it's important for this to be either human-readable or have a good tooling support.)

Contributor Author

avatus commented Jul 30, 2024

One more thing: if we decide to use Connect RPC, can we make sure that we are prepared for debugging the network traffic? (I've seen solutions that make it really difficult to see what's happening between the browser and the server, so it's important for this to be either human-readable or have a good tooling support.)

Yeah, this is still human-readable:
https://github.com/gravitational/teleport/pull/44668/files#diff-7852f06643f466fb42c702c6f8bb51f67339dbb53e40629bce1c2a69918d1d0dR107

https://connectrpc.com/docs/go/getting-started#make-requests

Contributor Author

avatus commented Jul 30, 2024

Which we could achieve anyway with a bit of manual work, without involving Connect RPC.
Thanks, I need to update the language of this PR (working on it now). This shouldn't be a "hey, here is an idea!" type of RFD, but a prescription of rules/guidelines. I am using Connect as a way to facilitate these changes, but the guidelines I'm writing up (will be in tomorrow) will tell us how to keep our APIs backward compatible (with manual work), with Connect as a tool to help us do it.

Thanks for the feedback.

At the end of the day, anyone can break anything if they try hard enough :), so hopefully the guidelines will keep us all cognizant of it when developing/reviewing.

@avatus avatus marked this pull request as ready for review July 31, 2024 18:09

The PR changelog entry failed validation: Changelog entry not found in the PR body. Please add a "no-changelog" label to the PR, or changelog lines starting with changelog: followed by the changelog entries for the PR.

@github-actions github-actions bot added rfd Request for Discussion size/md labels Jul 31, 2024
@github-actions github-actions bot requested review from lxea and tigrato July 31, 2024 18:09
@avatus avatus removed request for tigrato and lxea July 31, 2024 18:10
Contributor Author

avatus commented Jul 31, 2024

Ready for review


@avatus avatus added the no-changelog Indicates that a PR does not require a changelog entry label Jul 31, 2024
@avatus avatus changed the title RFD 0179: Web API backward compatibility and type system RFD 0179: Web API backward compatibility guidelines and type system Jul 31, 2024
@avatus avatus requested review from bl-nero, ravicious and gzdunek and removed request for ravicious and gzdunek July 31, 2024 21:32
Contributor Author

avatus commented Aug 1, 2024

@ravicious @gzdunek I know this RFD is for the web UI, but I'd still appreciate any input, especially on the optional use of TanStack Query.



### Adding/removing a field to a response of an existing endpoint
Adding a new field to a response is OK as long as that field has to effect on
Contributor

Suggested change
Adding a new field to a response is OK as long as that field has to effect on
Adding a new field to a response is OK as long as that field has no effect on

}, nil
}
```
If using ConnectRPC (a proposed solution later in this RFD), then we must _not_ remove fields from requests/responses, even if unused. Mark as deprecated and move on.
Contributor

Do you mean that marking it reserved won't work?

Contributor Author

`reserved` will work. I should change the language to "follow the deprecation process of proto messages and move on". I will update and link. Thanks!

@avatus avatus requested a review from flyinghermit August 2, 2024 19:53
Member

@ravicious ravicious left a comment

I'll have a couple more thoughts on this that I'll post later today.


## Web Client/API Backward Compatibility Guidelines

In general, if the Request or Response shape does not change, logic in an API handler can change without worry of backward compatibility.
Member

What exactly do you mean here? I think I can see it, but taken at face value this sentence doesn't seem to be true. ;)

Contributor Author

The shape of the request or response doesn't change, so the API contract is still fulfilled. This means it wouldn't lead to a "breaking" change.

I guess a contrived example would be that changing a return value in a response from {"hello": "world"} to {"hello": ", world!"} wouldn't be breaking.

But after typing it out, perhaps some logic in the web client depends on that field, so I'm not sure now, actually. I'll think about it.

Contributor

@gzdunek gzdunek Aug 7, 2024

The shape of request or response doesn't change so the API contract is still fulfilled. This means it wouldn't lead to a "breaking" change

I agree that it's not true.
For example, if an API returns status: "SUCCESS", and then we change it to status: "OK", it is a breaking change.

I think we should say instead that the meaning of the fields should not change? Like in https://www.mnot.net/blog/2012/12/04/api-evolution.html (the paragraph "Make Changes Backwards-Compatible").
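A minimal Go sketch of the breaking change described here: the response shape stays identical, but because the meaning of the value changes, any client matching on the old string silently breaks (all names are illustrative only):

```go
type StatusResponse struct {
	Status string `json:"status"`
}

// Before: clients branch on the exact value, e.g. `if (resp.status === "SUCCESS")`.
func getStatusOld() StatusResponse {
	return StatusResponse{Status: "SUCCESS"}
}

// After: the shape is unchanged, but older clients now take the failure path,
// so this is still a breaking change even though the contract's shape holds.
func getStatusNew() StatusResponse {
	return StatusResponse{Status: "OK"}
}
```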

Contributor Author

@avatus avatus Aug 7, 2024

OK, I'm convinced. I will remove this, because saying "if all the fields are the same and all the values are the same, it's OK!" seems quite obvious. Anything else can fall onto the guidelines themselves.

type GetFoosResponse struct {
Foos []string
NextKey string
// TODO (avatus) DELETE IN 18
Member

This seems to be the most popular format of a TODO note used in the project. Leaving a quick note should be helpful in most cases as well.

Suggested change
// TODO (avatus) DELETE IN 18
// TODO(avatus): DELETE IN 18. Use `DuplicateCount` instead.

foos := []string{"foo1", "foo2", "foo3", "foo3"}
return GetFoosResponse{
Foos: foos,
// somedev: "we must keep ContainsDuplicates populated to support older clients!"
Member

Suggested change
// somedev: "we must keep ContainsDuplicates populated to support older clients!"
// ContainsDuplicates must be populated to support older clients.

Contributor

i didn't see this change 😅

If a feature requires a new endpoint, it must properly handle the case of a 404.
This means making it clear to the user that the "expected" 404 error means the
endpoint was not found due to a version mismatch (and preferably which minimum
version is needed), rather than showing an ambiguous "Not Found" error.
Member

Could you give an example here? I think there are legitimate cases where an HTTP endpoint can return 404 to mean "I couldn't find the resource you were looking for" rather than "I don't know what this endpoint is". In our HTTP API, those two are conflated. In gRPC, there's a separate UNIMPLEMENTED status code.

Though tbh, I can't remember if I've ever seen a web API that was returning 400 or 501 instead of 404 when it encountered an unknown route. Especially the latter I think most of the time is reserved specifically for unsupported request methods.

Contributor Author

Good point. I think in general the sentiment stands but perhaps I can word it differently. I would say in most cases a List returning a 404 would probably indicate a missing endpoint rather than a single resource (I don't think there is much we can do here besides still returning some shape?)

Contributor Author

Also I guess this would be up to the discretion of the dev at time of creation. Definitely one to just keep an eye out for (not a strict requirement but a guideline)

A good example of this was Pinned Resources. We would check if the endpoint existed and, if not, display a message informing users to upgrade to 14.1 or something like that. I could have sworn I had that example in here, but I will add it for clarity.

Contributor Author

Leaf servers also play a big role in this one (and the previous comment about rolling upgrades too)

Contributor Author

I've added the distinction between "list" endpoints and single resource endpoints returning 404. I believe it's going to be "case-by-case", but the guidelines should be "best effort to handle gracefully".


If a response shape must change entirely, prefer creating a new endpoint/RPC.

### Updating Request body fields
Member

Suggested change
### Updating Request body fields
### Updating request body fields

Comment on lines 199 to 200
### Updating request fields in the web client
Updating what fields are sent in a request body is ok as long as the API follows the updated request fields guidelines.
Member

How is this section different from "Updating Request body fields"? Is it about changing what fields a client sends vs "Updating Request body fields" being about which fields the server expects?

Contributor Author

Yeah, basically just the direction of the communication. I have them separated since I went with "server changes" and "web client changes". I think the fact that you were able to assume the correct answer means it makes enough sense, but I will reword it to make it clearer regardless (or add a note or something). Thank you!

Contributor

I'm not as smart as @ravicious, and without his comment about it, I wouldn't have understood 🥲

Contributor Author

Thank you for the feedback. I will update! 🙏

Contributor Author

I've removed this second paragraph and slightly changed the wording on the other to include both sides of the change.

@ravicious ravicious self-requested a review August 5, 2024 13:55
fetching framework to simplify and unify the way we make api requests in the web
client. This will remove boilerplate that we need to use our APIs in the web
client, provide type safety out of the box for both server and client code, and
provide backward compatibility guardrails when creating/updating our APIs.
Member

@ravicious ravicious Aug 5, 2024

I totally agree with @bl-nero's comment above about bumping a version number being good enough for one-off breaking changes and the biggest advantage of Connect RPC being not forwards/backwards compatibility, but autogenerated type-checked structures in both JS and Go.

As far as compatibility is concerned, I feel like Connect RPC itself doesn't provide anything which we couldn't replicate with something like, idk, zod on the JS side. The difference is that with zod we would need to replicate the same structures in JS and then in Go.

I don't think we'll find anyone who, after reading this RFD, would be able to conclude with "This is a great idea that will absolutely work and be worth the time, let's do it". Perhaps it would be worthwhile to do a spike with an example implementation to see:

  • How easy it'll be to integrate this along the current web API.
  • How Connect RPC would address common pain points or past problems compared to the existing solution.

Another idea to toy with which could turn out to be an argument in favor of RPCs: how many web API endpoints do actual work and how many of them merely send off data to the auth service? Given the unusual compatibility guarantees put upon the Web UI (a client that needs to work with older servers), how feasible would it be to reuse protos from the /api directory to streamline endpoints which merely pass data to the auth service?

By the last question, I mean that when I want to add a new field to a usage event in Connect, which passes from the Electron app through tshd to prehog, it's a matter of updating a single proto file and the change trickles down to every place that uses it. Is this something achievable in the Web UI as well?

Contributor Author

@avatus avatus Aug 5, 2024

The difference is that with zod we would need to replicate the same structures in JS and then in Go.

This is the crux of the problem. At the end of the day, if we rely on creating code on both the backend and the frontend, we are bound to just end up in the same place we are now. Removing the need to create code in two places and removing boilerplate seems worthwhile. A spike is probably a great idea (more than the spike I did for implementing a test feature; more like spiking an actual feature instead).

how feasible would it be to reuse protos from the /api directory to streamline endpoints which merely pass data to the auth service?

Probably pretty feasible. Although, from my experience, we send some fields that aren't passed to auth (and are used for creating the auth message generally), but it's the return trip where things are quite different. Almost every resource-based web endpoint receives data from auth and has to convert it to a UI structure, so reusing protos from proxy<->auth doesn't really seem to work.
I think the benefit that Connect gives us is that the type generation is a bit more modern, in that it reads closer to how types would be written by a human, and also (if we went the TanStack Query route) has built-in generators for client queries as well. A lot of the type system could be solved without it, but from my initial testing, this was the best DX imo.

I agree with you and @bl-nero that solving the backward compatibility problem is as simple as appending the version prefix I talked about in the RFD. So I +1 the idea that "add the version prefix to our APIs and follow the guidelines" is good enough, and I will be happy if that's the only result of this RFD.

If we want to discuss the type system in a greater context, we can move it to its own RFD.

Contributor

If we want to discuss the type system in a greater context, we can move it to its own RFD.

+1

### Removing an endpoint
An endpoint can be removed after our backward compatibility promise has been
fulfilled and NOT before. This means marking for deletion with a TODO and
removing it in the relevant version. For example, if v17 no longer uses
Contributor

we should be specific on the minor, b/c if changes are introduced in 17.[minor > 0], then we have to delete +2 versions later (maybe that's worth mentioning here? since this has been something I always get confused on)

Suggested change
removing it in the relevant version. For example, if v17 no longer uses
removing it in the relevant version. For example, if v17.0 no longer uses

Contributor Author

@avatus avatus Aug 5, 2024

I don't think minors count? As far as I can tell, we only worry about major versions when it comes to declaring incompatibility.

Contributor

this is the scenario I was trying to describe: if something got introduced in v17.1, then you have to mark it for deletion in v19 (not v18), b/c if we delete it in v18, then we break compatibility with v17.0 -> v18.0

am I misunderstanding? I do get confused by this all the time, so maybe I am 😅

Contributor Author

Oh I see what you are saying. Then yes YOU are correct and I will update the example!

Contributor

oh btw, I think we need to include the patch too... probably unlikely to happen, but it could happen, so:

  • new feature introduced in a major version: mark for deletion one major version ahead
  • new feature introduced in a minor/patch version: mark for deletion two major versions ahead

Member

Wow, yeah, it didn't occur to me, but yeah, it's something we do have to account for.

Perhaps we could explain the general rule and then show examples, rather than adding one rule and then being like "oh in case a feature is added in a minor or patch version, there's a slightly different rule". I was thinking of something like this:

An endpoint can be removed in a major version n+2, where n is the last major version where the endpoint was used.

Example 1: v17 no longer uses GET /webapi/foos which was last used in v16. The endpoint can be removed in v18.
Example 2: v4.2.0 no longer uses GET /webapi/bars which was last used in v4.1.3. The endpoint can be removed in v6, so that v5 still supports clients using v4.1.3 and v4.2.0.


Contributor

@flyinghermit flyinghermit left a comment

Reviewed only the versioning part and that LGTM.

The API versioning guideline and the idea of backward compatibility itself are good, though it feels unfortunate that we have to deal with this situation of proxies running with different versions in the same cluster in the first place. Alternatively, can we make changes in the proxy so it would always forward incoming requests to the proxy that has the latest API version? That way we wouldn't need to worry about internal req/resp fields as long as the request hits the correct proxy.

The idea of using Connect RPC sounds good to me and I see a single type system as a win. Maybe it's worth moving it to a different RFD, as suggested in existing review comments, so we have more discussion on that.

Contributor

@gzdunek gzdunek left a comment

Left some comments.
In general, I agree with others that ConnectRPC is a really good improvement but doesn't change much when it comes to API compatibility (so we can discuss it separately).


Contributor Author

avatus commented Aug 7, 2024

Thanks for all the feedback, team. I've removed the talk about adding Connect (I'm still excited about this, so I will open another RFD soon). Also, I believe I've handled all the feedback. Please take another look 🙏 🙌

@avatus avatus changed the title RFD 0179: Web API backward compatibility guidelines and type system RFD 0179: Web API backward compatibility guidelines Aug 7, 2024
@ravicious ravicious self-requested a review August 8, 2024 15:29
@avatus avatus requested review from kimlisa and bl-nero August 8, 2024 16:31
Contributor Author

avatus commented Aug 14, 2024

Any lingering feedback on this? Friendly ping to reviewers

Contributor

@gzdunek gzdunek left a comment

Sorry for the delay, I completely missed that you updated the RFD last week :(

### New required fields from an API response in the web client
If the updated feature _cannot_ function without receiving new fields from the
API (for example, receiving a response from a proxy version N-1), refer to the
API guidelines about creating a new endpoint. If the feature itself is degraded
Contributor

Nit: the guideline you are referring to is about creating an endpoint that didn't exist before, not about the version increase.

Contributor

@kimlisa kimlisa left a comment

LGTM, I'll leave approving to @kiosion since things are new for him and he can request any clarification.

// keep this
h.GET("/webapi/tokens", h.WithAuth(h.getTokens))

// and add this when a new version
Contributor

i think

Suggested change
// and add this when a new version
// and add this with a new version
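The quoted hunk is truncated before the new route itself; a hedged sketch of the registration pattern it points at, mirroring the RFD's own `h.GET` example (the `/v2/` path and `getTokensV2` handler name are illustrative only, not the RFD's actual prefix scheme):

```go
// keep this for existing web clients
h.GET("/webapi/tokens", h.WithAuth(h.getTokens))

// and add this with a new version (illustrative path and handler name)
h.GET("/v2/webapi/tokens", h.WithAuth(h.getTokensV2))
```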

Comment on lines 153 to 156
Example 1: v17 no longer uses GET /webapi/foos which was last used in v16. The
endpoint can be removed in v18.
Example 2: v4.2.0 no longer uses GET /webapi/bars which was last used in v4.1.3. The endpoint can be removed in v6,
so that v5 still supports clients using v4.1.3 and v4.2.0.
Contributor

it wasn't separated (run on)

Suggested change
Example 1: v17 no longer uses GET /webapi/foos which was last used in v16. The
endpoint can be removed in v18.
Example 2: v4.2.0 no longer uses GET /webapi/bars which was last used in v4.1.3. The endpoint can be removed in v6,
so that v5 still supports clients using v4.1.3 and v4.2.0.
Example 1: v17 no longer uses GET /webapi/foos which was last used in v16. The
endpoint can be removed in v18.
Example 2: v4.2.0 no longer uses GET /webapi/bars which was last used in v4.1.3. The endpoint can be removed in v6, so that v5 still supports clients using v4.1.3 and v4.2.0.


Comment on lines 253 to 254
API (for example, receiving a response from a proxy version N-1), refer to the
API guidelines about creating a new versioned endpoint. If the feature itself is degraded
Contributor

Suggested change
API (for example, receiving a response from a proxy version N-1), refer to the
API guidelines about creating a new versioned endpoint. If the feature itself is degraded
API (for example, receiving a response from a proxy version N-1), refer to the
API guidelines about [creating a new versioned endpoint](#creating-a-new-endpoint). If the feature itself is degraded

Comment on lines 1 to 3
authors: Michael Myers ([email protected])
state: draft
---
Contributor

I think you need the extra hyphens on top to be formatted correctly

Suggested change
authors: Michael Myers ([email protected])
state: draft
---
---
authors: Michael Myers ([email protected])
state: draft
---

### Removing an endpoint
An endpoint can be removed in a major version n+2, where n is the last major
version where the endpoint was used.

Contributor

I think it's good to add a delete example:

Suggested change
Mark endpoints that need to be removed with:
```go
// TODO(<your-github-handle>): DELETE IN 18.0
h.GET("/webapi/tokens", h.WithAuth(h.getTokens))
```

type GetFoosResponse struct {
Foos []string
NextKey string
// TODO(avatus): DELETE IN 18. Use `DuplicateCount` instead.
Contributor

I think it'd be helpful to add a // Deprecated: <Reason> line as well as the TODO prior to its removal
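A sketch of how the quoted struct might look with both annotations applied, using the field names from the RFD's example (the exact comment wording is up to the author):

```go
type GetFoosResponse struct {
	Foos    []string
	NextKey string
	// Deprecated: populated only for older web clients; use DuplicateCount instead.
	// TODO(avatus): DELETE IN 18.
	ContainsDuplicates bool
	DuplicateCount     int
}
```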

Contributor

@kimlisa kimlisa left a comment

rest looks like minor comments, approving

@public-teleport-github-review-bot public-teleport-github-review-bot bot removed the request for review from ryanclark August 20, 2024 23:31
@avatus avatus force-pushed the rfd/0179-api-versioning branch from 9559def to a8276b8 Compare August 21, 2024 15:45
@avatus avatus added this pull request to the merge queue Aug 21, 2024
Merged via the queue into master with commit e185d8f Aug 21, 2024
40 checks passed
@avatus avatus deleted the rfd/0179-api-versioning branch August 21, 2024 16:51