
AUTH-421 Fargate FastAPI container #15287

Merged (6 commits into edge, May 31, 2024)

Conversation

@y3rsh (Member) commented May 29, 2024

Switch architecture yet again

This is all deployed on staging and is responding to requests from #15193

@y3rsh y3rsh force-pushed the AUTH-421-fastapi branch from 9d59f2a to 190a95d Compare May 29, 2024 22:08
@y3rsh y3rsh requested a review from a team as a code owner May 29, 2024 22:08
@y3rsh y3rsh requested review from jerader and removed request for a team May 29, 2024 22:08
@koji (Contributor) left a comment


Left a few comments, but they aren't blockers.
The changes make sense to me.
Thank you for doing this!

@Elyorcv (Contributor) commented May 30, 2024

I want to understand more about the 176s limit. Sometimes generating a protocol may take more than 3 minutes; do you think the 176s limit is related to this?

(Review comment on opentrons-ai-server/README.md, since resolved.)
@y3rsh (Member, Author) commented May 31, 2024

> I want to understand more about the 176s limit. Sometimes generating a protocol may take more than 3 minutes; do you think the 176s limit is related to this?

178 seconds is the maximum possible for this architecture, and it is already not optimal for a client waiting on a POST request to return a response. There are many other ways to do this; we will need to iterate on performance and on other methods of getting the response back.
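The reply above mentions iterating on "other methods of getting the response" when work outlives a hard request timeout. One common pattern is to accept the request, return a job id immediately, and let the client poll for the result. This is a minimal stdlib sketch of that idea, not the project's actual code; all names here are hypothetical:

```python
import threading
import time
import uuid

# In-memory job store. A real Fargate deployment would need an external
# store (database, queue, etc.) since tasks can be replaced at any time.
jobs = {}

def submit_job(work, *args):
    """Start long-running work in the background and return a job id."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "running", "result": None}

    def run():
        result = work(*args)
        jobs[job_id] = {"status": "done", "result": result}

    threading.Thread(target=run, daemon=True).start()
    return job_id

def poll_job(job_id):
    """Return current status/result; the client calls this repeatedly."""
    return jobs[job_id]

# Stand-in for slow protocol generation.
def slow_generate(name):
    time.sleep(0.1)  # pretend this takes minutes in production
    return f"protocol for {name}"

job = submit_job(slow_generate, "demo")
while poll_job(job)["status"] != "done":
    time.sleep(0.05)
print(poll_job(job)["result"])  # -> protocol for demo
```

The same shape maps onto FastAPI as a POST endpoint that enqueues work and a GET endpoint that the client polls, which sidesteps the load-balancer timeout entirely.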

@y3rsh y3rsh merged commit 74f94c0 into edge May 31, 2024
16 checks passed

## Install a dev dependency

`python -m pipenv install pytest==8.2.0 --dev`

## Install a production dependency

```diff
-`python -m pipenv install openai==1.25.1`
+`python -m pipenv install openai==1.30.4`
```
A contributor commented:

We should not keep changing the version, please. Reproducibility is already difficult because of the GPT model itself.

```diff
@@ -4,26 +4,29 @@ verify_ssl = true
 name = "pypi"

 [packages]
-openai = "==1.25.1"
+openai = "==1.30.4"
```
A contributor commented:

Better to stick to 1.25.1. AI projects update fast, and an upgrade may break things unexpectedly.
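The concern above is version drift between what the Pipfile pins and what actually runs. One way to catch drift early is a startup check that compares the installed version against the pin. This is a minimal sketch under the assumption of exact `==x.y.z` pins; the function name is hypothetical, and a real check would read the installed version via `importlib.metadata.version("openai")`:

```python
def matches_pin(installed: str, pin: str) -> bool:
    """Check an installed version string against an '==x.y.z' style pin.

    Only exact pins are handled in this sketch; range specifiers like
    '>=1.25,<2' would need a real parser (e.g. the 'packaging' library).
    """
    if not pin.startswith("=="):
        raise ValueError("only exact '==' pins are handled here")
    return installed == pin[2:]

# With the pin kept at 1.25.1 as suggested, a 1.30.4 install is flagged:
print(matches_pin("1.25.1", "==1.25.1"))  # -> True
print(matches_pin("1.30.4", "==1.25.1"))  # -> False
```

Failing fast at startup turns a silent behavioral change from an upstream release into an explicit deployment error.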

4 participants