
feat: fetch LLM API keys from user env variables #102

Merged: 5 commits into xability:main on Sep 13, 2024

Conversation

@dakshpokar (Collaborator) commented on Aug 29, 2024

Description

This pull request fixes how LLM API keys are handled. It adds a JavaScript snippet that injects the LLM API keys into the MAIDR instance, initializes the LLM secrets, and sets the appropriate settings based on whether the Gemini and OpenAI API keys are present (a hedged sketch of this flow follows the issue reference below). This ensures that the LLM functionality works correctly with the updated key handling.

closes #76
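
A minimal sketch of the described behavior, assuming the env-var names discussed later in this thread (OPENAI_API_KEY, GEMINI_API_KEY); the setting names here are hypothetical and the actual maidr code may differ:

```python
import os

# Sketch only -- not the actual maidr code. Choose which LLM features
# to enable based on which API keys are present in the environment.
gemini_key = os.environ.get("GEMINI_API_KEY")
openai_key = os.environ.get("OPENAI_API_KEY")

if gemini_key and openai_key:
    llm_setting = "both"      # hypothetical setting name
elif gemini_key:
    llm_setting = "gemini"    # hypothetical setting name
elif openai_key:
    llm_setting = "openai"    # hypothetical setting name
else:
    llm_setting = None        # no keys found; LLM features stay off
```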

Type of Change

  • Bug fix
  • New feature
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published in downstream modules

Pull Request

Description

  1. Added a new method, initialize_llm_secrets(), in environment.py that fetches the keys from the user's environment variables (see the sketch after this list).
  2. Injected the resulting script when the maidr iframe first loads.
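
A hedged sketch of what initialize_llm_secrets() might look like given the description above; the window.llmSecrets handle and the exact injection mechanism are assumptions for illustration, not maidr's actual implementation:

```python
import json
import os


def initialize_llm_secrets() -> str:
    """Build a JS snippet that hands the env-var API keys to the maidr
    iframe. Sketch only; the real method lives in environment.py."""
    keys = {
        "OPENAI_API_KEY": os.environ.get("OPENAI_API_KEY", ""),
        "GEMINI_API_KEY": os.environ.get("GEMINI_API_KEY", ""),
    }
    # json.dumps escapes the values safely for embedding in JS source.
    return f"window.llmSecrets = {json.dumps(keys)};"
```

The returned snippet would then be injected once, when the iframe first loads, so the keys are available without the user re-entering them each session.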

Checklist

  • I have read the Contributor Guidelines.
  • I have performed a self-review of my own code and ensured it follows the project's coding standards.
  • I have tested the changes locally following ManualTestingProcess.md, and all tests related to this pull request pass.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation, if applicable.
  • I have added appropriate unit tests, if applicable.

Additional Notes

@dakshpokar marked this pull request as ready for review on August 29, 2024 05:06
@dakshpokar (Collaborator, Author)

Greetings Professor @jooyoungseo,

While this PR addresses the issue of LLM keys being lost due to changes in the port number each time a Maidr instance is run, I believe we should consider a more scalable solution, especially in light of potential future changes. I discussed this matter with @SaaiVenkat, and he shares the concern that these fixes do not address the core problem but rather serve as a workaround for a dependency issue with the library. This issue could have been more easily resolved if py-htmltools provided a way to fix the port.

In my opinion, a more sustainable solution would be to implement a custom server to serve our HTML files, rather than relying on py-htmltools for this functionality. However, this is just my suggestion, and I would appreciate your thoughts on the matter.

Best Regards,
Daksh Pokar

@jooyoungseo (Member)

I had a Slack conversation with @dakshpokar, and we agreed to use a custom local server with a pinned port number for interactive console sessions (e.g., the vanilla Python REPL and the IPython REPL), while keeping the current behavior of htmltools.show() for Shiny and other web widgets. This way, we can preserve the same API keys across sessions. This may be a stopgap solution and there might be a better way to handle this case, but we want to patch it first given how important this issue is for our end-users.
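
For context, the saved keys live in origin-scoped browser storage (presumably localStorage), and an origin includes the port, so a new random port on each run means a new origin and hence "lost" keys. A minimal sketch of the pinned-port idea, using Python's standard library and a hypothetical port number rather than whatever the follow-up PR actually implements:

```python
import http.server
import socketserver
from functools import partial

FIXED_PORT = 8765  # hypothetical pinned port


def serve_html(directory: str) -> None:
    # Serving from a fixed port keeps the page origin stable across
    # sessions, so origin-scoped storage (and any API keys saved in
    # it) persists between runs.
    handler = partial(http.server.SimpleHTTPRequestHandler,
                      directory=directory)
    with socketserver.TCPServer(("127.0.0.1", FIXED_PORT), handler) as httpd:
        httpd.serve_forever()
```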

@SaaiVenkat (Collaborator)

> I had a Slack conversation with @dakshpokar, and we agreed to use a custom local server with a pinned port number for interactive console sessions (e.g., the vanilla Python REPL and the IPython REPL), while keeping the current behavior of htmltools.show() for Shiny and other web widgets. This way, we can preserve the same API keys across sessions. This may be a stopgap solution and there might be a better way to handle this case, but we want to patch it first given how important this issue is for our end-users.

Professor @jooyoungseo, since this is a stopgap solution, I would suggest creating a backlog ticket right away to address it, and linking it to this issue.

cc: @dakshpokar

@jooyoungseo changed the title from "fix: saved LLM API keys are not carried over to the next sessions" to "feat: fetch LLM API keys from user env variables" on Sep 12, 2024
@jooyoungseo (Member)

Update: @dakshpokar and I decided to address the custom local server implementation in a separate PR. However, I vote for the current approach, where users save their API keys in their env variables (e.g., OPENAI_API_KEY; GEMINI_API_KEY), for recurring and stable use across sessions. This approach also aligns with other major libraries (e.g., openai; langchain). I will merge this PR today and create another issue for the custom server.
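
A hedged usage example under this approach (placeholder key values; in practice you would export the variables once in your shell profile rather than set them in code):

```python
import os

# Placeholder values only -- substitute real keys, ideally exported
# once in your shell profile so every session inherits them.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")
os.environ.setdefault("GEMINI_API_KEY", "...")

import maidr  # noqa: E402 -- maidr can now pick the keys up from the env
```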

@jooyoungseo (Member)

@dakshpokar GEMINI_API_KEY seems to be an officially recommended key name: https://github.com/google-gemini/generative-ai-python
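
As a side illustration, not maidr code: Google's own client library can be configured with the same variable name (this assumes the google-generativeai package is installed and the key is exported):

```python
import os

import google.generativeai as genai

# Configure Google's client with the same env var name maidr reads.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
```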

[Resolved review thread on maidr/core/maidr.py (outdated)]
@jooyoungseo (Member) left a comment:


I have tested all the possible edge cases with various env variable combinations. It is working very reliably. LGTM! I am merging it now.

@jooyoungseo merged commit fc84593 into xability:main on Sep 13, 2024
6 checks passed
Linked issue closed by this pull request: #76 (fix(llm): address an issue where saved AI API keys are not carried over to the next sessions)