[Docs] Update docs on data privacy risks involving use of LangSmith and LangFuse #894

Merged · 9 commits · Nov 22, 2024
8 changes: 8 additions & 0 deletions vizro-ai/docs/pages/explanation/safety-in-vizro-ai.md
@@ -68,3 +68,11 @@ Users must exercise caution when executing code generated by or influenced by AI
- Always review and understand the selected model before connecting with `vizro_ai`

To learn more, refer to the section that describes how to [safeguard execution of dynamic code](safeguard.md).


## Debugging in Vizro-AI with LangSmith and LangFuse

[LangSmith](https://docs.smith.langchain.com/) and [LangFuse](https://langfuse.com/docs) are tools designed to enhance transparency and interpretability in AI workflows.
You can use these tools with Vizro-AI to improve debugging and traceability through their observability and evaluation features.
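
As a minimal sketch (not part of this PR), LangSmith tracing can usually be switched on through LangSmith's standard environment variables, which the LangChain calls made by Vizro-AI pick up automatically. The API key, project name, and DataFrame below are illustrative placeholders.

```python
import os

# Standard LangSmith environment variables, read by LangChain at call time.
# Replace the placeholders with your own key and project name.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "vizro-ai-debugging"  # illustrative project name

import pandas as pd
from vizro_ai import VizroAI

df = pd.DataFrame({"x": [1, 2, 3], "y": [2, 4, 6]})

vizro_ai = VizroAI()
fig = vizro_ai.plot(df, "Plot y against x as a line chart")
# The underlying LLM calls now appear as traces in the LangSmith project above.
```

LangFuse offers a comparable LangChain integration based on a callback handler; consult its documentation for the equivalent setup.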

To ensure responsible use of these tools, review their data privacy and security policies. For details, see the [LangFuse](https://langfuse.com/docs/data-security-privacy) and [LangSmith](https://www.langchain.com/privacy-policy) pages.