TruLens Eval v0.21.0
What's changed
- Deduplicated sync/async methods by @piotrm0 in #793
- Refactored groundedness methods by @joshreini1 in #801
- Error on deprecated passthrough methods by @piotrm0 in #803
- Virtual models for logging and evaluating existing data by @piotrm0 in #806
- Rename summarization quality to comprehensiveness by @joshreini1 in #816
- Delete long deprecated TruApp and TruDB by @piotrm0 in #817
- Enable async unit tests by @piotrm0 in #831
- Add generation of test cases by @joshreini1 in #705
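The virtual-models change (#806) is about evaluating data that was produced outside of an instrumented app: you log inputs and outputs you already have, then run feedback over them. As a rough, self-contained sketch of that idea only (these names are hypothetical and are not the trulens_eval API; see the project docs for the real interface):

```python
# Hypothetical sketch of "virtual" evaluation: scoring pre-existing records
# without re-running the application. All names here are illustrative, not
# part of trulens_eval.

def length_feedback(prompt: str, response: str) -> float:
    """Toy feedback function: 1.0 when the response is longer than the prompt."""
    return 1.0 if len(response) > len(prompt) else 0.0

# Records produced earlier by some un-instrumented application.
existing_records = [
    {"prompt": "What is TruLens?",
     "response": "TruLens is an evaluation library for LLM apps."},
    {"prompt": "Say hi", "response": "Hi"},
]

# Attach feedback scores to the stored records after the fact.
for record in existing_records:
    record["length_score"] = length_feedback(record["prompt"], record["response"])
```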
Examples
- Expand evaluation docs by @joshreini1 in #823, including:
  - Running Feedback Functions
  - Feedback Function Selectors
  - Feedback Function Providers
  - Feedback Implementations
  - Generating Test Cases
  - Feedback Evaluations
Bug Fixes
- Add metadata and application tag display in the UI by @joshreini1 in #797
- Fix float precision issue by @joshreini1 in #798
- Fix typo in openai moderation - sexual minors by @joshreini1 in #815
- Include reasoning in summarization eval by @joshreini1 in #815
- Make OpenAI optional by @joshreini1 in #827
New contributors
- @vivekgangasani made their first contribution, updating the AWS JumpStart examples, in #795
Notes
- When feedback mode is set to WITH_APP_THREAD, feedback may be computed more eagerly than expected.
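The pattern behind this note is feedback evaluated on a thread launched alongside the app call, so scoring can begin as soon as a record exists rather than after you inspect results. A minimal sketch of that pattern using plain `threading` (not the trulens_eval implementation; the function names are made up for illustration):

```python
import threading

def feedback(response: str) -> float:
    # Toy feedback function: non-empty responses score 1.0.
    return 1.0 if response else 0.0

results = {}

def run_app_with_feedback(prompt: str) -> str:
    response = prompt.upper()  # stand-in for the real app call
    # WITH_APP_THREAD-style behavior: start feedback on a separate thread as
    # soon as the record is available, without blocking the app's caller.
    t = threading.Thread(target=lambda: results.update(score=feedback(response)))
    t.start()
    t.join()  # joined here only so the example is deterministic
    return response

out = run_app_with_feedback("hello")
```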