Releases: PrefectHQ/ControlFlow
v0.11.4: OpenAI Hotfix
This release includes a hotfix for a bug in the OpenAI client: #390
Full Changelog: v0.11.3...v0.11.4
v0.11.3: Exception-al Service
v0.11.2: Great Success
This is a small release that tweaks the `mark_success` instructions to improve GPT-4o mini responses.
What's Changed
Other Changes 🦾
Full Changelog: v0.11.1...v0.11.2
v0.11.1: Let That Async In
What's Changed
New Features 🎉
Docs 📚
- add syntax highlighting by @jlowin in #357
- Fix broken link and broken code in docs by @discdiver in #359
Other Changes 🦾
- Update test_history.py by @AranavMahalpure in #360
- add slackbot example and update typing + debug logs by @zzstoatzz in #363
- fully rm invalid test by @zzstoatzz in #366
- Add AI labeler by @jlowin in #367
- Pass basemodel attributes directly as kwargs by @jlowin in #369
- Orchestrator: exclude `handlers` field from being serialized by @teocns in #370
New Contributors
- @AranavMahalpure made their first contribution in #360
- @teocns made their first contribution in #370
Full Changelog: v0.11.0...v0.11.1
v0.11.0: Flow Stopper
The 0.11 release adds new ways to control the agentic loop, including custom early termination conditions.
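As an illustration, here is a minimal sketch of the new termination controls; the `run_until` parameter and the combinable `AnyComplete` / `MaxLLMCalls` conditions (and their module path) are assumed from the 0.11 docs rather than taken from this changelog:

```python
import controlflow as cf
from controlflow.orchestration.conditions import AnyComplete, MaxLLMCalls

# Two competing tasks; we don't need both of them to finish.
pro = cf.Task("Argue in favor of the proposal")
con = cf.Task("Argue against the proposal")

# Stop as soon as either task completes, or after 10 LLM calls,
# whichever happens first. Conditions combine with | (or) and & (and).
cf.run_tasks([pro, con], run_until=AnyComplete() | MaxLLMCalls(10))
```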
What's Changed
New Features 🎉
Enhancements 🚀
- Improve type hints by @jlowin in #343
- Improve task rendering by @jlowin in #344
- Allow parent tasks to optionally wait for subtasks by @jlowin in #346
- Allow model_kwargs to be passed to llm API by @jlowin in #347
- Add LLM rules for custom or unrecognized models by @jlowin in #350
- Add compilation flag for removing all system messages by @jlowin in #355
Fixes 🐞
Breaking Changes 🛫
Docs 📚
- "Added in" → "New in" for docs badge by @jlowin in #339
- chore: update settings.py by @eltociear in #352
Other Changes 🦾
Full Changelog: v0.10.0...v0.11.0
v0.10.0: Total Recall
The 0.10 release is headlined by a new memory system that lets agents retain partitioned knowledge across flows and invocations. Agents can have multiple memory modules, each with its own instructions and shared access patterns. Memory is backed by a pluggable provider interface; 0.10 ships with support for Chroma and LanceDB.
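A rough sketch of attaching a memory module to an agent follows; the `cf.Memory` constructor and the `memories=` parameter are assumptions based on the 0.10 docs (the backing provider, e.g. Chroma or LanceDB, is configured separately):

```python
import controlflow as cf

# A named, partitioned memory module; its contents persist across
# flows and invocations and can be shared by multiple agents.
preferences = cf.Memory(
    key="user_preferences",
    instructions="Store and recall the user's stated preferences.",
)

agent = cf.Agent(
    name="Assistant",
    memories=[preferences],  # agents may hold several memory modules
)

cf.run("Help the user plan a weekend trip", agents=[agent])
```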
What's Changed
New Features 🎉
- remove old `typer` extra and update `json` -> `model_dump_json` (LC 0.3+ support) by @zzstoatzz in #312
- Support langchain 0.3 by @jlowin in #321
- Add llm-specific prompt instructions by @jlowin in #322
- Add agent memories by @jlowin in #326
- Bump langchain versions by @jlowin in #328
- Allow completion tools to be customized per-task by @jlowin in #330
- Add chroma cloud configs by @jlowin in #331
- Add LanceDB memory provider by @jlowin in #334
- Support async tasks with task decorator by @jlowin in #337
Fixes 🐞
Breaking Changes 🛫
Docs 📚
- Add docs for multiple labels by @jlowin in #320
- Remove comma from pip install by @ahuang11 in #324
- Add version badges by @jlowin in #329
- Add memory example by @jlowin in #332
- Update all examples by @jlowin in #335
Other Changes 🦾
New Contributors
Full Changelog: v0.9.4...v0.10.0
v0.9.4: Pin It to Win It
v0.9.3: Model Behavior
What's Changed
New Features 🎉
- Support automatic model configuration on agents by @jlowin in #308
- Improve gemini support by @jlowin in #309
Full Changelog: v0.9.2...v0.9.3
v0.9.2: Handle with Care
This release exposes and documents Handlers for users to customize. Handlers provide a way to observe and react to events that occur during task execution, letting users customize logging and monitoring or take specific actions based on the orchestration process.
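A hypothetical sketch of a custom handler is shown below; the `Handler` base class location, the `on_event` hook, and the `handlers=` parameter are assumptions based on the docs, not confirmed by this changelog:

```python
import controlflow as cf
from controlflow.orchestration.handler import Handler


class PrintHandler(Handler):
    """Log every orchestration event as it happens."""

    def on_event(self, event):
        # Could also forward events to a logger or metrics system here.
        print(f"event: {type(event).__name__}")


cf.run("Write a haiku about autumn", handlers=[PrintHandler()])
```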
What's Changed
New Features 🎉
Docs 📚
Full Changelog: v0.9.1...v0.9.2