v0.13.0
🚀 Added
🤯 Next-level workflows
Better integration with Roboflow platform
From now on, we have much better alignment with the UI workflow creator available in the Roboflow app. Just take a look at how nicely it presents itself, thanks to @hansent @EmilyGavrilenko @casmwenger @kresetar @jchens
But good looks are not the only feature - the team has added tons of functionality, including:
- operations on data processed by the `workflows` Execution Engine - including filtering and conditions - can now be built with UI creators
- Roboflow models and projects available to be used are suggested automatically
- a preview option lets you run a workflow that is still under development
- ... and much more - check it out yourself!
`workflows` Universal Query Language (UQL)
We've added Universal Query Language as an extension to the `workflows` ecosystem. We discovered that it would be extremely helpful for users to be able to build chains of transformations (like filtering, selecting only specific bounding boxes, or aggregating results) or expressions that evaluate to booleans. UQL powers UI extensions like the one presented below:
Yes, we know that UQL is not the best name, but like the majority of engineers, we struggle to name the things we create. Please help us in that regard!
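To give a feel for UQL, here is a minimal sketch of an operations chain attached to a workflow step. The step type and operation names below are illustrative assumptions, not the exact schema shipped in this release - check the `workflows` docs for the real block definitions:

```python
# Illustrative sketch only - the step type and operation names are assumptions,
# not the exact UQL schema; they stand in for a chain of transformations.
uql_filter_step = {
    "type": "DetectionsFilter",  # hypothetical UQL-powered block
    "name": "keep_confident_dogs",
    "predictions": "$steps.step_1.predictions",
    "operations": [
        {"type": "FilterByClass", "classes": ["dog"]},     # keep "dog" boxes...
        {"type": "FilterByConfidence", "threshold": 0.8},  # ...above a threshold
    ],
}
```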
`workflows` 🤝 `sv.Detections`
From now on, the default representation of predictions from object-detection, instance-segmentation and keypoint-detection models is `sv.Detections`. That has a lot of practical implications for block creators. Take a look at how easy it is to add a block that makes predictions with your custom model. This was mainly possible thanks to @grzegorz-roboflow
👉 Code snippet with your custom model block fitting our ecosystem
```python
from typing import List, Literal, Type, Union

import supervision as sv

from inference.core.workflows.entities.base import (
    Batch,
    OutputDefinition,
    WorkflowImageData,
)
from inference.core.workflows.entities.types import (
    BATCH_OF_OBJECT_DETECTION_PREDICTION_KIND,
    ImageInputField,
    StepOutputImageSelector,
    WorkflowImageSelector,
)
from inference.core.workflows.prototypes.block import (
    BlockResult,
    WorkflowBlock,
    WorkflowBlockManifest,
)


class BlockManifest(WorkflowBlockManifest):
    type: Literal["MyModel"]
    images: Union[WorkflowImageSelector, StepOutputImageSelector] = ImageInputField

    @classmethod
    def describe_outputs(cls) -> List[OutputDefinition]:
        return [
            OutputDefinition(
                name="predictions", kind=[BATCH_OF_OBJECT_DETECTION_PREDICTION_KIND]
            )
        ]


class MyModelBlock(WorkflowBlock):

    def __init__(self):
        self._model = load_my_model(...)  # placeholder - load your custom model here

    @classmethod
    def get_manifest(cls) -> Type[WorkflowBlockManifest]:
        return BlockManifest

    async def run(self, image: WorkflowImageData) -> BlockResult:
        result = self._model(image)
        # here you need to convert results into sv.Detections - a couple of keys
        # must be added into the .data property; docs covering that will come
        # soon. In case of questions - do not hesitate to ask!
        detections = sv.Detections(...)
        return {"predictions": detections}
```
True conditional branching for SIMD operations in `workflows`
We had a serious technical limitation in previous iterations of the `workflows` Execution Engine - the lack of ability to simulate different execution branches for each element of the data being processed. This is no longer the case! Now it is possible to detect high-level objects, make crops based on detections, and then decide independently for each cropped image whether or not to save it in a Roboflow project - based on a condition stated in UQL 🤯
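To make the idea concrete, here is a rough sketch of such a branching workflow. The step types below ("Crop", "ContinueIf", "RoboflowDatasetUpload") and their fields are illustrative assumptions, not the exact block names shipped in this release:

```python
# Illustrative only - step types and fields are assumptions conveying the idea
# of per-element (SIMD) branching, not the exact shipped schema.
branching_steps = [
    {"type": "ObjectDetectionModel", "name": "detector",
     "image": "$inputs.image", "model_id": "yolov8n-640"},
    # crop the input image around each detected object
    {"type": "Crop", "name": "crops",
     "image": "$inputs.image", "predictions": "$steps.detector.predictions"},
    # UQL boolean statement, evaluated independently for every cropped image
    {"type": "ContinueIf", "name": "gate", "condition": {...}},
    # reached only by crops whose branch passed the condition
    {"type": "RoboflowDatasetUpload", "name": "upload",
     "image": "$steps.crops.crops", "target_project": "my-project"},
]
```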
But this is not everything! As a technical preview, we prepared a rock-paper-scissors game in `workflows`. Check it out here
Advancements in video processing with `workflows`
This feature is still experimental, but we are making progress - now it is possible to process multiple videos at once with `InferencePipeline` and `workflows`:
Screen.Recording.2024-06-27.at.13.22.37.mov
👉 Code snippet
```python
from typing import List, Optional

import cv2
import supervision as sv

from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame
from inference.core.utils.drawing import create_tiles

STOP = False
ANNOTATOR = sv.BoundingBoxAnnotator()


def main() -> None:
    # minimal workflow: run an object-detection model on every frame
    workflow_specification = {
        "version": "1.0",
        "inputs": [
            {"type": "WorkflowImage", "name": "image"},
        ],
        "steps": [
            {
                "type": "ObjectDetectionModel",
                "name": "step_1",
                "image": "$inputs.image",
                "model_id": "yolov8n-640",
                "confidence": 0.5,
            }
        ],
        "outputs": [
            {"type": "JsonField", "name": "predictions", "selector": "$steps.step_1.predictions"},
        ],
    }
    # two (or more) video sources processed by the same workflow at once
    pipeline = InferencePipeline.init_with_workflow(
        video_reference=[
            "<YOUR-VIDEO>",
            "<YOUR-VIDEO>",
        ],
        workflow_specification=workflow_specification,
        on_prediction=workflows_sink,
    )
    pipeline.start()
    pipeline.join()


def workflows_sink(
    predictions: List[Optional[dict]],
    video_frames: List[Optional[VideoFrame]],
) -> None:
    # predictions and frames arrive as lists - one entry per video source
    images_to_show = []
    for prediction, frame in zip(predictions, video_frames):
        if prediction is None or frame is None:
            continue
        detections: sv.Detections = prediction["predictions"]
        visualised = ANNOTATOR.annotate(frame.image.copy(), detections)
        images_to_show.append(visualised)
    tiles = create_tiles(images=images_to_show)
    cv2.imshow("Predictions", tiles)
    cv2.waitKey(1)


if __name__ == '__main__':
    main()
```
Other changes:
- Step Name Property Copy Changes by @yeldarby in #444
- Abstract ImageInputField and RoboflowModelField + Copy Changes by @yeldarby in #445
- Allow CORS by default by @yeldarby in #485
- Add PerspectiveCorrectionBlock and PolygonSimplificationBlock by @grzegorz-roboflow in #441
List of contributors: @EmilyGavrilenko, @casmwenger, @kresetar, @jchens, @yeldarby, @grzegorz-roboflow, @hansent, @SkalskiP, @PawelPeczek-Roboflow
Predictions JSON ➕ visualisation @ Roboflow hosted platform
Previously, clients needed to choose between a visualisation of predictions and the Predictions JSON returned from the `inference` server running on the Roboflow hosted platform. This is no longer the case, thanks to @SolomonLake and #467
```python
from inference_sdk import InferenceHTTPClient, InferenceConfiguration

CLIENT = InferenceHTTPClient(
    api_url="https://detect.roboflow.com/",
    api_key="<YOUR-API-KEY>",
).configure(InferenceConfiguration(
    format="image_and_json",
))

response = CLIENT.infer("<your_image>.jpg", model_id="yolov8n-640")

# check out both:
response["predictions"]    # Predictions JSON
response["visualisation"]  # visualisation of predictions
```
🌱 Changed
- Fixing yolov10 documentation by @nathan-marraccini in #480
- Supervision updates for Predict on a Video, Webcam or RTSP Stream Page by @nathan-marraccini in #477
- Add paligemma aliases for newly uploaded models by @probicheaux in #463
- Add PaliGemma LoRA by @probicheaux in #464
- Bump braces from 3.0.2 to 3.0.3 in /inference/landing by @dependabot in #466
- Fix security vulnerabilities by @PawelPeczek-Roboflow in #483
🥇 New Contributors
- @nathan-marraccini made their first contribution in #480
Full Changelog: v0.12.1...v0.13.0