Description:
I discovered the problem while using the Label Studio ML Backend to label a video with multiple objects to track. The issue occurs consistently with every video that contains multiple objects, regardless of their complexity or duration: all objects in a task receive the same label in the user interface, regardless of the distinct labels assigned by the model.

Key Issue:
When annotations are updated from the model's response, the Label Studio interface is expected to display a distinct label for each object. Instead, it displays an identical label for all objects, even when the labels returned by the model differ. This happens both when creating new annotations and when updating existing ones.

Request:
Ensure that when multiple objects are present in the predictions, each one is displayed in the UI with its own label.

Steps to Reproduce:
1. Start from the label config template for video object tracking: https://labelstud.io/templates/video_object_detector.
2. Configure the Label Studio ML Backend to return the following dummy prediction. A "dummy prediction" here is a sample output generated by the ML backend to simulate predictions for multiple tracked objects, each uniquely labeled, matching the intended workflow of assigning distinct labels to different objects in the Label Studio UI.
3. Observe that all objects in the UI are incorrectly assigned the label of the first object from the predictions, instead of their respective unique labels as provided by the model.
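The dummy prediction itself did not survive formatting here, so below is a hedged sketch of what such a payload could look like for two tracked objects, written as the Python dict an ML backend would return. The `videorectangle` result type and the `box`/`video` names are assumptions taken from the video_object_detector template's defaults, and `make_track` is a hypothetical helper, not part of any Label Studio API; adjust the names to your label config.

```python
def make_track(track_id, label, boxes, frames_count, duration):
    """Build one videorectangle result item for a single tracked object.

    `boxes` is a list of (frame, x, y, width, height) tuples, with
    coordinates in percent of the video frame, as Label Studio expects.
    """
    return {
        "id": track_id,
        "type": "videorectangle",
        "from_name": "box",     # assumed VideoRectangle tag name
        "to_name": "video",     # assumed Video tag name
        "value": {
            "framesCount": frames_count,
            "duration": duration,
            "sequence": [
                {
                    "frame": frame,
                    "x": x, "y": y, "width": w, "height": h,
                    "enabled": True,
                    "rotation": 0,
                }
                for frame, x, y, w, h in boxes
            ],
            # Each object carries its own label here; the reported bug is
            # that the UI shows the first object's label for every track.
            "labels": [label],
        },
    }

prediction = {
    "model_version": "dummy-0.1",
    "score": 0.95,
    "result": [
        make_track("track_1", "Man",
                   [(1, 10.0, 10.0, 20.0, 15.0), (20, 12.0, 11.0, 20.0, 15.0)],
                   frames_count=100, duration=4.0),
        make_track("track_2", "Car",
                   [(1, 50.0, 60.0, 25.0, 10.0), (20, 52.0, 61.0, 25.0, 10.0)],
                   frames_count=100, duration=4.0),
    ],
}
```

With this payload, the expected UI behavior is one "Man" track and one "Car" track; the observed behavior is both tracks labeled with the first object's label.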
The text was updated successfully, but these errors were encountered:
I tested YOLO based on your suggestion. I can confirm the following:
It works only if no boxes are present before the model's prediction is triggered. When the model is called on task load (you select the task, it opens, and the prediction starts automatically without any intervention), everything works as expected.
However, if you manually label an object and then click the class label to trigger a prediction, all objects are displayed with the same label once the prediction returns, even though the ML backend itself returns the correct labels.
This seems to work for models like YOLO that ignore user input, but it does not work at all for models like SAM2, which require one or more prompts as initial input.
Additionally, even for YOLO it fails when the user adds boxes the model cannot detect on its own; the problem then reoccurs.
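For the prompt-driven case described above, the backend at least needs to keep each user-drawn box paired with its own label when building results. The sketch below is a minimal illustration of that pairing; it assumes the interactive-prediction context Label Studio sends exposes the user-drawn regions under a `result` key (verify against your Label Studio version), and the actual tracking step is stubbed out.

```python
def predict_with_context(tasks, context=None):
    """Return one result item per prompted region, preserving its label.

    A real SAM2-style backend would run the tracker for each prompt; here
    we simply echo each prompt's frame back as a one-frame sequence so the
    label pairing is easy to follow.
    """
    results = []
    for region in (context or {}).get("result", []):
        # Label of the box the user drew for this specific object.
        label = region["value"]["labels"][0]
        results.append({
            "id": region["id"],
            "type": "videorectangle",
            "from_name": region["from_name"],
            "to_name": region["to_name"],
            "value": {
                "sequence": region["value"].get("sequence", []),
                # One label per object, taken from its own prompt -- never
                # the label of the first object in the batch.
                "labels": [label],
            },
        })
    return [{"result": results, "model_version": "sketch-0.1"}]
```

Even with the backend structured this way, the UI still collapses all tracks to the first label, which is why this looks like a frontend issue rather than a backend one.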