DISCLAIMERS: Some features described below are under development (see the feature/otx branch). You can find a more detailed timeline in the Roadmap section below.
OpenVINO™ Training Extensions (OTE) is a command-line interface (CLI) framework designed for low-code deep learning model training. OTE lets developers train, run inference with, and optimize models using a diverse combination of model architectures and learning methods built on the OpenVINO™ toolkit. For example, users can train a ResNet18-based SSD (Single Shot Detection) model in a semi-supervised manner without setting up a configuration manually. The `ote build` and `ote train` commands automatically analyze the user's dataset and perform the tasks necessary to train the model with the best configuration. OTE provides the following features:
- Provide a set of pre-configured models for a quick start
  `ote find` helps you quickly find the best pre-configured models for common task types like classification, detection, segmentation, and anomaly analysis.
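As an illustration, a typical `ote find` invocation might look like the following; the exact flag names are an assumption, so check `ote find --help` for the supported options:

```shell
# List pre-configured model templates for a given task type.
# NOTE: the --task flag name is an assumption; run `ote find --help` to confirm.
ote find --task detection
```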
- Configure and train a model from torchvision or the OpenVINO Model Zoo (OMZ)
  `ote build` can help you configure your own model based on torchvision and OpenVINO Model Zoo models. You can replace backbones, necks, and heads to suit your preference (currently only backbones are supported).
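A hypothetical backbone-replacement flow could look like this; the template placeholder and flag name are illustrative assumptions, not the confirmed interface:

```shell
# Build a workspace from a model template, then swap in a torchvision backbone.
# NOTE: the --backbone flag is an illustrative assumption; see `ote build --help`.
ote build <model-template> --backbone torchvision.resnet18
```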
- Provide several learning methods, including supervised, semi-supervised, imbalanced, class-incremental, and self-supervised representation learning
  `ote build` helps you automatically identify the best learning method for your data and model. All you need to do is provide your data in a supported format. If you don't specify a model, the framework automatically selects the best model for you. For example, if your dataset has long-tailed and partially-annotated bounding box annotations, the OTE auto-configurator will choose a semi-supervised imbalanced-learning method and an appropriate model with the best parameters.
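For example, pointing `ote train` at a dataset in a supported format is, in principle, all that is needed for the auto-configurator to pick the method and model. The flag names below are assumptions; verify them with `ote train --help`:

```shell
# Train with auto-configuration: OTE analyzes the dataset and selects
# a learning method and model automatically.
# NOTE: flag names are assumptions; check `ote train --help` for the exact options.
ote train --train-data-roots ./data/train --val-data-roots ./data/val
```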
- Integrated, efficient hyper-parameter optimization
  OTE has an integrated, efficient hyper-parameter optimization module, so you don't need to worry about searching for the right hyper-parameters. Through a dataset proxy and the built-in hyper-parameter optimizer, you can get much faster hyper-parameter optimization compared to other off-the-shelf tools. The hyper-parameter optimization is dynamically scheduled based on your resource budget.
- Support widely-used annotation formats
  OTE uses Datumaro, which is designed for dataset building and transformation, as its default interface for dataset management. All formats supported by Datumaro are also consumable by OTE without explicit data conversion. If you want to build your own custom dataset format, you can do so via the Datumaro CLI and API.
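For instance, converting a dataset between formats with the Datumaro CLI might look like this; the flag names follow the commonly documented `datum convert` command, so verify with `datum convert --help`:

```shell
# Convert a Pascal VOC dataset to COCO format with Datumaro,
# producing a dataset that OTE can consume directly.
datum convert -if voc -i ./my_voc_dataset -f coco -o ./my_coco_dataset
```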
- Installation through PyPI
- Package will be renamed to OTX (OpenVINO Training eXtensions)
- CLI update
  - Update the `find` command to find configurations of tasks/algorithms
  - Introduce the `build` command to customize task or model configurations
  - Automatic algorithm selection for the `train` command using the given input dataset
- Update
  - Adaptation of the Datumaro component as a dataset interface
  - Integrate hyper-parameter optimization
  - Support the action recognition task
- SDK/API update
- Components
- Branches
To get started with OpenVINO™ Training Extensions, see the quick-start guide.
Deep Learning Deployment Toolkit is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
Please use the Issues tab for bug reports, feature requests, or any questions.
Please read the Contribution guide before starting work on a pull request.
Training, export, and evaluation scripts for TensorFlow-based and most PyTorch-based models from the misc branch are currently not production-ready. They serve exploratory purposes and are not validated.
* Other names and brands may be claimed as the property of others.