This repository contains a stripped down implementation of essentially just the core optimizer and config space description APIs from the original MLOS, as well as the `mlos-bench` module intended to help automate and manage running experiments for autotuning systems with `mlos-core`.
It is intended to provide a simplified abstraction that is easier to consume (e.g. via `pip`) and has fewer dependencies, in order to:

- describe a space of context, parameters, their ranges, constraints, etc. and result objectives
- provide an "optimizer" service abstraction (e.g. `register()` and `suggest()`) so we can easily swap out different implementation methods of searching (e.g. random, BO, etc.); see the sketch below
- provide some helpers for automating optimization experiment runner loops and data collection
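To make that abstraction concrete, here is a minimal, self-contained sketch of what a `suggest()`/`register()` loop looks like. The class and parameter names below are illustrative only and are not the actual `mlos_core` API; see the `BayesianOptimization.ipynb` notebook referenced in the steps below for real usage.

```python
# Conceptual sketch only: not the actual mlos_core API; names and signatures are illustrative.
import random
from typing import Protocol


class Optimizer(Protocol):
    """The service abstraction: any search strategy exposes suggest() and register()."""

    def suggest(self) -> dict:
        """Propose the next configuration to evaluate."""
        ...

    def register(self, config: dict, score: float) -> None:
        """Report the observed score for a previously suggested configuration."""
        ...


class RandomSearch:
    """One interchangeable implementation (others could be Bayesian optimization, etc.)."""

    def __init__(self, space: dict) -> None:
        self._space = space       # parameter name -> (low, high) integer range
        self._observations = []   # (config, score) pairs seen so far

    def suggest(self) -> dict:
        return {name: random.randint(low, high) for name, (low, high) in self._space.items()}

    def register(self, config: dict, score: float) -> None:
        self._observations.append((config, score))


# Toy experiment loop over a single integer parameter with a made-up objective.
optimizer: Optimizer = RandomSearch({"buffer_size_mb": (1, 1024)})
for _ in range(10):
    config = optimizer.suggest()
    score = abs(config["buffer_size_mb"] - 512)   # stand-in for a real benchmark result
    optimizer.register(config, score)
```

Because every search strategy sits behind the same two methods, the surrounding experiment loop does not need to change when one implementation is swapped for another.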
For these design requirements we intend to reuse as much as possible from existing OSS libraries and layer policies and optimizations specifically geared towards autotuning on top.
The development environment for MLOS uses `conda` to ease dependency management.
For a quick start, you can use the provided VSCode devcontainer configuration.
Simply open the project in VSCode and follow the prompts to build and open the devcontainer; the conda environment and additional tools will be installed automatically inside the container.
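Alternatively, the devcontainer can be built and started from a terminal using the separately installed devcontainers CLI; this is an optional convenience and an assumption about your tooling, not something the repository requires:

```sh
# Build and start the devcontainer from the repository root.
# Requires the devcontainers CLI (e.g. npm install -g @devcontainers/cli).
devcontainer up --workspace-folder .
```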
See Also: `conda` install instructions

Note: to support Windows we currently rely on some pre-compiled packages from `conda-forge` channels, which increases the `conda` solver time during environment create/update. To work around this, the (currently) experimental `libmamba` solver can be used. See https://github.com/conda-incubator/conda-libmamba-solver#getting-started for more details.
- Create the `mlos` Conda environment.

  ```sh
  conda env create -f conda-envs/mlos.yml
  ```

  See the `conda-envs/` directory for additional conda environment files, including those used for Windows (e.g. `mlos-windows.yml`).

  or

  ```sh
  # This will also ensure the environment is up to date using "conda env update -f conda-envs/mlos.yml"
  make conda-env
  ```

  Note: the latter expects a *nix environment.
- Initialize the shell environment.

  ```sh
  conda activate mlos
  ```
- For an example of using the `mlos_core` optimizer APIs, run the `BayesianOptimization.ipynb` notebook.
- For an example of using the `mlos_bench` tool to run an experiment, see the `mlos_bench` Quickstart README.

  Here's a quick summary:

  ```sh
  # get an azure token
  ./scripts/generate-azure-credentials-config.sh

  # run a simple experiment
  mlos_bench --config ./mlos_bench/mlos_bench/config/cli/azure-redis-1shot.jsonc
  ```

  See Also: mlos_bench/config for additional configuration details.
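Before running the full Azure example, the CLI's built-in help is a quick way to see which options and config arguments it accepts (shown as a generic invocation; the exact flags may differ between versions):

```sh
# Show the available command line options (run inside the activated mlos environment).
mlos_bench --help
```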
- Build the wheel file(s).

  ```sh
  make dist
  ```
- Install it (e.g. after copying it somewhere else).

  ```sh
  # this will install just the optimizer component with emukit support:
  pip install dist/mlos_core-0.1.0-py3-none-any.whl[emukit]

  # this will install just the optimizer component with flaml support:
  pip install dist/mlos_core-0.1.0-py3-none-any.whl[flaml]

  # this will install just the optimizer component with smac and flaml support:
  pip install dist/mlos_core-0.1.0-py3-none-any.whl[smac,flaml]

  # this will install both the optimizer and the experiment runner:
  pip install dist/mlos_bench-0.1.0-py3-none-any.whl
  ```

  Note: exact versions may differ due to automatic versioning.
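As an optional sanity check that the wheels installed correctly (assuming only that the packages import under the module names `mlos_core` and `mlos_bench` used throughout this README):

```sh
# Verify that the installed packages can be imported.
python -c "import mlos_core, mlos_bench; print('ok')"
```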
- API and Examples Documentation: https://aka.ms/mlos-core/docs
- Source Code Repository: https://aka.ms/mlos-core/src