Forked from microsoft/MLOS.

MLOS is a Data Science powered infrastructure and methodology to democratize and automate Performance Engineering. MLOS enables continuous, instance-based, robust, and trackable systems optimization.

MLOS


This repository contains a stripped-down implementation of the original MLOS, reduced to essentially the core optimizer and config space description APIs, plus the mlos-bench module, which helps automate and manage experiments for autotuning systems with mlos-core.

It is intended to provide a simplified, lower-dependency abstraction that is easier to consume (e.g. via pip) and that can:

  • describe a space of context, parameters (with their ranges and constraints), and result objectives
  • expose an "optimizer" service abstraction (e.g. register() and suggest()) so that different search implementations (e.g. random search, Bayesian optimization) can be swapped out easily
  • provide helpers for automating optimization experiment runner loops and data collection

For these design requirements we intend to reuse as much as possible from existing OSS libraries and layer autotuning-specific policies and optimizations on top.
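As an illustration of the "optimizer" service abstraction above, here is a minimal random-search sketch. The class and method names are illustrative only, not the actual mlos_core API: the point is that register() and suggest() form a narrow interface behind which search strategies can be swapped.

```python
import random


class RandomSearchOptimizer:
    """Illustrative optimizer service: suggest() proposes configs,
    register() records their observed objective values."""

    def __init__(self, param_ranges):
        # param_ranges: name -> (low, high) for numeric parameters
        self.param_ranges = param_ranges
        self.observations = []  # list of (config, score) pairs

    def suggest(self):
        """Propose a configuration by sampling each parameter uniformly."""
        return {name: random.uniform(low, high)
                for name, (low, high) in self.param_ranges.items()}

    def register(self, config, score):
        """Record the measured objective value for a configuration."""
        self.observations.append((config, score))

    def best(self):
        """Return the best (lowest-score) observation seen so far."""
        return min(self.observations, key=lambda pair: pair[1])
```

A Bayesian optimizer would implement the same two methods but use the registered observations to guide suggest(); swapping implementations leaves the surrounding experiment loop unchanged.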

Getting Started

The development environment for MLOS uses conda to ease dependency management.

Devcontainer

For a quick start, you can use the provided VSCode devcontainer configuration.

Simply open the project in VSCode and follow the prompts to build and open the devcontainer; the conda environment and additional tools will be installed automatically inside the container.

Manually

See Also: conda install instructions

Note: to support Windows we currently rely on some pre-compiled packages from conda-forge channels, which increases the conda solver time during environment create/update.

To work around this, the (currently experimental) libmamba solver can be used.

See https://github.com/conda-incubator/conda-libmamba-solver#getting-started for more details.
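For example, something along these lines (the exact flag names have changed across conda releases, so treat this as a sketch and check the link above for current instructions):

```shell
# install the libmamba solver into the base environment
conda install -n base conda-libmamba-solver

# use it for a single environment creation
conda env create -f conda-envs/mlos.yml --solver=libmamba
```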

  1. Create the mlos Conda environment.

    conda env create -f conda-envs/mlos.yml

    See the conda-envs/ directory for additional conda environment files, including those used for Windows (e.g. mlos-windows.yml).

    or

    # This will also ensure the environment is up to date, using "conda env update -f conda-envs/mlos.yml"
    make conda-env

    Note: the latter expects a *nix environment.

  2. Initialize the shell environment.

    conda activate mlos
  3. For an example of using the mlos_core optimizer APIs run the BayesianOptimization.ipynb notebook.

  4. For an example of using the mlos_bench tool to run an experiment, see the mlos_bench Quickstart README.

    Here's a quick summary:

    # get an azure token
    ./scripts/generate-azure-credentials-config.sh
    
    # run a simple experiment
    mlos_bench --config ./mlos_bench/mlos_bench/config/cli/azure-redis-1shot.jsonc

    See Also: mlos_bench/config for additional configuration details.
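Conceptually, what mlos_bench automates is an outer loop of the following shape. This is a pure-Python sketch with a stand-in benchmark function; the real tool drives remote resources (e.g. an Azure VM running Redis) from the JSON configs instead, and the function and parameter names here are invented for illustration.

```python
import random


def run_benchmark(config):
    """Stand-in for a real benchmark run.
    Toy objective: the closer buffer_mb is to 64, the better (lower) the score."""
    return abs(config["buffer_mb"] - 64)


def autotune(trials=20, seed=0):
    """Suggest a config, run the benchmark, record the result, repeat."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        config = {"buffer_mb": rng.uniform(8, 256)}  # suggest()
        score = run_benchmark(config)                # run the experiment
        results.append((config, score))              # register()
    return min(results, key=lambda pair: pair[1])    # best config found
```

In the real tool, the suggest/register half of the loop is delegated to an mlos_core optimizer and the benchmark half to the environments described in the experiment config.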

Distributing

  1. Build the wheel file(s)

    make dist
  2. Install it (e.g. after copying it somewhere else).

    # this will install just the optimizer component with emukit support:
    pip install "dist/mlos_core-0.1.0-py3-none-any.whl[emukit]"
    
    # this will install just the optimizer component with flaml support:
    pip install "dist/mlos_core-0.1.0-py3-none-any.whl[flaml]"
    
    # this will install just the optimizer component with smac and flaml support:
    pip install "dist/mlos_core-0.1.0-py3-none-any.whl[smac,flaml]"
    
    # this will install both the optimizer and the experiment runner:
    pip install dist/mlos_bench-0.1.0-py3-none-any.whl

    Note: exact versions may differ due to automatic versioning.
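Since the exact version string is generated automatically, you can query what actually got installed using only the standard library (the distribution names mlos-core and mlos-bench are assumed here):

```python
from importlib import metadata


def installed_version(package):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None


for pkg in ("mlos-core", "mlos-bench"):
    print(pkg, installed_version(pkg) or "not installed")
```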
