Scaling and distributing Neural Architecture Search (NAS) algorithms with Ray, a framework for distributed machine learning.
- DARTS
  - Uses the official PyTorch implementation
  - CNN only
- ENAS
- Random NAS
  - Same search space as ENAS, but without the RNN controller
  - CNN only
You can search for both CNN and RNN architectures from a single entry point, main.py.
First choose an algorithm from darts, enas, or random, which correspond to Differentiable Architecture Search, Efficient Neural Architecture Search, and a simple random-sampling approach that reduces NAS to hyperparameter tuning.
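A minimal sketch of what such a single entry point might look like is shown below. The flag names mirror the usage examples that follow, but the parser layout and the run_* helpers are assumptions for illustration, not the repository's actual code.

```python
import argparse

# Placeholder dispatch helpers; the real entry point would launch
# the corresponding search via Ray Tune instead of printing.
def run_darts(args):
    print(f"would launch a DARTS {args.model} search on {args.dataset}")

def run_enas(args):
    print(f"would launch an ENAS {args.model} search")

def run_random(args):
    print(f"would launch a random {args.model} search on {args.dataset}")

def main():
    parser = argparse.ArgumentParser(description="NAS with Ray Tune")
    parser.add_argument("algorithm", choices=["darts", "enas", "random"])
    parser.add_argument("model", choices=["cnn", "rnn"])
    parser.add_argument("--dataset", default="cifar10")
    parser.add_argument("--layers", type=int, default=2)
    parser.add_argument("--num_blocks", type=int, default=4)
    parser.add_argument("--cuda", action="store_true")
    args = parser.parse_args()

    dispatch = {"darts": run_darts, "enas": run_enas, "random": run_random}
    dispatch[args.algorithm](args)

if __name__ == "__main__":
    main()
```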
To search for a CNN architecture on CIFAR-10 with DARTS:
python main.py darts cnn --dataset cifar10 --layers 2 --cuda
To search for an RNN architecture on Penn Treebank (ptb) with ENAS:
python main.py enas rnn --num_blocks 4 --cuda
To search for a CNN architecture on CIFAR-10 with simple random NAS:
python main.py random cnn --dataset cifar10 --cuda
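Because the random approach treats each sampled architecture as an ordinary hyperparameter configuration, it maps directly onto a Ray Tune sweep. The sketch below only illustrates the idea: the operation list, the valid_acc metric name, and the train_and_eval stub are assumptions rather than the repository's actual search space or training loop, and tune.report assumes a Ray 1.x-style function API.

```python
import random
from ray import tune

# Illustrative candidate operations for each node in an ENAS-style cell.
OPS = ["sep_conv_3x3", "sep_conv_5x5", "max_pool_3x3", "avg_pool_3x3", "skip_connect"]

def train_and_eval(config):
    # Placeholder: a real implementation would build the sampled cell,
    # train it on cifar10, and return validation accuracy.
    return random.random()

def train_sampled_architecture(config):
    acc = train_and_eval(config)
    tune.report(valid_acc=acc)  # report the metric for this sampled architecture

# Randomly sample an operation for each of four nodes; Ray Tune's default
# random search then turns NAS into plain hyperparameter tuning.
search_space = {f"node_{i}_op": tune.choice(OPS) for i in range(4)}

analysis = tune.run(
    train_sampled_architecture,
    config=search_space,
    num_samples=8,                   # number of architectures to sample
    resources_per_trial={"cpu": 1},  # use {"gpu": 1} for a real search
)
print(analysis.get_best_config(metric="valid_acc", mode="max"))
```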
Monitor runs in the Ray dashboard, or by launching TensorBoard on the experiment directory created by Ray Tune:
tensorboard --logdir="~/ray_results/exp/"
Find the location of the trial that produced the desired architecture and run the entry point in visualize mode:
python main.py viz --load <path_to_trial> --viz
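One way to locate that trial programmatically is Ray Tune's ExperimentAnalysis. The metric name below is an assumption about what the trainables report, and depending on the Ray version you may need to pass the experiment_state .json file instead of the experiment directory.

```python
import os
from ray.tune import ExperimentAnalysis

# Point at the experiment directory created by Ray Tune.
analysis = ExperimentAnalysis(os.path.expanduser("~/ray_results/exp"))

# "valid_acc" is assumed; substitute whatever metric the search reports.
best_logdir = analysis.get_best_logdir(metric="valid_acc", mode="max")
print(best_logdir)  # pass this path as <path_to_trial> to the viz command above
```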
TBD
- DARTS
  - RNN implementation
  - Results
- ENAS
  - CNN implementation
  - Results
- RandomNAS
  - Search space expansion for CNN
  - RNN implementation
  - Results