diff --git a/README.md b/README.md
index f8565d085..a1818caba 100755
--- a/README.md
+++ b/README.md
@@ -6,10 +6,38 @@
 While early AutoML frameworks focused on optimizing traditional ML pipelines and
 Auto-PyTorch is mainly developed to support tabular data (classification, regression).
 The newest features in Auto-PyTorch for tabular data are described in the paper ["Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL"](https://arxiv.org/abs/2006.13799) (see below for bibtex ref).
+Also, find the documentation [here](https://automl.github.io/Auto-PyTorch/development).

 ***From v0.1.0, AutoPyTorch has been updated to further improve usability, robustness and efficiency by using SMAC as the underlying optimization package as well as changing the code structure. Therefore, moving from v0.0.2 to v0.1.0 will break compatibility. In case you would like to use the old API, you can find it at [`master_old`](https://github.com/automl/Auto-PyTorch/tree/master-old).***

+## Workflow
+
+A rough overview of the Auto-PyTorch workflow is shown in the following figure.
+
+![Auto-PyTorch workflow](figs/apt_workflow.png)
+
+In the figure, **Data** is provided by the user, and
+**Portfolio** is a set of neural network configurations that work well on diverse datasets.
+The current version only supports the *greedy portfolio* described in the paper *Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL*.
+This portfolio is used to warm-start the optimization of SMAC,
+i.e. the portfolio configurations are evaluated on the provided data as the initial configurations.
+The API then runs the following steps (see the usage sketch after this list):
+1. **Validate input data**: Process each data type, e.g. encode categorical data, so that Auto-PyTorch can handle it.
+2. **Create dataset**: Create a dataset that can be handled by this API, with a choice of cross-validation or holdout splits.
+3. **Evaluate baselines** *1: Train each algorithm in the predefined pool with a fixed hyperparameter configuration, plus a dummy model from `sklearn.dummy` that represents the worst possible performance.
+4. **Search by [SMAC](https://github.com/automl/SMAC3)**:\
+   a. Determine the budget and cut-off rules by [Hyperband](https://jmlr.org/papers/volume18/16-558/16-558.pdf)\
+   b. Sample a pipeline hyperparameter configuration *2 by SMAC\
+   c. Update the observations with the obtained results\
+   d. Repeat a. to c. until the budget runs out
+5. **Build the best ensemble** for the provided dataset from the observations, using [ensemble selection](https://www.cs.cornell.edu/~caruana/ctp/ct.papers/caruana.icml04.icdm06long.pdf).
+
+*1: Baselines are a predefined pool of machine learning algorithms, e.g. LightGBM and support vector machines, that solve either a regression or a classification task on the provided dataset.
+
+*2: A pipeline hyperparameter configuration specifies the choice of components in each step, e.g. the target algorithm or the shape of the neural network, together with their corresponding hyperparameters.
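+
+The steps above are triggered through the tabular APIs. The following is a minimal sketch for classification with the v0.1.0 tabular classification API; the dataset, time limits and metric here are illustrative choices, not recommended defaults:
+
+```python
+from autoPyTorch.api.tabular_classification import TabularClassificationTask
+
+# Data and train/test split
+import sklearn.datasets
+import sklearn.model_selection
+
+X, y = sklearn.datasets.load_digits(return_X_y=True)
+X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, random_state=1)
+
+# Initialise the Auto-PyTorch API
+api = TabularClassificationTask()
+
+# Steps 1.-4.: validate the data, create the dataset, evaluate the baselines
+# and let SMAC search for good pipeline configurations
+api.search(
+    X_train=X_train,
+    y_train=y_train,
+    X_test=X_test,
+    y_test=y_test,
+    optimize_metric='accuracy',
+    total_walltime_limit=300,
+    func_eval_time_limit_secs=50,
+)
+
+# Step 5.: predict with the ensemble built from the observations
+y_pred = api.predict(X_test)
+print("Accuracy score", api.score(y_pred, y_test))
+```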
+
 ## Installation
 ### Manual Installation
@@ -25,8 +53,8 @@
 We recommend using Anaconda for developing as follows:
 git submodule update --init --recursive
 # Create the environment
-conda create -n autopytorch python=3.8
-conda activate autopytorch
+conda create -n auto-pytorch python=3.8
+conda activate auto-pytorch
 conda install swig
 cat requirements.txt | xargs -n 1 -L 1 pip install
 python setup.py install
diff --git a/figs/apt_workflow.png b/figs/apt_workflow.png
new file mode 100644
index 000000000..5e3f048b9
Binary files /dev/null and b/figs/apt_workflow.png differ