Auto-PyTorch is mainly developed to support tabular data (classification, regression).
The newest features in Auto-PyTorch for tabular data are described in the paper ["Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL"](https://arxiv.org/abs/2006.13799) (see below for bibtex ref).
Also, find the documentation [here](https://automl.github.io/Auto-PyTorch/development).
***From v0.1.0, AutoPyTorch has been updated to further improve usability, robustness and efficiency by using SMAC as the underlying optimization package as well as changing the code structure. Therefore, moving from v0.0.2 to v0.1.0 will break compatibility.
In case you would like to use the old API, you can find it at [`master_old`](https://github.com/automl/Auto-PyTorch/tree/master-old).***
## Workflow
The figure below gives a rough overview of the Auto-PyTorch workflow.
<img src="figs/apt_workflow.png" width="500">
In the figure, **Data** is provided by the user, and **Portfolio** is a set of neural network configurations that work well on diverse datasets.
The current version supports only the *greedy portfolio*, as described in the paper *Auto-PyTorch Tabular: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL*.
This portfolio is used to warm-start the SMAC optimization; in other words, the portfolio configurations are evaluated first, as initial configurations, on the provided data.
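Conceptually, warm-starting with a portfolio just means the fixed portfolio configurations are evaluated before any newly sampled ones. The sketch below illustrates this idea in plain Python; the configuration keys, the toy `evaluate` objective, and the `search` function are illustrative assumptions, not Auto-PyTorch's actual API:

```python
import random

# Hypothetical evaluation: score a configuration on the user's data.
# In Auto-PyTorch this would train a network and return a validation score.
def evaluate(config, data):
    lr, width = config["lr"], config["width"]
    return 1.0 - abs(lr - 0.01) - abs(width - 64) / 1000  # toy objective

portfolio = [  # configurations known to work well on diverse datasets
    {"lr": 0.01, "width": 64},
    {"lr": 0.1, "width": 128},
    {"lr": 0.001, "width": 32},
]

def search(data, n_iterations=10, seed=0):
    rng = random.Random(seed)
    # Warm start: the portfolio is evaluated before any sampled configuration.
    history = [(cfg, evaluate(cfg, data)) for cfg in portfolio]
    for _ in range(n_iterations - len(portfolio)):
        cfg = {"lr": 10 ** rng.uniform(-4, -1), "width": rng.randint(16, 256)}
        history.append((cfg, evaluate(cfg, data)))
    return max(history, key=lambda t: t[1])

best_cfg, best_score = search(data=None)
```

In the real system the sampler is SMAC rather than random search, so the portfolio results also seed SMAC's surrogate model instead of merely being compared against.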
Then the API runs the following procedures:
1. **Validate input data**: Process each data type, e.g. encode categorical data, so that Auto-PyTorch can handle it.
2. **Create dataset**: Create a dataset that the API can handle, with a choice of cross-validation or holdout splits.
3. **Evaluate baselines** \*1: Train each algorithm in the predefined pool with a fixed hyperparameter configuration, as well as a dummy model from `sklearn.dummy` that represents the worst possible performance.
4. **Search by [SMAC](https://github.com/automl/SMAC3)**:\
a. Determine budget and cut-off rules by [Hyperband](https://jmlr.org/papers/volume18/16-558/16-558.pdf)\
b. Sample a pipeline hyperparameter configuration \*2 by SMAC\
c. Update the observations with the obtained results\
d. Repeat a. -- c. until the budget runs out
5. **Build ensemble**: Build the best ensemble for the provided dataset from the observations, using [ensemble selection](https://www.cs.cornell.edu/~caruana/ctp/ct.papers/caruana.icml04.icdm06long.pdf).
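Step 5 follows the greedy ensemble-selection idea of Caruana et al.: start from an empty ensemble and repeatedly add, with replacement, the model whose inclusion most improves the averaged validation prediction. A minimal sketch on toy regression predictions (the function name, data, and MSE objective are illustrative, not Auto-PyTorch internals):

```python
def ensemble_selection(predictions, targets, n_rounds=5):
    """Greedily pick model indices (with replacement) that minimize
    the MSE of the averaged ensemble prediction on validation data."""
    def mse(pred):
        return sum((p - t) ** 2 for p, t in zip(pred, targets)) / len(targets)

    chosen, running_sum = [], [0.0] * len(targets)
    for _ in range(n_rounds):
        best_idx, best_err = None, float("inf")
        for idx, pred in enumerate(predictions):
            # Error if this model were added to the current ensemble.
            candidate = [(s + p) / (len(chosen) + 1)
                         for s, p in zip(running_sum, pred)]
            err = mse(candidate)
            if err < best_err:
                best_idx, best_err = idx, err
        chosen.append(best_idx)
        running_sum = [s + p for s, p in zip(running_sum, predictions[best_idx])]
    return chosen

# Toy validation predictions of three models, plus the true targets.
preds = [[1.0, 2.0, 3.0],   # model 0: perfect
         [1.5, 2.5, 3.5],   # model 1: biased high
         [0.5, 1.5, 2.5]]   # model 2: biased low
targets = [1.0, 2.0, 3.0]
selected = ensemble_selection(preds, targets, n_rounds=3)
```

Because models may be chosen multiple times, the multiset of selected indices implicitly defines the ensemble weights.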
\*1: Baselines are a predefined pool of machine learning algorithms, e.g. LightGBM and support vector machines, used to solve either a regression or a classification task on the provided dataset.
\*2: A pipeline hyperparameter configuration specifies the choice of components in each step, e.g. the target algorithm or the shape of the neural network, and their corresponding hyperparameters.
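The budget and cut-off rules in step 4a follow Hyperband's successive-halving geometry: each rung multiplies the budget (e.g. epochs) by a factor `eta`, and only roughly the best `1/eta` fraction of configurations survives to the next rung. A small sketch computing such a budget schedule (the function and parameter names are illustrative, not Auto-PyTorch's API):

```python
import math

def successive_halving_schedule(min_budget, max_budget, eta=3):
    """Budgets for one Hyperband bracket: start at min_budget and
    multiply by eta until max_budget is reached."""
    n_rungs = int(math.log(max_budget / min_budget, eta)) + 1
    return [min_budget * eta ** i for i in range(n_rungs)]

# E.g. with budgets measured in epochs: configurations are first run
# for 5 epochs, survivors for 15, and the final survivors for 45.
schedule = successive_halving_schedule(min_budget=5, max_budget=50, eta=3)
```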
## Installation
### Manual Installation
We recommend using Anaconda for development, as follows: