
Commit 6030aeb

franchuterivera, ravinkohli, bastiscode, and nabenabe0928 authored
Development longregression (#3)
- New refactor code. Initial push
- Allow specifying the network type in include (automl#78)
- Allow specifying the network type in include
- Fix test flake 8
- fix test api
- increased time for func eval in cros validation
- Addressed comments
  Co-authored-by: Ravin Kohli <[email protected]>
- Search space update (automl#80)
- Added Hyperparameter Search space updates
- added test for search space update
- Added Hyperparameter Search space updates
- added test for search space update
- Added hyperparameter search space updates to network, trainer and improved check for search space updates
- Fix mypy, flake8
- Fix tests and silly mistake in base_pipeline
- Fix flake
- added _cs_updates to dummy component
- fixed indentation and isinstance comment
- fixed silly error
- Addressed comments from fransisco
- added value error for search space updates
- ADD tests for setting range of config space
- fic utils search space update
- Make sure the performance of pipeline is at least 0.8
- Early stop fixes
- Network Cleanup (automl#81)
- removed old supported_tasks dictionary from heads, added some docstrings and some small fixes
- removed old supported_tasks attribute and updated doc strings in base backbone and base head components
- removed old supported_tasks attribute from network backbones
- put time series backbones in separate files, add doc strings and refactored search space arguments
- split image networks into separate files, add doc strings and refactor search space
- fix typo
- add an intial simple backbone test similar to the network head test
- fix flake8
- fixed imports in backbones and heads
- added new network backbone and head tests
- enabled tests for adding custom backbones and heads, added required properties to base head and base backbone
- First documentation
- Default to ubuntu-18.04
- Comment enhancements
- Feature preprocessors, Loss strategies (automl#86)
- ADD Weighted loss
- Now?
- Fix tests, flake, mypy
- Fix tests
- Fix mypy
- change back sklearn requirement
- Assert for fast ica sklearn bug
- Forgot to add skip
- Fix tests, changed num only data to float
- removed fast ica
- change num only dataset
- Increased number of features in num only
- Increase timeout for pytest
- ADD tensorboard to requirement
- Fix bug with small_preprocess
- Fix bug in pytest execution
- Fix tests
- ADD error is raised if default not in include
- Added dynamic search space for deciding n components in feature preprocessors, add test for pipeline include
- Moved back to random configs in tabular test
- Added floor and ceil and handling of logs
- Fix flake
- Remove TruncatedSVD from cs if num numerical ==1
- ADD flakyness to network accuracy test
- fix flake
- remove cla to pytest
- Validate the input to autopytorch
- Bug fixes after rebase
- Move to new scikit learn
- Remove dangerous convert dtype
- Try to remove random float error again and make data pickable
- Tets pickle on versions higher than 3.6
- Tets pickle on versions higher than 3.6
- Comment fixes
- Adding tabular regression pipeline (automl#85)
- removed old supported_tasks dictionary from heads, added some docstrings and some small fixes
- removed old supported_tasks attribute and updated doc strings in base backbone and base head components
- removed old supported_tasks attribute from network backbones
- put time series backbones in separate files, add doc strings and refactored search space arguments
- split image networks into separate files, add doc strings and refactor search space
- fix typo
- add an intial simple backbone test similar to the network head test
- fix flake8
- fixed imports in backbones and heads
- added new network backbone and head tests
- enabled tests for adding custom backbones and heads, added required properties to base head and base backbone
- adding tabular regression pipeline
- fix flake8
- adding tabular regression pipeline
- fix flake8
- fix regression test
- fix indentation and comments, undo change in base network
- pipeline fitting tests now check the expected output shape dynamically based on the input data
- refactored trainer tests, added trainer test for regression
- remove regression from mixup unitest
- use pandas unique instead of numpy
- [IMPORTANT] added proper target casting based on task type to base trainer
- adding tabular regression task to api
- adding tabular regression example, some small fixes
- new/more tests for tabular regression
- fix mypy and flake8 errors from merge
- fix issues with new weighted loss and regression tasks
- change tabular column transformer to use net fit_dictionary_tabular fixture
- fixing tests, replaced num_classes with output_shape
- fixes after merge
- adding voting regressor wrapper
- fix mypy and flake
- updated example
- lower r2 target
- address comments
- increasing timeout
- increase number of labels in test_losses because it occasionally failed if one class was not in the labels
- lower regression lr in score test until seeding properly works
- fix randomization in feature validator test
- Make sure the performance of pipeline is at least 0.8
- Early stop fixes
- Network Cleanup (automl#81)
- removed old supported_tasks dictionary from heads, added some docstrings and some small fixes
- removed old supported_tasks attribute and updated doc strings in base backbone and base head components
- removed old supported_tasks attribute from network backbones
- put time series backbones in separate files, add doc strings and refactored search space arguments
- split image networks into separate files, add doc strings and refactor search space
- fix typo
- add an intial simple backbone test similar to the network head test
- fix flake8
- fixed imports in backbones and heads
- added new network backbone and head tests
- enabled tests for adding custom backbones and heads, added required properties to base head and base backbone
- First documentation
- Default to ubuntu-18.04
- Comment enhancements
- Feature preprocessors, Loss strategies (automl#86)
- ADD Weighted loss
- Now?
- Fix tests, flake, mypy
- Fix tests
- Fix mypy
- change back sklearn requirement
- Assert for fast ica sklearn bug
- Forgot to add skip
- Fix tests, changed num only data to float
- removed fast ica
- change num only dataset
- Increased number of features in num only
- Increase timeout for pytest
- ADD tensorboard to requirement
- Fix bug with small_preprocess
- Fix bug in pytest execution
- Fix tests
- ADD error is raised if default not in include
- Added dynamic search space for deciding n components in feature preprocessors, add test for pipeline include
- Moved back to random configs in tabular test
- Added floor and ceil and handling of logs
- Fix flake
- Remove TruncatedSVD from cs if num numerical ==1
- ADD flakyness to network accuracy test
- fix flake
- remove cla to pytest
- Validate the input to autopytorch
- Bug fixes after rebase
- Move to new scikit learn
- Remove dangerous convert dtype
- Try to remove random float error again and make data pickable
- Tets pickle on versions higher than 3.6
- Tets pickle on versions higher than 3.6
- Comment fixes
- [REFACTORING]: no change in the functionalities, inputs, returns
- Modified an error message
- [Test error fix]: Fixed the error caused by flake8
- [Test error fix]: Fixed the error caused by flake8
- FIX weighted loss issue (automl#94)
- Changed tests for losses and how weighted strategy is handled in the base trainer
- Addressed comments from francisco
- Fix training test
- Re-arranged tests and moved test_setup to pytest
- Reduced search space for dummy forward backward pass of backbones
- Fix typo
- ADD Doc string to loss function
- Logger enhancements
- show_models
- Move to spawn
- Adding missing logger line
- Feedback from comments
- ADD_109
- No print allow
- [PR response]: deleted unneeded changes from merge and fixed the doc-string.
- fixed the for loop in type_check based on samuel's review
- deleted blank space pointed out by flake8
- Try no autouse
- handle nans in categorical columns (automl#118)
- handle nans in categorical columns
- Fixed error in self dtypes
- Addressed comments from francisco
- Forgot to commit
- Fix flake
- Embedding layer (automl#91)
- work in progress
- in progress
- Working network embedding
- ADD tests for network embedding
- Removed ordinal encoder
- Removed ordinal encoder
- Add seed for test_losses for reproducibility
- Addressed comments
- fix flake
- fix test import training
- ADD_109
- No print allow
- Fix tests and move to boston
- Debug issue with python 3.6
- Debug for python3.6
- Run only debug file
- work in progress
- in progress
- Working network embedding
- ADD tests for network embedding
- Removed ordinal encoder
- Removed ordinal encoder
- Addressed comments
- fix flake
- fix test import training
- Fix tests and move to boston
- Debug issue with python 3.6
- Run only debug file
- Debug for python3.6
- print paths of parent dir
- Trying to run examples
- Trying to run examples
- Add success model
- Added parent directory for printing paths
- Try no autouse
- print log file to see if backend is saving num run
- Setup logger in backend
- handle nans in categorical columns (automl#118)
- handle nans in categorical columns
- Fixed error in self dtypes
- Addressed comments from francisco
- Forgot to commit
- Fix flake
- try without embeddings
- work in progress
- in progress
- Working network embedding
- ADD tests for network embedding
- Removed ordinal encoder
- Removed ordinal encoder
- Addressed comments
- fix flake
- fix test import training
- Fix tests and move to boston
- Debug issue with python 3.6
- Run only debug file
- Debug for python3.6
- work in progress
- in progress
- Working network embedding
- ADD tests for network embedding
- print paths of parent dir
- Trying to run examples
- Trying to run examples
- Add success model
- Added parent directory for printing paths
- print log file to see if backend is saving num run
- Setup logger in backend
- try without embeddings
- no embedding for python 3.6
- Deleted debug example
- Fix test for evaluation
- Deleted utils file
  Co-authored-by: chico <[email protected]>
- Fixes to address automlbenchmark problems
- Fix trajectory file output
- modified the doc-string in TransformSubset in base_dataset.py
- change config_id to config_id+1 (automl#129)
- move to a minimization problem (automl#113)
- move to a minimization problem
- Fix missing test loss file
- Missed regression
- More robust test
- Try signal timeout
- Kernel PCA failures
- Feedback from Ravin
- Better debug msg
- Feedback from comments
- Doc string request
- Feedback from comments
- Enhanced doc string
- FIX_123 (automl#133)
- FIX_123
- Better debug msg
- at least 1 config in regression
- Return self in _fit()
- Adds more examples to customise AutoPyTorch. (automl#124)
- 3 examples plus doc update
- Forgot the examples
- Added example for resampling strategy
- Update example worflow
- Fixed bugs in example and resampling strategies
- Addressed comments
- Addressed comments
- Addressed comments from shuhei, better documentation
- [Feat] Better traditional pipeline cutoff time (automl#141)
- [Feat] Better traditional pipeline cutoff time
- Fix unit testing
- Better failure msg
- bug fix catboost
- Feedback from Ravin
- First batch of feedback from comments
- Missed examples
- Syntax fix
- Hyperparameter Search Space updates now with constant and include ability (automl#146)
- In progress, add_hyperparameter
- Added SearchSpace working functionality
- Working search space update with test for __choice__ and fix flake
- fixed mypy bug and bug in making constant float hyperparameters
- Add test for fitting pipeline with constant updates
- fix flake
- bug in int for feature preprocessors and minor bugs in hyperparameter search space fixed
- Forgot to add a file
- Addressed comments, better documentation and better tests for search space updates
- Fix flake
- [Bug] Fix random halt problems on traditional pipelines (automl#147)
- [feat] Fix random halt problems on traditional pipelines
- Documentation update
- Fix flake
- Flake due to kernel pca errors
- Run history traditional (automl#121)
- In progress, issue with failed traditional
- working traditional classifiers
- Addressed comments from francisco
- Changed test loop in test_api
- Add .autopytorch runs back again
- Addressed comments, better documentation and dict for runhistory
- Fix flake
- Fix tests and add additional run info for crossval
- fix tests for train evaluator and api
- Addressed comments
- Addressed comments
- Addressed comments from shuhei, removed deleting from additioninfo
- [FIX] Enables backend to track the num run (automl#162)
- AA_151
- doc the peek attr
- [ADD] Relax constant pipeline performance
- [Doc] First push of the developer documentation (automl#127)
- First push of the developer documentation
- Feedback from Ravin
- Document scikit-learn develop guide
- Feedback from Ravin
- Delete extra point
- Refactoring base dataset splitting functions (automl#106)
- [Fork from automl#105] Made CrossValFuncs and HoldOutFuncs class to group the functions
- Modified time_series_dataset.py to be compatible with resampling_strategy.py
- [fix]: back to the renamed version of CROSS_VAL_FN from temporal SplitFunc typing.
- fixed flake8 issues in three files
- fixed the flake8 issues
- [refactor] Address the francisco's comments
- [refactor] Adress the francisco's comments
- [refactor] Address the doc-string issue in TransformSubset class
- [fix] Address flake8 issues
- [fix] Fix flake8 issue
- [fix] Fix mypy issues raised by github check
- [fix] Fix a mypy issue
- [fix] Fix a contradiction in holdout_stratified_validation Since stratified splitting requires to shuffle by default and it raises error in the github check, I fixed this issue.
- [fix] Address the francisco's review
- [fix] Fix a mypy issue tabular_dataset.py
- [fix] Address the francisco's comment about the self.dataset_name Since we would to use the dataset name which does not have any name, I decided to get self.dataset_name back to Optional[str].
- [fix] Fix mypy issues
- [Fix] Refactor development reproducibility (automl#172)
- [Fix] pass random state to randomized algorithms
- [Fix] double instantiation of random state
- [fix] Flaky for sample configuration
- [FIX] Runtime warning
- [FIX] hardcoded budget
- [FIX] flake
- [Fix] try forked
- [Fix] try forked
- [FIX] budget
- [Fix] missing random_state in trainer
- [Fix] overwrite in random_state
- [FIX] fix seed in splits
- [Rebase]
- [FIX] Update cv score after split num change
- [FIX] CV split
- [ADD] Extra visualization example (automl#189)
- [ADD] Extra visualization example
- Update docs/manual.rst
  Co-authored-by: Ravin Kohli <[email protected]>
- Update docs/manual.rst
  Co-authored-by: Ravin Kohli <[email protected]>
- [Fix] missing version
- Update examples/tabular/40_advanced/example_visualization.py
  Co-authored-by: Ravin Kohli <[email protected]>
- [FIX] make docs more clear to the user
  Co-authored-by: Ravin Kohli <[email protected]>
- [Fix] docs links (automl#201)
- [Fix] docs links
- Update README.md
  Co-authored-by: Ravin Kohli <[email protected]>
- Update examples check
- Remove tmp in examples
  Co-authored-by: Ravin Kohli <[email protected]>
- [Refactor] Use the backend implementation from automl common (automl#185)
- [ADD] First push to enable common backend
- Fix unit test
- Try public https
- [FIX] conftest prefix
- [fix] unit test
- [FIX] Fix fixture in score
- [Fix] pytest collection
- [FIX] flake
- [FIX] regression also!
- Update README.md
  Co-authored-by: Ravin Kohli <[email protected]>
- Update .gitmodules
  Co-authored-by: Ravin Kohli <[email protected]>
- [FIX] Regression time
- Make flaky in case memout doesn't happen
- Refacto development automl common backend debug (#2)
- [ADD] debug information
- [FIX] try fork for more stability
  Co-authored-by: Ravin Kohli <[email protected]>
- [DOC] Adds documentation to the abstract evaluator (automl#160)
- DOC_153
- Changes from Ravin
- [FIX] improve clarity of msg in commit
- [FIX] Update Readme (automl#208)
- Reduce run time of the test (automl#205)
- In progress, changing te4sts
- Reduce time for tests
- Fix flake in tests
- Patch train in other tests also
- Address comments from shuhei and fransisco:
- Move base training to pytest
- Fix flake in tests
- forgot to pass n_samples
- stupid error
- Address comments from shuhei, remove hardcoding and fix bug in dummy eval function
- Skip ensemble test for python >=3.7 and introduce random state for feature processors
- fix flake
- Remove example workflow
- Remove from __init__ in feature preprocessing
- [refactor] Getting dataset properties from the dataset object (automl#164)
- Use get_required_dataset_info of the dataset when needing required info for getting dataset requirements
- Fix flake
- Fix bug in getting dataset requirements
- Added doc string to explain dataset properties
- Update doc string in utils pipeline
- Change ubuntu version in docs workflow (automl#237)
- Add dist check worflow (automl#238)
- [feature] Greedy Portfolio (automl#200)
- initial configurations added
- In progress, adding flag in search function
- Adds documentation, example and fixes setup.py
- Address comments from shuhei, change run_greedy to portfolio_selection
- address comments from fransisco, movie portfolio to configs
- Address comments from fransisco, add tests for greedy portfolio and tests
- fix flake tests
- Simplify portfolio selection
- Update autoPyTorch/optimizer/smbo.py
  Co-authored-by: Francisco Rivera Valverde <[email protected]>
- Address comments from fransisco, path exception handling and test
- fix flake
- Address comments from shuhei
- fix bug in setup.py
- fix tests in base trainer evaluate, increase n samples and add seed
- fix tests in base trainer evaluate, increase n samples (fix)
  Co-authored-by: Francisco Rivera Valverde <[email protected]>
- [ADD] Forkserver as default multiprocessing strategy (automl#223)
- First push of forkserver
- [Fix] Missing file
- [FIX] mypy
- [Fix] renam choice to init
- [Fix] Unit test
- [Fix] bugs in examples
- [Fix] ensemble builder
- Update autoPyTorch/pipeline/components/preprocessing/image_preprocessing/normalise/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/preprocessing/image_preprocessing/normalise/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/preprocessing/tabular_preprocessing/encoding/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/preprocessing/image_preprocessing/normalise/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/preprocessing/tabular_preprocessing/feature_preprocessing/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/preprocessing/tabular_preprocessing/scaling/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/setup/network_head/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/setup/network_initializer/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- Update autoPyTorch/pipeline/components/setup/network_embedding/__init__.py
  Co-authored-by: Ravin Kohli <[email protected]>
- [FIX] improve doc-strings
- Fix rebase
  Co-authored-by: Ravin Kohli <[email protected]>
- [ADD] Get incumbent config (automl#175)
- In progress get_incumbent_results
- [Add] get_incumbent_results to base task, changed additional info in abstract evaluator, and tests
- In progress addressing fransisco's comment
- Proper check for include_traditional
- Fix flake
- Mock search of estimator
- Fixed path of run history test_api
- Addressed comments from Fransisco, making better tests
- fix flake
- After rebase fix issues
- fix flake
- Added debug information for API
- filtering only successful runs in get_incumbent_results
- Address comments from fransisco
- Revert changes made to run history assertion in base taks #1257
- fix flake issue
- [ADD] Coverage calculation (automl#224)
- [ADD] Coverage calculation
- [Fix] Flake8
- [fix] rebase artifacts
- [Fix] smac reqs
- [Fix] Make traditional test robust
- [Fix] unit test
- [Fix] test_evaluate
- [Fix] Try more time for cross validation
- Fix mypy post rebase
- Fix unit test
- [ADD] Pytest schedule (automl#234)
- add schedule for pytests workflow
- Add ref to development branch
- Add scheduled test
- update schedule workflow to run on python 3.8
- omit test, examples, workflow from coverage and remove unnecessary code from schedule
- Fix call for python3.8
- Fix call for python3.8 (2)
- fix code cov call in python 3.8
- Finally fix cov call
- [fix] Dropout bug fix (automl#247)
- fix dropout bug
- fix dropout shape discrepancy
- Fix unit test bug
- Add tests for dropout shape asper comments from fransisco
- Fix flake
- Early stop on metric
- Enable long run regression

Co-authored-by: Ravin Kohli <[email protected]>
Co-authored-by: Ravin Kohli <[email protected]>
Co-authored-by: bastiscode <[email protected]>
Co-authored-by: nabenabe0928 <[email protected]>
Co-authored-by: nabenabe0928 <[email protected]>
1 parent e22a374 commit 6030aeb

File tree

685 files changed: +36260 −25907 lines


.binder/apt.txt

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+build-essential
+swig

.binder/postBuild

Lines changed: 43 additions & 0 deletions
@@ -0,0 +1,43 @@
+#!/bin/bash
+
+set -e
+
+python -m pip install .[docs,examples]
+
+# Taken from https://github.com/scikit-learn/scikit-learn/blob/22cd233e1932457947e9994285dc7fd4e93881e4/.binder/postBuild
+# under BSD3 license, copyright the scikit-learn contributors
+
+# This script is called in a binder context. When this script is called, we are
+# inside a git checkout of the automl/Auto-PyTorch repo. This script
+# generates notebooks from the Auto-PyTorch python examples.
+
+if [[ ! -f /.dockerenv ]]; then
+    echo "This script was written for repo2docker and is supposed to run inside a docker container."
+    echo "Exiting because this script can delete data if run outside of a docker container."
+    exit 1
+fi
+
+# Copy content we need from the Auto-PyTorch repo
+TMP_CONTENT_DIR=/tmp/Auto-PyTorch
+mkdir -p $TMP_CONTENT_DIR
+cp -r examples .binder $TMP_CONTENT_DIR
+# delete everything in current directory including dot files and dot folders
+find . -delete
+
+# Generate notebooks and remove other files from examples folder
+GENERATED_NOTEBOOKS_DIR=examples
+cp -r $TMP_CONTENT_DIR/examples $GENERATED_NOTEBOOKS_DIR
+
+find $GENERATED_NOTEBOOKS_DIR -name 'example_*.py' -exec sphx_glr_python_to_jupyter.py '{}' +
+# Keep __init__.py and custom_metrics.py
+NON_NOTEBOOKS=$(find $GENERATED_NOTEBOOKS_DIR -type f | grep -v '\.ipynb' | grep -v 'init' | grep -v 'custom_metrics')
+rm -f $NON_NOTEBOOKS
+
+# Modify path to be consistent by the path given by sphinx-gallery
+mkdir notebooks
+mv $GENERATED_NOTEBOOKS_DIR notebooks/
+
+# Put the .binder folder back (may be useful for debugging purposes)
+mv $TMP_CONTENT_DIR/.binder .
+# Final clean up
+rm -rf $TMP_CONTENT_DIR
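The `find … | grep -v` pipeline in `postBuild` keeps only the generated notebooks plus a couple of whitelisted files and deletes everything else. A minimal standalone sketch of the same filter idiom, run against a throwaway directory (all file names below are hypothetical, for illustration only):

```shell
# Build a scratch directory mimicking the generated examples folder.
dir=$(mktemp -d)
touch "$dir/example_a.ipynb" "$dir/__init__.py" "$dir/custom_metrics.py" "$dir/helper.txt"

# Same filter as postBuild: list files, then drop notebooks and the
# whitelisted __init__.py / custom_metrics.py; what remains would be deleted.
NON_NOTEBOOKS=$(find "$dir" -type f | grep -v '\.ipynb' | grep -v 'init' | grep -v 'custom_metrics')

echo "$NON_NOTEBOOKS"
```

Only `helper.txt` survives the filter, which is exactly the set `rm -f $NON_NOTEBOOKS` would remove.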

.binder/requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+-r ../requirements.txt

.codecov.yml

Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
+#see https://github.com/codecov/support/wiki/Codecov-Yaml
+codecov:
+  notify:
+    require_ci_to_pass: yes
+
+coverage:
+  precision: 2 # 2 = xx.xx%, 0 = xx%
+  round: nearest # how coverage is rounded: down/up/nearest
+  range: 10...90 # custom range of coverage colors from red -> yellow -> green
+  status:
+    # https://codecov.readme.io/v1.0/docs/commit-status
+    project:
+      default:
+        against: auto
+        target: 70% # specify the target coverage for each commit status
+        threshold: 50% # allow this little decrease on project
+        # https://github.com/codecov/support/wiki/Filtering-Branches
+        # branches: master
+        if_ci_failed: error
+    # https://github.com/codecov/support/wiki/Patch-Status
+    patch:
+      default:
+        against: auto
+        target: 30% # specify the target "X%" coverage to hit
+        threshold: 50% # allow this much decrease on patch
+    changes: false
+
+parsers:
+  gcov:
+    branch_detection:
+      conditional: true
+      loop: true
+      macro: false
+      method: false
+  javascript:
+    enable_partials: false
+
+comment:
+  layout: header, diff
+  require_changes: false
+  behavior: default # update if exists else create new
+  branches: *

.coveragerc

Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
+# .coveragerc to control coverage.py
+[run]
+branch = True
+include = "autoPyTorch/*"
+
+[report]
+# Regexes for lines to exclude from consideration
+exclude_lines =
+    # Have to re-enable the standard pragma
+    pragma: no cover
+
+    # Don't complain about missing debug-only code:
+    def __repr__
+    if self\.debug
+
+    # Don't complain if tests don't hit defensive assertion code:
+    raise AssertionError
+    raise NotImplementedError
+
+    # Don't complain if non-runnable code isn't run:
+    if 0:
+    if __name__ == .__main__.:
+
+ignore_errors = True
+
+[html]
+directory = coverage_html_report

.flake8

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+[flake8]
+max-line-length = 120
+show-source = True
+application-import-names = autoPyTorch
+exclude =
+    venv
+    build

.github/workflows/dist.yml

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
+name: dist-check
+
+on: [push, pull_request]
+
+jobs:
+  dist:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/checkout@v2
+    - name: Setup Python
+      uses: actions/setup-python@v2
+      with:
+        python-version: 3.8
+    - name: Build dist
+      run: |
+        python setup.py sdist
+    - name: Twine check
+      run: |
+        pip install twine
+        last_dist=$(ls -t dist/autoPyTorch-*.tar.gz | head -n 1)
+        twine_output=`twine check "$last_dist"`
+        if [[ "$twine_output" != "Checking $last_dist: PASSED" ]]; then echo $twine_output && exit 1;fi
+    - name: Install dist
+      run: |
+        last_dist=$(ls -t dist/autoPyTorch-*.tar.gz | head -n 1)
+        pip install $last_dist
+    - name: PEP 561 Compliance
+      run: |
+        pip install mypy
+        cd .. # required to use the installed version of autoPyTorch
+        if ! python -c "import autoPyTorch"; then exit 1; fi
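The "newest sdist" selection used twice in this workflow relies on `ls -t` (sort by modification time, newest first) piped into `head -n 1`. A standalone sketch of that idiom with dummy tarballs (the file names stand in for real release artifacts):

```shell
# Hypothetical tarballs standing in for real sdists in dist/.
dir=$(mktemp -d)
touch "$dir/autoPyTorch-0.0.1.tar.gz"
sleep 1  # ensure distinct modification times so ls -t is deterministic
touch "$dir/autoPyTorch-0.0.2.tar.gz"

# As in the workflow: newest matching tarball first, take the first entry.
last_dist=$(ls -t "$dir"/autoPyTorch-*.tar.gz | head -n 1)
echo "$last_dist"
```

Note this picks the most recently *written* file, not the highest version number; for a fresh `python setup.py sdist` build directory the two coincide.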

.github/workflows/docs.yml

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
+name: Docs
+on: [pull_request, push]
+
+jobs:
+  build-and-deploy:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/checkout@v2
+    - name: Setup Python
+      uses: actions/setup-python@v2
+      with:
+        python-version: 3.8
+    - name: Install dependencies
+      run: |
+        git submodule update --init --recursive
+        pip install -e .[docs,examples]
+    - name: Make docs
+      run: |
+        cd docs
+        make html
+    - name: Pull latest gh-pages
+      if: (contains(github.ref, 'develop') || contains(github.ref, 'master')) && github.event_name == 'push'
+      run: |
+        cd ..
+        git clone https://github.com/automl/Auto-PyTorch.git --branch gh-pages --single-branch gh-pages
+    - name: Copy new doc into gh-pages
+      if: (contains(github.ref, 'develop') || contains(github.ref, 'master')) && github.event_name == 'push'
+      run: |
+        branch_name=${GITHUB_REF##*/}
+        cd ../gh-pages
+        rm -rf $branch_name
+        cp -r ../Auto-PyTorch/docs/build/html $branch_name
+    - name: Push to gh-pages
+      if: (contains(github.ref, 'develop') || contains(github.ref, 'master')) && github.event_name == 'push'
+      run: |
+        last_commit=$(git log --pretty=format:"%an: %s")
+        cd ../gh-pages
+        branch_name=${GITHUB_REF##*/}
+        git add $branch_name/
+        git config --global user.name 'Github Actions'
+        git config --global user.email '[email protected]'
+        git remote set-url origin https://x-access-token:${{ secrets.GITHUB_TOKEN }}@github.com/${{ github.repository }}
+        git commit -am "$last_commit"
+        git push
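The `branch_name=${GITHUB_REF##*/}` step above uses bash's longest-prefix removal to turn a full ref into a bare branch name. A minimal sketch, simulating the variable GitHub Actions would provide on a push to `develop`:

```shell
# Simulated value of the ref GitHub Actions exposes on a branch push.
GITHUB_REF="refs/heads/develop"

# ${var##pattern} strips the longest prefix matching the pattern; '*/'
# matches up to the last slash, leaving only the final path segment.
branch_name=${GITHUB_REF##*/}
echo "$branch_name"
```

This prints `develop`, so the workflow publishes each branch's docs into a directory named after the branch on `gh-pages`.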
Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
+name: Tests
+
+on:
+  schedule:
+    # Every Tuesday at 7AM UTC
+    # TODO temporarily set to every day just for the PR
+    #- cron: '0 07 * * 2'
+    - cron: '0 07 * * *'
+
+
+jobs:
+  ubuntu:
+
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version: [3.8]
+      fail-fast: false
+
+    steps:
+    - uses: actions/checkout@v2
+      with:
+        ref: development
+    - name: Setup Python ${{ matrix.python-version }}
+      uses: actions/setup-python@v2
+      with:
+        python-version: ${{ matrix.python-version }}
+    - name: Install test dependencies
+      run: |
+        git submodule update --init --recursive
+        python -m pip install --upgrade pip
+        pip install -e .[test]
+    - name: Run tests
+      run: |
+        python -m pytest --durations=200 cicd/test_preselected_configs.py -vs

.github/workflows/pre-commit.yaml

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+name: pre-commit
+
+on: [push, pull_request]
+
+jobs:
+  run-all-files:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/checkout@v2
+    - name: Setup Python 3.7
+      uses: actions/setup-python@v2
+      with:
+        python-version: 3.7
+    - name: Init Submodules
+      run: |
+        git submodule update --init --recursive
+    - name: Install pre-commit
+      run: |
+        pip install pre-commit
+        pre-commit install
+    - name: Run pre-commit
+      run: |
+        pre-commit run --all-files
