# Voxelwise Encoding Model (VEM) tutorials

Welcome to the tutorials on the Voxelwise Encoding Model framework from the
[GallantLab](https://gallantlab.org).

If you use these tutorials for your work, please consider citing the corresponding paper:

> T. Dupré La Tour, M. Visconti di Oleggio Castello, and J. L. Gallant. The voxelwise modeling framework: a tutorial introduction to fitting encoding models to fMRI data. PsyArXiv, 2024. [doi:10.31234/osf.io/t975e](https://doi.org/10.31234/osf.io/t975e)

You can find a copy of the paper [here](https://github.com/gallantlab/voxelwise_tutorials/blob/main/paper/voxelwise_tutorials_paper.pdf).

## Getting started

This website contains tutorials describing how to use the
[Voxelwise Encoding Model framework](voxelwise_modeling.html).

To explore these tutorials, you can:

- read the rendered examples in the tutorials
  [gallery of examples](_auto_examples/index.html) (recommended)
- run the Python scripts ([tutorials](https://github.com/gallantlab/voxelwise_tutorials/tree/main/tutorials) directory)
- run the Jupyter notebooks ([tutorials/notebooks](https://github.com/gallantlab/voxelwise_tutorials/tree/main/tutorials/notebooks) directory)
- run the notebooks in Google Colab:
  [all notebooks](https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/shortclips/merged_for_colab.ipynb) or
  [only the notebooks about model fitting](https://colab.research.google.com/github/gallantlab/voxelwise_tutorials/blob/main/tutorials/notebooks/shortclips/merged_for_colab_model_fitting.ipynb)

The tutorials are best explored in order, starting with the
[Shortclips tutorial](_auto_examples/index.html).

The project is available on GitHub at
[gallantlab/voxelwise_tutorials](https://github.com/gallantlab/voxelwise_tutorials).
In addition to the tutorial scripts, the GitHub repository contains a Python
package called `voxelwise_tutorials`, which provides useful functions to
download the datasets, load the files, process the data, and visualize the
results. Installation instructions are available [here](voxelwise_package.html).

## Cite as

If you use one of our packages in your work (`voxelwise_tutorials`
{cite}`dupre2023`, `himalaya` {cite}`dupre2022`, `pycortex`
{cite}`gao2015`, or `pymoten` {cite}`nunez2021software`), please cite the
corresponding publications.

If you use one of our public datasets in your work
(`shortclips` {cite}`huth2022data`, `vim-2` {cite}`nishimoto2014data`),
please cite the corresponding publications.