Commit bbeb320

Remove incorrect ViT recipe commands. (#5159)

Parent: cc7e856

File tree

1 file changed (+0 −22 lines)

references/classification/README.md

Lines changed: 0 additions & 22 deletions
````diff
@@ -143,28 +143,6 @@ torchrun --nproc_per_node=8 train.py\
 ```
 Here `$MODEL` is one of `regnet_x_32gf`, `regnet_y_16gf` and `regnet_y_32gf`.
 
-### Vision Transformer
-
-#### Base models
-```
-torchrun --nproc_per_node=8 train.py\
---model $MODEL --epochs 300 --batch-size 64 --opt adamw --lr 0.003 --wd 0.3\
---lr-scheduler cosineannealinglr --lr-warmup-method linear --lr-warmup-epochs 30\
---lr-warmup-decay 0.033 --amp --label-smoothing 0.11 --mixup-alpha 0.2 --auto-augment ra\
---clip-grad-norm 1 --ra-sampler --cutmix-alpha 1.0 --model-ema
-```
-Here `$MODEL` is one of `vit_b_16` and `vit_b_32`.
-
-#### Large models
-```
-torchrun --nproc_per_node=8 train.py\
---model $MODEL --epochs 300 --batch-size 16 --opt adamw --lr 0.003 --wd 0.3\
---lr-scheduler cosineannealinglr --lr-warmup-method linear --lr-warmup-epochs 30\
---lr-warmup-decay 0.033 --amp --label-smoothing 0.11 --mixup-alpha 0.2 --auto-augment ra\
---clip-grad-norm 1 --ra-sampler --cutmix-alpha 1.0 --model-ema
-```
-Here `$MODEL` is one of `vit_l_16` and `vit_l_32`.
-
 ## Mixed precision training
 Automatic Mixed Precision (AMP) training on GPU for Pytorch can be enabled with the [torch.cuda.amp](https://pytorch.org/docs/stable/amp.html?highlight=amp#module-torch.cuda.amp).
````
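The surviving context line in the diff above points at `torch.cuda.amp` for mixed precision training. As a rough sketch only, one training step with that API (`autocast` plus `GradScaler`) could look like the following; the model, tensor shapes, and optimizer here are illustrative placeholders, not taken from the README's recipes, and AMP is only enabled when a CUDA device is present:

```python
import torch
import torch.nn.functional as F

# Illustrative setup, not from the recipe: a tiny linear classifier.
use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"
model = torch.nn.Linear(8, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(4, 8, device=device)
targets = torch.randint(0, 2, (4,), device=device)

# GradScaler rescales the loss so fp16 gradients don't underflow;
# with enabled=False (CPU) both wrappers become no-ops.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = F.cross_entropy(model(inputs), targets)

scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscales gradients, then optimizer.step()
scaler.update()                # adjusts the scale factor for the next step
```

In the removed recipes, the same machinery is switched on from the command line via the `--amp` flag rather than written by hand.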