
[BUG] scripts/test_environment.py fails after INSTALL.md due to vllm <> transformers conflict (AutoConfig.register('aimv2')) #210

@niko-zvt


Issue Type

  • Bug Report
  • Feature Request
  • Question
  • Documentation
  • Security
  • Other

Description

Following the steps in INSTALL.md, the final check PYTHONPATH=$(pwd) python scripts/test_environment.py fails when importing vllm with:

Attempting to import critical packages...
[SUCCESS] torch found
[SUCCESS] torchvision found
[SUCCESS] transformers found
[SUCCESS] megatron.core found
[SUCCESS] transformer_engine found
INFO ... Automatically detected platform cuda.
[ERROR] Package not successfully imported: vllm
[SUCCESS] pandas found
Traceback (most recent call last):
  ...
  File ".../vllm/transformers_utils/configs/ovis.py", line 75, in <module>
    AutoConfig.register("aimv2", AIMv2Config)
  File ".../transformers/models/auto/configuration_auto.py", line 1350, in register
    CONFIG_MAPPING.register(model_type, config, exist_ok=exist_ok)
  File ".../transformers/models/auto/configuration_auto.py", line 1037, in register
    raise ValueError(f"'{key}' is already used by a Transformers config, pick another name.")
ValueError: 'aimv2' is already used by a Transformers config, pick another name.

Steps to Reproduce (for bugs)

  1. Install per the current INSTALL.md.
  2. Run PYTHONPATH=$(pwd) python scripts/test_environment.py or PYTHONPATH=$(pwd) python -c "import vllm; print('ok')".
  3. The vllm import fails with ValueError: 'aimv2' is already used by a Transformers config, pick another name.

Expected Behavior

scripts/test_environment.py completes without errors on versions implied by INSTALL.md.

Actual Behavior

Attempting to import critical packages...
[SUCCESS] torch found
[SUCCESS] torchvision found
[SUCCESS] transformers found
[SUCCESS] megatron.core found
[SUCCESS] transformer_engine found
INFO ... Automatically detected platform cuda.
[ERROR] Package not successfully imported: vllm

Manual import shows:

ValueError: 'aimv2' is already used by a Transformers config, pick another name.

Environment

  • Model: nvidia-cosmos/cosmos-transfer1
  • Framework (e.g., Docker, Conda): conda 25.7.0
  • Python Version: 3.12.11 (conda env cosmos-transfer1)
  • OS/Platform: Linux, Ubuntu 24.04
  • GPU/CPU: NVIDIA H200 (143GB VRAM, CUDA 12.8)
  • Additional Dependencies:
    • vllm==0.9.0
    • transformers==4.56.1
    • tokenizers==0.20.3
    • torch==2.7.0+cu128
    • torchvision==0.22.0+cu128
    • transformer_engine==2.6.0.post1
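
For reference, the versions listed above can be collected with a short stdlib-only snippet (a sketch; it assumes the packages are queryable by their distribution names):

```python
# Print the installed versions of the packages involved in the conflict.
# Uses only the standard library (importlib.metadata, Python 3.8+).
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages):
    """Return 'name==version' lines, or a note for missing packages."""
    lines = []
    for pkg in packages:
        try:
            lines.append(f"{pkg}=={version(pkg)}")
        except PackageNotFoundError:
            lines.append(f"{pkg}: not installed")
    return lines

for line in report_versions(["vllm", "transformers", "tokenizers", "torch"]):
    print(line)
```

Attaching this output to bug reports makes version conflicts like this one much easier to triage.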

Additional Context

vllm==0.9.0 registers custom configs via transformers.AutoConfig.register(...).
In transformers>=4.56.x, the key "aimv2" already exists in CONFIG_MAPPING, causing a duplicate-registration ValueError.
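
The failure mode can be sketched without either library installed; the class below is a simplified stand-in for transformers' CONFIG_MAPPING (the real logic lives in configuration_auto.py):

```python
# Minimal sketch of how a duplicate key triggers the ValueError.
# Simplified stand-in for transformers' CONFIG_MAPPING; not the real class.

class ConfigMapping:
    def __init__(self, builtin):
        self._mapping = dict(builtin)   # keys shipped with transformers
        self._extra = {}                # keys added via AutoConfig.register

    def register(self, key, value, exist_ok=False):
        if key in self._mapping and not exist_ok:
            raise ValueError(
                f"'{key}' is already used by a Transformers config, pick another name."
            )
        self._extra[key] = value

# transformers>=4.56 ships its own 'aimv2' config...
mapping = ConfigMapping({"aimv2": object()})

try:
    # ...so vllm 0.9.0's unconditional registration fails:
    mapping.register("aimv2", object())
except ValueError as e:
    print(e)  # 'aimv2' is already used by a Transformers config, pick another name.
```

In other words, the bug is not in either library alone: vllm 0.9.0 predates transformers shipping a built-in aimv2 config, and the unconditional registration only collides once both are installed together.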

Deliverable

Workaround (confirmed)

pip install "tokenizers>=0.21.1"
pip install "transformers==4.45.1"

PYTHONPATH=$(pwd) python scripts/test_environment.py
# -> SUCCESS

Proposed Fix

  • Docs quick fix: Pin compatible versions in INSTALL.md until code is adjusted:

    pip install "tokenizers>=0.21.1" "transformers==4.45.1"
    

    or specify a range, e.g.:

    transformers>=4.44,<4.51
    tokenizers>=0.21.1
    

    and note the known conflict with transformers>=4.56.x.

  • Code fix (preferred): Guard registration to avoid duplicates, e.g.:

    from transformers import AutoConfig
    from transformers.models.auto.configuration_auto import CONFIG_MAPPING

    if "aimv2" not in CONFIG_MAPPING:
        AutoConfig.register("aimv2", AIMv2Config)


    (or use an exist_ok-style approach if supported), and/or pin a tested vllm/transformers pair in project requirements.

  • Alternative: Update to a vllm release that’s compatible with transformers>=4.56.x and reflect that pin in INSTALL.md.
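
A version-agnostic variant of the guard could also be wrapped in a small helper (a sketch; safe_register is a hypothetical name, exercised here against a plain dict so it runs without transformers installed):

```python
# Hypothetical helper: skip registration when the key is already taken,
# so the import no longer hard-fails on newer transformers releases.

def safe_register(existing_keys, key, config, register):
    """Call register(key, config) only if key is not already claimed."""
    if key in existing_keys:
        return False  # transformers already ships this config; keep the built-in
    register(key, config)
    return True

# Simulated usage: a plain dict stands in for CONFIG_MAPPING.
builtin = {"aimv2": "builtin-config"}
registered = {}
ok = safe_register(builtin, "aimv2", "vllm-config", registered.__setitem__)
print(ok)  # False: 'aimv2' is taken, no exception raised
```

Keeping the built-in config when the key is taken is the conservative choice; overriding it (exist_ok-style) would silently change behavior for models that rely on the transformers version.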

Impact

Blocks a clean install following INSTALL.md: users hit an error on the final environment check unless they adjust versions manually.

Metadata

Labels

Solution architects
