pip install torchao cannot get latest versions (only 0.1 and two other versions at the same level) #1300


Open
moreAImore opened this issue Nov 17, 2024 · 10 comments
Labels
bug Something isn't working


@moreAImore

moreAImore commented Nov 17, 2024

Similar to #1106
My environment: Python 3.12.7, CUDA 12.5, toolkit .5.

I had to obtain the wheel from another person and install from that instead.
(Application: ComfyUI)

@nitinmukesh

Same error:

```
(C:\aitools\cv_venv) C:\aitools>pip install torchao==0.5.0
Looking in indexes: https://pypi.org/simple/, https://pypi.ngc.nvidia.com
ERROR: Could not find a version that satisfies the requirement torchao==0.5.0 (from versions: 0.0.1, 0.0.3, 0.1)
ERROR: No matching distribution found for torchao==0.5.0
```

@nitinmukesh

@moreAImore
Could you please share the wheel?

@moreAImore
Author

> @moreAImore Please could you share the wheel

Yes.
torchao-0.6.1+git-cp312-cp312-win_amd64.zip
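As an aside, the wheel filename encodes which interpreter and platform it can install on (`cp312` = CPython 3.12, `win_amd64` = 64-bit Windows), which is also why pip above only offers versions up to 0.1: it silently filters out releases that ship no wheel matching your platform. A stdlib-only sketch to print the tags your own interpreter would need:

```python
import sys
import sysconfig

# A wheel tagged cp312-cp312-win_amd64 installs only where these match.
py_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"               # e.g. cp312
platform_tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")  # e.g. win_amd64
print(py_tag, platform_tag)
```

If the printed tags don't match the wheel's filename, pip will refuse to install it.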

@nitinmukesh

Thank you @moreAImore

@moreAImore
Author

> Thank you @moreAImore

Where did you end up using it?

@gau-nernst
Collaborator

There are pre-built Windows wheels for the nightly build. This should work on Windows, I think (haven't tested):

```
pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cpu
```

Otherwise, you can also install from source:

```
USE_CPP=0 pip install git+https://github.com/pytorch/ao
```

`USE_CPP=0` means building without the C++/CUDA extensions, so there is less chance of a build error. (Note the `VAR=value` command prefix only works in POSIX shells; in Windows cmd, run `set USE_CPP=0` first, then the pip command.) If you feel adventurous, you can leave it out to build the CUDA extensions.

@nitinmukesh

nitinmukesh commented Nov 19, 2024

@moreAImore

I actually built from source. I needed it for CogVideo.

```
torchao 0.7.0+gitd4ca98f6 C:\ai\CogVideo\ao
```

@Wurzeldieb

> @moreAImore
>
> I actually built from source. I needed it for CogVideo.
>
> torchao 0.7.0+gitd4ca98f6 C:\ai\CogVideo\ao

What PyTorch version are you using? 2.5.1+cu124 is giving me an error.

yanbing-j pushed a commit to yanbing-j/ao that referenced this issue Dec 9, 2024
@FurkanGozukara

I just installed torchao but I'm still getting this error. Why?

```
Traceback (most recent call last):
  File "R:\CogVideoX_v3\CogVideo\inference\gradio_composite_demo\app.py", line 40, in <module>
    from torchao.quantization import quantize_, int8_weight_only, weight_only_quant_qconfig
ImportError: cannot import name 'weight_only_quant_qconfig' from 'torchao.quantization' (R:\CogVideoX_v3\CogVideo\venv\Lib\site-packages\torchao\quantization\__init__.py)
```
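For what it's worth, `quantize_` and `int8_weight_only` are real torchao names, but `weight_only_quant_qconfig` doesn't look like one, so this reads as an API mismatch between the CogVideo demo script and the installed torchao rather than a broken install. A hedged sketch of a guard that makes the failure mode clearer (it assumes `quantize_` and `int8_weight_only` are the only names the calling script actually needs):

```python
import importlib.util


def load_torchao_quant():
    """Return torchao's (quantize_, int8_weight_only) pair, or None if
    torchao is not installed at all. If this raises ImportError instead,
    the installed torchao version doesn't match what the script expects."""
    if importlib.util.find_spec("torchao") is None:
        return None
    from torchao.quantization import quantize_, int8_weight_only
    return quantize_, int8_weight_only
```

Distinguishing "torchao missing" from "torchao present but the wrong version" is usually the first step in debugging these import errors.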

@drisspg
Contributor

drisspg commented Dec 12, 2024

What version did you install, @FurkanGozukara?
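When reporting back, a stdlib-only way to print the exact installed versions (a sketch; `torch` and `torchao` are simply the distributions of interest in this thread):

```python
from importlib import metadata


def installed_version(dist: str):
    """Installed version string of a distribution, or None if not installed."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None


for name in ("torch", "torchao"):
    print(name, installed_version(name))
```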

@drisspg drisspg added the bug Something isn't working label Dec 12, 2024

6 participants