Cherry pick Bump to pytorch 25.05 container along with TE update (#13899) into r2.4.0 (#14145)
* Bump to pytorch 25.05 container along with TE update (#13899)
* Update base container to be pytorch:25.05-py3
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Update TE to 2.4
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Remove torch accelerator patch
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Update triton patch
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Bump TE and Mcore commits
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix triton patch
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix triton patch
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* No fail fast
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Update trt-llm to 0.20.0
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix test_sched_config_parse_reduce_on_plateau
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Add no build isolation to TE
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Update trt-llm dependencies
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Update manifest
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Revert "Enable LoRA for TELinear layers (#13929)"
This reverts commit 7d9f40f.
* update mcore with wd_mult key fix
Signed-off-by: oliver könig <okoenig@nvidia.com>
* Revert "Revert "Enable LoRA for TELinear layers (#13929)""
This reverts commit 5a1da6c.
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix nemo install
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix nemo install
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix export image build
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Remove unnecessary sed for torch_tensorrt
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Update TE and Mcore commits
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Add optional tests
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix install
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Ensure test script arg types are correct for top_p and top_k
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Increase export deploy timeouts
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Skip failing test_rnnt_logprobs_random after pytorch bump
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Skip coverage artifact config-3.12.py
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Include more config files to exclude during coverage
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Update dependencies
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Ensure top_p is float in nemo_export test script
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Set Optional_L2_Speech_Batch_Size_OOMptimizer_Canary to truly be optional
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix top_k and top_p types in megatronllm_deployable
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
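The top_k/top_p fixes above address a common class of bug: sampling parameters read from a script's command line stay strings unless the parser is given explicit numeric types. A minimal sketch of the pattern (hypothetical names, not the actual megatronllm_deployable code):

```python
# Hypothetical sketch: declare explicit numeric types so argparse does
# not pass raw strings to the sampling code downstream.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--top_k", type=int, default=1)      # integer vocab cutoff
parser.add_argument("--top_p", type=float, default=0.0)  # probability mass threshold

args = parser.parse_args(["--top_k", "40", "--top_p", "0.95"])
assert isinstance(args.top_k, int) and isinstance(args.top_p, float)
print(args.top_k, args.top_p)
```

Without the `type=` arguments, both values would arrive as `str` and fail (or silently misbehave) when compared against numeric thresholds.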
* Revert "Skip failing test_rnnt_logprobs_random after pytorch bump"
This reverts commit c6c3a76.
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Fix optional export test
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
* Revert unnecessary changes
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
---------
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Signed-off-by: oliver könig <okoenig@nvidia.com>
Co-authored-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Co-authored-by: oliver könig <okoenig@nvidia.com>
* Set L2_NeMo_2_Export_Deploy_Query_In_Framework to be optional (#13946)
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
---------
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Signed-off-by: oliver könig <okoenig@nvidia.com>
Co-authored-by: Charlie Truong <chtruong@nvidia.com>
Co-authored-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>