Add kwargs to optimizer, scheduler and dataloader using function accelerator().load_state() #3540
Merged: SunMarc merged 18 commits into huggingface:main from luiz0992:luiz/add-kwargs-to-load-state on May 22, 2025.
Commits (18, all by luiz0992):
- d42384d  Added artifacts and figure tracking at MLFlow tracker
- 5d051b9  Added `log_artifact` to the MLFlowTracker
- 748a587  Remove changes
- 30a3145  Merge branch 'huggingface:main' into main
- b903eef  Added kwargs when loading state.
- 7f54f12  added doc string
- e07f9ff  Adjusted correct default types of kwargs
- 810c69e  Changed the load kwargs to a single one
- e3c4327  removed None value from kwargs
- 0dd2077  Merge branch 'huggingface:main' into luiz/add-kwargs-to-load-state
- 3cfa4c2  fix kwargs for loading the model
- c96107f  removed load_kwargs from optimizer state dict
- 13e6e6e  make load_kwargs a dictionary
- 17beba7  revert last changes
- c3d7f1f  reverted load_kwargs
- 4381b93  fix docstring
- e54694c  added dict initiation
- 9980afb  Fix quality error during PR
Changes from 1 commit:

```diff
@@ -180,10 +180,7 @@ def load_accelerator_state(
     process_index,
     scaler=None,
     map_location=None,
-    optimizer_load_kwargs={},
-    scheduler_load_kwargs={},
-    dataloader_load_kwargs={},
-    **load_model_func_kwargs,
+    load_kwargs=None,
 ):
     """
     Loads states of the models, optimizers, scaler, and RNG generators from a given directory.
@@ -203,14 +200,8 @@ def load_accelerator_state(
             An optional *GradScaler* instance to load
         map_location (`str`, *optional*):
             What device to load the optimizer state onto. Should be one of either "cpu" or "on_device".
-        optimizer_load_kwargs (`dict`, *optional*):
-            Additional arguments that can be passed to the optimizer's `load` function.
-        scheduler_load_kwargs (`dict`, *optional*):
-            Additional arguments that can be passed to the scheduler's `load` function.
-        dataloader_load_kwargs (`dict`, *optional*):
-            Additional arguments that can be passed to the dataloader's `load` function.
-        load_model_func_kwargs (`dict`, *optional*):
-            Additional arguments that can be passed to the model's `load_state_dict` method.
+        load_kwargs (`dict`, *optional*):
+            Additional arguments that can be passed to the `load`, `load_model` and `load_state_dict` functions.
 
     Returns:
         `dict`: Contains the `Accelerator` attributes to override while loading the state.
@@ -232,28 +223,28 @@ def load_accelerator_state(
         ending = f"_{i}" if i > 0 else ""
         input_model_file = input_dir.joinpath(f"{SAFE_MODEL_NAME}{ending}.safetensors")
         if input_model_file.exists():
-            load_model(model, input_model_file, device=str(map_location), **load_model_func_kwargs)
+            load_model(model, input_model_file, device=str(map_location), **load_kwargs)
         else:
             # Load with torch
             input_model_file = input_dir.joinpath(f"{MODEL_NAME}{ending}.bin")
             state_dict = load(input_model_file, map_location=map_location)
-            model.load_state_dict(state_dict, **load_model_func_kwargs)
+            model.load_state_dict(state_dict, **load_kwargs)
         logger.info("All model weights loaded successfully")
 
     # Optimizer states
     for i, opt in enumerate(optimizers):
         optimizer_name = f"{OPTIMIZER_NAME}.bin" if i == 0 else f"{OPTIMIZER_NAME}_{i}.bin"
         input_optimizer_file = input_dir.joinpath(optimizer_name)
-        optimizer_state = load(input_optimizer_file, map_location=map_location, **optimizer_load_kwargs)
-        optimizers[i].load_state_dict(optimizer_state)
+        optimizer_state = load(input_optimizer_file, map_location=map_location, **load_kwargs)
+        optimizers[i].load_state_dict(optimizer_state, **load_kwargs)
         logger.info("All optimizer states loaded successfully")
 
     # Scheduler states
     for i, scheduler in enumerate(schedulers):
         scheduler_name = f"{SCHEDULER_NAME}.bin" if i == 0 else f"{SCHEDULER_NAME}_{i}.bin"
         input_scheduler_file = input_dir.joinpath(scheduler_name)
-        scheduler_state = load(input_scheduler_file, map_location=None, **scheduler_load_kwargs)
-        scheduler.load_state_dict(scheduler_state)
+        scheduler_state = load(input_scheduler_file, map_location=None, **load_kwargs)
+        scheduler.load_state_dict(scheduler_state, **load_kwargs)
         logger.info("All scheduler states loaded successfully")
 
     for i, dataloader in enumerate(dataloaders):
@@ -270,8 +261,8 @@ def load_accelerator_state(
         dataloader_state_dict_name = "dl_state_dict.bin" if i == 0 else f"dl_state_dict_{i}.bin"
         input_dataloader_state_dict_file = input_dir.joinpath(dataloader_state_dict_name)
         if input_dataloader_state_dict_file.exists():
-            state_dict = load(input_dataloader_state_dict_file, map_location=None, **dataloader_load_kwargs)
-            dataloader.load_state_dict(state_dict)
+            state_dict = load(input_dataloader_state_dict_file, map_location=None, **load_kwargs)
+            dataloader.load_state_dict(state_dict, **load_kwargs)
         logger.info("All dataloader sampler states loaded successfully")
 
     # GradScaler state
```
Member (review comment on the `# GradScaler state` block): let's also include it for scaler and states
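A minimal sketch of that request, mirroring the pattern the diff uses for optimizers and schedulers (the `if scaler is not None` shape is an assumption here; `SCALER_NAME`, `load` and `logger` come from the surrounding accelerate code, and the code actually merged for the scaler and RNG states may differ):

```python
# Hedged sketch: forward load_kwargs to the GradScaler checkpoint load as
# well, following the optimizer/scheduler pattern above. Not necessarily
# the exact code that was merged.
if scaler is not None:
    input_scaler_file = input_dir.joinpath(SCALER_NAME)
    scaler.load_state_dict(load(input_scaler_file, **load_kwargs))
    logger.info("GradScaler state loaded successfully")
```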
Review thread on the `load_kwargs=None` line (SunMarc marked this conversation as resolved):

Member: please keep `load_model_func_kwargs`, as `load_model` has different kwargs compared to `load`.
Author: The `load_model` does not accept kwargs. Let me know if I am mistaken.
Member: in safetensors, we have the following (see the signature sketch below). If you can revert the changes related to `load_model_func_kwargs` and only update `load_kwargs` to where we use `load`, it will be better.
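The function the reviewer is pointing to is presumably `safetensors.torch.load_model`, whose signature looks roughly like this (paraphrased; verify against the installed safetensors version):

```python
# Paraphrased signature of safetensors.torch.load_model; annotations and
# defaults may differ between safetensors versions.
def load_model(
    model,         # torch.nn.Module whose weights are loaded in place
    filename,      # path to the .safetensors checkpoint
    strict=True,   # like load_state_dict: complain about missing/unexpected keys
    device="cpu",  # device to load the tensors onto
):
    """Returns (missing_keys, unexpected_keys)."""
```

This would explain the reply below: the call in the diff already passes `device`, leaving `strict` as the only `load_model` kwarg not surfaced.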
Author: The way it is being used, it is missing only the `strict` argument, but that's ok. I reverted the `load_model_func_kwargs`. (c3d7f1f)
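After that revert, `load_kwargs` is forwarded only where the internal `load` helper (a `torch.load` wrapper) is used. A minimal usage sketch of the resulting API, assuming the public `Accelerator.load_state` exposes the new `load_kwargs` parameter end to end (the checkpoint directory name and the `weights_only` key, a `torch.load` argument, are illustrative):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)

# Write model/optimizer/RNG state into a checkpoint directory.
accelerator.save_state("ckpt")

# Restore it, forwarding extra keyword arguments to the torch.load-based
# `load` calls (e.g. for the optimizer/scheduler/dataloader state files).
# Whether a given key is valid depends on which load call it reaches.
accelerator.load_state("ckpt", load_kwargs={"weights_only": True})
```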