
Commit 5f70e19

Use HF_TOKEN directly and remove require_read_token (#43233)
* Use HF_TOKEN

* style

* remove the usage

* remove the def

---------

Co-authored-by: ydshieh <[email protected]>
1 parent 840fab6 commit 5f70e19
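Why the one-line workflow change below is enough: huggingface_hub resolves its authentication token from the HF_TOKEN environment variable, taking precedence over any token saved by `huggingface-cli login`, so exporting the existing CI secret under that name authenticates gated-repository downloads without any test-side patching. A minimal sketch of the resolution behavior (the token value is an illustrative placeholder, not a real secret):

import os

from huggingface_hub import get_token

# HF_TOKEN takes precedence over a locally saved login token, which is
# exactly what the CI workflows in this commit rely on.
os.environ["HF_TOKEN"] = "hf_xxxxx"  # illustrative placeholder
assert get_token() == "hf_xxxxx"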

File tree

63 files changed: +20 −221 lines changed

(Large commits hide some content by default; only a subset of the changed files appears below.)

.github/workflows/benchmark_v2.yml

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ env:
   TRANSFORMERS_IS_CI: yes
   # For gated repositories, we still need to agree to share information on the Hub repo. page in order to get access.
   # This token is created under the bot `hf-transformers-bot`.
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
 
 jobs:
   benchmark-v2:

.github/workflows/check_failed_tests.yml

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ env:
   RUN_SLOW: yes
   # For gated repositories, we still need to agree to share information on the Hub repo. page in order to get access.
   # This token is created under the bot `hf-transformers-bot`.
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
   TF_FORCE_GPU_ALLOW_GROWTH: true
   CUDA_VISIBLE_DEVICES: 0,1

.github/workflows/model_jobs.yml

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ env:
   RUN_SLOW: yes
   # For gated repositories, we still need to agree to share information on the Hub repo. page in order to get access.
   # This token is created under the bot `hf-transformers-bot`.
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
   TF_FORCE_GPU_ALLOW_GROWTH: true
   CUDA_VISIBLE_DEVICES: 0,1

.github/workflows/model_jobs_intel_gaudi.yml

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ env:
   PT_HPU_LAZY_MODE: 0
   TRANSFORMERS_IS_CI: yes
   PT_ENABLE_INT64_SUPPORT: 1
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
   HF_HOME: /mnt/cache/.cache/huggingface
 
 jobs:

.github/workflows/self-comment-ci.yml

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ env:
   RUN_SLOW: yes
   # For gated repositories, we still need to agree to share information on the Hub repo. page in order to get access.
   # This token is created under the bot `hf-transformers-bot`.
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
   TF_FORCE_GPU_ALLOW_GROWTH: true
   CUDA_VISIBLE_DEVICES: 0,1

.github/workflows/self-scheduled-intel-gaudi.yml

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ env:
   PT_HPU_LAZY_MODE: 0
   TRANSFORMERS_IS_CI: yes
   PT_ENABLE_INT64_SUPPORT: 1
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
   HF_HOME: /mnt/cache/.cache/huggingface
 
 jobs:

.github/workflows/self-scheduled.yml

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ env:
   RUN_SLOW: yes
   # For gated repositories, we still need to agree to share information on the Hub repo. page in order to get access.
   # This token is created under the bot `hf-transformers-bot`.
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
   TF_FORCE_GPU_ALLOW_GROWTH: true
   CUDA_VISIBLE_DEVICES: 0,1

.github/workflows/ssh-runner.yml

Lines changed: 5 additions & 5 deletions
@@ -14,7 +14,7 @@ on:
         required: true
 
 env:
-  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
+  HF_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
   HF_HOME: /mnt/cache
   TRANSFORMERS_IS_CI: yes
   OMP_NUM_THREADS: 8

@@ -140,11 +140,11 @@ jobs:
           cd /transformers 2>/dev/null || true
 
           # Remind user to set token if needed
-          if [ -z "$HF_HUB_READ_TOKEN" ]; then
-            echo "⚠️ HF_HUB_READ_TOKEN not set. Set it with:"
-            echo "   export HF_HUB_READ_TOKEN=hf_xxxxx"
+          if [ -z "$HF_TOKEN" ]; then
+            echo "⚠️ HF_TOKEN not set. Set it with:"
+            echo "   export HF_TOKEN=hf_xxxxx"
           else
-            echo "✅ HF_HUB_READ_TOKEN is set"
+            echo "✅ HF_TOKEN is set"
           fi
 
           echo "📁 Working directory: $(pwd)"
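Once inside the SSH session with HF_TOKEN exported, a quick sanity check is to ask the Hub who the token belongs to. A minimal sketch, assuming huggingface_hub is installed in the runner environment:

from huggingface_hub import whoami

# whoami() picks up HF_TOKEN via the normal token resolution; it raises
# if the token is missing or invalid, and otherwise returns account info.
print(whoami()["name"])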

src/transformers/testing_utils.py

Lines changed: 0 additions & 34 deletions
@@ -648,40 +648,6 @@ def require_flash_attn_3(test_case):
     return unittest.skipUnless(is_flash_attn_3_available(), "test requires Flash Attention 3")(test_case)
 
 
-def require_read_token(test_case):
-    """
-    A decorator that loads the HF token for tests that require to load gated models.
-    """
-    token = os.getenv("HF_HUB_READ_TOKEN")
-
-    if isinstance(test_case, type):
-        for attr_name in dir(test_case):
-            attr = getattr(test_case, attr_name)
-            if isinstance(attr, types.FunctionType):
-                if getattr(attr, "__require_read_token__", False):
-                    continue
-                wrapped = require_read_token(attr)
-                if isinstance(inspect.getattr_static(test_case, attr_name), staticmethod):
-                    # Don't accidentally bind staticmethods to `self`
-                    wrapped = staticmethod(wrapped)
-                setattr(test_case, attr_name, wrapped)
-        return test_case
-    else:
-        if getattr(test_case, "__require_read_token__", False):
-            return test_case
-
-        @functools.wraps(test_case)
-        def wrapper(*args, **kwargs):
-            if token is not None:
-                with patch("huggingface_hub.utils._headers.get_token", return_value=token):
-                    return test_case(*args, **kwargs)
-            else:  # Allow running locally with the default token env variable
-                return test_case(*args, **kwargs)
-
-        wrapper.__require_read_token__ = True
-        return wrapper
-
-
 def require_peft(test_case):
     """
     Decorator marking a test that requires PEFT.
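With the decorator gone, a test that loads a gated model needs no wrapper at all: as long as HF_TOKEN is exported, from_pretrained authenticates through huggingface_hub's normal token resolution. A hedged before/after sketch (the test class and model id are illustrative, not taken from this commit):

import unittest

from transformers import AutoTokenizer


class GatedModelTest(unittest.TestCase):
    # Before this commit: decorated with @require_read_token, which patched
    # huggingface_hub's get_token() to return HF_HUB_READ_TOKEN.
    # After: no decorator needed; from_pretrained reads HF_TOKEN on its own.
    def test_load_gated_tokenizer(self):
        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
        self.assertIsNotNone(tokenizer)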

tests/generation/test_utils.py

Lines changed: 0 additions & 3 deletions
@@ -44,7 +44,6 @@
     require_flash_attn,
     require_flash_attn_3,
     require_optimum_quanto,
-    require_read_token,
     require_torch,
     require_torch_accelerator,
     require_torch_gpu,

@@ -3943,7 +3942,6 @@ def test_generate_compile_fullgraph_tiny(self):
         gen_out = compiled_generate(**model_inputs, generation_config=generation_config)
         self.assertTrue(gen_out.shape[1] > model_inputs["input_ids"].shape[1])  # some text was generated
 
-    @require_read_token
     @slow
     def test_assisted_generation_early_exit(self):
         """

@@ -4471,7 +4469,6 @@ def test_load_generation_config_from_text_subconfig(self):
         # test that we can generate without inputs, i.e. from BOS
         _ = model.generate()
 
-    @require_read_token
     @slow
     @require_torch_accelerator
     def test_cache_device_map_with_vision_layer_device_map(self):

0 commit comments
