
fix sentencepiece tokenizer special tokens#11811

Merged
akoumpa merged 14 commits into main from akoumparouli/fix_sentencepiece_tokenizer_special_tokens
Jan 13, 2025

Conversation

@akoumpa
Collaborator

@akoumpa akoumpa commented Jan 10, 2025

What does this PR do ?

Changes:

  1. Previously, a spurious space token was inserted between a special token and the first word of the text that follows it; this PR removes it.
  2. Apply the fix above to both text_to_tokens and text_to_ids.
  3. If a special token already exists in the vocabulary, reuse its existing id when marking it as a special token.
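The id-assignment rule in point 3 can be sketched with a toy vocabulary. This is a minimal, self-contained illustration, not NeMo's actual implementation; the function name and vocabulary contents are made up.

```python
# Toy illustration of point 3: when registering special tokens, reuse the
# id of any token already present in the vocabulary instead of allocating
# a fresh one. Names and structure are hypothetical, not NeMo's.
def register_special_tokens(vocab, special_tokens):
    """Return {token: id}, reusing existing vocab ids where possible."""
    special_ids = {}
    next_id = max(vocab.values()) + 1 if vocab else 0
    for tok in special_tokens:
        if tok in vocab:
            special_ids[tok] = vocab[tok]  # reuse the existing id
        else:
            special_ids[tok] = next_id     # allocate a new id
            next_id += 1
    return special_ids

vocab = {"<unk>": 0, "<s>": 1, "</s>": 2, "\u2581hello": 3}
print(register_special_tokens(vocab, ["<s>", "<mask>"]))
# {'<s>': 1, '<mask>': 4}
```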

Collection: [Note which collection this PR will affect]

Changelog

  • Add specific line by line info of high level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
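A toy stand-in for the separator-trimming fix in points 1 and 2. The real change lives in NeMo's SentencePieceTokenizer behind the trim_spm_separator_after_special_token option added by this PR; the function and token names below are illustrative only.

```python
# Sketch of the bug fix: drop the spurious "▁" prefix from a token that
# immediately follows a special token (the extra space this PR removes).
# "<extra_id_0>" and the function name are hypothetical examples.
SPM_SEP = "\u2581"  # SentencePiece's "▁" word-boundary marker

def trim_separator_after_specials(tokens, special_tokens):
    """Trim the leading separator from the token right after a special token."""
    out = []
    prev = None
    for tok in tokens:
        if prev in special_tokens and tok.startswith(SPM_SEP):
            tok = tok[len(SPM_SEP):]
        if tok:  # skip tokens that were only a separator
            out.append(tok)
        prev = tok
    return out

tokens = ["<extra_id_0>", "\u2581hello", "\u2581world"]
print(trim_separator_after_specials(tokens, {"<extra_id_0>"}))
# ['<extra_id_0>', 'hello', '▁world']
```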

GitHub Actions CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
@akoumpa akoumpa force-pushed the akoumparouli/fix_sentencepiece_tokenizer_special_tokens branch 2 times, most recently from f12af7d to e4251fc on January 10, 2025 02:39
akoumpa and others added 2 commits January 10, 2025 02:40
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
…cial_tokens

Signed-off-by: Alexandros Koumparoulis <153118171+akoumpa@users.noreply.github.com>
@github-actions
Contributor

[🤖]: Hi @akoumpa 👋,

We wanted to let you know that a CICD pipeline for this PR just finished successfully.

So it might be time to merge this PR or get some approvals.

I'm just a bot, so I'll leave it to you to decide what to do next.

//cc @pablo-garay @ko3n1g

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
@akoumpa akoumpa changed the title from "Akoumparouli/fix sentencepiece tokenizer special tokens" to "fix sentencepiece tokenizer special tokens" Jan 10, 2025
@akoumpa akoumpa added Run CICD and removed Run CICD labels Jan 10, 2025
@github-actions
Contributor

[🤖]: Hi @akoumpa 👋,

We wanted to let you know that a CICD pipeline for this PR just finished successfully.

So it might be time to merge this PR or get some approvals.

I'm just a bot, so I'll leave it to you to decide what to do next.

//cc @pablo-garay @ko3n1g

akoumpa and others added 4 commits January 10, 2025 16:17
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
@akoumpa akoumpa added Run CICD and removed Run CICD labels Jan 11, 2025
@github-actions
Contributor

beep boop 🤖: 🙏 The following files have warnings. If you are familiar with these, please try to help us improve the code base.


Your code was analyzed with PyLint. The following annotations have been identified:

************* Module nemo.collections.common.tokenizers.sentencepiece_tokenizer
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:42:0: C0301: Line too long (140/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:43:0: C0301: Line too long (139/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:341:0: C0301: Line too long (137/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:396:0: C0301: Line too long (120/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:405:0: C0301: Line too long (135/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:410:0: C0301: Line too long (131/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:411:0: C0301: Line too long (153/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:412:0: C0301: Line too long (155/119) (line-too-long)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:233:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:291:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:299:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:307:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:315:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:322:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:329:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:336:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:348:4: C0116: Missing function or method docstring (missing-function-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:357:0: C0115: Missing class docstring (missing-class-docstring)
nemo/collections/common/tokenizers/sentencepiece_tokenizer.py:358:4: C0116: Missing function or method docstring (missing-function-docstring)
************* Module nemo.collections.nlp.modules.common.tokenizer_utils
nemo/collections/nlp/modules/common/tokenizer_utils.py:73:0: C0301: Line too long (199/119) (line-too-long)
nemo/collections/nlp/modules/common/tokenizer_utils.py:96:0: C0301: Line too long (149/119) (line-too-long)
nemo/collections/nlp/modules/common/tokenizer_utils.py:131:0: C0301: Line too long (146/119) (line-too-long)
nemo/collections/nlp/modules/common/tokenizer_utils.py:233:0: C0301: Line too long (146/119) (line-too-long)
nemo/collections/nlp/modules/common/tokenizer_utils.py:42:0: C0115: Missing class docstring (missing-class-docstring)

-----------------------------------
Your code has been rated at 9.37/10

Mitigation guide:

  • Add sensible and useful docstrings to functions and methods
  • For trivial methods like getter/setters, consider adding # pylint: disable=C0116 inside the function itself
  • To disable multiple functions/methods at once, put a # pylint: disable=C0116 before the first and a # pylint: enable=C0116 after the last.
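The disable/enable pattern in the mitigation guide above can be illustrated with a small example. The class and method names here are made up for illustration; they are not from the PR's code.

```python
# Illustrates the mitigation guide: wrap a run of trivial getters in
# "# pylint: disable=C0116" / "# pylint: enable=C0116" so PyLint skips
# the missing-function-docstring check only for those methods.
class TokenizerStats:
    """Holds simple tokenizer attributes; the accessors below are trivial."""

    def __init__(self, vocab_size, pad_id):
        self._vocab_size = vocab_size
        self._pad_id = pad_id

    # pylint: disable=C0116  # trivial getters: docstrings add no value
    def vocab_size(self):
        return self._vocab_size

    def pad_id(self):
        return self._pad_id
    # pylint: enable=C0116

print(TokenizerStats(32000, 0).vocab_size())
# 32000
```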

By applying these rules, we reduce the occurrence of this message in the future.

Thank you for improving NeMo's documentation!

1 similar comment
@github-actions
Contributor

[🤖]: Hi @akoumpa 👋,

We wanted to let you know that a CICD pipeline for this PR just finished successfully.

So it might be time to merge this PR or get some approvals.

I'm just a bot, so I'll leave it to you to decide what to do next.

//cc @pablo-garay @ko3n1g

@akoumpa akoumpa requested a review from suiyoubi January 11, 2025 03:42
@akoumpa akoumpa marked this pull request as ready for review January 13, 2025 17:18
Collaborator

@cuichenx cuichenx left a comment


LGTM

@akoumpa akoumpa merged commit 8279d3e into main Jan 13, 2025
@akoumpa akoumpa deleted the akoumparouli/fix_sentencepiece_tokenizer_special_tokens branch January 13, 2025 18:56
abhinavg4 pushed a commit that referenced this pull request Jan 30, 2025
* Allow special tokens in vocab

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* pass special tokens to SentencePieceTokenizer

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* add test

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* cleanup

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* cleanup

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* add trim_spm_separator_after_special_token option

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* add test

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>

* remove unused code

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* also handle dict in omegaconf to_container

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* also handle special tokens that are already in vocab

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* add test

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>

* Apply isort and black reformatting

Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>

---------

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
Signed-off-by: Alexandros Koumparoulis <153118171+akoumpa@users.noreply.github.com>
Co-authored-by: akoumpa <akoumpa@users.noreply.github.com>
Signed-off-by: Abhinav Garg <abhgarg@nvidia.com>
youngeunkwon0405 pushed a commit to youngeunkwon0405/NeMo that referenced this pull request Feb 10, 2025
Signed-off-by: Youngeun Kwon <youngeunk@nvidia.com>
