Merged
From what I can see, this is done in linear time: 4\*O(n). This tokenizer change converts that to something a little quicker: 3\*O(n). It seems that not using a capture group, and using something other than `split`, would be a big win. Other than that, the changes were meager.

I tried a few changes to the regular expression and the example in the issue. I was able to speed it up by 23% with minimal changes to the codebase. Other savings were to be had, but I'd request feedback before going that route.

I used https://regex101.com/ (and pcre2) to evaluate the cost of the TOKENIZER, and verified with cruby 3.0.6 (by eyeball - nothing too extensive):

```
/(%%\{[^\}]+\}|%\{[^\}]+\})/ =~ ('%{{' * 9999) + '}'

/(%%\{[^\}]+\}|%\{[^\}]+\})/ ==> 129,990 steps
/(%?%\{[^\}]+\})/            ==> 129,990 steps
/(%%?\{[^\}]+\})/            ==>  99,992 steps (simple savings of 25%) <===
/(%%?\{[^%}{]+\})/           ==>  89,993 steps (limiting variable contents has minimal gains)
```

Also of note are the null/simple cases, which seem to speak for themselves:

```
/x/ =~ ('%{{' * 9999) + '}'

/x/       ==> 29,998 steps
/(x)/     ==> 59,996 steps
/%{x/     ==> 49,998 steps
/(%%?{x)/ ==> 89,993 steps
```

And comparing against a plain string of the same length, which doesn't fare much worse than the specially crafted string:

```
/x/ =~ ('abb' * 9999) + 'c'

/x/                          ==> 29,999
/(%%?{x)/                    ==> 59,998
/(%%?\{[^\}]+\})/            ==> 59,998
/(%%\{[^\}]+\}|%\{[^\}]+\})/ ==> 89,997
```

There really isn't much room for improvement overall. This suggests that if there is a vulnerability in the regular expression, it is not expressed by this example (especially since all of these appear to be linear).

per ruby-i18n#667
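The step counts above can be sanity-checked in plain Ruby. A minimal sketch (the constant names here are mine, not from the i18n codebase; the patterns are the two measured above) comparing the old alternation-based tokenizer against the folded `%%?` version on the crafted input:

```ruby
require "benchmark"

# Illustrative names: the alternation pattern currently used by the
# tokenizer, and the folded pattern proposed in this change.
OLD_TOKENIZER = /(%%\{[^\}]+\}|%\{[^\}]+\})/
NEW_TOKENIZER = /(%%?\{[^\}]+\})/

# The adversarial input from the issue: many unclosed openers, one closer.
input = ("%{{" * 9_999) + "}"

# Both patterns tokenize an ordinary interpolation string identically;
# the capture group makes split keep the matched delimiters in the result.
text = "Hello %{name}, you owe %%{amount}."
p text.split(OLD_TOKENIZER) == text.split(NEW_TOKENIZER)  # => true

Benchmark.bm(4) do |x|
  x.report("old") { 100.times { input.split(OLD_TOKENIZER) } }
  x.report("new") { 100.times { input.split(NEW_TOKENIZER) } }
end
```

Note that `String#split` only returns the matched delimiters because of the capture group, which is why dropping the group (as suggested above) would also mean moving away from `split` — e.g. to `scan` or `gsub` — rather than a one-character regex tweak.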
Collaborator
Thank you very much :)