PEP command incorrectly assumes file extension type #367


Closed
kosayoda opened this issue May 21, 2019 · 9 comments · Fixed by #378
Assignees
Labels
s: WIP Work In Progress t: bug Something isn't working

Comments

@kosayoda
Member

The current implementation of the PEP command assumes that PEPs before PEP 542 are written in .txt, and PEPs after PEP 542 are written in .rst.

bot/bot/cogs/utils.py

Lines 40 to 43 in 301e877

if pep_number > 542:
    pep_url = f"{self.base_github_pep_url}{pep_number:04}.rst"
else:
    pep_url = f"{self.base_github_pep_url}{pep_number:04}.txt"

That is not the case.
PEPs before 542 that are written in .rst:

  • pep-0012, pep-0013, pep-0505

PEPs after 542 that are written in .txt:

  • pep-0544, pep-0545, pep-0546, pep-0628, pep-0666, pep-0754
  • pep-3xxx

There was a PR in 2017 that renamed every .txt PEP file to .rst. It was merged, but later reverted because the rename lost revision history. It is therefore very likely these edge cases will remain.

@eivl
Contributor

eivl commented May 21, 2019

so instead of assuming one or the other, could we just check for both?
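The "check both" idea could be sketched roughly as below. This is a minimal sketch, not the bot's actual code: `fetch_pep` is a hypothetical stub standing in for an HTTP GET (a real implementation would use aiohttp), and `BASE_URL` is a placeholder for the real base URL.

```python
import asyncio

BASE_URL = "https://example.invalid/peps/pep-"  # placeholder, not the real base URL

async def fetch_pep(url):
    # Stub standing in for an HTTP GET; returns None to simulate a 404.
    known = {f"{BASE_URL}0008.txt"}  # pretend only the .txt file exists
    return f"contents of {url}" if url in known else None

async def get_pep(pep_number):
    # Try both extensions instead of guessing from the PEP number.
    for ext in ("rst", "txt"):
        contents = await fetch_pep(f"{BASE_URL}{pep_number:04}.{ext}")
        if contents is not None:
            return contents
    return None

print(asyncio.run(get_pep(8)))
```

Trying `.rst` first and falling back to `.txt` (or the reverse) means a worst case of two requests per lookup.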

@SebastiaanZ
Member

I think that's a good idea.

We could still implement a look-up order with the current division (or a better one) to minimize the number of API calls we make. I'm not sure how relevant that is and how often this feature is used, though.

@GhostofGoes
Contributor

Unfortunately, I think the simplest solution would be to write a script to generate a mapping of pep number to file extension, then hardcode that lookup table in the bot.

@lemonsaurus
Member

The simplest solution is to hardcode a lookup table? But new PEPs get added all the time; that sounds like by far the most complicated solution. The simplest would be to just make two requests. Our bot could technically even make them in parallel since it's async, taking roughly the same time as it does now.
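The parallel variant could look something like this sketch, assuming the same hypothetical `fetch_pep` stub (a stand-in for an HTTP GET that returns None on 404) and a placeholder `BASE_URL`:

```python
import asyncio

BASE_URL = "https://example.invalid/peps/pep-"  # placeholder, not the real base URL

async def fetch_pep(url):
    # Stub standing in for an HTTP GET; returns None to simulate a 404.
    known = {f"{BASE_URL}0012.rst"}  # pretend only the .rst file exists
    await asyncio.sleep(0)  # stand-in for network latency
    return f"contents of {url}" if url in known else None

async def get_pep(pep_number):
    # Fire both requests concurrently; total latency is roughly one round trip.
    rst, txt = await asyncio.gather(
        fetch_pep(f"{BASE_URL}{pep_number:04}.rst"),
        fetch_pep(f"{BASE_URL}{pep_number:04}.txt"),
    )
    return rst if rst is not None else txt

print(asyncio.run(get_pep(12)))
```

Since `asyncio.gather` awaits both coroutines concurrently, the two requests cost about the same wall-clock time as one.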

@GhostofGoes
Contributor

I suppose making extra requests doesn't hurt; it's just unnecessary work for every invocation of the command. However, the reduction in maintenance overhead is probably worth it.

@eivl
Contributor

eivl commented May 23, 2019

Looks like the command has been used less than 100 times by our users. It's mostly staff that use it, and even that is not remotely high; we're talking on the order of less than once per day. Also, 90% of uses are !pep 8, so maybe make a "normal" tag out of it? One that is more descriptive of what it means and is.

@SebastiaanZ
Member

SebastiaanZ commented May 23, 2019

Another way to go would be to use some sort of caching, like a lru_cache. This would require some refactoring of the code, but would limit the number of requests made per pep to 2 (for both extension versions) per restart of the bot. The cache size doesn't even have to be that large, since I don't think we actually request that many different PEPs and the "classics", like pep-8, are probably requested often enough to stay in the cache. (And, if not, an additional 2 requests won't hurt us.)

edit: Never mind, we do want to be sensitive to status changes of the PEP, which means we could only cache the correct URL to make the request to. This optimization, from 2 requests down to 1, is not really worth refactoring the code for.

I think a normal tag for pep-8 with a custom explanation may be a good idea as well, but I'd like to retain the current functionality of having that succinct embed with status data available for PEP-8 as well.
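The caching idea discussed above (remembering only which extension a PEP uses, while still fetching the contents fresh so status changes are picked up) could be sketched like this. `EXISTING_FILES` is a hypothetical stand-in for the result of probing both URLs:

```python
from functools import lru_cache

# Hypothetical table standing in for the result of probing both extensions.
EXISTING_FILES = {8: "txt", 12: "rst"}

@lru_cache(maxsize=128)
def resolve_extension(pep_number):
    # In the bot this would make up to two requests; because the result is
    # cached, each PEP costs at most two extra requests per bot restart.
    return EXISTING_FILES.get(pep_number)

print(resolve_extension(8))  # first call populates the cache
print(resolve_extension(8))  # second call is served from the cache
```

With a small `maxsize`, frequently requested "classics" like PEP 8 would stay cached while rare lookups are evicted harmlessly.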

@lemonsaurus
Member

Making extra requests at this scale (maybe 100 uses over what, 8 or 9 months?) is completely acceptable.

@lemonsaurus
Member

If we could make both requests in parallel, that would be nice though, so as not to double the execution time.

@MarkKoz MarkKoz added a: backend Related to internal functionality and utilities (error_handler, logging, security, utils and core) t: bug Something isn't working s: WIP Work In Progress cogs and removed a: backend Related to internal functionality and utilities (error_handler, logging, security, utils and core) labels Aug 6, 2019
7 participants