rescue less #55
Conversation
err_output = stderr.gets
stderr.close

if 0 == exit_code
I noticed that there was an `exit_code.to_i` before the move. Is that still needed here?
Nope. Different invocation. This is already an int.
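For context, a minimal sketch of that kind of invocation (assuming `Open3.popen3`, which matches the stream handling in the diff above but is not necessarily this engine's exact code): `Process::Status#exitstatus` already returns an Integer, so no `.to_i` is needed.

```ruby
require "open3"

# Illustrative only: with popen3, the exit code arrives as an Integer.
stdin, stdout, stderr, wait_thr = Open3.popen3("ls", "no-such-file")
stdin.close
err_output = stderr.gets
stderr.close
exit_code = wait_thr.value.exitstatus # Process::Status#exitstatus => Integer

if 0 == exit_code
  puts stdout.read
else
  warn "command failed (#{exit_code}): #{err_output}"
end
stdout.close
```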
👍
This looks like a great approach 👍
module CC
  module Engine
    module Analyzers
      class Base
        RESCUABLE_ERRORS = [
Opinion question for the team: should we also catch `Timeout::Error`? Flay does internally (not in a code path we're using, FWIW). I think the rationale was that if the parser took too long on a file, it's probably a parser bug (or maybe an absurdly large file). We were implicitly catching it before.
I'd try to match the behavior of Flay run on the command line, then. If Flay catches the timeout error, logs a skip, and continues on, then I think we should do the same.
@dblandin's reasoning sounds good to me.
I agree as well. It has been added now.
Actually, I'm going to back this out: I remembered why I was hesitant about this in the first place. This series of changes was prompted by concerns about non-deterministic results from this engine (due to memory settings, though really a variety of system issues could come into play).
These timeouts are, by their nature, slightly non-deterministic. A file complex & big enough to be borderline might time out on some runs & not others. Memory pressure on the system could cause execution time to vary between one run & another. Etc.
So instead I think I'm going to consider the timeout errors fatal & up the timeout limit to something absurd like 5 minutes. That should ensure it only gets triggered in pathological cases (like a parser bug leading to an infinite loop or something).
@jpignata this is relevant to your interests, so your thoughts are welcome.
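To make the resolution concrete, a minimal sketch of the selective-rescue pattern (the actual `RESCUABLE_ERRORS` contents are elided in the diff above, so the error classes and the `analyze` helper here are placeholders):

```ruby
# Placeholder list; the engine's real RESCUABLE_ERRORS are not shown in
# this diff. Timeout::Error is deliberately absent: per this thread,
# timeouts are treated as fatal.
RESCUABLE_ERRORS = [
  ArgumentError,
  EncodingError,
].freeze

def process_file(path)
  analyze(path) # hypothetical per-file parse/analyze step
rescue => ex
  # Re-raise anything we haven't explicitly deemed rescuable.
  raise unless RESCUABLE_ERRORS.any? { |klass| ex.is_a?(klass) }
  warn "Skipping #{path}: #{ex.class} - #{ex.message}"
end
```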
Force-pushed from 618ac29 to 5f1fddd
The Analyzer `#run` loop was rescuing *all* exceptions, which is excessive. I don't think we should hard-fail on all exceptions, either: the Ruby parser gem is definitely known to barf on some valid but esoteric Ruby code, and I wouldn't be surprised if the other parsers had similar edge cases. So this changes behavior to only skip files for a known set of catchable errors.

The out-of-process parsers are a little tricky since all you can get from them is an exit code and output streams: for now, wrapping those in our own exception class seems reasonable.

I'm still catching all exceptions, but only so that we can log a message about which file is impacted before we re-raise. Since a raw exception we'll likely abort on may not contain helpful information about the file that triggered it, the log message should help when debugging such cases.
Each of the out-of-process parser classes implemented its own CommandLineRunner, and they were all functionally the same (with some small differences that weren't actually used). This pulls them up into a single shared class.
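A rough sketch of what a single shared runner could look like (the class shape, the `CommandFailure` name, and the signature below are illustrative, not the PR's actual code):

```ruby
require "open3"
require "timeout"

class CommandLineRunner
  DEFAULT_TIMEOUT = 300 # seconds; the "absurd" 5-minute ceiling

  class CommandFailure < StandardError; end

  def initialize(command, timeout = DEFAULT_TIMEOUT)
    @command = command
    @timeout = timeout
  end

  # Feeds input to the command, yields its stdout on success, and wraps
  # any non-zero exit in our own exception class. Timeout::Error is not
  # rescued here, so a hung parser stays fatal, as intended.
  def run(input)
    Timeout.timeout(@timeout) do
      out, err, status = Open3.capture3(@command, stdin_data: input)

      unless status.success?
        raise CommandFailure, "`#{@command}` exited #{status.exitstatus}: #{err}"
      end

      yield out
    end
  end
end
```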
Timeouts are slightly non-deterministic by their nature, so we should consider them fatal. They can still be useful, but since they're now fatal they should only fire in truly pathological cases, so I've upped the limit to 5 minutes.
Force-pushed from 5f1fddd to 724e48f
The Analyzer `#run` loop was rescuing *all* exceptions, which is clearly excessive. But I don't think we should hard-fail on all exceptions, either: the Ruby parser gem is definitely known to barf on some valid but esoteric Ruby code, and I wouldn't be surprised if the other parsers had similar edge cases.
So this changes behavior to only skip files for a known set of catchable errors. The out-of-process parsers are a little tricky since all you can get from them is an exit code and output streams: for now, wrapping any non-zero exit from them in our own exception class seems reasonable. They're not affected by Java heap problems, and since we don't control the heap anymore, any potential memory problems should kill the whole container with OOM.
I'm still catching all exceptions, but only so that we can log a message about which file is impacted before we re-raise. Since a raw exception that will cause an abort may not contain helpful information about the file that triggered it, the log message should help when debugging such cases.
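The log-before-re-raise piece, sketched with hypothetical names:

```ruby
def analyze_file(path)
  parse(path) # hypothetical parse step
rescue => ex
  # Catch everything only long enough to record which file was being
  # processed, then let the exception abort the run as before.
  warn "Error processing #{path}: #{ex.class} - #{ex.message}"
  raise
end
```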
Note that there is a similar case of an excessive `rescue` in the `FileThreadPool`. I'm investigating & fixing that separately.

Thoughts, @codeclimate/review?