cmd/vet: add math check for erroneous math expressions #30951
Comments
Currently constant division by zero is a compile-time error, so there is no way to write a constant expression that produces a floating-point infinity or a NaN. So I assume you are suggesting that the language be changed such that a constant division by zero produces infinity or NaN rather than a compile-time error.
Correct, that is my intent. Division by [-]0 yields NaN, +Inf, or -Inf, depending on the dividend.
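A quick illustration of the status quo the exchange above describes (a minimal snippet, not taken from the issue): a constant division by zero is rejected at compile time, while the same expression on typed run-time values already follows IEEE 754.

```go
package main

import "fmt"

func main() {
	// Constant expression: rejected by the compiler.
	// const c = 1.0 / 0.0 // error: division by zero

	// Typed, run-time expressions follow IEEE 754 and yield +Inf, -Inf, NaN.
	zero := 0.0
	fmt.Println(1.0/zero, -1.0/zero, zero/zero) // +Inf -Inf NaN
}
```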
In this same arena would be a standard way to specify quiet and signalling NaNs, and to specify/test the 51-bit NaN payload for float64s. If you want to add this to the proposal that seems natural; if not, I could open a separate proposal.
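For reference, this is what can be done at run time today with math.Float64bits/Float64frombits; the constant language has no way to express it. The bit layout (quiet bit at bit 51, 51-bit payload below it) follows IEEE 754; the specific payload value is just an illustration.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Build a quiet NaN: exponent bits all ones, quiet bit (bit 51) set,
	// and an arbitrary 51-bit payload in the low mantissa bits.
	const payload = 0x123
	qnan := math.Float64frombits(0x7FF8000000000000 | payload)

	fmt.Println(math.IsNaN(qnan))                                  // true
	fmt.Printf("payload: %#x\n", math.Float64bits(qnan)&(1<<51-1)) // 0x123
}
```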
Do other programming languages support this in their constant arithmetic? My main goal here is a net increase in adoption and usability: don't change constants in a way that makes them radically different from what they were (harder to learn/explain), and make the changes align with existing run-time behavior, floating-point standards, and what other languages do for constants, so that there's actually less to learn overall.
The notion of "exact" (for all practical purposes) constants in Go has hugely simplified dealing with numeric constants in code, because arithmetic behaves (essentially) like it would in math. We do not distinguish between Having written my share of floating-point code I agree that it would be convenient to write Needless to say that adding I am against this proposal. |
I propose that in constant land -0 and -0.0 both convert to the same float (-0.0).
@dr2chase So you're saying […]
-0 has a default type of integer, but float64(-0) === float64(-0.0) (using === to distinguish from Go or IEEE == rules). I think that's simpler and a consistent extrapolation of the Go constant rules to include these additional values.
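As a minimal sketch of what this would change (under current Go semantics): today both constant spellings lose the sign, and an IEEE negative zero has to be manufactured explicitly at run time.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Today the constants -0 and -0.0 both convert to plain +0.
	fmt.Println(math.Signbit(float64(-0)), math.Signbit(float64(-0.0))) // false false

	// An IEEE negative zero currently has to be produced at run time.
	negZero := math.Copysign(0, -1)
	fmt.Println(math.Signbit(negZero)) // true
}
```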
So you're saying that an -0 behaves like a 0 int but it becomes a -0.0 in floating-point context. Agreed that that would be easy enough to do. Internally (compiler) we'd have to keep track of the sign of zero integers, which is something we don't do at the moment. We'd also have to incorporate the sign rules in all integer operations that produce a zero, for consistency, so they behave "correctly" in floating-point context where they might be a negative zero. More annoying.

But that still adds extra complexity because now we have to explain all the combinations of operations involving negative zeros in constant arithmetic (even if inf and nan are ignored). And not just for floating-point constants but all numeric constants, since they might be used in floating-point context. I don't even want to think about the rules for complex numbers.

This adds real complexity for little benefit. I maintain that the majority of programmers don't need to care about negative zeroes, and making Go constants follow IEEE rules only makes life complicated.

The reason for -0.0 in IEEE floating-point computations is that floats may underflow to 0. Knowing that it's -0 provides some residual information (the value is negative, but so close to zero that we can't represent it anymore). There's no such thing as underflow in Go constants.
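A small run-time example of the underflow point made above, using only the standard math package: dividing the smallest negative subnormal by two rounds to -0, and the sign is the only information that survives.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// The smallest-magnitude negative float64, divided by two,
	// underflows to -0: the sign survives as residual information.
	tiny := -math.SmallestNonzeroFloat64
	r := tiny / 2
	fmt.Println(r, math.Signbit(r)) // -0 true
}
```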
Robert’s arguments are regrettably persuasive. I dream of the “just perfect” solution for many things, but the counterpoint argument about “will it be confusing on average” is powerful. My exposure to confusions in GoNuts has made me far more appreciative of such matters. (Imagine if the presence of synchronization primitives made a struct impossible to pass by value, for example.)
Another possibility along these lines would be to simply disallow negative floating-point zero as an untyped constant. That would avoid confusing people who write -0.0 and expect an IEEE negative zero.
I like @ianlancetaylor's suggestion. It is unlikely, but it may break existing code (albeit it would be trivial to fix by removing the minus sign).
A vet check would be okay; that would at least prevent people who intended to get IEEE -0.0 from being surprised (assuming that they use vet).
If we add the check to the suite run during go test, […]
I'm OK with landing a vet check for floating-point constant -0 at the start of Go 1.14, and we can see what happens. I wonder if, instead of adding a new check specifically for that, we should start a more general 'math' vet check that could absorb the current 'dumb shift' checks as well.
Retitled to make this about adding a new 'math' vet check. That will include:
- the floating-point constant -0 (negative zero) check discussed above
- the existing oversized ('dumb') shift checks
I think at the same time we should investigate adding checks for 2^n as well. (See the recent https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90885.)
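The kind of mistake such a check would catch, as a small illustrative snippet: in Go, ^ is bitwise XOR, so 2^8 is 10, not 256.

```go
package main

import "fmt"

func main() {
	fmt.Println(2 ^ 8)  // 10: ^ is bitwise XOR, not exponentiation
	fmt.Println(1 << 8) // 256: the intended power of two
}
```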
Mind if I take a crack at this? I have a working analysis pass to check for negative floating-point zero, but seeing as it's my first vet check, I'm not sure if it's too lenient or too exhaustive. Additionally, would it be better to send a separate CL that moves the oversized-shifts pass into the math check? Edit: I now have the 2^n check implemented as well. Now I'm not so sure it's possible to absorb the shift checks without violating backwards compatibility.
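For a sense of what such a pass might look like (a minimal sketch, not the CL under review; the package name, diagnostic text, and the string-based literal matching are assumptions, and a real check would evaluate constant values instead):

```go
// Package negzero is a minimal sketch of an analyzer that flags the
// floating-point constant expression -0.0, which silently evaluates to +0.
package negzero

import (
	"go/ast"
	"go/token"

	"golang.org/x/tools/go/analysis"
	"golang.org/x/tools/go/analysis/passes/inspect"
	"golang.org/x/tools/go/ast/inspector"
)

var Analyzer = &analysis.Analyzer{
	Name:     "negzero",
	Doc:      "flag -0.0, which is not an IEEE negative zero in Go",
	Requires: []*analysis.Analyzer{inspect.Analyzer},
	Run:      run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	insp := pass.ResultOf[inspect.Analyzer].(*inspector.Inspector)
	nodeFilter := []ast.Node{(*ast.UnaryExpr)(nil)}
	insp.Preorder(nodeFilter, func(n ast.Node) {
		u := n.(*ast.UnaryExpr)
		if u.Op != token.SUB {
			return
		}
		// Naive literal match; a real check would evaluate the operand
		// with go/constant via pass.TypesInfo to catch folded forms too.
		lit, ok := u.X.(*ast.BasicLit)
		if !ok || lit.Kind != token.FLOAT {
			return
		}
		if lit.Value == "0.0" || lit.Value == "0." || lit.Value == ".0" {
			pass.Reportf(u.Pos(), "-%s is +0 in Go; use math.Copysign(0, -1) for a negative zero", lit.Value)
		}
	})
	return nil, nil
}
```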
Change https://golang.org/cl/197937 mentions this issue.
This issue is currently labeled as early-in-cycle for Go 1.15. That time is now, so friendly ping. If it no longer needs to be done early in cycle, that label can be removed.
Moving to Go 1.16, both because we've entered the freeze and because the CL still needs to be reviewed and revised.
Go constants can be extended to support -0, NaN, +Inf, and -Inf as constant values.
There is no special name for any of these constants; this is support for values, not new tokens. The model for this is mythical "IEEE Reals", which is the combination of Real numbers (not floating point) with the rules for IEEE -0, NaN, +Inf, and -Inf added where they fit, in accordance with the rules for IEEE floating point.
Why should we do this?
The following conversion-from-constant-value rules are added: when converting to integer, -0 becomes 0, and NaN and Inf become errors. When converting to floating point (or a component of a complex number), the obvious mapping to IEEE floats occurs: in the case of a tiny negative fraction underflowing on conversion to float, -0 results; in the case of positive or negative overflow on conversion to float, +Inf or -Inf results, as appropriate.
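The proposed underflow/overflow behavior matches what math/big already does when converting an arbitrary-precision value to float64, which can serve as a run-time reference point (a sketch; the specific exponents and precision are arbitrary):

```go
package main

import (
	"fmt"
	"math"
	"math/big"
)

func main() {
	// A tiny negative value underflows to -0 when converted to float64.
	tiny, _, _ := big.ParseFloat("-1e-5000", 10, 200, big.ToNearestEven)
	f, _ := tiny.Float64()
	fmt.Println(f, math.Signbit(f)) // -0 true

	// A huge positive value overflows to +Inf.
	huge, _, _ := big.ParseFloat("1e5000", 10, 200, big.ToNearestEven)
	g, _ := huge.Float64()
	fmt.Println(g, math.IsInf(g, 1)) // +Inf true
}
```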
The arithmetic rules are enumerated in the following tables, which were checked against the IEEE specification and against run-time arithmetic evaluation in both Java (expected to conform to IEEE) and Go (since this commit for 1.13, believed to conform to IEEE).
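A few of the run-time identities those tables encode, checked here with ordinary float64 arithmetic (a minimal illustration, not the tables themselves):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	inf := math.Inf(1)
	negZero := math.Copysign(0, -1)

	fmt.Println(inf - inf)               // NaN
	fmt.Println(1 / negZero)             // -Inf
	fmt.Println(negZero + 0)             // 0: opposite-signed zeros sum to +0
	fmt.Println(math.Signbit(negZero*2)) // true: -0 times a positive stays -0
}
```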
Legend: in the tables below, "R" as a result means to use the rules for real numbers; "RZ" means to use the rules for real numbers with NZ replaced by Z.