Floating point semantics: unstable semantics inside/outside of a loop and among architectures #61061
Comments
The difference in results comes from different assembler instructions being used for the floating-point arithmetic. A simple analysis of the assembler output of your code shows that different instructions are emitted. One example:

```go
package main

import (
	"math"
)

func main() {
	println(math.Float64bits(Func1()))
	println(math.Float64bits(Func2()))
}

// Func1 and Func2 have no Go body here; they are implemented in assembly,
// each using a different floating-point instruction sequence.
func Func1() float64
func Func2() float64
```
Output:
As a result, it can be said that the difference comes from the instructions the processor executes, not from the Go source itself.
No, it does not depend on the loop; if you print the result outside of printAddition, everything works fine :)
Yep. To understand whether this is normal or not, you need to read https://en.wikipedia.org/wiki/IEEE_754 and analyze what requirements it places on implementations (the implementation in the ARM processor, not in Go).
@gavv suggested it's basically the same issue as, say, dotnet/runtime#64591:
This seems to be as documented in https://go.dev/ref/spec#Floating_point_operators:
@RemiMattheyDoret, have you tried adding explicit conversions as suggested in that paragraph?
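For context, the spec paragraph linked above allows an implementation to fuse operations such as x*y + z into a single rounding, and notes that an explicit floating-point conversion forces the intermediate value to be rounded, which prevents fusion. A minimal sketch of the two forms (the variable names and values are illustrative, not taken from the original report):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	x, y, z := 0.1, 10.0, -1.0

	// The compiler may fuse the multiply and add into a single FMA
	// instruction on architectures that support it (e.g. ARM64),
	// so the product is not rounded before the addition.
	maybeFused := x*y + z

	// The explicit conversion rounds the product to float64 first,
	// so the two operations cannot be fused.
	notFused := float64(x*y) + z

	fmt.Println(math.Float64bits(maybeFused), math.Float64bits(notFused))
}
```

On a machine where the first expression is compiled to an FMA, the two printed bit patterns can differ; with the explicit conversion, the result is the same everywhere.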
@bcmills on
The individual expressions generating
I agree it appears to be within spec; it's just the inconsistency based on context that could be surprising.
Thank you @bcmills! The existence of a discrepancy among machines with different instruction sets (assuming the instruction set is what matters here) is still surprising to me and feels like a bug.
The instruction set determines which of the “fused operations” are available for the compiler to use.
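As an aside, when the fused result is specifically what you want, the standard library exposes it directly: math.FMA(x, y, z) returns x*y + z computed with a single rounding on every platform, falling back to a software implementation where the CPU has no FMA instruction. A small sketch:

```go
package main

import (
	"fmt"
	"math"
)

// fma requests the fused result explicitly, independent of whether
// the compiler would fuse the plain expression x*y + z on this CPU.
func fma(x, y, z float64) float64 {
	return math.FMA(x, y, z)
}

func main() {
	fmt.Println(math.Float64bits(fma(0.1, 10, -1)))
}
```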
That is, unfortunately, the nature of compiler optimizations.
This is an area where different languages make different optimization choices. Go has chosen to generate faster code on average. The results are not bit-for-bit identical on different processors, but as pointed out above Go has a documented language feature that you can use to get bit-for-bit identical results. This is based on the expectation that more people care about getting fast results on their processor than care about getting bit-for-bit identical results on multiple processors.

Java, for example, has chosen differently, and tries to get bit-for-bit identical results on all processors. This has led to papers like “How Java’s Floating-Point Hurts Everyone Everywhere.” Not that I want to fully endorse everything that that paper says, but it suggests that there is no ideal choice here.

I'm going to close this issue because I don't think there is anything to do here. Please comment if you disagree.
What version of Go are you using (`go version`)?
Does this issue reproduce with the latest release?
Yes (I have not tried the 1.21-rc though)
What operating system and processor architecture are you using (`go env`)?
go env Output
What did you do?
Consider the following code
demo
What did you expect to see?
I expected
This is indeed what I observe on the demo as well as on a Linux machine (with an 11th Gen Intel® Core™ i7-1165G7; x86) on which I tried this code.
What did you see instead?
On two different MacBooks (with Apple M1; ARM64), however, I observe the following output
Problem
We observe a rounding error, but not on all machines and only in specific circumstances (seemingly, only when the addition is performed directly within a for loop). I see two discrepancies in floating point semantics: the result differs depending on whether the computation happens inside a loop, and it differs across architectures.
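For illustration only (this is not the original demo program; the function name, values, and structure are guesses), the pattern below is the kind of code that can show the discrepancy: inside the loop, the multiply-add may be compiled to a single fused multiply-add instruction on ARM64, while the version with an explicit conversion is always rounded twice, which matches the behaviour reported on x86 here.

```go
package main

import "fmt"

// printAdditions is an illustrative stand-in for the reported pattern.
// Inside the loop, x*step + base may be compiled to a fused multiply-add
// (one rounding) on ARM64, whereas float64(x*step) + base always rounds
// the product first and therefore gives the same bits on every machine.
func printAdditions(base, step float64, n int) {
	for i := 0; i < n; i++ {
		x := float64(i)
		fmt.Println(x*step+base, float64(x*step)+base)
	}
}

func main() {
	printAdditions(-1, 0.1, 11)
}
```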