normalize() method of Decimal class does not always preserve value #105774
From the docs:
Your example is 29 places, so it gets rounded.
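For illustration, here is a minimal sketch of that behaviour (the 29-digit value is an illustrative stand-in, not the reporter's original): under the default context (prec=28) normalize() rounds, while a wider local context preserves the value.

```python
from decimal import Decimal, localcontext

v = Decimal('0.12345678901234567890123456789')  # 29 significant digits
print(v.normalize())         # rounded under the default prec=28: 0.1234567890123456789012345679

with localcontext() as ctx:  # with enough precision, nothing is rounded
    ctx.prec = 29
    print(v.normalize())     # 0.12345678901234567890123456789
```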
@rhettinger Decimal question
I couldn't see it documented anywhere that
@ericvsmith Yes, updating the docs is probably a good idea. (I'm not offering to do it, I'm afraid; I don't have the bandwidth in the near future.) FWIW, the behaviour itself is deliberate, even if questionable: the reduce operation is documented (in the spec) as equivalent to the "plus" operation along with trimming of trailing zeros, and the "plus" operation similarly rounds to the current context (and we have test cases from Mike Cowlishaw that confirm that intention). Slightly surprisingly, IEEE 754 doesn't seem to have any equivalent operation (perhaps because there are issues at the top end of the dynamic range - e.g., in the IEEE 754
That is because rounding is the default for all operations that return a Decimal object except for
The docs say that
Also, I'm thinking of adding an entry to the Decimal FAQ section to demonstrate and explain the notion that numbers are considered exact, that they are created independent of the current context (and can have greater precision), and that contexts are applied after an operation:
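A small sketch of that distinction (the values and the precision setting here are illustrative, not the FAQ entry itself): construction is exact regardless of the context, and the context is applied only when an operation runs.

```python
import decimal

decimal.getcontext().prec = 5
a = decimal.Decimal('3.1415926535')  # construction is exact; prec is ignored
print(a)    # 3.1415926535
b = +a      # unary plus is an operation, so it applies the current context
print(b)    # 3.1416
```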
Also, I'm thinking that the docs and docstring for
(cherry picked from commit a8210b6) Co-authored-by: Raymond Hettinger <[email protected]>
@mdickinson so do we have no recourse for normalizing an arbitrary (unknown precision, unknown exponent, etc.) decimal without rounding, other than doing something like:

```python
Context(
    prec=len(value.as_tuple().digits),
    Emin=[calculate an Emin that won't throw],
    Emax=[calculate an Emax that won't throw]
).normalize(value)
```

If that's the case, it actually sounds easier to do the normalization manually by stripping zeros from
Yes, I believe that's currently the case.
Agreed. I'd either do it this way, or via the string representation.
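Both suggestions can be sketched as small helpers (`normalize_exact` and `normalize_via_string` are hypothetical names, not part of the decimal module; neither applies any context rounding):

```python
import decimal

def normalize_exact(d):
    """Hypothetical helper: trim trailing zero digits via as_tuple()."""
    sign, digits, exp = d.as_tuple()
    if not isinstance(exp, int):          # NaN/Infinity: nothing to trim
        return d
    digits = list(digits)
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()
        exp += 1
    return decimal.Decimal((sign, tuple(digits), exp))

def normalize_via_string(d):
    """Hypothetical helper: trim trailing zeros from the fixed-point string."""
    s = format(d, 'f')                    # fixed-point notation, no exponent
    if '.' in s:
        s = s.rstrip('0').rstrip('.')
    return decimal.Decimal(s)             # construction is exact, never rounds

# 32-digit input survives intact, unlike context-bound normalize()
v = decimal.Decimal('0.12345678901234567890123456789000')
assert normalize_exact(v) == v
assert normalize_via_string(v) == v
```

One caveat with the string form: `format(d, 'f')` spells out the full fixed-point representation, so it can produce very long strings for values with extreme exponents.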
For a practical recipe, this code will make a best effort and raise an exception if there is any precision loss (via the `Inexact` trap):

```python
normalized_value = decimal.Decimal(value).normalize(decimal.Context(
    traps=[decimal.Inexact],
    prec=decimal.MAX_PREC,
    Emax=decimal.MAX_EMAX,
    Emin=decimal.MIN_EMIN
))
```
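For instance, with an arbitrary high-precision input (the 32-digit value is my own example), the recipe trims trailing zeros without losing any digits:

```python
import decimal

ctx = decimal.Context(
    traps=[decimal.Inexact],   # raise rather than silently round
    prec=decimal.MAX_PREC,
    Emax=decimal.MAX_EMAX,
    Emin=decimal.MIN_EMIN,
)
v = decimal.Decimal('0.12345678901234567890123456789000')
r = v.normalize(ctx)
print(r)             # 0.12345678901234567890123456789
assert r == v        # no precision was lost
```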
I have encountered unexpected behavior while using the `normalize()` method of the `Decimal` class. I performed the following test using Python 3.10.10 and 3.11.4.

Based on my understanding, the `normalize()` method is intended to produce a canonical representation of a decimal number. However, in this case, the values of v1 and v2 are not matching, leading to an AssertionError.
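The original test snippet is not reproduced above; a minimal reconstruction of the reported behavior (the 29-digit value is my own stand-in) looks like this, and asserting `v1 == v2` fails with an AssertionError as described:

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # the default precision
v1 = Decimal('0.12345678901234567890123456789')  # 29 significant digits
v2 = v1.normalize()      # normalize() first rounds to the context precision
print(v1 == v2)          # False: the 29th digit was rounded away
```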
Linked PRs