Issue description
Pybind11 by default permits lossy conversions from numpy arithmetic types to C++ ones.
I understand the reasoning behind the current situation: python doesn't really have numeric types other than "float" and "long", so conversion from python numeric types is always going to be potentially truncating.
However, numpy scalars (int64, float64, etc.) form a hierarchy of precision with direct matches to C++ types. For the work I'm doing, accidental loss of precision can be dangerous, so in the boost::python bindings we're looking to migrate away from, we went to extensive trouble to avoid truncating casts of numpy values from python to C++.
I am aware that for py::array_t I can disable forcecasting (as an aside, I agree with the discussion in #338 that no forcecasting would probably be a better default). However, for wrapped functions that take e.g. int32_t or std::vector<int32_t>, truncation of numpy values can still occur.
I think that making the type_caster for numeric types aware of numpy scalars would make it possible to prevent e.g. np.int64 successfully casting to int32_t.
Would such a change be possible? I think it would have to be done within the pybind11 cast.h header itself.
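For the array case the flag already exists: py::array_t<T, ExtraFlags> defaults to py::array::forcecast, and leaving forcecast out means only safe casts are accepted, so an int64 array is rejected rather than narrowed. A minimal sketch (function and module names are mine, not from the original issue):
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <cstdint>

namespace py = pybind11;

// Without py::array::forcecast in ExtraFlags, passing an int64 array
// raises TypeError instead of being silently cast down to int32.
std::int64_t sum_int32(py::array_t<std::int32_t, py::array::c_style> a) {
    std::int64_t total = 0;
    auto r = a.unchecked<1>();
    for (py::ssize_t i = 0; i < r.shape(0); ++i)
        total += r(i);
    return total;
}

PYBIND11_MODULE(strict_array_example, m) {
    m.def("sum_int32", &sum_int32);
}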
With pybind11's first-class support for numpy, this feels like it would be a great addition.
Thanks
David
Reproducible example code
For example, this binding code:
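(The original snippet is not preserved here; judging from the calls below, it was presumably along these lines. This is a reconstruction, with trivial function bodies assumed.)
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>  // conversion between std::vector and python sequences/arrays
#include <cstdint>
#include <vector>

namespace py = pybind11;

PYBIND11_MODULE(pybind_numpy, m) {
    // Accepts a single 32-bit integer; np.int64 arguments are narrowed.
    m.def("test_takes_int32", [](std::int32_t x) { return x; });
    // Accepts a vector of 32-bit integers; pybind11 iterates the numpy
    // array and narrows each contained scalar individually.
    m.def("test_takes_vec_int32", [](const std::vector<std::int32_t> &v) {
        return v;
    });
}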
Can be called like so, but to avoid truncating conversions we would need both of these calls to be TypeErrors:
import pybind_numpy
import numpy as np
# scalar converting to lower-precision scalar
print(pybind_numpy.test_takes_int32(np.int64(1000)))
# array is iterated by pybind11 and the contained scalars are cast to lower-precision scalars
print(pybind_numpy.test_takes_vec_int32(np.array([1000, 2000], dtype=np.int64)))
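As a stopgap until the numeric type_caster learns about numpy scalars, one workaround is to accept py::object and check the numpy scalar type by hand before converting. The strict wrapper below is a hypothetical sketch, not existing pybind11 behaviour:
#include <pybind11/pybind11.h>
#include <cstdint>

namespace py = pybind11;

PYBIND11_MODULE(pybind_numpy_strict, m) {
    m.def("test_takes_int32_strict", [](py::object obj) {
        auto np = py::module_::import("numpy");
        // Reject numpy scalar types wider than the 32-bit target.
        if (py::isinstance(obj, np.attr("int64")))
            throw py::type_error("refusing lossy int64 -> int32 conversion");
        return obj.cast<std::int32_t>();
    });
}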