Tracking Issue for fNN::lerp #86269
Considering this isn't C++, don't you want to give it a more descriptive name, like "linear_interpolation"? |
In this particular case I think it is good to copy the name. It is associated with this specific formula and the convention is used in other libraries and languages. I think it is better to not have a |
In Rust you can write |
Huh TIL 👆 |
Yeah, lerp is a standard name for this, like signum, min, or max. We don't write out minimum or maximum for those methods. ;) |
Given that a significant part of the advantage to lerp is that it's super fast and trivial, it'll be interesting to sell using an implementation other than the two standard implementations (
It's just
(Min and max were just chosen randomly here.) My code found 978813 pairs of adjacent float
According to Wikipedia at least, both
emath has a great solution to use here: That is, use
As far as I can tell, the only somewhat common name is "inverse lerp" or " |
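For readers unfamiliar with the terminology, here is a free-function sketch (names are mine, not emath's actual API) of lerp, inverse lerp, and a remap built by composing the two:

```rust
// lerp: map t in [0, 1] onto [a, b]
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + t * (b - a)
}

// "inverse lerp": recover the parameter t that maps [a, b] onto v
fn inv_lerp(a: f32, b: f32, v: f32) -> f32 {
    (v - a) / (b - a)
}

// remap v from one range to another: inverse-lerp out, lerp back in
fn remap(v: f32, from: (f32, f32), to: (f32, f32)) -> f32 {
    lerp(to.0, to.1, inv_lerp(from.0, from.1, v))
}

fn main() {
    assert_eq!(inv_lerp(10.0, 20.0, 15.0), 0.5);
    assert_eq!(remap(15.0, (10.0, 20.0), (0.0, 100.0)), 50.0);
}
```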
An alternative option is to add |
The current choice of lerp signature has two problems. First, it is impossible to be consistent with |
Second, it is inconsistent with most of the existing algebraic crates in the ecosystem. As @CAD97 has mentioned above: |
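To make the two conventions concrete, here is a sketch using a made-up `Vec3` type (not any particular crate's API): ecosystem math types conventionally spell it `a.lerp(b, t)`, while the method proposed here is `t.lerp(a, b)` on the float itself.

```rust
#[derive(Clone, Copy)]
struct Vec3 {
    x: f32,
    y: f32,
    z: f32,
}

impl Vec3 {
    // The convention most math crates follow: endpoints first, parameter last.
    fn lerp(self, rhs: Self, t: f32) -> Self {
        Vec3 {
            x: self.x + t * (rhs.x - self.x),
            y: self.y + t * (rhs.y - self.y),
            z: self.z + t * (rhs.z - self.z),
        }
    }
}

fn main() {
    let a = Vec3 { x: 0.0, y: 0.0, z: 0.0 };
    let b = Vec3 { x: 2.0, y: 4.0, z: 6.0 };
    let _mid = a.lerp(b, 0.5); // a.lerp(b, t)
    // The unstable std method is instead spelled t.lerp(a, b) on f32/f64,
    // which a vector type like this cannot mirror without its own trait.
}
```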
Note that there is the unfortunate but possible implementation of requiring a function call style, e.g. |
One downside is that it would require type names to be stated explicitly, which is not the case with many other methods such as |
That is an angle I hadn't realized, and consistent even for using
trait Lerp {
fn lerp32(t: f32, v0: Self, v1: Self) -> Self { Self::lerp64(t as f64, v0, v1) }
fn lerp64(t: f64, v0: Self, v1: Self) -> Self;
}
impl f32 {
fn lerp<L: Lerp>(self, v0: L, v1: L) -> L { L::lerp32(self, v0, v1) }
}
impl f64 {
fn lerp<L: Lerp>(self, v0: L, v1: L) -> L { L::lerp64(self, v0, v1) }
}
and it's starting to get kind of messy, due to how overloading works in Rust. Assuming we only lerp between two values of the same type, we effectively want to do double dispatch, on the bitwidth of
That, at least, speaks to putting the dispatch on the interpolated type. This brings me back to the possibility of providing it as
Although, on the other hand, the intricacy here makes me wonder whether |
I feel like it is a smart but complicated solution to this problem. |
Consider that some implementations of
fn lerp(self, end: Self, s: T) -> Self {
// ...
interpolated.normalize()
} |
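As an illustration of that point, here is a minimal sketch (the type and names are mine) of a "normalized lerp" for a direction vector. Because of the normalization step, the output is not bounded by the inputs, so the scalar guarantees discussed below cannot simply be promised for such implementations.

```rust
// Lerp followed by renormalization, as one might do when interpolating
// unit-length directions.
#[derive(Clone, Copy, Debug)]
struct Vec2 {
    x: f32,
    y: f32,
}

impl Vec2 {
    fn nlerp(self, end: Self, t: f32) -> Self {
        let x = self.x + t * (end.x - self.x);
        let y = self.y + t * (end.y - self.y);
        let len = (x * x + y * y).sqrt();
        Vec2 { x: x / len, y: y / len } // post-processing: normalize
    }
}

fn main() {
    let a = Vec2 { x: 1.0, y: 0.0 };
    let b = Vec2 { x: 0.0, y: 1.0 };
    println!("{:?}", a.nlerp(b, 0.5)); // roughly (0.707, 0.707)
}
```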
(GitHub issues are not a great medium for in-depth back-and-forth: consider discussing in the Zulip instead.)
Here are a few notes about guarantees. For finite inputs, we would like to provide guarantees that the function is
Note that bounded implies consistent (while
Separately, an implementation is said to be clamped if it returns a value in
For each potential implementation, I'll provide the x64 assembly as generated by godbolt compiler explorer for both
The simplest implementation (and the one that many gamedevs will reach for initially) is monotonic and consistent, but not quite exact nor bounded.
pub fn lerp(t: f32, v0: f32, v1: f32) -> f32 {
t.mul_add(v1 - v0, v0)
}
ASM
; -O
example::lerp:
movaps xmm3, xmm1
subss xmm2, xmm1
movaps xmm1, xmm2
movaps xmm2, xmm3
jmp qword ptr [rip + fmaf@GOTPCREL]
example::lerp_known:
movss xmm1, dword ptr [rip + .LCPI1_0]
movss xmm2, dword ptr [rip + .LCPI1_1]
jmp qword ptr [rip + fmaf@GOTPCREL]
; -O -target-cpu=skylake
example::lerp:
vsubss xmm2, xmm2, xmm1
vfmadd213ss xmm0, xmm2, xmm1
ret
example::lerp_known:
vmovss xmm1, dword ptr [rip + .LCPI1_0]
vfmadd213ss xmm0, xmm1, dword ptr [rip + .LCPI1_1]
ret
This is monotonic because as
This might be enough if we say that
While it does not recover the exact quality, it's rather trivial to recover boundedness for a clamped lerp, by clamping to the range of
pub fn lerp(t: f32, v0: f32, v1: f32) -> f32 {
// monotonic, consistent
let naive = t.mul_add(v1 - v0, v0);
// bounded
if v0 <= v1 {
naive.clamp(v0, v1)
} else {
naive.clamp(v1, v0)
}
}
ASM
; -O
example::lerp:
push rax
movss dword ptr [rsp + 4], xmm2
movaps xmm3, xmm1
movss dword ptr [rsp], xmm1
movaps xmm1, xmm2
subss xmm1, xmm3
movaps xmm2, xmm3
call qword ptr [rip + fmaf@GOTPCREL]
movss xmm2, dword ptr [rsp + 4]
movss xmm1, dword ptr [rsp]
ucomiss xmm2, xmm1
jae .LBB0_3
ucomiss xmm1, xmm2
jb .LBB0_5
maxss xmm2, xmm0
minss xmm1, xmm2
jmp .LBB0_4
.LBB0_3:
maxss xmm1, xmm0
minss xmm2, xmm1
movaps xmm1, xmm2
.LBB0_4:
movaps xmm0, xmm1
pop rax
ret
.LBB0_5:
lea rdi, [rip + .L__unnamed_1]
lea rdx, [rip + .L__unnamed_2]
mov esi, 28
call qword ptr [rip + core::panicking::panic@GOTPCREL]
ud2
example::lerp_known:
push rax
movss xmm1, dword ptr [rip + .LCPI1_0]
movss xmm2, dword ptr [rip + .LCPI1_1]
call qword ptr [rip + fmaf@GOTPCREL]
movss xmm1, dword ptr [rip + .LCPI1_1]
maxss xmm1, xmm0
movss xmm0, dword ptr [rip + .LCPI1_2]
minss xmm0, xmm1
pop rax
ret
; -O -target-cpu=skylake
example::lerp:
vsubss xmm3, xmm2, xmm1
vfmadd213ss xmm3, xmm0, xmm1
vucomiss xmm2, xmm1
jae .LBB0_3
vucomiss xmm1, xmm2
jb .LBB0_5
vmaxss xmm0, xmm2, xmm3
vminss xmm0, xmm1, xmm0
ret
.LBB0_3:
vmaxss xmm0, xmm1, xmm3
vminss xmm0, xmm2, xmm0
ret
.LBB0_5:
push rax
lea rdi, [rip + .L__unnamed_1]
lea rdx, [rip + .L__unnamed_2]
mov esi, 28
call qword ptr [rip + core::panicking::panic@GOTPCREL]
ud2
example::lerp_known:
vmovss xmm1, dword ptr [rip + .LCPI1_0]
vfmadd132ss xmm0, xmm1, dword ptr [rip + .LCPI1_1]
vmaxss xmm0, xmm1, xmm0
vmovss xmm1, dword ptr [rip + .LCPI1_2]
vminss xmm0, xmm1, xmm0
ret
This, if I understand correctly, fulfills all of the above defined properties for a clamped lerp, except that the returned value for
It's also worth noting that while the prior implementation contains no panicking code paths, this implementation still contains panicking code, I assume for some combination of nonfinite inputs.
This is the other simple implementation, and the one the other portion of gamedevs will reach for. I've written the simple definition you'll often see, but the current double FMA implementation is algebraically equivalent to it, just with
pub fn lerp(t: f32, v0: f32, v1: f32) -> f32 {
((1.0 - t) * v0) + (t * v1)
}
ASM
Note that this doesn't use explicit FMA, but could use at least one.
; -O
example::lerp:
movss xmm3, dword ptr [rip + .LCPI0_0]
subss xmm3, xmm0
mulss xmm3, xmm1
mulss xmm2, xmm0
addss xmm3, xmm2
movaps xmm0, xmm3
ret
example::lerp_known:
movss xmm1, dword ptr [rip + .LCPI1_0]
subss xmm1, xmm0
mulss xmm1, dword ptr [rip + .LCPI1_1]
mulss xmm0, dword ptr [rip + .LCPI1_2]
addss xmm0, xmm1
ret
; -O -target-cpu=skylake
example::lerp:
vmovss xmm3, dword ptr [rip + .LCPI0_0]
vsubss xmm3, xmm3, xmm0
vmulss xmm1, xmm3, xmm1
vmulss xmm0, xmm0, xmm2
vaddss xmm0, xmm1, xmm0
ret
example::lerp_known:
vmovss xmm1, dword ptr [rip + .LCPI1_0]
vsubss xmm1, xmm1, xmm0
vmulss xmm1, xmm1, dword ptr [rip + .LCPI1_1]
vmulss xmm0, xmm0, dword ptr [rip + .LCPI1_2]
vaddss xmm0, xmm0, xmm1
ret
This implementation is exact, but it is not monotonic nor consistent, nor even bounded. Again, clamping recovers boundedness and consistency, so I've included that implementation as well for completeness.
pub fn lerp(t: f32, v0: f32, v1: f32) -> f32 {
let naive = ((1.0 - t) * v0) + (t * v1);
if v0 < v1 {
naive.clamp(v0, v1)
} else {
naive.clamp(v1, v0)
}
}
pub fn lerp_known(t: f32) -> f32 {
lerp(t, 7.0, 13.0)
}
ASM
; -O
example::lerp:
push rax
movss xmm3, dword ptr [rip + .LCPI0_0]
subss xmm3, xmm0
mulss xmm3, xmm1
mulss xmm0, xmm2
addss xmm0, xmm3
ucomiss xmm2, xmm1
jbe .LBB0_1
jb .LBB0_6
maxss xmm1, xmm0
minss xmm2, xmm1
movaps xmm1, xmm2
jmp .LBB0_5
.LBB0_1:
ucomiss xmm1, xmm2
jb .LBB0_6
maxss xmm2, xmm0
minss xmm1, xmm2
.LBB0_5:
movaps xmm0, xmm1
pop rax
ret
.LBB0_6:
lea rdi, [rip + .L__unnamed_1]
lea rdx, [rip + .L__unnamed_2]
mov esi, 28
call qword ptr [rip + core::panicking::panic@GOTPCREL]
ud2
example::lerp_known:
movss xmm2, dword ptr [rip + .LCPI1_0]
subss xmm2, xmm0
movss xmm3, dword ptr [rip + .LCPI1_1]
mulss xmm2, xmm3
movss xmm1, dword ptr [rip + .LCPI1_2]
mulss xmm0, xmm1
addss xmm0, xmm2
maxss xmm3, xmm0
minss xmm1, xmm3
movaps xmm0, xmm1
ret
; -O -target-cpu=skylake
example::lerp:
push rax
vmovss xmm3, dword ptr [rip + .LCPI0_0]
vsubss xmm3, xmm3, xmm0
vmulss xmm3, xmm3, xmm1
vmulss xmm0, xmm0, xmm2
vaddss xmm0, xmm3, xmm0
vucomiss xmm2, xmm1
jbe .LBB0_1
jb .LBB0_6
vmaxss xmm0, xmm1, xmm0
vminss xmm0, xmm2, xmm0
pop rax
ret
.LBB0_1:
vucomiss xmm1, xmm2
jb .LBB0_6
vmaxss xmm0, xmm2, xmm0
vminss xmm0, xmm1, xmm0
pop rax
ret
.LBB0_6:
lea rdi, [rip + .L__unnamed_1]
lea rdx, [rip + .L__unnamed_2]
mov esi, 28
call qword ptr [rip + core::panicking::panic@GOTPCREL]
ud2
example::lerp_known:
vmovss xmm1, dword ptr [rip + .LCPI1_0]
vsubss xmm1, xmm1, xmm0
vmovss xmm2, dword ptr [rip + .LCPI1_1]
vmulss xmm1, xmm1, xmm2
vmovss xmm3, dword ptr [rip + .LCPI1_2]
vmulss xmm0, xmm0, xmm3
vaddss xmm0, xmm0, xmm1
vmaxss xmm0, xmm2, xmm0
vminss xmm0, xmm3, xmm0
ret
This implementation also contains panicking control flow, presumably for some combination of nonfinite inputs.
This is the last implementation that I've personally seen in use, though I know some others exist. This adds a consistency check to our exact implementation, but still fails at being monotonic or bounded.
pub fn lerp(t: f32, v0: f32, v1: f32) -> f32 {
let naive = ((1.0 - t) * v0) + (t * v1);
if v0 == v1 {
v0
} else {
naive
}
}
ASM
; -O
example::lerp:
movss xmm3, dword ptr [rip + .LCPI0_0]
subss xmm3, xmm0
mulss xmm3, xmm1
mulss xmm0, xmm2
addss xmm0, xmm3
cmpeqss xmm2, xmm1
movaps xmm3, xmm2
andnps xmm3, xmm0
andps xmm2, xmm1
orps xmm2, xmm3
movaps xmm0, xmm2
ret
example::lerp_known:
movss xmm1, dword ptr [rip + .LCPI1_0]
subss xmm1, xmm0
mulss xmm1, dword ptr [rip + .LCPI1_1]
mulss xmm0, dword ptr [rip + .LCPI1_2]
addss xmm0, xmm1
ret
; -O -target-cpu=skylake
example::lerp:
vmovss xmm3, dword ptr [rip + .LCPI0_0]
vsubss xmm3, xmm3, xmm0
vmulss xmm3, xmm3, xmm1
vmulss xmm0, xmm0, xmm2
vaddss xmm0, xmm3, xmm0
vcmpeqss xmm2, xmm1, xmm2
vblendvps xmm0, xmm0, xmm1, xmm2
ret
example::lerp_known:
vmovss xmm1, dword ptr [rip + .LCPI1_0]
vsubss xmm1, xmm1, xmm0
vmulss xmm1, xmm1, dword ptr [rip + .LCPI1_1]
vmulss xmm0, xmm0, dword ptr [rip + .LCPI1_2]
vaddss xmm0, xmm0, xmm1
ret
This is the one I currently use, though I might reconsider after doing this analysis. Note also that this doesn't actually contain any branching assembly, just a cmov, despite having a branch in the source.
And finally, the proposed implementation from C++ P0811R3. I make no assertion whether it does or doesn't meet the promised properties, just reproduce it here.
pub fn lerp(t: f32, v0: f32, v1: f32) -> f32 {
// exact, monotonic, finite, determinate, and (for v0=v1=0) consistent:
if (v0 <= 0.0 && v1 >= 0.0) || (v0 >= 0.0 && v1 <= 0.0) {
return t * v1 + (1.0-t) * v0;
}
// exact
if t == 1.0 {
return v1;
}
// exact at t=0, monotonic except near t=1,
// bounded, determinate, and consistent:
let x = v0 + t * (v1 - v0);
// monotonic near t=1
if t > 1.0 && v1 > v0 {
f32::max(v1, x)
} else {
f32::min(v1, x)
}
}
ASM
; -O
example::lerp:
xorps xmm3, xmm3
ucomiss xmm3, xmm1
jb .LBB0_2
ucomiss xmm2, xmm3
jae .LBB0_9
.LBB0_2:
ucomiss xmm1, xmm3
jb .LBB0_4
xorps xmm3, xmm3
ucomiss xmm3, xmm2
jb .LBB0_4
.LBB0_9:
mulss xmm2, xmm0
movss xmm3, dword ptr [rip + .LCPI0_0]
subss xmm3, xmm0
mulss xmm3, xmm1
addss xmm2, xmm3
.LBB0_10:
movaps xmm0, xmm2
ret
.LBB0_4:
ucomiss xmm0, dword ptr [rip + .LCPI0_0]
jne .LBB0_5
jnp .LBB0_10
.LBB0_5:
movaps xmm3, xmm2
subss xmm3, xmm1
mulss xmm3, xmm0
addss xmm3, xmm1
ucomiss xmm0, dword ptr [rip + .LCPI0_0]
jbe .LBB0_7
ucomiss xmm2, xmm1
jbe .LBB0_7
movaps xmm0, xmm2
cmpunordss xmm0, xmm2
movaps xmm1, xmm0
andps xmm1, xmm3
maxss xmm3, xmm2
jmp .LBB0_8
.LBB0_7:
movaps xmm0, xmm2
cmpunordss xmm0, xmm2
movaps xmm1, xmm0
andps xmm1, xmm3
minss xmm3, xmm2
.LBB0_8:
andnps xmm0, xmm3
orps xmm0, xmm1
ret
example::lerp_known:
ucomiss xmm0, dword ptr [rip + .LCPI1_0]
jne .LBB1_1
jp .LBB1_1
movss xmm0, dword ptr [rip + .LCPI1_3]
ret
.LBB1_1:
ucomiss xmm0, dword ptr [rip + .LCPI1_0]
mulss xmm0, dword ptr [rip + .LCPI1_1]
addss xmm0, dword ptr [rip + .LCPI1_2]
jbe .LBB1_4
maxss xmm0, dword ptr [rip + .LCPI1_3]
ret
.LBB1_4:
minss xmm0, dword ptr [rip + .LCPI1_3]
ret
; -O -target-cpu=skylake
example::lerp:
vxorps xmm3, xmm3, xmm3
vucomiss xmm3, xmm1
jb .LBB0_2
vucomiss xmm2, xmm3
jae .LBB0_9
.LBB0_2:
vucomiss xmm1, xmm3
jb .LBB0_4
vxorps xmm3, xmm3, xmm3
vucomiss xmm3, xmm2
jb .LBB0_4
.LBB0_9:
vmulss xmm2, xmm0, xmm2
vmovss xmm3, dword ptr [rip + .LCPI0_0]
vsubss xmm0, xmm3, xmm0
vmulss xmm0, xmm0, xmm1
vaddss xmm2, xmm0, xmm2
.LBB0_10:
vmovaps xmm0, xmm2
ret
.LBB0_4:
vucomiss xmm0, dword ptr [rip + .LCPI0_0]
jne .LBB0_5
jnp .LBB0_10
.LBB0_5:
vsubss xmm3, xmm2, xmm1
vmulss xmm3, xmm3, xmm0
vaddss xmm3, xmm3, xmm1
vucomiss xmm0, dword ptr [rip + .LCPI0_0]
jbe .LBB0_7
vucomiss xmm2, xmm1
jbe .LBB0_7
vmaxss xmm0, xmm3, xmm2
jmp .LBB0_8
.LBB0_7:
vminss xmm0, xmm3, xmm2
.LBB0_8:
vcmpunordss xmm1, xmm2, xmm2
vblendvps xmm0, xmm0, xmm3, xmm1
ret
example::lerp_known:
vucomiss xmm0, dword ptr [rip + .LCPI1_0]
jne .LBB1_1
jp .LBB1_1
vmovss xmm0, dword ptr [rip + .LCPI1_3]
ret
.LBB1_1:
vmulss xmm1, xmm0, dword ptr [rip + .LCPI1_1]
vaddss xmm1, xmm1, dword ptr [rip + .LCPI1_2]
vucomiss xmm0, dword ptr [rip + .LCPI1_0]
jbe .LBB1_4
vmaxss xmm0, xmm1, dword ptr [rip + .LCPI1_3]
ret
.LBB1_4:
vminss xmm0, xmm1, dword ptr [rip + .LCPI1_3]
ret
The fact that this is so branchy, even for constant
I think it would be a useful exercise to dig up what (if any) guarantees GPU shaders give on their implementation of |
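A rough way to sanity-check a candidate against the properties discussed above is a small test harness. The sketch below is mine (the sampling grid, helper name, and chosen formula are purely illustrative, not part of any proposal) and probes endpoint exactness, consistency, and boundedness on a coarse grid:

```rust
// Rough property checks (far from exhaustive) for one candidate implementation.
fn lerp(t: f32, v0: f32, v1: f32) -> f32 {
    t.mul_add(v1 - v0, v0)
}

fn main() {
    let (v0, v1) = (7.0f32, 13.0f32);
    // exact: the endpoints are reproduced for t = 0 and t = 1
    // (these pass for this particular pair of endpoints; exactness does not
    // hold for all inputs with this formula)
    assert_eq!(lerp(0.0, v0, v1), v0);
    assert_eq!(lerp(1.0, v0, v1), v1);
    for i in 0..=100 {
        let t = i as f32 / 100.0;
        // consistent: lerp(t, v, v) == v for every t
        assert_eq!(lerp(t, 5.0, 5.0), 5.0);
        // bounded: the result stays inside [v0, v1] for t in [0, 1]
        let x = lerp(t, v0, v1);
        assert!(v0 <= x && x <= v1);
    }
}
```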
Excerpts from lerp/mix used in shading languages:
Khronos documentation on GLSL
NVIDIA Cg documentation on
Microsoft HLSL documentation on
WGSL on
Note that lerp does not do additional clamping there and can be used to extrapolate values. |
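To illustrate the no-clamping point, a GLSL-style mix computes x * (1 - a) + y * a, so values of a outside [0, 1] extrapolate. A small sketch (not any shading language's actual source):

```rust
// GLSL-style mix with no clamping of the parameter.
fn mix(x: f32, y: f32, a: f32) -> f32 {
    x * (1.0 - a) + y * a
}

fn main() {
    assert_eq!(mix(0.0, 10.0, 0.5), 5.0);    // interpolation
    assert_eq!(mix(0.0, 10.0, 2.0), 20.0);   // extrapolates past y
    assert_eq!(mix(0.0, 10.0, -1.0), -10.0); // extrapolates before x
}
```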
Note that they also use |
In terms of the calling convention, I personally feel very strongly about not splitting the bounds between |
@clarfonthey Just wanted to make sure you're aware that a good amount of discussion has also happened on Zulip at https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/lerp.20API.20design/near/244050240 , if you'd like to also weigh in there. |
The calling convention for this bit me. I've never encountered a library where lerp was (t, a, b) rather than (a, b, t), and when I saw the warning about there being a new function that collides with my own lerp interpolation on f32, I (foolishly) assumed I'd just be able to swap them right over. |
Right now, based on the discussion in Zulip, my current feeling is that the next task here is to open a community RFC to determine
As a stretch goal, it would be nice if the signature could also be easily extended to support higher-order bezier interpolation (but that's a low order desire). (To be clear, I know how it can be done, but it's a question of supporting lerp first.) |
That was exactly my experience: first reaction: sweet, there is now standard lerp coming. Second: I have to reorder my lerp arguments, but only for f32. 🤦 |
pub trait Lerp<R = Self, const N: usize = 2> {
type Output;
fn lerp(self, control_points: [R; N]) -> Self::Output;
}
impl Lerp for f32 {
type Output = f32;
fn lerp(self, control_points: [f32; 2]) -> f32 {
(1.0 - self) * control_points[0] + self * control_points[1]
}
}
impl Lerp<f32, 3> for f32 {
type Output = f32;
/// quadratic bezier curve -- a lerp of lerps
fn lerp(self, control_points: [f32; 3]) -> f32 {
let nt = 1.0 - self;
nt * nt * control_points[0] + nt * self * 2.0 * control_points[1] + self * self * control_points[2]
}
} |
Any API for this in std should balance user expectation with Rust idioms. I'm sympathetic to the familiarity of
Also note that Rust isn't exactly a stranger to adapting APIs to make them more Rust-like, even at the expense of familiarity: see how |
Well, if you want
pub trait Lerp<T> {
type Output;
fn lerp(self, t: T) -> Self::Output;
}
impl Lerp<f32> for [f32; 2] {
type Output = f32;
fn lerp(self, t: f32) -> Self::Output {
(1.0 - t) * self[0] + t * self[1]
}
}
impl Lerp<f32> for [f32; 3] {
type Output = f32;
/// quadratic bezier curve -- a lerp of lerps
fn lerp(self, t: f32) -> Self::Output {
let nt = 1.0 - t;
nt * nt * self[0] + nt * t * 2.0 * self[1] + t * t * self[2]
}
}
Example use:
assert_eq!([4.0f32, 8.0].lerp(0.75f32), 7.0); |
I use
There are other methods on f32 in the standard library that might feel similarly "weird" to you, but are still not associated functions:
And what I am asking for here is not (just) familiarity from other languages, but the existing Rust ecosystem. |
Functions like
But I don't see that these same rationalizations hold for lerp: 1) although people in here have proposed traits, the original PR just adds these as inherent methods on floats, 2) there are more than two arguments, so we don't need to split the receiver across the boundaries of a conceptual range (yuck); we can make the third argument the receiver (which users find unfamiliar) or we can coalesce the boundary arguments into a range, tuple, or array (which I see no arguments against yet), 3) there is no consistency to be gained from making these inherent methods on floats since there is no other equivalent method in std that we are trying to emulate; they can easily just be associated functions. |
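As a sketch of the "coalesce the boundaries" option from point 2, lerp could hang off a range; the extension trait below is hypothetical, purely to show the shape of the call site:

```rust
use std::ops::RangeInclusive;

// Hypothetical extension trait: the endpoints travel together as a range.
trait LerpRange {
    fn lerp(&self, t: f32) -> f32;
}

impl LerpRange for RangeInclusive<f32> {
    fn lerp(&self, t: f32) -> f32 {
        let (a, b) = (*self.start(), *self.end());
        (1.0 - t) * a + t * b
    }
}

fn main() {
    assert_eq!((4.0f32..=8.0).lerp(0.75), 7.0);
}
```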
They are? If that's the case, shouldn't the examples in the docs at least be calling them as
1 and 3 here don't hold for
My argument against this would be that you're introducing complexity where it doesn't need to exist, and that it would add noise to call sites. Granted, not a strong argument, but the benefits aren't particularly strong either. |
If lerp becomes a method of
All of these will require construction of useless temporary objects.
There is a number of existing algebraic crates (listed above) that use |
It's simple with a
pub trait Lerp<T = [Self; 2]>: Sized {
type Output;
fn lerp(self, control_points: T) -> Self::Output;
}
impl Lerp<[Vec3; 2]> for f32 {
type Output = Vec3;
fn lerp(self, control_points: [Vec3; 2]) -> Self::Output {
(1.0 - self) * control_points[0] + self * control_points[1]
}
} |
They're not useless; arrays allow easily being generic over |
There is Rust code that uses
I'm probably far from the only one... |
Use of array causes suboptimal code generation (both debug and optimized) even for f32. Here is an example: https://godbolt.org/z/zT7Pr1Pjb |
They produce identical code with optimizations on once the
.set example::test_value, example::test_array
https://godbolt.org/z/o7esvxssa
If optimizations are turned off, passing arrays/separate-args to
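For reference, the comparison being described probably looks something like the following (my reconstruction, not necessarily the code behind the godbolt links): the same formula with endpoints passed as separate scalars vs. packed in an array, which an optimized build can collapse into one symbol once everything is inlined.

```rust
// Endpoints as separate scalars.
pub fn test_value(t: f32, v0: f32, v1: f32) -> f32 {
    (1.0 - t) * v0 + t * v1
}

// Endpoints packed in an array.
pub fn test_array(t: f32, v: [f32; 2]) -> f32 {
    (1.0 - t) * v[0] + t * v[1]
}
```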
That's inherent in having larger types passed across non-inlined function call boundaries -- caused by current rustc inheriting C's poor ABI decisions of not binpacking function arguments'/returns' fields in registers -- nothing specific to |
additional crate that uses
Has nearly 67k downloads. |
Since 1.55 the following code doesn't compile anymore:
trait MyLerp {
fn lerp(&self) -> f32;
}
impl MyLerp for f32 {
fn lerp(&self) -> f32 {
0.1
}
}
fn main() {
println!("{}", 0.5_f32.lerp());
}
Because of
even though I can't enable the feature, as I'm using stable. How could I fix this on stable until this feature is stabilised? |
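One workaround that works on stable (a sketch against the MyLerp trait above): call the trait method with qualified syntax, so inherent-method lookup on f32 is never consulted.

```rust
trait MyLerp {
    fn lerp(&self) -> f32;
}

impl MyLerp for f32 {
    fn lerp(&self) -> f32 {
        0.1
    }
}

fn main() {
    // Instead of 0.5_f32.lerp(), which now resolves to the unstable inherent
    // method first, name the trait explicitly:
    println!("{}", MyLerp::lerp(&0.5_f32));
    println!("{}", <f32 as MyLerp>::lerp(&0.5_f32));
}
```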
|
I thought that unstable library features wouldn't be preferred if you didn't have the features enabled? |
That's if the two items are ambiguous / are both trait methods, IIRC. Here
If we aren't actively pursuing progress on deciding the correct implementation and API for a std
[1] and as a reminder, I consider 3rd party math types being able to offer the same lerp API a hard requirement, because having differing APIs by requirements is awful |
Given the complexity here, this is where I think I lean right now. Especially since so much of the discussion is about "gamey" uses, where there's going to be another math library involved anyways for interpolation of vectors. That library can just provide it for scalars too (by an extension trait if it must). And maybe that's true for non-gamey uses too? Like perhaps those cases are all using ndarray or similar, which could provide this instead. Then Rust can wait, and add one later should IEEE 754 decide to add it to a future version of their spec.
FWIW there are many other floating-point methods which are also ±1 ULP, like |
Remove fNN::lerp
Lerp is [surprisingly complex with multiple tradeoffs depending on what guarantees you want to provide](rust-lang#86269 (comment)) (and what you're willing to drop for raw speed), so we don't have consensus on what implementation to use, let alone what signature - `t.lerp(a, b)` nicely puts `a, b` together, but makes dispatch to lerp custom types with the same signature basically impossible, and major ecosystem crates (e.g. nalgebra, glium) use `a.lerp(b, t)`, which is easily confusable.
It was suggested to maybe provide a `Lerp<T>` trait and `t.lerp([a, b])`, which _could_ be implemented by downstream math libraries for their types, but also significantly raises the bar from a simple fNN method to a full trait, and does nothing to solve the implementation question. (It also raises the question of whether we'd support higher-order bezier interpolation.)
The only consensus we have is the lack of consensus, and the [general temperature](rust-lang#86269 (comment)) is that we should just remove this method (giving the method space back to 3rd party libs) and revisit this if (and likely only if) IEEE adds lerp to their specification.
If people want a lerp, they're _probably_ already using (or writing) a math support library, which provides a lerp function for its custom math types and can provide the same lerp implementation for the primitive types via an extension trait.
See also [previous Zulip discussion](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/lerp.20API.20design)
cc `@clarfonthey` (original PR author), `@m-ou-se` (original r+), `@scottmcm` (last voice in tracking issue, prompted me to post this)
Closes rust-lang#86269 (removed)
RIP std |
For posterity, this is actually a bit more subtle than those docs seem to suggest. Per Precision and Operation of SPIR-V Instructions from the Vulkan spec:
(emphasis added) This caused me considerable pain in an application where I absolutely required the "exact" property, but nVidia implemented |
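For readers wondering what "exact" costs in practice, here is a small demonstration (the values are mine, chosen purely for illustration) of how the single-multiply formula v0 + t * (v1 - v0) can fail to reproduce the endpoint at t = 1 when the endpoints differ wildly in magnitude:

```rust
// Not "exact": (v1 - v0) rounds, so the endpoint is not reproduced at t = 1.
fn lerp_imprecise(t: f32, v0: f32, v1: f32) -> f32 {
    v0 + t * (v1 - v0)
}

fn main() {
    let (v0, v1) = (1.0e20_f32, 1.0_f32);
    let at_end = lerp_imprecise(1.0, v0, v1);
    assert_ne!(at_end, v1); // returns 0.0 here, not 1.0
    println!("lerp(1.0) = {at_end}, expected {v1}");
}
```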
Why not?:
trait Lerp {
fn lerp(self, t: f32) -> f32;
}
impl Lerp for (f32, f32) {
fn lerp(self, t: f32) -> f32 {
self.0 + t * (self.1 - self.0)
}
}
Usage: |
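The usage shown was presumably along these lines (my guess at the elided example, assuming the `(f32, f32)` impl above is in scope):

```rust
fn main() {
    assert_eq!((4.0f32, 8.0).lerp(0.75), 7.0);
}
```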
This has been closed, so it's unlikely this would be accepted without a new ACP. However, there's still the issue of
Your suggestion has been made in various forms above in the discussion already, so I would suggest reading through and seeing why the decision was made to simply close this.
I think that, if we were to push the existing ecosystem to adopt |
Feature gate: #![feature(float_interpolation)]
This is a tracking issue for the fNN::lerp methods in the standard library.
Public API
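As discussed throughout this thread, the unstable methods take the interpolation parameter as the receiver. A rough sketch of the shape (parameter names are my guess, and this only compiled on a nightly toolchain from when the feature existed):

```rust
#![feature(float_interpolation)]

fn main() {
    // t.lerp(start, end) on the float primitives
    let mid32: f32 = 0.5f32.lerp(3.0, 5.0); // == 4.0
    let mid64: f64 = 0.5f64.lerp(3.0, 5.0);
    println!("{mid32} {mid64}");
}
```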
Steps / History
Unresolved Questions
Consistency with std::lerp from C++? C++ takes lerp(a, b, t), but we have `t.lerp(a, b)` here.