Conversation
The number of iterations of the Newton method should increase with the order of the argument in order to maintain precision. This is not expected to have a significant effect on most calculated IV curves.
@cwhanse why not use scipy.optimize.newton?
import numpy as np
from scipy.optimize import newton

# Newton-Raphson method to solve w + log(w) = logargW. The initial
# guess is w = logargW. Where direct evaluation (above) results in
# NaN from overflow, a few iterations of Newton's method give
# approximately 8 digits of precision.
lambertwterm_log = newton(
    func=lambda w: w + np.log(w) - logargW,
    x0=logargW,
    fprime=lambda w: 1 + 1/w)
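For a scalar argument the drop-in call converges to full precision on its own. A quick self-contained check, using a made-up value for logargW (in the PR it comes from the diode equation terms):
import numpy as np
from scipy.optimize import newton

logargW = 1e3  # hypothetical test value
w = newton(func=lambda w: w + np.log(w) - logargW,
           x0=logargW,
           fprime=lambda w: 1 + 1/w)
print(w + np.log(w) - logargW)  # residual is ~0 at the root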
This was transcribed from MATLAB. There's no reason not to use the scipy function, but I doubt there's any performance advantage - I'll check after I fix the failed checks, of course.
True, I checked: performance, at least for the test case, is exactly the same.
I think the newton function only works with scalar inputs. I'm guessing it's more efficient to do it Cliff's original way for array inputs of non-trivial length; a sketch of that vectorized approach follows. I did not know you could highlight a block of code like that in a GitHub link!
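For reference, the hand-rolled vectorized version amounts to a few fixed Newton steps applied to the whole array at once. This is a sketch with a made-up logargW; the update line is the algebraic simplification of w_new = w - (w + log(w) - logargW) / (1 + 1/w):
import numpy as np

logargW = np.array([1e2, 1e4, 1e6])  # hypothetical large arguments

# Three fixed Newton-Raphson steps to solve w + log(w) = logargW,
# starting from the initial guess w = logargW.
w = logargW
for _ in range(3):
    w = w * (1.0 - np.log(w) + logargW) / (1.0 + w)
lambertwterm_log = w
print(w + np.log(w) - logargW)  # residuals shrink quadratically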
I chose fsolve, which wraps MINPACK's hybrid method (the same algorithm family the GNU Scientific Library implements for multidimensional root finding):
import numpy as np
from scipy import optimize

# Solve w + log(w) = logargW with the initial guess w = logargW.
# Where direct evaluation (above) results in NaN from overflow, the
# root finder recovers approximately 8 digits of precision.
lambertwterm_log = optimize.fsolve(
    func=lambda w: w + np.log(w) - logargW,
    x0=logargW,
    fprime=lambda w: 1 + 1/w)
If even more flexibility is needed, then @cwhanse the equivalent of this in MATLAB would be the Optimization Toolbox's fsolve.
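One caveat worth noting (an aside, with made-up test values): for an array-valued logargW, fsolve treats the problem as a coupled N-dimensional system, so fprime must return a full Jacobian rather than an element-wise derivative. That overhead is part of why a hand-rolled vectorized Newton can win for long arrays:
import numpy as np
from scipy import optimize

logargW = np.array([1e2, 1e4, 1e6])  # hypothetical test values

w = optimize.fsolve(
    func=lambda w: w + np.log(w) - logargW,
    x0=logargW,
    fprime=lambda w: np.diag(1 + 1/w))  # fsolve expects an (n, n) Jacobian

print(w + np.log(w) - logargW)  # residuals are ~0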
I'm okay with it as it is, but IMO mature, well-established, easily available methods are usually more transparent to users and easier to maintain than custom re-implementations.
# Conflicts:
#   docs/sphinx/source/whatsnew/v0.4.2.txt
Closing this to submit a clean pull request.
Fix typing, add whatsnew note