
Conversation

cwhanse
Member

@cwhanse cwhanse commented Nov 16, 2016

Fix typing, add whatsnew note

cwhanse and others added 2 commits November 12, 2016 10:15
The number of iterations of Newton's method should increase with the order of
the argument, to maintain precision. Not expected to have a significant
effect on most calculated IV curves.
@mikofski
Member

mikofski commented Nov 16, 2016

@cwhanse why not use scipy.optimize.newton here in pvsystem.v_from_i on line 1846?

import numpy as np
from scipy.optimize import newton

# Three iterations of the Newton-Raphson method to solve
# w + log(w) = logargW. The initial guess is w = logargW. Where direct
# evaluation (above) results in NaN from overflow, 3 iterations
# of Newton's method give approximately 8 digits of precision.
lambertwterm_log = newton(
    lambda w: w + np.log(w) - logargW,
    logargW,
    lambda w: 1 + 1/w)

@cwhanse
Member Author

cwhanse commented Nov 17, 2016

This was transcribed from Matlab. No reason not to use the scipy function, though I doubt there's any performance advantage; I'll look into it after I fix the failing checks, of course.

@mikofski
Member

True: I checked, and performance, at least for the test case, is exactly the same. But IMO scipy.optimize.newton will be easier to maintain. For example, you wouldn't have to update the number of iterations to depend on the order of the argument, as you did in #260.

@wholmgren
Member

I think the newton function will only work with scalar inputs. I'm guessing that it's more efficient to do it Cliff's original way for array inputs of non-trivial length.

I did not know you could highlight a block of code like that in a GitHub link!
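For context, the array-oriented approach could be sketched roughly like this (a minimal, hypothetical sketch with a fixed iteration count, not the actual pvlib code):

```python
import numpy as np

def lambertw_log_newton(logargW, n_iter=3):
    """Fixed-iteration, vectorized Newton-Raphson for w + log(w) = logargW.

    A hypothetical sketch of the array-based approach; the actual
    pvlib implementation may differ in detail.
    """
    logargW = np.asarray(logargW, dtype=float)
    w = logargW.copy()  # initial guess: w = logargW
    for _ in range(n_iter):
        # Newton step with f(w) = w + log(w) - logargW and f'(w) = 1 + 1/w
        w = w - (w + np.log(w) - logargW) / (1.0 + 1.0 / w)
    return w

# All elements are updated simultaneously, regardless of array length.
w = lambertw_log_newton(np.array([700.0, 800.0, 900.0]))
```

Because every Newton step is a whole-array operation, the cost per iteration is one vectorized pass, rather than one solver call per element as with a scalar root finder.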

@mikofski
Member

mikofski commented Nov 18, 2016

I chose newton specifically because this example uses scalar inputs. The equivalent multivariate solver is scipy.optimize.fsolve, based on MINPACK's hybrd and hybrj, which implement a modified version of Powell's hybrid method combined with Newton-Raphson.

From the GNU Scientific Library:

This is a modified version of Powell’s Hybrid method as implemented in the HYBRJ algorithm in MINPACK. Minpack was written by Jorge J. Moré, Burton S. Garbow and Kenneth E. Hillstrom. The Hybrid algorithm retains the fast convergence of Newton’s method but will also reduce the residual when Newton’s method is unreliable. The algorithm uses a generalized trust region to keep each step under control.

import numpy as np
from scipy import optimize

# Three iterations of the Newton-Raphson method to solve
# w + log(w) = logargW. The initial guess is w = logargW. Where direct
# evaluation (above) results in NaN from overflow, 3 iterations
# of Newton's method give approximately 8 digits of precision.
lambertwterm_log = optimize.fsolve(
    func=lambda w: w + np.log(w) - logargW,
    x0=logargW,
    fprime=lambda w: 1 + 1/w)

If even more flexibility is needed, then scipy.optimize.root is the general interface to multivariate equation solvers. You can specify the method argument, e.g. lm is Levenberg-Marquardt for least-squares fitting of non-square systems of equations.
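As a quick sketch of the root interface on this same equation (the logargW value here is a hypothetical stand-in, purely for illustration):

```python
import numpy as np
from scipy import optimize

# Hypothetical stand-in value for logargW, purely for illustration.
logargW = 700.0

# Solve w + log(w) = logargW with the default 'hybr' (MINPACK hybrd) method.
sol = optimize.root(
    lambda w: w + np.log(w) - logargW,
    x0=logargW,
    jac=lambda w: np.atleast_2d(1.0 + 1.0 / w))

w = sol.x[0]
```

The sol object also carries convergence diagnostics (sol.success, sol.message), which is another maintainability point in favor of the library solvers.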

@cwhanse the equivalent of this in MATLAB would be the Optimization Toolbox's fsolve which I re-implemented in the MATLAB FEX Newton-Raphson solver for times when I didn't have access to the expensive toolbox.

@mikofski
Member

I'm okay with it as it is:

  1. I don't want to block merging it,
  2. it is working well and is identical to the SciPy methods,
  3. and it's already quite compact and short,

but IMO mature, well-established, easily available methods are usually more transparent to users and easier to maintain than custom re-implementations.

# Conflicts:
#	docs/sphinx/source/whatsnew/v0.4.2.txt
@cwhanse
Member Author

cwhanse commented Nov 22, 2016

Closing this to submit a clean pull request

