Improve convergence in pvsystem/v_from_i #268
Conversation
The number of iterations of the Newton method should increase with the order of the argument, to maintain precision. Not expected to have a significant effect on most calculated IV curves.
to accommodate series input
This reverts commit fc575d9.
Cliff, I'm not sure if it was your intent or not, but the proposed changes here are the same as #260. I do see some merge cruft in the whatsnew diff on #266. The joys of git! In any case, this still needs a call to …

Do we need to set a minimum order value? If all w are less than 1, all log10(w) will be negative, and all ceil will return 0 or less. I also think there should be a maximum order value so that we don't end up in a near-infinite loop for erroneous input.

Or we could avoid all of the above by following @mikofski's suggestion to use one of the scipy.optimize routines. I still think we have to call the routine separately on each element of array-like input, though. Wrapping the routine with np.vectorize might be ideal since it would take care of broadcasting for us. Here's an extension of @mikofski's example in #266 that works with array input:

```python
In [84]: def func(w, logargW):
    ...:     return w + np.log(w) - logargW
    ...:

In [91]: def fprime(w, *args):
    ...:     return 1 + 1/w
    ...:

In [92]: def solveit(logargW):
    ...:     return optimize.fsolve(func=func, x0=logargW, args=(logargW,), fprime=fprime)
    ...:

In [93]: solveit(5)
Out[93]: array([ 3.69344136])

In [94]: solveit_vec = np.vectorize(solveit)

In [95]: solveit_vec(np.array([.25, .75, 5, np.inf, 0, 100]))
/Users/holmgren/miniconda3/envs/pvlib35/bin/ipython:2: RuntimeWarning: invalid value encountered in subtract
  if __name__ == '__main__':
/Users/holmgren/miniconda3/envs/pvlib35/bin/ipython:2: RuntimeWarning: divide by zero encountered in log
  if __name__ == '__main__':
/Users/holmgren/miniconda3/envs/pvlib35/bin/ipython:2: RuntimeWarning: divide by zero encountered in true_divide
  if __name__ == '__main__':
/Users/holmgren/miniconda3/envs/pvlib35/lib/python3.5/site-packages/scipy/optimize/minpack.py:161: RuntimeWarning: The iteration is not making good progress, as measured by the
  improvement from the last ten iterations.
  warnings.warn(msg, RuntimeWarning)
Out[95]:
array([  0.66219508,   0.87898614,   3.69344136,          inf,
         0.        ,  95.44148665])

In [102]: test_array = np.random.randn(10000)

In [103]: %timeit solveit_vec(test_array)
/Users/holmgren/miniconda3/envs/pvlib35/bin/ipython:2: RuntimeWarning: invalid value encountered in log
  if __name__ == '__main__':
/Users/holmgren/miniconda3/envs/pvlib35/lib/python3.5/site-packages/scipy/optimize/minpack.py:161: RuntimeWarning: The iteration is not making good progress, as measured by the
  improvement from the last ten iterations.
  warnings.warn(msg, RuntimeWarning)
1 loop, best of 3: 720 ms per loop

In [104]: def manual(logargW):
     ...:     w = logargW
     ...:     order = np.ceil(np.log10(np.max(w))).astype(int)
     ...:     order = np.maximum(order, 1)
     ...:     order = np.minimum(order, 10)
     ...:     for i in range(0, 3*order):
     ...:         w = w * (1 - np.log(w) + logargW) / (1 + w)
     ...:     return w
     ...:

In [105]: %timeit manual(test_array)
/Users/holmgren/miniconda3/envs/pvlib35/bin/ipython:7: RuntimeWarning: invalid value encountered in log
The slowest run took 6.19 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 454 µs per loop
```

I also tried the same approach with the newton routine. It seemed slightly faster than fsolve but threw exceptions for some inputs.
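For reference, a minimal sketch of what that newton-based variant might have looked like, assuming the same func and fprime as above. This is a guess at the approach the comment describes, not the actual code tried; note that inputs like 0 and np.inf are plausibly the ones that raised, since log(0) and inf arithmetic produce non-finite residuals that scipy.optimize.newton does not tolerate the way fsolve's warnings do:

```python
import numpy as np
from scipy import optimize

def func(w, logargW):
    # Residual of w + ln(w) = logargW, the equation behind v_from_i.
    return w + np.log(w) - logargW

def fprime(w, *args):
    # Derivative of the residual with respect to w.
    return 1 + 1/w

def solveit_newton(logargW):
    # scipy.optimize.newton stops on an absolute tolerance on w,
    # which is part of why it can fail for extreme inputs.
    return optimize.newton(func, x0=logargW, fprime=fprime, args=(logargW,))

solveit_newton_vec = np.vectorize(solveit_newton)
print(solveit_newton_vec(np.array([0.25, 0.75, 5, 100])))  # finite inputs only
```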
That wasn't my intent, just me being clumsy with git. I thought I'd started fresh, but it looks like it's all still connected. I wanted to make this small change to learn how to use git, and on that count, mission not accomplished.

Practically speaking, logargW is determined by Rsh*IL, which can be expected to be < 10^6, so order < 6. Now that I think about it, I'd skip this whole business and set range(0, 8). We get roughly two base-10 digits of precision with each iteration, so nine iterations should get us double precision (53 bits) for the mantissa of w.

In Matlab, we only do the logspace calculations where argW is too large. That filtering is not present in the Python version; we calculate the logspace values for all inputs. If there's a need for a performance improvement, maybe that filter would help.

I'd rather not use scipy.optimize.newton in this instance because its stopping criterion is an absolute (not relative) tolerance on w (or a maximum number of iterations). We want precision across such a large range of magnitudes that the tolerance has to scale with the argument, and it seemed simpler to scale the iteration count. I also prefer explicit code over package references, but that's more of a Matlab mindset I'm projecting onto Python.

Cliff
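Putting those two suggestions together, a sketch of what the combination might look like: the fixed iteration count from the range(0, 8) suggestion, applied only where the argument is large enough to need the logspace branch. The cutoff value below is a placeholder of mine (np.exp overflows near 709), not the threshold the Matlab or pvlib code actually uses:

```python
import numpy as np

def manual_fixed(logargW, niter=8):
    # Fixed iteration count: each Newton step on w + log(w) = logargW
    # gains roughly two decimal digits, so 8-9 steps reach double
    # precision in the mantissa of w regardless of the order bookkeeping.
    w = np.array(logargW, dtype=float)
    for _ in range(niter):
        w = w * (1 - np.log(w) + logargW) / (1 + w)
    return w

# Hypothetical Matlab-style filter: evaluate the logspace branch only
# where argW is too large to handle directly; smaller arguments would
# stay on the existing direct (lambertw) branch.
argW = np.array([0.5, 2.0, 1e6, 1e300])
too_big = argW > 700  # placeholder cutoff, not pvlib's actual value
w_big = manual_fixed(np.log(argW[too_big]))
```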
That all sounds good to me. I skipped the filter because I was lazy and the function seemed fast enough already. Maybe with more iterations it becomes worthwhile, or maybe not.
Fix typing to handle series inputs
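The commit message is terse; a sketch of the kind of coercion it likely refers to, assuming the fix converts pandas Series input to a float ndarray before the vectorized Newton steps (the helper name is illustrative, not the actual pvlib code):

```python
import numpy as np
import pandas as pd

def coerce_to_array(x):
    # Convert scalars, lists, and Series to a float ndarray so that
    # np.log, boolean masking, and broadcasting behave uniformly.
    return np.asarray(x, dtype=np.float64)

current = pd.Series([0.0, 2.5, 5.0])
arr = coerce_to_array(current)  # plain ndarray, safe for the Newton loop
```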