
Commit cf89a8e: Next release (#585)

* call for contributors
* add pub ESDD

1 parent cbaec43 commit cf89a8e

File tree: 10 files changed, +201 −178 lines

CHANGELOG.rst

Lines changed: 3 additions & 2 deletions

@@ -3,7 +3,7 @@ What's New
 ==========
 
 
-climpred v2.1.3 (2021-xx-xx)
+climpred v2.1.3 (2021-03-23)
 ============================
 
 Breaking changes
@@ -20,7 +20,6 @@ New Features
 - Added new metric :py:class:`~climpred.metrics._roc` Receiver Operating
   Characteristic as ``metric='roc'``. (:pr:`566`) `Aaron Spring`_.
 
-
 Bug fixes
 ---------
 - :py:meth:`~climpred.classes.HindcastEnsemble.verify` and
@@ -32,6 +31,8 @@ Bug fixes
   raised. Furthermore, ``PredictionEnsemble.map(func, *args, **kwargs)``
   applies only function to Datasets with matching dims if ``dim="dim0_or_dim1"`` is
   passed as ``**kwargs``. (:issue:`417`, :issue:`437`, :pr:`552`) `Aaron Spring`_.
+- :py:class:`~climpred.metrics._rpc` was fixed in ``xskillscore>=0.0.19`` and hence is
+  not falsely limited to 1 anymore (:issue:`562`, :pr:`566`) `Aaron Spring`_.
 
 Internals/Minor Fixes
 ---------------------
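For readers skimming the changelog, a minimal usage sketch of the new ``metric='roc'`` entry above. The hindcast setup follows the climpred quick-start; the ``bin_edges`` keyword is assumed to pass through to ``xskillscore.roc`` and should be checked against the ``_roc`` docstring.

# Hedged sketch of the new metric='roc' (v2.1.3). Dataset names follow the
# climpred tutorial data; the bin_edges kwarg is an assumption, to be checked
# against the _roc docstring.
import numpy as np
import climpred

hind = climpred.tutorial.load_dataset("CESM-DP-SST")
obs = climpred.tutorial.load_dataset("ERSST")
hindcast = climpred.HindcastEnsemble(hind).add_observations(obs)

roc = hindcast.verify(
    metric="roc",  # Receiver Operating Characteristic (:pr:`566`)
    comparison="m2o",
    dim=["member", "init"],  # reduce both, as for other probabilistic metrics
    alignment="same_verifs",
    bin_edges=np.array([-0.5, 0.0, 0.5]),  # assumed passthrough to xskillscore.roc
)
print(roc)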

README.rst

Lines changed: 20 additions & 1 deletion

@@ -76,6 +76,20 @@ Verification of weather and climate forecasts.
     :alt: climpred cloud demo
     :target: https://github.com/aaronspring/climpred-cloud-demo
 
+.. note::
+    We are actively looking for new contributors to climpred! Riley moved to McKinsey's
+    Climate Analytics team. Aaron is finishing his PhD in Hamburg, Germany, but will stay
+    in academia.
+    We especially hope for Python enthusiasts from the seasonal, subseasonal, or weather
+    prediction communities. In our past coding journey, collaborative coding and feedback
+    on issues and pull requests advanced our code and our thinking about forecast
+    verification more than we could have ever expected.
+    `Aaron <https://github.com/aaronspring/>`_ can provide guidance on
+    implementing new features into climpred. Feel free to implement
+    your own new feature or take a look at the
+    `good first issue <https://github.com/pangeo-data/climpred/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22>`_
+    tag in the issues. Please reach out to us via `gitter <https://gitter.im/climpred>`_.
+
 Installation
 ============
 
@@ -90,7 +104,12 @@ You can install the latest release of ``climpred`` using ``pip`` or ``conda``:
     conda install -c conda-forge climpred
 
 You can also install the bleeding edge (pre-release versions) by cloning this
-repository and running ``pip install . --upgrade`` in the main directory.
+repository and running ``pip install . --upgrade`` in the main directory, or via
+
+.. code-block:: bash
+
+    pip install git+https://github.com/pangeo-data/climpred.git
+
 
 Documentation
 =============

climpred/metrics.py

Lines changed: 22 additions & 19 deletions

@@ -2554,7 +2554,7 @@ def _discrimination(forecast, verif, dim=None, **metric_kwargs):
           * event                 (event) bool True False
             skill                 <U11 'initialized'
         Data variables:
-            SST                   (lead, event, forecast_probability) float64 0.1481 ...
+            SST                   (lead, event, forecast_probability) float64 0.07407...
 
         Option 2. Pre-process to generate a binary forecast and verification product:
 
@@ -2568,7 +2568,7 @@ def _discrimination(forecast, verif, dim=None, **metric_kwargs):
           * event                 (event) bool True False
             skill                 <U11 'initialized'
         Data variables:
-            SST                   (lead, event, forecast_probability) float64 0.1481 ...
+            SST                   (lead, event, forecast_probability) float64 0.07407...
 
         Option 3. Pre-process to generate a probability forecast and binary
         verification product. because ``member`` not present in ``hindcast``, use
@@ -2584,7 +2584,7 @@ def _discrimination(forecast, verif, dim=None, **metric_kwargs):
           * event                 (event) bool True False
             skill                 <U11 'initialized'
         Data variables:
-            SST                   (lead, event, forecast_probability) float64 0.1481 ...
+            SST                   (lead, event, forecast_probability) float64 0.07407...
 
         """
         forecast, verif, metric_kwargs, dim = _extract_and_apply_logical(
@@ -2659,10 +2659,10 @@ def _reliability(forecast, verif, dim=None, **metric_kwargs):
         Coordinates:
           * lead                  (lead) int32 1 2 3 4 5 6 7 8 9 10
           * forecast_probability  (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
-            SST_samples           (forecast_probability) float64 25.0 3.0 0.0 3.0 21.0
+            SST_samples           (forecast_probability) float64 22.0 5.0 1.0 3.0 21.0
             skill                 <U11 'initialized'
         Data variables:
-            SST                   (lead, forecast_probability) float64 0.16 ... 1.0
+            SST                   (lead, forecast_probability) float64 0.09091 ... 1.0
 
         Option 2. Pre-process to generate a binary forecast and verification product:
 
@@ -2673,10 +2673,10 @@ def _reliability(forecast, verif, dim=None, **metric_kwargs):
         Coordinates:
          * lead                  (lead) int32 1 2 3 4 5 6 7 8 9 10
          * forecast_probability  (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
-            SST_samples           (forecast_probability) float64 25.0 3.0 0.0 3.0 21.0
+            SST_samples           (forecast_probability) float64 22.0 5.0 1.0 3.0 21.0
             skill                 <U11 'initialized'
         Data variables:
-            SST                   (lead, forecast_probability) float64 0.16 ... 1.0
+            SST                   (lead, forecast_probability) float64 0.09091 ... 1.0
 
         Option 3. Pre-process to generate a probability forecast and binary
         verification product. because ``member`` not present in ``hindcast``, use
@@ -2689,10 +2689,10 @@ def _reliability(forecast, verif, dim=None, **metric_kwargs):
         Coordinates:
          * lead                  (lead) int32 1 2 3 4 5 6 7 8 9 10
          * forecast_probability  (forecast_probability) float64 0.1 0.3 0.5 0.7 0.9
-            SST_samples           (forecast_probability) float64 25.0 3.0 0.0 3.0 21.0
+            SST_samples           (forecast_probability) float64 22.0 5.0 1.0 3.0 21.0
             skill                 <U11 'initialized'
         Data variables:
-            SST                   (lead, forecast_probability) float64 0.16 ... 1.0
+            SST                   (lead, forecast_probability) float64 0.09091 ... 1.0
 
         """
         if "logical" in metric_kwargs:
@@ -2805,27 +2805,30 @@ def _rps(forecast, verif, dim=None, **metric_kwargs):
 
         Example:
             >>> category_edges = np.array([-.5, 0., .5, 1.])
-            >>> HindcastEnsemble.verify(metric='rps', comparison='m2o', dim='member',
+            >>> HindcastEnsemble.verify(metric='rps', comparison='m2o', dim=['member', 'init'],
             ...     alignment='same_verifs', category_edges=category_edges)
             <xarray.Dataset>
-            Dimensions: (init: 52, lead: 10)
+            Dimensions:                     (lead: 10)
             Coordinates:
-              * lead     (lead) int32 1 2 3 4 5 6 7 8 9 10
-              * init     (init) object 1964-01-01 00:00:00 ... 2015-01-01 00:00:00
-                skill    <U11 'initialized'
+              * lead                        (lead) int32 1 2 3 4 5 6 7 8 9 10
+                observations_category_edge  <U67 '[-np.inf, -0.5), [-0.5, 0.0), [0.0, 0.5...
+                forecasts_category_edge     <U67 '[-np.inf, -0.5), [-0.5, 0.0), [0.0, 0.5...
+                skill                       <U11 'initialized'
             Data variables:
-                SST      (lead, init) float64 0.2696 0.2696 0.2696 ... 0.2311 0.2311 0.2311
+                SST                         (lead) float64 0.115 0.1123 ... 0.1687 0.1875
+
 
             >>> category_edges = np.array([9.5, 10., 10.5, 11.])
             >>> PerfectModelEnsemble.verify(metric='rps', comparison='m2c',
             ...     dim=['member','init'], category_edges=category_edges)
             <xarray.Dataset>
-            Dimensions: (lead: 20)
+            Dimensions:                     (lead: 20)
             Coordinates:
-              * lead     (lead) int64 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
+              * lead                        (lead) int64 1 2 3 4 5 6 7 ... 15 16 17 18 19 20
+                observations_category_edge  <U71 '[-np.inf, 9.5), [9.5, 10.0), [10.0, 10....
+                forecasts_category_edge     <U71 '[-np.inf, 9.5), [9.5, 10.0), [10.0, 10....
             Data variables:
-                tos      (lead) float64 0.1512 0.2726 0.1259 0.214 ... 0.2085 0.1427 0.2757
-
+                tos                         (lead) float64 0.08951 0.1615 ... 0.1399 0.2274
         """
         dim = _remove_member_from_dim_or_raise(dim)
         if "category_edges" in metric_kwargs:

climpred/tests/test_PerfectModelEnsemble_class.py

Lines changed: 0 additions & 1 deletion

@@ -368,7 +368,6 @@ def test_PerfectModel_verify_bootstrap_deterministic(
     if dim == "member" and metric in pearson_r_containing_metrics:
         dim = ["init", "member"]
 
-    # verify()
     actual = pm.verify(
         comparison=comparison,
         metric=metric,

climpred/tests/test_metrics_perfect.py

Lines changed: 40 additions & 13 deletions

@@ -21,11 +21,24 @@
 
 xr.set_options(display_style="text")
 
+pearson_r_containing_metrics = [
+    "pearson_r",
+    "spearman_r",
+    "pearson_r_p_value",
+    "spearman_r_p_value",
+    "msess_murphy",
+    "bias_slope",
+    "conditional_bias",
+    "std_ratio",
+    "conditional_bias",
+    "uacc",
+]
+
 
 @pytest.mark.parametrize("how", ["constant", "increasing_by_lead"])
 @pytest.mark.parametrize("comparison", PM_COMPARISONS)
 @pytest.mark.parametrize("metric", PM_METRICS)
-def test_PerfectModelEnsemble_constant_forecasts(
+def test_PerfectModelEnsemble_perfect_forecasts(
     perfectModelEnsemble_initialized_control, metric, comparison, how
 ):
     """Test that PerfectModelEnsemble.verify() returns a perfect score for a perfectly
@@ -73,24 +86,30 @@ def f(x):
             comparison = "m2c"
             skill = pe.verify(
                 metric=metric, comparison=comparison, dim=dim, **metric_kwargs
-            )
+            ).tos
         else:
             dim = "init" if comparison == "e2c" else ["init", "member"]
             skill = pe.verify(
                 metric=metric, comparison=comparison, dim=dim, **metric_kwargs
-            )
-        # # TODO: test assert skill.variable == perfect).all()
-        if metric == "contingency":
+            ).tos
+
+        if metric == "contingency" and how == "constant":
             assert (skill == 1).all()  # checks Contingency.accuracy
+        elif metric in ["crpss", "msess"]:  # identical forecast lead to nans
+            pass
+        elif Metric.perfect and metric not in pearson_r_containing_metrics:
+            assert (skill == Metric.perfect).all(), print(
+                f"{metric} perfect", Metric.perfect, "found", skill
+            )
         else:
-            assert skill == Metric.perfect
+            pass
 
 
 @pytest.mark.parametrize("alignment", ["same_inits", "same_verif", "maximize"])
 @pytest.mark.parametrize("how", ["constant", "increasing_by_lead"])
 @pytest.mark.parametrize("comparison", HINDCAST_COMPARISONS)
 @pytest.mark.parametrize("metric", HINDCAST_METRICS)
-def test_HindcastEnsemble_constant_forecasts(
+def test_HindcastEnsemble_perfect_forecasts(
     hindcast_hist_obs_1d, metric, comparison, how, alignment
 ):
     """Test that HindcastEnsemble.verify() returns a perfect score for a perfectly
@@ -152,18 +171,26 @@ def f(x):
                 if metric in probabilistic_metrics_requiring_more_than_member_dim
                 else "member",
                 alignment=alignment,
-                **metric_kwargs
-            )
+                **metric_kwargs,
+            ).SST
         else:
             dim = "member" if comparison == "m2o" else "init"
             skill = he.verify(
                 metric=metric,
                 comparison=comparison,
                 dim=dim,
                 alignment=alignment,
-                **metric_kwargs
+                **metric_kwargs,
+            ).SST
+        if metric == "contingency" and how == "constant":
+            assert (skill.mean() == 1).all(), print(
+                f"{metric} found", skill
+            )  # checks Contingency.accuracy
+        elif metric in ["msess", "crpss"]:
+            pass  # identical forecasts produce NaNs
+        elif Metric.perfect and metric not in pearson_r_containing_metrics:
+            assert (skill == Metric.perfect).all(), print(
+                f"{metric} perfect", Metric.perfect, "found", skill
             )
-        if metric == "contingency":
-            assert (skill == 1).all()  # checks Contingency.accuracy
         else:
-            assert skill == Metric.perfect
+            pass
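The restructured assertions encode one property: verifying a forecast against itself should return each metric's ``Metric.perfect`` score, except where identical inputs degenerate (correlation-based metrics hit zero variance; ``msess``/``crpss`` yield NaNs). A standalone sketch of that property, using ``xskillscore.mse`` whose perfect score is 0; the data and names here are illustrative.

# Minimal sketch of the perfect-forecast property the renamed tests check:
# a forecast verified against itself scores the metric's perfect value.
import numpy as np
import xarray as xr
import xskillscore as xs

truth = xr.DataArray(np.random.rand(30), dims="time")
forecast = truth.copy()  # a perfect forecast

skill = xs.mse(truth, forecast, dim="time")
assert float(skill) == 0.0  # mse's perfect score

# Correlation-based metrics are excluded above via pearson_r_containing_metrics:
# pearson_r between identical series involves a degenerate 0/0 denominator.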

docs/source/contributing.rst

Lines changed: 9 additions & 8 deletions

@@ -102,7 +102,6 @@ If you need to add new functions to the API, run ``sphinx-autogen -o api api.rst``
 Preparing Pull Requests
 -----------------------
 
-
 #. Fork the
    `climpred GitHub repository <https://github.com/pangeo-data/climpred>`__. It's
    fine to use ``climpred`` as your fork repository name because it will live
@@ -136,7 +135,8 @@ Preparing Pull Requests
         $ pip install --user pre-commit
         $ pre-commit install
 
-   Afterwards ``pre-commit`` will run whenever you commit.
+   ``pre-commit`` automatically beautifies the code, makes it more maintainable, and catches syntax errors.
+   Afterwards ``pre-commit`` will run whenever you commit.
 
    https://pre-commit.com/ is a framework for managing and maintaining multi-language pre-commit
    hooks to ensure code-style and code formatting is consistent.
@@ -145,9 +145,10 @@ Preparing Pull Requests
    You’ll need to make sure to activate that environment next time you want
   to use it after closing the terminal or your system.
 
-   You can now edit your local working copy and run/add tests as necessary. Please follow
-   PEP-8 for naming. When committing, ``pre-commit`` will modify the files as needed, or
-   will generally be quite clear about what you need to do to pass the commit test.
+   You can now edit your local working copy and run/add tests as necessary. Please try
+   to follow PEP-8 for naming. When committing, ``pre-commit`` will modify the files as
+   needed, or will generally be quite clear about what you need to do to pass the
+   commit test.
 
 #. Break your edits up into reasonably sized commits::
 
@@ -176,9 +177,9 @@ Preparing Pull Requests
 
 #. Running the performance test suite
 
-   Performance matters and it is worth considering whether your code has introduced
-   performance regressions. `climpred` is starting to write a suite of benchmarking tests
-   using `asv <https://asv.readthedocs.io/en/stable/>`_
+   If you considerably changed the core code of climpred, it is worth considering
+   whether your code has introduced performance regressions. `climpred` has a suite of
+   benchmarking tests using `asv <https://asv.readthedocs.io/en/stable/>`_
    to enable easy monitoring of the performance of critical `climpred` operations.
    These benchmarks are all found in the ``asv_bench`` directory.

docs/source/contributors.rst

Lines changed: 17 additions & 0 deletions

@@ -2,6 +2,21 @@
 Contributors
 ************
 
+.. note::
+    We are actively looking for new contributors to climpred! Riley moved to McKinsey's
+    Climate Analytics team. Aaron is finishing his PhD in Hamburg, Germany, but will stay
+    in academia.
+    We especially hope for Python enthusiasts from the seasonal, subseasonal, or weather
+    prediction communities. In our past coding journey, collaborative coding and feedback
+    on issues and pull requests advanced our code and our thinking about forecast
+    verification more than we could have ever expected.
+    `Aaron <https://github.com/aaronspring/>`_ can provide guidance on
+    implementing new features into climpred. Feel free to implement
+    your own new feature or take a look at the
+    `good first issue <https://github.com/pangeo-data/climpred/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22>`_
+    tag in the issues. Please reach out to us via `gitter <https://gitter.im/climpred>`_.
+
+
 Core Developers
 ===============
 * Riley X. Brady (`github <https://github.com/bradyrx/>`__)
@@ -11,6 +26,8 @@ Contributors
 ============
 * Andrew Huang (`github <https://github.com/ahuang11/>`__)
 * Kathy Pegion (`github <https://github.com/kpegion/>`__)
+* Anderson Banihirwe (`github <https://github.com/andersy005/>`__)
+* Ray Bell (`github <https://github.com/raybellwaves/>`__)
 
 For a list of all the contributions, see the github
 `contribution graph <https://github.com/pangeo-data/climpred/graphs/contributors>`_.

docs/source/examples/decadal/Significance.ipynb

Lines changed: 48 additions & 109 deletions
Large diffs are not rendered by default.
