
Commit 3c51b1a

Squash merge pydata#5950
Squashed commit of the following:

commit 6916fa7 (Deepak Cherian <[email protected]>, Mon Nov 22 11:16:43 2021 -0700): Update xarray/util/generate_reductions.py. Co-authored-by: Illviljan <[email protected]>
commit cd8a898 (dcherian <[email protected]>, Sat Nov 20 14:37:17 2021 -0700): add doctests
commit 19d82cd (Illviljan <[email protected]>, Sat Nov 20 22:00:29 2021 +0100): more reduce
commit 0f94bec (Illviljan <[email protected]>, Sat Nov 20 20:48:27 2021 +0100): another reduce
commit be33560 (Illviljan <[email protected]>, Sat Nov 20 20:28:39 2021 +0100): one more reduce
commit 3d854e5 (Illviljan <[email protected]>, Sat Nov 20 20:21:26 2021 +0100): more reduce edits
commit 2bbddaf (Illviljan <[email protected]>, Sat Nov 20 20:12:31 2021 +0100): make reduce args consistent
commit dfbe103 (merge f03b675 dd28a57; Illviljan <[email protected]>, Sat Nov 20 19:01:59 2021 +0100): Merge branch 'generate-reductions-class' of https://github.com/dcherian/xarray into pr/5950
commit f03b675 (merge 411d75d 7a201de; Illviljan <[email protected]>, Sat Nov 20 19:01:42 2021 +0100): Merge branch 'main' into pr/5950
commit dd28a57 (dcherian <[email protected]>, Sat Nov 20 10:57:22 2021 -0700): updates
commit 6a9a124 (pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>, Sat Nov 20 17:02:07 2021 +0000): [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
commit 411d75d (Illviljan <[email protected]>, Sat Nov 20 18:00:08 2021 +0100): Now get normal code running as well.
    Protocols are not needed anymore when subclassing/defining directly in the class. When adding a dummy method in DatasetResampleReductions, the order of subclassing had to be changed so that the correct reduce was used.
commit 5dcb5bf (Illviljan <[email protected]>, Sat Nov 20 12:30:50 2021 +0100): Attempt fixing typing errors.
    Mixing in DatasetReduce fixes: xarray/tests/test_groupby.py:460: error: Invalid self argument "Dataset" to attribute function "mean" with type "Callable[[DatasetReduce, Optional[Hashable], Optional[bool], Optional[bool], KwArg(Any)], T_Dataset]" [misc]
    Switching to "Dataset" as the returned type fixes: xarray/tests/test_groupby.py:77: error: Need type annotation for "expected" [var-annotated]
commit 7a201de (pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>, Fri Nov 19 11:37:20 2021 -0700): [pre-commit.ci] pre-commit autoupdate (pydata#5990)
commit 95394d5 (Illviljan <[email protected]>, Mon Nov 15 21:40:37 2021 +0100): Use set_options for asv bottleneck tests (pydata#5986)
    * Use set_options for bottleneck tests
    * Use set_options in rolling
    * Update rolling.py
    * [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
    * Update rolling.py
    * Update rolling.py
    * set_options not needed.
    Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
commit b2d7cd8 (Kai Mühlbauer <[email protected]>, Mon Nov 15 18:33:43 2021 +0100): Fix module name retrieval in `backend.plugins.remove_duplicates()`, plugin tests (pydata#5959). Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
commit c7e9d96 (dcherian <[email protected]>, Wed Nov 10 11:49:47 2021 -0700): Minor improvement
commit dea8fd9 (dcherian <[email protected]>, Mon Nov 8 16:18:07 2021 -0700): Refactor
commit 9bb2c32 (dcherian <[email protected]>, Mon Nov 8 13:56:53 2021 -0700): Reorder docstring to match numpy
commit 99bfe12 (dcherian <[email protected]>, Mon Nov 8 12:44:23 2021 -0700): Fixes pydata#5898
commit 7f39cc0 (dcherian <[email protected]>, Mon Nov 8 12:39:00 2021 -0700): Minor docstring improvements.
commit a04ed82 (dcherian <[email protected]>, Mon Nov 8 12:35:48 2021 -0700): Small changes
commit 816e794 (dcherian <[email protected]>, Sun Nov 7 20:56:37 2021 -0700): Generate DataArray, Dataset reductions too.
commit 569c67f (dcherian <[email protected]>, Sun Nov 7 20:54:42 2021 -0700): Add ddof for var, std
commit 6b9a81a (dcherian <[email protected]>, Sun Nov 7 20:35:52 2021 -0700): Better generator for reductions.
1 parent cfd2c07 commit 3c51b1a

17 files changed (+438 / -401 lines)
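The typing notes in the squashed commits above (5dcb5bf and 411d75d) are easier to follow with a sketch: xarray/util/generate_reductions.py writes out classes whose reduction methods simply forward to a shared reduce method, and typing the generated methods against that one signature is what lets mypy keep a precise return type for calls such as ds.groupby("x").mean(). The class and method bodies below are an illustrative sketch of that pattern under those assumptions, not xarray's actual generated code.

    from typing import Any, Callable, Hashable, Optional

    import numpy as np


    class DatasetReduce:
        # The concrete class (e.g. a Dataset or a groupby object) supplies `reduce`;
        # every generated aggregation is typed against this one signature.
        def reduce(
            self,
            func: Callable[..., Any],
            dim: Optional[Hashable] = None,
            keep_attrs: Optional[bool] = None,
            **kwargs: Any,
        ):
            raise NotImplementedError


    class DatasetReductions(DatasetReduce):
        # What a generated method looks like: it only forwards to `reduce`, so the
        # generator can stamp out mean/sum/std/var/... from a single template.
        def mean(
            self,
            dim: Optional[Hashable] = None,
            keep_attrs: Optional[bool] = None,
            **kwargs: Any,
        ):
            return self.reduce(np.mean, dim=dim, keep_attrs=keep_attrs, **kwargs)

In the real generator, the list of aggregations, the docstrings (including the doctests added in cd8a898), and extra arguments such as ddof for var/std come from templates in generate_reductions.py.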

.pre-commit-config.yaml

Lines changed: 4 additions & 4 deletions

@@ -8,12 +8,12 @@ repos:
       - id: check-yaml
   # isort should run before black as black sometimes tweaks the isort output
   - repo: https://github.com/PyCQA/isort
-    rev: 5.9.3
+    rev: 5.10.1
     hooks:
       - id: isort
   # https://github.com/python/black#version-control-integration
   - repo: https://github.com/psf/black
-    rev: 21.9b0
+    rev: 21.10b0
     hooks:
       - id: black
       - id: black-jupyter
@@ -22,8 +22,8 @@ repos:
     hooks:
       - id: blackdoc
         exclude: "generate_reductions.py"
-  - repo: https://gitlab.com/pycqa/flake8
-    rev: 3.9.2
+  - repo: https://github.com/PyCQA/flake8
+    rev: 4.0.1
     hooks:
       - id: flake8
   # - repo: https://github.com/Carreau/velin

asv_bench/asv.conf.json

Lines changed: 1 addition & 1 deletion

@@ -62,7 +62,7 @@
         "pandas": [""],
         "netcdf4": [""],
         "scipy": [""],
-        "bottleneck": ["", null],
+        "bottleneck": [""],
         "dask": [""],
         "distributed": [""],
         "flox": [""],

asv_bench/benchmarks/dataarray_missing.py

Lines changed: 0 additions & 8 deletions

@@ -16,13 +16,6 @@ def make_bench_data(shape, frac_nan, chunks):
     return da
 
 
-def requires_bottleneck():
-    try:
-        import bottleneck  # noqa: F401
-    except ImportError:
-        raise NotImplementedError()
-
-
 class DataArrayMissingInterpolateNA:
     def setup(self, shape, chunks, limit):
         if chunks is not None:
@@ -46,7 +39,6 @@ def time_interpolate_na(self, shape, chunks, limit):
 
 class DataArrayMissingBottleneck:
     def setup(self, shape, chunks, limit):
-        requires_bottleneck()
         if chunks is not None:
             requires_dask()
         self.da = make_bench_data(shape, 0.1, chunks)
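For context on the deletion above: asv skips any benchmark whose setup() raises NotImplementedError, which is the convention the removed requires_bottleneck() helper relied on. With bottleneck now always present in the benchmark environment (see the asv.conf.json change above) and toggled at runtime via xr.set_options, that guard is no longer needed. A minimal sketch of the convention, mirroring the requires_dask() helper the benchmarks keep (shown here purely for illustration):

    def requires_dask():
        # asv treats NotImplementedError raised during setup() as
        # "skip this benchmark", so a missing optional dependency
        # drops the affected benchmarks instead of failing the run.
        try:
            import dask  # noqa: F401
        except ImportError:
            raise NotImplementedError()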

asv_bench/benchmarks/rolling.py

Lines changed: 56 additions & 36 deletions

@@ -36,29 +36,45 @@ def setup(self, *args, **kwargs):
             randn_long, dims="x", coords={"x": np.arange(long_nx) * 0.1}
         )
 
-    @parameterized(["func", "center"], (["mean", "count"], [True, False]))
-    def time_rolling(self, func, center):
-        getattr(self.ds.rolling(x=window, center=center), func)().load()
-
-    @parameterized(["func", "pandas"], (["mean", "count"], [True, False]))
-    def time_rolling_long(self, func, pandas):
+    @parameterized(
+        ["func", "center", "use_bottleneck"],
+        (["mean", "count"], [True, False], [True, False]),
+    )
+    def time_rolling(self, func, center, use_bottleneck):
+        with xr.set_options(use_bottleneck=use_bottleneck):
+            getattr(self.ds.rolling(x=window, center=center), func)().load()
+
+    @parameterized(
+        ["func", "pandas", "use_bottleneck"],
+        (["mean", "count"], [True, False], [True, False]),
+    )
+    def time_rolling_long(self, func, pandas, use_bottleneck):
         if pandas:
             se = self.da_long.to_series()
             getattr(se.rolling(window=window, min_periods=window), func)()
         else:
-            getattr(self.da_long.rolling(x=window, min_periods=window), func)().load()
-
-    @parameterized(["window_", "min_periods"], ([20, 40], [5, 5]))
-    def time_rolling_np(self, window_, min_periods):
-        self.ds.rolling(x=window_, center=False, min_periods=min_periods).reduce(
-            getattr(np, "nansum")
-        ).load()
-
-    @parameterized(["center", "stride"], ([True, False], [1, 1]))
-    def time_rolling_construct(self, center, stride):
-        self.ds.rolling(x=window, center=center).construct(
-            "window_dim", stride=stride
-        ).sum(dim="window_dim").load()
+            with xr.set_options(use_bottleneck=use_bottleneck):
+                getattr(
+                    self.da_long.rolling(x=window, min_periods=window), func
+                )().load()
+
+    @parameterized(
+        ["window_", "min_periods", "use_bottleneck"], ([20, 40], [5, 5], [True, False])
+    )
+    def time_rolling_np(self, window_, min_periods, use_bottleneck):
+        with xr.set_options(use_bottleneck=use_bottleneck):
+            self.ds.rolling(x=window_, center=False, min_periods=min_periods).reduce(
+                getattr(np, "nansum")
+            ).load()
+
+    @parameterized(
+        ["center", "stride", "use_bottleneck"], ([True, False], [1, 1], [True, False])
+    )
+    def time_rolling_construct(self, center, stride, use_bottleneck):
+        with xr.set_options(use_bottleneck=use_bottleneck):
+            self.ds.rolling(x=window, center=center).construct(
+                "window_dim", stride=stride
+            ).sum(dim="window_dim").load()
 
 
 class RollingDask(Rolling):
@@ -87,24 +103,28 @@ def setup(self, *args, **kwargs):
 
 
 class DataArrayRollingMemory(RollingMemory):
-    @parameterized("func", ["sum", "max", "mean"])
-    def peakmem_ndrolling_reduce(self, func):
-        roll = self.ds.var1.rolling(x=10, y=4)
-        getattr(roll, func)()
+    @parameterized(["func", "use_bottleneck"], (["sum", "max", "mean"], [True, False]))
+    def peakmem_ndrolling_reduce(self, func, use_bottleneck):
+        with xr.set_options(use_bottleneck=use_bottleneck):
+            roll = self.ds.var1.rolling(x=10, y=4)
+            getattr(roll, func)()
 
-    @parameterized("func", ["sum", "max", "mean"])
-    def peakmem_1drolling_reduce(self, func):
-        roll = self.ds.var3.rolling(t=100)
-        getattr(roll, func)()
+    @parameterized(["func", "use_bottleneck"], (["sum", "max", "mean"], [True, False]))
+    def peakmem_1drolling_reduce(self, func, use_bottleneck):
+        with xr.set_options(use_bottleneck=use_bottleneck):
+            roll = self.ds.var3.rolling(t=100)
+            getattr(roll, func)()
 
 
 class DatasetRollingMemory(RollingMemory):
-    @parameterized("func", ["sum", "max", "mean"])
-    def peakmem_ndrolling_reduce(self, func):
-        roll = self.ds.rolling(x=10, y=4)
-        getattr(roll, func)()
-
-    @parameterized("func", ["sum", "max", "mean"])
-    def peakmem_1drolling_reduce(self, func):
-        roll = self.ds.rolling(t=100)
-        getattr(roll, func)()
+    @parameterized(["func", "use_bottleneck"], (["sum", "max", "mean"], [True, False]))
+    def peakmem_ndrolling_reduce(self, func, use_bottleneck):
+        with xr.set_options(use_bottleneck=use_bottleneck):
+            roll = self.ds.rolling(x=10, y=4)
+            getattr(roll, func)()
+
+    @parameterized(["func", "use_bottleneck"], (["sum", "max", "mean"], [True, False]))
+    def peakmem_1drolling_reduce(self, func, use_bottleneck):
+        with xr.set_options(use_bottleneck=use_bottleneck):
+            roll = self.ds.rolling(t=100)
+            getattr(roll, func)()
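The benchmark changes above all hinge on the same runtime switch; here is a quick usage sketch (the data is made up for illustration) of how xr.set_options toggles the bottleneck-accelerated path:

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({"var1": ("x", np.random.randn(1_000))})

    # Same rolling reduction, computed with and without bottleneck acceleration.
    with xr.set_options(use_bottleneck=True):
        with_bn = ds["var1"].rolling(x=10).mean()

    with xr.set_options(use_bottleneck=False):
        without_bn = ds["var1"].rolling(x=10).mean()

    # Both paths should agree numerically.
    xr.testing.assert_allclose(with_bn, without_bn)

Because the option is a runtime switch, the asv dependency matrix no longer needs a separate no-bottleneck environment, which is why asv.conf.json above drops the null entry.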

doc/user-guide/computation.rst

Lines changed: 2 additions & 0 deletions

@@ -107,6 +107,8 @@ Xarray also provides the ``max_gap`` keyword argument to limit the interpolation
 data gaps of length ``max_gap`` or smaller. See :py:meth:`~xarray.DataArray.interpolate_na`
 for more.
 
+.. _agg:
+
 Aggregation
 ===========

doc/whats-new.rst

Lines changed: 6 additions & 0 deletions

@@ -36,6 +36,8 @@ Bug fixes
 ~~~~~~~~~
 - Fix plot.line crash for data of shape ``(1, N)`` in _title_for_slice on format_item (:pull:`5948`).
   By `Sebastian Weigand <https://github.com/s-weigand>`_.
+- Fix a regression in the removal of duplicate backend entrypoints (:issue:`5944`, :pull:`5959`)
+  By `Kai Mühlbauer <https://github.com/kmuehlbauer>`_.
 
 Documentation
 ~~~~~~~~~~~~~
@@ -49,6 +51,10 @@ Documentation
 Internal Changes
 ~~~~~~~~~~~~~~~~
 
+- Use ``importlib`` to replace functionality of ``pkg_resources`` in
+  backend plugins tests. (:pull:`5959`).
+  By `Kai Mühlbauer <https://github.com/kmuehlbauer>`_.
+
 
 .. _whats-new.0.20.1:

xarray/backends/plugins.py

Lines changed: 6 additions & 4 deletions

@@ -23,15 +23,17 @@ def remove_duplicates(entrypoints):
     # check if there are multiple entrypoints for the same name
     unique_entrypoints = []
     for name, matches in entrypoints_grouped:
-        matches = list(matches)
+        # remove equal entrypoints
+        matches = list(set(matches))
         unique_entrypoints.append(matches[0])
         matches_len = len(matches)
         if matches_len > 1:
-            selected_module_name = matches[0].module_name
-            all_module_names = [e.module_name for e in matches]
+            all_module_names = [e.value.split(":")[0] for e in matches]
+            selected_module_name = all_module_names[0]
             warnings.warn(
                 f"Found {matches_len} entrypoints for the engine name {name}:"
-                f"\n {all_module_names}.\n It will be used: {selected_module_name}.",
+                f"\n {all_module_names}.\n "
+                f"The entrypoint {selected_module_name} will be used.",
                 RuntimeWarning,
             )
     return unique_entrypoints
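A note on the `.value.split(":")[0]` change above: the entrypoint objects handled here come from importlib.metadata, whose EntryPoint (unlike the old pkg_resources API) has no module_name attribute; the module is packed into the value field as "module:attribute". A small illustrative sketch (the entrypoint below is constructed by hand just for demonstration):

    from importlib.metadata import EntryPoint

    ep = EntryPoint(
        name="netcdf4",
        value="xarray.backends.netCDF4_:NetCDF4BackendEntrypoint",
        group="xarray.backends",
    )

    # The module name is everything before the ":" separator.
    module_name = ep.value.split(":")[0]
    print(module_name)  # xarray.backends.netCDF4_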
