
Commit 4489394

Merge remote-tracking branch 'upstream/master' into fix/plot-broadcast
* upstream/master:
  format indexing.rst code with black (pydata#3511)
  add missing pint integration tests (pydata#3508)
  DOC: update bottleneck repo url (pydata#3507)
  add drop_sel, drop_vars, map to api.rst (pydata#3506)
  remove syntax warning (pydata#3505)
  Dataset.map, GroupBy.map, Resample.map (pydata#3459)
  tests for datasets with units (pydata#3447)
  fix pandas-dev tests (pydata#3491)
  unpin pseudonetcdf (pydata#3496)
  whatsnew corrections (pydata#3494)
  drop_vars; deprecate drop for variables (pydata#3475)
  uamiv test using only raw uamiv variables (pydata#3485)
  Optimize dask array equality checks. (pydata#3453)
  Propagate indexes in DataArray binary operations. (pydata#3481)
  python 3.8 tests (pydata#3477)
2 parents 279ff1d + b74f80c commit 4489394

37 files changed, +2703 −417 lines

azure-pipelines.yml

Lines changed: 2 additions & 0 deletions
@@ -18,6 +18,8 @@ jobs:
         conda_env: py36
       py37:
         conda_env: py37
+      py38:
+        conda_env: py38
       py37-upstream-dev:
         conda_env: py37
         upstream_dev: true

ci/azure/install.yml

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ steps:
       --pre \
       --upgrade \
       matplotlib \
-      pandas=0.26.0.dev0+628.g03c1a3db2 \ # FIXME https://github.com/pydata/xarray/issues/3440
+      pandas \
       scipy
       # numpy \ # FIXME https://github.com/pydata/xarray/issues/3409
     pip install \

ci/requirements/py36.yml

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ dependencies:
   - pandas
   - pint
   - pip
-  - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+  - pseudonetcdf
   - pydap
   - pynio
   - pytest

ci/requirements/py37-windows.yml

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ dependencies:
   - pandas
   - pint
   - pip
-  - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+  - pseudonetcdf
   - pydap
   # - pynio # Not available on Windows
   - pytest

ci/requirements/py37.yml

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ dependencies:
   - pandas
   - pint
   - pip
-  - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+  - pseudonetcdf
   - pydap
   - pynio
   - pytest

ci/requirements/py38.yml

Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+name: xarray-tests
+channels:
+  - conda-forge
+dependencies:
+  - python=3.8
+  - pip
+  - pip:
+    - coveralls
+    - dask
+    - distributed
+    - numpy
+    - pandas
+    - pytest
+    - pytest-cov
+    - pytest-env

doc/api.rst

Lines changed: 8 additions & 6 deletions
@@ -94,7 +94,7 @@ Dataset contents
    Dataset.rename_dims
    Dataset.swap_dims
    Dataset.expand_dims
-   Dataset.drop
+   Dataset.drop_vars
    Dataset.drop_dims
    Dataset.set_coords
    Dataset.reset_coords
@@ -118,6 +118,7 @@ Indexing
    Dataset.loc
    Dataset.isel
    Dataset.sel
+   Dataset.drop_sel
    Dataset.head
    Dataset.tail
    Dataset.thin
@@ -154,7 +155,7 @@ Computation
 .. autosummary::
    :toctree: generated/
 
-   Dataset.apply
+   Dataset.map
    Dataset.reduce
    Dataset.groupby
    Dataset.groupby_bins
@@ -263,7 +264,7 @@ DataArray contents
    DataArray.rename
    DataArray.swap_dims
    DataArray.expand_dims
-   DataArray.drop
+   DataArray.drop_vars
    DataArray.reset_coords
    DataArray.copy
 
@@ -283,6 +284,7 @@ Indexing
    DataArray.loc
    DataArray.isel
    DataArray.sel
+   DataArray.drop_sel
    DataArray.head
    DataArray.tail
    DataArray.thin
@@ -542,10 +544,10 @@ GroupBy objects
    :toctree: generated/
 
    core.groupby.DataArrayGroupBy
-   core.groupby.DataArrayGroupBy.apply
+   core.groupby.DataArrayGroupBy.map
    core.groupby.DataArrayGroupBy.reduce
    core.groupby.DatasetGroupBy
-   core.groupby.DatasetGroupBy.apply
+   core.groupby.DatasetGroupBy.map
   core.groupby.DatasetGroupBy.reduce
 
 Rolling objects
@@ -566,7 +568,7 @@ Resample objects
 ================
 
 Resample objects also implement the GroupBy interface
-(methods like ``apply()``, ``reduce()``, ``mean()``, ``sum()``, etc.).
+(methods like ``map()``, ``reduce()``, ``mean()``, ``sum()``, etc.).
 
 .. autosummary::
    :toctree: generated/
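The ``apply`` → ``map`` rename documented above can be sketched with a short example. The dataset below is made up for illustration; ``Dataset.map`` and ``GroupBy.map`` are the names introduced by pydata#3459:

```python
import numpy as np
import xarray as xr

# A toy dataset; variable names and values are illustrative only.
ds = xr.Dataset({"a": ("x", [1.0, 2.0, 3.0])}, coords={"x": [10, 20, 30]})

# Dataset.map applies a function to every data variable
# (it replaces the now-deprecated Dataset.apply).
squared = ds.map(np.square)

# GroupBy.map is the renamed GroupBy.apply: the function is
# applied to each group and the results are concatenated.
shifted = ds.groupby("x").map(lambda d: d + 1)
```

The deprecated ``apply`` spellings still work in this release, but emit a warning pointing at the new names.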

doc/computation.rst

Lines changed: 3 additions & 3 deletions
@@ -183,7 +183,7 @@ a value when aggregating:
 
 Note that rolling window aggregations are faster and use less memory when bottleneck_ is installed. This only applies to numpy-backed xarray objects.
 
-.. _bottleneck: https://github.com/kwgoodman/bottleneck/
+.. _bottleneck: https://github.com/pydata/bottleneck/
 
 We can also manually iterate through ``Rolling`` objects:
 
@@ -462,13 +462,13 @@ Datasets support most of the same methods found on data arrays:
    abs(ds)
 
 Datasets also support NumPy ufuncs (requires NumPy v1.13 or newer), or
-alternatively you can use :py:meth:`~xarray.Dataset.apply` to apply a function
+alternatively you can use :py:meth:`~xarray.Dataset.map` to map a function
 to each variable in a dataset:
 
 .. ipython:: python
 
    np.sin(ds)
-   ds.apply(np.sin)
+   ds.map(np.sin)
 
 Datasets also use looping over variables for *broadcasting* in binary
 arithmetic. You can do arithmetic between any ``DataArray`` and a dataset:

doc/dask.rst

Lines changed: 1 addition & 1 deletion
@@ -292,7 +292,7 @@ For the best performance when using Dask's multi-threaded scheduler, wrap a
 function that already releases the global interpreter lock, which fortunately
 already includes most NumPy and Scipy functions. Here we show an example
 using NumPy operations and a fast function from
-`bottleneck <https://github.com/kwgoodman/bottleneck>`__, which
+`bottleneck <https://github.com/pydata/bottleneck>`__, which
 we use to calculate `Spearman's rank-correlation coefficient <https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient>`__:
 
 .. code-block:: python

doc/data-structures.rst

Lines changed: 2 additions & 2 deletions
@@ -393,14 +393,14 @@ methods (like pandas) for transforming datasets into new objects.
 
 For removing variables, you can select and drop an explicit list of
 variables by indexing with a list of names or using the
-:py:meth:`~xarray.Dataset.drop` methods to return a new ``Dataset``. These
+:py:meth:`~xarray.Dataset.drop_vars` methods to return a new ``Dataset``. These
 operations keep around coordinates:
 
 .. ipython:: python
 
    ds[['temperature']]
    ds[['temperature', 'temperature_double']]
-   ds.drop('temperature')
+   ds.drop_vars('temperature')
 
 To remove a dimension, you can use :py:meth:`~xarray.Dataset.drop_dims` method.
 Any variables using that dimension are dropped:
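The ``drop`` split introduced in pydata#3475 and pydata#3506 (``drop_vars`` for variables, ``drop_sel`` for index labels) can be illustrated with a minimal sketch; the variable names here are made up:

```python
import xarray as xr

# A toy dataset; names and values are illustrative only.
ds = xr.Dataset(
    {
        "temperature": ("x", [11.0, 12.0, 13.0]),
        "pressure": ("x", [1.0, 2.0, 3.0]),
    },
    coords={"x": [0, 1, 2]},
)

# drop_vars removes whole variables by name
# (the role the deprecated Dataset.drop used to play).
slim = ds.drop_vars("temperature")

# drop_sel removes entries along a dimension by index label.
trimmed = ds.drop_sel(x=1)
```

Both return new ``Dataset`` objects; the original ``ds`` is left unchanged.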
