Implementing dask.array.coarsen in xarrays #1192
This has the feel of a multi-dimensional resampling operation. I could potentially see this as part of that interface. That said, this seems useful and I wouldn't get too hung up about the optimal interface. cc @jhamman, who has been thinking about regridding/resampling.
The
Does that fit with the
I think this would be a nice feature and something that would fit nicely within xarray. The spatial resampling that I'm working towards is 1) a ways off and 2) quite a bit more domain specific than this. I'm +1!
Hello @laliberte @shoyer @jhamman. I'm with Continuum and working on NASA-funded Earth science ML (see the ensemble learning models on GitHub and their documentation). We can submit a PR on this issue for dask's coarsen and the specs above.
Dask has this actually. We had to build it before we could build the parallel version. See chunk.py.
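For concreteness, here is a minimal sketch (my own example, not from the thread) of calling the parallel version, dask.array.coarsen, which takes a reduction function, the array, and a dict mapping axis number to block size:

```python
import numpy as np
import dask.array as da

# A 4x6 array; each chunk size must be divisible by its block size
x = da.from_array(np.arange(24).reshape(4, 6), chunks=(2, 3))

# Reduce 2x3 blocks with np.mean: result has shape (4//2, 6//3) = (2, 2)
y = da.coarsen(np.mean, x, {0: 2, 1: 3})
result = y.compute()  # block means: [[4., 7.], [16., 19.]]
```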
If it's part of
My guess is that if you want to avoid a strong dependence on Dask then you'll want to copy the code over regardless. Historically, chunk.py hasn't been considered public (we don't publish its docstrings in the docs, for example). That being said, it hasn't moved in a long while and I don't see any reason for it to move. I'm certainly willing to commit to going through a lengthy deprecation cycle if it does need to move.
The reason I ask is that, ideally,
Not to hijack the thread, but @PeterDSteinberg, this is the first I've heard of earthio, and I think there would be a lot of interest from the broader atmospheric/oceanic sciences community in hearing about your plans. Could your team do a blog post on Continuum sometime outlining the goals of the project?
Hi @darothen. Back to the subject matter of the thread... You can assign the issue to me (can you also add me to the xarray repo so I can assign things to myself?). I'll wait to get started until after @shoyer comments on @laliberte's question (quoted below).
Currently dask is an optional dependency for xarray, which I would like to preserve if possible. I'll take a glance at the implementation shortly, but my guess is that we will indeed want to vendor the numpy version into xarray.
…On Wed, May 31, 2017 at 6:38 AM Peter Steinberg wrote:

Hi @darothen, earthio is a recent experimental refactor of what was the elm.readers subpackage. elm (Ensemble Learning Models) was developed with a Phase I NASA SBIR in 2016 and in part reflects our thinking in late 2015, when xarray was newer and we were planning the proposal. In roughly the last month we have started a Phase II of development on multi-model dask/xarray ML algorithms based on xarray, dask, scikit-learn and a Bokeh maps UI for tasks like land cover classification. I'll add you to elm, and feel free to contact me at psteinberg [at] continuum [dot] io. We will do more promotion / blogs in the near term, and in about 12 months we will release a free/open collection of notebooks that form a "Machine Learning with Environmental Data" 3-day course.

Back to the subject matter of the thread... I assigned this issue to myself. I'll wait to get started until after @shoyer comments on @laliberte's question: (1) replicate the serial coarsen into xarray, or (2) point to the dask coarsen methods?
The dask implementation is short enough that I would certainly reimplement/vendor the pure numpy version for xarray. It might also be worth considering using the related utility
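The pure-numpy version really is short. As an illustration only (a sketch of the reshape trick, not dask's actual chunk.py code): reshape each coarsened axis into a (new_length, block_size) pair, then reduce over the block axes.

```python
import numpy as np

def coarsen_numpy(reduction, x, axes, trim_excess=False):
    """Apply `reduction` over blocks of `x`.

    `axes` maps axis number -> block size, e.g. {0: 2, 1: 3}.
    """
    if trim_excess:
        # Trim each axis so its length is divisible by the block size
        x = x[tuple(slice(0, n - n % axes.get(i, 1))
                    for i, n in enumerate(x.shape))]
    # Interleave (new_length, block) pairs:
    # shape (n0, n1, ...) -> (n0//b0, b0, n1//b1, b1, ...)
    newshape = []
    for i, n in enumerate(x.shape):
        b = axes.get(i, 1)
        newshape.extend([n // b, b])
    # Reduce over the block axes (the odd positions 1, 3, 5, ...)
    return reduction(x.reshape(newshape),
                     axis=tuple(range(1, 2 * x.ndim, 2)))

blocks = coarsen_numpy(np.mean, np.arange(24).reshape(4, 6), {0: 2, 1: 3})
# block means: [[4., 7.], [16., 19.]]
```

The reshape is only valid when every axis length is divisible by its block size, which is why the trimming (or padding, in a fuller implementation) has to happen first.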
Is this feature still being considered? I wrote my own function to achieve this (using dask.array.coarsen), but I was planning to implement similar functionality in xgcm, and it would be ideal if we could use an upstream implementation from xarray.
Just to be clear, my comment above was a joke... @jbusecke and I are good friends! 🤣
I should add that I would be happy to work on an implementation, but would probably need a good number of pointers. Here is the implementation that I have been using (it only works with dask arrays at this point). I should have posted it earlier to avoid @rabernat's zingers over here.

import warnings

import numpy as np
import xarray as xr
from dask.array import Array, coarsen


def aggregate(da, blocks, func=np.nanmean, debug=False):
    """
    Performs efficient block averaging in one or multiple dimensions.
    Only works on regular grid dimensions.

    Parameters
    ----------
    da : xarray DataArray (must be backed by a dask array)
    blocks : list
        List of tuples containing the dimension and interval to aggregate over
    func : function
        Aggregation function. Defaults to numpy.nanmean

    Returns
    -------
    da_agg : xarray DataArray
        Aggregated array

    Examples
    --------
    >>> from xarrayutils import aggregate
    >>> import numpy as np
    >>> import xarray as xr
    >>> import dask.array as da
    >>> x = np.arange(-10, 10)
    >>> y = np.arange(-10, 10)
    >>> xx, yy = np.meshgrid(x, y)
    >>> z = xx**2 - yy**2
    >>> a = xr.DataArray(da.from_array(z, chunks=(20, 20)),
    ...                  coords={'x': x, 'y': y}, dims=['y', 'x'])
    >>> print(a)
    <xarray.DataArray 'array-7e422c91624f207a5f7ebac426c01769' (y: 20, x: 20)>
    dask.array<array-7..., shape=(20, 20), dtype=int64, chunksize=(20, 20)>
    Coordinates:
      * y        (y) int64 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
      * x        (x) int64 -10 -9 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 7 8 9
    >>> blocks = [('x', 2), ('y', 5)]
    >>> a_coarse = aggregate(a, blocks, func=np.mean)
    >>> print(a_coarse)
    <xarray.DataArray 'array-7e422c91624f207a5f7ebac426c01769' (y: 2, x: 10)>
    dask.array<coarsen..., shape=(2, 10), dtype=float64, chunksize=(2, 10)>
    Coordinates:
      * y        (y) int64 -10 0
      * x        (x) int64 -10 -8 -6 -4 -2 0 2 4 6 8
    Attributes:
        Coarsened with: <function mean at 0x111754230>
        Coarsenblocks: [('x', 2), ('y', 10)]
    """
    # Check that the underlying data is a dask array (automatic conversion
    # could be added in the future)
    if not isinstance(da.data, Array):
        raise RuntimeError('data array data must be a dask array')
    # Check the data type of blocks
    # TODO: write test
    if (not all(isinstance(n[0], str) for n in blocks) or
            not all(isinstance(n[1], int) for n in blocks)):
        print('blocks input', str(blocks))
        raise RuntimeError("blocks must be (str, int) tuples, "
                           "e.g. ('lon', 4)")
    # Check that the given array has the dimensions specified in blocks
    try:
        block_dict = dict((da.get_axis_num(x), y) for x, y in blocks)
    except ValueError:
        raise RuntimeError("'blocks' contains a non-matching dimension")
    # Record the size of the excess along each aggregated axis
    blocks = [(a[0], a[1], da.shape[da.get_axis_num(a[0])] % a[1])
              for a in blocks]
    # For now, default to trimming the excess
    da_coarse = coarsen(func, da.data, block_dict, trim_excess=True)
    # For now, only dimension coordinates are carried over
    new_coords = dict([])
    warnings.warn("WARNING: only dimensions are carried over as coordinates")
    for cc in list(da.dims):
        new_coords[cc] = da.coords[cc]
        for dd in blocks:
            if dd[0] in list(da.coords[cc].dims):
                new_coords[cc] = new_coords[cc].isel(
                    **{dd[0]: slice(0, -(1 + dd[2]), dd[1])})
    attrs = {'Coarsened with': str(func), 'Coarsenblocks': str(blocks)}
    da_coarse = xr.DataArray(da_coarse, dims=da.dims, coords=new_coords,
                             name=da.name, attrs=attrs)
    return da_coarse
See also #2525.
Should this have been closed by #2612? |
Looks like it |
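For anyone finding this thread later: the implementation merged in #2612 is exposed as DataArray.coarsen / Dataset.coarsen. A brief usage sketch (my own example; boundary='trim' drops any excess, analogous to trim_excess=True above):

```python
import numpy as np
import xarray as xr

arr = xr.DataArray(
    np.arange(24.0).reshape(4, 6),
    dims=["y", "x"],
    coords={"y": np.arange(4), "x": np.arange(6)},
)

# Block-average 2 points along y and 3 along x; coordinates are
# aggregated too (by their mean, by default)
coarse = arr.coarsen(y=2, x=3, boundary="trim").mean()
# coarse.shape -> (2, 2)
```

This handles the coordinate bookkeeping that the hand-rolled aggregate() above does manually.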
Would it make sense to implement the dask.array.coarsen method on xarrays? In some ways, coarsen is a generalization of reduce. Any thoughts?