DOC: fix docstrings and doc build for 0.11.1
shoyer committed Dec 30, 2018
1 parent 250b19c commit d1d2ece
Showing 7 changed files with 63 additions and 65 deletions.
2 changes: 1 addition & 1 deletion doc/examples/weather-data.rst
@@ -17,7 +17,7 @@ Shared setup:

 .. ipython:: python
    :suppress:

-    fpath = "doc/examples/_code/weather_data_setup.py"
+    fpath = "examples/_code/weather_data_setup.py"
     with open(fpath) as f:
         code = compile(f.read(), fpath, 'exec')
         exec(code)
2 changes: 1 addition & 1 deletion doc/internals.rst
@@ -111,7 +111,7 @@ Back in an interactive IPython session, we can use these properties:

 .. ipython:: python
    :suppress:

-    exec(open("doc/examples/_code/accessor_example.py").read())
+    exec(open("examples/_code/accessor_example.py").read())

 .. ipython:: python
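For context on the hunk above, the executed file defines a custom accessor. A minimal sketch of the same pattern (the ``geo`` name, the ``center`` property, and the sample coordinates are illustrative assumptions, not taken from this commit):

```python
import xarray as xr

# Hypothetical accessor in the spirit of accessor_example.py; the "geo"
# namespace and the center property are invented for illustration.
@xr.register_dataset_accessor("geo")
class GeoAccessor:
    """Custom namespace attached to every Dataset as ``ds.geo``."""

    def __init__(self, dataset):
        self._ds = dataset

    @property
    def center(self):
        """Mean (lon, lat) of the dataset's coordinates."""
        lon = self._ds["lon"].values
        lat = self._ds["lat"].values
        return float(lon.mean()), float(lat.mean())

ds = xr.Dataset(coords={"lon": [0.0, 10.0], "lat": [40.0, 50.0]})
print(ds.geo.center)  # (5.0, 45.0)
```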
29 changes: 15 additions & 14 deletions doc/whats-new.rst
@@ -13,23 +13,24 @@ What's New

     import xarray as xr
     np.random.seed(123456)

+.. _whats-new.0.11.1:
+
+v0.11.1 (29 December 2018)
+--------------------------
+
+This minor release includes a number of enhancements and bug fixes, and two
+(slightly) breaking changes.
+
 .. warning::

-  Xarray plans to drop support for python 2.7 at the end of 2018. This
-  means that new releases of xarray published after this date will only be
-  installable on python 3+ environments, but older versions of xarray will
-  always be available to python 2.7 users. For more information see the
-  following references
+  This is the last xarray release that will support Python 2.7. Future releases
+  will be Python 3 only, but older versions of xarray will always be available
+  for Python 2.7 users. For more details, see:

-  - `Xarray Github issue discussing dropping Python 2 <https://github.com/pydata/xarray/issues/1829>`__
+  - `Xarray Github issue discussing dropping Python 2 <https://github.com/pydata/xarray/issues/1829>`__
   - `Python 3 Statement <http://www.python3statement.org/>`__
   - `Tips on porting to Python 3 <https://docs.python.org/3/howto/pyporting.html>`__

-.. _whats-new.0.11.1:
-
-v0.11.1 (unreleased)
---------------------
-

Breaking changes
~~~~~~~~~~~~~~~~

@@ -70,9 +71,9 @@ Enhancements
- Datasets are now guaranteed to have a ``'source'`` encoding, so the source
file name is always stored (:issue:`2550`).
By `Tom Nicholas <http://github.com/TomNicholas>`_.
-- The `apply` methods for `DatasetGroupBy`, `DataArrayGroupBy`,
-  `DatasetResample` and `DataArrayResample` can now pass positional arguments to
-  the applied function.
+- The ``apply`` methods for ``DatasetGroupBy``, ``DataArrayGroupBy``,
+  ``DatasetResample`` and ``DataArrayResample`` now support passing positional
+  arguments to the applied function as a tuple to the ``args`` argument.
By `Matti Eskelinen <https://github.com/maaleske>`_.
- 0d slices of ndarrays are now obtained directly through indexing, rather than
extracting and wrapping a scalar, avoiding unnecessary copying. By `Daniel
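The ``args`` enhancement noted in the entry above can be sketched as follows (the data, labels, and ``scale_and_shift`` helper are invented for illustration):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6.0), dims="x",
    coords={"label": ("x", ["a", "a", "b", "b", "c", "c"])})

def scale_and_shift(arr, factor, offset=0.0):
    # An arbitrary example function taking one positional argument.
    return arr * factor + offset

# Positional arguments are forwarded to the function via the ``args`` tuple;
# keyword arguments are forwarded as before.
result = da.groupby("label").apply(scale_and_shift, args=(10.0,), offset=1.0)
print(result.values)  # [ 1. 11. 21. 31. 41. 51.]
```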
63 changes: 32 additions & 31 deletions xarray/backends/api.py
@@ -517,18 +517,17 @@ def open_mfdataset(paths, chunks=None, concat_dim=_CONCAT_DIM_DEFAULT,
By default, xarray attempts to infer this argument by examining
component files. Set ``concat_dim=None`` explicitly to disable
concatenation.
-    compat : {'identical', 'equals', 'broadcast_equals',
-              'no_conflicts'}, optional
+    compat : {'identical', 'equals', 'broadcast_equals', 'no_conflicts'}, optional
         String indicating how to compare variables of the same name for
         potential conflicts when merging:
-        - 'broadcast_equals': all values must be equal when variables are
-          broadcast against each other to ensure common dimensions.
-        - 'equals': all values and dimensions must be the same.
-        - 'identical': all values, dimensions and attributes must be the
-          same.
-        - 'no_conflicts': only values which are not null in both datasets
-          must be equal. The returned dataset then contains the combination
-          of all non-null values.
+        * 'broadcast_equals': all values must be equal when variables are
+          broadcast against each other to ensure common dimensions.
+        * 'equals': all values and dimensions must be the same.
+        * 'identical': all values, dimensions and attributes must be the
+          same.
+        * 'no_conflicts': only values which are not null in both datasets
+          must be equal. The returned dataset then contains the combination
+          of all non-null values.
preprocess : callable, optional
If provided, call this function on each dataset prior to concatenation.
You can find the file-name from which each dataset was loaded in
@@ -545,29 +544,31 @@ def open_mfdataset(paths, chunks=None, concat_dim=_CONCAT_DIM_DEFAULT,
active dask scheduler.
data_vars : {'minimal', 'different', 'all' or list of str}, optional
These data variables will be concatenated together:
-      * 'minimal': Only data variables in which the dimension already
-        appears are included.
-      * 'different': Data variables which are not equal (ignoring
-        attributes) across all datasets are also concatenated (as well as
-        all for which dimension already appears). Beware: this option may
-        load the data payload of data variables into memory if they are not
-        already loaded.
-      * 'all': All data variables will be concatenated.
-      * list of str: The listed data variables will be concatenated, in
-        addition to the 'minimal' data variables.
+        * 'minimal': Only data variables in which the dimension already
+          appears are included.
+        * 'different': Data variables which are not equal (ignoring
+          attributes) across all datasets are also concatenated (as well as
+          all for which dimension already appears). Beware: this option may
+          load the data payload of data variables into memory if they are not
+          already loaded.
+        * 'all': All data variables will be concatenated.
+        * list of str: The listed data variables will be concatenated, in
+          addition to the 'minimal' data variables.
    coords : {'minimal', 'different', 'all' or list of str}, optional
These coordinate variables will be concatenated together:
-      * 'minimal': Only coordinates in which the dimension already appears
-        are included.
-      * 'different': Coordinates which are not equal (ignoring attributes)
-        across all datasets are also concatenated (as well as all for which
-        dimension already appears). Beware: this option may load the data
-        payload of coordinate variables into memory if they are not already
-        loaded.
-      * 'all': All coordinate variables will be concatenated, except
-        those corresponding to other dimensions.
-      * list of str: The listed coordinate variables will be concatenated,
-        in addition the 'minimal' coordinates.
+        * 'minimal': Only coordinates in which the dimension already appears
+          are included.
+        * 'different': Coordinates which are not equal (ignoring attributes)
+          across all datasets are also concatenated (as well as all for which
+          dimension already appears). Beware: this option may load the data
+          payload of coordinate variables into memory if they are not already
+          loaded.
+        * 'all': All coordinate variables will be concatenated, except
+          those corresponding to other dimensions.
+        * list of str: The listed coordinate variables will be concatenated,
+          in addition the 'minimal' coordinates.
parallel : bool, optional
If True, the open and preprocess steps of this function will be
performed in parallel using ``dask.delayed``. Default is False.
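The ``compat`` options documented in the hunk above are shared with ``xr.merge``, which is easier to demonstrate than a multi-file open. A minimal sketch of how ``'no_conflicts'`` differs from ``'equals'`` (datasets invented for illustration):

```python
import numpy as np
import xarray as xr

# Two datasets that disagree only where one of them is null.
ds1 = xr.Dataset({"a": ("x", [1.0, np.nan])})
ds2 = xr.Dataset({"a": ("x", [np.nan, 2.0])})

# 'no_conflicts' keeps the combination of all non-null values.
merged = xr.merge([ds1, ds2], compat="no_conflicts")
print(merged["a"].values)  # [1. 2.]

# 'equals' requires the variables to match exactly, so this raises MergeError.
try:
    xr.merge([ds1, ds2], compat="equals")
except xr.MergeError:
    print("conflict detected")
```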
1 change: 1 addition & 0 deletions xarray/core/alignment.py
@@ -53,6 +53,7 @@ def align(*objects, **kwargs):
join : {'outer', 'inner', 'left', 'right', 'exact'}, optional
Method for joining the indexes of the passed objects along each
dimension:
- 'outer': use the union of object indexes
- 'inner': use the intersection of object indexes
- 'left': use indexes from the first object with each dimension
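The ``join`` options listed in the ``align`` docstring above behave like set operations on the index labels. A short sketch (the arrays and labels are invented for illustration):

```python
import xarray as xr

x = xr.DataArray([1.0, 2.0, 3.0], dims="t", coords={"t": [0, 1, 2]})
y = xr.DataArray([10.0, 20.0, 30.0], dims="t", coords={"t": [1, 2, 3]})

# 'inner' keeps only the index labels common to both objects.
xi, yi = xr.align(x, y, join="inner")
print(xi["t"].values)  # [1 2]

# 'outer' keeps the union of labels, introducing NaN where one is missing.
xo, yo = xr.align(x, y, join="outer")
print(len(xo["t"]))  # 4
```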
6 changes: 3 additions & 3 deletions xarray/core/combine.py
@@ -574,10 +574,10 @@ def auto_combine(datasets, concat_dim=_CONCAT_DIM_DEFAULT,
By default, xarray attempts to infer this argument by examining
component files. Set ``concat_dim=None`` explicitly to disable
concatenation.
-    compat : {'identical', 'equals', 'broadcast_equals',
-              'no_conflicts'}, optional
+    compat : {'identical', 'equals', 'broadcast_equals', 'no_conflicts'}, optional
String indicating how to compare variables of the same name for
potential conflicts:
- 'broadcast_equals': all values must be equal when variables are
broadcast against each other to ensure common dimensions.
- 'equals': all values and dimensions must be the same.
@@ -599,7 +599,7 @@ def auto_combine(datasets, concat_dim=_CONCAT_DIM_DEFAULT,
--------
concat
Dataset.merge
"""
""" # noqa

# Coerce 1D input into ND to maintain backwards-compatible API until API
# for N-D combine decided
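``auto_combine`` (documented in the hunk above) dispatches to ``concat`` and ``merge`` under the hood. The one-dimensional case it infers can be sketched directly with ``xr.concat`` (datasets invented for illustration):

```python
import xarray as xr

ds1 = xr.Dataset({"v": ("t", [1.0, 2.0])}, coords={"t": [0, 1]})
ds2 = xr.Dataset({"v": ("t", [3.0, 4.0])}, coords={"t": [2, 3]})

# Datasets sharing a dimension are concatenated along it; this is the
# concatenation auto_combine performs once the dimension is inferred.
combined = xr.concat([ds1, ds2], dim="t")
print(combined["v"].values)  # [1. 2. 3. 4.]
```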
25 changes: 10 additions & 15 deletions xarray/core/merge.py
@@ -49,8 +49,7 @@ def unique_variable(name, variables, compat='broadcast_equals'):
variables : list of xarray.Variable
List of Variable objects, all of which go by the same name in different
inputs.
-    compat : {'identical', 'equals', 'broadcast_equals',
-              'no_conflicts'}, optional
+    compat : {'identical', 'equals', 'broadcast_equals', 'no_conflicts'}, optional
Type of equality check to use.
Returns
@@ -60,7 +59,7 @@ def unique_variable(name, variables, compat='broadcast_equals'):
Raises
------
MergeError: if any of the variables are not equal.
"""
""" # noqa
out = variables[0]
if len(variables) > 1:
combine_method = None
@@ -122,16 +121,15 @@ def merge_variables(
priority_vars : mapping with Variable or None values, optional
If provided, variables are always taken from this dict in preference to
the input variable dictionaries, without checking for conflicts.
-    compat : {'identical', 'equals', 'broadcast_equals',
-              'minimal', 'no_conflicts'}, optional
+    compat : {'identical', 'equals', 'broadcast_equals', 'minimal', 'no_conflicts'}, optional
Type of equality check to use when checking for conflicts.
Returns
-------
OrderedDict with keys taken by the union of keys on list_of_variable_dicts,
and Variable values corresponding to those that should be found on the
merged result.
"""
""" # noqa
if priority_vars is None:
priority_vars = {}

@@ -313,15 +311,14 @@ def _get_priority_vars(objects, priority_arg, compat='equals'):
Dictionaries in which to find the priority variables.
priority_arg : int or None
Integer object whose variable should take priority.
-    compat : {'identical', 'equals', 'broadcast_equals',
-              'no_conflicts'}, optional
+    compat : {'identical', 'equals', 'broadcast_equals', 'no_conflicts'}, optional
Compatibility checks to use when merging variables.
Returns
-------
None, if priority_arg is None, or an OrderedDict with Variable objects as
values indicating priority variables.
"""
""" # noqa
if priority_arg is None:
priority_vars = {}
else:
@@ -406,8 +403,7 @@ def merge_core(objs,
----------
objs : list of mappings
All values must be convertable to labeled arrays.
-    compat : {'identical', 'equals', 'broadcast_equals',
-              'no_conflicts'}, optional
+    compat : {'identical', 'equals', 'broadcast_equals', 'no_conflicts'}, optional
Compatibility checks to use when merging variables.
join : {'outer', 'inner', 'left', 'right'}, optional
How to combine objects with different indexes.
Expand All @@ -430,7 +426,7 @@ def merge_core(objs,
Raises
------
MergeError if the merge cannot be done successfully.
"""
""" # noqa
from .dataset import calculate_dimensions

_assert_compat_valid(compat)
@@ -472,8 +468,7 @@ def merge(objects, compat='no_conflicts', join='outer'):
objects : Iterable[Union[xarray.Dataset, xarray.DataArray, dict]]
Merge together all variables from these objects. If any of them are
DataArray objects, they must have a name.
-    compat : {'identical', 'equals', 'broadcast_equals',
-              'no_conflicts'}, optional
+    compat : {'identical', 'equals', 'broadcast_equals', 'no_conflicts'}, optional
String indicating how to compare variables of the same name for
potential conflicts:
@@ -516,7 +511,7 @@ def merge(objects, compat='no_conflicts', join='outer'):
See also
--------
concat
"""
""" # noqa
from .dataarray import DataArray
from .dataset import Dataset

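The ``merge`` docstring touched above notes that DataArray inputs must be named; a minimal sketch of why (arrays invented for illustration):

```python
import xarray as xr

# DataArrays must carry a name so merge knows which variable to create.
a = xr.DataArray([1, 2], dims="x", name="a")
b = xr.DataArray([3, 4], dims="y", name="b")

ds = xr.merge([a, b])
print(sorted(ds.data_vars))  # ['a', 'b']
```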
