API reference

Top-level classes and functions

configure([path])

Configure genno globally.

Computer(**kwargs)

Class for describing and executing computations.

Key(name, dims, tag)

A hashable key for a quantity that includes its dimensionality.

Quantity(*args, **kwargs)

A sparse data structure that behaves like xarray.DataArray.

genno.configure(path: Optional[Union[pathlib.Path, str]] = None, **config)[source]

Configure genno globally.

Modifies global variables that affect the behaviour of all Computers and computations. Configuration keys loaded from file are superseded by keyword arguments. Messages are logged at level logging.INFO if config contains unhandled sections.

Parameters
  • path (Path, optional) – Path to a configuration file in JSON or YAML format.

  • **config – Configuration keys/sections and values.

class genno.Computer(**kwargs)[source]

Class for describing and executing computations.

Parameters

kwargs – Passed to configure().

A Computer is used to describe (add() and related methods) and then execute (get() and related methods) tasks stored in a graph. Advanced users may manipulate the graph directly; but common reporting tasks can be handled by using Computer methods.

Instance attributes:

default_key

The default key to get() with no argument.

graph

A dask-format graph.

keys()

Return the keys of graph.

modules

List of modules containing pre-defined computations.

unit_registry

The pint.UnitRegistry() used by the Computer.

General-purpose methods for describing tasks and preparing computations:

add(data, *args, **kwargs)

General-purpose method to add computations.

add_queue(queue[, max_tries, fail])

Add tasks from a list or queue.

add_single(key, *computation[, strict, index])

Add a single computation at key.

apply(generator, *keys, **kwargs)

Add computations by applying generator to keys.

cache(func)

Decorate func so that its return value is cached.

describe([key, quiet])

Return a string describing the computations that produce key.

visualize(filename, **kwargs)

Generate an image describing the Computer structure.

Helper methods to simplify adding specific computations:

add_file(path[, key])

Add exogenous quantities from path.

add_product(key, *quantities[, sums])

Add a computation that takes the product of quantities.

aggregate(qty, tag, dims_or_groups[, …])

Add a computation that aggregates qty.

convert_pyam(quantities[, tag])

Add conversion of one or more quantities to IAMC format.

disaggregate(qty, new_dim[, method, args])

Add a computation that disaggregates qty using method.

Executing tasks:

get([key])

Execute and return the result of the computation key.

write(key, path)

Write the result of key to the file path.

Utility and configuration methods:

check_keys(*keys[, action, _permute])

Check that keys are in the Computer.

configure([path, fail])

Configure the Computer.

full_key(name_or_key)

Return the full-dimensionality key for name_or_key.

get_comp(name)

Return a computation function.

infer_keys(key_or_keys[, dims])

Infer complete key_or_keys.

require_compat(pkg)

Load computations from genno.compat.{pkg} for use with get_comp().

graph: Dict[str, Any] = {'config': {}}

A dask-format graph.

Dictionary keys are either Key, str, or any other hashable value.

Dictionary values are computations, one of:

  1. Any other, existing key in the Computer. This functions as an alias.

  2. Any other literal value or constant, to be returned directly.

  3. A task tuple: a callable (e.g. function), followed by zero or more computations, e.g. keys for other tasks.

  4. A list containing zero or more of (1), (2), and/or (3).
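The four value forms can be illustrated with a minimal, self-contained evaluator; this is only a sketch of dask-style execution, not genno's actual engine:

```python
import operator

def compute(graph, comp):
    """Resolve one computation. An illustrative sketch, not genno's engine."""
    if isinstance(comp, tuple) and comp and callable(comp[0]):
        # (3) Task tuple: call the function on its resolved arguments
        func, *args = comp
        return func(*(compute(graph, a) for a in args))
    if isinstance(comp, list):
        # (4) List of computations
        return [compute(graph, c) for c in comp]
    try:
        if comp in graph:
            # (1) An existing key: follow the alias
            return compute(graph, graph[comp])
    except TypeError:
        pass  # Unhashable; fall through to (2)
    # (2) A literal value or constant, returned directly
    return comp

graph = {
    "a": 1.0,
    "b": 2.0,
    "total": (operator.add, "a", "b"),  # task tuple
    "alias": "total",                   # alias to another key
}
print(compute(graph, "alias"))  # 3.0
```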

genno reserves some keys for special usage:

"config"

A dict storing configuration settings. See Configuration. Because this information is stored in the graph, it can be used as one input to other computations.

Some inputs to tasks may be confused for (1) or (4), above. The recommended way to protect these is:

  • Literal str inputs to tasks: use functools.partial() on the function that is the first element of the task tuple.

  • list of str: use dask.core.quote() to wrap the list.
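A sketch of the first pattern, using a hypothetical computation join (not part of genno) whose first argument is a literal separator:

```python
from functools import partial

def join(sep, *values):
    """Hypothetical computation: concatenate values with a literal separator."""
    return sep.join(values)

# Baking the literal "-" into the callable with partial() keeps it from
# being mistaken for a key when the task tuple is evaluated:
task = (partial(join, "-"), "key1", "key2")
print(task[0]("a", "b"))  # a-b
```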

add(data, *args, **kwargs)[source]

General-purpose method to add computations.

add() can be called in several ways; its behaviour depends on data; see below. It chains to methods such as add_single(), add_queue(), and/or apply(); each can also be called directly.

Returns

Some or all of the keys added to the Computer.

Return type

list of Key-like

The data argument may be:

list

A list of computations, like [(list(args1), dict(kwargs1)), (list(args2), dict(kwargs2)), ...] → passed to add_queue().

str naming a computation

e.g. “select”, retrievable with get_comp(). add_single() is called with (key=args[0], data, *args[1:], **kwargs), i.e. applying the named computation to the other parameters.

str naming another Computer method

e.g. add_file() → the named method is called with the args and kwargs.

Key or other str:

Passed to add_single().

add() may be used to:

  • Provide an alias from one key to another:

    >>> from genno import Computer
    >>> rep = Computer()  # Create a new Computer object
    >>> rep.add('aliased name', 'original name')
    
  • Define an arbitrarily complex computation in a Python function that operates directly on the ixmp.Scenario:

    >>> def my_report(scenario):
    ...     # many lines of code
    ...     return 'foo'
    >>> rep.add('my report', (my_report, 'scenario'))
    >>> rep.finalize(scenario)
    >>> rep.get('my report')
    foo
    
apply(generator, *keys, **kwargs)[source]

Add computations by applying generator to keys.

Parameters
  • generator (callable) – Function to apply to keys.

  • keys (hashable) – The starting key(s).

  • kwargs – Keyword arguments to generator.

The generator may have a type annotation for Computer on its first positional argument. In this case, a reference to the Computer is supplied, and generator can use the Computer methods to add many keys and computations:

def my_gen0(c: genno.Computer, **kwargs):
    c.load_file("file0.txt", **kwargs)
    c.load_file("file1.txt", **kwargs)

# Use the generator to add several computations
rep.apply(my_gen0, units="kg")

Or, generator may yield a sequence (0 or more) of (key, computation), which are added to the graph:

def my_gen1(**kwargs):
    op = partial(computations.load_file, **kwargs)
    yield from ((f"file:{i}", (op, f"file{i}.txt")) for i in range(2))

rep.apply(my_gen1, units="kg")

convert_pyam(quantities, tag='iamc', **kwargs)[source]

Add conversion of one or more quantities to IAMC format.

Parameters
  • quantities (str or Key or list of (str, Key)) – Keys for quantities to transform.

  • tag (str, optional) – Tag to append to new Keys.

  • kwargs – Any keyword arguments accepted by as_pyam().

Returns

Each task converts a Quantity into a pyam.IamDataFrame.

Return type

list of Key

See also

as_pyam

The IAMC data format includes columns named ‘Model’, ‘Scenario’, ‘Region’, ‘Variable’, ‘Unit’; one of ‘Year’ or ‘Time’; and ‘value’.

Using convert_pyam():

  • ‘Model’ and ‘Scenario’ are populated from the attributes of the object returned by the Reporter key scenario;

  • ‘Variable’ contains the name(s) of the quantities;

  • ‘Unit’ contains the units associated with the quantities; and

  • ‘Year’ or ‘Time’ is created according to year_time_dim.

A callback function (collapse) can be supplied that modifies the data before it is converted to an IamDataFrame; for instance, to concatenate extra dimensions into the ‘Variable’ column. Other dimensions can simply be dropped (with drop). Dimensions that are not collapsed or dropped will appear as additional columns in the resulting IamDataFrame; this is valid, but non-standard IAMC data.

For example, here the values for the MESSAGEix technology and mode dimensions are appended to the ‘Variable’ column:

def m_t(df):
    """Callback for collapsing ACT columns."""
    # Concatenate the 't' and 'm' labels into the 'variable' column
    df['variable'] = 'Activity|' + df['t'] + '|' + df['m']
    return df

ACT = rep.full_key('ACT')
keys = rep.convert_pyam(ACT, 'ya', collapse=m_t, drop=['t', 'm'])

add_aggregate(qty, tag, dims_or_groups, weights=None, keep=True, sums=False)

Add a computation that aggregates qty.

Parameters
  • qty (Key or str) – Key of the quantity to be aggregated.

  • tag (str) – Additional string to add to the end of the key for the aggregated quantity.

  • dims_or_groups (str or iterable of str or dict) – Name(s) of the dimension(s) to sum over, or nested dict.

  • weights (xarray.DataArray, optional) – Weights for weighted aggregation.

  • keep (bool, optional) – Passed to computations.aggregate.

  • sums (bool, optional) – Passed to add().

Returns

The key of the newly-added node.

Return type

Key

add_file(path, key=None, **kwargs)[source]

Add exogenous quantities from path.

Computing the key or using it in other computations causes path to be loaded and converted to Quantity.

Parameters
  • path (os.PathLike) – Path to the file, e.g. ‘/path/to/foo.ext’.

  • key (str or Key, optional) – Key for the quantity read from the file.

  • dims (dict or list or set) – Either a collection of names for dimensions of the quantity, or a mapping from names appearing in the input to dimensions.

  • units (str or pint.Unit) – Units to apply to the loaded Quantity.

Returns

Either key (if given) or e.g. file:foo.ext based on the path name, without directory components.

Return type

Key

add_product(key, *quantities, sums=True)[source]

Add a computation that takes the product of quantities.

Parameters
  • key (str or Key) – Key of the new quantity. If a Key, any dimensions are ignored; the dimensions of the product are the union of the dimensions of quantities.

  • sums (bool, optional) – If True, all partial sums of the new quantity are also added.

Returns

The full key of the new quantity.

Return type

Key

add_queue(queue: Iterable[Tuple[Tuple, Mapping]], max_tries: int = 1, fail: Optional[Union[int, str]] = None) Tuple[Union[genno.core.key.Key, Hashable], ...][source]

Add tasks from a list or queue.

Parameters
  • queue (iterable of 2-tuple) – The members of each tuple are the arguments (e.g. list or tuple) and keyword arguments (e.g. dict) to add().

  • max_tries (int, optional) – Retry adding elements up to this many times.

  • fail (“raise” or str or logging level, optional) – Action to take when a computation from queue cannot be added after max_tries: “raise” an exception, or log messages on the indicated level and continue.

add_single(key, *computation, strict=False, index=False)[source]

Add a single computation at key.

Parameters
  • key (str or Key or hashable) – A string, Key, or other value identifying the output of computation.

  • computation (object) – Any computation. See graph.

  • strict (bool, optional) – If True, key must not already exist in the Computer, and any keys referred to by computation must exist.

  • index (bool, optional) – If True, key is added to the index as a full-resolution key, so it can be later retrieved with full_key().

Raises
  • KeyExistsError – If strict is True and either (a) key already exists; or (b) sums is True and the key for one of the partial sums of key already exists.

  • MissingKeyError – If strict is True and any key referred to by computation does not exist.

aggregate(qty, tag, dims_or_groups, weights=None, keep=True, sums=False)[source]

Add a computation that aggregates qty.

Parameters
  • qty (Key or str) – Key of the quantity to be aggregated.

  • tag (str) – Additional string to add to the end of the key for the aggregated quantity.

  • dims_or_groups (str or iterable of str or dict) – Name(s) of the dimension(s) to sum over, or nested dict.

  • weights (xarray.DataArray, optional) – Weights for weighted aggregation.

  • keep (bool, optional) – Passed to computations.aggregate.

  • sums (bool, optional) – Passed to add().

Returns

The key of the newly-added node.

Return type

Key

cache(func)[source]

Decorate func so that its return value is cached.

See also

Caching

check_keys(*keys: Union[str, genno.core.key.Key], action='raise', _permute=True) Optional[List[Union[str, genno.core.key.Key]]][source]

Check that keys are in the Computer.

If any of keys is not in the Computer and action is “raise” (the default) KeyError is raised. Otherwise, a list is returned with either the key from keys, or the corresponding full_key().

If action is “return” (or any other value), None is returned on missing keys.

configure(path: Optional[Union[pathlib.Path, str]] = None, fail: Union[str, int] = 'raise', **config)[source]

Configure the Computer.

Accepts a path to a configuration file and/or keyword arguments. Configuration keys loaded from file are superseded by keyword arguments. Messages are logged at level logging.INFO if config contains unhandled sections.

See Configuration for a list of all configuration sections and keys, and details of the configuration file format.

Parameters
  • path (Path, optional) – Path to a configuration file in JSON or YAML format.

  • fail (“raise” or str or logging level, optional) – Passed to add_queue(). If not “raise”, then log messages are generated for config handlers that fail. The Computer may be only partially configured.

  • **config – Configuration keys/sections and values.

default_key = None

The default key to get() with no argument.

describe(key=None, quiet=True)[source]

Return a string describing the computations that produce key.

If key is not provided, all keys in the Computer are described.

Unless quiet, the string is also printed to the console.

Returns

Description of computations.

Return type

str

disaggregate(qty, new_dim, method='shares', args=[])[source]

Add a computation that disaggregates qty using method.

Parameters
  • qty (hashable) – Key of the quantity to be disaggregated.

  • new_dim (str) – Name of the new dimension of the disaggregated variable.

  • method (callable or str) – Disaggregation method. If a callable, then it is applied to var with any extra args. If a string, then a method named ‘disaggregate_{method}’ is used.

  • args (list, optional) – Additional arguments to the method. The first element should be the key for a quantity giving shares for disaggregation.

Returns

The key of the newly-added node.

Return type

Key

full_key(name_or_key)[source]

Return the full-dimensionality key for name_or_key.

A quantity ‘foo’ with dimensions (a, c, n, q, x) is available in the Computer as 'foo:a-c-n-q-x'. This Key can be retrieved with:

c.full_key("foo")
c.full_key("foo:c")
# etc.

get(key=None)[source]

Execute and return the result of the computation key.

Only key and its dependencies are computed.

Parameters

key (str, optional) – If not provided, default_key is used.

Raises

ValueError – If key and default_key are both None.

get_comp(name) Optional[Callable][source]

Return a computation function.

get_comp() checks each of the modules for a function or callable with the given name. Modules at the end of the list take precedence over those earlier in the list.

Returns

  • .callable

  • None – If there is no computation with the given name in any of modules.

infer_keys(key_or_keys, dims=[])[source]

Infer complete key_or_keys.

Parameters

dims (list of str, optional) – Drop all but these dimensions from the returned key(s).

keys()[source]

Return the keys of graph.

modules: Sequence[module] = [<module 'genno.computations'>]

List of modules containing pre-defined computations.

By default, this includes the genno built-in computations in genno.computations. require_compat() appends additional modules, e.g. genno.compat.pyam.computations, to this list. User code may also add modules to this list.

require_compat(pkg: str)[source]

Load computations from genno.compat.{pkg} for use with get_comp().

The specified module is appended to modules.

Raises

ModuleNotFoundError – If the required packages are missing.

See also

get_comp

property unit_registry

The pint.UnitRegistry() used by the Computer.

visualize(filename, **kwargs)[source]

Generate an image describing the Computer structure.

This is a shorthand for dask.visualize(). Requires graphviz.

write(key, path)[source]

Write the result of key to the file path.

class genno.Key(name: str, dims: Iterable[str] = [], tag: Optional[str] = None)[source]

A hashable key for a quantity that includes its dimensionality.

Quantities are indexed by 0 or more dimensions. A Key refers to a quantity using three components:

  1. a string name,

  2. zero or more ordered dims, and

  3. an optional tag.

For example, for a quantity \(\text{foo}\) with three dimensions \(a, b, c\):

\[\text{foo}^{abc}\]

Key allows a specific, explicit reference to various forms of “foo”:

  • in its full resolution, i.e. indexed by a, b, and c:

    >>> k1 = Key("foo", ["a", "b", "c"])
    >>> k1
    <foo:a-b-c>
    
  • in a partial sum over one dimension, e.g. summed across dimension c, with remaining dimensions a and b:

    >>> k2 = k1.drop('c')
    >>> k2 == 'foo:a-b'
    True
    
  • in a partial sum over multiple dimensions, etc.:

    >>> k1.drop('a', 'c') == k2.drop('a') == 'foo:b'
    True
    
  • after it has been manipulated by other computations, e.g.

    >>> k3 = k1.add_tag('normalized')
    >>> k3
    <foo:a-b-c:normalized>
    >>> k4 = k3.add_tag('rescaled')
    >>> k4
    <foo:a-b-c:normalized+rescaled>
    

Notes:

A Key has the same hash, and compares equal to its str representation. A Key also compares equal to another key or str with the same dimensions in any other order. repr(key) prints the Key in angle brackets (‘<>’) to signify that it is a Key object.

>>> str(k1)
'foo:a-b-c'
>>> repr(k1)
'<foo:a-b-c>'
>>> hash(k1) == hash("foo:a-b-c")
True
>>> k1 == "foo:c-b-a"
True

Keys are immutable: the properties name, dims, and tag are read-only, and the methods append(), drop(), and add_tag() return new Key objects.

Keys may be generated concisely by defining a convenience method:

>>> def foo(dims):
...     return Key('foo', dims.split())
>>> foo('a b c')
<foo:a-b-c>

add_tag(tag)[source]

Return a new Key with tag appended.

append(*dims: str)[source]

Return a new Key with additional dimensions dims.

property dims: Tuple[str, ...]

Dimensions of the quantity, tuple of str.

drop(*dims: Union[str, bool])[source]

Return a new Key with dims dropped.

classmethod from_str_or_key(value: Union[str, genno.core.key.Key], drop: Union[Iterable[str], bool] = [], append: Iterable[str] = [], tag: Optional[str] = None)[source]

Return a new Key from value.

Parameters
  • value (str or Key) – Value to use to generate a new Key.

  • drop (list of str or True, optional) – Existing dimensions of value to drop. See drop().

  • append (list of str, optional.) – New dimensions to append to the returned Key. See append().

  • tag (str, optional) – Tag for returned Key. If value has a tag, the two are joined using a ‘+’ character. See add_tag().

Returns

Return type

Key

iter_sums() Generator[Tuple[genno.core.key.Key, Callable, genno.core.key.Key], None, None][source]

Generate (key, task) for all possible partial sums of the Key.

property name: str

Name of the quantity, str.

permute_dims() Generator[genno.core.key.Key, None, None][source]

Generate variants of the Key with dimensions in all possible orders.

Examples

>>> k = Key("A", "xyz")
>>> list(k.permute_dims())
[<A:x-y-z>, <A:x-z-y>, <A:y-x-z>, <A:y-z-x>, <A:z-x-y>, <A:z-y-x>]

classmethod product(new_name: str, *keys, tag: Optional[str] = None) genno.core.key.Key[source]

Return a new Key that has the union of dimensions on keys.

Dimensions are ordered by their first appearance:

  1. First, the dimensions of the first of the keys.

  2. Next, any additional dimensions in the second of the keys that were not already added in step 1.

  3. etc.

Parameters

new_name (str) – Name for the new Key. The names of keys are discarded.

property sorted: genno.core.key.Key

A version of the Key with its dims sorted alphabetically.

property tag: Optional[str]

Quantity tag, str or None.

class genno.Quantity(*args, **kwargs)[source]

A sparse data structure that behaves like xarray.DataArray.

Depending on the value of CLASS, Quantity is either AttrSeries or SparseDataArray.

classmethod from_series(series, sparse=True)[source]

Convert series to the Quantity class given by CLASS.

to_series() pandas.core.series.Series[source]

Like xarray.DataArray.to_series().

property units

Retrieve or set the units of the Quantity.

Examples

Create a quantity without units:

>>> qty = Quantity(...)

Set using a string; automatically converted to pint.Unit:

>>> qty.units = "kg"
>>> qty.units
<Unit('kilogram')>

The Quantity constructor converts its arguments to an internal, xarray.DataArray-like data format:

# Existing data
data = pd.Series(...)

# Convert to a Quantity for use in reporting calculations
qty = Quantity(data, name="Quantity name", units="kg")
rep.add("new_qty", qty)

Common genno usage, e.g. in message_ix, creates large, sparse data frames (billions of possible elements, but <1% populated); DataArray’s default, ‘dense’ storage format would be too large for available memory.

The goal is that all genno-based code, including built-in and user computations, can treat quantity arguments as if they were DataArray.

Computations

Elementary computations for genno.

Unless otherwise specified, these functions accept and return Quantity objects for data arguments/return values.

Genno’s compatibility modules each provide additional computations.

Calculations:

add(*quantities[, fill_value])

Sum across multiple quantities.

aggregate(quantity, groups, keep)

Aggregate quantity by groups.

apply_units(qty, units[, quiet])

Simply apply units to qty.

broadcast_map(quantity, map[, rename, strict])

Broadcast quantity using a map.

combine(*quantities[, select, weights])

Sum distinct quantities by weights.

disaggregate_shares(quantity, shares)

Disaggregate quantity by shares.

group_sum(qty, group, sum)

Group by dimension group, then sum across dimension sum.

interpolate(qty[, coords, method, …])

Interpolate qty.

pow(a, b)

Compute a raised to the power of b.

product(*quantities)

Compute the product of any number of quantities.

ratio(numerator, denominator)

Compute the ratio numerator / denominator.

select(qty, indexers[, inverse])

Select from qty based on indexers.

sum(quantity[, weights, dimensions])

Sum quantity over dimensions, with optional weights.

Input and output:

load_file(path[, dims, units, name])

Read the file at path and return its contents as a Quantity.

write_report(quantity, path)

Write a quantity to a file.

Data manipulation:

concat(*objs, **kwargs)

Concatenate Quantity objs.

genno.computations.add(*quantities, fill_value=0.0)[source]

Sum across multiple quantities.

Raises

ValueError – if any of the quantities have incompatible units.

Returns

Units are the same as the first of quantities.

Return type

Quantity

genno.computations.aggregate(quantity, groups, keep)[source]

Aggregate quantity by groups.

Parameters
  • quantity (Quantity) –

  • groups (dict of dict) – Top-level keys are the names of dimensions in quantity. Second-level keys are group names; second-level values are lists of labels along the dimension to sum into a group.

  • keep (bool) – If True, the members that are aggregated into a group are returned with the group sums. If False, they are discarded.

Returns

Same dimensionality as quantity.

Return type

Quantity

genno.computations.apply_units(qty, units, quiet=False)[source]

Simply apply units to qty.

Logs on level WARNING if qty already has existing units.

genno.computations.broadcast_map(quantity, map, rename={}, strict=False)[source]

Broadcast quantity using a map.

The map must be a 2-dimensional Quantity with dimensions (d1, d2), such as returned by map_as_qty(). quantity must also have a dimension d1. Typically len(d2) > len(d1).

quantity is ‘broadcast’ by multiplying it with map, and then summing on the common dimension d1. The result has the dimensions of quantity, but with d2 in place of d1.

Parameters
  • rename (dict (str -> str), optional) – Dimensions to rename on the result.

  • strict (bool, optional) – Require that each element of d2 is mapped from exactly 1 element of d1.

genno.computations.combine(*quantities, select=None, weights=None)[source]

Sum distinct quantities by weights.

Parameters
  • *quantities (Quantity) – The quantities to be added.

  • select (list of dict) – Elements to be selected from each quantity. Must have the same number of elements as quantities.

  • weights (list of float) – Weight applied to each quantity. Must have the same number of elements as quantities.

Raises

ValueError – If the quantities have mismatched units.

genno.computations.concat(*objs, **kwargs)[source]

Concatenate Quantity objs.

Any strings included amongst objs are discarded, with a logged warning; these usually indicate that a quantity is referenced which is not in the Computer.

genno.computations.disaggregate_shares(quantity, shares)[source]

Disaggregate quantity by shares.

genno.computations.group_sum(qty, group, sum)[source]

Group by dimension group, then sum across dimension sum.

The result drops the latter dimension.

genno.computations.interpolate(qty: genno.core.quantity.Quantity, coords: Mapping[Hashable, Any] = None, method: str = 'linear', assume_sorted: bool = True, kwargs: Mapping[str, Any] = None, **coords_kwargs: Any) genno.core.quantity.Quantity[source]

Interpolate qty.

For the meaning of arguments, see xarray.DataArray.interp(). When CLASS is AttrSeries, only 1-dimensional interpolation (one key in coords) is tested/supported.

genno.computations.load_file(path, dims={}, units=None, name=None)[source]

Read the file at path and return its contents as a Quantity.

Some file formats are automatically converted into objects for direct use in genno computations:

.csv:

Converted to Quantity. CSV files must have a ‘value’ column; all others are treated as indices, except as given by dims. Lines beginning with ‘#’ are ignored.

Parameters
  • path (pathlib.Path) – Path to the file to read.

  • dims (collections.abc.Collection or collections.abc.Mapping, optional) – If a collection of names, other columns besides these and ‘value’ are discarded. If a mapping, the keys are the column labels in path, and the values are the target dimension names.

  • units (str or pint.Unit) – Units to apply to the loaded Quantity.

  • name (str) – Name for the loaded Quantity.

genno.computations.pow(a, b)[source]

Compute a raised to the power of b.

Todo

Provide units on the result in the special case where b is a Quantity but all its values are the same int.

Returns

If b is int, then the quantity has the units of a raised to this power; e.g. “kg²” → “kg⁴” if b is 2. In other cases, there are no meaningful units, so the returned quantity is dimensionless.

Return type

Quantity

genno.computations.product(*quantities)[source]

Compute the product of any number of quantities.

genno.computations.ratio(numerator, denominator)[source]

Compute the ratio numerator / denominator.

genno.computations.select(qty, indexers, inverse=False)[source]

Select from qty based on indexers.

Parameters
  • qty (Quantity) –

  • indexers (dict (str -> list of str)) – Elements to be selected from qty. Mapping from dimension names to labels along each dimension.

  • inverse (bool, optional) – If True, remove the items in indexers instead of keeping them.

genno.computations.sum(quantity, weights=None, dimensions=None)[source]

Sum quantity over dimensions, with optional weights.

Parameters
  • quantity (Quantity) –

  • weights (Quantity, optional) – If dimensions is given, weights must have at least these dimensions. Otherwise, any dimensions are valid.

  • dimensions (list of str, optional) – If not provided, sum over all dimensions. If provided, sum over these dimensions.

genno.computations.write_report(quantity, path)[source]

Write a quantity to a file.

Parameters

path (str or Path) – Path to the file to be written.

Internal format for quantities

genno.core.quantity.CLASS = 'AttrSeries'

Name of the class used to implement Quantity.

genno.core.quantity.assert_quantity(*args)[source]

Assert that each of args is a Quantity object.

Raises

TypeError – with an indicative message.

genno.core.quantity.maybe_densify(func)[source]

Wrapper for computations that densifies SparseDataArray input.

class genno.core.attrseries.AttrSeries(*args, **kwargs)[source]

pandas.Series subclass imitating xarray.DataArray.

The AttrSeries class provides similar methods and behaviour to xarray.DataArray, so that genno.computations functions and user code can use xarray-like syntax. In particular, this allows such code to be agnostic about the order of dimensions.

Parameters
  • units (str or pint.Unit, optional) – Set the units attribute. The value is converted to pint.Unit and added to attrs.

  • attrs (Mapping, optional) – Set the attrs of the AttrSeries. This attribute was added in pandas 1.0, but is not currently supported by the Series constructor.

align_levels(other)[source]

Work around https://github.com/pandas-dev/pandas/issues/25760.

Return a copy of self with common levels in the same order as other.

assign_coords(coords=None, **coord_kwargs)[source]

Like xarray.DataArray.assign_coords().

bfill(dim: Hashable, limit: Optional[int] = None)[source]

Like xarray.DataArray.bfill().

property coords

Like xarray.DataArray.coords. Read-only.

cumprod(dim=None, axis=None, skipna=None, **kwargs)[source]

Like xarray.DataArray.cumprod().

property dims

Like xarray.DataArray.dims.

drop(label)[source]

Like xarray.DataArray.drop().

drop_vars(names: Union[Hashable, Iterable[Hashable]], *, errors: str = 'raise')[source]

Like xarray.DataArray.drop_vars().

expand_dims(dim: Union[None, Mapping[Hashable, Any]] = None, axis=None, **dim_kwargs: Any)[source]

Like xarray.DataArray.expand_dims().

ffill(dim: Hashable, limit: Optional[int] = None)[source]

Like xarray.DataArray.ffill().

classmethod from_series(series, sparse=None)[source]

Like xarray.DataArray.from_series().

interp(coords: Optional[Mapping[Hashable, Any]] = None, method: str = 'linear', assume_sorted: bool = True, kwargs: Optional[Mapping[str, Any]] = None, **coords_kwargs: Any)[source]

Like xarray.DataArray.interp().

This method works around two long-standing bugs in pandas.

item(*args)[source]

Like xarray.DataArray.item().

rename(new_name_or_name_dict)[source]

Like xarray.DataArray.rename().

sel(indexers=None, drop=False, **indexers_kwargs)[source]

Like xarray.DataArray.sel().

shift(shifts: Optional[Mapping[Hashable, int]] = None, fill_value: Optional[Any] = None, **shifts_kwargs: int)[source]

Like xarray.DataArray.shift().

squeeze(dim=None, *args, **kwargs)[source]

Like xarray.DataArray.squeeze().

sum(*args, **kwargs)[source]

Like xarray.DataArray.sum().

to_dataframe()[source]

Like xarray.DataArray.to_dataframe().

to_series()[source]

Like xarray.DataArray.to_series().

transpose(*dims)[source]

Like xarray.DataArray.transpose().

class genno.core.sparsedataarray.SparseAccessor(obj)[source]

xarray accessor to help SparseDataArray.

See the xarray accessor documentation, e.g. register_dataarray_accessor().

property COO_data

True if the DataArray has sparse.COO data.

convert()[source]

Return a SparseDataArray instance.

property dense

Return a copy with dense (ndarray) data.

property dense_super

Return a proxy to a ndarray-backed DataArray.

class genno.core.sparsedataarray.SparseDataArray(*args, **kwargs)[source]

DataArray with sparse data.

SparseDataArray uses sparse.COO for storage with numpy.nan as its sparse.COO.fill_value. Some methods of DataArray are overridden to ensure data is in sparse, or dense, format as necessary, to provide expected functionality not currently supported by sparse, and to avoid exhausting memory for some operations that require dense data.

ffill(dim: Hashable, limit: Optional[int] = None)[source]

Override ffill() to auto-densify.

classmethod from_series(obj, sparse=True)[source]

Convert a pandas.Series into a SparseDataArray.

item(*args)

Like xarray.DataArray.item().

sel(indexers=None, method=None, tolerance=None, drop=False, **indexers_kwargs) genno.core.sparsedataarray.SparseDataArray[source]

Return a new array by selecting labels along the specified dim(s).

Overrides sel() to handle >1-D indexers with sparse data.

to_dataframe(name=None)[source]

Convert this array and its coords into a DataFrame.

Overrides to_dataframe().

to_series() pandas.core.series.Series[source]

Convert this array into a Series.

Overrides to_series() to create the series without first converting to a potentially very large numpy.ndarray.

Utilities

genno.util.REPLACE_UNITS = {'%': 'percent'}

Replacements to apply to Quantity units before parsing by pint. Mapping from original unit -> preferred unit.

The default values include:

  • The ‘%’ symbol cannot be supported by pint, because it is a Python operator; it is replaced with “percent”.

Additional values can be added with configure(); see units:.

genno.util.clean_units(input_string)[source]

Tolerate messy strings for units.

  • Dimensions enclosed in “[]” have these characters stripped.

  • Replacements from REPLACE_UNITS are applied.

genno.util.collect_units(*args)[source]

Return the “_unit” attributes of the args.

genno.util.filter_concat_args(args)[source]

Filter out str and Key from args.

A warning is logged for each element removed.

genno.util.parse_units(data: Iterable, registry=None) pint.unit.Unit[source]

Return a pint.Unit for an iterable of strings.

Valid unit expressions not already present in the registry are defined, e.g.:

u = parse_units(["foo/bar", "foo/bar"], reg)

…results in the addition of unit definitions equivalent to:

reg.define("foo = [foo]")
reg.define("bar = [bar]")
u = reg.foo / reg.bar

Raises

ValueError – if data contains more than 1 unit expression, or the unit expression contains characters not parseable by pint, e.g. -?$.

genno.util.partial_split(func, kwargs)[source]

Forgiving version of functools.partial().

Returns a partial object and leftover kwargs not applicable to func.

genno.util.unquote(value)[source]

Reverse dask.core.quote().