API reference#

Top-level classes and functions#

configure([path])

Configure genno globally.

Computer(**kwargs)

Class for describing and executing computations.

Key(name_or_value[, dims, tag, _fast])

A hashable key for a quantity that includes its dimensionality.

Quantity(*args, **kwargs)

A sparse data structure that behaves like xarray.DataArray.

genno.configure(path: Path | str | None = None, **config)[source]

Configure genno globally.

Modifies global variables that affect the behaviour of all Computers and operators. Configuration keys loaded from file are superseded by keyword arguments. Messages are logged at level logging.INFO if config contains unhandled sections.

Parameters:
  • path (pathlib.Path, optional) – Path to a configuration file in JSON or YAML format.

  • **config – Configuration keys/sections and values.

class genno.Computer(**kwargs)[source]#

Class for describing and executing computations.

Parameters:

kwargs – Passed to configure().

A Computer is used to prepare (add() and related methods) and then execute (get() and related methods) computations stored in a graph. Advanced users may manipulate the graph directly, but most computations can be prepared using Computer methods.

Instance attributes:

default_key

The default key to get() with no argument.

graph

A dask-format graph (see 1, 2).

keys()

Return the keys of graph.

modules

List of modules containing pre-defined computations.

unit_registry

The pint.UnitRegistry() used by the Computer.

General-purpose methods for preparing computations and tasks:

add(data, *args, **kwargs)

General-purpose method to add computations.

add_queue(queue[, max_tries, fail])

Add tasks from a list or queue.

add_single(key, *computation[, strict, index])

Add a single computation at key.

aggregate(qty, tag, dims_or_groups[, ...])

Deprecated.

apply(generator, *keys, **kwargs)

Add computations by applying generator to keys.

cache(func)

Decorate func so that its return value is cached.

describe([key, quiet])

Return a string describing the computations that produce key.

eval(expr)

Evaluate expr to add tasks and keys.

visualize(filename[, key, optimize_graph])

Generate an image describing the Computer structure.

Executing computations:

get([key])

Execute and return the result of the computation key.

write(key, path)

Compute key and write the result directly to path.

Utility and configuration methods:

check_keys(*keys[, predicate, action])

Check that keys are in the Computer.

configure([path, fail, config])

Configure the Computer.

full_key(name_or_key)

Return the full-dimensionality key for name_or_key.

get_comp(name)

Return a function or callable for use in computations.

infer_keys(key_or_keys[, dims])

Infer complete key_or_keys.

require_compat(pkg)

Register computations from genno.compat or other modules for get_comp().

Deprecated:

add_file(*args, **kwargs)

Deprecated.

add_product(*args, **kwargs)

Deprecated.

convert_pyam(*args, **kwargs)

Deprecated.

disaggregate(qty, new_dim[, method, args])

Deprecated.

graph: Graph = {'config': {}}#

A dask-format graph (see 1, 2).

Dictionary keys are either Key, str, or any other hashable value.

Dictionary values are computations, one of:

  1. Any other, existing key in the Computer. This functions as an alias.

  2. Any other literal value or constant, to be returned directly.

  3. A task tuple: a callable (e.g. function), followed by zero or more computations, e.g. keys for other tasks.

  4. A list containing zero or more of (1), (2), and/or (3).

genno reserves some keys for special usage:

"config"

A dict storing configuration settings. See Configuration. Because this information is stored in the graph, it can be used as one input to other computations.

Some inputs to tasks may be confused with (1) or (4) above. The recommended way to protect these is:

  • Literal str inputs to tasks: use functools.partial() on the function that is the first element of the task tuple.

  • list of str: use dask.core.quote() to wrap the list.
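
For illustration, a minimal sketch of entries of each kind (the keys, values, and callables here are hypothetical; in practice most entries are created via add() rather than by assigning to graph directly):

from functools import partial

from dask.core import quote

from genno import Computer

c = Computer()

# (2) Literal values, returned directly
c.graph["factor"] = 2.0
c.graph["x"] = 21.0

# (3) A task tuple: a callable followed by the keys of its inputs
c.graph["x doubled"] = (lambda value, k: value * k, "x", "factor")

# (1) An alias: "y" refers to the same computation as "x doubled"
c.graph["y"] = "x doubled"

# (4) A list of other keys, computed together
c.graph["both"] = ["x", "y"]

# The literal str "x" is bound with functools.partial so that it is not
# interpreted as a reference to the key "x" …
c.graph["message"] = (partial("{} and {}".format, "x"), "y")

# … and a literal list of str is wrapped with dask.core.quote so that it is
# not interpreted as a list of keys
c.graph["labels"] = (len, quote(["a", "b", "c"]))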

add(data, *args, **kwargs) Key | str | Tuple[Key | str, ...][source]#

General-purpose method to add computations.

add() can be called in several ways; its behaviour depends on data; see below. It chains to methods such as add_single(), add_queue(), and/or apply(); each can also be called directly.

Returns:

Some or all of the keys added to the Computer.

Return type:

KeyLike or tuple of KeyLike

The data argument may be:

list

A list of computations, like [(list(args1), dict(kwargs1)), (list(args2), dict(kwargs2)), ...] → passed to add_queue().

str naming an operator

e.g. “select”, retrievable with get_comp(). add_single() is called with (key=args[0], data, *args[1:], **kwargs), that is, applying the named operator to the remaining parameters.

str naming another Computer method

e.g. add_file() → the named method is called with the args and kwargs.

Key or other str:

Passed to add_single().

add() may be used to:

  • Provide an alias from one key to another:

    >>> from genno import Computer
    >>> rep = Computer()  # Create a new Computer object
    >>> rep.add('aliased name', 'original name')
    
  • Define an arbitrarily complex operator in a Python function that operates directly on the ixmp.Scenario:

    >>> def my_report(scenario):
    ...     # many lines of code
    ...     return 'foo'
    >>> rep.add('my report', (my_report, 'scenario'))
    >>> rep.finalize(scenario)
    >>> rep.get('my report')
    foo
    
apply(generator, *keys, **kwargs)[source]#

Add computations by applying generator to keys.

Parameters:
  • generator (callable) – Function to apply to keys.

  • keys (Hashable) – The starting key(s).

  • kwargs – Keyword arguments to generator.

The generator may have a type annotation for Computer on its first positional argument. In this case, a reference to the Computer is supplied, and generator can use the Computer methods to add many keys and computations:

def my_gen0(c: genno.Computer, **kwargs):
    c.load_file("file0.txt", **kwargs)
    c.load_file("file1.txt", **kwargs)

# Use the generator to add several computations
rep.apply(my_gen0, units="kg")

Or, generator may yield a sequence (0 or more) of (key, computation), which are added to the graph:

from functools import partial
from genno import computations

def my_gen1(**kwargs):
    op = partial(computations.load_file, **kwargs)
    yield from ((f"file:{i}", (op, f"file{i}.txt")) for i in range(2))

rep.apply(my_gen1, units="kg")
eval(expr: str) Tuple[Key, ...][source]#

Evaluate expr to add tasks and keys.

Parse a statement or block of statements using ast from the Python standard library. expr may include:

  • Constants.

  • References to existing keys in the Computer by their name; these are expanded using full_key().

  • Multiple statements on separate lines or separated by “;”.

  • Python arithmetic operators including +, -, *, /, **; these are mapped to the corresponding computations.

  • Function calls, also mapped to the corresponding computations via get_comp(). These may include simple positional (constants or key references) or keyword (constants only) arguments.

Parameters:

expr (str) – Expression to be evaluated.

Returns:

One key for the left-hand side of each expression.

Return type:

tuple of Key

Raises:
  • NotImplementedError – For unsupported, complex expressions; that is, if any of the statements is anything other than a simple assignment.

  • NameError – If a function call references a non-existent computation.

Examples

Parse a multi-line string and add tasks to compute z, a, b, d, and e. The dimensions of each are automatically inferred given the dimension of the existing operand, x.

>>> c = Computer()
>>> # (Here, add tasks to compute a quantity like "x:t-y")
>>> added = c.eval(
...     """
...     z = - (0.5 / (x ** 3))
...     a = x ** 3 + z
...     b = a + a
...     d = assign_units(b, "km")
...     e = index_to(d, dim="t", label="foo1")
...     """
... )
>>> added[-1]
<e:t-y>
add_aggregate(qty: Key | str, tag: str, dims_or_groups: Mapping | str | Sequence[str], weights: DataArray | None = None, keep: bool = True, sums: bool = False, fail: str | int | None = None)#

Deprecated.

Add a computation that aggregates qty.

Deprecated since version 1.18.0: Instead, for a mapping/dict dims_or_groups, use:

c.add(qty, "aggregate", groups=dims_or_groups, keep=keep, ...)

Or, for str or sequence of str dims_or_groups, use:

c.add(None, "sum", qty, dimensions=dims_or_groups, ...)
Parameters:
  • qty (Key or str) – Key of the quantity to be aggregated.

  • tag (str) – Additional string to add to the end of the key for the aggregated quantity.

  • dims_or_groups (str or iterable of str or dict) – Name(s) of the dimension(s) to sum over, or nested dict.

  • weights (xarray.DataArray, optional) – Weights for weighted aggregation.

  • keep (bool, optional) – Passed to computations.aggregate.

  • sums (bool, optional) – Passed to add().

  • fail (str or int, optional) – Passed to add_queue() via add().

Returns:

The key of the newly-added node.

Return type:

Key

add_file(*args, **kwargs)[source]#

Deprecated.

Deprecated since version 1.18.0: Instead use add_load_file() via:

c.add(..., "load_file", ...)
add_product(*args, **kwargs)[source]#

Deprecated.

Deprecated since version 1.18.0: Instead use add_mul() via:

c.add(..., "mul", ...)
add_queue(queue: Iterable[Tuple], max_tries: int = 1, fail: str | int | None = None) Tuple[Key | str, ...][source]#

Add tasks from a list or queue.

Parameters:
  • queue (iterable of 2-tuple) – The members of each tuple are the arguments (such as list or tuple) and keyword arguments (e.g. dict) to add().

  • max_tries (int, optional) – Retry adding elements up to this many times.

  • fail (“raise” or str or logging level, optional) – Action to take when a computation from queue cannot be added after max_tries: “raise” an exception, or log messages on the indicated level and continue.
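
A hypothetical sketch of the queue structure; each element gives the positional and keyword arguments for one call to add():

from genno import Computer

c = Computer()

queue = [
    # This entry refers to "zero", which does not exist yet; with max_tries=2
    # it is retried after the remaining entries have been added
    (("one more", (lambda value: value + 1, "zero")), dict(strict=True)),
    # A literal value at the key "zero"
    (("zero", 0), {}),
]

c.add_queue(queue, max_tries=2)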

add_single(key: Key | str, *computation, strict=False, index=False) Key | str[source]#

Add a single computation at key.

Parameters:
  • key (str or Key or hashable) – A string, Key, or other value identifying the output of computation.

  • computation (object) – Any computation. See graph.

  • strict (bool, optional) – If True, key must not already exist in the Computer, and any keys referred to by computation must exist.

  • index (bool, optional) – If True, key is added to the index as a full-resolution key, so it can be later retrieved with full_key().

Raises:
  • KeyExistsError – If strict is True and either (a) key already exists; or (b) sums is True and the key for one of the partial sums of key already exists.

  • MissingKeyError – If strict is True and any key referred to by computation does not exist.

aggregate(qty: Key | str, tag: str, dims_or_groups: Mapping | str | Sequence[str], weights: DataArray | None = None, keep: bool = True, sums: bool = False, fail: str | int | None = None)[source]#

Deprecated.

Add a computation that aggregates qty.

Deprecated since version 1.18.0: Instead, for a mapping/dict dims_or_groups, use:

c.add(qty, "aggregate", groups=dims_or_groups, keep=keep, ...)

Or, for str or sequence of str dims_or_groups, use:

c.add(None, "sum", qty, dimensions=dims_or_groups, ...)
Parameters:
  • qty (Key or str) – Key of the quantity to be aggregated.

  • tag (str) – Additional string to add to the end of the key for the aggregated quantity.

  • dims_or_groups (str or iterable of str or dict) – Name(s) of the dimension(s) to sum over, or nested dict.

  • weights (xarray.DataArray, optional) – Weights for weighted aggregation.

  • keep (bool, optional) – Passed to computations.aggregate.

  • sums (bool, optional) – Passed to add().

  • fail (str or int, optional) – Passed to add_queue() via add().

Returns:

The key of the newly-added node.

Return type:

Key

cache(func)[source]#

Decorate func so that its return value is cached.

See also

Caching

check_keys(*keys: str | Key, predicate=None, action='raise') List[Key | str][source]#

Check that keys are in the Computer.

Parameters:
  • keys (KeyLike) – Some Keys or strings.

  • predicate (callable, optional) – Function to run on each of keys; see below.

  • action ("raise" or any other value) – Action to take on missing keys.

Returns:

One item for each item k in keys:

  1. k itself, unchanged, if predicate is given and predicate(k) returns True.

  2. Graph.unsorted_key(), that is, k but with its dimensions in a specific order that already appears in graph.

  3. Graph.full_key(), that is, an existing key with the name k with its full dimensionality.

  4. None otherwise.

Return type:

list of KeyLike

Raises:

MissingKeyError – If action is “raise” and 1 or more of keys do not appear (either in different dimension order, or full dimensionality) in the graph.

configure(path: Path | str | None = None, fail: str | int = 'raise', config: Mapping[str, Any] | None = None, **config_kw)[source]#

Configure the Computer.

Accepts a path to a configuration file and/or keyword arguments. Configuration keys loaded from file are superseded by keyword arguments. Messages are logged at level logging.INFO if config contains unhandled sections.

See Configuration for a list of all configuration sections and keys, and details of the configuration file format.

Parameters:
  • path (pathlib.Path, optional) – Path to a configuration file in JSON or YAML format.

  • fail (“raise” or str or logging level, optional) – Passed to add_queue(). If not “raise”, then log messages are generated for config handlers that fail. The Computer may be only partially configured.

  • config – Configuration keys/sections and values, as a mapping. Use this if any of the keys/sections are not valid Python names, for instance if they contain “-” or “ ”.

  • **config_kw – Configuration keys/sections and values, as keyword arguments.
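
For example (a sketch: “units” is a documented configuration section, while “my-section” stands in for any hypothetical section whose name is not a valid Python identifier):

from genno import Computer

c = Computer()

# Keyword arguments, for sections whose names are valid Python identifiers
c.configure(units={"replace": {"USD_2005": "USD"}})

# A mapping, for section names that are not valid Python identifiers;
# unhandled sections are logged at level INFO
c.configure(config={"my-section": {"key": "value"}})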

convert_pyam(*args, **kwargs)[source]#

Deprecated.

Deprecated since version 1.18.0: Instead use add_as_pyam() via:

c.require_compat("pyam")
c.add(..., "as_pyam", ...)
default_key = None#

The default key to get() with no argument.

describe(key=None, quiet=True)[source]#

Return a string describing the computations that produce key.

If key is not provided, all keys in the Computer are described.

Unless quiet, the string is also printed to the console.

Returns:

Description of computations.

Return type:

str

disaggregate(qty, new_dim, method='shares', args=[])[source]#

Deprecated.

Deprecated since version 1.18.0: Instead, for method = “disaggregate_shares”, use:

c = Computer()
c.add(qty.append(new_dim), "mul", qty, ..., strict=True)

Or for a callable() method, use:

c.add(qty.append(new_dim), method, qty, ..., strict=True)
full_key(name_or_key: Key | str) Key | str[source]#

Return the full-dimensionality key for name_or_key.

A quantity ‘foo’ with dimensions (a, c, n, q, x) is available in the Computer as 'foo:a-c-n-q-x'. This Key can be retrieved with:

c.full_key("foo")
c.full_key("foo:c")
# etc.
Raises:

KeyError – if name_or_key is not in the graph.

get(key=None)[source]#

Execute and return the result of the computation key.

Only key and its dependencies are computed.

Parameters:

key (str, optional) – If not provided, default_key is used.

Raises:

ValueError – If key and default_key are both None.

get_comp(name) Callable | None[source]#

Return a function or callable for use in computations.

get_comp() checks each of the modules for a function or callable with the given name. Modules at the end of the list take precedence over those earlier in the list.

Returns:

  • callable

  • None – If there is no callable with the given name in any of modules.

infer_keys(key_or_keys: Key | str | Iterable[Key | str], dims: Iterable[str] = [])[source]#

Infer complete key_or_keys.

Each return value is one of:

  • a Key with either

    • dimensions dims, if any are given, otherwise

    • its full dimensionality (cf. full_key())

  • str, the same as input, if the key is not defined in the Computer.

Parameters:
  • key_or_keys (str or Key or list of str or Key) –

  • dims (list of str, optional) – Drop all but these dimensions from the returned key(s).

Returns:

  • str or Key – If key_or_keys is a single KeyLike.

  • list of str or Key – If key_or_keys is an iterable of KeyLike.

keys()[source]#

Return the keys of graph.

modules: MutableSequence[module] = []#

List of modules containing pre-defined computations.

By default, this includes the genno built-in computations in genno.computations. require_compat() appends additional modules, for instance compat.pyam.computations, to this list. User code may also add modules to this list.

require_compat(pkg: str | module)[source]#

Register computations from genno.compat or other modules for get_comp().

The specified module is appended to modules.

Parameters:

pkg (str or module) –

One of:

  • the name of a package (for instance “plotnine”), corresponding to a submodule of genno.compat (genno.compat.plotnine). genno.compat.{pkg}.computations is added.

  • the name of any importable module, for instance “foo.bar”.

  • a module object that has already been imported.

Raises:

ModuleNotFoundError – If the required packages are missing.

Examples

Computations packaged with genno for compatibility:

>>> c = Computer()
>>> c.require_compat("pyam")

Computations in another module, using the module name:

>>> c.require_compat("ixmp.reporting.computations")

or using imported module:

>>> import ixmp.reporting.computations as mod
>>> c.require_compat(mod)
property unit_registry#

The pint.UnitRegistry() used by the Computer.

visualize(filename, key=None, optimize_graph=False, **kwargs)[source]#

Generate an image describing the Computer structure.

This is similar to dask.visualize(); see compat.graphviz.visualize(). Requires graphviz.

write(key, path)[source]#

Compute key and write the result directly to path.

class genno.Key(name_or_value: str | Key | Quantity, dims: Iterable[str] = [], tag: str | None = None, _fast: bool = False)[source]#

A hashable key for a quantity that includes its dimensionality.

Quantities are indexed by 0 or more dimensions. A Key refers to a quantity using three components:

  1. a string name,

  2. zero or more ordered dims, and

  3. an optional tag.

For example, for a \(\text{foo}\) with three dimensions \(a, b, c\):

\[\text{foo}^{abc}\]

Key allows a specific, explicit reference to various forms of “foo”:

  • in its full resolution, i.e. indexed by a, b, and c:

    >>> k1 = Key("foo", ["a", "b", "c"])
    >>> k1
    <foo:a-b-c>
    
  • in a partial sum over one dimension, e.g. summed across dimension c, with remaining dimensions a and b:

    >>> k2 = k1.drop('c')
    >>> k2 == 'foo:a-b'
    True
    
  • in a partial sum over multiple dimensions, etc.:

    >>> k1.drop('a', 'c') == k2.drop('a') == 'foo:b'
    True
    
  • after it has been manipulated by other computations, e.g.

    >>> k3 = k1.add_tag('normalized')
    >>> k3
    <foo:a-b-c:normalized>
    >>> k4 = k3.add_tag('rescaled')
    >>> k4
    <foo:a-b-c:normalized+rescaled>
    

Notes:

A Key has the same hash as, and compares equal to, its str representation. A Key also compares equal to another Key or str with the same dimensions in any other order. repr(key) prints the Key in angle brackets (‘<>’) to signify that it is a Key object.

>>> str(k1)
'foo:a-b-c'
>>> repr(k1)
'<foo:a-b-c>'
>>> hash(k1) == hash("foo:a-b-c")
True
>>> k1 == "foo:c-b-a"
True

Keys are immutable: the properties name, dims, and tag are read-only, and the methods append(), drop(), and add_tag() return new Key objects.

Keys may be generated concisely by defining a convenience method:

>>> def foo(dims):
>>>     return Key('foo', dims.split())
>>> foo('a b c')
<foo:a-b-c>
add_tag(tag) Key[source]#

Return a new Key with tag appended.

append(*dims: str) Key[source]#

Return a new Key with additional dimensions dims.

classmethod bare_name(value) str | None[source]#

If value is a bare name (no dims or tags), return it; else None.

property dims: Tuple[str, ...]#

Dimensions of the quantity, tuple of str.

drop(*dims: str | bool) Key[source]#

Return a new Key with dims dropped.

drop_all() Key[source]#

Return a new Key with all dimensions dropped / zero dimensions.

classmethod from_str_or_key(value: str | Key | Quantity, drop: Iterable[str] | bool = [], append: Iterable[str] = [], tag: str | None = None) Key[source]#

Return a new Key from value.

Parameters:
  • value (str or Key) – Value to use to generate a new Key.

  • drop (list of str or True, optional) – Existing dimensions of value to drop. See drop().

  • append (list of str, optional.) – New dimensions to append to the returned Key. See append().

  • tag (str, optional) – Tag for returned Key. If value has a tag, the two are joined using a ‘+’ character. See add_tag().

Returns:

  • Key

  • Changed in version 1.18.0: Calling from_str_or_key() with a single argument is no longer necessary; simply give the same value as an argument to Key.

    The class method is retained for convenience when calling with multiple arguments. However, the following are equivalent and may be more readable:

    k1 = Key("foo:a-b-c:t1", drop="b", append="d", tag="t2")
    k2 = Key("foo:a-b-c:t1").drop("b").append("d").add_tag("t2")
    

iter_sums() Generator[Tuple[Key, Callable, Key], None, None][source]#

Generate (key, task) for all possible partial sums of the Key.
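
A brief sketch of how the generated items might be used; the exact set of partial sums depends on the dimensions of the Key:

k = Key("foo", ["a", "b", "c"])

# Each item is a 3-tuple: the key of a partial sum (for instance <foo:a-b>,
# summed over "c"), a callable that computes it, and the key of the original
# quantity
for partial_key, operation, base_key in k.iter_sums():
    print(partial_key, base_key)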

property name: str#

Name of the quantity, str.

classmethod product(new_name: str, *keys, tag: str | None = None) Key[source]#

Return a new Key that has the union of dimensions on keys.

Dimensions are ordered by their first appearance:

  1. First, the dimensions of the first of the keys.

  2. Next, any additional dimensions in the second of the keys that were not already added in step 1.

  3. etc.

Parameters:

new_name (str) – Name for the new Key. The names of keys are discarded.
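
For example (a sketch; the names and dimensions are arbitrary):

>>> k1 = Key("foo", ["a", "b", "c"])
>>> k2 = Key("bar", ["c", "d"])
>>> Key.product("baz", k1, k2)
<baz:a-b-c-d>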

rename(name: str) Key[source]#

Return a Key with a replaced name.

property sorted: Key#

A version of the Key with its dims sorted alphabetically.

property tag: str | None#

Quantity tag, str or None.

class genno.Quantity(*args, **kwargs)[source]#

A sparse data structure that behaves like xarray.DataArray.

Depending on the value of CLASS, Quantity is either AttrSeries or SparseDataArray.

classmethod from_series(series, sparse=True)[source]#

Convert series to the Quantity class given by CLASS.

property name: Hashable | None#

The name of this quantity.

property units#

Retrieve or set the units of the Quantity.

Examples

Create a quantity without units:

>>> qty = Quantity(...)

Set using a string; automatically converted to pint.Unit:

>>> qty.units = "kg"
>>> qty.units
<Unit('kilogram')>

The Quantity constructor converts its arguments to an internal, xarray.DataArray-like data format:

# Existing data
data = pd.Series(...)

# Convert to a Quantity for use in reporting calculations
qty = Quantity(data, name="Quantity name", units="kg")
rep.add("new_qty", qty)

Common genno usage, e.g. in message_ix, creates large, sparse data frames (billions of possible elements, but <1% populated); DataArray’s default, ‘dense’ storage format would be too large for available memory.

The goal is that all genno-based code, including built-in and user functions, can treat quantity arguments as if they were DataArray.

exception genno.MissingKeyError[source]#

Raised by Computer.add() when a required input key is missing.

Operators#

Elementary operators for genno.

Unless otherwise specified, these functions accept and return Quantity objects for data arguments/return values.

Genno’s compatibility modules each provide additional operators.

Numerical operators:

add(*quantities[, fill_value])

Sum across multiple quantities.

aggregate(quantity, groups, keep)

Aggregate quantity by groups.

broadcast_map(quantity, map[, rename, strict])

Broadcast quantity using a map.

combine(*quantities[, select, weights])

Sum distinct quantities by weights.

disaggregate_shares(quantity, shares)

Deprecated: Disaggregate quantity by shares.

div(numerator, denominator)

Compute the ratio numerator / denominator.

group_sum(qty, group, sum)

Group by dimension group, then sum across dimension sum.

index_to(qty, dim_or_selector[, label])

Compute an index of qty against certain of its values.

interpolate(qty[, coords, method, ...])

Interpolate qty.

mul(*quantities)

Compute the product of any number of quantities.

add_mul(func, c, key, *quantities, **kwargs)

Computer.add() helper for mul().

pow(a, b)

Compute a raised to the power of b.

product(*quantities)

Alias of mul(), for backwards compatibility.

ratio(numerator, denominator)

Alias of div(), for backwards compatibility.

sub(a, b)

Subtract b from a.

sum(quantity[, weights, dimensions])

Sum quantity over dimensions, with optional weights.

add_sum(func, c, key, qty[, weights, dimensions])

Computer.add() helper for sum().

Input and output:

load_file(path[, dims, units, name])

Read the file at path and return its contents as a Quantity.

add_load_file(func, c, path[, key])

Computer.add() helper for load_file().

write_report(quantity, path)

Write a quantity to a file.

Data manipulation:

apply_units(qty, units)

Apply units to qty.

assign_units(qty, units)

Set the units of qty without changing magnitudes.

concat(*objs, **kwargs)

Concatenate Quantity objs.

convert_units(qty, units)

Convert magnitude of qty from its current units to units.

relabel(qty[, labels])

Replace specific labels along dimensions of qty.

rename_dims(qty[, new_name_or_name_dict])

Rename the dimensions of qty.

select(qty, indexers, *[, inverse, drop])

Select from qty based on indexers.

genno.computations.add(*quantities: Quantity, fill_value: float = 0.0) Quantity[source]#

Sum across multiple quantities.

Raises:

ValueError – if any of the quantities have incompatible units.

Returns:

Units are the same as the first of quantities.

Return type:

Quantity
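
A minimal sketch with made-up data and units:

import pandas as pd

from genno import Quantity
from genno.computations import add

a = Quantity(pd.Series([1.0, 2.0], index=pd.Index(["x1", "x2"], name="x")), units="kg")
b = Quantity(pd.Series([3.0], index=pd.Index(["x2"], name="x")), units="kg")

# "x1" appears only in `a`, so fill_value=0.0 is used for the missing label in `b`
total = add(a, b, fill_value=0.0)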

genno.computations.aggregate(quantity: Quantity, groups: Mapping[str, Mapping], keep: bool) Quantity[source]#

Aggregate quantity by groups.

Parameters:
  • groups (dict of dict) – Top-level keys are the names of dimensions in quantity. Second-level keys are group names; second-level values are lists of labels along the dimension to sum into a group.

  • keep (bool) – If True, the members that are aggregated into a group are returned with the group sums. If False, they are discarded.

Returns:

Same dimensionality as quantity.

Return type:

Quantity
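
A sketch using a small, hypothetical quantity with one dimension “r”:

import pandas as pd

from genno import Quantity
from genno.computations import aggregate

qty = Quantity(pd.Series([1.0, 2.0, 3.0], index=pd.Index(["de", "fr", "us"], name="r")))

# Sum the labels "de" and "fr" into a new label "EU"; keep=True retains the
# original labels alongside the group sum
result = aggregate(qty, groups={"r": {"EU": ["de", "fr"]}}, keep=True)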

genno.computations.apply_units(qty: Quantity, units: str | Unit | Quantity) Quantity[source]#

Apply units to qty.

If qty has existing units…

  • …with compatible dimensionality to units, the magnitudes are adjusted, i.e. behaves like convert_units().

  • …with incompatible dimensionality to units, the units attribute is overwritten and magnitudes are not changed, i.e. like assign_units(), with a log message on level WARNING.

To avoid ambiguities between the two cases, use convert_units() or assign_units() instead.

Parameters:

units (str or pint.Unit) – Units to apply to qty.

genno.computations.assign_units(qty: Quantity, units: str | Unit | Quantity) Quantity[source]#

Set the units of qty without changing magnitudes.

Logs on level INFO if qty has existing units.

Parameters:

units (str or pint.Unit) – Units to assign to qty.

genno.computations.broadcast_map(quantity: Quantity, map: Quantity, rename: Mapping = {}, strict: bool = False) Quantity[source]#

Broadcast quantity using a map.

The map must be a 2-dimensional Quantity with dimensions (d1, d2), such as returned by map_as_qty(). quantity must also have a dimension d1. Typically len(d2) > len(d1).

quantity is ‘broadcast’ by multiplying it with map, and then summing on the common dimension d1. The result has the dimensions of quantity, but with d2 in place of d1.

Parameters:
  • rename (dict (str -> str), optional) – Dimensions to rename on the result.

  • strict (bool, optional) – Require that each element of d2 is mapped from exactly 1 element of d1.

genno.computations.combine(*quantities: Quantity, select: List[Mapping] | None = None, weights: List[float] | None = None) Quantity[source]#

Sum distinct quantities by weights.

Parameters:
  • *quantities (Quantity) – The quantities to be added.

  • select (list of dict) – Elements to be selected from each quantity. Must have the same number of elements as quantities.

  • weights (list of float) – Weight applied to each quantity. Must have the same number of elements as quantities.

Raises:

ValueError – If the quantities have mismatched units.

genno.computations.concat(*objs: Quantity, **kwargs) Quantity[source]#

Concatenate Quantity objs.

Any strings included amongst objs are discarded, with a logged warning; these usually indicate that a quantity is referenced which is not in the Computer.

genno.computations.convert_units(qty: Quantity, units: str | Unit | Quantity) Quantity[source]#

Convert magnitude of qty from its current units to units.

Parameters:

units (str or pint.Unit) – Units to assign to qty.

Raises:

ValueError – if units does not match the dimensionality of the current units of qty.

genno.computations.disaggregate_shares(quantity: Quantity, shares: Quantity) Quantity[source]#

Deprecated: Disaggregate quantity by shares.

This operator is identical to mul(); use mul() and its helper instead.

genno.computations.div(numerator: Quantity | float, denominator: Quantity) Quantity[source]#

Compute the ratio numerator / denominator.

Parameters:
  • numerator (Quantity) –

  • denominator (Quantity) –

genno.computations.drop_vars(qty: Quantity, names: Hashable | Iterable[Hashable], *, errors='raise') Quantity[source]#

Return a Quantity with dropped variables (coordinates).

Like xarray.DataArray.drop_vars().

genno.computations.group_sum(qty: Quantity, group: str, sum: str) Quantity[source]#

Group by dimension group, then sum across dimension sum.

The result drops the latter dimension.

genno.computations.index_to(qty: Quantity, dim_or_selector: str | Mapping, label: Hashable | None = None) Quantity[source]#

Compute an index of qty against certain of its values.

If the label is not provided, index_to() uses the label in the first position along the identified dimension.

Parameters:
  • qty (Quantity) –

  • dim_or_selector (str or mapping) – If a string, the ID of the dimension to index along. If a mapping, it must have only one element, mapping a dimension ID to a label.

  • label (Hashable) – Label to select along the dimension, required if dim_or_selector is a string.

Raises:

TypeError – if dim_or_selector is a mapping with length != 1.
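
A sketch with a hypothetical 1-dimensional quantity:

import pandas as pd

from genno import Quantity
from genno.computations import index_to

qty = Quantity(pd.Series([2.0, 4.0, 6.0], index=pd.Index([2020, 2025, 2030], name="y")))

# Divide by the value at y=2020, giving 1.0, 2.0, 3.0
result = index_to(qty, "y", 2020)

# Equivalent, using a single-element mapping
result = index_to(qty, {"y": 2020})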

genno.computations.interpolate(qty: Quantity, coords: Mapping[Hashable, Any] | None = None, method: Literal['linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'polynomial'] | Literal['barycentric', 'krog', 'pchip', 'spline', 'akima'] = 'linear', assume_sorted: bool = True, kwargs: Mapping[str, Any] | None = None, **coords_kwargs: Any) Quantity[source]#

Interpolate qty.

For the meaning of arguments, see xarray.DataArray.interp(). When CLASS is AttrSeries, only 1-dimensional interpolation (one key in coords) is tested/supported.

genno.computations.load_file(path: Path, dims: Collection[Hashable] | Mapping[Hashable, Hashable] = {}, units: str | Unit | Quantity | None = None, name: str | None = None) Any[source]#

Read the file at path and return its contents as a Quantity.

Some file formats are automatically converted into objects for direct use in genno computations:

.csv:

Converted to Quantity. CSV files must have a ‘value’ column; all others are treated as indices, except as given by dims. Lines beginning with ‘#’ are ignored.

Parameters:
  • path (pathlib.Path) – Path to the file to read.

  • dims (collections.abc.Collection or collections.abc.Mapping, optional) – If a collection of names, other columns besides these and ‘value’ are discarded. If a mapping, the keys are the column labels in path, and the values are the target dimension names.

  • units (str or pint.Unit) – Units to apply to the loaded Quantity.

  • name (str) – Name for the loaded Quantity.

See also

add_load_file
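
A sketch, assuming a hypothetical CSV file with the contents shown in the comment:

from pathlib import Path

from genno.computations import load_file

# Contents of demand.csv (lines beginning with '#' are ignored):
#
#   region,year,value
#   de,2020,1.0
#   fr,2020,2.0

qty = load_file(
    Path("demand.csv"),
    dims={"region": "r", "year": "y"},  # map column labels to dimension names
    units="kg",
    name="demand",
)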

genno.computations.mul(*quantities: Quantity) Quantity[source]#

Compute the product of any number of quantities.

See also

add_mul

genno.computations.pow(a: Quantity, b: Quantity | int) Quantity[source]#

Compute a raised to the power of b.

Returns:

If b is int or a Quantity with all int values that are equal to one another, then the quantity has the units of a raised to this power; for example, “kg²” → “kg⁴” if b is 2. In other cases, there are no meaningful units, so the returned quantity is dimensionless.

Return type:

Quantity

genno.computations.product(*quantities: Quantity) Quantity#

Alias of mul(), for backwards compatibility.

Note

This may be deprecated and possibly removed in a future version.

genno.computations.ratio(numerator: Quantity | float, denominator: Quantity) Quantity#

Alias of div(), for backwards compatibility.

Note

This may be deprecated and possibly removed in a future version.

genno.computations.relabel(qty: Quantity, labels: Mapping[Hashable, Mapping] | None = None, **dim_labels: Mapping) Quantity[source]#

Replace specific labels along dimensions of qty.

Parameters:
  • labels – Keys are strings identifying dimensions of qty; values are further mappings from original labels to new labels. Dimensions and labels not appearing in qty have no effect.

  • dim_labels – Mappings given as keyword arguments, where argument name is the dimension.

Raises:

ValueError – if both labels and dim_labels are given.
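
For example, with a hypothetical quantity having a single dimension “r”:

import pandas as pd

from genno import Quantity
from genno.computations import relabel

qty = Quantity(pd.Series([1.0, 2.0], index=pd.Index(["de", "fr"], name="r")))

# Replace the label "de" with "DEU" along dimension "r"
result = relabel(qty, labels={"r": {"de": "DEU"}})

# Equivalent, with the dimension given as a keyword argument
result = relabel(qty, r={"de": "DEU"})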

genno.computations.rename_dims(qty: Quantity, new_name_or_name_dict: Hashable | Mapping[Any, Hashable] | None = None, **names: Hashable) Quantity[source]#

Rename the dimensions of qty.

Like xarray.DataArray.rename().

genno.computations.round(qty: Quantity, *args, **kwargs) Quantity[source]#

Like xarray.DataArray.round().

genno.computations.select(qty: Quantity, indexers: Mapping[Hashable, Iterable[Hashable]], *, inverse: bool = False, drop: bool = False) Quantity[source]#

Select from qty based on indexers.

Parameters:
  • indexers (dict (str -> xarray.DataArray or list of str)) – Elements to be selected from qty. Mapping from dimension names to coords along the respective dimension of qty, or to xarray-style indexers. Values not appearing in the dimension coords are silently ignored.

  • inverse (bool, optional) – If True, remove the items in indexers instead of keeping them.
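
A sketch with a hypothetical 1-dimensional quantity:

import pandas as pd

from genno import Quantity
from genno.computations import select

qty = Quantity(pd.Series([1.0, 2.0, 3.0], index=pd.Index(["de", "fr", "us"], name="r")))

# Keep only the labels "de" and "fr" along dimension "r"
subset = select(qty, indexers={"r": ["de", "fr"]})

# Keep everything except those labels
rest = select(qty, indexers={"r": ["de", "fr"]}, inverse=True)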

genno.computations.sub(a: Quantity, b: Quantity) Quantity[source]#

Subtract b from a.

genno.computations.sum(quantity: Quantity, weights: Quantity | None = None, dimensions: List[str] | None = None) Quantity[source]#

Sum quantity over dimensions, with optional weights.

Parameters:
  • weights (Quantity, optional) – If dimensions is given, weights must have at least these dimensions. Otherwise, any dimensions are valid.

  • dimensions (list of str, optional) – If not provided, sum over all dimensions. If provided, sum over these dimensions.
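
A sketch with a hypothetical 2-dimensional quantity:

import pandas as pd

from genno import Quantity
from genno.computations import sum as sum_op  # avoid shadowing the built-in

qty = Quantity(
    pd.Series(
        [1.0, 2.0, 3.0, 4.0],
        index=pd.MultiIndex.from_product([["a", "b"], [2020, 2025]], names=["x", "y"]),
    )
)

# Sum over "y" only; the "x" dimension is preserved
by_x = sum_op(qty, dimensions=["y"])

# Sum over all dimensions, giving a 0-dimensional quantity
total = sum_op(qty)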

genno.computations.write_report(quantity: Quantity, path: str | PathLike) None[source]#

Write a quantity to a file.

Parameters:

path (str or Path) – Path to the file to be written.

Helper functions for adding tasks to Computers#

genno.computations.add_load_file(func, c: Computer, path, key=None, **kwargs)[source]#

Computer.add() helper for load_file().

Add a task to load an exogenous quantity from path. Computing the key or using it in other computations causes path to be loaded and converted to Quantity.

Parameters:
  • path (os.PathLike) – Path to the file, e.g. ‘/path/to/foo.ext’.

  • key (str or Key, optional) – Key for the quantity read from the file.

  • dims (dict or list or set) – Either a collection of names for dimensions of the quantity, or a mapping from names appearing in the input to dimensions.

  • units (str or pint.Unit) – Units to apply to the loaded Quantity.

Returns:

Either key (if given) or e.g. file foo.ext based on the path name, without directory components.

Return type:

Key

genno.computations.add_mul(func, c: Computer, key, *quantities, **kwargs) Key[source]#

Computer.add() helper for mul().

Add a computation that takes the product of quantities.

Parameters:
  • key (str or Key) – Key of the new quantity. If a Key, any dimensions are ignored; the dimensions of the product are the union of the dimensions of quantities.

  • sums (bool, optional) – If True, all partial sums of the new quantity are also added.

Returns:

The full key of the new quantity.

Return type:

Key

Internal format for quantities#

genno.core.quantity.CLASS = 'AttrSeries'#

Name of the class used to implement Quantity.

genno.core.quantity.assert_quantity(*args)[source]#

Assert that each of args is a Quantity object.

Raises:

TypeError – with an indicative message.

genno.core.quantity.maybe_densify(func)[source]#

Wrapper for computations that densifies SparseDataArray input.

class genno.core.attrseries.AttrSeriesCoordinates(obj)[source]#
property variables#

Low level interface to Coordinates contents as dict of Variable objects.

This dictionary is frozen to prevent mutation.

class genno.core.attrseries.AttrSeries(*args, **kwargs)[source]#

pandas.Series subclass imitating xarray.DataArray.

The AttrSeries class provides similar methods and behaviour to xarray.DataArray, so that genno.computations functions and user code can use xarray-like syntax. In particular, this allows such code to be agnostic about the order of dimensions.

Parameters:
  • units (str or pint.Unit, optional) – Set the units attribute. The value is converted to pint.Unit and added to attrs.

  • attrs (Mapping, optional) – Set the attrs of the AttrSeries. This attribute was added in pandas 1.0, but is not currently supported by the Series constructor.

name#

The name of this Quantity.

Like xarray.DataArray.name.

align_levels(other: AttrSeries) Tuple[Sequence[Hashable], AttrSeries][source]#

Return a copy of self with ≥1 dimension(s) in the same order as other.

Work-around for pandas-dev/pandas#25760 and other limitations of pandas.Series.

assign_coords(coords=None, **coord_kwargs)[source]#

Like xarray.DataArray.assign_coords().

bfill(dim: Hashable, limit: int | None = None)[source]#

Like xarray.DataArray.bfill().

property coords#

Like xarray.DataArray.coords. Read-only.

cumprod(dim=None, axis=None, skipna=None, **kwargs)[source]#

Like xarray.DataArray.cumprod().

property dims: Tuple[Hashable, ...]#

Like xarray.DataArray.dims.

drop(label)[source]#

Like xarray.DataArray.drop().

drop_vars(names: Hashable | Iterable[Hashable], *, errors: str = 'raise')[source]#

Like xarray.DataArray.drop_vars().

expand_dims(dim=None, axis=None, **dim_kwargs: Any) AttrSeries[source]#

Like xarray.DataArray.expand_dims().

ffill(dim: Hashable, limit: int | None = None)[source]#

Like xarray.DataArray.ffill().

classmethod from_series(series, sparse=None)[source]#

Like xarray.DataArray.from_series().

interp(coords: Mapping[Hashable, Any] | None = None, method: str = 'linear', assume_sorted: bool = True, kwargs: Mapping[str, Any] | None = None, **coords_kwargs: Any)[source]#

Like xarray.DataArray.interp().

This method works around two long-standing bugs in pandas.

item(*args)[source]#

Like xarray.DataArray.item().

rename(new_name_or_name_dict: Hashable | Mapping[Hashable, Hashable] | None = None, **names: Hashable)[source]#

Like xarray.DataArray.rename().

sel(indexers: Mapping[Any, Any] | None = None, method: str | None = None, tolerance=None, drop: bool = False, **indexers_kwargs: Any)[source]#

Like xarray.DataArray.sel().

property shape: Tuple[int, ...]#

Like xarray.DataArray.shape.

shift(shifts: Mapping[Hashable, int] | None = None, fill_value: Any | None = None, **shifts_kwargs: int)[source]#

Like xarray.DataArray.shift().

squeeze(dim=None, *args, **kwargs)[source]#

Like xarray.DataArray.squeeze().

sum(dim: str | Iterable[Hashable] | None = None, skipna: bool | None = None, min_count: int | None = None, keep_attrs: bool | None = None, **kwargs: Any) AttrSeries[source]#

Like xarray.DataArray.sum().

to_dataframe(name: Hashable | None = None, dim_order: Sequence[Hashable] | None = None) DataFrame[source]#

Like xarray.DataArray.to_dataframe().

to_series()[source]#

Like xarray.DataArray.to_series().

transpose(*dims)[source]#

Like xarray.DataArray.transpose().

class genno.core.sparsedataarray.SparseAccessor(obj)[source]#

xarray accessor to help SparseDataArray.

See the xarray accessor documentation, e.g. register_dataarray_accessor().

property COO_data#

True if the DataArray has sparse.COO data.

convert()[source]#

Return a SparseDataArray instance.

property dense#

Return a copy with dense (ndarray) data.

property dense_super#

Return a proxy to a ndarray-backed DataArray.

class genno.core.sparsedataarray.SparseDataArray(*args, **kwargs)[source]#

DataArray with sparse data.

SparseDataArray uses sparse.COO for storage with numpy.nan as its sparse.COO.fill_value. Some methods of DataArray are overridden to ensure data is in sparse, or dense, format as necessary, to provide expected functionality not currently supported by sparse, and to avoid exhausting memory for some operations that require dense data.

ffill(dim: Hashable, limit: int | None = None)[source]#

Override ffill() to auto-densify.

classmethod from_series(obj, sparse=True)[source]#

Convert a pandas.Series into a SparseDataArray.

item(*args)#

Like xarray.DataArray.item().

sel(indexers: Mapping[Any, Any] | None = None, method: str | None = None, tolerance=None, drop: bool = False, **indexers_kwargs: Any) SparseDataArray[source]#

Return a new array by selecting labels along the specified dim(s).

Overrides sel() to handle >1-D indexers with sparse data.

to_dataframe(name: Hashable | None = None, dim_order: Sequence[Hashable] | None = None) DataFrame[source]#

Convert this array and its coords into a DataFrame.

Overrides to_dataframe().

to_series() Series[source]#

Convert this array into a Series.

Overrides to_series() to create the series without first converting to a potentially very large numpy.ndarray.

class genno.compat.xarray.DataArrayLike[source]#

Class with xarray.DataArray -like API.

This class is used to set signatures and types for methods and attributes on the generic Quantity class. SparseDataArray inherits from both this class and DataArray, and thus DataArray supplies implementations of these methods. In AttrSeries, the methods are implemented directly.

Internals and utilities#

genno.compat.graphviz.unwrap(label: str) str[source]#

Unwrap any number of paired ‘<’ and ‘>’ at the start/end of label.

These characters cause errors in graphviz/dot.

genno.compat.graphviz.visualize(dsk: Mapping, filename: str | PathLike | None = None, format: str | None = None, data_attributes: Mapping | None = None, function_attributes: Mapping | None = None, graph_attr: Mapping | None = None, node_attr: Mapping | None = None, edge_attr: Mapping | None = None, collapse_outputs=False, **kwargs)[source]#

Generate a Graphviz visualization of dsk.

This is a merged and extended version of dask.base.visualize(), dask.dot.dot_graph(), and dask.dot.to_graphviz() that produces output that is informative for genno graphs.

Parameters:
  • dsk – The graph to display.

  • filename (Path or str, optional) – The name of the file to write to disk. If the file name does not have a suffix, “.png” is used by default. If filename is None, no file is written, and dask communicates with dot using only pipes.

  • format ({'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional) – Format in which to write output file, if not given by the suffix of filename. Default “png”.

  • data_attributes – Graphviz attributes to apply to single nodes representing keys, in addition to node_attr.

  • function_attributes – Graphviz attributes to apply to single nodes representing operations or functions, in addition to node_attr.

  • graph_attr – Mapping of (attribute, value) pairs for the graph. Passed directly to graphviz.Digraph.

  • node_attr – Mapping of (attribute, value) pairs set for all nodes. Passed directly to graphviz.Digraph.

  • edge_attr – Mapping of (attribute, value) pairs set for all edges. Passed directly to graphviz.Digraph.

  • collapse_outputs (bool, optional) – Omit nodes for keys that are the output of intermediate calculations.

  • kwargs – All other keyword arguments are added to graph_attr.

Examples

Prepare a computer:

>>> from operator import itemgetter
>>> from genno import Computer
>>> from genno.testing import add_test_data
>>> c = Computer()
>>> add_test_data(c)
>>> c.add_product("z", "x:t", "x:y")
>>> c.add("y::0", itemgetter(0), "y")
>>> c.add("y0", "y::0")
>>> c.add("index_to", "z::indexed", "z:y", "y::0")
>>> c.add_single("all", ["z::indexed", "t", "config", "x:t"])

Visualize its contents:

>>> c.visualize("example.svg")

This produces the output:

Example output from graphviz.visualize.

See also

describe.label

genno.core.describe.MAX_ITEM_LENGTH = 160#

Default maximum length for outputs from describe_recursive().

genno.core.describe.describe_recursive(graph, comp, depth=0, seen=None)[source]#

Recursive helper for describe().

Parameters:
  • graph – A dask graph.

  • comp – A dask computation.

  • depth (int) – Recursion depth. Used for indentation.

  • seen (set) – Keys that have already been described. Used to avoid double-printing.

genno.core.describe.is_list_of_keys(arg: Any, graph: Mapping) bool[source]#

Identify a task which is a list of other keys.

genno.core.describe.label(arg, max_length=160) str[source]#

Return a label for arg.

The label depends on the type of arg:

  • xarray.DataArray: the first line of the string representation.

  • partial() object: a less-verbose version that omits None arguments.

  • Item protected with dask.core.quote(): its literal value.

  • A callable, e.g. a function: its name.

  • Anything else: its str representation.

In all cases, the string is no longer than max_length.

class genno.core.graph.Graph(*args, **kwargs)[source]#

A dictionary for a graph indexed by Key.

Graph maintains indexes on set/delete/pop/update operations that allow for fast lookups/member checks in certain special cases:

unsorted_key(key)

Return key with its original or unsorted dimensions.

full_key(name_or_key)

Return name_or_key with its full dimensions.

These basic features are used to provide higher-level helpers for Computer:

infer(key[, dims])

Infer a key.

full_key(name_or_key: Key | str) Key | str | None[source]#

Return name_or_key with its full dimensions.

infer(key: str | Key, dims: Iterable[str] = []) Key | str | None[source]#

Infer a key.

Parameters:

dims (list of str, optional) – Drop all but these dimensions from the returned key(s).

Returns:

  • str – If key is not found in the Graph.

  • Key – key with either its full dimensions (cf. full_key()) or, if dims are given, with only these dims.

pop(k[, d]) → v, remove specified key and return the corresponding value.[source]#

If the key is not found, return the default if given; otherwise, raise a KeyError.

unsorted_key(key: Key | str) Key | str | None[source]#

Return key with its original or unsorted dimensions.

update([E, ]**F) → None.  Update D from dict/iterable E and F.[source]#

If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]

genno.core.key.KeyLike#

Type shorthand for Key or any other value that can be used as a key.

alias of Union[Key, str]

genno.core.key.iter_keys(value: Key | str | Tuple[Key | str, ...]) Iterator[Key][source]#

Yield Keys from value.

Raises:

TypeError – If value is not an iterable of Key.

See also

Computer.add

genno.core.key.single_key(value: Key | str | Tuple[Key | str, ...] | Iterator) Key[source]#

Ensure value is a single Key.

Raises:

TypeError – If value is not a Key or 1-tuple of Key.

See also

Computer.add

class genno.core.operator.Operator[source]#

Base class for a callable with convenience methods.

Example

>>> from genno import Operator, Quantity
>>>
>>> @Operator.define
... def myfunc(q1: Quantity, q2: Quantity) -> Quantity:
...     ...  # Operator code
>>>
>>> @myfunc.helper
... def add_myfunc(f, computer, *args, **kwargs):
...     # Custom code to add tasks to `computer`;
...     # perform checks or handle `args` and `kwargs`
...     ...
add_tasks(c: Computer, *args, **kwargs) Tuple[KeyLike, ...][source]#

Invoke _add_task to add tasks to c.

static define(func: Callable) Operator[source]#

Create an Operator object that wraps func.

func: ClassVar[Callable]#

Function or callable for the Operator.

helper(func: Callable[[...], KeyLike | Tuple[KeyLike, ...]]) Callable[source]#

Register func as the convenience method for adding task(s).

genno.util.REPLACE_UNITS = {'%': 'percent'}#

Replacements to apply to Quantity units before parsing by pint. Mapping from original unit -> preferred unit.

The default values include:

  • The ‘%’ symbol cannot be supported by pint, because it is a Python operator; it is replaced with “percent”.

Additional values can be added with configure(); see units:.

genno.util.clean_units(input_string)[source]#

Tolerate messy strings for units.

  • Dimensions enclosed in “[]” have these characters stripped.

  • Replacements from REPLACE_UNITS are applied.

genno.util.collect_units(*args)[source]#

Return the “_unit” attributes of the args.

genno.util.filter_concat_args(args)[source]#

Filter out str and Key from args.

A warning is logged for each element removed.

genno.util.parse_units(data: Iterable, registry=None) Unit[source]#

Return a pint.Unit for an iterable of strings.

Valid unit expressions not already present in the registry are defined, e.g.:

u = parse_units(["foo/bar", "foo/bar"], reg)

…results in the addition of unit definitions equivalent to:

reg.define("foo = [foo]")
reg.define("bar = [bar]")
u = reg.foo / reg.bar
Raises:

ValueError – if data contains more than 1 unit expression, or the unit expression contains characters not parseable by pint, e.g. -?$.

genno.util.partial_split(func: Callable, kwargs: Mapping) Tuple[Callable, MutableMapping][source]#

Forgiving version of functools.partial().

Returns a partial object and leftover kwargs not applicable to func.
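
A short sketch with a throwaway function, assuming the behaviour described above:

from genno.util import partial_split

def func(a, b=1):
    return a + b

# "c" does not match any parameter of func, so it is returned separately
partial_func, extra = partial_split(func, dict(b=2, c=3))

assert partial_func(1) == 3
assert extra == {"c": 3}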

genno.util.unquote(value)[source]#

Reverse dask.core.quote().

Utilities for testing#

genno.testing.add_dantzig(c: Computer)[source]#

Add contents analogous to the ixmp Dantzig scenario.

genno.testing.add_large_data(c: Computer, num_params, N_dims=6, N_data=0)[source]#

Add nodes to c that return large-ish data.

The result is a matrix wherein the Cartesian product of all the keys is very large (about 2e17 elements for N_dims = 6), but the contents are very sparse. This can be handled by SparseDataArray, but not by xarray.DataArray backed by np.array.

genno.testing.add_test_data(c: Computer)[source]#

add_test_data() operating on a Computer, not an ixmp.Scenario.

genno.testing.assert_logs(caplog, message_or_messages=None, at_level=None)[source]#

Assert that message_or_messages appear in logs.

Use assert_logs as a context manager for a statement that is expected to trigger certain log messages. assert_logs checks that these messages are generated.

Derived from ixmp.testing.assert_logs().

Example

>>> def test_foo(caplog):
...     with assert_logs(caplog, 'a message'):
...         logging.getLogger(__name__).info('this is a message!')
Parameters:
  • caplog (object) – The pytest caplog fixture.

  • message_or_messages (str or list of str) – String(s) that must appear in log messages.

  • at_level (int, optional) – Messages must appear on ‘genno’ or a sub-logger with at least this level.

genno.testing.assert_qty_allclose(a, b, check_type: bool = True, check_attrs: bool = True, ignore_extra_coords: bool = False, **kwargs)[source]#

Assert that objects a and b have numerically close values.

Parameters:
  • check_type (bool, optional) – Assert that a and b are both Quantity instances. If False, the arguments are converted to Quantity.

  • check_attrs (bool, optional) – Also assert that attributes are identical.

  • ignore_extra_coords (bool, optional) – Ignore extra coords that are not dimensions. Only meaningful when Quantity is SparseDataArray.

genno.testing.assert_qty_equal(a, b, check_type: bool = True, check_attrs: bool = True, ignore_extra_coords: bool = False, **kwargs)[source]#

Assert that objects a and b are equal.

Parameters:
  • check_type (bool, optional) – Assert that a and b are both Quantity instances. If False, the arguments are converted to Quantity.

  • check_attrs (bool, optional) – Also assert that attributes are identical.

  • ignore_extra_coords (bool, optional) – Ignore extra coords that are not dimensions. Only meaningful when Quantity is SparseDataArray.

genno.testing.assert_units(qty: Quantity, exp: str) None[source]#

Assert that qty has units exp.

genno.testing.get_test_quantity(key: Key) Quantity[source]#

Computation that returns test data.

genno.testing.pytest_runtest_makereport(item, call)[source]#

Pytest hook to unwrap genno.ComputationError.

This allows tests to be “xfail”-ed more precisely on the underlying exception, rather than on the ComputationError which wraps it.

genno.testing.random_qty(shape: Dict[str, int], **kwargs)[source]#

Return a Quantity with shape and random contents.

Parameters:
  • shape (dict (str -> int)) – Mapping from dimension names to lengths along each dimension.

  • **kwargs – Other keyword arguments to Quantity.

Returns:

Random data with one dimension for each key in shape, and coords along those dimensions like “foo1”, “foo2”, with total length matching the value from shape. If shape is empty, a scalar (0-dimensional) Quantity.

Return type:

Quantity
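
For example (a sketch; units is passed through to the Quantity constructor):

from genno.testing import random_qty

# 2 × 3 quantity with dimensions "x" and "y"; coords are "x1", "x2", "y1", …
q = random_qty(dict(x=2, y=3), units="kg")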

genno.testing.test_data_path()[source]#

Path to the directory containing test data.

genno.testing.ureg()[source]#

Application-wide units registry.