API reference#

Top-level classes and functions#

configure([path])

Configure genno globally.

Computer(**kwargs)

Class for describing and executing computations.

Key(name_or_value[, dims, tag, _fast])

A hashable key for a quantity that includes its dimensionality.

Quantity(*args, **kwargs)

A sparse data structure that behaves like xarray.DataArray.

genno.configure(path: Path | str | None = None, **config)[source]

Configure genno globally.

Modifies global variables that affect the behaviour of all Computers and operators. Configuration keys loaded from file are superseded by keyword arguments. Messages are logged at level logging.INFO if config contains unhandled sections.

Parameters:
  • path (pathlib.Path, optional) – Path to a configuration file in JSON or YAML format.

  • **config – Configuration keys/sections and values.

class genno.Computer(**kwargs)[source]#

Class for describing and executing computations.

Parameters:

kwargs – Passed to configure().

A Computer is used to prepare (add() and related methods) and then execute (get() and related methods) computations stored in a graph. Advanced users may manipulate the graph directly, but most computations can be prepared using the methods of Computer.

Instance attributes:

default_key

The default key to get() with no argument.

graph

A dask-format graph (see the dask documentation on task graphs).

keys()

Return the keys of graph.

modules

List of modules containing operators.

unit_registry

The pint.UnitRegistry used by the Computer.

General-purpose methods for preparing computations and tasks:

add(data, *args, **kwargs)

General-purpose method to add computations.

add_queue(queue[, max_tries, fail])

Add tasks from a list or queue.

add_single(key, *computation[, strict, index])

Add a single computation at key.

aggregate(qty, tag, dims_or_groups[, ...])

Deprecated.

apply(generator, *keys, **kwargs)

Add computations by applying generator to keys.

cache(func)

Decorate func so that its return value is cached.

describe([key, quiet])

Return a string describing the computations that produce key.

eval(expr)

Evaluate expr to add tasks and keys.

visualize(filename[, key, optimize_graph])

Generate an image describing the Computer structure.

Executing computations:

get([key])

Execute and return the result of the computation key.

write(key, path, **kwargs)

Compute key and write the result directly to path.

Utility and configuration methods:

check_keys(*keys[, predicate, action])

Check that keys are in the Computer.

configure([path, fail, config])

Configure the Computer.

full_key(name_or_key)

Return the full-dimensionality key for name_or_key.

get_operator(name)

Return a function, Operator, or callable for use in a task.

infer_keys(key_or_keys[, dims])

Infer complete key_or_keys.

require_compat(pkg)

Register a module for get_operator().

Deprecated:

add_file(*args, **kwargs)

Deprecated.

add_product(*args, **kwargs)

Deprecated.

convert_pyam(*args, **kwargs)

Deprecated.

disaggregate(qty, new_dim[, method, args])

Deprecated.

graph: genno.core.graph.Graph = {'config': {}}#

A dask-format graph (see the dask documentation on task graphs).

Dictionary keys are either Key, str, or any other hashable value.

Dictionary values are computations, one of:

  1. Any other, existing key in the Computer. This functions as an alias.

  2. Any other literal value or constant, to be returned directly.

  3. A task tuple: a callable (such as a function or any object with a __call__() method), followed by zero or more keys (referring to the output of other computations), or computations directly.

  4. A list containing zero or more of (1), (2), and/or (3).
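The four value forms can be illustrated with a toy, stdlib-only evaluator over a plain dict. This is a sketch of the semantics only; the real Computer delegates execution to dask.

```python
from collections.abc import Hashable

def evaluate(graph, key):
    """Toy recursive evaluator for the four value forms above."""
    return _compute(graph, graph[key])

def _compute(graph, value):
    if isinstance(value, tuple) and value and callable(value[0]):
        func, *inputs = value  # (3) task tuple: callable, then inputs
        return func(*(
            _compute(graph, graph[i])
            if isinstance(i, Hashable) and i in graph else i
            for i in inputs
        ))
    if isinstance(value, list):  # (4) list of computations
        return [_compute(graph, v) for v in value]
    if isinstance(value, Hashable) and value in graph:
        return _compute(graph, graph[value])  # (1) alias of another key
    return value  # (2) literal value or constant

graph = {
    "x": 21,                           # (2) literal
    "double": (lambda v: 2 * v, "x"),  # (3) task tuple
    "alias": "double",                 # (1) alias
    "pair": ["x", "double"],           # (4) list of computations
}
assert evaluate(graph, "alias") == 42
assert evaluate(graph, "pair") == [21, 42]
```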

genno reserves some keys for special usage:

"config"

A dict storing configuration settings. See Configuration. Because this information is stored in the graph, it can be used as one input to other operators.

Some inputs to tasks may be confused for (1) or (4), above. The recommended way to protect these is:

  • Literal str inputs to tasks: use functools.partial() on the function that is the first element of the task tuple.

  • list of str: use dask.core.quote() to wrap the list.
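For instance, binding a literal string with functools.partial() keeps it out of the task tuple, where it would otherwise be read as a key reference. This is a sketch; rename here is a hypothetical operator, not part of genno.

```python
from functools import partial

def rename(qty, name):
    """Hypothetical operator: attach a name to a quantity (here a dict)."""
    return {"name": name, **qty}

# Wrong: "new name" in the task tuple would be treated as a key reference:
#   task = (rename, "quantity:x-y", "new name")
# Right: bind the literal with partial(), leaving only key references:
task = (partial(rename, name="new name"), "quantity:x-y")

func, *keys = task
assert keys == ["quantity:x-y"]
assert func({"units": "kg"}) == {"name": "new name", "units": "kg"}
```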

add(data, *args, **kwargs) Key | str | Tuple[Key | str, ...][source]#

General-purpose method to add computations.

add() can be called in several ways; its behaviour depends on data; see below. It chains to methods such as add_single(), add_queue(), and/or apply(); each can also be called directly.

Returns:

Some or all of the keys added to the Computer.

Return type:

genno.core.key.KeyLike or tuple of genno.core.key.KeyLike

The data argument may be:

list

A list of computations, like [(list(args1), dict(kwargs1)), (list(args2), dict(kwargs2)), ...] → passed to add_queue().

str naming an operator

e.g. “select”, retrievable with get_operator(). add_single() is called with (key=args[0], data, *args[1:], **kwargs), that is, applying the named operator to the other parameters.

Key or other str

Passed to add_single().

add() may be used to:

  • Provide an alias from one key to another:

    >>> from genno import Computer
    >>> rep = Computer()  # Create a new Computer object
    >>> rep.add('aliased name', 'original name')
    
  • Define an arbitrarily complex operator in a Python function that operates directly on the ixmp.Scenario:

    >>> def my_report(scenario):
    ...     # many lines of code
    ...     return 'foo'
    >>> rep.add('my report', (my_report, 'scenario'))
    >>> rep.finalize(scenario)
    >>> rep.get('my report')
    'foo'
    
add_queue(queue: Iterable[Tuple], max_tries: int = 1, fail: str | int | None = None) Tuple[Key | str, ...][source]#

Add tasks from a list or queue.

Parameters:
  • queue (collections.abc.Iterable of tuple) – Each item is either a N-tuple of positional arguments to add(), or a 2-tuple of (tuple of positional arguments, dict of keyword arguments).

  • max_tries (int, optional) – Retry adding elements up to this many times.

  • fail ("raise" or str or logging level, optional) – Action to take when a computation from queue cannot be added after max_tries: “raise” an exception, or log messages on the indicated level and continue.

This method adds many computations at once by, in effect, calling add() repeatedly with sets of positional and (optionally) keyword arguments taken from the queue. The argument may be:

  • A prepared/static data structure, like a list, where each item is either a 2-tuple of (args, kwargs) or only a tuple of args that can be passed to add().

  • A generator that yields items of the same type(s).

Given this initial sequence of items, add_queue() will…

  • Pass each item in turn to add();

  • If an item fails to be added—for instance, with MissingKeyError on one of its inputs—and max_tries > 1: re-append that item to the queue so that it can be attempted again;

  • If an item fails to be added at least max_tries times: take an action according to fail.

This behaviour makes add_queue() tolerant of entries in queue that are out-of-order: individual items may fail in calls to add() on initial passes through the queue, but eventually succeed once their inputs are available.
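The retry behaviour described above can be sketched in plain Python. This is illustrative only: the stand-in uses KeyError where genno raises MissingKeyError, and a toy add callable in place of Computer.add().

```python
from collections import deque

def add_queue(add, queue, max_tries=1):
    """Re-append items whose inputs are not yet available, up to
    max_tries attempts each; raise if an item still fails."""
    items = deque((item, 0) for item in queue)
    added = []
    while items:
        item, tries = items.popleft()
        try:
            added.append(add(*item))
        except KeyError:  # stand-in for MissingKeyError
            if tries + 1 < max_tries:
                items.append((item, tries + 1))  # try again later
            else:
                raise
    return added

store = {}

def add(key, *inputs):
    """Toy add(): fail if any input key does not exist yet."""
    for i in inputs:
        if i not in store:
            raise KeyError(i)
    store[key] = inputs
    return key

# "b" depends on "a" but appears first; with max_tries=2 it succeeds
# on the second pass through the queue.
keys = add_queue(add, [("b", "a"), ("a",)], max_tries=2)
assert keys == ["a", "b"]
```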

apply(generator: Callable, *keys, **kwargs) Key | str | Tuple[Key | str, ...][source]#

Add computations by applying generator to keys.

Parameters:
  • generator (typing.Callable) –

    Function to apply to keys. This function may take a first positional argument annotated with Computer or a subtype; if so, then it is provided with a reference to self.

    The function may:

    • yield or return an iterable of (key, computation). These are used to directly update the graph, and then apply() returns the added keys.

    • If it is provided with a reference to the Computer, call add() or any other method to update the graph. In this case, it should return a Key or sequence of keys, indicating what was added; these are in turn returned by apply().

  • keys (Hashable) – The starting key(s). These are provided as positional arguments to generator.

  • kwargs – Keyword arguments to generator.

The generator may have a type annotation for Computer on its first positional argument. In this case, a reference to the Computer is supplied, and generator can use the Computer methods to add many keys and computations:

def my_gen0(c: genno.Computer, **kwargs):
    c.load_file("file0.txt", **kwargs)
    c.load_file("file1.txt", **kwargs)

# Use the generator to add several computations
rep.apply(my_gen0, units="kg")

Or, generator may yield a sequence (0 or more) of (key, computation), which are added to the graph:

def my_gen1(**kwargs):
    op = partial(operator.load_file, **kwargs)
    yield from ((f"file:{i}", (op, f"file{i}.txt")) for i in range(2))

rep.apply(my_gen1, units="kg")

eval(expr: str) Tuple[Key, ...][source]#

Evaluate expr to add tasks and keys.

Parse a statement or block of statements using ast from the Python standard library. expr may include:

  • Constants.

  • References to existing keys in the Computer by their name; these are expanded using full_key().

  • Multiple statements on separate lines or separated by “;”.

  • Python arithmetic operators including +, -, *, /, **; these are mapped to the corresponding operator.

  • Function calls, also mapped to the corresponding operator via get_operator(). These may include simple positional (constants or key references) or keyword (constants only) arguments.

Parameters:

expr (str) – Expression to be evaluated.

Returns:

One key for the left-hand side of each expression.

Return type:

tuple of Key

Raises:
  • NotImplementedError – For complex expressions that are not supported, i.e. if any of the statements is anything other than a simple assignment.

  • NameError – If a function call references a non-existent computation.

Examples

Parse a multi-line string and add tasks to compute z, a, b, d, and e. The dimensions of each are automatically inferred given the dimension of the existing operand, x.

>>> c = Computer()
>>> # (Here, add tasks to compute a quantity like "x:t-y")
>>> added = c.eval(
...     """
...     z = - (0.5 / (x ** 3))
...     a = x ** 3 + z
...     b = a + a
...     d = assign_units(b, "km")
...     e = index_to(d, dim="t", label="foo1")
...     """
... )
>>> added[-1]
<e:t-y>
add_aggregate(qty: Key | str, tag: str, dims_or_groups: Mapping | str | Sequence[str], weights: DataArray | None = None, keep: bool = True, sums: bool = False, fail: str | int | None = None)#

Deprecated.

Add a computation that aggregates qty.

Deprecated since version 1.18.0: Instead, for a mapping/dict dims_or_groups, use:

c.add(qty, "aggregate", groups=dims_or_groups, keep=keep, ...)

Or, for str or sequence of str dims_or_groups, use:

c.add(None, "sum", qty, dimensions=dims_or_groups, ...)

Returns:

The key of the newly-added node.

Return type:

Key

add_file(*args, **kwargs)[source]#

Deprecated.

Deprecated since version 1.18.0: Instead use add_load_file() via:

c.add(..., "load_file", ...)

add_product(*args, **kwargs)[source]#

Deprecated.

Deprecated since version 1.18.0: Instead use add_binop() via:

c.add(..., "mul", ...)

add_single(key: Key | str, *computation, strict=False, index=False) Key | str[source]#

Add a single computation at key.

Parameters:
  • key (str or Key or collections.abc.Hashable) – A string, Key, or other value identifying the output of computation.

  • computation (object) – Any computation. See graph.

  • strict (bool, optional) – If True, key must not already exist in the Computer, and any keys referred to by computation must exist.

  • index (bool, optional) – If True, key is added to the index as a full-resolution key, so it can be later retrieved with full_key().

Raises:
  • KeyExistsError – If strict is True and either (a) key already exists; or (b) sums is True and the key for one of the partial sums of key already exists.

  • MissingKeyError – If strict is True and any key referred to by computation does not exist.

aggregate(qty: Key | str, tag: str, dims_or_groups: Mapping | str | Sequence[str], weights: DataArray | None = None, keep: bool = True, sums: bool = False, fail: str | int | None = None)[source]#

Deprecated.

Add a computation that aggregates qty.

Deprecated since version 1.18.0: Instead, for a mapping/dict dims_or_groups, use:

c.add(qty, "aggregate", groups=dims_or_groups, keep=keep, ...)

Or, for str or sequence of str dims_or_groups, use:

c.add(None, "sum", qty, dimensions=dims_or_groups, ...)

Returns:

The key of the newly-added node.

Return type:

Key

cache(func)[source]#

Decorate func so that its return value is cached.

See also

Caching

check_keys(*keys: str | Key, predicate=None, action='raise') List[Key | str][source]#

Check that keys are in the Computer.

Parameters:
  • keys (genno.core.key.KeyLike) – Some Keys or strings.

  • predicate (typing.Callable, optional) – Function to run on each of keys; see below.

  • action ("raise" or str) – Action to take on missing keys.

Returns:

One item for each item k in keys:

  1. k itself, unchanged, if predicate is given and predicate(k) returns True.

  2. Graph.unsorted_key(), that is, k but with its dimensions in a specific order that already appears in graph.

  3. Graph.full_key(), that is, an existing key with the name k with its full dimensionality.

  4. None otherwise.

Return type:

list of genno.core.key.KeyLike

Raises:

MissingKeyError – If action is “raise” and 1 or more of keys do not appear (either in different dimension order, or full dimensionality) in the graph.

configure(path: Path | str | None = None, fail: str | int = 'raise', config: Mapping[str, Any] | None = None, **config_kw)[source]#

Configure the Computer.

Accepts a path to a configuration file and/or keyword arguments. Configuration keys loaded from file are superseded by keyword arguments. Messages are logged at level logging.INFO if config contains unhandled sections.

See Configuration for a list of all configuration sections and keys, and details of the configuration file format.

Parameters:
  • path (pathlib.Path, optional) – Path to a configuration file in JSON or YAML format.

  • fail ("raise" or str or logging level, optional) – Passed to add_queue(). If not “raise”, then log messages are generated for config handlers that fail. The Computer may be only partially configured.

  • config – Configuration keys/sections and values, as a mapping. Use this if any of the keys/sections are not valid Python names, for instance if they contain “-” or “ “.

  • **config_kw – Configuration keys/sections and values, as keyword arguments.

convert_pyam(*args, **kwargs)[source]#

Deprecated.

Deprecated since version 1.18.0: Instead use add_as_pyam() via:

c.require_compat("pyam")
c.add(..., "as_pyam", ...)

default_key: genno.core.key.KeyLike | None = None#

The default key to get() with no argument.

describe(key=None, quiet=True)[source]#

Return a string describing the computations that produce key.

If key is not provided, all keys in the Computer are described.

Unless quiet, the string is also printed to the console.

Returns:

Description of computations.

Return type:

str

disaggregate(qty, new_dim, method='shares', args=[])[source]#

Deprecated.

Deprecated since version 1.18.0: Instead, for method = “disaggregate_shares”, use:

c = Computer()
c.add(qty.append(new_dim), "mul", qty, ..., strict=True)

Or, for a callable method, use:

c.add(qty.append(new_dim), method, qty, ..., strict=True)

full_key(name_or_key: Key | str) Key | str[source]#

Return the full-dimensionality key for name_or_key.

A quantity ‘foo’ with dimensions (a, c, n, q, x) is available in the Computer as 'foo:a-c-n-q-x'. This Key can be retrieved with:

c.full_key("foo")
c.full_key("foo:c")
# etc.

Raises:

KeyError – if name_or_key is not in the graph.

get(key=None)[source]#

Execute and return the result of the computation key.

Only key and its dependencies are computed.

Parameters:

key (str, optional) – If not provided, default_key is used.

Raises:

ValueError – If key and default_key are both None.

get_comp(name) Callable | None#

Alias of get_operator().

get_operator(name) Callable | None[source]#

Return a function, Operator, or callable for use in a task.

get_operator() checks each of the modules for a callable with the given name. Modules at the end of the list take precedence over those earlier in the list.

Returns:

  • typing.Callable

  • None – If there is no callable with the given name in any of modules.
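The precedence rule can be sketched with a reverse scan over a list of module-like objects. This is a simplified stand-in, not genno's implementation; types.SimpleNamespace substitutes for real modules.

```python
import types

def lookup(modules, name):
    """Later modules in the list shadow callables of the same name in
    earlier modules; return None if no module provides the name."""
    for mod in reversed(modules):
        func = getattr(mod, name, None)
        if callable(func):
            return func
    return None

base = types.SimpleNamespace(scale=lambda q: q * 2)
extra = types.SimpleNamespace(scale=lambda q: q * 10)

assert lookup([base], "scale")(3) == 6
assert lookup([base, extra], "scale")(3) == 30  # later module wins
assert lookup([base], "missing") is None
```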

infer_keys(key_or_keys: Key | str | Iterable[Key | str], dims: Iterable[str] = [])[source]#

Infer complete key_or_keys.

Each return value is one of:

  • a Key with either

    • dimensions dims, if any are given, otherwise

    • its full dimensionality (cf. full_key())

  • str, the same as input, if the key is not defined in the Computer.


keys()[source]#

Return the keys of graph.

modules: MutableSequence[module] = []#

List of modules containing operators.

By default, this includes the genno built-in operators in genno.operator. require_compat() appends additional modules, for instance genno.compat.plotnine, to this list. User code may also add modules to this list directly.

require_compat(pkg: str | module)[source]#

Register a module for get_operator().

The specified module is appended to modules.

Parameters:

pkg (str or module) –

One of:

  • the name of a package (for instance “plotnine”), corresponding to a submodule of genno.compat (genno.compat.plotnine). genno.compat.{pkg}.operator is added.

  • the name of any importable module, for instance “foo.bar”.

  • a module object that has already been imported.

Raises:

ModuleNotFoundError – If the required packages are missing.

Examples

Operators packaged with genno for compatibility:

>>> c = Computer()
>>> c.require_compat("pyam")

Operators in another module, using the module name:

>>> c.require_compat("ixmp.reporting.computations")

or using imported module object directly:

>>> import ixmp.reporting.computations as mod
>>> c.require_compat(mod)
property unit_registry#

The pint.UnitRegistry used by the Computer.

visualize(filename, key=None, optimize_graph=False, **kwargs)[source]#

Generate an image describing the Computer structure.

This is similar to dask.visualize(); see compat.graphviz.visualize(). Requires graphviz.

write(key, path, **kwargs)[source]#

Compute key and write the result directly to path.

class genno.Key(name_or_value: str | Key | Quantity, dims: Iterable[str] = [], tag: str | None = None, _fast: bool = False)[source]#

A hashable key for a quantity that includes its dimensionality.

Quantities are indexed by 0 or more dimensions. A Key refers to a quantity using three components:

  1. a string name,

  2. zero or more ordered dims, and

  3. an optional tag.

For example, for a quantity \(\text{foo}\) with three dimensions \(a, b, c\):

\[\text{foo}^{abc}\]

Key allows a specific, explicit reference to various forms of “foo”:

  • in its full resolution, i.e. indexed by a, b, and c:

    >>> k1 = Key("foo", ["a", "b", "c"])
    >>> k1
    <foo:a-b-c>
    
  • in a partial sum over one dimension, e.g. summed across dimension c, with remaining dimensions a and b:

    >>> k2 = k1.drop('c')
    >>> k2 == 'foo:a-b'
    True
    
  • in a partial sum over multiple dimensions, etc.:

    >>> k1.drop('a', 'c') == k2.drop('a') == 'foo:b'
    True
    
  • after it has been manipulated by other computations, e.g.

    >>> k3 = k1.add_tag('normalized')
    >>> k3
    <foo:a-b-c:normalized>
    >>> k4 = k3.add_tag('rescaled')
    >>> k4
    <foo:a-b-c:normalized+rescaled>
    

Notes:

A Key has the same hash, and compares equal to its str representation. A Key also compares equal to another key or str with the same dimensions in any other order. repr(key) prints the Key in angle brackets (‘<>’) to signify that it is a Key object.

>>> str(k1)
'foo:a-b-c'
>>> repr(k1)
'<foo:a-b-c>'
>>> hash(k1) == hash("foo:a-b-c")
True
>>> k1 == "foo:c-b-a"
True

Keys are immutable: the properties name, dims, and tag are read-only, and the methods append(), drop(), and add_tag() return new Key objects.

Keys may be generated concisely by defining a convenience method:

>>> def foo(dims):
...     return Key('foo', dims.split())
>>> foo('a b c')
<foo:a-b-c>

Keys can also be manipulated using some of the Python arithmetic operators:

  • +: same as add_tag():

    >>> k1 = Key("foo", "abc")
    >>> k1
    <foo:a-b-c>
    >>> k1 + "tag"
    <foo:a-b-c:tag>
    
  • * with a single string, an iterable of strings, or another Key: similar to append() and product():

    >>> k1 * "d"
    <foo:a-b-c-d>
    >>> k1 * ("e", "f")
    <foo:a-b-c-e-f>
    >>> k1 * Key("bar", "ghi")
    <foo:a-b-c-g-h-i>
    
  • / with a single string or iterable of strings: similar to drop():

    >>> k1 / "a"
    <foo:b-c>
    >>> k1 / ("a", "c")
    <foo:b>
    >>> k1 / Key("baz", "cde")
    <foo:a-b>
    
add_tag(tag) Key[source]#

Return a new Key with tag appended.

append(*dims: str) Key[source]#

Return a new Key with additional dimensions dims.

classmethod bare_name(value) str | None[source]#

If value is a bare name (no dims or tags), return it; else None.

property dims: Tuple[str, ...]#

Dimensions of the quantity, tuple of str.

drop(*dims: str | bool) Key[source]#

Return a new Key with dims dropped.

drop_all() Key[source]#

Return a new Key with all dimensions dropped / zero dimensions.

classmethod from_str_or_key(value: str | Key | Quantity, drop: Iterable[str] | bool = [], append: Iterable[str] = [], tag: str | None = None) Key[source]#

Return a new Key from value.

Changed in version 1.18.0: Calling from_str_or_key() with a single argument is no longer necessary; simply give the same value as an argument to Key.

The class method is retained for convenience when calling with multiple arguments. However, the following are equivalent and may be more readable:

k1 = Key("foo:a-b-c:t1", drop="b", append="d", tag="t2")
k2 = Key("foo:a-b-c:t1").drop("b").append("d").add_tag("t2")

Parameters:
  • value (str or Key) – Value to use to generate a new Key.

  • drop (list of str or True, optional) – Existing dimensions of value to drop. See drop().

  • append (list of str, optional) – New dimensions to append to the returned Key. See append().

  • tag (str, optional) – Tag for returned Key. If value has a tag, the two are joined using a ‘+’ character. See add_tag().

Return type:

Key

iter_sums() Generator[Tuple[Key, Callable, Key], None, None][source]#

Generate (key, task) for all possible partial sums of the Key.

property name: str#

Name of the quantity, str.

classmethod product(new_name: str, *keys, tag: str | None = None) Key[source]#

Return a new Key that has the union of dimensions on keys.

Dimensions are ordered by their first appearance:

  1. First, the dimensions of the first of the keys.

  2. Next, any additional dimensions in the second of the keys that were not already added in step 1.

  3. etc.

Parameters:

new_name (str) – Name for the new Key. The names of keys are discarded.
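The ordering rule above amounts to a first-appearance union over the keys' dimension tuples, which can be sketched in a few lines (an illustration, not genno's implementation):

```python
def product_dims(*keys_dims):
    """Union of dimension tuples, ordered by first appearance."""
    seen = []
    for dims in keys_dims:
        for d in dims:
            if d not in seen:
                seen.append(d)
    return tuple(seen)

# foo:a-b-c * bar:b-c-d -> dimensions a-b-c-d
assert product_dims(("a", "b", "c"), ("b", "c", "d")) == ("a", "b", "c", "d")
```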

rename(name: str) Key[source]#

Return a Key with a replaced name.

property sorted: Key#

A version of the Key with its dims sorted().

property tag: str | None#

Quantity tag, str or None.

class genno.Quantity(*args, **kwargs)[source]#

A sparse data structure that behaves like xarray.DataArray.

Depending on the value of CLASS, Quantity is either AttrSeries or SparseDataArray.

astype(dtype, *, order=None, casting=None, subok=None, copy=None, keep_attrs=True)#

Like xarray.DataArray.astype().

bfill(dim: Hashable, limit: int | None = None)#

Like xarray.DataArray.bfill().

cumprod(dim: str | Collection[Hashable] | ellipsis | None = None, *, skipna: bool | None = None, keep_attrs: bool | None = None, **kwargs: Any)#

Like xarray.DataArray.cumprod().

property data: Any#

Like xarray.DataArray.data.

ffill(dim: Hashable, limit: int | None = None)#

Like xarray.DataArray.ffill().

classmethod from_series(series, sparse=True)[source]#

Convert series to the Quantity class given by CLASS.

property name: Hashable | None#

The name of this quantity.

pipe(func: Callable[[...], T] | Tuple[Callable[[...], T], str], *args: Any, **kwargs: Any) T#

Like xarray.DataArray.pipe().

property shape: Tuple[int, ...]#

Like xarray.DataArray.shape.

shift(shifts: Mapping[Any, int] | None = None, fill_value: Any | None = None, **shifts_kwargs: int)#

Like xarray.DataArray.shift().

property size: int#

Like xarray.DataArray.size.

squeeze(dim: Hashable | Iterable[Hashable] | None = None, drop: bool = False, axis: int | Iterable[int] | None = None)#

Like xarray.DataArray.squeeze().

to_series() Series#

Like xarray.DataArray.to_series().

property units#

Retrieve or set the units of the Quantity.

Examples

Create a quantity without units:

>>> qty = Quantity(...)

Set using a string; automatically converted to pint.Unit:

>>> qty.units = "kg"
>>> qty.units
<Unit('kilogram')>

The Quantity constructor converts its arguments to an internal, xarray.DataArray-like data format:

# Existing data
data = pd.Series(...)

# Convert to a Quantity for use in reporting calculations
qty = Quantity(data, name="Quantity name", units="kg")
rep.add("new_qty", qty)

Common genno usage, e.g. in message_ix, creates large, sparse data frames (billions of possible elements, but <1% populated); DataArray’s default, ‘dense’ storage format would be too large for available memory.

The goal is that all genno-based code, including built-in and user functions, can treat quantity arguments as if they were DataArray.

exception genno.ComputationError(exc)[source]#

Wrapper to print intelligible exception information for Computer.get().

In order to aid in debugging, this helper:

  • Omits the parts of the stack trace that are internal to dask, and

  • Gives the key in the Computer.graph and the computation/task that caused the exception.

exception genno.KeyExistsError[source]#

Raised by Computer.add() when the target key exists.

exception genno.MissingKeyError[source]#

Raised by Computer.add() when a required input key is missing.

Operators#

Elementary operators for genno.

Unless otherwise specified, these functions accept and return Quantity objects for data arguments/return values.

Genno’s compatibility modules each provide additional operators.

Numerical operators:

add(*quantities[, fill_value])

Sum across multiple quantities.

aggregate(quantity, groups, keep)

Aggregate quantity by groups.

broadcast_map(quantity, map[, rename, strict])

Broadcast quantity using a map.

combine(*quantities[, select, weights])

Sum distinct quantities by weights.

disaggregate_shares(quantity, shares)

Deprecated: Disaggregate quantity by shares.

div(numerator, denominator)

Compute the ratio numerator / denominator.

group_sum(qty, group, sum)

Group by dimension group, then sum across dimension sum.

index_to(qty, dim_or_selector[, label])

Compute an index of qty against certain of its values.

interpolate(qty[, coords, method, ...])

Interpolate qty.

mul(*quantities)

Compute the product of any number of quantities.

pow(a, b)

Compute a raised to the power of b.

product(*quantities)

Alias of mul(), for backwards compatibility.

ratio(numerator, denominator)

Alias of div(), for backwards compatibility.

sub(a, b)

Subtract b from a.

sum(quantity[, weights, dimensions])

Sum quantity over dimensions, with optional weights.

add_sum(func, c, key, qty[, weights, dimensions])

Computer.add() helper for sum().

Input and output:

load_file(path[, dims, units, name])

Read the file at path and return its contents as a Quantity.

add_load_file(func, c, path[, key])

Computer.add() helper for load_file().

write_report(quantity, path[, kwargs])

Write a quantity to a file.

Data manipulation:

apply_units(qty, units)

Apply units to qty.

assign_units(qty, units)

Set the units of qty without changing magnitudes.

concat()

Concatenate Quantity objs.

convert_units(qty, units)

Convert magnitude of qty from its current units to units.

relabel(qty[, labels])

Replace specific labels along dimensions of qty.

rename_dims(qty[, new_name_or_name_dict])

Rename the dimensions of qty.

select(qty, indexers, *[, inverse, drop])

Select from qty based on indexers.

genno.operator.add(*quantities: Quantity, fill_value: float = 0.0) Quantity[source]#

Sum across multiple quantities.

Raises:

ValueError – if any of the quantities have incompatible units.

Returns:

Units are the same as the first of quantities.

Return type:

Quantity

See also

add_binop

genno.operator.aggregate(quantity: Quantity, groups: Mapping[str, Mapping], keep: bool) Quantity[source]#

Aggregate quantity by groups.

Parameters:
  • groups (dict of dict) – Top-level keys are the names of dimensions in quantity. Second-level keys are group names; second-level values are lists of labels along the dimension to sum into a group. Labels may be literal values, or compiled re.Pattern objects; in the latter case, all matching labels (according to re.Pattern.fullmatch()) are included in the group to be aggregated.

  • keep (bool) – If True, the members that are aggregated into a group are returned with the group sums. If False, they are discarded.

Returns:

Same dimensionality as quantity.

Return type:

Quantity
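The groups semantics for a single dimension can be sketched with plain dicts: each group sums the values at matching labels, and compiled patterns match via re.Pattern.fullmatch(). This is an illustration of the rule only, not genno's implementation.

```python
import re

def aggregate_labels(values, groups, keep=False):
    """Sum values into named groups; members may be literal labels or
    compiled re.Pattern objects (matched with fullmatch())."""
    result = {}
    for group, members in groups.items():
        total = 0.0
        for label, v in values.items():
            for m in members:
                if (m.fullmatch(label) if isinstance(m, re.Pattern)
                        else m == label):
                    total += v
                    break
        result[group] = total
    if keep:  # keep the aggregated members alongside the group sums
        result.update(values)
    return result

values = {"coal": 1.0, "gas": 2.0, "solar": 3.0, "wind": 4.0}
groups = {"fossil": ["coal", "gas"], "renewable": [re.compile("solar|wind")]}
assert aggregate_labels(values, groups) == {"fossil": 3.0, "renewable": 7.0}
```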

genno.operator.apply_units(qty: Quantity, units: str | Unit | Quantity) Quantity[source]#

Apply units to qty.

If qty has existing units…

  • …with compatible dimensionality to units, the magnitudes are adjusted, i.e. behaves like convert_units().

  • …with incompatible dimensionality to units, the units attribute is overwritten and magnitudes are not changed, i.e. like assign_units(), with a log message on level WARNING.

To avoid ambiguities between the two cases, use convert_units() or assign_units() instead.

Parameters:

units (str or pint.Unit) – Units to apply to qty.

genno.operator.assign_units(qty: Quantity, units: str | Unit | Quantity) Quantity[source]#

Set the units of qty without changing magnitudes.

Logs on level INFO if qty has existing units.

Parameters:

units (str or pint.Unit) – Units to assign to qty.

genno.operator.broadcast_map(quantity: Quantity, map: Quantity, rename: Mapping = {}, strict: bool = False) Quantity[source]#

Broadcast quantity using a map.

The map must be a 2-dimensional Quantity with dimensions (d1, d2), such as returned by ixmp.report.operator.map_as_qty(). quantity must also have a dimension d1. Typically len(d2) > len(d1).

quantity is ‘broadcast’ by multiplying it with map, and then summing on the common dimension d1. The result has the dimensions of quantity, but with d2 in place of d1.

Parameters:
  • rename (dict, optional) – Dimensions to rename on the result; mapping from original dimension (str) to target name (str).

  • strict (bool, optional) – Require that each element of d2 is mapped from exactly 1 element of d1.
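The multiply-then-sum semantics reduce, for a 0/1-valued map, to copying each d1 value to its mapped d2 labels. A stdlib-only sketch of that reduction (not genno's implementation, which operates on Quantity objects):

```python
def broadcast_map(values, mapping):
    """values: quantity indexed by d1 labels; mapping: d1 -> list of d2
    labels. Multiplying by a 0/1 map and summing over d1 copies each
    value to its mapped d2 labels."""
    out = {}
    for d1, v in values.items():
        for d2 in mapping.get(d1, ()):
            out[d2] = out.get(d2, 0.0) + v
    return out

values = {"r": 5.0}               # quantity with dimension d1
mapping = {"r": ["r1", "r2"]}     # d1 -> d2; typically len(d2) > len(d1)
assert broadcast_map(values, mapping) == {"r1": 5.0, "r2": 5.0}
```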

genno.operator.combine(*quantities: Quantity, select: List[Mapping] | None = None, weights: List[float] | None = None) Quantity[source]#

Sum distinct quantities by weights.

Parameters:
  • *quantities (Quantity) – The quantities to be added.

  • select (list of dict) – Elements to be selected from each quantity. Must have the same number of elements as quantities.

  • weights (list of float) – Weight applied to each quantity. Must have the same number of elements as quantities.

Raises:

ValueError – If the quantities have mismatched units.

genno.operator.concat(*objs: Quantity, **kwargs) Quantity[source]#
genno.operator.concat(*args: IamDataFrame, **kwargs) IamDataFrame

Concatenate Quantity objs.

Any strings included amongst objs are discarded, with a logged warning; these usually indicate that a quantity is referenced which is not in the Computer.

genno.operator.convert_units(qty: Quantity, units: str | Unit | Quantity) Quantity[source]#

Convert magnitude of qty from its current units to units.

Parameters:

units (str or pint.Unit) – Units to which the magnitude of qty is converted.

Raises:

ValueError – if units does not match the dimensionality of the current units of qty.

genno.operator.disaggregate_shares(quantity: Quantity, shares: Quantity) Quantity[source]#

Deprecated: Disaggregate quantity by shares.

This operator is identical to mul(); use mul() and its helper instead.

genno.operator.div(numerator: Quantity | float, denominator: Quantity) Quantity[source]#

Compute the ratio numerator / denominator.

See also

add_binop

genno.operator.drop_vars(qty: Quantity, names: str | Iterable[Hashable] | Callable[[Quantity], str | Iterable[Hashable]], *, errors='raise') Quantity[source]#

Return a Quantity with dropped variables (coordinates).

Like xarray.DataArray.drop_vars().

genno.operator.group_sum(qty: Quantity, group: str, sum: str) Quantity[source]#

Group by dimension group, then sum across dimension sum.

The result drops the latter dimension.

genno.operator.index_to(qty: Quantity, dim_or_selector: str | Mapping, label: Hashable | None = None) Quantity[source]#

Compute an index of qty against certain of its values.

If the label is not provided, index_to() uses the label in the first position along the identified dimension.

Parameters:
  • qty (Quantity) –

  • dim_or_selector (str or collections.abc.Mapping) – If a string, the ID of the dimension to index along. If a mapping, it must have only one element, mapping a dimension ID to a label.

  • label (Hashable) – Label to select along the dimension, required if dim_or_selector is a string.

Raises:

TypeError – if dim_or_selector is a mapping with length != 1.

genno.operator.interpolate(qty: Quantity, coords: Mapping[Hashable, Any] | None = None, method: Literal['linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'polynomial'] | Literal['barycentric', 'krogh', 'pchip', 'spline', 'akima'] = 'linear', assume_sorted: bool = True, kwargs: Mapping[str, Any] | None = None, **coords_kwargs: Any) Quantity[source]#

Interpolate qty.

For the meaning of arguments, see xarray.DataArray.interp(). When CLASS is AttrSeries, only 1-dimensional interpolation (one key in coords) is tested/supported.

genno.operator.load_file(path: Path, dims: Collection[Hashable] | Mapping[Hashable, Hashable] = {}, units: str | Unit | Quantity | None = None, name: str | None = None) Any[source]#

Read the file at path and return its contents as a Quantity.

Some file formats are automatically converted into objects for direct use in genno computations:

.csv:

Converted to Quantity. CSV files must have a ‘value’ column; all others are treated as indices, except as given by dims. Lines beginning with ‘#’ are ignored.

User code may define an operator with the same name (“load_file”) in order to override this behaviour and/or add tailored support for other data file formats, for instance specific kinds of .json, .xml, .yaml, .ods, .xlsx, or other file types.

Parameters:
  • path (pathlib.Path) – Path to the file to read.

  • dims (collections.abc.Collection or collections.abc.Mapping, optional) – If a collection of names, other columns besides these and ‘value’ are discarded. If a mapping, the keys are the column labels in path, and the values are the target dimension names.

  • units (str or pint.Unit) – Units to apply to the loaded Quantity.

  • name (str) – Name for the loaded Quantity.

See also

add_load_file

genno.operator.mul(*quantities: Quantity) Quantity[source]#

Compute the product of any number of quantities.

See also

add_binop

genno.operator.pow(a: Quantity, b: Quantity | int) Quantity[source]#

Compute a raised to the power of b.

Returns:

If b is int or a Quantity with all int values that are equal to one another, then the quantity has the units of a raised to this power; for example, “kg²” → “kg⁴” if b is 2. In other cases, there are no meaningful units, so the returned quantity is dimensionless.

Return type:

Quantity

genno.operator.product(*quantities: Quantity) Quantity#

Alias of mul(), for backwards compatibility.

Note

This may be deprecated and possibly removed in a future version.

genno.operator.ratio(numerator: Quantity | float, denominator: Quantity) Quantity#

Alias of div(), for backwards compatibility.

Note

This may be deprecated and possibly removed in a future version.

genno.operator.relabel(qty: Quantity, labels: Mapping[Hashable, Mapping] | None = None, **dim_labels: Mapping) Quantity[source]#

Replace specific labels along dimensions of qty.

Parameters:
  • labels – Keys are strings identifying dimensions of qty; values are further mappings from original labels to new labels. Dimensions and labels not appearing in qty have no effect.

  • dim_labels – Mappings given as keyword arguments, where argument name is the dimension.

Raises:

ValueError – if both labels and dim_labels are given.

genno.operator.rename_dims(qty: Quantity, new_name_or_name_dict: Hashable | Mapping[Any, Hashable] | None = None, **names: Hashable) Quantity[source]#

Rename the dimensions of qty.

Like xarray.DataArray.rename().

genno.operator.round(qty: Quantity, *args, **kwargs) Quantity[source]#

Like xarray.DataArray.round().

genno.operator.select(qty: Quantity, indexers: Mapping[Hashable, Iterable[Hashable]], *, inverse: bool = False, drop: bool = False) Quantity[source]#

Select from qty based on indexers.

Parameters:
  • indexers (dict) –

    Elements to be selected from qty. Mapping from dimension names (str) to either:

    • list of str: coords along the respective dimension of qty, or

    • xarray.DataArray: xarray-style indexers.

    Values not appearing in the dimension coords are silently ignored.

  • inverse (bool, optional) – If True, remove the items in indexers instead of keeping them.

  • drop (bool, optional) – If True, drop dimensions that are indexed by a scalar value (for instance, "foo" or 999) in indexers. Note that dimensions indexed by a length-1 list of labels (for instance ["foo"]) are not dropped; this behaviour is consistent with xarray.DataArray.

genno.operator.sub(a: Quantity, b: Quantity) Quantity[source]#

Subtract b from a.

See also

add_binop

genno.operator.sum(quantity: Quantity, weights: Quantity | None = None, dimensions: List[str] | None = None) Quantity[source]#

Sum quantity over dimensions, with optional weights.

Parameters:
  • weights (Quantity, optional) – If dimensions is given, weights must have at least these dimensions. Otherwise, any dimensions are valid.

  • dimensions (list of str, optional) – If not provided, sum over all dimensions. If provided, sum over these dimensions.

genno.operator.write_report(quantity: object, path: str | PathLike, kwargs: dict | None = None) None[source]#
genno.operator.write_report(quantity: str, path: str | PathLike, kwargs: dict | None = None)
genno.operator.write_report(quantity: DataFrame, path: str | PathLike, kwargs: dict | None = None) None
genno.operator.write_report(quantity: Quantity, path: str | PathLike, kwargs: dict | None = None) None
genno.operator.write_report(obj: DataMessage, path, kwargs=None) None
genno.operator.write_report(quantity: IamDataFrame, path, kwargs=None) None

Write a quantity to a file.

write_report() is a singledispatch() function. This means that user code can extend this operator to support different types for the quantity argument:

import genno.operator

@genno.operator.write_report.register
def my_writer(qty: MyClass, path, kwargs):
    ... # Code to write MyClass to file
Parameters:
  • quantity – Object to be written. The base implementation supports Quantity and pandas.DataFrame.

  • path (str or pathlib.Path) – Path to the file to be written.

  • kwargs

    Keyword arguments. For the base implementation, these are passed to pandas.DataFrame.to_csv() or pandas.DataFrame.to_excel() (according to path), except for:

    • “header_comment”: valid only for path ending in .csv. Multi-line text that is prepended to the file, with comment characters (“# ”) before each line.

Raises:

NotImplementedError – If quantity is of a type not supported by the base implementation or any overloads.

Helper functions for adding tasks to Computers#

genno.operator.add_binop(func, c: Computer, key, *quantities, **kwargs) Key[source]#

Computer.add() helper for binary operations.

Add a computation that applies add(), div(), mul(), or sub() to quantities.

Parameters:
  • key (str or Key) – Key or name of the new quantity. If a Key, any dimensions are ignored; the dimensions of the result are the union of the dimensions of quantities.

  • sums (bool, optional) – If True, all partial sums of the new quantity are also added.

Returns:

The full key of the new quantity.

Return type:

Key

Example

>>> c = Computer()
>>> x = c.add("x:a-b-c", ...)
>>> y = c.add("y:c-d-e", ...)
>>> z = c.add("z", "mul", x, y)
>>> z
<z:a-b-c-d-e>
genno.operator.add_load_file(func, c: Computer, path, key=None, **kwargs)[source]#

Computer.add() helper for load_file().

Add a task to load an exogenous quantity from path. Computing the key or using it in other computations causes path to be loaded and converted to Quantity.

Parameters:
  • path (os.PathLike) – Path to the file, e.g. ‘/path/to/foo.ext’.

  • key (str or Key, optional) – Key for the quantity read from the file.

  • dims (dict or list or set) – Either a collection of names for dimensions of the quantity, or a mapping from names appearing in the input to dimensions.

  • units (str or pint.Unit) – Units to apply to the loaded Quantity.

Returns:

Either key (if given) or e.g. file foo.ext based on the path name, without directory components.

Return type:

Key

genno.operator.add_sum(func, c: Computer, key, qty, weights=None, dimensions=None, **kwargs) Key | str | Tuple[Key | str, ...][source]#

Computer.add() helper for sum().

If key has the name “*”, the returned key has name and dimensions inferred from qty and dimensions, and only the tag (if any) of key is preserved.

Internal format for quantities#

genno.core.quantity.CLASS = 'AttrSeries'#

Name of the class used to implement Quantity.

genno.core.quantity.assert_quantity(*args)[source]#

Assert that each of args is a Quantity object.

Raises:

TypeError – with an indicative message.

genno.core.quantity.maybe_densify(func)[source]#

Wrapper for operations that densifies SparseDataArray input.

class genno.core.attrseries.AttrSeriesCoordinates(obj)[source]#
property variables#

Low level interface to Coordinates contents as dict of Variable objects.

This dictionary is frozen to prevent mutation.

class genno.core.attrseries.AttrSeries(*args, **kwargs)[source]#

pandas.Series subclass imitating xarray.DataArray.

The AttrSeries class provides similar methods and behaviour to xarray.DataArray, so that genno.operator functions and user code can use xarray-like syntax. In particular, this allows such code to be agnostic about the order of dimensions.

Parameters:
  • units (str or pint.Unit, optional) – Set the units attribute. The value is converted to pint.Unit and added to attrs.

  • attrs (Mapping, optional) – Set the attrs of the AttrSeries. This attribute was added in pandas 1.0, but is not currently supported by the Series constructor.

name#

The name of this Quantity.

Like xarray.DataArray.name.

align_levels(other: AttrSeries) Tuple[Sequence[Hashable], AttrSeries][source]#

Return a copy of self with ≥1 dimension(s) in the same order as other.

Work-around for pandas-dev/pandas#25760 and other limitations of pandas.Series.

assign_coords(coords=None, **coord_kwargs)[source]#

Like xarray.DataArray.assign_coords().

bfill(dim: Hashable, limit: int | None = None)[source]#

Like xarray.DataArray.bfill().

property coords#

Like xarray.DataArray.coords. Read-only.

cumprod(dim=None, axis=None, skipna=None, **kwargs)[source]#

Like xarray.DataArray.cumprod().

property data#

Like xarray.DataArray.data.

property dims: Tuple[Hashable, ...]#

Like xarray.DataArray.dims.

drop(label)[source]#

Like xarray.DataArray.drop().

drop_vars(names: Hashable | Iterable[Hashable], *, errors: str = 'raise')[source]#

Like xarray.DataArray.drop_vars().

expand_dims(dim=None, axis=None, **dim_kwargs: Any) AttrSeries[source]#

Like xarray.DataArray.expand_dims().

ffill(dim: Hashable, limit: int | None = None)[source]#

Like xarray.DataArray.ffill().

classmethod from_series(series, sparse=None)[source]#

Like xarray.DataArray.from_series().

interp(coords: Mapping[Hashable, Any] | None = None, method: str = 'linear', assume_sorted: bool = True, kwargs: Mapping[str, Any] | None = None, **coords_kwargs: Any)[source]#

Like xarray.DataArray.interp().

This method works around two long-standing bugs in pandas.

item(*args)[source]#

Like xarray.DataArray.item().

rename(new_name_or_name_dict: Hashable | Mapping[Hashable, Hashable] | None = None, **names: Hashable)[source]#

Like xarray.DataArray.rename().

sel(indexers: Mapping[Any, Any] | None = None, method: str | None = None, tolerance=None, drop: bool = False, **indexers_kwargs: Any)[source]#

Like xarray.DataArray.sel().

property shape: Tuple[int, ...]#

Like xarray.DataArray.shape.

shift(shifts: Mapping[Hashable, int] | None = None, fill_value: Any | None = None, **shifts_kwargs: int)[source]#

Like xarray.DataArray.shift().

squeeze(dim=None, drop=False, axis=None)[source]#

Like xarray.DataArray.squeeze().

sum(dim: str | Collection[Hashable] | ellipsis | None = None, skipna: bool | None = None, min_count: int | None = None, keep_attrs: bool | None = None, **kwargs: Any) AttrSeries[source]#

Like xarray.DataArray.sum().

to_dataframe(name: Hashable | None = None, dim_order: Sequence[Hashable] | None = None) DataFrame[source]#

Like xarray.DataArray.to_dataframe().

to_series()[source]#

Like xarray.DataArray.to_series().

transpose(*dims)[source]#

Like xarray.DataArray.transpose().

class genno.core.sparsedataarray.SparseAccessor(obj)[source]#

xarray accessor to help SparseDataArray.

See the xarray accessor documentation, e.g. register_dataarray_accessor().

property COO_data#

True if the DataArray has sparse.COO data.

convert()[source]#

Return a SparseDataArray instance.

property dense#

Return a copy with dense (numpy.ndarray) data.

property dense_super#

Return a proxy to a numpy.ndarray-backed xarray.DataArray.

class genno.core.sparsedataarray.SparseDataArray(*args, **kwargs)[source]#

DataArray with sparse data.

SparseDataArray uses sparse.COO for storage with numpy.nan as its sparse.SparseArray.fill_value. Some methods of DataArray are overridden to ensure data is in sparse, or dense, format as necessary, to provide expected functionality not currently supported by sparse, and to avoid exhausting memory for some operations that require dense data.

ffill(dim: Hashable, limit: int | None = None)[source]#

Override ffill() to auto-densify.

classmethod from_series(obj, sparse=True)[source]#

Convert a pandas.Series into a SparseDataArray.

item(*args)#

Like item().

sel(indexers: Mapping[Any, Any] | None = None, method: str | None = None, tolerance=None, drop: bool = False, **indexers_kwargs: Any) SparseDataArray[source]#

Return a new array by selecting labels along the specified dim(s).

Overrides sel() to handle >1-D indexers with sparse data.

squeeze(dim=None, drop=False, axis=None)[source]#

Return a new object with squeezed data.

Parameters:
  • dim (None or Hashable or collections.abc.Iterable of Hashable, optional) – Selects a subset of the length one dimensions. If a dimension is selected with length greater than one, an error is raised. If None, all length one dimensions are squeezed.

  • drop (bool, default: False) – If drop=True, drop squeezed coordinates instead of making them scalar.

  • axis (None or int or collections.abc.Iterable of int, optional) – Like dim, but positional.

Returns:

squeezed – This object, but with all or a subset of the dimensions of length 1 removed.

Return type:

same type as caller

See also

numpy.squeeze

to_dataframe(name: Hashable | None = None, dim_order: Sequence[Hashable] | None = None) DataFrame[source]#

Convert this array and its coords into a pandas.DataFrame.

Overrides to_dataframe().

to_series() Series[source]#

Convert this array into a Series.

Overrides to_series() to create the series without first converting to a potentially very large numpy.ndarray.

class genno.compat.xarray.DataArrayLike[source]#

Class with xarray.DataArray -like API.

This class is used to set signatures and types for methods and attributes on the generic Quantity class. SparseDataArray inherits from both this class and DataArray, and thus DataArray supplies implementations of these methods. In AttrSeries, the methods are implemented directly.

Internals and utilities#

genno.compat.graphviz.unwrap(label: str) str[source]#

Unwrap any number of paired ‘<’ and ‘>’ at the start/end of label.

These characters cause errors in graphviz/dot.

genno.compat.graphviz.visualize(dsk: Mapping, filename: str | PathLike | None = None, format: str | None = None, data_attributes: Mapping | None = None, function_attributes: Mapping | None = None, graph_attr: Mapping | None = None, node_attr: Mapping | None = None, edge_attr: Mapping | None = None, collapse_outputs=False, **kwargs)[source]#

Generate a Graphviz visualization of dsk.

This is a merged and extended version of dask.base.visualize(), dask.dot.dot_graph(), and dask.dot.to_graphviz() that produces output that is informative for genno graphs.

Parameters:
  • dsk – The graph to display.

  • filename (pathlib.Path or str, optional) – The name of the file to write to disk. If the file name does not have a suffix, “.png” is used by default. If filename is None, no file is written, and dask communicates with dot using only pipes.

  • format ({'png', 'pdf', 'dot', 'svg', 'jpeg', 'jpg'}, optional) – Format in which to write output file, if not given by the suffix of filename. Default “png”.

  • data_attributes – Graphviz attributes to apply to single nodes representing keys, in addition to node_attr.

  • function_attributes – Graphviz attributes to apply to single nodes representing operations or functions, in addition to node_attr.

  • graph_attr – Mapping of (attribute, value) pairs for the graph. Passed directly to graphviz.Digraph.

  • node_attr – Mapping of (attribute, value) pairs set for all nodes. Passed directly to graphviz.Digraph.

  • edge_attr – Mapping of (attribute, value) pairs set for all edges. Passed directly to graphviz.Digraph.

  • collapse_outputs (bool, optional) – Omit nodes for keys that are the output of intermediate calculations.

  • kwargs – All other keyword arguments are added to graph_attr.

Examples

Prepare a computer:

>>> from genno import Computer
>>> from genno.testing import add_test_data
>>> c = Computer()
>>> add_test_data(c)
>>> c.add_product("z", "x:t", "x:y")
>>> c.add("y::0", itemgetter(0), "y")
>>> c.add("y0", "y::0")
>>> c.add("index_to", "z::indexed", "z:y", "y::0")
>>> c.add_single("all", ["z::indexed", "t", "config", "x:t"])

Visualize its contents:

>>> c.visualize("example.svg")

This produces the output:

Example output from graphviz.visualize.

See also

describe.label

genno.core.describe.MAX_ITEM_LENGTH = 160#

Default maximum length for outputs from describe_recursive().

genno.core.describe.describe_recursive(graph, comp, depth=0, seen=None)[source]#

Recursive helper for describe().

Parameters:
  • graph – A dask graph.

  • comp – A dask computation.

  • depth (int) – Recursion depth. Used for indentation.

  • seen (set) – Keys that have already been described. Used to avoid double-printing.

genno.core.describe.is_list_of_keys(arg: Any, graph: Mapping) bool[source]#

Identify a task which is a list of other keys.

genno.core.describe.label(arg, max_length=160) str[source]#

Return a label for arg.

The label depends on the type of arg:

  • xarray.DataArray: the first line of the string representation.

  • functools.partial() object: a less-verbose version that omits None arguments.

  • Item protected with dask.core.quote(): its literal value.

  • A callable, e.g. a function: its name.

  • Anything else: its str representation.

In all cases, the string is no longer than max_length.

class genno.core.graph.Graph(*args, **kwargs)[source]#

A dictionary for a graph indexed by Key.

Graph maintains indexes on set/delete/pop/update operations that allow for fast lookups/member checks in certain special cases:

unsorted_key(key)

Return key with its original or unsorted dimensions.

full_key(name_or_key)

Return name_or_key with its full dimensions.

These basic features are used to provide higher-level helpers for Computer:

infer(key[, dims])

Infer a key.

full_key(name_or_key: Key | str) Key | str | None[source]#

Return name_or_key with its full dimensions.

infer(key: str | Key, dims: Iterable[str] = []) Key | str | None[source]#

Infer a key.

Parameters:

dims (list of str, optional) – Drop all but these dimensions from the returned key(s).

Returns:

  • str – If key is not found in the Graph.

  • Key – key with either its full dimensions (cf. full_key()) or, if dims are given, with only these dims.

pop(*args)[source]#

Overload dict.pop() to also call _deindex().

unsorted_key(key: Key | str) Key | str | None[source]#

Return key with its original or unsorted dimensions.

update(arg=None, **kwargs)[source]#

Overload dict.update() to also call _index().

genno.core.key.KeyLike#

Type shorthand for Key or any other value that can be used as a key.

alias of Union[Key, str]

genno.core.key.iter_keys(value: Key | str | Tuple[Key | str, ...]) Iterator[Key][source]#

Yield Keys from value.

Raises:

TypeError – If value is not an iterable of Key.

See also

Computer.add

genno.core.key.single_key(value: Key | str | Tuple[Key | str, ...] | Iterator) Key[source]#

Ensure value is a single Key.

Raises:

TypeError – If value is not a Key or 1-tuple of Key.

See also

Computer.add

class genno.core.operator.Operator[source]#

Base class for a callable with convenience methods.

Example

>>> from genno import Operator
>>>
>>> @Operator.define()
... def myfunc(q1: Quantity, q2: Quantity) -> Quantity:
...     # Operator code
>>>
>>> @myfunc.helper
... def add_myfunc(f, computer, *args, **kwargs):
...     # Custom code to add tasks to `computer`
...     # Perform checks or handle `args` and `kwargs`.

Or:

>>> from genno import Operator
>>>
>>> def add_myfunc(f, computer, *args, **kwargs):
...     # ... as above
>>>
>>> @Operator.define(helper=add_myfunc)
... def myfunc(q1: Quantity, q2: Quantity) -> Quantity:
...     # ... as above
add_tasks(c: Computer, *args, **kwargs) Tuple[Key | str, ...][source]#

Invoke _add_task to add tasks to c.

static define(deprecated_func_arg: Callable | None = None, *, helper: Callable | None = None) Callable[[Callable], Operator][source]#

Return a decorator that wraps func in an Operator instance.

Parameters:

helper (Callable, optional) – Equivalent to calling helper() on the Operator instance.

func: ClassVar[Callable]#

Function or callable for the Operator.

helper(func: Callable[[...], Key | str | Tuple[Key | str, ...]]) Callable[source]#

Register func as the convenience method for adding task(s).

genno.util.REPLACE_UNITS = {'%': 'percent'}#

Replacements to apply to Quantity units before parsing by pint. Mapping from original unit -> preferred unit.

The default values include:

  • The ‘%’ symbol cannot be supported by pint, because it is a Python operator; it is replaced with “percent”.

Additional values can be added with configure(); see units:.

genno.util.clean_units(input_string)[source]#

Tolerate messy strings for units.

  • Dimensions enclosed in “[]” have these characters stripped.

  • Replacements from REPLACE_UNITS are applied.

genno.util.collect_units(*args)[source]#

Return the “_unit” attributes of the args.

genno.util.filter_concat_args(args)[source]#

Filter out str and Key from args.

A warning is logged for each element removed.

genno.util.free_parameters(func: Callable) Mapping[source]#

Retrieve information on the free parameters of func.

Identical to inspect.signature(func).parameters; that is, to inspect.Signature.parameters. free_parameters also:

  • Handles functions that have been functools.partial()’d, returning only the parameters that have not already been assigned a value by the partial() call—the “free” parameters.

  • Caches return values for better performance.

genno.util.parse_units(data: Iterable, registry=None) Unit[source]#

Return a pint.Unit for an iterable of strings.

Valid unit expressions not already present in the registry are defined, e.g.:

u = parse_units(["foo/bar", "foo/bar"], reg)

…results in the addition of unit definitions equivalent to:

reg.define("foo = [foo]")
reg.define("bar = [bar]")
u = reg.foo / reg.bar
Raises:

ValueError – if data contains more than 1 unit expression, or the unit expression contains characters not parseable by pint, e.g. -?$.

genno.util.partial_split(func: Callable, kwargs: Mapping) Tuple[Callable, MutableMapping][source]#

Forgiving version of functools.partial().

Returns a partial object and leftover keyword arguments that are not applicable to func.

genno.util.unquote(value)[source]#

Reverse dask.core.quote().

Utilities for testing#

genno.testing.add_dantzig(c: Computer)[source]#

Add contents analogous to the ixmp Dantzig scenario.

genno.testing.add_large_data(c: Computer, num_params, N_dims=6, N_data=0)[source]#

Add nodes to c that return large-ish data.

The result is a matrix wherein the Cartesian product of all the keys is very large—about 2e17 elements for N_dims = 6—but the contents are very sparse. This can be handled by SparseDataArray, but not by xarray.DataArray backed by numpy.ndarray.

genno.testing.add_test_data(c: Computer)[source]#

add_test_data() operating on a Computer, not an ixmp.Scenario.

genno.testing.assert_logs(caplog, message_or_messages=None, at_level=None)[source]#

Assert that message_or_messages appear in logs.

Use assert_logs as a context manager for a statement that is expected to trigger certain log messages. assert_logs checks that these messages are generated.

Derived from ixmp.testing.assert_logs().

Example

>>> def test_foo(caplog):
...     with assert_logs(caplog, 'a message'):
...         logging.getLogger(__name__).info('this is a message!')
Parameters:
  • caplog (object) – The pytest caplog fixture.

  • message_or_messages (str or list of str) – String(s) that must appear in log messages.

  • at_level (int, optional) – Messages must appear on ‘genno’ or a sub-logger with at least this level.

genno.testing.assert_qty_allclose(a, b, check_type: bool = True, check_attrs: bool = True, ignore_extra_coords: bool = False, **kwargs)[source]#

Assert that objects a and b have numerically close values.

Parameters:
  • check_type (bool, optional) – Assert that a and b are both Quantity instances. If False, the arguments are converted to Quantity.

  • check_attrs (bool, optional) – Also assert that attributes are identical.

  • ignore_extra_coords (bool, optional) – Ignore extra coords that are not dimensions. Only meaningful when Quantity is SparseDataArray.

genno.testing.assert_qty_equal(a, b, check_type: bool = True, check_attrs: bool = True, ignore_extra_coords: bool = False, **kwargs)[source]#

Assert that objects a and b are equal.

Parameters:
  • check_type (bool, optional) – Assert that a and b are both Quantity instances. If False, the arguments are converted to Quantity.

  • check_attrs (bool, optional) – Also assert that attributes are identical.

  • ignore_extra_coords (bool, optional) – Ignore extra coords that are not dimensions. Only meaningful when Quantity is SparseDataArray.

genno.testing.assert_units(qty: Quantity, exp: str) None[source]#

Assert that qty has units exp.

genno.testing.get_test_quantity(key: Key) Quantity[source]#

Computation that returns test data.

genno.testing.pytest_runtest_makereport(item, call)[source]#

Pytest hook to unwrap genno.ComputationError.

This allows to “xfail” tests more precisely on the underlying exception, rather than the ComputationError which wraps it.

genno.testing.random_qty(shape: Dict[str, int], **kwargs) Quantity[source]#

Return a Quantity with shape and random contents.

Parameters:
  • shape (dict) – Mapping from dimension names (str) to lengths along each dimension (int).

  • **kwargs – Other keyword arguments to Quantity.

Returns:

Random data with one dimension for each key in shape, and coords along those dimensions like “foo1”, “foo2”, with total length matching the value from shape. If shape is empty, a scalar (0-dimensional) Quantity.

Return type:

Quantity

genno.testing.test_data_path()[source]#

Path to the directory containing test data.

genno.testing.ureg()[source]#

Application-wide units registry.

Testing Jupyter notebooks.

Copied 2023-04-27 from the corresponding module in ixmp.

genno.testing.jupyter.get_cell_output(nb, name_or_index, kind='data')[source]#

Retrieve a cell from nb according to its metadata name_or_index:

The Jupyter notebook format allows specifying a document-wide unique ‘name’ metadata attribute for each cell:

https://nbformat.readthedocs.io/en/latest/format_description.html#cell-metadata

Return the cell matching name_or_index if str; or the cell at the int index; or raise ValueError.

Parameters:

kind (str, optional) – Kind of cell output to retrieve. For ‘data’, the data in format ‘text/plain’ is run through eval(). To retrieve an exception message, use ‘evalue’.

genno.testing.jupyter.run_notebook(nb_path, tmp_path, env=None, **kwargs)[source]#

Execute a Jupyter notebook via nbclient and collect output.

Parameters:
  • nb_path (os.PathLike) – The notebook file to execute.

  • tmp_path (os.PathLike) – A directory in which to create temporary output.

  • env (collections.abc.Mapping, optional) – Execution environment for nbclient. Default: os.environ.

  • kwargs

    Keyword arguments for nbclient.NotebookClient. Defaults are set for:

    “allow_errors”

    Default False. If True, the execution always succeeds, and cell output contains exception information rather than code outputs.

    “kernel_version”

    Jupyter kernel to use. Default: either “python2” or “python3”, matching the current Python major version.

    Warning

    Any existing configuration for this kernel on the local system— such as an IPython start-up file—will be executed when the kernel starts. Code that enables GUI features can interfere with run_notebook().

    “timeout”

    in seconds; default 10.

Returns: