sciunit package

Submodules

sciunit.base module

The base class for many SciUnit objects.

class sciunit.base.SciUnit[source]

Bases: sciunit.base.Versioned

Abstract base class for models, tests, and scores.

__getstate__()[source]

Copy the object’s state from self.__dict__.

Contains all of the instance attributes. Always uses the dict.copy() method to avoid modifying the original state.

__init__()[source]

Instantiate a SciUnit object.

__module__ = 'sciunit.base'
property _class
property _id
_properties(keys=None, exclude=None)[source]
_state(state=None, keys=None, exclude=None)[source]
_url = None

A URL where the code for this object can be found.

classmethod dict_hash(d)[source]
property hash

A unique numeric identifier of the current model state

property id
json(add_props=False, keys=None, exclude=None, string=True, indent=None)[source]
property properties
raw_props()[source]
property state
unpicklable = []

A list of attributes that cannot or should not be pickled.

property url
verbose = 1

A verbosity level for printing information.
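
For orientation, a minimal sketch of the serialization helpers above; the Thing subclass is hypothetical and not part of SciUnit:

from sciunit.base import SciUnit

class Thing(SciUnit):
    """A hypothetical minimal SciUnit subclass."""
    def __init__(self, a=1):
        self.a = a
        super().__init__()

t = Thing(a=2)
print(t.json(indent=2))  # state serialized to JSON via SciUnitEncoder
print(t.hash)            # identifier derived from the current object state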

class sciunit.base.SciUnitEncoder(*args, **kwargs)[source]

Bases: json.encoder.JSONEncoder

Custom JSON encoder for SciUnit objects

__init__(*args, **kwargs)[source]

Constructor for JSONEncoder, with sensible defaults.

If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.

If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.

If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause an OverflowError). Otherwise, no such check takes place.

If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.

If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.

If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.

If specified, separators should be an (item_separator, key_separator) tuple. The default is (‘, ‘, ‘: ‘) if indent is None and (‘,’, ‘: ‘) otherwise. To get the most compact JSON representation, you should specify (‘,’, ‘:’) to eliminate whitespace.

If specified, default is a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.

__module__ = 'sciunit.base'
default(obj)[source]

Implement this method in a subclass such that it returns a serializable object for o, or calls the base implementation (to raise a TypeError).

For example, to support arbitrary iterators, you could implement default like this:

def default(self, o):
    try:
        iterable = iter(o)
    except TypeError:
        pass
    else:
        return list(iterable)
    # Let the base class default method raise the TypeError
    return JSONEncoder.default(self, o)
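
As a usage note (an assumption about typical use, not taken from the SciUnit docs), the encoder plugs into the standard json machinery via the cls argument:

import json
from sciunit.base import SciUnitEncoder
from sciunit.capabilities import ProducesNumber

# Serialize a SciUnit object (here an example capability instance)
# through the standard library by selecting this encoder class.
print(json.dumps(ProducesNumber(), cls=SciUnitEncoder, indent=2))
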
class sciunit.base.TestWeighted[source]

Bases: object

Base class for objects with test weights.

__module__ = 'sciunit.base'
__weakref__

list of weak references to the object (if defined)

property weights

Returns a normalized list of test weights.
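
A brief sketch of the expected behavior, assuming normalization means dividing each weight by the sum; the RangeTest observations are placeholder (min, max) pairs:

from sciunit.tests import RangeTest
from sciunit.suites import TestSuite

tests = [RangeTest([1, 2], name="t1"), RangeTest([2, 4], name="t2")]
suite = TestSuite(tests, weights=[1, 3])
print(suite.weights)  # presumably [0.25, 0.75]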

class sciunit.base.Versioned[source]

Bases: object

A Mixin class for SciUnit objects.

Provides a version string based on the Git repository where the model is tracked. Provided in part by Andrew Davison in issue #53.

__module__ = 'sciunit.base'
__weakref__

list of weak references to the object (if defined)

get_remote(remote='origin')[source]

Get a git remote object for this instance.

get_remote_url(remote='origin', cached=True)[source]

Get a git remote URL for this instance.

get_repo(cached=True)[source]

Get a git repository object for this instance.

get_version(cached=True)[source]

Get a git version (i.e. a git commit hash) for this instance.

property remote_url

Get a git remote URL for this instance.

property version

Get a git version (i.e. a git commit hash) for this instance.
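
For illustration, roughly what these Git-derived properties report, sketched directly with GitPython (which Versioned relies on); this assumes the code is tracked in a Git repository with an origin remote:

import git

repo = git.Repo('.', search_parent_directories=True)
print(repo.head.commit.hexsha)   # roughly what the version property returns
print(repo.remotes.origin.url)   # roughly what the remote_url property returns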

sciunit.base.deep_exclude(state, exclude)[source]

sciunit.capabilities module

The base class for SciUnit capabilities.

By inheriting a capability class, a model tells the test that it implements that capability and that all of its methods are safe to call. The capability must then be implemented by the modeler (i.e. all of the capability’s methods must be implemented in the model class).
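
A minimal sketch of this pattern; the capability and model below are hypothetical and assume the package-level aliases sciunit.Capability and sciunit.Model:

import sciunit

class ProducesSpikeCount(sciunit.Capability):
    """Hypothetical capability: the model can report a spike count."""
    def get_spike_count(self):
        # Raised unless the inheriting model overrides this method.
        self.unimplemented()

class ToyNeuronModel(sciunit.Model, ProducesSpikeCount):
    """A model advertises the capability by inheriting it, then implements it."""
    def get_spike_count(self):
        return 42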

class sciunit.capabilities.Capability[source]

Bases: sciunit.base.SciUnit

Abstract base class for sciunit capabilities.

class __metaclass__[source]

Bases: type

__module__ = 'sciunit.capabilities'
property name
__module__ = 'sciunit.capabilities'
__str__()[source]

Return str(self).

classmethod check(model, require_extra=False)[source]

Check whether the provided model has this capability.

By default, uses isinstance. If require_extra, also requires that an instance check be present in model.extra_capability_checks.

unimplemented(message='')[source]

Raise a CapabilityNotImplementedError with details.

class sciunit.capabilities.ProducesNumber[source]

Bases: sciunit.capabilities.Capability

An example capability for producing some generic number.

__module__ = 'sciunit.capabilities'
produce_number()[source]

Produce a number.

class sciunit.capabilities.Runnable[source]

Bases: sciunit.capabilities.Capability

Capability for models that can be run, i.e. simulated.

__module__ = 'sciunit.capabilities'
run(**run_params)[source]

Run, i.e. simulate the model.

set_default_run_params(**default_run_params)[source]

Set default parameters for all runs.

Note these are parameters of the simulation itself, not the model.

set_run_params(**run_params)[source]

Set parameters for the next run.

Note these are parameters of the simulation itself, not the model.
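
A minimal sketch of a model advertising Runnable; the ToySimModel class is hypothetical, and real models would typically also override set_run_params / set_default_run_params or build on RunnableModel:

import sciunit
from sciunit.capabilities import Runnable

class ToySimModel(sciunit.Model, Runnable):
    """Hypothetical model implementing the Runnable capability."""
    def run(self, **run_params):
        duration = run_params.get('duration', 100)  # a simulation parameter
        self.results = list(range(duration))        # placeholder "simulation"

model = ToySimModel(name="toy_sim")
model.run(duration=50)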

sciunit.converters module

Classes for converting from the output of a model/data comparison to the value required for particular score type.

class sciunit.converters.AtLeastToBoolean(cutoff)[source]

Bases: sciunit.converters.Converter

Converts a score to Pass if its value is at least $cutoff, otherwise Fail.

__init__(cutoff)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.converters'
_convert(score)[source]

Takes the score attribute of a score instance and recasts it as an instance of another score type.

class sciunit.converters.AtMostToBoolean(cutoff)[source]

Bases: sciunit.converters.Converter

Converts a score to Pass if its value is at most $cutoff, otherwise Fail.

__init__(cutoff)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.converters'
_convert(score)[source]

Takes the score attribute of a score instance and recasts it as an instance of another score type.

class sciunit.converters.Converter[source]

Bases: object

Base converter class. Only derived classes should be used in applications.

__module__ = 'sciunit.converters'
__weakref__

list of weak references to the object (if defined)

_convert(score)[source]

Takes the score attribute of a score instance and recasts it as an instance of another score type.

convert(score)[source]
property description
class sciunit.converters.LambdaConversion(f)[source]

Bases: sciunit.converters.Converter

Converts a score according to a lambda function.

__init__(f)[source]

f should be a lambda function

__module__ = 'sciunit.converters'
_convert(score)[source]

Takes the score attribute of a score instance and recasts it as an instance of another score type.

class sciunit.converters.NoConversion[source]

Bases: sciunit.converters.Converter

Applies no conversion.

__module__ = 'sciunit.converters'
_convert(score)[source]

Takes the score attribute of a score instance and recasts it as an instance of another score type.

class sciunit.converters.RangeToBoolean(low_cutoff, high_cutoff)[source]

Bases: sciunit.converters.Converter

Converts a score to Pass if its value is within the range [$low_cutoff,$high_cutoff], otherwise Fail.

__init__(low_cutoff, high_cutoff)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.converters'
_convert(score)[source]

Takes the score attribute of a score instance and recasts it as an instance of another score type.
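
As a usage sketch (the test and model variables are placeholders for an existing SciUnit test and model), a converter is attached to a test and presumably applied to the raw score after compute_score runs:

from sciunit.converters import RangeToBoolean

test.converter = RangeToBoolean(0.5, 2.0)  # Pass inside [0.5, 2.0], otherwise Fail
score = test.judge(model)                  # the converted (Boolean) score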

sciunit.errors module

Exception classes for SciUnit

exception sciunit.errors.BadParameterValueError(name, value)[source]

Bases: sciunit.errors.Error

Error raised when a model parameter value is unreasonable.

__init__(name, value)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.errors'
exception sciunit.errors.CapabilityError(model, capability, details='')[source]

Bases: sciunit.errors.Error

Abstract error class for capabilities

__init__(model, capability, details='')[source]

model: a model instance

capability: a capability class

__module__ = 'sciunit.errors'
action = None

The action that has failed (‘provide’ or ‘implement’)

capability = None

The capability class that is not provided.

model = None

The model instance that does not have the capability.

exception sciunit.errors.CapabilityNotImplementedError(model, capability, details='')[source]

Bases: sciunit.errors.CapabilityError

Error raised when a required capability is not implemented by a model. Do not use for capabilities that are not provided at all.

__module__ = 'sciunit.errors'
action = 'implement'
exception sciunit.errors.CapabilityNotProvidedError(model, capability, details='')[source]

Bases: sciunit.errors.CapabilityError

Error raised when a required capability is not provided by a model. Do not use for capabilities provided but not implemented.

__module__ = 'sciunit.errors'
action = 'provide'
exception sciunit.errors.Error[source]

Bases: Exception

Base class for errors in sciunit’s core.

__module__ = 'sciunit.errors'
__weakref__

list of weak references to the object (if defined)

exception sciunit.errors.InvalidScoreError[source]

Bases: sciunit.errors.Error

Error raised when a score is invalid.

__module__ = 'sciunit.errors'
exception sciunit.errors.ObservationError[source]

Bases: sciunit.errors.Error

Raised when an observation passed to a test is invalid.

__module__ = 'sciunit.errors'
exception sciunit.errors.ParametersError[source]

Bases: sciunit.errors.Error

Raised when params passed to a test are invalid.

__module__ = 'sciunit.errors'
exception sciunit.errors.PredictionError(model, method, **args)[source]

Bases: sciunit.errors.Error

Raised when a test’s generate_prediction chokes on a model’s method.

__init__(model, method, **args)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.errors'
argument = None

The argument that could not be handled.

model = None

The model that does not have the capability.

sciunit.suites module

Base class for SciUnit test suites.

class sciunit.suites.TestSuite(tests, name=None, weights=None, include_models=None, skip_models=None, hooks=None, optimizer=None)[source]

Bases: sciunit.base.SciUnit, sciunit.base.TestWeighted

A collection of tests.

__getitem__(item)[source]
__init__(tests, name=None, weights=None, include_models=None, skip_models=None, hooks=None, optimizer=None)[source]

optimizer: a function to bind to self.optimize (first argument must be a testsuite)

__len__()[source]
__module__ = 'sciunit.suites'
__str__()[source]

Represent the TestSuite instance as a string.

assert_models(models)[source]

Check and, in some cases, fix the list of models.

assert_tests(tests)[source]

Check and, in some cases, fix the list of tests.

check(models, skip_incapable=True, require_extra=False, stop_on_error=True)[source]

Like judge, but without actually running the test.

Just returns a ScoreMatrix indicating whether each model can take each test or not. A TBDScore indicates that it can, and an NAScore indicates that it cannot.

check_capabilities(model, skip_incapable=False, require_extra=False)[source]

Check model capabilities against those required by the suite.

Returns a list of booleans (one for each test in the suite) corresponding to whether the test’s required capabilities are satisfied by the model.

description = None

The description of the test suite. No default.

classmethod from_observations(tests_info, name=None)[source]

Instantiate a test suite from a set of observations.

tests_info should be a list of tuples containing the test class and the observation, e.g. [(TestClass1,obs1),(TestClass2,obs2),…]. The desired test name may appear as an optional third item in the tuple, e.g. (TestClass1,obs1,”my_test”). The same test class may be used multiple times, e.g. [(TestClass1,obs1a),(TestClass1,obs1b),…].
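
A short sketch of the tuple format described above; the test classes and observations are placeholders:

from sciunit.suites import TestSuite

suite = TestSuite.from_observations(
    [(TestClass1, obs1, "test_one"),   # optional third item names the test
     (TestClass2, obs2)],
    name="example_suite",
)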

include_models = []

List of names or instances of models to judge (all passed to judge are judged by default).

is_skipped(model)[source]

Indicate whether model will be judged or not.

judge(models, skip_incapable=False, stop_on_error=True, deep_error=False)[source]

Judge the provided models against each test in the test suite.

Args:

models (list): The models to be judged.

skip_incapable (bool): Whether to skip incapable models (or raise an exception).

stop_on_error (bool): Whether to raise an Exception if an error is encountered or just produce an ErrorScore.

deep_error (bool): Whether the error message should penetrate all the way to the root of the error.

Returns:

ScoreMatrix: The resulting scores for all test/model combos.
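
For example, continuing with a suite like the one sketched above (the models are placeholders):

score_matrix = suite.judge([model_a, model_b], stop_on_error=False)
print(score_matrix)  # scores for every test/model combination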

judge_one(model, test, sm, skip_incapable=True, stop_on_error=True, deep_error=False)[source]

Judge model and put score in the ScoreMatrix.

name = None

The name of the test suite. Defaults to the class name.

optimize(model, *args, **kwargs)[source]

Optimize model parameters to get the best Test Suite scores.

set_hooks(test, score)[source]

Set hook functions to run after each test is executed.

set_verbose(verbose)[source]

Set the verbosity for logged information about test execution.

skip_models = []

List of names or instances of models to not judge (all passed to judge are judged by default).

tests = None

The sequence of tests that this suite contains.

sciunit.tests module

SciUnit tests live in this module.

class sciunit.tests.ProtocolToFeaturesTest(observation, name=None, **params)[source]

Bases: sciunit.tests.Test

Assume that generating a prediction consists of:

1. Setting up a simulation experiment protocol. Depending on the backend, this could include editing simulation parameters in memory or editing a model file. It could include any kind of experimental protocol, such as a perturbation.

2. Running the model (using e.g. RunnableModel).

3. Extracting features from the results.

Developers should not need to manually implement generate_prediction, and should instead focus on the other three methods here (see the sketch at the end of this entry).

__module__ = 'sciunit.tests'
extract_features(model, result)[source]
generate_prediction(model)[source]

Generate a prediction from a model using the required capabilities.

No default implementation.

get_result(model)[source]
setup_protocol(model)[source]
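
A sketch of the three-step pattern described above; the test class is hypothetical, it assumes a Runnable-style model whose run() returns the simulation output, and it assumes BooleanScore is importable from sciunit.scores:

from sciunit.tests import ProtocolToFeaturesTest
from sciunit.scores import BooleanScore

class SpikeCountTest(ProtocolToFeaturesTest):
    """Hypothetical test; the inherited generate_prediction chains the steps below."""
    score_type = BooleanScore

    def setup_protocol(self, model):
        model.set_run_params(duration=500)      # 1) configure the experiment

    def get_result(self, model):
        return model.run()                      # 2) run the model

    def extract_features(self, model, result):
        return {'count': len(result)}           # 3) reduce results to features

    def compute_score(self, observation, prediction):
        return BooleanScore(observation['count'] == prediction['count'])
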
class sciunit.tests.RangeTest(observation, name=None)[source]

Bases: sciunit.tests.Test

Test if the model generates a number within a certain range

__init__(observation, name=None)[source]

Args:

observation (dict): A dictionary of observed values to parameterize the test.

name (str, optional): Name of the test instance.

__module__ = 'sciunit.tests'
compute_score(observation, prediction)[source]

Generates a score given the observations provided in the constructor and the prediction generated by generate_prediction.

Must generate a score of score_type. No default implementation.

generate_prediction(model)[source]

Generate a prediction from a model using the required capabilities.

No default implementation.

required_capabilities = (<class 'sciunit.capabilities.ProducesNumber'>,)
score_type

alias of sciunit.scores.complete.BooleanScore

validate_observation(observation)[source]

Validate the observation provided to the constructor.

Raises an ObservationError if invalid.
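
A worked example of RangeTest, assuming the observation is a (min, max) pair; the ConstModel class is hypothetical:

import sciunit
from sciunit.capabilities import ProducesNumber
from sciunit.tests import RangeTest

class ConstModel(sciunit.Model, ProducesNumber):
    """Hypothetical model that always produces the same number."""
    def __init__(self, value, name=None):
        self.value = value
        super().__init__(name=name, value=value)

    def produce_number(self):
        return self.value

model = ConstModel(2.5, name="const_2.5")
test = RangeTest(observation=[2.0, 3.0], name="in_range")
score = test.judge(model)   # BooleanScore: Pass if 2.0 <= 2.5 <= 3.0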

class sciunit.tests.Test(observation, name=None, **params)[source]

Bases: sciunit.base.SciUnit

Abstract base class for tests.

__init__(observation, name=None, **params)[source]
Args:

observation (dict): A dictionary of observed values to parameterize the test.

name (str, optional): Name of the test instance.

__module__ = 'sciunit.tests'
__str__()[source]

Return the string representation of the test’s name.

_bind_score(score, model, observation, prediction)[source]

Bind some useful attributes to the score.

_judge(model, skip_incapable=True)[source]

Generate a score for the model (internal API use only).

ace()[source]

Generate the best possible score of the associated score type.

bind_score(score, model, observation, prediction)[source]

For the user to bind additional features to the score.

check(model, skip_incapable=True, stop_on_error=True, require_extra=False)[source]

Check to see if the test can run this model.

Like judge, but without actually running the test. Just returns a Score indicating whether the model can take the test or not.

check_capabilities(model, skip_incapable=False, require_extra=False)[source]

Check that test’s required capabilities are implemented by model.

Raises an Error if model is not a Model. Raises a CapabilityError if model does not have a capability.

check_capability(model, c, skip_incapable=False, require_extra=False)[source]

Check if model has capability c.

Optionally (default: True) raise a CapabilityError if it does not.

check_prediction(prediction)[source]

Check the prediction for acceptable values.

No default implementation.

check_score_type(score)[source]

Check that the score is the correct type for this test.

compute_params()[source]

Compute new params from existing self.params. Inserts those new params into self.params. Use this when some params depend upon the values of others. Example: self.params[‘c’] = self.params[‘a’] + self.params[‘b’]

compute_score(observation, prediction)[source]

Generates a score given the observations provided in the constructor and the prediction generated by generate_prediction.

Must generate a score of score_type. No default implementation.

condition_model(model)[source]

Update the model in any way needed before generating the prediction.

This could include updating parameters such as simulation durations that do not define the model but do define experiments performed on the model. No default implementation.

converter = None

A conversion to be done on the score after it is computed.

default_params = {}

A dictionary containing the parameters to the test.

describe()[source]

Describe the test in words.

description = None

A description of the test. Defaults to the docstring for the class.

generate_prediction(model)[source]

Generate a prediction from a model using the required capabilities.

No default implementation.

classmethod is_test_class(other_cls)[source]

Return whether other_cls is a subclass of this test class.

judge(model, skip_incapable=False, stop_on_error=True, deep_error=False)[source]

Generate a score for the provided model (public method).

Operates as follows:

1. Checks if the model has all the required capabilities. If it does not, and skip_incapable=False, then a CapabilityError is raised.

2. Calls generate_prediction to generate a prediction.

3. Calls score_prediction to generate a score.

4. Checks that the score is of score_type, raising an InvalidScoreError.

5. Equips the score with metadata: a) a reference to the model, in attribute model; b) a reference to the test, in attribute test; c) a reference to the prediction, in attribute prediction; d) a reference to the observation, in attribute observation.

6. Returns the score.

If stop_on_error is true (default), exceptions propagate upward. If false, an ErrorScore is generated containing the exception.

If deep_error is true (not default), the traceback will contain the actual code execution error, instead of the content of an ErrorScore.
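
A minimal custom test to show the judge workflow end to end; the EqualsTest class is hypothetical, BooleanScore is assumed to be importable from sciunit.scores, and model is any object implementing ProducesNumber (e.g. the ConstModel sketched earlier):

import sciunit
from sciunit.capabilities import ProducesNumber
from sciunit.scores import BooleanScore

class EqualsTest(sciunit.Test):
    """Hypothetical test: passes if the model's number equals the observed value."""
    required_capabilities = (ProducesNumber,)
    score_type = BooleanScore

    def generate_prediction(self, model):
        return model.produce_number()

    def compute_score(self, observation, prediction):
        return BooleanScore(observation['value'] == prediction)

test = EqualsTest({'value': 2.5}, name="equals_2.5")
score = test.judge(model)   # Pass for a model producing 2.5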

name = None

The name of the test. Defaults to the test class name.

observation = None

The empirical observation that the test is using.

observation_schema = None

A schema that the observation must adhere to (validated by cerberus). Can also be a list of schemas, one of which the observation must match. If it is a list, each schema in the list can optionally be named by putting (name, schema) tuples in that list.

classmethod observation_schema_names()[source]

Return a list of names of observation schema, if they are set.

optimize(model)[source]

Optimize the parameters of the model to get the best score.

params_schema = None

A schema that the params must adhere to (validated by cerberus). Can also be a list of schemas, one of which the params must match.

required_capabilities = ()

A sequence of capabilities that a model must have in order for the test to be run. Defaults to empty.

score_type

alias of sciunit.scores.complete.BooleanScore

property state

Get the frozen (pickled) model state.

validate_observation(observation)[source]

Validate the observation provided to the constructor.

Raises an ObservationError if invalid.

validate_params(params)[source]

Validate the params provided to the constructor.

Raises an ParametersError if invalid.

class sciunit.tests.TestM2M(observation=None, name=None, **params)[source]

Bases: sciunit.tests.Test

Abstract class for handling tests involving multiple models.

Enables comparison of model to model predictions, and also against experimental reference data (optional).

Note: ‘TestM2M’ would typically be used when handling multiple (>2) models, with/without experimental reference data. For single model tests, you can use the ‘Test’ class.

__init__(observation=None, name=None, **params)[source]

Args:

observation (dict): A dictionary of observed values to parameterize the test.

name (str, optional): Name of the test instance.

__module__ = 'sciunit.tests'
_bind_score(score, prediction1, prediction2, model1, model2)[source]

Bind some useful attributes to the score.

_judge(prediction1, prediction2, model1, model2=None)[source]

Generate a score for the model (internal API use only).

bind_score(score, prediction1, prediction2, model1, model2)[source]

For the user to bind additional features to the score.

compute_score(prediction1, prediction2)[source]

Generate a score given the observations provided in the constructor and/or the prediction(s) generated by generate_prediction.

Must generate a score of score_type.

No default implementation.

judge(models, skip_incapable=False, stop_on_error=True, deep_error=False, only_lower_triangle=False)[source]

Generate a score matrix for the provided model(s).

only_lower_triangle: Only compute the lower triangle (not including the diagonal) of this square ScoreMatrix and copy the other values across. Leave the diagonal blank. If False, compute all entries.

Operates as follows:

1. Checks if models have been specified as a list/tuple/set. If not, an exception is raised.

2. Creates a list of predictions. If a test observation is provided, it is added to the predictions.

3. Checks if all models have all the required capabilities. If a model does not, then a CapabilityError is raised.

4. Calls generate_prediction to generate predictions for each model, and these are appended to the predictions list.

5. Generates a 2D list as a placeholder for all the scores.

6. Calls score_prediction to generate scores for each comparison.

7. Checks that each score is of score_type, raising an InvalidScoreError.

8. Equips each score with metadata: a) reference(s) to the model(s), in attributes model1 (and model2); b) a reference to the test, in attribute test; c) references to the predictions, in attributes prediction1 and prediction2.

9. Returns the scores as a Pandas DataFrame.

If stop_on_error is true (default), exceptions propagate upward. If false, an ErrorScore is generated containing the exception.

If deep_error is true (not default), the traceback will contain the actual code execution error, instead of the content of an ErrorScore.

validate_observation(observation)[source]

Validate the observation provided to the constructor.

Note: TestM2M does not compulsorily require an observation (i.e. None allowed).
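
A sketch of an M2M comparison; the test class is hypothetical, FloatScore is assumed to be importable from sciunit.scores, and model_a / model_b are placeholders for models implementing ProducesNumber (e.g. the ConstModel sketched earlier):

from sciunit.capabilities import ProducesNumber
from sciunit.scores import FloatScore
from sciunit.tests import TestM2M

class NumberDifferenceTest(TestM2M):
    """Hypothetical test scoring the absolute difference between two predictions."""
    required_capabilities = (ProducesNumber,)
    score_type = FloatScore

    def generate_prediction(self, model):
        return model.produce_number()

    def compute_score(self, prediction1, prediction2):
        return FloatScore(abs(prediction1 - prediction2))

test = NumberDifferenceTest(observation=None, name="number_difference")
score_matrix = test.judge([model_a, model_b])   # pairwise comparisons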

sciunit.utils module

Utility functions for SciUnit.

class sciunit.utils.MockDevice(buffer, encoding=None, errors=None, newline=None, line_buffering=False, write_through=False)[source]

Bases: _io.TextIOWrapper

A mock device to temporarily suppress output to stdout. Similar to UNIX /dev/null.

__module__ = 'sciunit.utils'
write(s)[source]

Write string to stream. Returns the number of characters written (which is always equal to the length of the string).

class sciunit.utils.NotebookTools(*args, **kwargs)[source]

Bases: object

A class for manipulating and executing Jupyter notebooks.

__init__(*args, **kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.utils'
__weakref__

list of weak references to the object (if defined)

_do_notebook(name, convert_notebooks=False)[source]

Called by do_notebook to actually run the notebook.

clean_code(name, forbidden)[source]

Remove lines containing items in ‘forbidden’ from the code. Helpful for executing converted notebooks that still retain IPython magic commands.

convert_and_execute_notebook(name)[source]

Converts a notebook into a python file and then runs it.

convert_notebook(name)[source]

Converts a notebook into a python file.

classmethod convert_path(file)[source]

Check to see if an extended path is given and convert appropriately

do_notebook(name)[source]

Run a notebook file after optionally converting it to a python file.

execute_notebook(name)[source]

Loads and then runs a notebook file.

fix_display()[source]

If this is being run on a headless system the Matplotlib backend must be changed to one that doesn’t need a display.

gen_dir_name = 'GeneratedFiles'
gen_file_level = 2
gen_file_path(name)[source]

Returns the full path to generated files. Checks whether the directory where generated files are stored exists, and creates it otherwise.

get_path(file)[source]

Get the full path of the notebook found in the directory specified by self.path.

load_notebook(name)[source]

Loads a notebook file into memory.

path = ''
read_code(name)[source]

Reads code from a python file called ‘name’

run_notebook(nb, f)[source]

Runs a loaded notebook file.

classmethod strip_line_magic(line, magics_allowed)[source]

Handles lines that contain get_ipython.run_line_magic() commands

classmethod strip_line_magic_v2(line)[source]

strip_line_magic() implementation for Python 2

classmethod strip_line_magic_v3(line)[source]

strip_line_magic() implementation for Python 3

write_code(name, code)[source]

Writes code to a python file called ‘name’, erasing the previous contents. Files are created in the directory specified by gen_dir_name (see gen_file_path). The file name is the second argument of the path.

sciunit.utils.assert_dimensionless(value)[source]

Tests for dimensionlessness of the input. If the input is dimensionless but expressed as a Quantity, it returns the bare value. If it is not, it raises an error.
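
For example, using the quantities package (which SciUnit uses for units):

import quantities as pq
from sciunit.utils import assert_dimensionless

ratio = (2.0 * pq.mV) / (1.0 * pq.mV)   # a dimensionless Quantity
value = assert_dimensionless(ratio)      # returns the bare value, 2.0
# assert_dimensionless(2.0 * pq.mV)      # would raise: the value carries units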

sciunit.utils.config_get(key, default=None)[source]
sciunit.utils.config_get_from_path(config_path, key)[source]
sciunit.utils.dict_combine(*dict_list)[source]

Return the union of several dictionaries. Uses the values from later dictionaries in the argument list when duplicate keys are encountered. In Python 3 this can simply be {**d1, **d2, …} but Python 2 does not support this dict unpacking syntax.

sciunit.utils.dict_hash(d)[source]
sciunit.utils.import_all_modules(package, skip=None, verbose=False, prefix='', depth=0)[source]

Recursively imports all subpackages, modules, and submodules of a given package. ‘package’ should be an imported package, not a string. ‘skip’ is a list of modules or subpackages not to import.

sciunit.utils.import_module_from_path(module_path, name=None)[source]
sciunit.utils.kernel_log(*args, **kwargs)[source]
sciunit.utils.log(*args, **kwargs)[source]
sciunit.utils.method_cache(by='value', method='run')[source]

A decorator used on any model method. It calls the model’s ‘method’ method if that method has not yet been called with the current arguments, or simply sets model attributes to match the run results if it has.

sciunit.utils.non_kernel_log(*args, **kwargs)[source]
sciunit.utils.path_escape(path)[source]

Escape a path by placing backslashes in front of disallowed characters

sciunit.utils.printd(*args, **kwargs)[source]

Print if PRINT_DEBUG_STATE is True

sciunit.utils.printd_set(state)[source]

Enable the printd function. Call with True for all subsequent printd commands to be passed to print. Call with False to ignore all subsequent printd commands.

sciunit.utils.rec_apply(func, n)[source]

Used to determine parent directory n levels up by repeatedly applying os.path.dirname

sciunit.utils.set_warnings_traceback(tb=True)[source]

Set to True to give tracebacks for all warnings, or False to restore default behavior.

sciunit.utils.warn_with_traceback(message, category, filename, lineno, file=None, line=None)[source]

A function to use with warnings.showwarning to show a traceback.

sciunit.validators module

Cerberus validator classes for SciUnit.

class sciunit.validators.ObservationValidator(*args, **kwargs)[source]

Bases: cerberus.validator.Validator

Cerberus validator class for observations.

__init__(*args, **kwargs)[source]

Must pass test as a keyword argument.

Cannot be a positional argument without modifications to cerberus

__module__ = 'sciunit.validators'
_types_from_methods = ()
_validate_iterable(is_iterable, key, value)[source]

Validate fields with iterable key in schema set to True

The rule’s arguments are validated against this schema: {‘type’: ‘boolean’}

_validate_units(has_units, key, value)[source]

Validate fields with units key in schema set to True.

The rule’s arguments are validated against this schema: {‘type’: ‘boolean’}

checkers = ()
coercers = ()
default_setters = ()
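
A sketch of how this validator is typically driven from a test’s observation_schema; the test class is hypothetical, and the class-level units attribute reflects an assumption about how _validate_units looks up the expected units:

import quantities as pq
import sciunit

class TimeConstantTest(sciunit.Test):
    """Hypothetical test whose observation fields must be present and carry units."""
    units = pq.ms
    observation_schema = {
        'mean': {'units': True, 'required': True},
        'std': {'units': True, 'required': True},
    }

# validate_observation() checks the observation against the schema
# using ObservationValidator.
test = TimeConstantTest({'mean': 20.0 * pq.ms, 'std': 2.0 * pq.ms},
                        name="time_constant")
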
class sciunit.validators.ParametersValidator(*args, **kwargs)[source]

Bases: cerberus.validator.Validator

Cerberus validator class for parameters.

__module__ = 'sciunit.validators'
_types_from_methods = ('current', 'time', 'voltage')
_validate_type_current(value)[source]

Validate fields requiring units of amps.

_validate_type_time(value)[source]

Validate fields requiring units of seconds.

_validate_type_voltage(value)[source]

Validate fields requiring units of volts.

checkers = ()
coercers = ()
default_setters = ()
units_map = {'current': 'A', 'time': 's', 'voltage': 'V'}
validate_quantity(value)[source]

Validate that the value is of the Quantity type.

validate_units(value)[source]

Validate units, assuming that it was called by _validate_type_*.

sciunit.validators.register_quantity(quantity, name)[source]

Register name as a type to validate as an instance of the class of quantity.

sciunit.validators.register_type(cls, name)[source]

Register name as a type to validate as an instance of class cls.

sciunit.version module

Module contents

SciUnit.

A Testing Framework for Data-Driven Validation of Quantitative Scientific Models