nucleus.validate#

Model CI Python Library.

EvaluationCriterion

An Evaluation Criterion is defined as an evaluation function, threshold, and comparator.

ScenarioTest

A Scenario Test combines a slice and at least one evaluation criterion. A ScenarioTest is not created through the default constructor but using the instructions shown in Validate.

Validate

Model CI Python Client extension.

class nucleus.validate.EvaluationCriterion(**data)#

An Evaluation Criterion is defined as an evaluation function, threshold, and comparator. It describes how to apply an evaluation function and compare its score against a threshold.

Notes

To define the evaluation criteria for a scenario test, we've created some syntactic sugar that makes the definition look closer to an actual function call and hides away implementation details of our data model that are unclear from a UX perspective.

Instead of defining criteria like this:

from nucleus.validate.data_transfer_objects.eval_function import (
    EvaluationCriterion,
    ThresholdComparison,
)

criteria = [
    EvaluationCriterion(
        eval_function_id="ef_c6m1khygqk400918ays0",  # bbox_recall
        threshold_comparison=ThresholdComparison.GREATER_THAN,
        threshold=0.5,
    ),
]

we define it like this:

bbox_recall = client.validate.eval_functions.bbox_recall
criteria = [
    bbox_recall() > 0.5
]

The chosen method allows us to document the available evaluation functions in an IDE friendly fashion and hides away details like internal IDs (“ef_….”).

The actual EvaluationCriterion is created by overloading the comparison operators for the base class of an evaluation function. Instead of the comparison returning a bool, we’ve made it create an EvaluationCriterion with the correct signature to send over the wire to our API.
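
As an illustration only, here is a simplified sketch of that pattern (not the SDK's actual implementation; it reuses the imports from the snippet above, and the class name is hypothetical):

# Hypothetical sketch: comparing an eval function with a number yields an
# EvaluationCriterion instead of a bool.
class BaseEvalFunction:
    def __init__(self, eval_function_id: str):
        self.eval_function_id = eval_function_id

    def __gt__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(
            eval_function_id=self.eval_function_id,
            threshold_comparison=ThresholdComparison.GREATER_THAN,
            threshold=threshold,
        )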

Parameters:
  • eval_function_id (str) – ID of evaluation function

  • threshold_comparison (ThresholdComparison) – comparator for evaluation, e.g. threshold=0.5 combined with ThresholdComparison.GREATER_THAN implies that a test only passes if score > 0.5.

  • threshold (float) – numerical threshold that together with threshold comparison, defines success criteria for test evaluation.

  • eval_func_arguments – Arguments to pass to the eval function constructor

  • data (Any) –

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.
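
For example (a sketch; the invalid value is illustrative), an argument that cannot be coerced to the declared field type raises a ValidationError:

# Depending on the installed pydantic version, ValidationError may live
# under pydantic or pydantic.v1; adjust the import accordingly.
from pydantic import ValidationError

try:
    EvaluationCriterion(
        eval_function_id="ef_c6m1khygqk400918ays0",
        threshold_comparison=ThresholdComparison.GREATER_THAN,
        threshold="not-a-number",  # cannot be parsed to float
    )
except ValidationError as err:
    print(err)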

classmethod construct(_fields_set=None, **values)#

Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.

Parameters:
  • _fields_set (Optional[pydantic.v1.typing.SetStr]) –

  • values (Any) –

Return type:

Model

copy(*, include=None, exclude=None, update=None, deep=False)#

Duplicate a model, optionally choose which fields to include, exclude and change.

Parameters:
  • include (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) – fields to include in new model

  • exclude (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include

  • update (Optional[pydantic.v1.typing.DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data

  • deep (bool) – set to True to make a deep copy of the model

Returns:

new model instance

Return type:

Model
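
For instance (a sketch reusing the criterion fields documented above), update swaps in new values without re-validating them:

criterion = EvaluationCriterion(
    eval_function_id="ef_c6m1khygqk400918ays0",
    threshold_comparison=ThresholdComparison.GREATER_THAN,
    threshold=0.5,
)

# The updated value is NOT validated; only pass data you trust.
relaxed = criterion.copy(update={"threshold": 0.3})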

dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)#

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:
  • include (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) –

  • exclude (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) –

  • by_alias (bool) –

  • skip_defaults (Optional[bool]) –

  • exclude_unset (bool) –

  • exclude_defaults (bool) –

  • exclude_none (bool) –

Return type:

pydantic.v1.typing.DictStrAny

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)#

Generate a JSON representation of the model, include and exclude arguments as per dict().

encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

Parameters:
  • include (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) –

  • exclude (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) –

  • by_alias (bool) –

  • skip_defaults (Optional[bool]) –

  • exclude_unset (bool) –

  • exclude_defaults (bool) –

  • exclude_none (bool) –

  • encoder (Optional[Callable[[Any], Any]]) –

  • models_as_dict (bool) –

  • dumps_kwargs (Any) –

Return type:

str
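
A quick sketch of both serialization paths, reusing the hypothetical criterion from the copy() example above:

as_dict = criterion.dict(exclude_none=True)  # Python dict (may contain enum members)
as_json = criterion.json(exclude_none=True)  # JSON string, same include/exclude semantics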

classmethod model_construct(_fields_set=None, **values)#

Creates a new instance of the Model class with validated data.

Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set, since it adds all passed values.

Parameters:
  • _fields_set (set[str] | None) – The set of field names accepted for the Model instance.

  • values (Any) – Trusted or pre-validated data dictionary.

Returns:

A new instance of the Model class with validated data.

Return type:

Model

model_copy(*, update=None, deep=False)#

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#model_copy

Returns a copy of the model.

Parameters:
  • update (dict[str, Any] | None) – Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.

  • deep (bool) – Set to True to make a deep copy of the model.

Returns:

New model instance.

Return type:

Model

model_dump(*, mode='python', include=None, exclude=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, round_trip=False, warnings=True)#

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:
  • mode (typing_extensions.Literal['json', 'python'] | str) – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.

  • include (IncEx) – A list of fields to include in the output.

  • exclude (IncEx) – A list of fields to exclude from the output.

  • by_alias (bool) – Whether to use the field’s alias in the dictionary key if defined.

  • exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.

  • exclude_defaults (bool) – Whether to exclude fields that are set to their default value.

  • exclude_none (bool) – Whether to exclude fields that have a value of None.

  • round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].

  • warnings (bool) – Whether to log warnings when invalid fields are encountered.

Returns:

A dictionary representation of the model.

Return type:

dict[str, Any]

model_dump_json(*, indent=None, include=None, exclude=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, round_trip=False, warnings=True)#

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump_json

Generates a JSON representation of the model using Pydantic’s to_json method.

Parameters:
  • indent (int | None) – Indentation to use in the JSON output. If None is passed, the output will be compact.

  • include (IncEx) – Field(s) to include in the JSON output.

  • exclude (IncEx) – Field(s) to exclude from the JSON output.

  • by_alias (bool) – Whether to serialize using field aliases.

  • exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.

  • exclude_defaults (bool) – Whether to exclude fields that are set to their default value.

  • exclude_none (bool) – Whether to exclude fields that have a value of None.

  • round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].

  • warnings (bool) – Whether to log warnings when invalid fields are encountered.

Returns:

A JSON string representation of the model.

Return type:

str
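
The pydantic v2 counterparts behave analogously; for example, with the hypothetical criterion from above:

py_repr = criterion.model_dump()                # may contain non-JSON types such as enums
json_repr = criterion.model_dump(mode="json")   # JSON-serializable types only
json_str = criterion.model_dump_json(indent=2)  # pretty-printed JSON string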

classmethod model_json_schema(by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, schema_generator=GenerateJsonSchema, mode='validation')#

Generates a JSON schema for a model class.

Parameters:
  • by_alias (bool) – Whether to use attribute aliases or not.

  • ref_template (str) – The reference template.

  • schema_generator (type[pydantic.json_schema.GenerateJsonSchema]) – To override the logic used to generate the JSON schema, pass a subclass of GenerateJsonSchema with your desired modifications.

  • mode (pydantic.json_schema.JsonSchemaMode) – The mode in which to generate the schema.

Returns:

The JSON schema for the given model class.

Return type:

dict[str, Any]
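
For example, to inspect the generated schema:

import json

schema = EvaluationCriterion.model_json_schema()
print(json.dumps(schema, indent=2))  # field names, types, and required fields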

classmethod model_parametrized_name(params)#

Compute the class name for parametrizations of generic classes.

This method can be overridden to achieve a custom naming scheme for generic BaseModels.

Parameters:

params (tuple[type[Any], ...]) – Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.

Returns:

String representing the new class where params are passed to cls as type variables.

Raises:

TypeError – Raised when trying to generate concrete names for non-generic models.

Return type:

str

model_post_init(__context)#

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.

Parameters:

__context (Any) –

Return type:

None

classmethod model_rebuild(*, force=False, raise_errors=True, _parent_namespace_depth=2, _types_namespace=None)#

Try to rebuild the pydantic-core schema for the model.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Parameters:
  • force (bool) – Whether to force the rebuilding of the model schema, defaults to False.

  • raise_errors (bool) – Whether to raise errors, defaults to True.

  • _parent_namespace_depth (int) – The depth level of the parent namespace, defaults to 2.

  • _types_namespace (dict[str, Any] | None) – The types namespace, defaults to None.

Returns:

Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.

Return type:

bool | None
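
A generic pydantic illustration (unrelated to EvaluationCriterion) of a forward reference that a rebuild resolves:

from pydantic import BaseModel

class Node(BaseModel):
    value: int
    children: "list[Node]" = []  # forward reference to the class being defined

Node.model_rebuild()  # None if the schema was already complete, True/False otherwise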

classmethod model_validate(obj, *, strict=None, from_attributes=None, context=None)#

Validate a pydantic model instance.

Parameters:
  • obj (Any) – The object to validate.

  • strict (bool | None) – Whether to enforce types strictly.

  • from_attributes (bool | None) – Whether to extract data from object attributes.

  • context (dict[str, Any] | None) – Additional context to pass to the validator.

Raises:

ValidationError – If the object could not be validated.

Returns:

The validated model instance.

Return type:

Model
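
For example, validating a plain dictionary (the ID is illustrative):

criterion = EvaluationCriterion.model_validate({
    "eval_function_id": "ef_c6m1khygqk400918ays0",
    "threshold_comparison": ThresholdComparison.GREATER_THAN,
    "threshold": 0.5,
})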

classmethod model_validate_json(json_data, *, strict=None, context=None)#

Usage docs: https://docs.pydantic.dev/2.6/concepts/json/#json-parsing

Validate the given JSON data against the Pydantic model.

Parameters:
  • json_data (str | bytes | bytearray) – The JSON data to validate.

  • strict (bool | None) – Whether to enforce types strictly.

  • context (dict[str, Any] | None) – Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

Raises:

ValueError – If json_data is not a JSON string.

Return type:

Model
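
A simple round-trip sketch using the criterion validated above:

payload = criterion.model_dump_json()
restored = EvaluationCriterion.model_validate_json(payload)
assert restored == criterion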

classmethod model_validate_strings(obj, *, strict=None, context=None)#

Validate the given object, containing string data, against the Pydantic model.

Parameters:
  • obj (Any) – The object containing string data to validate.

  • strict (bool | None) – Whether to enforce types strictly.

  • context (dict[str, Any] | None) – Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

Return type:

Model

classmethod update_forward_refs(**localns)#

Try to update ForwardRefs on fields based on this Model, globalns and localns.

Parameters:

localns (Any) –

Return type:

None

class nucleus.validate.ScenarioTest#

A Scenario Test combines a slice and at least one evaluation criterion. A ScenarioTest is not created through the default constructor but using the instructions shown in Validate. This ScenarioTest class only simplifies the interaction with the scenario tests from this SDK.

id#

The ID of the scenario test.

Type:

str

connection#

The connection to Nucleus API.

Type:

Connection

name#

The name of the scenario test.

Type:

str

slice_id#

The ID of the associated Nucleus slice.

Type:

str

add_eval_function(eval_function)#

Creates and adds a new evaluation metric to the ScenarioTest.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.create_scenario_test(
    "sample_scenario_test", "slc_bx86ea222a6g057x4380"
)

e = client.validate.eval_functions
# Assuming a user would like to add all available public evaluation functions as criteria
scenario_test.add_eval_function(
    e.bbox_iou
)
scenario_test.add_eval_function(
    e.bbox_map
)
scenario_test.add_eval_function(
    e.bbox_precision
)
scenario_test.add_eval_function(
    e.bbox_recall
)
Parameters:

eval_function (nucleus.validate.eval_functions.available_eval_functions.EvalFunction) – EvalFunction

Raises:

NucleusAPIError – Raised if adding this function would mix external with non-external functions in the scenario test, which is not permitted.

Returns:

The created ScenarioTestMetric object.

Return type:

nucleus.validate.scenario_test_metric.ScenarioTestMetric

get_eval_functions()#

Retrieves all criteria of the ScenarioTest.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]

scenario_test.get_eval_functions()
Returns:

A list of ScenarioTestMetric objects.

Return type:

List[nucleus.validate.scenario_test_metric.ScenarioTestMetric]

get_eval_history()#

Retrieves evaluation history for ScenarioTest.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]

scenario_test.get_eval_history()
Returns:

A list of ScenarioTestEvaluation objects.

Return type:

List[nucleus.validate.scenario_test_evaluation.ScenarioTestEvaluation]

get_items(level=EntityLevel.ITEM)#

Gets items within a scenario test at a given level, returning a list of Track, DatasetItem, or Scene objects.

Parameters:

level (nucleus.validate.constants.EntityLevel) – EntityLevel

Returns:

A list of Track, DatasetItem, or Scene objects, depending on the given level.

Return type:

Union[List[nucleus.track.Track], List[nucleus.dataset_item.DatasetItem], List[nucleus.scene.Scene]]
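
For example (a sketch following the pattern of the other examples; EntityLevel is imported from the path shown in the signature above):

import nucleus
from nucleus.validate.constants import EntityLevel

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]

items = scenario_test.get_items(level=EntityLevel.ITEM)  # DatasetItem objects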

set_baseline_model(model_id)#

Sets a new baseline model for the ScenarioTest. In order to be eligible to be a baseline, this scenario test must have been evaluated using that model. The baseline model’s performance is used as the threshold for all metrics against which other models are compared.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]

scenario_test.set_baseline_model('my_baseline_model_id')

Returns:

A list of ScenarioTestEvaluation objects.

Parameters:

model_id (str) –

class nucleus.validate.Validate(api_key, endpoint)#

Model CI Python Client extension.

Parameters:
  • api_key (str) –

  • endpoint (str) –

create_external_eval_function(name, level=EntityLevel.ITEM)#

Creates a new external evaluation function. This external function can be used to upload evaluation results computed by customer-defined functions, without having to share the source code of those functions.

Parameters:
  • name (str) – unique name of the evaluation function

  • level (nucleus.validate.constants.EntityLevel) – level at which the evaluation function is run

Raises:
  • NucleusAPIError – if the creation of the function fails on the server side

  • ValidationError – if the evaluation name is not well defined

Returns:

Created EvalFunctionConfig object.

Return type:

nucleus.validate.data_transfer_objects.eval_function.EvalFunctionEntry
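
For example (a sketch; the function name is illustrative, and level defaults to EntityLevel.ITEM per the signature):

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")

external_fn = client.validate.create_external_eval_function(
    name="my_custom_metric"
)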

create_scenario_test(name, slice_id, evaluation_functions)#

Creates a new Scenario Test from an existing Nucleus Slice:

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")

scenario_test = client.validate.create_scenario_test(
    name="sample_scenario_test",
    slice_id="YOUR_SLICE_ID",
    evaluation_functions=[client.validate.eval_functions.bbox_iou()]
)
Parameters:
  • name (str) – unique name of the scenario test

  • slice_id (str) – ID of the Nucleus slice to test on

  • evaluation_functions – created with elements from the list of available eval functions. See eval_functions.

Returns:

Created ScenarioTest object.

Return type:

nucleus.validate.scenario_test.ScenarioTest

delete_scenario_test(scenario_test_id)#

Deletes a Scenario Test.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]

success = client.validate.delete_scenario_test(scenario_test.id)
Parameters:

scenario_test_id (str) – unique ID of scenario test

Returns:

Whether deletion was successful.

Return type:

bool

evaluate_model_on_scenario_tests(model_id, scenario_test_names)#

Evaluates the given model on the specified Scenario Tests.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
model = client.list_models()[0]
scenario_test = client.validate.create_scenario_test(
    "sample_scenario_test", "slc_bx86ea222a6g057x4380"
)

job = client.validate.evaluate_model_on_scenario_tests(
    model_id=model.id,
    scenario_test_names=["sample_scenario_test"],
)
job.sleep_until_complete()  # Optional: blocks and reports the job's status until it completes.
Parameters:
  • model_id (str) – ID of model to evaluate

  • scenario_test_names (List[str]) – list of names of the scenario tests to evaluate

Returns:

AsyncJob object of evaluation job

Return type:

nucleus.async_job.AsyncJob