nucleus.validate¶
Model CI Python Library.

- EvaluationCriterion – An Evaluation Criterion is defined as an evaluation function, threshold, and comparator.
- ScenarioTest – A Scenario Test combines a slice and at least one evaluation criterion.
- Validate – Model CI Python Client extension.
- class nucleus.validate.EvaluationCriterion(**data)¶
An Evaluation Criterion is defined as an evaluation function, threshold, and comparator. It describes how an evaluation function's score is compared against the threshold to decide whether a test passes.
Notes
To define the evaluation criteria for a scenario test we've created some syntactic sugar that makes it look closer to an actual function call, and we hide away implementation details of our data model that are not clear from a UX standpoint.
Instead of defining criteria like this:
```python
from nucleus.validate.data_transfer_objects.eval_function import (
    EvaluationCriterion,
    ThresholdComparison,
)

criteria = [
    EvaluationCriterion(
        eval_function_id="ef_c6m1khygqk400918ays0",  # bbox_recall
        threshold_comparison=ThresholdComparison.GREATER_THAN,
        threshold=0.5,
    ),
]
```
we define it like this:
```python
bbox_recall = client.validate.eval_functions.bbox_recall
criteria = [bbox_recall() > 0.5]
```
The chosen method allows us to document the available evaluation functions in an IDE-friendly fashion and hides away details like internal IDs (e.g. "ef_…").
The actual EvaluationCriterion is created by overloading the comparison operators for the base class of an evaluation function. Instead of the comparison returning a bool, we’ve made it create an EvaluationCriterion with the correct signature to send over the wire to our API.
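The operator-overloading pattern described above can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the SDK's actual implementation: the stub class and field names are hypothetical, and only the general mechanism (a comparison that builds a criterion record instead of returning a bool) matches the text.

```python
from dataclasses import dataclass


@dataclass
class Criterion:
    """Stand-in for EvaluationCriterion: function ID, comparator, threshold."""
    eval_function_id: str
    threshold_comparison: str
    threshold: float


class EvalFunctionStub:
    """Hypothetical base class for an evaluation function handle."""

    def __init__(self, eval_function_id: str):
        self.eval_function_id = eval_function_id

    def __call__(self):
        # Calling the handle returns itself, so `bbox_recall() > 0.5`
        # reads like an actual function call.
        return self

    def __gt__(self, threshold: float) -> Criterion:
        # The comparison returns a Criterion instead of a bool.
        return Criterion(self.eval_function_id, "greater_than", threshold)


bbox_recall = EvalFunctionStub("ef_c6m1khygqk400918ays0")
criterion = bbox_recall() > 0.5
```

The resulting `criterion` carries the function ID, comparator, and threshold, ready to be serialized and sent over the wire.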
- Parameters:
eval_function_id (str) – ID of the evaluation function.
threshold_comparison (ThresholdComparison) – Comparator for evaluation; e.g. threshold=0.5 with comparator > implies that a test only passes if score > 0.5.
threshold (float) – Numerical threshold that, together with threshold_comparison, defines the success criteria for test evaluation.
eval_func_arguments – Arguments to pass to the eval function constructor.
data (Any)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- classmethod construct(_fields_set=None, **values)¶
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
- Parameters:
_fields_set (Optional[pydantic.v1.typing.SetStr])
values (Any)
- Return type:
Model
- copy(*, include=None, exclude=None, update=None, deep=False)¶
Duplicate a model, optionally choose which fields to include, exclude and change.
- Parameters:
include (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[pydantic.v1.typing.DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
deep (bool) – set to True to make a deep copy of the model
- Returns:
new model instance
- Return type:
Model
- dict(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False)¶
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Parameters:
include (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]])
exclude (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
- Return type:
pydantic.v1.typing.DictStrAny
- json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)¶
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
- Parameters:
include (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]])
exclude (Optional[Union[pydantic.v1.typing.AbstractSetIntStr, pydantic.v1.typing.MappingIntStrAny]])
by_alias (bool)
skip_defaults (Optional[bool])
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
encoder (Optional[Callable[[Any], Any]])
models_as_dict (bool)
dumps_kwargs (Any)
- Return type:
str
- classmethod model_construct(_fields_set=None, **values)¶
Creates a new instance of the Model class with validated data.
Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
Note
model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == 'allow', then all extra passed values are added to the model instance's __dict__ and __pydantic_extra__ fields. If model_config.extra == 'ignore' (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == 'forbid' does not result in an error if extra values are passed, but they will be ignored.
- Parameters:
_fields_set (set[str] | None) – A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the model_fields_set attribute. Otherwise, the field names from the values argument will be used.
values (Any) – Trusted or pre-validated data dictionary.
- Returns:
A new instance of the Model class with validated data.
- Return type:
typing_extensions.Self
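As a quick illustration of the behavior described above (assuming pydantic v2 is installed; the Point model is a made-up example, not part of this SDK), model_construct stores whatever it is given without running validators, while defaults are still applied:

```python
from pydantic import BaseModel


class Point(BaseModel):
    x: int
    y: int = 0


# No validation is performed: the string is stored as-is on x,
# and the default value for y is respected.
p = Point.model_construct(x="not an int")
```

Passing the same data to `Point(x="not an int")` would raise a ValidationError, which is exactly the check model_construct skips.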
- model_copy(*, update=None, deep=False)¶
Usage docs: https://docs.pydantic.dev/2.9/concepts/serialization/#model_copy
Returns a copy of the model.
- Parameters:
update (dict[str, Any] | None) – Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.
deep (bool) – Set to True to make a deep copy of the model.
- Returns:
New model instance.
- Return type:
typing_extensions.Self
- model_dump(*, mode='python', include=None, exclude=None, context=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, round_trip=False, warnings=True, serialize_as_any=False)¶
Usage docs: https://docs.pydantic.dev/2.9/concepts/serialization/#modelmodel_dump
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Parameters:
mode (Literal['json', 'python'] | str) – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.
include (IncEx | None) – A set of fields to include in the output.
exclude (IncEx | None) – A set of fields to exclude from the output.
context (Any | None) – Additional context to pass to the serializer.
by_alias (bool) – Whether to use the field’s alias in the dictionary key if defined.
exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.
exclude_defaults (bool) – Whether to exclude fields that are set to their default value.
exclude_none (bool) – Whether to exclude fields that have a value of None.
round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings (bool | Literal['none', 'warn', 'error']) – How to handle serialization errors. False/'none' ignores them, True/'warn' logs errors, 'error' raises a PydanticSerializationError.
serialize_as_any (bool) – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A dictionary representation of the model.
- Return type:
dict[str, Any]
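The mode parameter is the main practical difference from the v1 dict() method. A small sketch (assuming pydantic v2 is installed; the Event model is a made-up example):

```python
import datetime

from pydantic import BaseModel


class Event(BaseModel):
    name: str
    when: datetime.date


e = Event(name="launch", when=datetime.date(2024, 1, 2))
py = e.model_dump()             # 'python' mode keeps the date object
js = e.model_dump(mode="json")  # 'json' mode coerces it to an ISO string
```

In 'python' mode the dict still contains the datetime.date instance; in 'json' mode every value is JSON-serializable.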
- model_dump_json(*, indent=None, include=None, exclude=None, context=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, round_trip=False, warnings=True, serialize_as_any=False)¶
Usage docs: https://docs.pydantic.dev/2.9/concepts/serialization/#modelmodel_dump_json
Generates a JSON representation of the model using Pydantic’s to_json method.
- Parameters:
indent (int | None) – Indentation to use in the JSON output. If None is passed, the output will be compact.
include (IncEx | None) – Field(s) to include in the JSON output.
exclude (IncEx | None) – Field(s) to exclude from the JSON output.
context (Any | None) – Additional context to pass to the serializer.
by_alias (bool) – Whether to serialize using field aliases.
exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.
exclude_defaults (bool) – Whether to exclude fields that are set to their default value.
exclude_none (bool) – Whether to exclude fields that have a value of None.
round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings (bool | Literal['none', 'warn', 'error']) – How to handle serialization errors. False/'none' ignores them, True/'warn' logs errors, 'error' raises a PydanticSerializationError.
serialize_as_any (bool) – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A JSON string representation of the model.
- Return type:
str
- classmethod model_json_schema(by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, schema_generator=GenerateJsonSchema, mode='validation')¶
Generates a JSON schema for a model class.
- Parameters:
by_alias (bool) – Whether to use attribute aliases or not.
ref_template (str) – The reference template.
schema_generator (type[pydantic.json_schema.GenerateJsonSchema]) – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications
mode (pydantic.json_schema.JsonSchemaMode) – The mode in which to generate the schema.
- Returns:
The JSON schema for the given model class.
- Return type:
dict[str, Any]
- classmethod model_parametrized_name(params)¶
Compute the class name for parametrizations of generic classes.
This method can be overridden to achieve a custom naming scheme for generic BaseModels.
- Parameters:
params (tuple[type[Any], Ellipsis]) – Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.
- Returns:
String representing the new class where params are passed to cls as type variables.
- Raises:
TypeError – Raised when trying to generate concrete names for non-generic models.
- Return type:
str
- model_post_init(__context)¶
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- Parameters:
__context (Any)
- Return type:
None
- classmethod model_rebuild(*, force=False, raise_errors=True, _parent_namespace_depth=2, _types_namespace=None)¶
Try to rebuild the pydantic-core schema for the model.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
- Parameters:
force (bool) – Whether to force the rebuilding of the model schema, defaults to False.
raise_errors (bool) – Whether to raise errors, defaults to True.
_parent_namespace_depth (int) – The depth level of the parent namespace, defaults to 2.
_types_namespace (dict[str, Any] | None) – The types namespace, defaults to None.
- Returns:
Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.
- Return type:
bool | None
- classmethod model_validate(obj, *, strict=None, from_attributes=None, context=None)¶
Validate a pydantic model instance.
- Parameters:
obj (Any) – The object to validate.
strict (bool | None) – Whether to enforce types strictly.
from_attributes (bool | None) – Whether to extract data from object attributes.
context (Any | None) – Additional context to pass to the validator.
- Raises:
ValidationError – If the object could not be validated.
- Returns:
The validated model instance.
- Return type:
typing_extensions.Self
- classmethod model_validate_json(json_data, *, strict=None, context=None)¶
Usage docs: https://docs.pydantic.dev/2.9/concepts/json/#json-parsing
Validate the given JSON data against the Pydantic model.
- Parameters:
json_data (str | bytes | bytearray) – The JSON data to validate.
strict (bool | None) – Whether to enforce types strictly.
context (Any | None) – Extra variables to pass to the validator.
- Returns:
The validated Pydantic model.
- Raises:
ValidationError – If json_data is not a JSON string or the object could not be validated.
- Return type:
typing_extensions.Self
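For example (assuming pydantic v2 is installed; the Box model is a made-up example), validation runs directly on the raw JSON string:

```python
from pydantic import BaseModel


class Box(BaseModel):
    w: float
    h: float


# Parses and validates the JSON string in one step; the int 2 is
# coerced to the declared float type.
b = Box.model_validate_json('{"w": 2, "h": 3.5}')
```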
- classmethod model_validate_strings(obj, *, strict=None, context=None)¶
Validate the given object with string data against the Pydantic model.
- Parameters:
obj (Any) – The object containing string data to validate.
strict (bool | None) – Whether to enforce types strictly.
context (Any | None) – Extra variables to pass to the validator.
- Returns:
The validated Pydantic model.
- Return type:
typing_extensions.Self
- classmethod update_forward_refs(**localns)¶
Try to update ForwardRefs on fields based on this Model, globalns and localns.
- Parameters:
localns (Any)
- Return type:
None
- class nucleus.validate.ScenarioTest¶
A Scenario Test combines a slice and at least one evaluation criterion. A ScenarioTest is not created through the default constructor but using the instructions shown in Validate. This ScenarioTest class only simplifies the interaction with the scenario tests from this SDK.
- id¶
The ID of the scenario test.
- Type:
str
- connection¶
The connection to Nucleus API.
- Type:
Connection
- name¶
The name of the scenario test.
- Type:
str
- slice_id¶
The ID of the associated Nucleus slice.
- Type:
str
- add_eval_function(eval_function)¶
Creates and adds a new evaluation metric to the ScenarioTest.

```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.create_scenario_test(
    "sample_scenario_test", "slc_bx86ea222a6g057x4380"
)

e = client.validate.eval_functions
# Add all available public evaluation functions as criteria
scenario_test.add_eval_function(e.bbox_iou)
scenario_test.add_eval_function(e.bbox_map)
scenario_test.add_eval_function(e.bbox_precision)
scenario_test.add_eval_function(e.bbox_recall)
```
- Parameters:
eval_function (nucleus.validate.eval_functions.available_eval_functions.EvalFunction) – EvalFunction to add to the test.
- Raises:
NucleusAPIError – If adding this function would mix external with non-external functions in the scenario test, which is not permitted.
- Returns:
The created ScenarioTestMetric object.
- Return type:
nucleus.validate.scenario_test_metric.ScenarioTestMetric
- get_eval_functions()¶
Retrieves all criteria of the ScenarioTest.

```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]
scenario_test.get_eval_functions()
```
- Returns:
A list of ScenarioTestMetric objects.
- Return type:
List[nucleus.validate.scenario_test_metric.ScenarioTestMetric]
- get_eval_history()¶
Retrieves evaluation history for the ScenarioTest.

```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]
scenario_test.get_eval_history()
```

- Returns:
A list of ScenarioTestEvaluation objects.
- Return type:
List[nucleus.validate.scenario_test_evaluation.ScenarioTestEvaluation]
- get_items(level=EntityLevel.ITEM)¶
Gets items within a scenario test at a given level, returning a list of Track, DatasetItem, or Scene objects.
- Parameters:
level (nucleus.validate.constants.EntityLevel) – The EntityLevel at which to retrieve items.
- Returns:
A list of Track, DatasetItem, or Scene objects, depending on the level.
- Return type:
Union[List[nucleus.track.Track], List[nucleus.dataset_item.DatasetItem], List[nucleus.scene.Scene]]
- set_baseline_model(model_id)¶
Sets a new baseline model for the ScenarioTest. In order to be eligible to be a baseline, this scenario test must have been evaluated using that model. The baseline model’s performance is used as the threshold for all metrics against which other models are compared.
```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]
scenario_test.set_baseline_model("my_baseline_model_id")
```

- Parameters:
model_id (str) – ID of the model to set as the baseline.
- Returns:
A list of ScenarioTestEvaluation objects.
- class nucleus.validate.Validate(api_key, endpoint)¶
Model CI Python Client extension.
- Parameters:
api_key (str)
endpoint (str)
- create_external_eval_function(name, level=EntityLevel.ITEM)¶
Creates a new external evaluation function. An external function can be used to upload evaluation results that the customer defines and computes themselves, without having to share the source code of the respective function.
- Parameters:
name (str) – unique name of evaluation function
level (nucleus.validate.constants.EntityLevel) – level at which the eval function is run, defaults to EntityLevel.ITEM.
- Raises:
NucleusAPIError – If the creation of the function fails on the server side.
ValidationError – If the evaluation function name is not well defined.
- Returns:
Created EvalFunctionEntry object.
- Return type:
nucleus.validate.data_transfer_objects.eval_function.EvalFunctionEntry
- create_scenario_test(name, slice_id, evaluation_functions)¶
Creates a new Scenario Test from an existing Nucleus Slice:

```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.create_scenario_test(
    name="sample_scenario_test",
    slice_id="YOUR_SLICE_ID",
    evaluation_functions=[client.validate.eval_functions.bbox_iou()],
)
```
- Parameters:
name (str) – unique name of test
slice_id (str) – ID of the (pre-defined) slice of items to evaluate the test on.
evaluation_functions (List[nucleus.validate.eval_functions.base_eval_function.EvalFunctionConfig]) – EvalFunctionEntry defines an evaluation metric for the test. Created with an element from the list of available eval functions. See eval_functions.
- Returns:
Created ScenarioTest object.
- Return type:
ScenarioTest
- delete_scenario_test(scenario_test_id)¶
Deletes a Scenario Test.
```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]
success = client.validate.delete_scenario_test(scenario_test.id)
```
- Parameters:
scenario_test_id (str) – unique ID of scenario test
- Returns:
Whether deletion was successful.
- Return type:
bool
- evaluate_model_on_scenario_tests(model_id, scenario_test_names)¶
Evaluates the given model on the specified Scenario Tests.
```python
import nucleus

client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
model = client.list_models()[0]
scenario_test = client.validate.create_scenario_test(
    "sample_scenario_test", "slc_bx86ea222a6g057x4380"
)

job = client.validate.evaluate_model_on_scenario_tests(
    model_id=model.id,
    scenario_test_names=["sample_scenario_test"],
)
job.sleep_until_complete()  # Optional; blocks and reports job status.
```
- Parameters:
model_id (str) – ID of model to evaluate
scenario_test_names (List[str]) – names of the scenario tests to evaluate the model on
- Returns:
AsyncJob object of evaluation job
- Return type:
nucleus.async_job.AsyncJob