nucleus.validate
================

.. py:module:: nucleus.validate

.. autoapi-nested-parse::

   Model CI Python Library.

.. autoapisummary::

   nucleus.validate.EvaluationCriterion
   nucleus.validate.ScenarioTest
   nucleus.validate.Validate

.. py:class:: EvaluationCriterion(**data)

   An Evaluation Criterion is defined as an evaluation function, a threshold, and a
   comparator. It describes how to apply an evaluation function to a scenario test.

   .. rubric:: Notes

   To define the evaluation criteria for a scenario test we've created some
   syntactic sugar to make it look closer to an actual function call, and we also
   hide away implementation details related to our data model that simply are not
   clear, UX-wise.

   Instead of defining criteria like this::

       from nucleus.validate.data_transfer_objects.eval_function import (
           EvaluationCriterion,
           ThresholdComparison,
       )

       criteria = [
           EvaluationCriterion(
               eval_function_id="ef_c6m1khygqk400918ays0",  # bbox_recall
               threshold_comparison=ThresholdComparison.GREATER_THAN,
               threshold=0.5,
           ),
       ]

   we define it like this::

       bbox_recall = client.validate.eval_functions.bbox_recall
       criteria = [bbox_recall() > 0.5]

   The chosen method allows us to document the available evaluation functions in an
   IDE-friendly fashion and hides away details like internal IDs (`"ef_...."`).

   The actual `EvaluationCriterion` is created by overloading the comparison
   operators for the base class of an evaluation function. Instead of the
   comparison returning a bool, we've made it create an `EvaluationCriterion` with
   the correct signature to send over the wire to our API.

   :param eval_function_id: ID of the evaluation function
   :type eval_function_id: str
   :param threshold_comparison: comparator for the evaluation, e.g. ``threshold=0.5``
       with comparator ``>`` means that a test only passes if ``score > 0.5``.
   :type threshold_comparison: :class:`ThresholdComparison`
   :param threshold: numerical threshold that, together with the threshold
       comparison, defines the success criteria for test evaluation.
   :type threshold: float
   :param eval_func_arguments: Arguments to pass to the eval function constructor.

   Create a new model by parsing and validating input data from keyword arguments.

   Raises ValidationError if the input data cannot be parsed to form a valid model.

   .. py:method:: construct(_fields_set = None, **values)
      :classmethod:

      Creates a new model setting __dict__ and __fields_set__ from trusted or
      pre-validated data. Default values are respected, but no other validation is
      performed. Behaves as if `Config.extra = 'allow'` was set since it adds all
      passed values.

   .. py:method:: copy(*, include = None, exclude = None, update = None, deep = False)

      Duplicate a model, optionally choosing which fields to include, exclude, and
      change.

      :param include: fields to include in the new model
      :param exclude: fields to exclude from the new model; as with values, this
          takes precedence over include
      :param update: values to change/add in the new model. Note: the data is not
          validated before creating the new model; you should trust this data
      :param deep: set to `True` to make a deep copy of the model

      :return: new model instance

   .. py:method:: dict(*, include = None, exclude = None, by_alias = False, skip_defaults = None, exclude_unset = False, exclude_defaults = False, exclude_none = False)

      Generate a dictionary representation of the model, optionally specifying
      which fields to include or exclude.

   .. py:method:: json(*, include = None, exclude = None, by_alias = False, skip_defaults = None, exclude_unset = False, exclude_defaults = False, exclude_none = False, encoder = None, models_as_dict = True, **dumps_kwargs)

      Generate a JSON representation of the model, with `include` and `exclude`
      arguments as per `dict()`.

      `encoder` is an optional function to supply as `default` to json.dumps();
      other arguments as per `json.dumps()`.

   .. py:method:: model_construct(_fields_set = None, **values)
      :classmethod:

      Creates a new instance of the `Model` class with validated data.
      Creates a new model setting `__dict__` and `__pydantic_fields_set__` from
      trusted or pre-validated data. Default values are respected, but no other
      validation is performed.

      .. note::
         `model_construct()` generally respects the `model_config.extra` setting
         on the provided model. That is, if `model_config.extra == 'allow'`, then
         all extra passed values are added to the model instance's `__dict__` and
         `__pydantic_extra__` fields. If `model_config.extra == 'ignore'` (the
         default), then all extra passed values are ignored. Because no validation
         is performed with a call to `model_construct()`, having
         `model_config.extra == 'forbid'` does not result in an error if extra
         values are passed, but they will be ignored.

      :param _fields_set: The set of field names accepted for the Model instance.
      :param values: Trusted or pre-validated data dictionary.

      :returns: A new instance of the `Model` class with validated data.

   .. py:method:: model_copy(*, update = None, deep = False)

      Usage docs: https://docs.pydantic.dev/2.8/concepts/serialization/#model_copy

      Returns a copy of the model.

      :param update: Values to change/add in the new model. Note: the data is not
          validated before creating the new model. You should trust this data.
      :param deep: Set to `True` to make a deep copy of the model.

      :returns: New model instance.

   .. py:method:: model_dump(*, mode = 'python', include = None, exclude = None, context = None, by_alias = False, exclude_unset = False, exclude_defaults = False, exclude_none = False, round_trip = False, warnings = True, serialize_as_any = False)

      Usage docs: https://docs.pydantic.dev/2.8/concepts/serialization/#modelmodel_dump

      Generate a dictionary representation of the model, optionally specifying
      which fields to include or exclude.

      :param mode: The mode in which `to_python` should run. If mode is 'json',
          the output will only contain JSON serializable types. If mode is
          'python', the output may contain non-JSON-serializable Python objects.
      :param include: A set of fields to include in the output.
      :param exclude: A set of fields to exclude from the output.
      :param context: Additional context to pass to the serializer.
      :param by_alias: Whether to use the field's alias in the dictionary key if
          defined.
      :param exclude_unset: Whether to exclude fields that have not been
          explicitly set.
      :param exclude_defaults: Whether to exclude fields that are set to their
          default value.
      :param exclude_none: Whether to exclude fields that have a value of `None`.
      :param round_trip: If True, dumped values should be valid as input for
          non-idempotent types such as Json[T].
      :param warnings: How to handle serialization errors. False/"none" ignores
          them, True/"warn" logs errors, "error" raises a
          [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
      :param serialize_as_any: Whether to serialize fields with duck-typing
          serialization behavior.

      :returns: A dictionary representation of the model.

   .. py:method:: model_dump_json(*, indent = None, include = None, exclude = None, context = None, by_alias = False, exclude_unset = False, exclude_defaults = False, exclude_none = False, round_trip = False, warnings = True, serialize_as_any = False)

      Usage docs: https://docs.pydantic.dev/2.8/concepts/serialization/#modelmodel_dump_json

      Generates a JSON representation of the model using Pydantic's `to_json`
      method.

      :param indent: Indentation to use in the JSON output. If None is passed, the
          output will be compact.
      :param include: Field(s) to include in the JSON output.
      :param exclude: Field(s) to exclude from the JSON output.
      :param context: Additional context to pass to the serializer.
      :param by_alias: Whether to serialize using field aliases.
      :param exclude_unset: Whether to exclude fields that have not been
          explicitly set.
      :param exclude_defaults: Whether to exclude fields that are set to their
          default value.
      :param exclude_none: Whether to exclude fields that have a value of `None`.
      :param round_trip: If True, dumped values should be valid as input for
          non-idempotent types such as Json[T].
      :param warnings: How to handle serialization errors. False/"none" ignores
          them, True/"warn" logs errors, "error" raises a
          [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
      :param serialize_as_any: Whether to serialize fields with duck-typing
          serialization behavior.

      :returns: A JSON string representation of the model.

   .. py:method:: model_json_schema(by_alias = True, ref_template = DEFAULT_REF_TEMPLATE, schema_generator = GenerateJsonSchema, mode = 'validation')
      :classmethod:

      Generates a JSON schema for a model class.

      :param by_alias: Whether to use attribute aliases or not.
      :param ref_template: The reference template.
      :param schema_generator: To override the logic used to generate the JSON
          schema, as a subclass of `GenerateJsonSchema` with your desired
          modifications.
      :param mode: The mode in which to generate the schema.

      :returns: The JSON schema for the given model class.

   .. py:method:: model_parametrized_name(params)
      :classmethod:

      Compute the class name for parametrizations of generic classes.

      This method can be overridden to achieve a custom naming scheme for generic
      BaseModels.

      :param params: Tuple of types of the class. Given a generic class `Model`
          with 2 type variables and a concrete model `Model[str, int]`, the value
          `(str, int)` would be passed to `params`.

      :returns: String representing the new class where `params` are passed to
          `cls` as type variables.

      :raises TypeError: Raised when trying to generate concrete names for
          non-generic models.

   .. py:method:: model_post_init(__context)

      Override this method to perform additional initialization after `__init__`
      and `model_construct`. This is useful if you want to do some validation that
      requires the entire model to be initialized.

   .. py:method:: model_rebuild(*, force = False, raise_errors = True, _parent_namespace_depth = 2, _types_namespace = None)
      :classmethod:

      Try to rebuild the pydantic-core schema for the model.

      This may be necessary when one of the annotations is a ForwardRef which
      could not be resolved during the initial attempt to build the schema, and
      automatic rebuilding fails.

      :param force: Whether to force the rebuilding of the model schema, defaults
          to `False`.
      :param raise_errors: Whether to raise errors, defaults to `True`.
      :param _parent_namespace_depth: The depth level of the parent namespace,
          defaults to 2.
      :param _types_namespace: The types namespace, defaults to `None`.

      :returns: Returns `None` if the schema is already "complete" and rebuilding
          was not required. If rebuilding _was_ required, returns `True` if
          rebuilding was successful, otherwise `False`.

   .. py:method:: model_validate(obj, *, strict = None, from_attributes = None, context = None)
      :classmethod:

      Validate a pydantic model instance.

      :param obj: The object to validate.
      :param strict: Whether to enforce types strictly.
      :param from_attributes: Whether to extract data from object attributes.
      :param context: Additional context to pass to the validator.

      :raises ValidationError: If the object could not be validated.

      :returns: The validated model instance.

   .. py:method:: model_validate_json(json_data, *, strict = None, context = None)
      :classmethod:

      Usage docs: https://docs.pydantic.dev/2.8/concepts/json/#json-parsing

      Validate the given JSON data against the Pydantic model.

      :param json_data: The JSON data to validate.
      :param strict: Whether to enforce types strictly.
      :param context: Extra variables to pass to the validator.

      :returns: The validated Pydantic model.

      :raises ValueError: If `json_data` is not a JSON string.

   .. py:method:: model_validate_strings(obj, *, strict = None, context = None)
      :classmethod:

      Validate the given object with string data against the Pydantic model.
      :param obj: The object containing string data to validate.
      :param strict: Whether to enforce types strictly.
      :param context: Extra variables to pass to the validator.

      :returns: The validated Pydantic model.

   .. py:method:: update_forward_refs(**localns)
      :classmethod:

      Try to update ForwardRefs on fields based on this Model, globalns and
      localns.

.. py:class:: ScenarioTest

   A Scenario Test combines a slice and at least one evaluation criterion. A
   :class:`ScenarioTest` is not created through the default constructor but using
   the instructions shown in :class:`Validate`. This :class:`ScenarioTest` class
   only simplifies the interaction with the scenario tests from this SDK.

   .. attribute:: id

      The ID of the scenario test.

      :type: str

   .. attribute:: connection

      The connection to the Nucleus API.

      :type: Connection

   .. attribute:: name

      The name of the scenario test.

      :type: str

   .. attribute:: slice_id

      The ID of the associated Nucleus slice.

      :type: str

   .. py:method:: add_eval_function(eval_function)

      Creates and adds a new evaluation metric to the :class:`ScenarioTest`. ::

          import nucleus
          client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
          scenario_test = client.validate.create_scenario_test(
              "sample_scenario_test", "slc_bx86ea222a6g057x4380"
          )

          e = client.validate.eval_functions
          # Assuming a user would like to add all available public evaluation
          # functions as criteria
          scenario_test.add_eval_function(e.bbox_iou)
          scenario_test.add_eval_function(e.bbox_map)
          scenario_test.add_eval_function(e.bbox_precision)
          scenario_test.add_eval_function(e.bbox_recall)

      :param eval_function: :class:`EvalFunction`

      :raises NucleusAPIError: By adding this function, the scenario test mixes
          external with non-external functions, which is not permitted.

      :returns: The created ScenarioTestMetric object.

   .. py:method:: get_eval_functions()

      Retrieves all criteria of the :class:`ScenarioTest`.
      ::

          import nucleus
          client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
          scenario_test = client.validate.scenario_tests[0]
          scenario_test.get_eval_functions()

      :returns: A list of ScenarioTestMetric objects.

   .. py:method:: get_eval_history()

      Retrieves the evaluation history for the :class:`ScenarioTest`. ::

          import nucleus
          client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
          scenario_test = client.validate.scenario_tests[0]
          scenario_test.get_eval_history()

      :returns: A list of :class:`ScenarioTestEvaluation` objects.

   .. py:method:: get_items(level = EntityLevel.ITEM)

      Gets the items within a scenario test at a given level.

      :param level: :class:`EntityLevel`

      :returns: A list of :class:`Track`, :class:`DatasetItem`, or :class:`Scene`
          objects, depending on the given level.

   .. py:method:: set_baseline_model(model_id)

      Sets a new baseline model for the ScenarioTest. To be eligible to be a
      baseline, this scenario test must have been evaluated using that model. The
      baseline model's performance is used as the threshold for all metrics
      against which other models are compared. ::

          import nucleus
          client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
          scenario_test = client.validate.scenario_tests[0]
          scenario_test.set_baseline_model('my_baseline_model_id')

      :returns: A list of :class:`ScenarioTestEvaluation` objects.

.. py:class:: Validate(api_key, endpoint)

   Model CI Python Client extension.

   .. py:method:: create_external_eval_function(name, level = EntityLevel.ITEM)

      Creates a new external evaluation function. This external function can be
      used to upload evaluation results with functions defined and computed by the
      customer, without having to share the source code of the respective
      function.

      :param name: unique name of the evaluation function
      :param level: level at which the eval function is run; defaults to
          EntityLevel.ITEM.
      :raises NucleusAPIError: if the creation of the function fails on the
          server side
      :raises ValidationError: if the evaluation name is not well defined

      :returns: Created EvalFunctionConfig object.

   .. py:method:: create_scenario_test(name, slice_id, evaluation_functions)

      Creates a new Scenario Test from an existing Nucleus :class:`Slice`. ::

          import nucleus
          client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
          scenario_test = client.validate.create_scenario_test(
              name="sample_scenario_test",
              slice_id="YOUR_SLICE_ID",
              evaluation_functions=[client.validate.eval_functions.bbox_iou()]
          )

      :param name: unique name of the test
      :param slice_id: id of the (pre-defined) slice of items to evaluate the test
          on
      :param evaluation_functions: :class:`EvalFunctionEntry` defines an
          evaluation metric for the test. Created with an element from the list of
          available eval functions. See :class:`eval_functions`.

      :returns: Created ScenarioTest object.

   .. py:method:: delete_scenario_test(scenario_test_id)

      Deletes a Scenario Test. ::

          import nucleus
          client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
          scenario_test = client.validate.scenario_tests[0]
          success = client.validate.delete_scenario_test(scenario_test.id)

      :param scenario_test_id: unique ID of the scenario test

      :returns: Whether deletion was successful.

   .. py:method:: evaluate_model_on_scenario_tests(model_id, scenario_test_names)

      Evaluates the given model on the specified Scenario Tests. ::

          import nucleus
          client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
          model = client.list_models()[0]
          scenario_test = client.validate.create_scenario_test(
              "sample_scenario_test", "slc_bx86ea222a6g057x4380"
          )

          job = client.validate.evaluate_model_on_scenario_tests(
              model_id=model.id,
              scenario_test_names=["sample_scenario_test"],
          )
          job.sleep_until_complete()  # Not required. Will block and update on the status of the job.
      :param model_id: ID of the model to evaluate
      :param scenario_test_names: list of names of the scenario tests to evaluate
          the model on

      :returns: AsyncJob object of the evaluation job
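The comparison-operator mechanism described in the :class:`EvaluationCriterion` notes above can be sketched in isolation. The following is a minimal, self-contained illustration, not the SDK's actual implementation: the class names mirror the SDK, but the enum values, constructors, and the standalone ``EvalFunction`` class are assumptions made for this sketch.

```python
from dataclasses import dataclass
from enum import Enum


class ThresholdComparison(str, Enum):
    """Comparator names as they might be sent over the wire (values assumed)."""
    GREATER_THAN = "greater_than"
    GREATER_THAN_EQUAL_TO = "greater_than_equal_to"
    LESS_THAN = "less_than"
    LESS_THAN_EQUAL_TO = "less_than_equal_to"


@dataclass
class EvaluationCriterion:
    """Payload describing a pass/fail condition for one eval function."""
    eval_function_id: str
    threshold_comparison: ThresholdComparison
    threshold: float


class EvalFunction:
    """Stand-in for a configured eval function such as `bbox_recall()`."""

    def __init__(self, eval_function_id: str):
        self.eval_function_id = eval_function_id

    # Comparisons return an EvaluationCriterion instead of a bool, which is
    # what makes an expression like `bbox_recall() > 0.5` produce a criterion.
    def __gt__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(
            self.eval_function_id, ThresholdComparison.GREATER_THAN, threshold
        )

    def __ge__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(
            self.eval_function_id, ThresholdComparison.GREATER_THAN_EQUAL_TO, threshold
        )

    def __lt__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(
            self.eval_function_id, ThresholdComparison.LESS_THAN, threshold
        )

    def __le__(self, threshold: float) -> EvaluationCriterion:
        return EvaluationCriterion(
            self.eval_function_id, ThresholdComparison.LESS_THAN_EQUAL_TO, threshold
        )


# The comparison yields a criterion object rather than a bool, so a list of
# criteria reads like a list of assertions about scores.
bbox_recall = EvalFunction("ef_c6m1khygqk400918ays0")
criteria = [bbox_recall > 0.5]
print(criteria[0].threshold_comparison.value)  # greater_than
```

Note that because the operators are overloaded, such objects should not be used in boolean contexts (e.g. `if bbox_recall > 0.5:`), since the "comparison" never returns `True` or `False`.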