nucleus.validate.client

Validate

Model CI Python Client extension.

class nucleus.validate.client.Validate(api_key, endpoint)

Model CI Python Client extension.

Parameters:
  • api_key (str)

  • endpoint (str)
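
The Validate client is typically obtained from an existing NucleusClient rather than constructed directly, as the method examples below illustrate; a minimal usage sketch:

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
validate = client.validate  # Model CI / Validate extension of the Nucleus client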

create_external_eval_function(name, level=EntityLevel.ITEM)

Creates a new external evaluation function. This external function can be used to upload evaluation results with functions defined and computed by the customer, without having to share the source code of the respective function.
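
For example, an external function for a metric computed outside Nucleus might be registered as follows (the function name "my_external_iou" is illustrative):

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")

# Register a placeholder for a metric that is computed and uploaded by the customer.
# level defaults to EntityLevel.ITEM.
external_fn = client.validate.create_external_eval_function(
    name="my_external_iou"
)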

Parameters:
  • name (str)

  • level (EntityLevel)

Raises:
  • NucleusAPIError – if the creation of the function fails on the server side

  • ValidationError – if the evaluation name is not well defined

Returns:

Created EvalFunctionEntry object.

Return type:

nucleus.validate.data_transfer_objects.eval_function.EvalFunctionEntry

create_scenario_test(name, slice_id, evaluation_functions)

Creates a new Scenario Test from an existing Nucleus Slice:

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")

scenario_test = client.validate.create_scenario_test(
    name="sample_scenario_test",
    slice_id="YOUR_SLICE_ID",
    evaluation_functions=[client.validate.eval_functions.bbox_iou()]
)
Parameters:
  • name (str) – unique name of the Scenario Test

  • slice_id (str) – ID of the Nucleus Slice to evaluate the test on

  • evaluation_functions – created with elements from the list of available eval functions. See eval_functions.

Returns:

Created ScenarioTest object.

Return type:

nucleus.validate.scenario_test.ScenarioTest

delete_scenario_test(scenario_test_id)

Deletes a Scenario Test.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
scenario_test = client.validate.scenario_tests[0]

success = client.validate.delete_scenario_test(scenario_test.id)
Parameters:

scenario_test_id (str) – unique ID of scenario test

Returns:

Whether deletion was successful.

Return type:

bool

evaluate_model_on_scenario_tests(model_id, scenario_test_names)

Evaluates the given model on the specified Scenario Tests.

import nucleus
client = nucleus.NucleusClient("YOUR_SCALE_API_KEY")
model = client.list_models()[0]
scenario_test = client.validate.create_scenario_test(
    "sample_scenario_test",
    "slc_bx86ea222a6g057x4380",
    evaluation_functions=[client.validate.eval_functions.bbox_iou()],
)

job = client.validate.evaluate_model_on_scenario_tests(
    model_id=model.id,
    scenario_test_names=["sample_scenario_test"],
)
job.sleep_until_complete() # Not required. Blocks until the job completes, reporting status along the way.
Parameters:
  • model_id (str) – ID of model to evaluate

  • scenario_test_names (List[str]) – names of the Scenario Tests to evaluate the model on

Returns:

AsyncJob object of evaluation job

Return type:

nucleus.async_job.AsyncJob