Retrieve a specific evaluation run by its ID.
NOTE: This Azure OpenAI API is in preview and subject to change.
Headers

aoai-evals: Enables access to AOAI Evals, a preview feature. This feature requires the 'aoai-evals' header to be set to 'preview'. Allowed value: preview.

Query parameters

api-version: The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified. Allowed values: v1, preview.

Response

200: The request has succeeded. The body is an eval.run object, for example:

{
  "object": "eval.run",
  "id": "<string>",
  "eval_id": "<string>",
  "status": "<string>",
  "model": "<string>",
  "name": "<string>",
  "created_at": 123,
  "report_url": "<string>",
  "result_counts": {
    "total": 123,
    "errored": 123,
    "failed": 123,
    "passed": 123
  },
  "per_model_usage": [
    {
      "model_name": "<string>",
      "invocation_count": 123,
      "prompt_tokens": 123,
      "completion_tokens": 123,
      "total_tokens": 123,
      "cached_tokens": 123
    }
  ],
  "per_testing_criteria_results": [
    {
      "testing_criteria": "<string>",
      "passed": 123,
      "failed": 123
    }
  ],
  "data_source": {
    "type": "jsonl"
  },
  "metadata": {},
  "error": {
    "code": "<string>",
    "message": "<string>"
  }
}
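This page does not include a generated request example, so the following is a minimal sketch of composing the request in Python. The route (/openai/v1/evals/{eval_id}/runs/{run_id}) is an assumption and should be checked against the official Azure OpenAI reference; the 'aoai-evals: preview' header and 'api-version' query parameter come from the parameters above, and the endpoint, IDs, and api-key value are hypothetical placeholders.

```python
from urllib.parse import quote, urlencode

def build_get_eval_run_request(endpoint: str, eval_id: str, run_id: str,
                               api_key: str, api_version: str = "preview"):
    """Compose the URL and headers for retrieving an eval run.

    NOTE: the path below is an assumed route, not confirmed by this page;
    consult the official Azure OpenAI reference for the exact URL shape.
    """
    url = (
        f"{endpoint.rstrip('/')}/openai/v1/evals/{quote(eval_id)}"
        f"/runs/{quote(run_id)}?{urlencode({'api-version': api_version})}"
    )
    headers = {
        "aoai-evals": "preview",  # required to enable this preview feature
        "api-key": api_key,       # or an Authorization: Bearer token (Entra ID)
    }
    return url, headers

# Hypothetical values for illustration:
url, headers = build_get_eval_run_request(
    "https://myresource.openai.azure.com", "eval_abc", "run_123", "<api-key>"
)
```

Send the resulting URL and headers with any HTTP client (e.g. urllib.request or requests) as a GET request.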
A schema representing an evaluation run.

object: The type of the object. Always "eval.run".
id: Unique identifier for the evaluation run.
eval_id: The identifier of the associated evaluation.
status: The status of the evaluation run.
model: The model that is evaluated, if applicable.
name: The name of the evaluation run.
created_at: Unix timestamp (in seconds) when the evaluation run was created.
report_url: The URL to the rendered evaluation run report on the UI dashboard.
result_counts: Counters summarizing the outcomes of the evaluation run.
per_model_usage: Usage statistics for each model during the evaluation run.
per_testing_criteria_results: Results per testing criteria applied during the evaluation run.
data_source: Information about the run's data source.
metadata: Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
error: An object representing an error response from the Eval API.
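As an illustration of consuming the schema above, here is a minimal sketch of a hypothetical helper (not part of the API) that summarizes the result_counts of a retrieved eval.run payload:

```python
def summarize_result_counts(run: dict) -> str:
    """One-line summary of an eval.run payload's outcome counters,
    assuming the response shape documented above."""
    counts = run["result_counts"]
    total = counts["total"]
    # Guard against division by zero for runs with no results yet.
    pass_rate = counts["passed"] / total if total else 0.0
    return (f"status={run['status']} passed={counts['passed']}/{total} "
            f"({pass_rate:.0%}) failed={counts['failed']} "
            f"errored={counts['errored']}")

# Example payload with illustrative values:
run = {
    "object": "eval.run",
    "status": "completed",
    "result_counts": {"total": 10, "errored": 1, "failed": 2, "passed": 7},
}
print(summarize_result_counts(run))
# → status=completed passed=7/10 (70%) failed=2 errored=1
```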