POST /evals/{eval_id}/runs/{run_id}
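A request to this endpoint can be sketched as follows. This is a minimal sketch using only the Python standard library; the endpoint host, API key, and the eval/run IDs are illustrative placeholders, not values from this page:

```python
import json
import urllib.request

# All values below are hypothetical placeholders: substitute your own
# resource endpoint, API key, and the eval/run IDs you want to target.
ENDPOINT = "https://example-resource.openai.azure.com"  # placeholder host
API_KEY = "YOUR_API_KEY"                                # placeholder key
EVAL_ID = "eval_abc123"                                 # placeholder eval_id
RUN_ID = "evalrun_xyz789"                               # placeholder run_id

# Path parameters go in the URL; api-version is a query parameter
# (v1 by default, or "preview").
url = f"{ENDPOINT}/evals/{EVAL_ID}/runs/{RUN_ID}?api-version=v1"

request = urllib.request.Request(
    url,
    method="POST",
    headers={
        "api-key": API_KEY,       # required authorization header
        "aoai-evals": "preview",  # required to enable this preview feature
    },
)

# Sending the request requires a live resource, so it is left commented out:
# with urllib.request.urlopen(request) as resp:
#     run = json.load(resp)
#     print(run["status"], run["result_counts"])
print(request.full_url)
```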
{
  "object": "eval.run",
  "id": "<string>",
  "eval_id": "<string>",
  "status": "<string>",
  "model": "<string>",
  "name": "<string>",
  "created_at": 123,
  "report_url": "<string>",
  "result_counts": {
    "total": 123,
    "errored": 123,
    "failed": 123,
    "passed": 123
  },
  "per_model_usage": [
    {
      "model_name": "<string>",
      "invocation_count": 123,
      "prompt_tokens": 123,
      "completion_tokens": 123,
      "total_tokens": 123,
      "cached_tokens": 123
    }
  ],
  "per_testing_criteria_results": [
    {
      "testing_criteria": "<string>",
      "passed": 123,
      "failed": 123
    }
  ],
  "data_source": {
    "type": "jsonl"
  },
  "metadata": {},
  "error": {
    "code": "<string>",
    "message": "<string>"
  }
}

Authorizations

api-key
string
header
required

Headers

aoai-evals
enum<string>
required

Enables access to AOAI Evals, a preview feature. The header must be set to 'preview'.

Available options:
preview

Path Parameters

eval_id
string
required
run_id
string
required

Query Parameters

api-version
enum<string>
default:v1

The explicit Azure AI Foundry Models API version to use for this request. Defaults to v1 if not otherwise specified.

Available options:
v1,
preview

Response

The request has succeeded.

A schema representing an evaluation run.

object
enum<string>
default:eval.run
required

The type of the object. Always "eval.run".

Available options:
eval.run
id
string
required

Unique identifier for the evaluation run.

eval_id
string
required

The identifier of the associated evaluation.

status
string
required

The status of the evaluation run.

model
string
required

The model that is evaluated, if applicable.

name
string
required

The name of the evaluation run.

created_at
integer<unixtime>
required

Unix timestamp (in seconds) when the evaluation run was created.

report_url
string
required

The URL to the rendered evaluation run report on the UI dashboard.

result_counts
object
required

Counters summarizing the outcomes of the evaluation run.
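Given the schema above, a client can summarize an evaluation run directly from these counters. The payload below is a hypothetical example shaped like the documented response, not real service output:

```python
# Hypothetical result_counts payload, shaped like the response schema.
result_counts = {"total": 10, "errored": 1, "failed": 2, "passed": 7}

# Guard against an empty run before dividing.
total = result_counts["total"]
pass_rate = result_counts["passed"] / total if total else 0.0
print(f"pass rate: {pass_rate:.0%}")
```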

per_model_usage
object[]
required

Usage statistics for each model during the evaluation run.

per_testing_criteria_results
object[]
required

Results per testing criteria applied during the evaluation run.

data_source
object
required

Information about the run's data source.

metadata
object
required

Set of up to 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and for querying objects via the API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
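The limits above can be checked client-side before submitting a request. This is a sketch of such a validation, assuming the documented constraints (at most 16 pairs, string keys up to 64 characters, string values up to 512 characters); the service performs its own authoritative validation:

```python
def validate_metadata(metadata: dict) -> list[str]:
    """Return a list of problems with a metadata dict, per the documented
    limits: at most 16 pairs, string keys <= 64 chars, string values <= 512
    chars. An empty list means the dict passes these client-side checks."""
    problems = []
    if len(metadata) > 16:
        problems.append(f"too many pairs: {len(metadata)} > 16")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            problems.append(f"invalid key: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            problems.append(f"invalid value for key {key!r}")
    return problems

# A 65-character key exceeds the 64-character limit and is reported.
print(validate_metadata({"team": "evals", "k" * 65: "value"}))
```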

error
object
required

An object representing an error response from the Eval API.