AI Data Quality Evaluation
Evaluate Data Quality Batch
Asynchronously evaluates the quality of multiple survey responses
POST
Authorizations
API key for authentication
Body
application/json
The name of the survey
URL where results should be sent when processing completes
Optional unique identifier for the survey.
Aftercare will use this to build a data model to associate questions and answers with the survey.
A description of the survey purpose and background.
Optional array of specific quality issues to evaluate. If not provided, all applicable quality issues will be checked.
Types of quality issues that can be detected in survey responses.
Nonsensical - Response lacks logical meaning or sense; likely gibberish.
Irrelevant - Response does not address the question.
Low Effort - Respondent did not put much effort into answering the question; lacks detail or concrete examples.
LLM Generated - Response appears to be generated by AI.
Self Duplicated - Responses from the same respondent contain duplicated content across multiple answers. Only evaluated when multiple survey entries are provided, or when a survey identifier and response identifier are provided.
Shared Duplicate - Responses contain duplicated content across different respondents for the same question. Only evaluated if survey identifiers and question identifiers are provided.
Available options: Nonsensical, Irrelevant, Low Effort, LLM Generated, Repeated Answers, Duplicate Answers
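As a sketch of how the quality-issue filter might be supplied, the snippet below builds a request body that restricts evaluation to two specific issues. Only `qualityIssues` and `detectionMode` are named verbatim in this reference; every other key (`surveyName`, `callbackUrl`, `surveyId`) is an assumed name for the fields described above, not a confirmed part of the schema.

```python
import json

# Hedged sketch: only "qualityIssues" appears verbatim in the reference;
# the other field names below are assumptions for illustration.
body = {
    "surveyName": "Q3 Customer Feedback",       # assumed key: "The name of the survey"
    "callbackUrl": "https://example.com/hook",  # assumed key: results webhook URL
    "surveyId": "survey-123",                   # optional unique survey identifier
    # Restrict evaluation to two specific quality issues; omitting this
    # field would cause all applicable quality issues to be checked.
    "qualityIssues": ["Nonsensical", "Low Effort"],
}

payload = json.dumps(body)
print(payload)
```

Sending the payload (e.g. with an HTTP client and the API key header) is left out here, since the endpoint path and authentication header name are not given in this section.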
The mode in which Aftercare will evaluate the quality of the survey responses.
Responsiveness - Checks for poor responses.
Authenticity - Checks for fraudulent responses.
Composite - Checks for all quality issues.
If both detectionMode and qualityIssues are provided, Aftercare will use the detection mode.
Available options: Responsiveness, Authenticity, Composite
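The precedence rule above (detection mode wins over an explicit issue list) can be sketched as follows. This is a client-side illustration of the documented behavior, not the server's actual implementation; the default of checking all applicable issues when neither field is present is taken from the field descriptions above.

```python
# Hedged sketch of the documented precedence: when both detectionMode and
# qualityIssues are present in the request body, the detection mode is used
# and the explicit issue list is ignored.
def effective_checks(body: dict) -> str:
    if "detectionMode" in body:
        return "mode:" + body["detectionMode"]
    if "qualityIssues" in body:
        return "issues:" + ",".join(body["qualityIssues"])
    # Per the reference, omitting both means all applicable issues are checked,
    # which matches the Composite mode's description.
    return "mode:Composite"

# Detection mode takes precedence even when qualityIssues is also supplied.
print(effective_checks({"detectionMode": "Authenticity",
                        "qualityIssues": ["Nonsensical"]}))
```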