Introduction

Aftercare evaluates the quality of your responses both in the context of each response itself and in comparison with all other survey responses. Not all quality checks are created equal, so we’ve built several different metrics to give you a holistic view of your responses.

Components

Aftercare AI breaks down the quality of your responses into a few different metrics:

  • Nonsense: how coherent and logical the response itself is (e.g., gibberish or troll responses).
  • Relevance: how pertinent the response is to the question.
  • Low-effort: how much effort was put into the response (length, specific details, etc.).
  • LLM-generated: how likely the response is to be AI- or LLM-generated.
  • Self-duplication: how similar a particular answer is to previous answers within the same survey response.
  • Shared-duplication: how similar the response is to other responses for a particular question across respondents.

Aftercare generates a confidence score for each of these metrics, along with an overall quality score you can use to compare the quality of your responses. Aftercare will also tell you which (if any) of these quality metrics were violated and how many violations were found in total.
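The exact response schema may vary; as a rough sketch (all field names below are illustrative assumptions, not the documented Aftercare schema), an evaluation result might be shaped like this in TypeScript:

```typescript
// Hypothetical shape of an Aftercare quality evaluation result.
// All field names are illustrative assumptions, not the documented schema.
interface QualityMetric {
  violated: boolean;  // whether this check flagged the response
  confidence: number; // confidence score for this metric
}

interface QualityEvaluation {
  overallScore: number;    // overall quality score, comparable across responses
  totalViolations: number; // how many of the quality metrics were violated
  metrics: {
    nonsense: QualityMetric;
    relevance: QualityMetric;
    lowEffort: QualityMetric;
    llmGenerated: QualityMetric;
    selfDuplication: QualityMetric;
    sharedDuplication: QualityMetric;
  };
}
```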

Tips for improving your response quality

While you can use the data quality API to evaluate each response individually, providing a survey ID, question ID, and response ID ties responses together across respondents and yields a more accurate assessment of their quality.
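For example, here is a minimal request sketch (the endpoint URL, auth header, and body field names are assumptions for illustration, not the documented API):

```typescript
// Minimal sketch of a quality evaluation request.
// Endpoint URL, auth header, and field names are illustrative assumptions.
const res = await fetch("https://api.aftercare.ai/v1/quality/evaluate", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.AFTERCARE_API_KEY}`,
  },
  body: JSON.stringify({
    surveyId: "survey_123",     // ties responses together across a survey
    questionId: "question_456", // enables shared-duplication checks across respondents
    responseId: "response_789", // enables self-duplication checks within a response
    questionText: "What did you like most about the product?",
    responseText: "The onboarding flow was quick and the docs were clear.",
  }),
});

const evaluation = await res.json();
console.log(evaluation.overallScore, evaluation.totalViolations);
```

Supplying the three IDs lets Aftercare compare an answer against the respondent's other answers (self-duplication) and against other respondents' answers to the same question (shared-duplication).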