Out-of-the-box Hallucination Evaluation

Datadog LLM Observability's out-of-the-box faithfulness evaluation detects hallucinations in your RAG-based LLM applications by measuring how factually consistent a generated answer is with the context provided by the retrieved documents.

To qualify for this Product Preview, you must currently use or plan to use Datadog LLM Observability. Your LLM application(s) must also be written in Python and use OpenAI.
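
For the evaluation to score an answer, the trace needs to capture both the retrieved context and the model's response. Below is a minimal sketch of one way a Python/OpenAI RAG app might be instrumented with the ddtrace LLM Observability SDK; the search_docs retriever, ml_app name, and model are illustrative placeholders, and DD_API_KEY and OPENAI_API_KEY are assumed to be set in the environment.

```python
from openai import OpenAI
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import retrieval, workflow

# Enable LLM Observability; OpenAI calls are auto-instrumented by default.
LLMObs.enable(ml_app="rag-demo", agentless_enabled=True)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_docs(question: str) -> list[dict]:
    # Placeholder retriever: swap in your vector store or search backend.
    return [{"text": "Datadog LLM Observability traces LLM applications.",
             "name": "kb-article-1", "score": 0.92}]


@retrieval
def fetch_context(question: str) -> list[dict]:
    docs = search_docs(question)
    # Annotating the retrieval span with the documents gives the
    # faithfulness evaluation the context to check the answer against.
    LLMObs.annotate(input_data=question, output_data=docs)
    return docs


@workflow
def answer(question: str) -> str:
    docs = fetch_context(question)
    context = "\n".join(d["text"] for d in docs)
    resp = client.chat.completions.create(  # traced as an LLM span
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    out = resp.choices[0].message.content
    LLMObs.annotate(input_data=question, output_data=out)
    return out


if __name__ == "__main__":
    print(answer("What does Datadog LLM Observability do?"))
```

With both the retrieval span's documents and the workflow's answer on the trace, the faithfulness evaluation can flag responses that are not supported by the retrieved context.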

