Detecting Hallucinations and Truthfulness in Generative AI

Evaluation of Large Language Models: Hallucinations and Truthfulness

Perhaps the main reason Large Language Models started blowing everybody's minds in late 2022 was the unprecedented scope of questions these models could answer, and the accuracy of the answers given. After all, nobody would be particularly impressed if all the models could do was give grammatically well-formed but incorrect or nonsensical answers. For the first time, it looked like you could have a conversation with an entity that has most, if not all, of human knowledge at its disposal.

That being said, LLMs are not infallible. Wrong or conflicting training data can result in less than stellar output, generally categorized as hallucinations (incorrect output not directly based on a specific input source) or truthfulness issues (incorrect output due to incorrect information in the training data). Our services can help weed out these issues: our specialized contributors can annotate any question-answer pair for the correctness of the answer, and we can adapt the exact nature of the annotation to your needs. See some of our examples below!
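The distinction drawn above between hallucinations and truthfulness issues can be captured in a simple annotation data model. The following Python sketch is purely illustrative; the class names, labels, and fields are assumptions for this example, not an actual annotation API:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    """Illustrative label set based on the distinction described above."""
    CORRECT = "correct"
    HALLUCINATION = "hallucination"   # answer not grounded in any input source
    UNTRUTHFUL = "untruthful"         # answer reflects incorrect training data

@dataclass
class QAAnnotation:
    """One annotated question-answer pair (field names are assumptions)."""
    question: str
    answer: str
    verdict: Verdict
    evidence: str = ""  # optional annotator note or citation

def summarize(annotations):
    """Count annotations per verdict, e.g. to estimate a model's error rate."""
    counts = {v: 0 for v in Verdict}
    for a in annotations:
        counts[a.verdict] += 1
    return counts
```

For example, a batch of annotated pairs can be passed to `summarize` to report how many answers were correct, hallucinated, or untruthful, which is one way such per-pair judgments could be aggregated into a model-level quality score.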



Contact Us

If you want to learn more about our services for Generative AI, fill in the form and we will get back to you shortly.

© 2023 DefinedCrowd. All rights reserved.