YedsonUQ's Collections

Hallucination

updated Jun 9

  • Linear Correlation in LM's Compositional Generalization and Hallucination

    Paper • arXiv:2502.04520 • Published Feb 6 • 11 upvotes

  • How to Steer LLM Latents for Hallucination Detection?

    Paper • arXiv:2503.01917 • Published Mar 1 • 11 upvotes

  • Are Reasoning Models More Prone to Hallucination?

    Paper • arXiv:2505.23646 • Published May 29 • 24 upvotes