- Training Language Models to Self-Correct via Reinforcement Learning (arXiv:2409.12917)
- FactAlign: Long-form Factuality Alignment of Large Language Models (arXiv:2410.01691)
- LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations (arXiv:2410.02707)
- ECon: On the Detection and Resolution of Evidence Conflicts (arXiv:2410.04068)