MachineLearning

"I don’t know!": Teaching neural networks to abstain with the HALO-Loss. [R]

Current neural networks have a fundamental geometry problem: feed them garbage data and they won't admit that they have no clue; they will confidently hallucinate instead. This happens because the standard Cross-Entropy loss requires models to push the…
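To see the failure mode concretely (a minimal NumPy sketch, not the HALO-Loss itself): standard Cross-Entropy is computed over a softmax, and softmax always normalizes to a full probability distribution over the known classes. So even for pure-noise input, all probability mass is forced onto some class and there is no built-in "I don't know" option.

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability, then normalize.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
garbage_logits = rng.normal(size=10)  # logits produced from a pure-noise input
probs = softmax(garbage_logits)

print(probs.sum())  # exactly 1: all mass is assigned to known classes
print(probs.max())  # some class is still crowned "most likely", noise or not
```

The point is structural, not about any particular network: as long as the output layer is a softmax trained with Cross-Entropy, abstention has to be added on top (extra class, threshold, or a modified loss), because the vanilla setup leaves it no room.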