Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models
arXiv:2511.08577v2 Announce Type: replace-cross
Abstract: Improving the reasoning abilities of Large Language Models (LLMs), especially under parameter constraints, is crucial for real-world applications. Looped transformers address this by performing mul…