cs.AI, cs.PL, cs.SE

Executing as You Generate: Hiding Execution Latency in LLM Code Generation

arXiv:2604.00491v1 Announce Type: cross
Abstract: Current LLM-based coding agents follow a serial execution paradigm: the model first generates the complete code, then invokes an interpreter to execute it. This sequential workflow leaves the executor …
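The overlap the title alludes to can be illustrated with a minimal Python sketch: instead of waiting for the complete program, each top-level statement is executed as soon as later tokens show it can no longer change. The `stream_execute` helper and the token chunking below are illustrative assumptions, not the paper's actual system.

```python
import ast

def stream_execute(chunks):
    """Overlap execution with generation: run each completed top-level
    statement as soon as the stream shows it can no longer change,
    rather than waiting for the whole program (the serial paradigm)."""
    buffer, done, ns = "", 0, {}

    def run(stmts):
        nonlocal done
        for stmt in stmts:
            mod = ast.Module(body=[stmt], type_ignores=[])
            exec(compile(mod, "<stream>", "exec"), ns)
            done += 1

    for chunk in chunks:            # chunks arrive incrementally from the model
        buffer += chunk
        try:
            tree = ast.parse(buffer)
        except SyntaxError:         # last statement is still being generated
            continue
        # The final statement may still grow (e.g. an `if` gaining body lines),
        # so hold it back and execute only the statements before it.
        run(tree.body[done:-1])
    run(ast.parse(buffer).body[done:])  # flush the final statement at end of stream
    return ns

# Token-level chunks for:  xs = [1, 2, 3]  /  total = sum(xs)
chunks = ["xs = [1, ", "2, 3]\n", "total = ", "sum(xs)\n"]
ns = stream_execute(chunks)
print(ns["total"])  # -> 6
```

In a real agent the `chunks` iterator would be the model's token stream, so interpreter time for early statements is hidden behind the latency of generating later ones.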