Large Language Models for Multilingual Code Intelligence: A Survey

arXiv:2604.25960v1 | Announce Type: cross

Abstract: Large language models have transformed AI-assisted software engineering, but current research remains biased toward high-resource languages such as Python, with markedly weaker performance in lower-resource languages such as Rust and OCaml. Because real-world systems are inherently polyglot, robust multilingual code intelligence is crucial. This survey focuses on two key tasks: multilingual code generation from shared natural-language requirements, and multilingual code translation that preserves semantics across languages. It reviews representative methods, benchmarks, and evaluation metrics, and highlights open challenges and opportunities for trustworthy cross-language generalization.
