Broken by Default: A Formal Verification Study of Security Vulnerabilities in AI-Generated Code
arXiv:2604.05292v2 Announce Type: replace-cross
Abstract: AI coding assistants are now used to generate production code in
security-sensitive domains, yet the exploitability of their outputs remains
unquantified. We address this gap with Broken by Default: a formal
verification study of 3,500 code artifacts generated by seven widely deployed LLMs
across 500 security-critical prompts (five CWE categories, 100 prompts each).
Each artifact is analyzed with the Z3 SMT solver via the COBALT analysis
pipeline, which produces mathematical satisfiability witnesses rather than
relying on pattern-based heuristics. Across all models, 55.8% of artifacts
contain at least one COBALT-identified vulnerability; 1,055 of these are
formally proven by Z3 satisfiability witnesses. GPT-4o has the highest
vulnerability rate at 62.4% (grade F); Gemini 2.5 Flash the lowest at 48.4%
(grade D). No model achieves a grade better than D. Six of seven
representative findings are confirmed with runtime crashes
under GCC AddressSanitizer. Three auxiliary experiments show: (1) explicit
security instructions reduce the mean vulnerability rate by only 4 percentage
points; (2) six industry tools combined miss 97.8% of the Z3-proven findings;
and (3) models identify their own vulnerable outputs 78.7% of the time in
review mode yet generate them at a 55.8% rate by default.