We built a system, ProgramAsWeights (PAW), in which a neural compiler takes a plain-English function description and produces a "neural program": a continuous LoRA adapter paired with a discrete pseudo-program. At inference time, these adapt a fixed interpreter to perform the specified task. The target use case is fuzzy functions: tasks that are easy to describe in language but painful to implement with rigid rules.

A concrete fuzzy function: word-guessing

Consider the function "given a player's hint about a secret word, output the word." A keyword matcher will never bridge "fluffy thing that purrs" -> cat. A prompted LLM would work, but it is far too heavy to ship. We built this exact function as a single neural program adapting a 0.6B-parameter interpreter and turned it into a playable browser game, Alien Taboo: https://programasweights.com/alien

Model Architecture
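To make the compile-then-interpret split concrete, here is a minimal sketch of the dataflow. Everything in it is hypothetical: the class, function names, and stub behavior are illustrative stand-ins, not the actual PAW API.

```python
from dataclasses import dataclass

@dataclass
class NeuralProgram:
    lora_weights: dict          # continuous part: LoRA adapter tensors (stubbed here)
    pseudo_program: list[str]   # discrete part: pseudo-program tokens

def compile_program(description: str) -> NeuralProgram:
    """Stand-in for the hosted neural compiler: description -> neural program."""
    # A real compiler would emit trained LoRA tensors; this stub just tokenizes.
    return NeuralProgram(lora_weights={}, pseudo_program=description.split())

def run_program(program: NeuralProgram, user_input: str) -> str:
    """Stand-in for the fixed interpreter adapted by the neural program."""
    # The real interpreter would apply the LoRA adapter and condition on the
    # pseudo-program; here we only show the dataflow.
    context = " ".join(program.pseudo_program)
    return f"<interpreter[{context}] applied to: {user_input}>"

prog = compile_program("given a player's hint about a secret word, output the word")
print(run_program(prog, "fluffy thing that purrs"))
```

The point of the split is that compilation happens once (and can be expensive), while each compiled program is a small artifact you can ship and run locally.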
Results on FuzzyBench (fuzzy function tasks described in natural language, huggingface.co/datasets/yuntian-deng/fuzzy_bench_verified):
The PAW-adapted 0.6B beats the 50× larger raw-prompted 32B on this benchmark. The GPT-2 interpreter is small enough for in-browser inference via WebAssembly (~134 MB base + ~5 MB per program).

Usage

Compilation is a hosted API call (the 4B compiler takes ~16 GB of GPU memory); inference runs fully locally through llama-cpp-python.
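For local inference, the shape of the code would look roughly like the sketch below. This is an assumption-laden illustration, not the shipped PAW client: the file paths, prompt layout, and pseudo-program string are all made up; only the llama-cpp-python calls (`Llama(model_path=..., lora_path=...)` and calling the model on a prompt) are real API.

```python
import os

def build_prompt(pseudo_program: str, hint: str) -> str:
    # Assumed prompt layout: discrete pseudo-program first, then the player input.
    return f"{pseudo_program}\nHint: {hint}\nWord:"

def main() -> None:
    from llama_cpp import Llama
    # Base interpreter plus a per-program LoRA adapter; paths are illustrative.
    llm = Llama(model_path="interpreter-0.6b.gguf", lora_path="word_guess.lora")
    prompt = build_prompt("TASK: map a player's hint to the secret word",
                          "fluffy thing that purrs")
    out = llm(prompt, max_tokens=8)
    print(out["choices"][0]["text"])

if __name__ == "__main__" and os.path.exists("interpreter-0.6b.gguf"):
    main()
```

The division of labor follows the post: the heavy 4B compiler stays behind a hosted API, and only the small interpreter plus a few-MB program artifact needs to live on the user's machine.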