How Prompts Move Language Model Behavior: Frames, Salience, and Construal as Semantic Control

arXiv:2512.12688v3 Announce Type: replace-cross

Abstract: Prompt engineering is widely used to shape large language model behavior, yet it is often treated as a practical heuristic rather than as a form of natural-language control. This paper develops a cognitive-semantic account in which prompts function as semantic conditions on how a fixed model interprets inputs, foregrounds information, and structures tasks. We formalize this account through three notions -- frame activation, salience control, and construal selection -- and study them in natural language inference, claim verification, and multi-hop question answering. Across these settings, prompts produce measurable changes in label judgments, evidence use, and answer-support organization, showing that prompt effects differ not only in magnitude but also in semantic direction. The paper therefore reframes prompting as the analysis of how instructions move model behavior, rather than only whether they improve performance.
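The abstract's claim that prompt effects have a semantic direction, not just a magnitude, can be illustrated with a minimal sketch. The code below is not from the paper: the label sets, helper names, and judgment data are all hypothetical, standing in for NLI labels a fixed model might emit under two different prompt framings. It computes a signed per-label shift between the two resulting label distributions.

```python
from collections import Counter

LABELS = ["entailment", "neutral", "contradiction"]

def label_distribution(judgments):
    """Normalized frequency of each NLI label in a list of model judgments."""
    counts = Counter(judgments)
    total = len(judgments)
    return {lab: counts.get(lab, 0) / total for lab in LABELS}

def prompt_effect(baseline, reframed):
    """Signed per-label shift between two prompts' label distributions.

    A positive value means the reframed prompt moves probability mass
    toward that label -- a direction of change, not just a magnitude.
    """
    p, q = label_distribution(baseline), label_distribution(reframed)
    return {lab: round(q[lab] - p[lab], 3) for lab in LABELS}

# Hypothetical judgments from the same fixed model on the same items,
# under a neutral instruction vs. a more skeptical framing:
neutral_prompt = ["entailment"] * 6 + ["neutral"] * 3 + ["contradiction"] * 1
skeptic_prompt = ["entailment"] * 3 + ["neutral"] * 4 + ["contradiction"] * 3

shift = prompt_effect(neutral_prompt, skeptic_prompt)
# shift -> {'entailment': -0.3, 'neutral': 0.1, 'contradiction': 0.2}
```

Here the skeptical framing shifts mass away from entailment and toward neutral and contradiction; two prompts could produce shifts of equal magnitude but opposite sign, which is the distinction the abstract draws.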
