If you’ve been treating GPT-5 like a faster version of GPT-4, you’re likely leaving some of its best performance on the table. I read OpenAI’s prompting guide for the GPT-5 family (including 5.2 and 5.4), and it says the meta has shifted toward agentic workflows and structural precision. So these are the structural changes I’ve started making to my prompts:
- **Reasoning effort is a literal dial.** You can now programmatically set `reasoning_effort` (Low to XHigh).
  - **Pro-tip:** For deep coding refactors, set this to XHigh. It forces the model to think for minutes if necessary so it doesn't miss a single dependency.
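Here's a minimal sketch of how that dial might be wired up, assuming the Responses API's `reasoning.effort` parameter; the task names and the `EFFORT_BY_TASK` mapping are my own illustration, and the exact set of accepted effort values can vary by model, so check your model's docs:

```python
# Sketch: pick a reasoning_effort per task type before sending the request.
# "xhigh" is the high-end value described in the post; confirm your model
# actually supports it, since accepted values differ across GPT-5 variants.

EFFORT_BY_TASK = {
    "quick_lookup": "low",       # cheap, fast answers
    "code_review": "medium",     # moderate deliberation
    "deep_refactor": "xhigh",    # maximum deliberation, per the guide
}

def build_request(task: str, prompt: str) -> dict:
    """Assemble a Responses-API-style payload with a task-appropriate effort dial."""
    return {
        "model": "gpt-5",
        "input": prompt,
        "reasoning": {"effort": EFFORT_BY_TASK.get(task, "medium")},
    }

req = build_request("deep_refactor", "Refactor the payment module.")
print(req["reasoning"]["effort"])  # xhigh
```

The point is that effort becomes request-level configuration, not something you beg for in the prompt text itself.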
- **Mandatory tool preambles.** I put a `<tool_preambles>` block in any agentic task. You should instruct the model to rephrase your goal and outline a multi-step plan before it even touches a tool. This prevents those runaway loops where the AI just starts clicking things without a strategy.
- **Scoped context gathering.** The guide suggests specific XML tags to stop the AI from over-researching; `<context_gathering>` and `<persistence>` are two I actively use.
- **Respect through momentum.** This is my favorite new philosophy from the docs. The model is now trained to skip the "I understand" or "Sure, I can help" fluff. You should explicitly tell it to pivot immediately to action to maintain workflow momentum.
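Putting those tags together, a skeleton system prompt might look like this. The tag names come from OpenAI's guide; the instruction text inside each block is just my own illustration of the kind of wording that works:

```
<tool_preambles>
Before calling any tool: restate the user's goal in one sentence,
then outline a numbered plan. Narrate progress as you execute it.
</tool_preambles>

<context_gathering>
Search only until you can act. Stop gathering context once you have
enough to make a correct edit; do not re-verify what you already found.
</context_gathering>

<persistence>
Keep going until the task is fully resolved. Do not hand back to the
user for confirmation on reversible decisions; make the most reasonable
assumption and note it.
</persistence>
```

The tags don't do anything magical on their own; they just give the model clearly delimited, named sections it was trained to respect.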
Prompting is becoming more like architecting than writing. I’ve been messing around with a bunch of different prompting tools for these reasoning models because I want a one-shot engine that doesn't need constant babysitting. Let me know if you know good ones I should try, and what other dials and structures I should play around with.