I remember that not more than a couple of years ago, people used to laugh at anyone who called themselves a “prompt engineer”.
The thing is, I do feel like actual prompt engineering, in the way it was practiced on earlier models, is today largely obsolete. The models are very capable of understanding and intelligently working through a problem posed in a couple of natural language sentences. That’s not to say that there isn’t an art in how you talk to the model, but that art has vastly changed form, and in my experience overly intricate, verbose and specific prompts tend to degrade performance.
What’s obviously been game-changing lately is that programming is now the models’ strongest skill.
I’m not a programmer, but I did study computer science for 3 years before ultimately dropping out, and now work in a completely unrelated field.
What I notice among my colleagues and me today is that, more and more, the best way to maximize returns on AI is to break down whatever product, question, or problem you’re trying to solve into a programmable solution.
My colleagues who don’t have much computer experience tend to find AI either totally useless and untrustworthy, or the extent of their effort is to ask the model to solve a complex problem or produce an intricate document in one go, and they get back absolute slop.
Meanwhile, my colleagues and superiors are astounded at what those of us with computer experience can get the models to do. The secret, as I said, is to break complex tasks into very small steps that lead toward a programmable solution. I’ve yet to encounter a task that can’t be solved this way. But my colleagues understandably won’t be bothered to learn the algorithmic logic, data analysis, and programming methodology I’ve found is now required to use AI to its maximum potential.
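To make the “small programmable steps” idea concrete, here’s a hypothetical sketch (the task and all names are invented for illustration, not from any specific tool). Instead of asking a model to “summarize all my reports” in one shot, you write the pipeline yourself and give the model only the one small, checkable step per item; here the model call is stubbed out so the sketch runs offline:

```python
import tempfile
from pathlib import Path

def model_summarize(text: str) -> str:
    # Stand-in for a real model/API call. For this offline sketch
    # it just keeps the first sentence of the text.
    return text.split(".")[0].strip() + "."

def summarize_reports(folder: Path) -> str:
    # Step 1: deterministic code gathers the inputs -- no model needed.
    reports = sorted(folder.glob("*.txt"))
    # Step 2: the model handles one small, verifiable task per file.
    summaries = [model_summarize(p.read_text()) for p in reports]
    # Step 3: deterministic code assembles the final document.
    return "\n".join(f"- {s}" for s in summaries)

# Usage: write two toy reports, then run the pipeline over them.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.txt").write_text("Sales rose 4%. Details follow.")
(tmp / "b.txt").write_text("Churn fell. More below.")
print(summarize_reports(tmp))
# -> - Sales rose 4%.
#    - Churn fell.
```

The point isn’t this particular script; it’s that each step is small enough to inspect, so you notice immediately when one of them goes wrong, instead of receiving one giant unverifiable blob.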
This is getting a bit rambly, so I’ll stop, but I’m curious how others feel about this, what your experience has been, and where you think this is headed?