While generative AI has shown promising results in advancing software engineering, its inclusion within end-user applications is a different story. Features labeled as AI continue to pop up across every UI, but they’re not always genuinely useful. Often driven by hype, they can become a distraction, or worse, a productivity killer.
“Many fall into the trap of tacking on AI capabilities to cash in on the hype rather than because they solve a real, tangible user problem,” says Jody Bailey, chief product and technology officer at Stack Overflow. “The results are brittle features that introduce bugs, create security gaps, or disrupt workflows.”
As a result, end users are souring on “AI everywhere, all the time.” Only 8% of Americans would pay extra for AI, according to ZDNET-Aberdeen research. Amid rising AI slop concerns and growing consumer pushback, The Wall Street Journal reports that companies are becoming more cautious about how they promote AI in products.
“The biggest anti-pattern is AI everywhere without context,” says Neeraj Abhyankar, VP of data and AI at R Systems, a digital product engineering company. “Teams bolt chatbots or auto-generated content onto established products and workflows in ways that disrupt the user’s flow rather than enhance it.”
Sudden product shutdowns highlight the brittleness of AI offerings in the market. Enthusiasm for AI-generated content is also declining: 46% of users dislike companies that use AI to generate content, and 43% are less likely to purchase from them, according to SurveyMonkey’s State of Marketing 2025 report.
So, at the risk of losing potential business, is adding AI to your software product or service really worth it? And if so, how do you do it right? Below, we check in with experts to examine the downsides of hastily bolting AI onto existing products, as well as positive examples with measurable benefits, to help draw the line between useful AI and AI that just gets in the way.
Anti-patterns of bolting AI onto established products
Too many AI add-ons simply perpetuate the hype. “The most common anti-pattern is adding AI because of hype instead of a real user problem,” says Justin O’Connor, founder and CEO of Infracodebase, a unified cloud delivery platform. “That creates features people do not ask for and do not trust.”
When AI is force-fed to users without a fallback, they can feel trapped, especially when the features don’t map to clear benefits. “The biggest anti-pattern is forcing the use of AI features when the features don’t clearly provide value to the user,” says Brian Smith, principal product manager at Red Hat.
For example, chat experiences disconnected from the primary app are especially distracting, says Matt Martin, former CEO and co-founder of Clockwise, maker of a meeting scheduling assistant recently acquired by Salesforce. “Who cares about a chatbot inside Google Docs when I’m already typing? If it doesn’t enhance the core workflow or increase the efficiency, frequency, or impact of your core ROI, what is it doing?”
“Building without a clear understanding of user intent results in tools that look impressive but don’t meet actual user needs,” adds Stack Overflow’s Bailey. “In turn, these so-called solutions erode trust and slow teams down, causing frustration and leaving platforms worse off than before the AI was introduced.”
It’s also easy to fall into tracking the performance of flashy technology rather than actual user behaviors. “Teams overfocus on model quality metrics and ignore product outcomes, so the feature is technically clever but not useful,” says O’Connor.
In such situations, AI might not even be the right fit for the job at hand: Gartner reports that a key reason behind generative AI project failures is unclear business value. As Melissa Ruzzi, director of AI at AppOmni, a cybersecurity company, explains, “it may cost much more time and money to solve the same problem with AI than by a statistics and data science approach.”
AI initiative failures are also often related to data accessibility issues, says Markus Nispel, head of AI engineering and CTO EMEA at Extreme Networks, a cloud networking services provider. “The required data is not available or not understood by the system because domain-specific expertise has not been properly integrated so AI can understand it,” says Nispel.
Finally, teams often underestimate future change management requirements. “They ship AI without rethinking onboarding, documentation, or support, so users get confused and annoyed,” says Infracodebase’s O’Connor.
When AI is tacked on haphazardly, the negative outcomes can snowball. At best, users roll their eyes and become acclimated to unnecessary AI feature bloat. At worst, gratuitous AI leads to loss of trust, feature abandonment, and increased support tickets.
Best practices for integrating AI into products
When adding AI features to an existing product, a number of approaches can help avoid annoying or alienating the existing user base. The first, and arguably most important, is to think from the perspective of the user.
“Build from the perspective of ‘what do our users want and need?’, not ‘what’s something cool we could do with AI?’,” says Charity Majors, CTO at Honeycomb, an observability platform. Taking a user-first approach means the product should remain usable without AI, with easy opt-outs and no intrusive requests for feedback or ratings.
“If you’re mostly motivated by wanting to slap ‘AI-powered’ on the feature, the product, or the marketing campaign, just stop,” she adds. “Nothing gets better just because it has AI in it.”
Beyond that, AI features should be optional rather than automatically enabled. “Start by having users opt-in, not opt-out,” says R Systems’ Abhyankar. This both respects user choice and helps developers test the effects and performance of AI feature rollouts incrementally.
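In code, opt-in can be as simple as gating AI affordances behind a per-user preference that defaults to off, so the core workflow is untouched for everyone who hasn’t asked for it. Here is a minimal sketch of that pattern; `UserPrefs`, `render_compose_view`, and the widget names are hypothetical, not taken from any product mentioned above.

```python
from dataclasses import dataclass

@dataclass
class UserPrefs:
    # AI features default to off: users opt in, they are never opted out
    ai_features_enabled: bool = False

def render_compose_view(prefs: UserPrefs) -> list[str]:
    """Build the compose UI, adding AI affordances only for opted-in users."""
    widgets = ["editor", "send_button"]          # the core workflow always works
    if prefs.ai_features_enabled:
        widgets.append("ai_draft_suggestions")   # assistive extra, never required
    return widgets
```

Because the default is `False`, shipping the feature changes nothing for existing users, and the team can measure how many actively flip the switch, which is itself a useful adoption signal.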
You should also incorporate user input early. “Engaging users early and often helps avoid missteps,” says Stack Overflow’s Bailey. This can be done through feedback loops, open community prompts, or A/B testing. “Incremental rollouts with clear explanations also allow users to acclimate without feeling as if they were suddenly dropped into an unfamiliar platform,” Bailey adds.
When possible, identify users with an early adopter mindset before a broad rollout. “A test bed rollout with a smaller segment of users works well, ideally targeting early adopters who are enthusiastic about AI,” says Clockwise’s Martin. “You risk alienating users who aren’t ready to take the plunge on new tech if you bring them in too soon.”
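A staged rollout like the one Martin describes is often implemented with deterministic bucketing: early adopters are admitted first, and everyone else is admitted as a rollout percentage grows. The sketch below shows one common approach, hashing a user ID into a bucket so the same user always gets the same answer; the function and its parameters are illustrative assumptions, not a specific vendor’s API.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int, early_adopter: bool) -> bool:
    """Deterministically decide whether a user sees a feature.

    Early adopters are admitted immediately; other users are admitted
    once the rollout percentage covers their stable hash bucket.
    """
    if early_adopter:
        return True
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = digest[0] * 256 + digest[1]          # stable bucket in 0..65535
    return bucket < (percent * 65536) // 100
```

Keying the hash on both feature name and user ID keeps cohorts independent across features, so being in the 10% for one experiment doesn’t correlate with being in the 10% for the next.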
Poor user experience is a leading cause of app and site abandonment. Therefore, avoid AI features that introduce friction or force users into new ways of working. For consistency, it’s best to augment familiar controls rather than replace them outright. As Abhyankar says, “keeping AI assistive rather than intrusive is key.”
Others agree that incremental rollouts should stay close to existing workflows. “Good patterns are AI assistance that sits in the background, shows up only when useful, and can be ignored without penalty,” says O’Connor. He adds that AI should be clearly identified for the user, with clear boundaries and cues in the UI that signal when it is being used.
Examples of user-first AI feature rollouts
Successful AI feature additions are typically built to solve a clear problem and don’t force a major overhaul of the user’s workflow. So what are some examples of successful AI rollouts within existing applications?
One example of an AI addition to an established platform is Stack Overflow’s AI Assist. Introduced in 2025, AI Assist helps users access Stack Overflow’s vast knowledge base through an LLM-powered, chat-enabled interface. Reviews have commended its community-first approach and transparency.
“One of the most important lessons I learned is that AI features must evolve based on real user input,” says Stack Overflow’s Bailey. “Our community has always been vocal and engaged, and their feedback on the integration of AI has been essential.”
On the business user side, Clockwise’s Martin highlights Superhuman as a strong example, pointing to beneficial features like automatic categorization and automatic drafts. “Low friction and not a differentiated SKU, just immediate value added to the product,” he says.
Generative AI has also shown promise in low-code and no-code platforms, improving both citizen development and application integration. With AI agents layered into systems like CRMs, IT leaders report measurable gains across workflows in finance, sales, and beyond.
Within software engineering, agents integrated into existing platforms have aligned well with developer workflows, prompting providers of some SaaS tools to move away from UIs toward AI-native designs.
As O’Connor explains: “We started with a traditional SaaS product and added AI on top. What surprised us was how quickly users changed their behavior. They preferred working with the agent to solve the problem, and once they had that, they stopped using most of the classic SaaS UI.”
Another example of thoughtful optionality is the LLM-powered command-line assistant in Red Hat Enterprise Linux. It is entirely optional — engineers can simply choose not to install it. Teams can also decide how it is deployed, adds Red Hat’s Smith, including connected, offline, or on-premises configurations.
Knowing when to add AI
In 2024, researchers at Washington State University found that consumers are less likely to purchase products that use the phrase “artificial intelligence” in product descriptions, citing reduced emotional trust as the underlying reason.
A similar skepticism appears in software engineering circles. For instance, half of developers now use AI tools daily, yet 79% say they do not plan to use AI for deployments, according to a 2025 Stack Overflow Pulse survey.
Given the uncertainty around user adoption, how should product managers determine if AI features are worth it? Experts say it comes down to product-first principles. “What hasn’t worked is hanging the AI badge on a product and hoping the shiny new object attracts good users,” says Martin. “You have to fall back on existing product fundamentals.”
Clear signals of positive ROI include sustained usage, positive user feedback, increasing adoption over time, and high-quality outputs. The percentage of users choosing AI-driven features over alternative workflows is another key metric.
More concrete indicators include time saved on tasks, higher task completion rates, fewer steps to finish work, and improved output quality based on user review. On the flip side, warning signs include high opt-out rates, increased support tickets, rising user complaints, and negative sentiment captured through user feedback or AI interactions.
Perhaps the strongest signal is when users don’t even notice the AI. “Bad: when customers hate it and engineers snark about it,” adds Honeycomb’s Majors. “Good: customers forget or don’t realize it’s AI. Did you know Google spam filters are all AI?”
Use AI to solve real problems
Users are quick to click past pop-ups for AI features they’ll never use, but they embrace the ones that genuinely save time. The latter features go beyond the hype and share a common thread: they solve real problems.
“Organizations that are able to achieve meaningful ROI will be the ones that treat AI not as a vague aspiration but as a series of targeted, rapidly deployed, AI-powered workflows focused on solving real user problems,” says Extreme Networks’ Nispel. “If you are not following this model, it leads to stalled AI pilots that don’t show an ROI.”