AI Taboo in UX: Designing a Future Nobody Asked For

We’re in a strange era of product development. One where teams are building AI features because they’re expected to, not necessarily because they’re useful. Because that’s what “innovative” companies do now, right? Because someone on the exec team saw a McKinsey slide with “GenAI” and a hockey stick.

Nobody wants to say it out loud, but here it is: A lot of AI in UX is theater. The pressure to look like you’re ahead is distorting how we build. Instead of solving real user problems, teams are chasing feature parity with competitors. Or worse, building for the Gartner quadrant. What we’re calling “AI innovation” often has more to do with optics than utility.

This post isn’t about ethics, hallucinations, or AI risk frameworks. This is about product integrity. When you add AI because it’s trendy, you’re not really innovating. You’re performing. And that performance comes at a cost: clarity, coherence, and often, user trust.

What’s worse, the more AI becomes a checkbox for stakeholders, the more it crowds out the kind of product thinking that drives value. Teams are bending their roadmaps around “How do we make it feel AI-powered?” instead of asking, “Does this help our users do their job better?”

Innovation should feel invisible. Seamless. Grounded in need. But too many AI-powered experiences feel bolted on, like they were built for the investor deck, not the end user. And that’s the quiet taboo no one wants to admit. So, let’s talk about it.

Blog Summary:

This isn’t a story about bad features. It’s about what happens when entire product strategies start orbiting around the idea of intelligence rather than its actual impact. In this post, we unpack:

  • Why many AI-driven interfaces create friction

  • How prompting culture is reshaping user behavior

  • The damage vague AI initiatives do to priorities

  • How “smart” features erode trust 

  • What your product metrics might be celebrating that you shouldn’t

  • How to tell if your product is solving for clarity

  • Why intentional UX strategy is your best defense against the AI hype cycle

Table of Contents:

  • Clarity vs. Cleverness

  • When Interfaces Start Asking

  • Metrics Aren’t What They Used to Be

  • The Roadmap Is Starting to Lie

  • When AI Doesn’t Know When to Shut Up

  • UX Contracts

  • Product Integrity Over AI Theater

Clarity vs. Cleverness

Every time a simple workflow gets replaced with a “smart assistant,” clarity takes a hit. You open a familiar interface, and suddenly there’s an AI bar asking you to describe what you want in natural language, when all you needed was a dropdown.

What used to be obvious now needs interpretation. We’re told this is the future. That these tools “reduce friction.” But friction for whom? Because from the user side, it often feels like we’re trading straightforward interactions for guesswork masked as personalization.

And the issue isn’t just bad implementation. It’s the mindset. Somewhere along the way, we stopped optimizing for understanding and started optimizing for cleverness. For surprise. For that “wow” moment in a demo.

The thing is, when users can’t predict what’s going to happen (or why something happened), they lose confidence. The interface becomes a puzzle instead of a tool, and UX integrity erodes with it. Users get trained into passive behavior and stop exploring. They hesitate before clicking, because they’re not sure what the system thinks they want.

The irony is that the smarter your product seems, the more fragile it becomes if that intelligence doesn’t deliver. And when it fails, it doesn’t fail quietly. This isn’t a call to ditch AI in UX. But it is a reminder: if it takes five tooltips to explain a feature, it probably doesn’t belong there.

When Interfaces Start Asking

AI features are supposed to streamline workflows. In practice, many of them just offload decision-making onto the user. You click a button, and suddenly the system wants you to guide it. “Describe what you want.” “Choose a tone.” “Select a use case.” What was once a clear flow becomes a choose-your-own-adventure prompt generator.

That can work for a very specific set of users, but not for everyone. For most, it’s delegation rather than help. The system was supposed to do something, but now it’s asking you what it should do.

This kind of UX trains users to guess. Internally, the effect is just as subtle. Teams start treating interaction as value. But usage isn’t usefulness. Just because someone interacts with a feature doesn’t mean it’s solving a real problem. And the more we rely on prompts to prove intelligence, the more we drift from what UX is supposed to do: reduce ambiguity, not outsource it.

Metrics Aren’t What They Used to Be

Once AI features enter the picture, traditional product metrics start breaking down. Prompt activity goes up. Click-throughs spike. Session time stretches. But under the surface, the signals get fuzzy. Are users engaged? Or just confused? Are they exploring new capabilities? Or chasing the same outcome five different ways? You can’t tell from the numbers alone.

The danger is in feedback loops. Teams start shipping more of what looks active. Prompt counts. Regeneration rates. “AI usage.” But if the feature isn’t delivering real outcomes, you’re just measuring struggle and mistaking it for traction.

This isn’t a new problem, actually. Vanity metrics have been misleading teams for years. Long before AI showed up. But with generative UX, the distortion gets harder to detect. Because the activity looks intelligent. It feels like progress. If you want to dig deeper into how these metrics creep into decisions, shift priorities, and silently take over your roadmap, we wrote a full breakdown in this post about vanity metrics.
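
To make that concrete, here’s a minimal sketch (in TypeScript, with a hypothetical event shape that isn’t tied to any particular analytics stack) of what it looks like to read activity metrics against an outcome metric instead of on their own.

```typescript
// Hypothetical event shape; not from any specific analytics tool.
type PromptEvent = {
  userId: string;
  regenerated: boolean;   // the user asked the AI to try again
  taskCompleted: boolean; // the user actually accepted and used the result
};

// Pair the activity numbers with an outcome number before celebrating either.
function summarizeUsage(events: PromptEvent[]) {
  const prompts = events.length;
  const regenerations = events.filter((e) => e.regenerated).length;
  const completions = events.filter((e) => e.taskCompleted).length;

  return {
    prompts,                                                 // looks like engagement
    regenerationRate: prompts ? regenerations / prompts : 0, // often a proxy for struggle
    completionRate: prompts ? completions / prompts : 0,     // the number that should move
  };
}

// Rising prompt counts with a flat completionRate is activity, not traction.
```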

The Roadmap Is Starting to Lie

It usually starts with a vague line item: “AI-powered insights,” “predictive recommendations,” or “smart assistant (v1).” No clear owner. No clear definition. Just an ambitious placeholder that somehow survives every roadmap review. And when it finally gets prioritized, nobody knows why; it just feels important.

That’s when you know you might be building the wrong thing. When AI shows up on your roadmap as a concept instead of a solution, it’s not a product initiative. It’s there to show momentum, not to solve a problem. And if you’re a PM, designer, or engineer staring at that line item thinking, “What does this even mean?” you’re not alone.

This is how teams end up working backwards. The roadmap becomes a branding exercise. Real product debt gets pushed down. Clear wins get shelved. And suddenly the plan isn’t anchored in user needs anymore. 

Roadmaps should reflect conviction, not confusion. And if they’re full of vague AI plays with no strategy behind them, it’s worth asking whether the vision is about the user or just about looking innovative from a distance.

When AI Doesn’t Know When to Shut Up

One of the least discussed problems in AI UX is overexposure. It starts small: an “AI suggested” label here, a chat assistant there. Then it spreads. Auto-complete in every field. Dynamic recommendations in places nobody asked for. Help overlays that inject themselves into flows that were already working fine.

Suddenly, the interface feels crowded. Not visually; cognitively. There’s a kind of fatigue that sets in when AI insists on participating in every interaction. Even when it’s trying to be helpful, it starts to feel invasive. Like a colleague who chimes in on every thread, even when they don’t have context.

Most of the time, it’s misplaced ambition. Teams assume that more AI equals more value, so they start injecting it everywhere. But not every moment needs a suggestion. Not every field needs a prediction. And not every task should be interrupted by something “smart.”

AI has the potential to elevate the experience, but only when it understands when to participate and when to step back. If every moment becomes an opportunity for intervention, users will either ignore it or, worse, lose trust in the system altogether.

UX Contracts

Good UX is built on trust. Not the abstract kind. The subtle kind. The kind you build over time by doing what the user expects, every time. When I click a button, I know what’s going to happen. When I undo an action, I know what I’m getting back. These are UX contracts. But AI breaks those contracts all the time.

You click “summarize” and get wildly inconsistent results. You open the same prompt twice and get two different tones. You correct something once, and it doesn’t learn. From the outside, nothing looks broken. But the experience feels off. And that’s where trust erodes.

This kind of inconsistency is more than annoying. It creates low-grade mental friction. Users stop exploring. They hesitate before clicking. They can end up not trusting undo, or save, or preview, because they’re not sure what logic is running underneath.

Once that confidence breaks, the product doesn’t feel smart anymore. It feels unstable. Mysterious. Even risky. And you know what? It’s usually not the AI that’s failing. It’s the UX layer around it that’s not setting expectations, not showing boundaries, not giving users a way to understand why something happened. AI doesn’t need to be perfect, but it does need to be predictable.
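
To make “predictable” concrete, here’s a small, hypothetical sketch in TypeScript. None of these names come from any real model SDK; the idea is simply that the behaviors users depend on (length, tone, determinism, reversibility) are written down as an explicit contract rather than left to whatever the model happens to do that day.

```typescript
// An illustrative "UX contract" for a summarize feature. Every name here is
// hypothetical; the point is that the promises users rely on live in one
// reviewable place instead of drifting with each model or prompt tweak.
interface SummarizeContract {
  maxSentences: number; // the output length users learn to expect
  tone: "neutral";      // one consistent tone, not a surprise per request
  temperature: 0;       // deterministic settings where the provider supports them
  showSource: true;     // always expose what the summary was built from
  undoable: true;       // results never overwrite the user's text in place
}

const SUMMARIZE_CONTRACT: SummarizeContract = {
  maxSentences: 3,
  tone: "neutral",
  temperature: 0,
  showSource: true,
  undoable: true,
};

// Changing any of these values is a product decision, reviewed like one.
```

Whether that contract lives in code, a spec, or a design doc matters less than the fact that it exists and gets enforced.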

Product Integrity Over AI Theater

If there’s one thing this whole AI wave is revealing, it’s this: Too many products are being designed for impressions, not outcomes. 

It’s easy to get swept up in the hype. To chase what’s new. But that’s how clarity gets compromised, interfaces get bloated, and teams end up solving for optics, not utility. And I know we gave AI its fair share of criticism regarding its skills in areas like software development, but in this case, AI isn’t the problem. The real issue is how we’re choosing to integrate it.

You can’t bolt intelligence onto a product that doesn’t understand its users. You can’t automate workflows that were never clear to begin with. You can’t solve real problems with AI theater.

If you want to build software that lasts (something useful, usable, and trustworthy), you need a product foundation that’s grounded in actual needs, not trends. That starts with UX. Not as a layer. Not as polish. But as a strategy.

That’s the kind of work we do at CodingIT. We help companies make better product decisions. That means understanding how users think. Designing with intention. And yes, knowing when AI adds value and when it just adds noise. If that is what you are after, then let’s talk.
