The AI UX Crisis: How Tech Giants Are Breaking Software Design Principles
The Sixth Button That Broke My Workflow
Last week, I opened Microsoft Excel and discovered something that made me question everything I know about software design: a floating Copilot button that follows my cursor around, covering content, lagging behind my clicks, and generally making my life miserable. This wasn't just any AI button—it was the sixth way Microsoft had found to inject Copilot into my workflow.
Let me count them:
- The floating button that hovers over cells
- The giant Copilot button in the ribbon
- The context menu when right-clicking
- The Copilot app pinned to my taskbar (by default)
- The "click-to-do" feature on Copilot Plus PCs
- A physical Copilot key on my keyboard
And I'm betting six won't be the final count.
This isn't just a Microsoft problem. The same day, I opened Notion and found four different AI buttons on my screen simultaneously—including one placed by Firefox, my web browser, just one pixel away from my cursor.
We've Been Here Before: The Ghost of Clippy Returns
If you're experiencing déjà vu, you're not alone. This aggressive AI integration reminds me of two dark chapters in computing history:
1. Clippy: The Original Sin of Intrusive Assistance
Remember Microsoft's animated paperclip? That overeager assistant who'd pop up asking "It looks like you're writing a letter. Would you like help?" every time you typed "Dear"? We collectively hated Clippy so much that it became a cautionary tale in UX design textbooks.
Yet here we are in 2025, with Notion sporting an animated AI face custom-designed to distract, and Microsoft cramming AI suggestions into every possible interaction.
2. Browser Toolbar Hell
For those who remember the early 2000s, companies would sneakily install browser toolbars that "added functionality" but really just promoted their services. Sound familiar? Today's AI buttons feel exactly like those toolbars—unwanted additions that companies inject into our workflows because they have a product to sell, not because we asked for them.
The Current State: A Tour of AI UX Disasters
Let's examine how major tech companies are breaking fundamental design principles:
Microsoft Office: The Poster Child of AI Overreach
Outlook offers a particularly egregious example. It summarized an automated OneDrive email about a shared file as: "Maya shared a document titled 'document' with you via a link that works for everyone." After 19 seconds of processing. For an automated email Microsoft's own system had sent.
Google Workspace: Nine Buttons to Nowhere
One user reported finding nine Gemini buttons on a single Google Drive page. When clicked? "Gemini is still learning and can't help with that." If your AI can't actually do anything, maybe don't add nine buttons for it?
Meta: The Accidental AI Chat
WhatsApp, Instagram, and Facebook users report accidentally triggering AI chats they never wanted. I discovered I apparently had a conversation with Meta AI about "Sya Uno lottery results" from Colombia—something I have zero memory of ever asking about. This perfectly encapsulates Meta AI: a feature you can't disable that you trigger by accident.
Xiaomi: Breaking Core Functionality
Perhaps the most user-hostile example: Xiaomi moved the copy button—one of the most fundamental text operations—into a submenu to make room for an AI button in the primary position. Imagine breaking copy-paste to promote your AI assistant.
Why This Is Happening: The Two Driving Forces
1. The Shareholder Panic
Every tech executive I know tells the same story:
- Shareholders heard about AI being "the next big thing"
- They panicked about missing out
- Board mandates trickled down: "AI in everything, NOW!"
- Product teams are now measured by how much AI they can cram in
This inverts the entire product development process. Instead of identifying user needs and selecting appropriate technology, teams are handed a technology (AI) and told to find problems for it to solve.
2. The Platform Wars
There's a genuine race happening across the software stack:
- Application level: Notion wants its AI to be your writing assistant
- Browser level: Firefox wants to be your web AI companion
- OS level: Microsoft wants Copilot to rule them all
Each layer is fighting to establish itself as the AI interface before user habits solidify. Those popup notifications and prominent buttons? They're not accidents—they're territory markers in a platform war.
The Real Cost: Productivity and Trust
The irony: these "productivity-enhancing" AI features are actively destroying productivity by:
- Covering content we're trying to read
- Adding lag to basic operations
- Distracting with animations and popups
- Requiring us to hunt for settings to disable them
- Breaking muscle memory by moving familiar buttons
But there's a deeper cost: trust. When software actively works against user intentions, when it prioritizes corporate goals over user needs, it breaks the fundamental contract between developer and user.
A Better Way: Principles for Ethical AI Integration
Not all AI integration is bad. Here are examples of AI done right:
Good AI Integration
- Adobe's voice enhancement: salvages otherwise unusable audio recordings
- DaVinci Resolve's magic masks: enable previously impossible video edits
- Transcription services: assist with accessibility and note-taking
What makes these good?
- They solve real user problems
- They're invoked intentionally by the user
- They don't interrupt existing workflows
- They provide clear value
The Path Forward: Reclaiming User-Centered Design
As AI engineers and product developers, we have a responsibility to push back against this trend. Here's how:
For Developers
- Champion user needs over feature mandates
- Measure success by user satisfaction, not AI adoption metrics
- Design for intent: AI should enhance, not interrupt
- Provide control: Every AI feature must be easily disabled
- Respect the workflow: Don't break existing patterns
For Users
- Vote with your feet: Support software that respects your workflow
- Provide feedback: Tell companies when AI integration hurts productivity
- Explore alternatives: Linux, LibreOffice, and other options exist
- Share experiences: Public pressure can drive change
For Companies
- Start with problems, not solutions
- Test with real users in real workflows
- Measure actual productivity, not engagement metrics
- Provide granular controls for AI features
- Remember Clippy: Learn from history
Conclusion: The Future We Choose
We're at a crossroads. We can continue down the path of aggressive, user-hostile AI integration—creating a new generation of Clippy-like disasters that users will mock for decades. Or we can choose a different path: thoughtful, user-centered AI that genuinely enhances our capabilities without destroying our workflows.
The technology isn't the problem. AI can be transformative when applied thoughtfully. The problem is prioritizing corporate metrics over user needs, platform wars over productivity, and feature checkboxes over genuine utility.
As someone deeply invested in AI's potential, I believe we can do better. We must do better. Because if we don't, we risk poisoning the well for AI adoption entirely—creating a generation of users who reflexively reject AI assistance because their first experiences were so frustratingly bad.
The choice is ours. Let's choose wisely.
Source: Inspired by "Software is evolving backwards" by TechAltar
Video: https://www.youtube.com/watch?v=oXtvAQ-e0iE