In the often-cited MIT report, State of AI in Business 2025, the headline finding is that 95% of GenAI pilots fail. What defines the success of the other 5%? The report identifies the core barrier as learning: GenAI systems need to be able to “retain feedback, adapt to context, or improve over time.” This is our core goal at Bigspin as well. Our mission is to help teams create feedback and optimization loops that turn their data and domain knowledge into continuously improving, trustworthy AI products.
In his piece for Forbes, Jason Snyder interprets the findings this way: 95% of GenAI pilots fail not because AI lacks power, but because organizations try to remove friction. The 5% that succeed design for it. Let’s explore Snyder’s ideas about friction through the lens of product design.
Frictionless, effortless, seamless.
These are all common catchphrases used to describe designing digital experiences over the last decade. Product designers already understand friction. We've just been trained to reduce it (usually, though not always). However, as a product designer focused on GenAI experiences, I think we should lean into friction even more, both by applying it deliberately and by using it as a learning mechanism.
The two key types of friction I believe are important to consider when designing GenAI systems are:
- Cognitive friction: Good when deliberately imposed, because it builds accountability and trust with AI.
- Social friction: We want less of this type, but recognizing it when it surfaces is useful, because it gives designers something to learn from.
Let's get into them.
Cognitive friction: The pause that builds trust
Cognitive friction is the mental resistance users feel when a product challenges them to think, decide, or adapt. (We have all felt this.)
In design, we've often focused on reducing cognitive friction by making interfaces intuitive, predictable, and sometimes even invisible. But not all mental effort is bad.
Productive cognitive friction helps users learn, reflect, and make better decisions. Think about the pause before confirming a transaction, or the "Are you sure?" prompt before deleting something important. These moments don't reduce friction, they introduce it intentionally.
In traditional UX, we reduce cognitive friction by hiding complexity. In GenAI UX, we need to expose complexity at decision points where stakes are high. This translates to forcing reflection at the right moments where human judgment is needed.
Just as you might design a "moment of pause" in an experience to build confidence, GenAI systems can use similar friction to build trust (an extremely important ingredient) and accountability.
Here’s one example: After transcribing a tense project meeting, an AI might ask: “Should I summarize this emphasizing decisions made, or including the disagreements that led to those decisions?” For a team lead, the disagreements might be crucial for understanding buy-in. For an executive reading updates, just the decisions matter. The friction creates space for the user to shape the output to their actual need, helping them feel seen in the process.
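This clarification pause can be sketched as a small interaction pattern: before generating output, the system surfaces a framing question instead of guessing silently. The names here (`ClarificationGate`, the option keys) are illustrative, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class ClarificationGate:
    """A deliberate pause: ask the user to choose a framing before generating."""
    question: str
    options: dict[str, str]  # option key -> prompt instruction for the model

    def resolve(self, choice: str) -> str:
        # Raising on unknown input keeps the pause deliberate rather than
        # silently falling back to a default the user never chose.
        if choice not in self.options:
            raise ValueError(f"Unknown option: {choice!r}")
        return self.options[choice]

gate = ClarificationGate(
    question="Summarize emphasizing decisions, or include the disagreements?",
    options={
        "decisions": "Summarize only the decisions made.",
        "disagreements": "Summarize decisions plus the disagreements behind them.",
    },
)

# A team lead picks the richer framing; an executive might pick "decisions".
instruction = gate.resolve("disagreements")
```

The design choice is that the gate fires only at this high-stakes moment; routine requests would bypass it entirely, which is the calibration discussed below.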
In another example, when Claude generates an Artifact, the AI’s thinking process is exposed as a very large chunk of text and/or code. On the surface, this looks like a cognitively demanding moment, but the user isn’t expected to actually read all of it. It’s there to build trust through transparency.
Of course, too much cognitive friction becomes user hostility. The art is in calibration. Pause at high-stakes decisions, but don't make users justify every minor action. Designers need to introduce this friction wisely.
Social friction: The realities of being human
Social friction in product design refers to resistance in adoption or collaboration that comes from interpersonal dynamics, organizational politics, or misaligned goals. It often reveals where a product conflicts with how people actually work.
I've encountered this throughout my career. Years ago, I designed an updated interface for personal development coaches. Despite extensive generative research and user feedback sessions, a subset of coaches still pushed back. Change is hard. They didn’t want to learn a new flow, even one faster than their previous workarounds.
This dynamic becomes even more pronounced with AI systems. We're still in the early stages of understanding how to build and use AI effectively. While some people embrace AI readily, others approach it with skepticism or confusion. Their mental models, their workflows, their trust levels, all of these create social friction that designers must navigate.
One of the most common sources of social friction we've observed at Bigspin is fragmented feedback processes. Teams are using threaded Slack messages, cumbersome spreadsheets, email, screenshots – there is no centralized repository, and no source of truth.
When this happens, it becomes impossible to capture and coordinate the needs of diverse stakeholders and users. Important feedback slips through the cracks. Engineers can't find what product managers documented. Domain experts aren't sure if their input was even considered.
This isn't just an organizational inconvenience, it's social friction signaling a deeper problem. The lack of systematic feedback collection reveals misalignment about who owns AI quality, how decisions get made, and whose input matters.
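A minimal sketch of what a single source of truth could look like, so that every stakeholder reads and writes the same structured log instead of scattered Slack threads and spreadsheets. Field names and roles here are illustrative assumptions, not Bigspin's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackEntry:
    """One structured piece of feedback on one AI output."""
    author_role: str      # e.g. "engineer", "pm", "domain_expert"
    model_output_id: str  # which AI output the feedback refers to
    verdict: str          # e.g. "accept", "reject", "needs_revision"
    note: str = ""
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class FeedbackLog:
    """An in-memory source of truth; a real system would persist this."""

    def __init__(self) -> None:
        self._entries: list[FeedbackEntry] = []

    def add(self, entry: FeedbackEntry) -> None:
        self._entries.append(entry)

    def for_output(self, output_id: str) -> list[FeedbackEntry]:
        # Everyone queries the same store, so a PM's note and a domain
        # expert's verdict land in the same place an engineer searches.
        return [e for e in self._entries if e.model_output_id == output_id]

log = FeedbackLog()
log.add(FeedbackEntry("domain_expert", "out-42", "needs_revision", "Tone too casual"))
log.add(FeedbackEntry("pm", "out-42", "accept"))
related = log.for_output("out-42")
```

Even a structure this simple makes the ownership question explicit: every entry names who gave the feedback and which output it concerns, which is exactly what fragmented channels lose.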
Designing with friction, not against it
This designer's approach, being intentional about friction, is central to how we've built Bigspin. Our platform helps teams systematically collect feedback and optimize their AI systems, helping identify when to introduce productive cognitive friction while learning from the social friction that signals deeper challenges.
The catchphrases of the last decade got us this far, but they won't carry us forward. The difference between the 95% of GenAI pilots that fail and the 5% that succeed isn't in the AI's capabilities. It’s whether teams lean into friction productively, learn from their feedback, and turn it into a continuous learning loop to optimize their systems.
As a GenAI designer, some questions to consider:
- Where in your AI workflow do users need to pause and reflect?
- What social friction is your team experiencing in AI adoption?
- Are you collecting feedback systematically or in scattered channels?




