AI Strategy


Feb 16, 2025

The Design of AI in 2026: Strategy, Power Shifts, and the Cost of Pretending You Understand AI

AI is no longer a differentiator. It is infrastructure that forces us to change how we think, work, and value interactions. By 2026, the gap will not be between companies that use AI and those that do not. It will be between organizations that restructured how they create value and those that simply layered AI onto existing products, workflows, and org charts.

AUTHOR

Arpy Dragffy

Design of AI is a long-form strategy podcast for product leaders, designers, researchers, founders, and executives building AI-enabled products and services. Across 49 episodes, our hosts Arpy Dragffy-Guerrero (Founder and Chief Strategy Officer at PH1) and Brittany Hobbs (Founder, BeCertain) have spoken with product leaders, researchers, designers, and executives who are actively shipping AI-powered systems. A clear pattern emerged:

Most failures are not technical. They are structural, organizational, and strategic.

The biggest risk heading into 2026 is not missing the next model release. It is operating without a coherent AI strategy that re-imagines customer experience, decision-making, and accountability.

What follows is a strategic synthesis of the most important lessons, grounded in real product experience, research, and evidence from 2024 onward.

1) Workforce reality: AI compresses skill ladders faster than organizations can adapt

The dominant story still claims AI “frees people for higher-value work.” In practice, AI is compressing roles and collapsing apprenticeships faster than organizations can redesign capability development.

In Episode 48, AI Trap: Hard Truths About the Job Market (Listen on Spotify), our hosts examine labor-market signals showing steep declines in UX, research, and mid-level creative work. The strategic risk is not job loss alone. It’s that organizations are deleting the pipeline that produces senior judgment.

Dr. Jan Emmanuele (Director of Generative AI Consulting, Superside) describes what this looks like inside enterprise creative teams in Episode 27 (Listen on Spotify): production work is being automated first because it’s easiest to cost-justify. But production work was also where people learned to see edge cases, spot broken assumptions, and develop taste.

This aligns with the World Economic Forum’s latest Future of Jobs Report showing major skill disruptions and shifting role demand in the near term. We're about to see massive changes across almost every industry.

Strategic implication: If your AI workforce plan assumes “smaller teams” without explicitly rebuilding apprenticeship and review pathways, you are manufacturing future incompetence. The day you need human judgment most is the day you’ll discover you optimized it away.

2) Strategy and ROI: shipping AI features is not a moat, it’s often AI debt

AI adoption has created a dangerous board-level incentive: “Show me what you shipped.” That pressure creates AI theater: features launched for signaling rather than customer outcomes.

In Episode 46 (Listen on Spotify), AI commercialization expert Jessica Randazza Pade (Neurable) makes a critical distinction: customers do not buy AI. They buy measurable outcome change. Most AI products are sold on the promise because they aren't yet delivering.

Nicholas Holland (SVP Product & Head of AI, HubSpot) reinforces this in Episode 42 (Listen on Spotify): wrapping an LLM without proprietary context and workflow ownership creates features that are easy to copy and hard to defend. The goal should be to enable users to handle tasks and workflows that previously seemed impossible or too effortful.

For external grounding, Gartner’s public guidance on AI value emphasizes that value is use-case specific and depends on ambition and willingness to pay, not AI presence. (Gartner: Here’s why the value of AI lies in your use cases).

Strategic implication: AI features that do not change customer behavior in durable ways become maintenance burdens, governance burdens, and trust burdens. They are “AI debt,” and it compounds.

3) Trust in the agentic era: confidence is cheap, recoverability is everything

As systems shift from copilots (suggest) to agents (act), trust becomes operational.

In Episode 29, Trust Is a Double-Edged Sword (Listen on Spotify), AI trust expert Sarah Gold (Founder of Projects by IF) draws on years designing high-stakes services (including public sector) to explain why over-trusted systems fail more catastrophically than distrusted ones: users stop checking them. She also sees a risk in organizations that fail to consider how much trust matters to adoption and, ultimately, to organizational change.

In Episode 12 (Listen on Spotify), content design expert Trisha Causley (Shopify) raises the alarm about how unusual this era is: we now have to warn users that these tools can be wrong and make mistakes. Both the people building with AI and the people using it have to be more critical in how they select, use, and trust products.

As AI moves into more autonomous roles, the stakes rise further. The risk is no longer recovering a lost document; it's damage to an entire database or to brand trust. This isn't science fiction: an AI agent already went rogue and wiped out an entire database.

Strategic implication: “Trust” is not a UI tone. Trustworthiness is a design requirement: visible uncertainty, reversible actions, clear explanations, and fast recovery paths.
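
To make that requirement concrete, here is a minimal sketch in Python (the names and the archiving example are hypothetical, not from any specific framework): the agent proposes an action, the user approves it, irreversible actions without a recovery path are blocked, and every executed action keeps an undo handle.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ProposedAction:
    """An action the agent wants to take, surfaced to the user before anything happens."""
    description: str              # plain-language explanation of what will happen
    confidence: float             # system confidence, shown to the user (0.0 to 1.0)
    execute: Callable[[], None]   # the side effect itself
    undo: Optional[Callable[[], None]] = None  # recovery path, if one exists

def run_with_guardrails(action: ProposedAction, approved: bool) -> str:
    """Only act when the user approves; refuse irreversible actions that lack an undo path."""
    if not approved:
        return f"Skipped: {action.description}"
    if action.undo is None:
        # Irreversible actions need a stricter policy than a one-click confirmation.
        return f"Blocked (no recovery path): {action.description}"
    action.execute()
    return f"Done: {action.description} (undo available, confidence {action.confidence:.0%})"

# Archiving a record is reversible, so it can proceed once the user approves it.
archive = ProposedAction(
    description="Archive customer record #4821",
    confidence=0.82,
    execute=lambda: print("archived"),
    undo=lambda: print("restored"),
)
print(run_with_guardrails(archive, approved=True))
```

The point is not this particular API; it is that approval, reversibility, and visible confidence are properties of the system, not of the copy in the interface.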

4) Ontology explained: why agents fail inside most organizations

You're going to hear the word "Ontology" a lot more in AI product circles because it defines how useful your data and workflows actually are.

Your ontology is the set of entities your organization believes exist and how they relate. Customers, accounts, tickets, orders, permissions, policies, outcomes. It’s the map of meaning and authority.

Humans operate with an implicit ontology. Agents cannot. They need the world explained to them, or they start making risky and dangerous assumptions about what data and actions mean.
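
As an illustration only (the entities and decision rights below are hypothetical), here is what writing the ontology down can look like: entities with agreed definitions, their relationships, and an explicit map of who owns which decision, so an agent checks authority instead of guessing it.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A thing the organization agrees exists, with an unambiguous definition."""
    name: str
    definition: str
    related_to: list[str] = field(default_factory=list)

@dataclass
class DecisionRight:
    """Who is allowed to decide what, and under which conditions."""
    decision: str
    owner: str              # an accountable role, not a model
    agent_may_act: bool     # can an agent execute this without a human?
    escalate_to: str = ""   # where ambiguity goes

# A tiny, hypothetical slice of an ontology for a support workflow.
ONTOLOGY = {
    "Customer": Entity("Customer", "A billed account with at least one active contract",
                       related_to=["Order", "Ticket"]),
    "Refund": Entity("Refund", "Money returned against a specific Order", related_to=["Order"]),
}

DECISION_RIGHTS = [
    DecisionRight("Issue refund under $100", owner="Support Lead", agent_may_act=True),
    DecisionRight("Issue refund over $100", owner="Finance", agent_may_act=False,
                  escalate_to="Finance approval queue"),
]

def can_agent_decide(decision: str) -> bool:
    """The agent consults the map of authority before acting, rather than inferring it."""
    return any(d.agent_may_act for d in DECISION_RIGHTS if d.decision == decision)

print(can_agent_decide("Issue refund over $100"))  # False: ambiguity resolved by design, not by the model
```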

In Episode 45, Agentics (Listen on Spotify), Kwame Nyanning (EY Seren & author of Agentics) argues that agents fail because organizations never formalized decision rights and meaning. AI makes that ambiguity visible, highlighting the inconsistencies and fragmented strategies that are common in large organizations. Until those are resolved, no amount of improved model performance will matter.

In Episode 39 (Listen on Spotify), Jochem van der Veer (Founder of TheyDo - Journey Management) shows how siloed data and inconsistent definitions prevent AI from reasoning across journeys. That's why he believes unlocking the future of CX will come from creating data surfaces that offer previously unimaginable views into the steps, actions, and outcomes of customers. AI makes it possible to weave that together.

Gartner estimates that over 40% of agentic AI projects may be scrapped by 2027, citing costs, unclear business outcomes, and “agent washing.” (Reuters: Gartner on agentic AI project cancellations).

Strategic implication: Agents amplify organizational clarity or organizational confusion. If your organization cannot clearly state who owns which decision and why, an agent will automate conflict and inconsistency at scale.

5) Discovery theater: the synthetic user trap

AI makes it tempting to simulate users instead of engaging them. That produces confident roadmaps that fail in the market.

In Episode 44 (Listen on Spotify), Teresa Torres (author of Continuous Discovery Habits) is explicit: as long as we build for humans, humans must remain in the process. The belief that synthetic users let product teams build better and faster is misguided, because synthetic users are a facsimile of what organizations believe they know about users, not the reality of how, when, and why users actually behave.

In Episode 17 (Listen on Spotify), Spotify's former data guru glenn mcdonald warns about closed loops: AI generating insights, then AI validating them. You get a runaway system that reinforces the biases of the team in charge, not what users actually need. This is why he believes that for most tasks GenAI is not the best solution, and that we should first consider more efficient and more effective approaches to data challenges.

For reference, NN/g's 2024 guidance on synthetic users defines what they are and, crucially, when they become dangerous. (Nielsen Norman Group: Synthetic users, if/when/how to use them).

Researchers continue to raise concerns about normative bias and about regurgitating patterns from training data rather than generating net-new insight: Fake Users, Real Problems: A Startup Guide to AI in UX Research (Dec 2025)

Strategic implication: AI can speed up synthesis. It cannot replace the accountability that comes from hearing a real customer say “this isn't good enough.”

6) Engineering reality: vibe coding and the quality cliff

AI has collapsed the cost of producing code and creative assets. It has not collapsed the cost of maintaining them.

In Episode 41 (Listen on Spotify), Maor Shlomo (Founder of Base44) frames the upside: small teams can ship more. The downside: markets flood and internal systems become harder to reason about. Vibe-coding platforms have kicked off a race to deliver not only faster but also more efficiently. The challenge is that prompts are not an effective way to describe a brand's nuances or the contextual requirements of these requests.

Scott Jenson (ex-Apple, Google) warns in Episode 13 (Listen on Spotify) that products that “look correct” but collapse under edge cases are the most corrosive failures. GenAI, and especially vibe coding, is making us all believe that we can do it all ourselves. The most likely result is a world filled with AI slop and a rise in consultants hired specifically to clean up the messes.

Stack Overflow’s 2024 discussion of AI code quality highlights maintainability risks and code-reuse problems. (Stack Overflow: Is AI making your code worse?).

Strategic implication: In 2026, engineering advantage shifts from writing code to auditing systems, enforcing standards, and preventing hidden fragility from becoming customer-visible failure.
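
One hedged example of what that shift looks like (the discount function below is hypothetical): treat generated code as unaudited until it survives the edge cases the original prompt never mentioned. The value is in the audit layer, which is the part the AI did not write for you.

```python
# A plausible piece of generated code: it "looks correct" for the happy path.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# The audit layer: explicit edge cases the original prompt never mentioned.
EDGE_CASES = [
    (100.0, 10, 90.0),    # happy path
    (0.0, 50, 0.0),       # zero price
    (19.99, 0, 19.99),    # no discount
    (10.0, 100, 0.0),     # full discount
    (10.0, 150, None),    # >100% discount: should be rejected, not silently applied
]

def audit():
    for price, percent, expected in EDGE_CASES:
        result = apply_discount(price, percent)
        ok = (expected is not None and result == expected)
        print(f"price={price}, percent={percent} -> {result} {'OK' if ok else 'NEEDS REVIEW'}")

audit()  # the last case prints NEEDS REVIEW: the code returns -5.0 instead of rejecting the input
```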

7) From creation to curation: taste becomes infrastructure, not “nice to have”

Most teams talk about “content volume” and “productivity.” That’s the wrong frame. In 2026, the scarce resource is not output. It’s coherence and curation.

Generative tools create abundance: more copy, more UI, more variations, more prototypes. Abundance is not value. In fact, abundance often becomes a liability because:


  • customers experience inconsistency across channels,

  • brands drift because standards aren’t enforced,

  • and internal teams lose confidence in what “good” looks like.


In Episode 20 (Listen on Spotify), creative technologist Phillip Maggs (Superside) frames the new role: the human becomes the editor, not the producer. That’s not a metaphor. It’s a structural shift in how quality is created. Those people and orgs that can codify what good means and create ontologies for brands, content, interactions, and the customer experience will be light years ahead.

In Episode 26 (Listen on Spotify), AI design expert Sarah Vienna (Chief Design Officer, Metalab) argues that product teams will need to shift their relationship with AI, becoming ten times better at editing and curating, because GenAI can produce faster than anyone can possibly review.

Strategic implication: Treat taste and standards like infrastructure. Revere your customers and their tastes. Resist the pressure to ship more when it risks harming your brand.

8) Data provenance and model risk: your product inherits liabilities you didn’t choose

Most AI-enabled products are built on foundations the product team does not control, cannot fully inspect, and cannot fully explain.

This is why Ovetta Sampson, former Director of AI and Compute Enablement at Google, argues that LLMs are the wrong default for most enterprises. When organizations treat frontier models as plug-and-play infrastructure, they quietly surrender control over data provenance, model behavior, and governance. From that moment on, every product decision inherits risk the team can no longer see, audit, or reverse. (Listen to Episode 49 on Spotify).

That creates four classes of risk that compound:


  1. Legal/IP risk: training data disputes, derivative output claims, vendor liability ambiguity

  2. Security risk: prompt injection, data leakage, tool access escalation, model misuse

  3. Trust risk: users and enterprises increasingly demand provenance and auditability

  4. Procurement risk: large customers will choose vendors who can provide defensible documentation


In Episode 5 (Listen on Spotify), Virginie Berger (rights and licensing expert) explains why this becomes an existential threat in creative and enterprise contexts. You cannot claim ethical output when you cannot explain inputs. There is undeniable evidence that all of the frontier models were trained inappropriately, and you inherit the risks of their legal disputes. Nor can they offer consistent outputs if part of their training data is removed as the result of a dispute.

In Episode 34 (Listen on Spotify), Emily M. Bender and Alex Hanna, Ph.D. (authors of The AI Con) underline the structural issue: LLMs will fundamentally never be able to deliver on all of the promises being made. Worse yet, the promise being sold is one that degrades the value of work and workers in ways that lead to massive societal risks.

Frontier models are becoming massive power brokers, capable of destroying businesses and individuals if they are ever compromised.

Stanford’s Center for Research on Foundation Models published the Foundation Model Transparency Index (Dec 2025), with scored transparency reports across major developers. While the 2024 edition showed improving transparency, scores fell from 2024 to 2025.

Why this matters in 2026:


  • Enterprises will increasingly ask for transparency artifacts (risk evals, training disclosures, red-teaming, incident response)

  • Regulators and litigators will treat “we used a model” as insufficient explanation

  • Reputation damage from provenance scandals moves faster than your remediation cycle


Strategic implication: Provenance is becoming a competitive advantage. Build for it:


  • prefer models/vendors with stronger transparency practices when feasible

  • implement data minimization and clear user-consent boundaries

  • create “model risk” ownership internally (product + legal + security + CX)

  • treat model updates like major dependency upgrades with regression testing and evals
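
A minimal sketch of that last bullet, assuming a generic call_model(model_id, prompt) placeholder rather than any specific vendor SDK: pin a golden set of cases that encode what “good” looks like for your product, score both the current and the candidate model against it, and block the swap on regression.

```python
# Hypothetical golden set: prompts plus checks that encode "what good looks like" for this product.
GOLDEN_SET = [
    {"prompt": "Summarize the refund policy in one sentence.",
     "must_include": ["refund"], "must_not_include": ["guarantee"]},
    {"prompt": "Draft a reply declining a feature request.",
     "must_include": ["thank"], "must_not_include": ["legal advice"]},
]

def call_model(model_id: str, prompt: str) -> str:
    """Placeholder: swap in your real inference call (vendor API, local model, etc.)."""
    return "Thank you for reaching out. Refunds are available within 30 days."

def score(model_id: str) -> float:
    """Fraction of golden cases a model passes."""
    passed = 0
    for case in GOLDEN_SET:
        output = call_model(model_id, case["prompt"]).lower()
        ok = all(s in output for s in case["must_include"]) and \
             not any(s in output for s in case["must_not_include"])
        passed += ok
    return passed / len(GOLDEN_SET)

def safe_to_promote(current_model: str, candidate_model: str, tolerance: float = 0.0) -> bool:
    """Treat the model swap like a dependency upgrade: block the release on regression."""
    return score(candidate_model) >= score(current_model) - tolerance

print(safe_to_promote("model-v1", "model-v2"))  # True only if the candidate matches or beats the incumbent
```

In practice the golden set grows out of real incidents and customer complaints; the mechanism matters more than these specific checks.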


If your strategy is “we’ll deal with it if it becomes a problem,” you are betting your product on legal and public patience. That’s not a strategy, it’s wishful thinking.

9) Interfaces beyond chat: customers don’t want prompts, they want outcomes

Chat is not the future of AI interfaces. It’s the first convenient container we all defaulted to. By 2026, the winning interfaces will reduce cognitive load by capturing intent through:


  • structured inputs (forms, toggles, constraints)

  • progressive disclosure (options surfaced when needed)

  • multimodal context (documents, UI state, signals)

  • and narrow decision surfaces (the system proposes, the user approves)


In Episode 10 (Listen on Spotify), AI design expert Alexandra Holness (Klaviyo) explains the practical lesson: forcing users into natural language for everything is cognitively expensive and often worse UX than traditional UI. Over the course of many product releases, her team learned that prompt boxes often performed far worse than traditional interfaces because they ask users to give up so much control.
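
A small sketch of the alternative, with hypothetical field names loosely inspired by a campaign tool: capture intent through constrained, structured inputs the UI can render as pickers and toggles, then have the system propose a plan the user approves, instead of handing the user a blank prompt box.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class CampaignIntent:
    """Structured intent: every field is a constrained choice the UI can render as
    dropdowns, toggles, and pickers instead of an open-ended prompt box."""
    goal: Literal["win_back", "upsell", "announcement"]
    audience: Literal["all_customers", "lapsed_90_days", "top_spenders"]
    tone: Literal["friendly", "formal"]
    send_window_days: int  # a validated range, not free text

def propose_plan(intent: CampaignIntent) -> str:
    """The system proposes; the user approves. No prompt engineering required from the user."""
    if not 1 <= intent.send_window_days <= 30:
        raise ValueError("send window must be between 1 and 30 days")
    return (f"Proposed: a {intent.tone} {intent.goal.replace('_', ' ')} campaign "
            f"to {intent.audience.replace('_', ' ')}, sent over {intent.send_window_days} days. "
            f"Approve to generate drafts?")

print(propose_plan(CampaignIntent(goal="win_back", audience="lapsed_90_days",
                                  tone="friendly", send_window_days=7)))
```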

In Episode 35 (Listen on Spotify), Tuhin Kumar (design leadership across Apple/Airbnb/Facebook; now at Luma) sees a creative revolution coming once we master intent capture and develop richer interaction models. The future won't belong to frontier models; it will belong to bespoke ones that master specific subject matter and are deeply interwoven with our ideal workflows.

Microsoft Research’s New Future of Work Report 2024 asks what lies “beyond chatbots” as we move toward more personalized model responses. (Microsoft Research: New Future of Work Report 2024 PDF).

Strategic implication: Stop asking “where do we add chat?” Start asking:


  • where is user intent currently hard to express?

  • where do users get stuck choosing among options?

  • where does the system already have enough context to propose the next-best action?

  • where are confirmation and reversibility required?


10) Hype, power, and the responsibility gap: AI amplifies your incentives, not your values

If you want a clean strategic lens for 2026, it’s this: AI amplifies whatever your organization is already optimizing for.

If you optimize for speed, AI gives you speed. It also gives you fragility. If you optimize for engagement, AI gives you engagement. It also gives you manipulation risk. If you optimize for cost cutting, AI gives you cost cutting. It also gives you capability collapse.

In Episode 47 (Listen on Spotify), Maya Ackerman, PhD, explained her concern about taking the incredibly creative technology of GenAI and using it to create an "oracle" that we outsource our thinking to. She is concerned that the incentives aren't aligned for the tech to be used in its most effective role: as a humble collaborator. Her book, Creative Machines: AI, Art & Us, explores this in extensive detail.

Ben Yoskovitz, co-founder of the incubator Highline Beta, raised alarm bells in Episode 6 (Listen on Spotify) about AI hype driving founders to believe they should solve problems that simply don't matter to prospective customers. The massive funding being injected into AI startups is fueling a wave of products that compete with one another, intentionally and unintentionally.

The data is unambiguous: AI now captures 71% of U.S. venture funding, yet 95% of enterprises report zero return on their generative AI investments. Meanwhile, peer-reviewed research confirms AI-generated dialogues outperform traditional media in shifting human beliefs. We're witnessing an industry that has mastered attracting capital and influencing behavior—without proving it can deliver proportionate value. That gap demands our attention.

Strategic implication: In 2026, the differentiator will be who can say, credibly:


  • what their AI is allowed to do

  • what it is not allowed to do

  • who is accountable when it fails

  • and how users can understand and correct it


The “responsibility gap” is where products lose trust and organizations lose legitimacy.
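
One way to make those four statements concrete (sketched with hypothetical names, not a real governance framework) is to write them down as a machine-checkable policy that agents consult before acting and that names an accountable human owner.

```python
from dataclasses import dataclass

@dataclass
class AIPolicy:
    """A declarative answer to: what may the AI do, what may it not do,
    who is accountable, and how do users correct it."""
    allowed_actions: set[str]
    forbidden_actions: set[str]
    accountable_owner: str         # a named role, not "the model"
    user_correction_path: str      # how a user contests or fixes an outcome
    escalation_contact: str = "on-call product lead"

    def permits(self, action: str) -> bool:
        # Anything not explicitly allowed is treated as forbidden.
        return action in self.allowed_actions and action not in self.forbidden_actions

support_agent_policy = AIPolicy(
    allowed_actions={"draft_reply", "categorize_ticket", "suggest_refund"},
    forbidden_actions={"issue_refund", "close_account", "edit_billing"},
    accountable_owner="Head of Support",
    user_correction_path="'Report this answer' flow, reviewed within 24h",
)

print(support_agent_policy.permits("suggest_refund"))  # True: the agent proposes
print(support_agent_policy.permits("issue_refund"))    # False: a human executes and stays accountable
```

The specifics will differ by product; what matters is that the answers exist somewhere other than a slide deck.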

⚠️ Reminder: Follow us on Substack for our full newsletter ⚠️

Refine Your AI Strategy in 2026

Opportunities

AI is becoming less scarce and more structural. That changes what’s possible if you stop treating it like a feature and start treating it like an operating capability.


  • LLMs and small language models are now broadly accessible. Capability is no longer gated to a handful of well-funded teams. The practical opportunity is not “use the best model,” it’s design the best system around the model: evaluation, governance, UX, and workflow fit.

  • MCP-style interoperability is pushing the ecosystem toward plug-and-play tooling. As model + tool orchestration becomes more standardized, teams can shift effort away from bespoke integrations and toward workflow ownership and intent capture. The winners will be the ones who can move fast without building brittle one-off pipelines (a minimal sketch of the idea follows this list).

  • Adoption will increase because AI is becoming ambient. More users will interact with AI through existing products and interfaces, not because they love AI, but because it’s simply there. This creates an opportunity to design AI that feels invisible and helpful rather than “a thing you must learn.”

  • Organizational reorgs can finally break the silos that block value creation. AI increases pressure to unify customer data, unify decision ownership, and unify workflows across support, sales, product, and research. That’s painful, but it unlocks something most companies have never achieved: a coherent customer truth that can drive faster decisions and better delivery.
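
To illustrate the interoperability point above (a hand-rolled sketch of the idea, not the actual Model Context Protocol SDK): once tools are described in a standard, self-documenting format, any orchestrator that understands that format can discover and call them without a bespoke integration.

```python
from typing import Any, Callable

# A standardized tool descriptor: name, human-readable purpose, and a typed input schema.
# This mirrors the spirit of MCP-style interoperability without depending on any SDK.
TOOL_REGISTRY: dict[str, dict[str, Any]] = {}

def register_tool(name: str, description: str, input_schema: dict[str, str],
                  handler: Callable[..., Any]) -> None:
    TOOL_REGISTRY[name] = {
        "description": description,
        "input_schema": input_schema,   # field name -> type, so clients can build calls safely
        "handler": handler,
    }

def list_tools() -> list[dict[str, Any]]:
    """What any compliant orchestrator would fetch before deciding which tool to call."""
    return [{"name": n, "description": t["description"], "input_schema": t["input_schema"]}
            for n, t in TOOL_REGISTRY.items()]

# A hypothetical tool owned by the CRM team; other teams' agents can discover and reuse it.
register_tool(
    name="lookup_customer",
    description="Fetch a customer profile by email address.",
    input_schema={"email": "string"},
    handler=lambda email: {"email": email, "plan": "pro", "tickets_open": 2},
)

print(list_tools())
print(TOOL_REGISTRY["lookup_customer"]["handler"]("ada@example.com"))
```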


If you treat 2026 as the year to build connective tissue across silos, AI stops being a cost center and starts acting like an engine for compounding operational advantage.

Risks

The biggest risks in 2026 will come from complexity, uncertainty, and executive overconfidence colliding.


  • Maintenance costs will rise as stacks multiply. Teams are adding models, orchestration layers, agent frameworks, and evaluation tooling on top of already sprawling SaaS ecosystems. Without ruthless consolidation and clear ownership, AI becomes a permanent tax on engineering and operations.

  • Customer data commodification will accelerate, and low-quality data will be exposed. Most organizations believe they have “proprietary data moats.” In reality, a large portion of customer data is incomplete, noisy, outdated, or contextless. As competitors gain similar datasets and synthetic data floods the ecosystem, the value of average customer data declines, and the cost of cleaning and governing it becomes unavoidable.

  • Platform uncertainty will destabilize roadmaps. OpenAI and Apple (and others) can change distribution, integration norms, and interface defaults quickly. If your product strategy depends on one vendor’s direction, pricing, or policies, your roadmap is a hostage negotiation.

  • Many AI startups will die because they’re chasing the same use cases. The market is saturated with lookalike copilots, generic agents, and “LLM wrappers.” When differentiation collapses, capital dries up fast. The same thing happens inside enterprises: internal AI initiatives get cut when they look indistinguishable from what vendors already offer.

  • Security threats and social concerns will keep escalating. Prompt injection, data leakage, model abuse, identity fraud, deepfakes, and automated persuasion will become more operationally relevant and more regulated. Trust failures will be brand failures.

  • Executives will lose their jobs because “AI literacy theater” is becoming obvious. Boards are already noticing which leaders can explain AI tradeoffs, costs, failure modes, and governance—and which leaders only speak in slogans. As outcomes disappoint and risks surface, some executives will be removed not for lack of ambition, but for lack of competence.


2026 will reward leaders who can simplify stacks, clean reality, and stop pretending.

New horizons

The next advantage won’t come from retrofitting old products with chat. It will come from rebuilding products and customer experiences around what AI enables when you start from first principles.


  • This era has been overly focused on retrofitting. The dominant pattern has been “add an assistant” or “add an agent” to an existing workflow. The next phase is redesign: re-imagining customer experience and products so the system can anticipate, coordinate, and resolve outcomes end-to-end.

  • Real-world data will transform model usefulness. As models connect more deeply to live systems, sensors, and context-rich signals, the constraint shifts from “model capability” to “data integrity + workflow design.” Products that integrate real-world feedback loops will outperform purely text-driven systems.

  • Open source models will meet most needs, and the paid frontier will narrow. For many enterprise and vertical use cases, open source models will be “good enough,” cheaper, more governable, and easier to deploy privately. Frontier models will remain valuable, but their usage will be more selective and justified. This is a strategic unlock: it reduces dependency and can radically lower operating costs if paired with strong evaluation and governance.

  • Competition will heat up, and being first AI-native in a category will require frontier-changing ideas. “AI-native” will stop meaning “we have an assistant.” It will mean: the workflow is redesigned so that AI changes what the product is, not just how it’s used. That demands new interaction patterns, new value measurement, and new adoption strategies that go beyond prompts.


In 2026, the differentiator won’t be the model. It will be whether you redesigned the system, the workflow, and the customer experience around outcomes.

Get Expert Advice

If you want to plan your 2026 AI strategy and beyond, contact Arpy Dragffy Guerrero (arpy@ph1.ca).

If you want research to improve the adoption, retention, and monetization of AI products, contact Brittany Hobbs (brittany@ph1.ca).

Dive into Your Favourite Podcast Episodes