🎙️ EP63 - Is the AI bubble coming? And if so… who do you trust to deal with it? June 12, 2025 | 7 min Read | Originally published at www.linkedin.com


Why trusted TECH advisors and NEDs are now critical to avoiding the AI bubble.


Hey there, digital warriors! ⚔️

Last week, we uncovered how industries evolved through socio-technical mastery, not frameworks, with two mind-blowing examples from Nicolás Bañados. The mining industry transformation, from pickaxe wielders dying in tunnels to operators controlling 50-ton excavators from drone-monitored rooms with zero casualties, was an absolute blast 🤯.

This is the power of technology paired with social engineering and behavioral change management. Something we in the software industry still struggle to achieve. We remain trapped in the belief that high performance comes from tools and frameworks rather than from behavioral evolution, mentorship, and feedback loops. We exposed what our industry lacks:

a modern blend of engineering excellence with true craftsmanship to shape the next generation of IT talent.

A mission that academia is not yet equipped to fulfill. Can we blame them? Fresh graduates enter the workforce amidst waves of digital disruption that reshape the landscape before their textbooks are even printed. Case in point: AI.

A new wave of IT engineers will enter a market flooded with AI-first consultants, many with zero academic or real-world technical experience. Boards are being sold visions that sound revolutionary but lack depth.

🗓️ This brings us to the midpoint of 2025.

AI is everywhere. But boards and organizations are struggling to invest wisely, adopt effectively, and evolve sustainably. The bubble is inflating fast. Too fast. And more than one elite player has already been hit hard 💥.

Take Duolingo. After proudly announcing an “AI-first” strategy aimed at replacing human contractors, CEO Luis von Ahn quickly walked back the statement amid public backlash and operational concerns. The company now emphasizes that AI will augment human roles, not replace them.

If even Duolingo is hesitating, the smartest leaders are starting to whisper:

👉 Are we flying blind?


Are we in a new dot-com bubble?

After this week’s amazing conversation with Maria de la Puente, renowned investor advisor, we see our observations confirmed:

  • ☝️ Boards urgently need to support their organizations with real TECH advisory knowledge.
  • ✌️ And even further, they must deploy TECH NEDs (Non-Executive Directors) to guide their companies through the coming turbulence.

Without these trusted voices in the boardroom, many will be flying blind straight into the bubble.

Maria says it clearly:

“We don’t want to repeat the same story of the dot-com bubble. We’re putting money into AI, but we don’t know enough about AI. And those who know about AI don’t know enough about AI-driven business models. So, how can we trust these investments?”

We are indeed seeing the signs of a potential AI bubble taking shape:

  • AI-first companies going bankrupt or quietly rehiring humans.
  • Boards relying heavily on financial dashboards, yet blind to engineering culture. In AI-driven or AI-enhanced organizations, they’ve begun freezing capital and making more cautious funding decisions.
  • Brilliant founders are now facing stricter due diligence, which increasingly reveals gaps in real enterprise experience, both technical and business.

This social and behavioral undercurrent is growing deeper than most boards realize. Public discourse is starting to highlight AI-driven failures like Klarna and Duolingo, though often through a lens of misinformation and shallow gossip; what I call “disinformed gossip.” Others, however, are digging deeper.

Consider:

  • GitClear - AI Copilot Code Quality: A study analyzing 211 million changed lines of code reveals an increased defect rate in AI-assisted code, with projections worsening into 2025.
  • SWE-Lancer: Research examining whether frontier LLMs can genuinely replace freelance software engineers, exposing the hype-driven placebo effect around “vibe coding.”

Don’t get me wrong; AI isn’t inherently evil. It offers incredible potential, but only if you truly understand how it works. Recent research from Arizona State University (May 2025) reinforces that not everyone is aware of the illusion being sold by the so-called “new models”:

Many of the “reasoning traces” produced by modern AI models, the so-called Chain-of-Thought outputs, are not grounded in genuine reasoning. They are stylistic artifacts crafted to sound human, not signals of true comprehension or reliability.

Boards and leadership mistake these traces for real cognitive capability, making risky bets on syntactic mimicry rather than systemic robustness.

👉 This is exactly why boards need trusted TECH advisors and TECH NEDs.

Without deep technical expertise at the table, they risk falling for AI theater. They need seasoned operators who can discern hype from substance, challenge assumptions, and guide decisions with real data-driven insight, not stylistic artifacts or marketing gloss. If boards fail to act by embedding trusted TECH advisors and TECH NEDs, the risk of falling into a new bubble is very real.

This is the same pattern we saw during the dot-com bubble:

Boards confusing apparent traction with genuine business viability. If we’re not careful, we’re about to repeat the cycle, at machine speed.

🤔 Are all these organizations embracing AI fully aware of its strengths, weaknesses, and market hallucinations?


Behavioral Risk Is the New Blind Spot

You can’t govern architecture, culture, or code through spreadsheets.

This is the fatal flaw.

Behavioral signals (Key Behavioral Indicators) are what reveal:

  • Whether teams are fragile or adaptive.
  • Whether leadership is seen as credible.
  • Whether technical debt is silently growing under your roadmap.

KBIs give boards something dashboards never will: insight into what makes or breaks your product delivery at the human level.
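To make this concrete, here is a minimal, purely illustrative sketch of what computing two such behavioral signals from delivery data could look like. The indicator names, thresholds, and the `Commit` record shape are hypothetical assumptions for illustration, not a standard KBI definition:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    author: str
    lines_added: int
    lines_churned: int  # lines rewritten or deleted shortly after being merged

def churn_rate(commits):
    """Fraction of newly written code that is reworked soon after merge.
    A rising churn rate is one behavioral signal of fragile delivery."""
    added = sum(c.lines_added for c in commits)
    churned = sum(c.lines_churned for c in commits)
    return churned / added if added else 0.0

def review_coverage(reviewed_changes, total_changes):
    """Share of changes that received an independent review before merge."""
    return reviewed_changes / total_changes if total_changes else 0.0

# Hypothetical commit history for one delivery team
history = [
    Commit("alice", lines_added=400, lines_churned=60),
    Commit("bob", lines_added=600, lines_churned=240),
]

print(f"churn rate: {churn_rate(history):.0%}")          # prints "churn rate: 30%"
print(f"review coverage: {review_coverage(18, 24):.0%}")  # prints "review coverage: 75%"
```

The point is not the arithmetic; it is that these signals live in version control and delivery workflows, not in financial spreadsheets, which is why boards governing only through dashboards never see them.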

But recognizing behavioral risk is only one part of the equation. Boards need the right expertise embedded to act on those insights and prevent costly missteps.

TECH advisor

A TECH advisor is like a structural engineer in construction:

they confirm whether the organization’s technical and architectural foundations are solid, or if they’re built on sand. During the due diligence phase, a trusted TECH advisor ensures that:

  • The architecture is scalable and resilient.
  • Technical debt is well-understood and manageable.
  • Delivery pipelines are sound and aligned with business needs.
  • Team topology mirrors the product domain.
  • AI capabilities are real and operationally viable, enhancing human potential, not fragile demos or syntactic theater.
  • Software is built with engineering mastery blended with craftsmanship, moving far beyond the simplified, metrics-driven narratives popularized by books like Accelerate. True quality comes from disciplined engineering practices rooted in experience, not just surface-level velocity metrics.

No board should approve significant AI investment without this level of validation. If boards skip deploying a trusted TECH advisor during the due diligence phase, the risk of injecting capital into technically fragile or unsustainable AI ventures is extremely high, directly fueling the next AI bubble.

TECH NED

A TECH NED plays a different but equally critical role: they are the construction site supervisor. An independent, trusted voice who ensures that the way of working delivers a robust, high-quality organization.

During the operational phase, a TECH NED ensures:

  • Engineering socio-technical behaviors align with strategic business objectives.
  • Leadership practices foster sustainable growth, not burnout.
  • Delivery processes reinforce customer feedback loops and quality.
  • Organizational culture can scale effectively with technology.

In AI-native organizations, this is not a luxury. It is an operational necessity. Boards that lack this embedded expertise risk governing based on spreadsheets and surface metrics, leaving them blind to the structural and behavioral risks that undermine AI-driven performance.

Boards need both:

👉 TECH advisors to validate foundations before investing.

👉 TECH NEDs to ensure engineering and behavioral excellence as the organization scales.

Not spreadsheet consultants. Not brand-name advisors who’ve never touched code.

But people who’ve built, scaled, and refactored gigantic spaghetti legacy monsters in the trenches.

Refactoring legacy systems isn’t for brand-name advisors. Trust those with trench 🪖 scars.

📺 Enjoy the podcast


Don’t wait to act on the AI bubble

👉 Is your board flying blind in this new AI era?

👉 Are your advisors badge-tested, or battle-tested?

👉 Will your next AI investment be built on rock, or on sand?

You are not alone. Many boards are quietly asking these same questions, behind closed doors. But the ones who act first will avoid costly mistakes.

That’s why we work with boards and funds to embed battle-tested TECH advisors and TECH NEDs: the real-world expertise needed to navigate this AI-driven future.

📈 Ready to benchmark your governance against Unicorn-grade orgs?

👉 Start your free assessment here 🔎

The bubble isn’t waiting. And neither should you.


To stay in the loop with all our updates, be sure to subscribe to our newsletter 📩 and podcast channels 🎧:

📰 LinkedIn

🎥 YouTube

📻 Spotify

📻 Apple Podcast

Michele Brissoni


Visionary Digital Evolution Strategist

Rooted in Formula 1 excellence, with over 30 years in IT starting as a child in the 1980s, …