🚨 OPENER

In July 2025, MIT's NANDA project released "The GenAI Divide: State of AI in Business 2025," a study examining enterprise AI implementation across multiple industries. The research, based on structured interviews with 52 organizations, surveys of 153 senior leaders, and a review of 300+ public AI initiatives, claimed that despite $30-40 billion in GenAI spending, 95% of companies achieve zero measurable returns from their AI investments.

The "95% failure rate" statistic spread rapidly across social media and financial channels, contributing to broader tech stock volatility. While sensational coverage linked the headlines to market pullbacks, no verifiable data supports specific multi-billion-dollar losses attributed directly to this report for any company, including claims about Nvidia's market capitalization.

It's strange that the first publication of this newsletter isn't about a tool that claims to solve all your problems or replace your entire team, but rather the opposite: it's about news that generates hype around questionable statistics. When I was creating this newsletter, I never thought this would be the first article... but that's what we're here for: to talk about AI without hype, wherever it comes from.

🎯 CORE INSIGHT

The MIT NANDA report's core problem isn't its conclusions about AI implementation challenges; it's the research methodology that makes those conclusions unreliable for business decision-making. When you examine how they arrived at that "95% failure rate," the foundation crumbles under scrutiny.

The researchers interviewed only 52 organizations, yet make sweeping claims about enterprise AI adoption across all major industries. For context, there are over 18,000 companies with revenues exceeding $100 million in the United States alone. This sample size creates what methodologists call a "generalizability problem." Imagine surveying 52 of your customers and declaring that 95% of your entire market is dissatisfied.

More concerning is their definition of "success." The study dismisses efficiency gains, risk reduction, and customer experience improvements as "zero return" unless they translate to immediate revenue increases. Under this framework, an AI system that reduces compliance violations or improves response times by 40% counts as a complete failure. This narrow definition ignores how most enterprise technology delivers value through operational improvements rather than direct revenue generation.

The study also suffers from selection bias. Organizations willing to publicly discuss AI implementation challenges may systematically differ from those achieving competitive advantages through AI. Companies protecting their AI-driven advantages logically avoid research participation, while those struggling are more willing to share frustrations.

Perhaps most telling is the institutional context. NANDA is fundamentally a cryptocurrency project with commercial interests in promoting "agentic AI" and blockchain-based solutions. Their conclusions conveniently position their proposed solutions as the answer to current AI failures, a classic conflict of interest that should trigger additional scrutiny.

However, the report's qualitative insights about implementation challenges remain valuable: poor workflow integration, insufficient executive sponsorship, and misaligned success metrics. These observations, supported by multiple independent sources, deserve attention regardless of the study's methodological limitations.

The real lesson here connects directly to the MOBILIZE phase of our MAPS Framework: successful AI implementation starts with defining clear, measurable business problems and establishing proper evaluation criteria before technology selection, not with accepting sensational research claims that might misguide strategic decisions.

Feel free to download the report from the link below and see what you think.

Best, Héctor

v0.1_State_of_AI_in_Business_2025_Report.pdf
