It seems everyone's talking about AI transformation right now as if it's some kind of single finish line. But here's the truth: AI isn't a project, it's a capability.
The organizations that are going to succeed with AI aren't the ones that build the most models or automate the fastest. They're the ones that learn the fastest, and that's exactly where agility becomes your strategic advantage. We're used to doing this; we solved this problem a long time ago. Today I want to show you how agile strategy cycles can help your teams test, learn, and adapt AI investments before they turn into million-dollar mistakes.
Hi, I'm Lance Dacy, otherwise known as Big Agile. And over the last few years we've seen billions of dollars poured into AI projects, and some of them never made it past a pilot.
The Problem: Strategy Without Adaptability
We find that companies are throwing money, good and bad, at this new space at will because of the hypercompetitive nature of the markets. I've seen big retailers investing in customer prediction models that never launched. I've seen banks, some of our most risk-averse clients, hiring data scientists faster than they can define the problems those models are supposed to solve. I've seen manufacturing firms building digital twins that no one ever used or, worse, that caused serious safety problems because no one thought to provide oversight and governance for how those models operate.
So the problem wasn't the technology, even though it's all new and we're still learning how to use it. It was strategy without adaptability. These organizations adopted AI well before they understood why or how it would actually create value. It's what we used to do in product development too. So if you've done this before, you aren't alone. It's normal.
What I'd like to do is unpack the idea of an agile strategy cycle to give leaders a way to test, learn, and adapt AI investments before they turn into these million-dollar mistakes. And surprisingly, it turns out that agile is the solution. We've been doing it a long time; it's the mindset and the way we approach the work. Why have we regressed? I don't know.
The Myth: AI Transformation Is Something You Can Complete
I want to start with a myth: that AI transformation is something you can complete. Have y'all ever heard the idea that an agile transformation is done? I'm drawing the parallel here. AI is this new thing people are trying to adopt and learn, but most organizations are approaching it like a big waterfall project.
It's got a big roadmap, it's got fixed milestones, huge budget approvals, and a finish date. That sounds good on a slide deck. Sound familiar?
Now, I think that's because AI work is very costly, so organizations go all in and secure all the money up front. I've spent decades trying to unravel that myth, yet here we are once again. No one really knows what to do, and we want to be first and best, rightfully so.
What Really Happens
But here is what really happens, and I want to share with y'all some real stories. By the time some of these models are even deployed, the market's changed. We've learned that before in product development. By the time you've trained your team on the data pipelines and everything we built, that data's obsolete; we're tracking something else.
Now that's because AI learning curves move faster than our typical corporate planning cycles. Agile strategy can replace these 18-month plans with rolling learning loops, run quarterly, so that every quarter our teams can ask: What did we learn? What assumptions were wrong? Where do we pivot or scale next?
Too often we want to scale first, to do all the big stuff first. You'll see the parallel: we've done this in product development for years and we've been unraveling it, but now something new has come along, and AI is one of the biggest game changers the industry has seen in a long time. But I want you to remember small incremental development. Does that sound familiar? There's a reason we want inspection early: it minimizes drift.
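To make that quarterly loop concrete, here's a minimal sketch in Python. The class names, fields, and decision rule are my own illustrative assumptions, not an established framework; the point is simply that each review answers the three questions above and ends in an explicit decision.

```python
# Minimal sketch of a quarterly learning loop expressed as data.
# Every field and the decision rule are hypothetical, for illustration.
from dataclasses import dataclass, field

@dataclass
class QuarterlyReview:
    learned: list = field(default_factory=list)            # What did we learn?
    wrong_assumptions: list = field(default_factory=list)  # What assumptions were wrong?
    decision: str = "continue"                             # pivot | scale | continue

    def decide(self):
        """Tiny decision rule: wrong assumptions force a pivot before scaling."""
        if self.wrong_assumptions:
            self.decision = "pivot"
        elif self.learned:
            self.decision = "scale"
        return self.decision

q1 = QuarterlyReview(
    learned=["churn model works for retail accounts"],
    wrong_assumptions=["assumed clean CRM data; 30% of records were stale"],
)
print(q1.decide())  # wrong assumptions surfaced, so we pivot rather than scale
```

The useful part isn't the code; it's that the loop forces an explicit pivot-or-scale decision every quarter instead of letting the plan coast.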
Reframing AI Projects as AI Capability Systems
And so I want to talk about reframing AI projects as AI capability systems, treated like products. Here's where I think most leaders get stuck: they run AI projects instead of treating them the way we treat products in product development. It's the old projects-versus-products distinction.
We need to think of AI projects as capability systems. Product people do this as well (well, many of them still do). Remember what a project is: a project ends. In AI, the work doesn't end when the model ships; the capability keeps evolving because it's designed to learn.
So I'd like to think of a capability like infrastructure:
- People who know how to experiment responsibly
- Built-in teams who can validate data and bias before scaling
- Decision-making at the last responsible moment, based on that data: just in time, just enough
So we find that works in product development. Having a strategy cadence can help you find out what's not working early enough to prevent tragic consequences.
Why Leaders Fear Experimentation
I think most leaders are afraid to let teams experiment because the risks if they fail could be really big. So rightfully so, they're kind of hesitant to allow that. But if we do small, steady experiments like we do in product development, I feel like that gives us insight earlier and helps us course correct before we drift too far into a huge problem.
So I find teams can even become more confident in those experiments because they learn in small incremental steps. That's what I call agility at the strategic level. It's not chaos; it's controlled experimentation: roadmap planning like we'd do for a product, but planning for capabilities.
Learning From Big Company AI Failures
Now, I want to give you a few examples from big companies you've probably heard of, whose failures you may not have seen. And I don't know that I would even call them failures; they're failure learning. These companies are big enough to absorb the money they spent. If we made the same mistakes ourselves, it would be very detrimental to the health of our business. So let's learn from them a little bit.
IBM Watson Health
The first one I want to tackle is IBM Watson. There are a lot of articles about this, but in short: IBM poured billions into building Watson Health, an ambitious AI meant to help revolutionize healthcare diagnosis and decision making.
You can read plenty about why it failed: hospitals didn't have the data maturity or the workflow integration to make it useful, and the system couldn't adapt to real clinical variability.
The lesson: with iterative feedback loops, we could have seen the gap between the product and the practitioners much earlier, and realized that even the best models fail. AI needs co-evolution, not a waterfall deployment. We would've been able to see how it was performing in a clinical setting. Of course that's hard too: HIPAA and other medical compliance requirements probably held them back. So they went all in, and it's IBM, so they could afford to tackle bigger chunks, but it didn't go well for them.
Zillow Offers
The next one I want to talk about is Zillow. Anybody ever heard of Zillow? Zillow had a product called Zillow Offers, which used AI algorithms to predict home prices and power home-buying operations at scale. Right now, if you want to sell your home or find its value, you typically have to pay 800 bucks for an appraisal or get a real estate agent to run a comparative analysis against the MLS.
So we don't always have that at our disposal, and I see the real estate industry being heavily disrupted by AI. We'll rely on agents less as information containers and let the computers do that, but agents will still be the relationship builders for what is a big decision.
Now, Zillow failed because these models were trained on pre-pandemic market patterns, before 2020, and they couldn't adapt when the housing market shifted. Zillow ended up losing over $500 million and laid off 25% of its workforce.
The lesson: prediction without adaptability is just speculation. An agile strategy cycle might have surfaced those issues earlier, with feedback at the local level before scaling the model nationally. That's a great lesson: we could have worked in smaller chunks.
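Here's a minimal sketch of what "surfacing it earlier" can look like in practice: a rolling backtest that re-fits on a recent window and flags when forecast error suddenly degrades. The data and the trailing-mean "model" are made up for illustration; this is not Zillow's system.

```python
# Illustrative rolling backtest: flag when a model's error degrades
# after a regime shift. The "model" is a trailing-mean forecaster and
# the price series is invented, purely to show the feedback loop.

def trailing_mean_forecast(history):
    """Predict the next value as the mean of recent history."""
    return sum(history) / len(history)

def rolling_backtest(series, window=4, error_threshold=0.10):
    """Walk forward through the series, re-fitting on each window.
    Return the indices where relative forecast error exceeds the
    threshold (a signal to pause and inspect before scaling)."""
    alerts = []
    for t in range(window, len(series)):
        forecast = trailing_mean_forecast(series[t - window:t])
        rel_error = abs(series[t] - forecast) / series[t]
        if rel_error > error_threshold:
            alerts.append(t)
    return alerts

# A stable market, then a sudden shift starting at index 8.
prices = [300, 302, 301, 303, 302, 304, 303, 305, 380, 385, 390, 395]
alerts = rolling_backtest(prices)
print(alerts)  # → [8, 9, 10]: the first alert lands right at the shift
```

A check like this running per local market would have fired the moment predictions went stale, long before a model trained on old patterns was scaled nationally.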
Amazon's AI Recruiting Tool
How about another company? Y'all are probably familiar with Amazon. They built an internal AI recruiting tool that a lot of people may not know about: an AI candidate-screening tool. It failed because the model learned biases from historical hiring data and would downgrade resumes containing the word "women's." Think about "women's coding club": if I had that on my resume, the tool would downgrade it, just because of historical hiring data that nobody had cared to examine.
The lesson: agile experimentation would've caught that bias signal way earlier through the feedback loop, because AI needs social and ethical learning cycles (I've blogged about this before), not just static automation. What is this data coming in? Are we scrutinizing it, and is it biasing our decisions too much? Garbage in, garbage out is a big problem here. So there's another example of a big organization, Amazon, building a bias out of its own data.
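A bias check like the one that was missing can be surprisingly small. Here's a sketch that compares average model scores for resumes with and without a flagged term; the scores, resumes, and threshold are hypothetical stand-ins, not Amazon's actual system.

```python
# Illustrative bias audit: compare average screening scores for resumes
# that do and don't contain a flagged term. All data is invented.

def audit_keyword_bias(scored_resumes, keyword, max_gap=0.05):
    """scored_resumes: list of (resume_text, model_score) pairs.
    Returns (gap, passed): gap is the mean score of resumes WITHOUT
    the keyword minus the mean score of those WITH it. A large
    positive gap means the model penalizes the keyword."""
    with_kw = [s for text, s in scored_resumes if keyword in text.lower()]
    without_kw = [s for text, s in scored_resumes if keyword not in text.lower()]
    gap = sum(without_kw) / len(without_kw) - sum(with_kw) / len(with_kw)
    return gap, gap <= max_gap

# Hypothetical scores from a screening model.
scored = [
    ("captain, women's chess club", 0.45),
    ("women's coding club organizer", 0.40),
    ("chess club captain", 0.80),
    ("coding club organizer", 0.85),
]
gap, passed = audit_keyword_bias(scored, "women's")
print(round(gap, 3), passed)  # → 0.4 False: a 0.4 gap fails the audit
```

Run as a gate in every iteration, even a crude audit like this surfaces the signal before the tool ever screens a real candidate.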
Google Duplex
Another one you may have heard of is Google. Google unveiled a product called Duplex that could make natural-sounding phone calls and schedule appointments, and it generated a lot of hype. But adoption stalled, and it basically failed because, while the technology worked (this was Google, after all), they underestimated the social and regulatory complexity of automating human communication, much as Amazon did with the bias in its hiring data.
The lesson: AI strategy must include organizational sensemaking and social and policy adaptation loops, not just technical iterations. We focus too much on the technical.
GE's Predix Platform
The last one I want to talk about is GE. Anybody ever heard of GE? One of the largest companies in America, and it built an industrial AI platform called Predix, investing heavily in it. Y'all ever have platforms in your product development? Build this big platform?
Predix was a platform to optimize equipment maintenance at GE. It failed because it was designed as a monolithic platform instead of a modular system that could evolve with user feedback or adapt to the specific types of machines being maintained. The company later scaled it back and learned from that, probably saving quite a bit of money.
The lesson: agile architecture, or emergent architecture as we call it, and feedback-driven strategy are essential. Not every AI product needs to be a platform before it's built. Start small and modular, and eventually it can grow into a platform that other things can extend and use.
AI Without Agility Becomes Fragility
As we go through this week of talking points, remember these aren't necessarily stories of bad technology, even though there's some of that in there. They're stories of organizations that moved faster than they could learn. They were really good at moving fast, but they didn't build in learning as they went.
I've mentioned some really big names here, and I'd imagine they have the money to absorb that kind of failure. Some of us don't. Each one proves the same lesson: AI without agility becomes fragility. We've heard that wordplay before: agile, fragile.
Now, the organizations that are winning in 2025 are the ones turning AI into a capability, not a campaign or a project.
What's Coming This Week: Measuring What Matters
So as I blog this week, we're going to talk about measuring what matters, and I find that's where agility shines: in how we measure success. I encourage you to follow our blog this week, subscribe to our newsletter (we'll send an email), and connect with us on LinkedIn, where we have more information.
We've done a pretty good job in product development to date, but AI and capabilities may need different metrics, and we're still learning what those are. I want to share some I've come across, and I'd love to hear what y'all have.
Agility Is the Control Mechanism
The last thing I want to talk about is how agility is the control mechanism. That sounds like a paradox, or an oxymoron. But agility is the control mechanism because AI brings uncertainty inherently, by design. The more we automate decisions, the more we need oversight. Visibility, iteration, and feedback are required because we have to inspect and prevent drift.
And so that's why agility isn't chaos; it's the control system. A lot of people think agile is chaos. No. It's what prevents AI experiments from turning into uncontrolled complexity that drifts away from what we're actually trying to accomplish.
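That "inspect and prevent drift" idea has a standard, mechanical form. Here's a minimal sketch of drift monitoring using the Population Stability Index (PSI), which compares the distribution of a model's recent inputs against its training baseline; the data and the thresholds (0.1 and 0.25 are common rules of thumb) are illustrative.

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# training-time baseline and production data. Data and thresholds are
# made up; in practice you'd run this per feature on a schedule.
import math

def psi(baseline, current, bins=4, eps=1e-6):
    """PSI between two samples of a numeric feature. Near 0 means the
    distributions match; large values mean the inputs have drifted."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # eps smoothing avoids log(0) on empty bins
        return [(c + eps) / (len(sample) + bins * eps) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [10, 11, 12, 13, 10, 11, 12, 13]  # feature values at training time
stable   = [10, 11, 12, 13, 11, 12, 10, 13]  # production, same distribution
drifted  = [18, 19, 20, 21, 18, 19, 20, 21]  # production after a shift

print(psi(baseline, stable) < 0.1)    # → True: no action needed
print(psi(baseline, drifted) > 0.25)  # → True: trigger an inspection
```

This is the control system in code form: a cheap, automated inspection that turns "the model quietly drifted" into an explicit signal a team can act on.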
In the age of AI, agile strategy isn't really about predicting the future. In product development we used to try to predict what the customer wanted before they needed it, so we could be first to launch. AI has a similar problem, except we're not predicting the future like product features. It's about learning fast, learning fast enough to thrive with the new technology. So that's our focus.
Join Us This Week for AI Strategy Meets Agility
As we close up, I'd like to say: join us this week. We're going to unpack how to make this real, from shifting AI projects into adaptive capabilities to using agile governance that balances speed and safety. We know how to do it; we're just not used to applying it to brand-new technology that's giving us angst.
So I encourage you to subscribe and follow Big Agile for the rest of this week's series, which we're calling AI Strategy Meets Agility. The goal is to learn faster than the rate of change. Join me on that journey.
Ready to master agile strategies for AI transformation? Explore the comprehensive classes that Big Agile offers to help your organization learn faster, adapt smarter, and turn AI investments into sustainable competitive advantages.