Betty's Blog

MIT Says 95% of AI Projects Fail. Here is the Hidden Roadmap to ROI

Written by Thomas Altman | Sep 16, 2025 4:22:40 PM

“AI projects are failing.”

Headlines like that are making the rounds across different platforms, but how true is it? And what is the real story?

MIT recently published a report concluding that 95% of generative AI projects fail to achieve the desired level of ROI. Everyone read the headline and got spooked. But that is not the real story. (The full report is available at https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf for anyone interested in taking a deep dive.)

The real story from this MIT study is a roadmap to high-ROI generative AI projects, and that roadmap is hiding in plain sight.

The MIT study looks negative at first glance, but the deeper read is clear. High ROI comes from systems that learn, remember, and work inside real workflows. The paper even says it outright: “The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.” That is the playbook.

This piece takes a deeper, more nuanced dive. We unpack what the study really shows and how we translate it at Betty. Four ideas drive results: memory over model, agentic orchestration, domain depth, and the decision to partner rather than go it alone. If you want measurable outcomes, this is where to start.


It’s Not the Model, It’s the Memory

One of the things the MIT report highlights is that when AI projects fail to achieve the desired level of ROI, the cause is rarely the AI itself. Nothing about the model alone determines the ROI. The same AI can deliver ROI in one project and fall flat in another.

The report puts it clearly: “The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.” (Executive Summary, p. 3–4)

What changes outcomes, what makes a project successful, is whether the AI project is rooted in feedback. If it is designed to incorporate feedback and adapt to the specific needs of an organization, or even a specific workflow within that organization, then it tends to deliver high ROI.

For associations, this is especially relevant. Each association has its own unique workflows. Each one has its own institutional knowledge that needs to be applied. While associations all produce valuable content, the interpretation, the use, and the storage of that content is very specific to each organization. And a lot of the most important context is not written down anywhere. It floats in the air among the staff and volunteers. Coaching up an AI to absorb that context and apply it to workflows is where ROI comes into play. That is what makes for a successful AI project.

At Betty, this is exactly what we do. From day one, we built Betty to be coached. When you start with a fresh instance of Betty, she does a good job, but she does not do a perfect job out of the box. The real design is that people can provide feedback. They can say, “Actually, this document needs to be interpreted this way, not that way.” Or, “This practice is not documented anywhere, but it is extremely important to respond correctly.” Betty incorporates that feedback, absorbs it, and uses it to understand the very niche particularities of each association.

That is the difference. Betty takes in feedback, turns it into memory, and applies it back into purposeful, valuable member engagement. That is where Betty shines, and that is exactly what the MIT report identifies as the missing ingredient in most failed AI projects.
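To make the feedback-to-memory pattern concrete, here is a minimal Python sketch. All names here (`MemoryStore`, `record_feedback`, the grace-period example) are illustrative assumptions, not Betty's actual API or data; the point is only the loop: a human correction is persisted, then fed back into the model's context the next time a related question comes in.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy store of staff corrections, keyed by a topic keyword."""
    notes: list = field(default_factory=list)

    def record_feedback(self, topic: str, correction: str) -> None:
        # Persist a human correction so later answers can use it.
        self.notes.append((topic, correction))

    def relevant(self, question: str) -> list:
        # Retrieve corrections whose topic keyword appears in the question.
        q = question.lower()
        return [c for t, c in self.notes if t.lower() in q]

def build_prompt(question: str, memory: MemoryStore) -> str:
    # Feed remembered corrections back into the model's context
    # so the same mistake is not repeated.
    lessons = memory.relevant(question)
    context = "\n".join(f"- {l}" for l in lessons) or "- (none yet)"
    return f"Organization-specific guidance:\n{context}\n\nQuestion: {question}"

memory = MemoryStore()
memory.record_feedback(
    "dues", "Lapsed members keep benefits for a 90-day grace period."
)
print(build_prompt("How do dues renewals work for lapsed members?", memory))
```

A production system would use embeddings or retrieval rather than keyword matching, but the shape is the same: the memory, not the model, is what makes the second answer better than the first.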


Agentic AI, Orchestration, and Memory

One of the findings from the MIT paper is that the projects that do deliver ROI are the ones where AI is used as part of a workflow in an orchestrated way, and in a way that can persist memory of previous interactions. The report makes the point directly:

“Agentic AI, the class of systems that embeds persistent memory and iterative learning by design, directly addresses the learning gap that defines the GenAI Divide. Unlike current systems that require full context each time, agentic systems maintain persistent memory, learn from interactions, and can autonomously orchestrate complex workflows.” (p. 14)

At the end of the day, this means starting with orchestration. For an AI project to deliver high ROI, it needs a deep understanding of the workflow and the job to be accomplished. It is not just about throwing AI at a problem or dumping it into ChatGPT. It is about intentionally designing a stepwise process: do this, then that, and if you encounter this, go back to step two. These larger, intentional processes define what orchestration means. When generative AI projects are orchestrated this way, with memory incorporated so the system learns and adapts, they deliver success.
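The "do this, then that, and if you encounter this, go back to step two" idea can be sketched as an explicit step sequence with a loop-back rule. The step names, retry rule, and failure behavior below are all illustrative assumptions, not a prescribed design:

```python
def run_workflow(steps, max_retries=2):
    """Run named steps in order; on failure, loop back to an earlier step.

    `steps` is a list of (name, func, retry_target) tuples, where func
    returns True on success and retry_target names the step to revisit
    when it fails. Illustrative only.
    """
    index = {name: i for i, (name, _, _) in enumerate(steps)}
    i, retries = 0, 0
    log = []
    while i < len(steps):
        name, func, retry_target = steps[i]
        log.append(name)
        if func():
            i += 1
        elif retries < max_retries and retry_target in index:
            retries += 1
            i = index[retry_target]  # "if you encounter this, go back to step two"
        else:
            raise RuntimeError(f"workflow stalled at {name}")
    return log

# Toy steps: "classify" fails once, sending the workflow back to "gather".
state = {"attempts": 0}
def gather(): return True
def classify():
    state["attempts"] += 1
    return state["attempts"] > 1
def respond(): return True

print(run_workflow([("gather", gather, None),
                    ("classify", classify, "gather"),
                    ("respond", respond, None)]))
# → ['gather', 'classify', 'gather', 'classify', 'respond']
```

The value of making the process explicit like this is that each step becomes a place to attach memory, validation, and human feedback, rather than hoping one big prompt gets everything right.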

Memory matters because feedback and corrections need to persist, but that is only part of the picture. The real power comes when an AI system can use those memories to shape the context of its work. Powerful AI responds to the context that it is in. And associations each have their own unique contexts. Every association has its own mission and values. Its members bring diverse needs and biases to each interaction. Associations themselves are complex, and a generic AI that treats them all the same will always fall short.

This is where context engineering comes in. An agentic system does not just recall information, it creates context on the fly by pulling from memories, organizational content, prior feedback, and its developing understanding of both the member and the organization. It is able to say, “For this problem, I need this piece of feedback, that article, this stored memory of how we handled a similar request before.” It blends these sources together in real time to deliver value in the moment. For associations, that means more relevant answers, better engagement, and higher ROI. By delivering more value back to members, the association increases its own value as well. That is the engine that drives successful projects across the divide.
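The blending step can be sketched too: pull candidate snippets from several sources (feedback, articles, stored memories), score them against the question, and keep the best few for the model's context. The source names, scores, and snippets below are hypothetical, assumed only for illustration:

```python
def assemble_context(question, sources, budget=3):
    """Blend snippets from several sources into one context block.

    `sources` maps a label (e.g. "feedback", "articles", "memories")
    to a function returning (score, snippet) pairs for the question.
    The highest-scoring snippets across all sources are kept, up to
    `budget`. Scoring and names are illustrative assumptions.
    """
    candidates = []
    for label, fetch in sources.items():
        for score, snippet in fetch(question):
            candidates.append((score, label, snippet))
    top = sorted(candidates, reverse=True)[:budget]
    return "\n".join(f"[{label}] {snippet}" for _, label, snippet in top)

# Hypothetical sources, each with pre-scored snippets for the question.
feedback = lambda q: [(0.9, "Route certification questions to the education team.")]
articles = lambda q: [(0.7, "2024 certification handbook, section 3."),
                      (0.2, "Annual meeting recap.")]
memories = lambda q: [(0.8, "Member asked about recertification last month.")]

print(assemble_context("How do I recertify?", {
    "feedback": feedback, "articles": articles, "memories": memories,
}))
```

In a real system the scores would come from retrieval (embedding similarity, recency, source trust), but the design choice is the same: context is assembled per question, so two associations asking the "same" question can get very different, and correctly different, answers.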

At Betty, we have always seen Betty not just as AI, but as an architecture, a solution designed to help associations manage, organize, deliver, and even create new information, content, and knowledge. From the beginning, we identified a key problem associations face: the knowledge management lifecycle. By intentionally orchestrating Betty to solve that problem, and by making sure she brings in context developed and learned over time, Betty stays focused. She does not try to boil the ocean. Instead, Betty zeroes in on what associations need most, delivering value by being deeply dialed in to the workflows that matter.


Domain Expertise Drives the ROI

The third finding from the report is that higher ROI depends less on the AI itself and more on domain understanding of the problem being solved.

Generative AI projects tended to have higher ROI when they were built not just by AI specialists or an IT team skilled at software engineering. No hate on those people at all. They are a critical part of the process. But when the work is led by domain experts, that is when you see higher ROI. This also relates back to orchestration. How do you know what the high-value workflow is? It is not the AI that will determine that. It is the people who are in it every day, who understand what needs to be done and where the high-value workflows actually are. When you intentionally focus on the domain you are solving for and build to enable those workflows, you get better outcomes.

The study makes this buyer preference explicit. Executives emphasize a vendor who understands their workflow and improves over time, with minimal disruption and clear data boundaries. Their words are pretty direct: “Most vendors don’t get how our approvals or data flows work.” “It’s useful the first week, but then it just repeats the same mistakes.” “Our process evolves every quarter. If the AI can’t adapt, we’re back to spreadsheets.”

So why does that lead to value? The reason is that AI itself is a tool. And like any tool, it can be used well or it can be used poorly. Deciding what you want it to do only leads to value when the person, the human being, understands where the value is in the first place. There are many ways to use AI and many workflows you can orchestrate with it. But it only creates ROI when, from the beginning, a domain expert can clearly see where value can be created and then orchestrate toward that vision.

This is exactly what Betty does. Betty was built from the ground up by association people for association people. We understand the value proposition of knowledge management and the knowledge that associations create. Our goal is to enable two things. First, the retrieval and dynamic understanding of that knowledge, with adaptability back to the association context. Second, to solve the problem that comes with creating so much value. Associations produce so much trusted knowledge that it often becomes hard to find and navigate. Betty learns who members are, what they are coming for, and how to match them with the most relevant content. At the same time, Betty understands the content itself and adapts to new knowledge as it comes in. By mapping members, content, and association context together, Betty delivers real engagement and measurable ROI.

And we know this works because we are association people. We have seen the value of solving this problem firsthand. Betty exists to do just that, built by association people for association people, to drive engagement through the knowledge associations create.


Buy vs Build, Partner vs Go It Alone

The fourth point from the MIT paper is about strategy. Organizations are more likely to succeed, about two times more likely, when they partner with a trusted AI vendor rather than going it alone. The study makes it clear:

“Strategic partnerships achieve roughly double the success rate of internal builds, as enterprises benefit from vendor experience, existing solutions, and faster time-to-value.”

In practice, we see this play out all the time. Groups that decide to build on their own often hit the same stumbling blocks that a good vendor encountered years ago and has already solved. A strong partner has not only worked through those issues but has also seen dozens, maybe hundreds, of other challenges that an internal team has not faced yet but inevitably will. When you go it alone, you are forced to relearn those lessons the hard way.

It is often easy to get 75 percent of the way there. You can spin up a proof of concept, connect to some data, maybe build a minimum viable product. But that last stretch, the final two or three miles to get from “works in a demo” to “ready for production, stable, and valuable”, is the hardest by far. That is where many in-house projects stall out. We have seen it repeatedly. A year later, they still do not have a working product, and not only that, they have spent more time, more effort, and often more money than they would have spent simply partnering with the right vendor from the start.

The other piece is focus. When you choose a good partner, you are choosing a team whose entire job is to run, maintain, and continuously improve the product. This is not one of twenty competing priorities. It is the priority. In an IT shop, even if the internal team is brilliant, generative AI will always be one project among many. Once the MVP is shipped, ongoing maintenance and iteration compete for time with other tickets, other systems, and other fires. That is a manageable risk for a stable technology. But AI is not stable. It is evolving daily. Keeping up with changes requires a team that is dedicated 100 percent of the time to that task. Without that focus, the project drifts, and the value never compounds.

That is why buy vs build matters so much. With a trusted partner, you inherit solutions to the hard problems, you reduce risk, and you get faster time-to-value. More importantly, you ensure the product evolves as fast as the technology itself does.

At Betty, this is exactly what we do. This is 100 percent of our focus. We are always combing the latest research. We are always in conversation with the association community, learning where the problems lie. And we sit at the intersection of those two things. On one side, AI changes every day. What is possible today might not work tomorrow. On the other side, the needs of associations are also shifting as members, staff, and leaders learn how to use these tools. Having a team committed to both is critical.

That is what Betty delivers. We keep pace with the technology. We keep pace with the association community. And we grow with both. Every resource, every bit of time, every ounce of energy at Betty is dedicated to understanding how these two forces come together and making sure you always have the right solution at the right time. We can do that because we do not do anything else. We only think about associations and how AI can serve them. That is why Betty is the partner that can turn AI into measurable ROI.


Conclusion

The study is not a warning against AI. It is a roadmap for what works. Systems that learn and remember win. Orchestration across real workflows wins. Domain expertise that knows where value lives wins. The right partner doubles your chances of getting there.

That is the path we follow at Betty. We capture feedback, turn it into memory, and apply it where members feel the value. We orchestrate end to end so context carries through every step. We bring association depth so the work lands on day one.

Ready to turn the roadmap into ROI? Schedule a short demo focused on one high value workflow you care about most.