Lessons Learned · March 18, 2026 · 8 min read

What 30 GTM AI Projects Taught Us About What Actually Works

After building AI systems for revenue teams across industries, patterns emerge. Here's the honest version of what we've learned.

We’ve built a lot of GTM AI systems. Call intelligence platforms, lead scoring models, outreach personalization engines, pipeline forecasting tools, CRM automation workflows. Different industries, different company sizes, different levels of internal technical sophistication.

At some point you stop being surprised by what breaks and start being surprised by what works. Here’s the honest version of what we’ve learned. The stuff that doesn’t usually make it into vendor case studies.

Data Quality Is the Problem 80% of the Time

You schedule the kickoff. The client has a clear problem. The business case is there. You start looking at the data and within a week you realize the project is actually a data quality project that happens to involve AI at the end.

CRM fields with no consistent values. Call recordings stored in three different systems depending on which rep made the call. Pipeline data that’s been migrated twice and has orphaned records from companies that were acquired. Email history that lives in personal Gmail accounts because someone didn’t set up GSuite correctly in 2021.

This is not an edge case. It’s the median state of a GTM data stack at a company that hasn’t had a dedicated revenue ops function for more than two years.

The implication: scope for data remediation. Not as a separate pre-project, but as a built-in phase of every AI engagement. If a vendor promises to have your call intelligence system live in two weeks, ask them what happens if your call data is in three different formats across two platforms. The answer will tell you a lot about whether they’ve done this before.
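To make "scope for data remediation" concrete, here is a minimal sketch of the kind of CRM field audit that first week usually starts with. The field name and record shape are hypothetical; assume records exported from the CRM as plain dicts:

```python
from collections import Counter

def audit_field(records, field):
    """Report fill rate and value spread for one CRM field.

    A low fill rate, or a long tail of one-off values, is the
    usual smell of a field with no consistent values.
    """
    values = [r.get(field) for r in records]
    filled = [v for v in values if v not in (None, "")]
    fill_rate = len(filled) / len(records) if records else 0.0
    counts = Counter(filled)
    return {
        "fill_rate": round(fill_rate, 2),
        "distinct_values": len(counts),
        "top_values": counts.most_common(3),
    }

# Hypothetical example: a "lead_source" field with free-text drift.
crm = [
    {"lead_source": "Webinar"},
    {"lead_source": "webinar"},
    {"lead_source": "Web-inar"},
    {"lead_source": ""},
]
report = audit_field(crm, "lead_source")
```

Three spellings of the same value and a 75% fill rate on four records is a toy case, but it is exactly the pattern that, at scale, turns an "AI project" into a remediation project first.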

The Best Signal Is Already in Your Stack, Unread

Almost every implementation we’ve done has had a version of the same moment: we connect to the call recording API, run the first pass of analysis, and surface something that’s been sitting in the data for months that nobody had seen.

A pattern of a specific competitor being mentioned in late-stage deals that were lost. A discovery question that, when asked in the first fifteen minutes, correlates strongly with closed-won. A stakeholder title that, when absent from early calls, predicts a stalled deal with high reliability.

None of this required new data collection. It was already there. It just required a system to read it.

The practical implication: before you invest in new data sources, instrument what you have. Your call recordings, your email threads, your calendar data, and your CRM history, properly analyzed, contain more signal than most teams realize. New data sources (intent data, technographic enrichment, buying signals) add value, but they’re additive on top of a foundation you probably already have.
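The "instrument what you have" step can be as simple as conditioning win rate on a signal you extract from existing call data. A hedged sketch, with a hypothetical `early_budget_q` flag standing in for "asked the discovery question in the first fifteen minutes":

```python
def win_rate_by_flag(deals, flag):
    """Compare closed-won rate for deals with vs. without a signal flag.

    `deals` is a list of dicts like {"won": bool, "early_budget_q": bool},
    where the flag would come from analyzing existing call transcripts.
    """
    def rate(group):
        return sum(d["won"] for d in group) / len(group) if group else 0.0

    with_flag = [d for d in deals if d.get(flag)]
    without_flag = [d for d in deals if not d.get(flag)]
    return rate(with_flag), rate(without_flag)

# Hypothetical deal history already sitting in the CRM.
deals = [
    {"won": True,  "early_budget_q": True},
    {"won": True,  "early_budget_q": True},
    {"won": False, "early_budget_q": False},
    {"won": False, "early_budget_q": False},
    {"won": True,  "early_budget_q": False},
]
with_rate, without_rate = win_rate_by_flag(deals, "early_budget_q")
```

A gap between the two rates is the starting point, not the conclusion; with real deal counts you would want a significance check before anyone changes their discovery script. The point is that the inputs already exist.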

Reps Adopt Tools That Make Them Look Good in Front of Prospects

Sales adoption is the graveyard of GTM AI projects. A tool that doesn’t get used is worse than no tool. It consumes budget, creates technical debt, and breeds cynicism about the next initiative.

The tools that get adopted have something in common: they help reps not get caught off guard. They surface what the prospect asked on the last call so the rep doesn’t have to ask again. They flag that the champion mentioned a timeline pressure the rep forgot to log. They remind the rep that this prospect’s CTO came from a competitor and might have preconceptions worth addressing.

In other words: the tools that work help reps have better conversations, not just generate better reports.

The tools that fail are the ones designed for managers, not reps. Dashboards that aggregate activity metrics. Scoring systems that tell a manager which deals are at risk without telling the rep what to do about it. Coaching tools that surface feedback in a format that feels like surveillance.

If your AI rollout is primarily useful to people who aren’t in the deal, your reps will find ways not to use it.

The Hardest Part Is Almost Never the AI

We are an AI shop. We like the AI part. But if we’re being honest about where projects succeed or fail, it’s rarely the model that’s the determining factor.

The projects that go well have a champion who understands what they’re asking for and has organizational authority to make adoption happen. They have a rev ops function (or equivalent) that can own the data infrastructure. They have sales leadership that communicates clearly about why the tool exists and what behavior it’s designed to support.

The projects that struggle have a CTO who wants to “do AI” without a clear GTM problem to solve. Or a marketing team that bought a tool without looping in sales. Or no one who owns the integration between the AI output and the existing workflow.

Change management is not a soft problem. It’s the hard part. The AI is, in many ways, the easy part.

Narrow Systems Outperform Platforms

Every category produces a platform player: a company that argues you should consolidate your GTM AI on their system. One vendor. One integration. One dashboard.

We’ve seen more value created by narrow, purpose-built systems than by platforms. A call intelligence tool that does one thing exceptionally, surfacing the three most important things from every discovery call directly inside Salesforce within ten minutes of the call ending, gets used every day. A platform that does fifteen things adequately gets used for the one feature that works best and ignored for everything else.

The economic argument for platforms is real: fewer contracts, less integration overhead, cleaner data flow. But the adoption argument cuts the other way. Reps don’t use platforms. They use tools that solve specific problems.

Our usual recommendation: start narrow, prove value, expand deliberately. Don’t consolidate until you know which tools are actually being used.

The ROI Is Usually Obvious in Hindsight, Invisible at the Start

Clients want to know the ROI before you build the thing. This is understandable and almost impossible to answer honestly.

What we’ve learned: the ROI on GTM AI is real, but it surfaces in unexpected places. The call intelligence system you built to reduce ramp time for new reps ends up having its biggest impact on pipeline visibility for managers. The lead scoring model you built to help prioritize outreach ends up being most valuable for identifying churn risk in the existing customer base.

The projects that succeed are the ones where leadership is willing to define a hypothesis, commit to measuring it for six months, and resist demanding a projected ROI before go-live.

GTM AI that moves pipeline is measurable after it’s deployed. The companies that figure this out get compounding returns. The ones that need a business case first, before seeing any results, often never get started. Their competitors, who did get started, take the market.

Want to talk through what this means for your pipeline?
We do this for a living. No pitch, just a conversation.
Get in touch