Usage-Based Comp, AI, and the End of the SDR? (Tyler Will, VP of RevOps @ Intercom)
Tyler Will on rebuilding Intercom’s go-to-market engine for the AI era
Most revenue leaders think RevOps maturity means specialization. Comp analysts. Dashboard builders. Deal desk processors. Clean swim lanes, narrow mandates, expertise in one domain.
Tyler Will is betting in the opposite direction. At Intercom, he’s watching the “file a ticket, get a dashboard” model die in real time. When leaders can query Snowflake from Claude Code in three minutes, the specialists who only knew how to execute are exposed. What survives is the ability to ask the right question — to show up to the forecast meeting with a point of view, not a notepad.
AI doesn’t eliminate junior work and preserve senior judgment. It eliminates answers and elevates questions. The RevOps generalist, once seen as a phase you grow out of, is becoming the endgame.
Tyler is VP of RevOps at Intercom, where he’s spent the last three years rebuilding the go-to-market engine for the AI era. Before that: six years at LinkedIn across RevOps and strategy, and five years as a management consultant at Bain. He’s seen revenue operations from every angle — strategy, execution, and now transformation.
Intercom has become a case study in AI-native go-to-market. They launched Fin, their AI agent, and shifted from seat-based to resolution-based pricing. But the product transformation forced something harder: rewiring comp plans, post-sales motions, and the entire RevOps operating model. Tyler has been in the middle of all of it.
This conversation covers the tactical details most AI discussions skip — how comp plans actually change for usage-based models, why post-sales teams are growing instead of shrinking, and what it means to instrument a revenue system when the old metrics no longer apply.
1. The RevOps Specialist Is an Endangered Species
The classic RevOps org chart is built on specialization. Comp analysts who own comp. Dashboard builders who own BI. Deal desk processors who own approvals. It mirrors the Frederick Taylor model of scientific management — break work into discrete tasks, assign specialists, optimize each function independently.
Tyler is watching that model collapse.
“The ability to ask the right question suddenly becomes way more valuable. It’s always been valuable, but now that’s the thing that’s going to keep you employed and busy — versus ‘I can follow a process and do it accurately.’”
The catalyst is self-service. Tyler queries Snowflake directly from Claude Code — questions that used to require a ticket, a dashboard build, and a week of waiting now take three minutes. His SVP of Sales found $9 million in miscategorized pipeline the same way. (It was a Claude hallucination. But the speed of discovery — and correction — would have been impossible a year ago.)
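The kind of check a leader can now self-serve is simple enough to sketch. Below is a minimal, hypothetical Python version: the field names, stage labels, and amounts are all invented for illustration, and in practice the equivalent query would run against Snowflake rather than an in-memory list.

```python
# Hypothetical sketch: flag open pipeline whose close date has already passed.
# All field names and sample figures are invented for illustration.
from datetime import date

pipeline = [
    {"opp": "A-100", "stage": "Closed Won", "close_date": date(2026, 9, 1), "amount": 4_000_000},
    {"opp": "A-101", "stage": "Commit",     "close_date": date(2024, 1, 15), "amount": 5_000_000},
    {"opp": "A-102", "stage": "Commit",     "close_date": date(2026, 3, 1),  "amount": 250_000},
]

def miscategorized(rows, today):
    """Open-stage opportunities whose close date is already in the past."""
    return [r for r in rows if r["stage"] != "Closed Won" and r["close_date"] < today]

stale = miscategorized(pipeline, today=date(2026, 1, 1))
total = sum(r["amount"] for r in stale)
print(f"{len(stale)} stale opportunities worth ${total:,}")  # 1 stale opportunity, $5,000,000
```

The point isn’t the code — it’s that a question like this no longer needs a ticket and a week of waiting.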
This doesn’t eliminate the analytics team. It changes what they do. McKinsey’s 2024 research on AI in the workplace found that knowledge workers spend 60% of their time on tasks that AI can now accelerate or automate — data collection, processing, and basic synthesis. What remains is judgment, context, and business acumen.
“They actually have to be closer to the business. There’s no longer a ‘file a ticket, we’ll build a dashboard, here you go.’ It’s them getting embedded and coming to the forecast call and hearing ‘here’s a thing we’re concerned about’ — and being able to dig into that and write a great analysis.”
The RevOps analyst who shows up to take notes is already redundant. The one who shows up with a point of view is irreplaceable.
The shift: From “I know how to build this” to “I know what question to ask and what the answer means for the business.”
2. AI Literacy Is an Asymmetric Bet
Jason Lemkin argued in a recent episode that CROs need to implement at least one AI tool end-to-end — not delegate it, do it themselves. Tyler doesn’t go quite that far, but he lands closer to Lemkin than to the “my RevOps team will figure it out” camp.
“If you sort of draw the pendulum — he’s at the extreme of ‘got to do it yourself’ to ‘can be completely ignorant of what’s going on’ — I fall probably two-thirds of the way toward Jason’s position.”
The logic is asymmetric risk. Being at 95% AI fluency beats 75% even if 85% is technically optimal. The downside of over-investing is wasted time. The downside of under-investing is existential.
“There are plenty of things in life where you could be off by 20% either way and it kind of doesn’t matter. This one is like — you’re probably better at 95% than you are at 75% if 85% is the right answer.”
This echoes what Ethan Mollick calls the “jagged frontier” of AI capability — the boundary between what AI does well and what it doesn’t is irregular and constantly shifting. Leaders who stay close to that frontier learn where to trust AI and where to override it. Leaders who delegate entirely never develop that judgment.
For Intercom specifically, AI fluency isn’t optional — they’re selling an AI product. A CRO who can’t explain how Fin works, what RAG means, or how the ATTD flywheel (analyze, test, train, deploy) operates will “look pretty stupid” in front of customers.
But even for companies not selling AI, Tyler sees the same dynamic playing out internally. The leaders who understand their tools — Clay, Gong, Claude — will instrument their revenue systems in ways that leaders who delegate simply can’t imagine.
The frame: AI literacy isn’t a setting to optimize. It’s a bet where the penalty for undershooting is getting lapped.
3. The Post-Sales Motion You Knew Is Gone
Here’s a narrative violation: Intercom sells an AI product designed to automate customer support. Their post-sales team is growing.
The shift from seat-based to resolution-based pricing broke the old CSM playbook. Tyler describes the old world with a kind of nostalgia:
“There’s something almost quaint at this point about selling credit seats. They provision it to everybody, assign a bunch of users, run a tutorial. ‘How about it?’ And you call us if you need anything.”
That motion assumed a ceiling — X seats, fixed price, predictable usage. The CSM’s job was adoption and retention. Expansion happened at renewal.
Resolution-based pricing inverts the model. Customers start small — maybe 15% of their support volume routed through Fin. The CSM’s job becomes continuous expansion: unlock the next tier, add the next channel, build the next procedure. Every increase in usage is both a success outcome and a commercial event.
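The commercial mechanics can be illustrated with rough numbers. This sketch assumes a flat per-resolution price and invented volumes — Intercom’s actual pricing and tiers may differ — but it shows why every coverage increase is also a revenue event.

```python
# Hypothetical sketch of resolution-based revenue. The per-resolution price
# and support volumes are illustrative assumptions, not Intercom's actual figures.
PRICE_PER_RESOLUTION = 0.99  # assumed flat rate

def monthly_fin_revenue(support_volume, coverage):
    """Revenue when `coverage` (0-1) of support volume is resolved by the AI agent."""
    resolutions = support_volume * coverage
    return resolutions * PRICE_PER_RESOLUTION

start = monthly_fin_revenue(50_000, 0.15)  # customer starts at 15% coverage
later = monthly_fin_revenue(50_000, 0.60)  # CSM unlocks more channels and procedures
print(f"start: ${start:,.0f}/mo, later: ${later:,.0f}/mo")  # start: $7,425/mo, later: $29,700/mo
```

Under a seat model, that 4x usage growth would be invisible until renewal. Under resolutions, it shows up in the invoice — which is exactly why the CSM’s job becomes continuous.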
“There’s much more of a continuous learning and continuous engagement that is needed. It becomes both a success motion and engagement motion — getting people comfortable with that. But then it becomes a commercial conversation again.”
Gainsight’s 2024 Customer Success Index found that usage-based models require 40% more customer touchpoints than seat-based models to achieve the same net revenue retention. The motion is heavier, not lighter.
Intercom responded by splitting roles and creating new ones. Pre-sales and post-sales SEs are now separate specializations. They built an R&D Services team (think forward-deployed engineers) to help customers implement sophisticated use cases. CSMs are being upskilled to be more technical and more commercially aware.
“We’re asking more from certain roles. We’re splitting roles, creating new roles. That’s all a product of the nature of this AI product.”
The insight: AI doesn’t eliminate customer success. It transforms it from reactive support into continuous commercial engagement.
4. Comp Plans That Reward Usage, Not Just Contracts
When Intercom shifted to resolution-based pricing, their comp plans broke immediately.
The problem: customers would commit to a small Fin contract, then blow past it in actual usage — entering “overages” or pay-as-you-go territory. None of that incremental revenue hit quota. So reps did the rational thing: they chased every overage customer, trying to get them to add $500 or $1,000 to their contract. Terrible experience for customers. Waste of rep time.
“As soon as somebody would go into overages, they’d run them down and go, ‘Put another hundred Fin resolutions a month onto a contract.’ That wasn’t a great experience for anybody.”
Tyler’s team rebuilt comp in phases.
Phase 1: Credit for overages. Reps got quota credit for contracted amounts plus the MRR of non-contracted usage. They didn’t annualize it (to avoid rewarding seasonal spikes), just gave monthly credit. This stopped the overage-chasing behavior while rewarding reps who drove actual usage.
Phase 2: Pure MRR accumulation. The account management team moved to a model where they accumulate the MRR of all their accounts over the comp period. A solid contract flows for six months; you get credit each month. Usage goes up, you benefit. Usage drops, you feel it.
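The two phases reduce to simple arithmetic. Here is a minimal sketch with invented numbers; the shapes follow Tyler’s description (monthly overage credit, not annualized; MRR summed across the comp period), not Intercom’s actual plan documents.

```python
# Hypothetical sketch of the two comp phases described above.
# All figures are invented; only the mechanics follow the description.

def phase1_monthly_credit(contracted_mrr, actual_mrr):
    """Phase 1: quota credit = contracted MRR plus any non-contracted (overage)
    usage, credited monthly rather than annualized to avoid rewarding seasonal spikes."""
    overage = max(0, actual_mrr - contracted_mrr)
    return contracted_mrr + overage

def phase2_accumulated_credit(monthly_mrrs):
    """Phase 2: accumulate each month's MRR across the comp period.
    Usage goes up, the rep benefits; usage drops, they feel it."""
    return sum(monthly_mrrs)

print(phase1_monthly_credit(contracted_mrr=1_000, actual_mrr=1_400))  # 1400
print(phase2_accumulated_credit([1_000, 1_200, 1_400, 1_100, 1_300, 1_250]))  # 7250
```

Note what Phase 1 removes: the rep no longer needs to force the overage onto the contract to get paid for it, which is exactly the behavior Intercom was trying to stop.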
“It’s been interesting. It’s complicated. I don’t not recommend it, but I also am not like, ‘We found the answer, everyone should do this.’ It’s a little messy at times.”
The messiness is the point. OpenView’s 2024 SaaS Benchmarks found that companies with usage-based pricing models change their comp plans 2.3x more frequently than seat-based peers. The business model requires iteration.
Tyler’s comp principles survived the chaos:
Two metrics max (maybe three for one plan)
Meaningful weights — no 5% slivers for pet projects
Instrumentable — reps can see their performance in real time
Explainable in five minutes on a whiteboard
“If we feel like one of those breaks, we’re going to go back to the drawing board.”
The principle: Comp plans for usage-based models must reward the behavior you want (driving adoption) without punishing customers for experimenting.
5. Success Metrics Without Input Metrics Are Just Noise
Tyler opened with a provocation: revenue leaders are obsessed with success metrics and “not rigorous enough on input metrics.”
The degenerate case is the CRO at SKO who tells everyone to “close more, bigger deals” and expects that to translate into behavior change on Monday morning.
“What am I supposed to do with that? You need to actually understand the whole system and how it works — be able to instrument it, monitor it, talk about it, and importantly, set targets.”
But the opposite failure mode exists too. Work backward far enough and you land on “people coming to work” as your input metric. That doesn’t drive outcomes either.
The art is finding the inputs that matter — the engagement points in the customer lifecycle that actually predict success. For Intercom’s account management team, that might be quarterly business reviews with the right customer segments. For hunters, it might be demos or qualified pipeline.
Research on Goodhart’s Law — “when a measure becomes a target, it ceases to be a good measure” — explains why this is hard. Push pipeline targets to individual reps and they’ll stuff junk into Salesforce. Monitor the same metric at the manager level with quality guidance, and you get accountability without gaming.
“Not only is it the metrics, but it’s the level of your org that you’re going to inspect them at that matters a lot — where you can get the goodness without getting some of that bad behavior.”
Tyler’s heuristic: find the minimum inputs for maximum signal. Like hitting a tennis shot — if you’ve pulled your opponent off the court, you don’t need to hit it 100 mph. Just put it in play and win.
“What is it we need to do to really understand our business — to not find ourselves in a spot where, ‘Oh, if only everybody had done this thing, we would have been fine,’ and it turned out nobody was doing it?”
The discipline: Instrument the system, not just the scoreboard. And know which level of the org should see which metrics.
The Bottom Line
The AI transformation isn’t one transformation. It’s a cascade — product changes force pricing changes, pricing changes break comp plans, comp changes reshape post-sales motions, and all of it demands a different kind of RevOps team.
Tyler’s Intercom is a preview of where B2B go-to-market is heading. Not because every company will sell AI products, but because the tools available to revenue teams are changing what work looks like. The leaders who self-serve on data will outpace the ones who wait for dashboards. The CSMs who drive continuous commercial engagement will outlast the ones who “call us if you need anything.” The RevOps generalists who ask the right questions will replace the specialists who only knew how to execute.
The question isn’t whether your revenue engine needs to change. It’s whether you’re rebuilding it now — or waiting until the gap is too wide to close.
And the clock is already running.
If this breakdown was useful, I’d really appreciate a rating or review on Apple Podcasts or Spotify. It helps more revenue leaders find these conversations.

