A Product-Informed Approach for GTM (Usha Iyer: Chief Customer & Growth Officer @ Hivebright)
Your Frameworks Are Making You Lazy: 5 Things Revenue Leaders Should Steal from the Product Playbook
Revenue leaders love frameworks. MEDDIC, BANT, Challenger, SPICED; we collect them like trading cards and deploy them like cheat codes. But frameworks are pattern-matching tools, and pattern matching fails when the pattern changes.
Product leaders operate differently. They start from scratch every time. First principles. No inherited assumptions. And it turns out that mindset is exactly what revenue orgs need right now, because the patterns are changing faster than any framework can keep up.
Usha Iyer has seen both worlds from the inside. She started her career writing C++ algorithms for demand planning, eventually ran a $350 million product portfolio at Honeywell, and now serves as Chief Customer and Growth Officer at Hivebright, where she owns GTM, customer success, revenue strategy, and the US market. She went from building product to selling it to being accountable for the value it delivers post-sale.
That full-journey perspective is rare. Most revenue leaders have never sat in a product review, and most product leaders have never had to explain a forecast miss to the board. Usha has done both. And the gap between how these two disciplines think about problems is where this episode gets interesting.
We talked about what happens when you bring a product mindset to the customer journey: how to do real discovery, how to run experiments that don’t blow up in your face, and how to connect daily activity to outcomes that actually move the business. Here are five things revenue leaders can steal from the product playbook.
1. Your Frameworks Are Making You Lazy
Usha’s first piece of advice for revenue leaders sounds almost too simple: think from first principles. When a problem lands on your desk, resist the urge to reach for a framework. Instead, try to understand what’s actually happening as if you know nothing about the field.
This is hard for experienced leaders. Daniel Kahneman’s work on cognitive heuristics in Thinking, Fast and Slow explains why. Our brains default to System 1 thinking — fast pattern recognition built from years of experience. It’s efficient, and it’s often right. But when conditions shift (new market, new buyer, new competitive dynamic), those patterns become liabilities. You’re solving last year’s problem with last year’s playbook.
Product leaders are trained to fight this instinct. Usha described it bluntly:
“When something comes to you as a problem, you don’t try to apply patterns right away or frameworks all the time. Try to understand what is the underlying problem as if you’re a complete layman.”
She gave three specifics on what this looks like in practice. First, go deeper in discovery. Don’t stop at “they want to run events.” Ask how long the event runs, how many attendees show up, what the agenda structure looks like. Second, be an expert in your own product — know what it can and can’t do, and be honest about both. Third, don’t try to “yes” your way through a deal. Saying no, or admitting you don’t know, builds more trust than bluffing.
I think the deeper lesson here is about intellectual humility. Product leaders think in hypotheses. Revenue leaders think in decisions. A hypothesis can be wrong and that’s fine, because the process is designed for iteration. A decision carries ego, and unwinding it feels like failure. I’ve watched leaders keep running an unsuccessful initiative for two or three quarters because reversing course felt like admitting they were wrong. Product teams kill experiments after two weeks and nobody blinks.
Paul Nutt’s research at Ohio State found that over half of organizational decisions fail, and the leading cause was premature commitment to a single solution before the problem was properly understood. Revenue leaders aren’t immune to this. We’re probably more susceptible, because we’re rewarded for speed and decisiveness.
2. Product Is Input-Driven. GTM Is Output-Driven. The Best Leaders Are Both.
This was the most interesting framing in the conversation. Usha’s CTO, Tharek, puts it this way: product is input-driven and GTM is output-driven. They’re constantly searching for equilibrium between the two.
Here’s what that means. A product team starts with a broad problem — say, “increase user engagement.” They don’t know what the solution will be. They run customer interviews, study the competition, analyze usage data, form a few hypotheses, build a proof of concept, test it with a small group, and let the output emerge from the process. They trust their inputs.
A revenue team starts with the output. Hit $10M this quarter. Work backward. How many deals, at what ACV, at what conversion rate, with how many reps? The entire operating rhythm is reverse-engineered from the number.
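That backward math is simple enough to sketch. All of the inputs below (target, ACV, win rate, per-rep capacity) are illustrative assumptions, not figures from the episode:

```python
# Reverse-engineering a revenue target into operating inputs.
# Every number here is a hypothetical for illustration.
import math

target = 10_000_000     # quarterly revenue goal ($)
acv = 50_000            # average contract value ($)
win_rate = 0.25         # opportunity-to-close conversion
deals_per_rep = 10      # deals one rep can close per quarter

deals_needed = math.ceil(target / acv)                 # 200 closed deals
opps_needed = math.ceil(deals_needed / win_rate)       # 800 qualified opportunities
reps_needed = math.ceil(deals_needed / deals_per_rep)  # 20 quota-carrying reps

print(deals_needed, opps_needed, reps_needed)  # 200 800 20
```

The point of the sketch is the direction of inference: every input is derived from the output, which is exactly the inversion of how a product team works.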
Neither approach is wrong. But Usha made a distinction that stuck with me: outcomes and outputs aren’t the same thing. In the engagement example, the outcome is increased engagement; the output is the specific feature that ships. You can reach the same outcome through different outputs, but only if you’re willing to let the process guide you there instead of locking in the answer before you start.
Teresa Torres’ Opportunity Solution Tree is the best visualization of this I’ve seen. You start with a desired outcome at the top, branch into opportunity spaces, then branch again into potential solutions. It forces you to map the full problem space before committing to any single path.
Revenue leaders can apply this directly. Take churn. The output-driven instinct says: churn is up, deploy a save playbook, run retention incentives, do more QBRs. The input-driven approach says: what data do I have on these churning customers? What patterns exist across use case, company maturity, champion engagement, integration adoption? Which segments retain best, and can I find customers who looked identical but churned anyway?
Usha nailed the practical version of this:
“Take your successful customers. Then go look at customers who look just like them but did not succeed. It becomes really easy to figure out what you need to double down on.”
That’s a control group. It’s how product teams test features. And revenue leaders almost never think this way.
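The look-alike comparison Usha describes can be sketched in a few lines: bucket customers by profile so you only compare true peers, then contrast a behavior (here, integration adoption) between retained and churned customers within each bucket. The field names and toy data are hypothetical:

```python
# Sketch of a matched look-alike comparison for churn analysis.
# Profile fields and sample data are illustrative, not real Hivebright data.
from collections import defaultdict

customers = [
    # (segment, company_size, adopted_integration, retained)
    ("events",  "smb", True,  True),
    ("events",  "smb", True,  True),
    ("events",  "smb", False, False),
    ("events",  "smb", False, True),
    ("courses", "mid", True,  True),
    ("courses", "mid", False, False),
]

# Bucket by profile so each comparison is between look-alike customers.
buckets = defaultdict(lambda: {"retained": [], "churned": []})
for segment, size, adopted, retained in customers:
    buckets[(segment, size)]["retained" if retained else "churned"].append(adopted)

def rate(xs):
    """Share of customers in a group that adopted the integration."""
    return sum(xs) / len(xs) if xs else None

for key, group in buckets.items():
    print(key, "adoption: retained", rate(group["retained"]),
          "vs churned", rate(group["churned"]))
```

Even at spreadsheet scale, this is the same move a product team makes when it compares a treatment group to a control group: hold the profile constant, vary one behavior, and see which side of the split retains.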
3. The Metric That Doesn’t Show Up in Your Dashboard
Usha made an observation that I keep coming back to: product teams can measure every adoption signal. Feature usage, login frequency, NPS, support tickets. What they can’t see is friction. And by the time friction shows up in a metric you’re tracking, the customer is already gone.
She framed it as a tax:
“Friction is like tax. You have to decide how much tax you want to pay for the money you make. At some point, it’s not worth it.”
Customers will tolerate friction as long as the value exceeds the cost. But that calculation is happening in their heads constantly, and you have zero visibility into it until they tip.
Robert Sutton and Huggy Rao at Stanford spent seven years researching this for their book The Friction Project. They found something that should worry every operator: humans are wired to solve problems by adding complexity rather than subtracting it. We instinctively add processes, steps, approvals, and check-ins. We almost never remove them. They call it “addition sickness,” and it means the friction in your customer journey is almost certainly growing, even if nobody is doing it on purpose.
Richard Thaler and Cass Sunstein coined a term for this: sludge. It’s the dark side of the nudge. One-click to subscribe, phone call during business hours to cancel. Every extra step between your customer and the value they’re trying to get is sludge, and it compounds silently.
Revenue leaders tend to think about friction in terms of sales process — too many steps in the buying journey, too many approvals, too long a procurement cycle. But Usha’s point is broader. Friction exists across the entire customer lifecycle, and the post-sale friction is what kills retention. She gave an example from Hivebright: customers running in-person events had to manually upload attendee lists through APIs after the event. Some were hiring temps to do the data entry. The product team couldn’t see this in any dashboard. It took direct customer conversation to uncover it.
Once they found it, they built a simple QR code check-in feature. Small fix. Massive friction reduction. And it opened an entirely new product surface area they hadn’t planned for.
Product leaders sometimes dismiss these as “quality of life improvements” — nice to have, low priority. I’ve been on the other side of that conversation and it drives me crazy. When you’re sitting across from a customer and they’re describing the workaround they’ve built just to use your product, that’s not a nice-to-have. That’s a retention risk wearing a polite face.
4. Your Worst Instinct Is to Roll It Out to Everyone
Here’s what usually happens. A revenue leader identifies a problem, develops a conviction about the fix, and deploys it across the entire team on Monday. New talk track? Every rep gets it. New qualification framework? Updated in the CRM by end of week. New tool? Org-wide rollout, mandatory adoption, let’s go.
Product teams don’t work this way. And there’s hard data on why they’re right.
Stefan Thomke at Harvard Business School studied experimentation culture across major companies. At Microsoft, when teams run experiments based on reasonable, well-thought-out hypotheses, only about one-third produce positive results. Another third are neutral. The final third actively make things worse. That means even smart, well-reasoned ideas fail two-thirds of the time.
If those odds apply to your revenue org (and they do), rolling out every initiative to every rep is essentially gambling your team’s productivity on a coin flip. Worse than a coin flip, actually.
Usha described the alternative: small team, manual process, prove it works, then operationalize.
“Take a hypothesis. Pilot it. Set up a SWAT team if you need to. Pull out your best sellers or your best account managers. Let them run with that experiment. Do it manually. If you have to run things in spreadsheets, it’s okay. Then learn from that and operationalize it at scale.”
We did exactly this at Owner. We wanted to test an AI-powered outbound workflow for BDRs. Instead of pushing it to 50 reps and hoping, we picked four. Our head of GTM AI and a biz ops lead flew to Toronto and sat next to them. They ran iteration loops hourly. Try 30 calls, rebuild the prompts, try 30 more. Within two days they’d improved per-rep output by 85%. The rest of the BDR team watched the booking numbers climb and started asking when they could get access. We rolled it out from a position of proof, not hope.
Usha had an even bigger example. When Hivebright acquired Orbit, she didn’t hand it to the sales team and say “go sell this too.” She stood up a three-person SWAT team, pitched the first deal herself, ran six months as an overlay team, picked up the first million in revenue, and only then rolled it into the broader org.
She also shared an experiment that didn’t work, which I think is equally instructive. She built a Level 2 support team to handle technical escalations. Three or four months in, she realized she’d just created more friction between Level 1 and Level 2. The real fix was upskilling the core team. She dismantled it and moved on.
That’s the whole point. Small experiments give you the permission to be wrong quickly and cheaply, instead of being wrong expensively across the whole org.
5. You’re Pitching Your Product Team Wrong
I asked Usha what advice she’d give revenue leaders who feel like their product team never listens to them. Her answer was immediate:
“Don’t come with the solution. Come with the problem you’re looking to solve. What is the pain point? What is the impact of not solving that problem? What is the opportunity cost? If you can articulate that clearly, your product leader will listen.”
This hits a nerve because I’ve been on both sides of it. Revenue leaders walk into product reviews with fully formed feature requests: “We need a Salesforce integration. Customers keep asking for it. Build it.” And then they’re surprised when the product team pushes back or deprioritizes it.
Thomas Wedell-Wedellsborg surveyed 106 C-suite executives across 91 companies and found that 85% agreed their organizations were bad at problem diagnosis, and 87% said that flaw carried significant costs. The issue isn’t that leaders can’t solve problems. It’s that they solve the wrong ones because they never properly defined the right one.
Usha put it simply: “You’re not the solutioning person. Be the problem person.”
She flipped it to make the point stick. Imagine a product leader walking into your forecast call and telling you how to pitch a deal. You’d be annoyed. You’d dismiss it. That’s exactly how product leaders feel when you show up with a spec.
The better approach: frame the customer pain, quantify the impact, estimate the opportunity cost of inaction, and then let the product team figure out how to solve it. Come with ideas, sure. But hold them loosely. More often than not, the product team will find a better, cheaper, faster solution than what you had in mind, because that’s what they’re trained to do.
Dwayne Spradlin’s research at InnoCentive backs this up across thousands of problem-solving challenges: “The rigor with which a problem is defined is the most important factor in finding a good solution.” The better the problem statement, the better the solutions it attracts. Vague asks get vague results. Precise problem framing gets creative, targeted answers.
This applies beyond the product relationship, by the way. It’s how revenue leaders should approach any cross-functional ask — finance, marketing, ops. Lead with the problem and the impact. Let the domain expert own the how.
The Real Unlock
Every one of these ideas comes back to one shift: slow down the diagnosis so you can speed up the execution.
Revenue leaders are built for action. We’re hired for it. The instinct to move fast, commit to a direction, and rally the team behind it is what makes great sales orgs great. But that same instinct, applied to the wrong problem or deployed without evidence, is how you end up three quarters deep into an initiative that never had a chance.
Product leaders aren’t smarter than revenue leaders. They just have a different default. They assume they don’t know the answer yet. They build small before they build big. They instrument everything so they can learn from what happened, not just celebrate or mourn the result.
You don’t need to become a product person. But borrowing these five habits will make you a sharper operator: think from first principles before reaching for a framework, trust your inputs as much as your outputs, go find the friction your dashboards can’t see, pilot before you deploy, and lead every cross-functional conversation with the problem, not the solution.
The leaders who figure out how to blend product discipline with revenue urgency are going to be very difficult to compete against. And the ones who keep pattern-matching their way through a market that’s changing underneath them are going to wonder why the old playbooks stopped working.
If you enjoyed this breakdown, I’d really appreciate a rating or review on Apple Podcasts or Spotify. It helps more revenue leaders find the show, and it only takes 30 seconds.