The Revenue Leadership Podcast

Lossiness: Why Your GTM AI Tool Feels “Close But Not Quite”

The hidden culprit behind why your AI implementations suck

Kyle Norton
Oct 27, 2025

I wanted to share an important insight that has been rattling around the back of my brain for a while, one I was only able to bring definition to quite recently.

I think one of the biggest reasons many GTM AI implementations fail is that they stack multiple layers of generative outputs on top of one another, which creates compounding lossiness.

We’ve all been there. You see a really slick demo of an AI product and rally the resources to run a proof of concept, and the results are cool but nowhere near good enough to put in front of reps or customers. The suggested deal strategy has some solid ideas, but 40% of it is a little off, too obvious, or not quite as good as what your reps would come up with. Or the generated emails are plentiful and decently personalized, but still not good enough to push to tens of thousands of prospects. There’s SO much potential there, and it feels like a few incremental improvements would get it over the line, but you just can’t get it across that imaginary threshold.

Lossiness Explained

Think about what happens when you photocopy a document, then photocopy that copy, then photocopy that one. Each generation degrades slightly—text gets fuzzier, details blur, contrast shifts. By the fifth or sixth copy, you’re looking at something barely readable.

This is lossiness. It’s the degradation of information quality that occurs each time data gets transformed, compressed, or regenerated.

In AI systems, lossiness happens because generative models don’t reproduce information; they approximate it. When an LLM generates text, it’s making probabilistic predictions about what comes next based on patterns it learned during training. It’s not retrieving perfect information; it’s creating a best guess. And every best guess introduces small deviations from perfect accuracy.
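
As a toy illustration (mine, not from any real model), here’s what “a best guess instead of a lookup” means in practice. The prompt, the vocabulary, and the probabilities below are all hypothetical, but the mechanism is the same: the model samples from a distribution over plausible continuations, so repeated runs can drift from the true answer.

```python
import random

# Toy next-token model: for a given context, the "model" only knows a
# probability distribution over plausible continuations. It never
# retrieves the single correct answer; it samples a guess.
NEXT_TOKEN_PROBS = {
    "Our churn rate last quarter was": {
        "4%": 0.60,    # the true figure
        "5%": 0.25,    # plausible, slightly wrong
        "down": 0.15,  # vague, technically defensible
    }
}

def generate(context: str) -> str:
    """Sample a continuation instead of looking up the exact fact."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    context = "Our churn rate last quarter was"
    # Run the same prompt several times: most outputs are right,
    # but a meaningful fraction deviate. That deviation is the loss.
    for _ in range(5):
        print(context, generate(context))
```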

Here’s what happens technically: an AI model takes input data, processes it through multiple layers of neural networks, and outputs something new (tokens). An LLM is a token-prediction machine, and knowledge is represented as those tokens. At each stage, the process is lossy: it compresses information through tokenization, makes approximations during generation, and introduces uncertainty. The model might be 95% accurate at each step, which sounds great. But here’s the problem: when you chain these steps together, the losses compound exponentially.
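
To make the compounding concrete, here’s a minimal back-of-the-envelope sketch. The 95% per-step figure and the four pipeline stages are illustrative assumptions, not measurements from any specific tool; the point is that chained fidelities multiply.

```python
# Back-of-the-envelope: how per-step fidelity compounds across a chained
# GTM AI workflow. The 0.95 figure is illustrative, not measured.
PER_STEP_FIDELITY = 0.95

# A hypothetical pipeline where each stage regenerates (not copies) its input.
pipeline = [
    "enrich account data",
    "summarize call transcripts",
    "draft deal strategy",
    "generate personalized email",
]

fidelity = 1.0
for step in pipeline:
    fidelity *= PER_STEP_FIDELITY
    print(f"after '{step}': ~{fidelity:.0%} of the original signal intact")

# Four chained steps at 95% each leaves roughly 81% of the signal:
# exactly the "close but not quite" feeling.
```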

Stacking Lossiness

Here’s where it gets dangerous for GTM teams.
