Benjamin Schwarz
Dec 5, 2025
After discussing where Pixop came from and how it works, I wanted to understand why it exists. Why does a team of developers in Denmark spend years building filters that make video look better? And what’s next?
So I asked Jon.
Real-Time Isn’t Just Speed. It’s Predictability.
Pixop processes live video in under a second. Why does that matter?
Jon explains: In live sports, latency affects everything from betting to remote commentary. A few hundred milliseconds? Fine. A 5-second delay? Unacceptable.
More importantly, it’s not just about being fast. It’s about being consistently fast. Pixop’s processing time varies by less than 10%.
In workflows where every component must stay in sync, that reliability is crucial.
Lightweight By Design
Pixop deliberately avoids heavyweight models. Why?
They’re harder to run in real time.
They cost more to deploy.
They burn more energy.
A lighter model means broadcasters can enhance video without a rack of GPUs. It’s about enabling quality without demanding scale.
AI Is a Tool, Not the Goal
Jon is clear: Pixop isn’t an AI company. It’s a video quality company that happens to use AI.
“We exist to make video more beautiful. That’s the mission. AI is just the way we do it.”
In 10 years, Pixop may use something entirely different. But the goal (clarity, colour, consistency) will stay the same.
The Sustainability Case for Centralised Enhancement
If 1 million 4K TVs each do their own upscaling, that adds up to 3–4 megawatts of extra power draw during a football match.
If Pixop enhances once, centrally, it uses a few kilowatts.
That’s a factor of 1000. It’s early days, but Pixop is exploring how to quantify this advantage, possibly with help from Greening of Streaming.
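For the numerically curious, here is a rough back-of-the-envelope sketch of that comparison. The per-TV figure is only inferred from the numbers above (3–4 megawatts spread over 1 million sets, so roughly 3–4 watts of extra upscaling load per TV), and the "few kilowatts" central figure is taken at face value; neither is a measured result from Pixop.

```python
# Back-of-the-envelope sketch of the centralised-enhancement argument above.
# Both inputs are assumptions inferred from the article's own numbers,
# not measurements.

tvs = 1_000_000            # 4K TVs upscaling locally during a match
watts_per_tv = 3.5         # assumed extra draw per TV for on-device upscaling
central_kw = 3.5           # assumed draw for one centralised enhancement pass

distributed_mw = tvs * watts_per_tv / 1e6             # total across all TVs, in MW
ratio = (distributed_mw * 1e6) / (central_kw * 1e3)   # distributed vs. central

print(f"Distributed upscaling:   {distributed_mw:.1f} MW")
print(f"Centralised enhancement: {central_kw:.1f} kW")
print(f"Ratio: ~{ratio:,.0f}x")  # roughly the factor of 1000 claimed above
```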
Funny Bugs and Monster Eyes
Early Pixop models had a bug where blurry crowd faces got ultra-sharp eyes. The effect was… unsettling. But it speaks to how AI behaves: it sees patterns, not meaning. Fixing it meant retraining with more context.

What’s Next: Diffusion Models and a Step Change in Quality
Jon is excited about diffusion models (the tech behind DALL-E and Sora). Early tests show they may dramatically improve frame interpolation and enhancement.
They’re not quite real-time yet. But once they are? Expect another leap.
A Moonshot Question
If Pixop had infinite resources, what would Jon do?
“I’d hire more developers. It’s not just about shipping more features, but also about experimenting more.
Some of the best ideas come from curiosity. Infinite resources would mean the freedom to tinker, like in the early days.”
More filters, more control, more quality. That’s the dream.
So What Does “Good Video” Mean to Jon?
“Clarity,” he says. “I’m a bit of a high-def nerd.”
It’s that simple. And that complex.
Closing Thought
Pixop isn’t chasing buzzwords. It’s quietly, thoughtfully helping video look better, in real time, at scale, with purpose.
No magic. Just a bit of maths. A lot of clarity. And a CTO who still loves pixels like he did at 14.
Next up: The Deep Tech
In the following piece, we’ll bring Jon Frydensbjerg back, this time joined by independent expert Tania Pouli, for a deep dive into how Pixop’s technology really works under the hood. Expect kernels, graphs, pipelines, and maybe even a secret sauce or two. Stay tuned.






