Rethinking Validation
Anyone going from zero to one in product development will inevitably face the challenge of getting feedback on their product idea. The conventional wisdom calls this "validation research"—typically involving user interviews, focus groups, and low-fidelity prototypes in Keynote or Figma. The underlying questions seem straightforward: Are we on the right track? Is this what people want? Will they use it?
But here's where I've seen countless teams stumble. The very language we use—"validation"—sets us up for failure. It sounds a lot like unit testing for code: let's run this part and see if it works. But what do we really expect? To throw an endless stream of solutions at the wall until one sticks?
The term "validation" is both misleading and limiting. It creates unrealistic expectations for clear "yes" or "no" signals from users, fosters attachment to initial ideas, and fundamentally misses the point of early-stage research. Instead, I prefer calling this crucial phase "concept testing"—a term that establishes a more neutral, curious mindset focused on exploration rather than confirmation.

It's About the Problem, Not Your Solution
Imagine you've built wireframes for a short video editing tool aimed at content creators. It's tempting to dive in with questions like: "Does my product resonate with video creators?" or "Which features do they like or dislike?"
While these inquiries have their place, they're missing the forest for the trees. The directional confidence you're seeking doesn't come from gauging reactions to your current solution—it comes from a deep understanding of the human problem you're trying to solve.
This is where most teams get it wrong. Even though you might think you know a lot about video creators' problems pre-MVP, recognize that you're still early in the game. Your view of their problems is likely still at ten thousand feet, and the hypothesis you built your MVP on may well turn out to be wrong. Most people know they'll need to pivot the solution a few times, but in many cases you'll need to pivot the key problem you're solving many times too.
So pivot from solution-oriented questions to problem-oriented ones:
- "What is the workflow for video creators to create short-form videos?"
- "How do they decide on different toolkits to use?"
- "What are the barriers and pain points commonly experienced when making short videos?"
Think of your concepts as triggers, backdrops, invitations—artifacts you use intentionally to elicit reflection and recall of people problems so they can be contextualized and made concrete.
Pro tip: Broaden your line of inquiry. Don't just ask about short-form videos—ask about long-form videos and textual content too. Ask how video creators manage their time day to day. Ask for concrete stories of successful or unsuccessful videos to understand the deeper emotional drives. This is usually where truly innovative ideas come from.
The 50/50 Rule
This doesn't mean you can't learn about your prototypes. In my experience, it's useful to structure concept tests according to the 50/50 rule: spend at least half of each session understanding fundamental people problems—their concrete behaviors, needs, motivations—and let the remaining time center on your prototypes.
These two halves reinforce each other beautifully. The first 50% sets the context to fully understand people's reactions in the latter 50%, while prototype testing helps you understand problems better. When participants see concrete designs, their minds light up. It triggers recall of situations, frustrations, and coping strategies they might not have remembered otherwise. You now have a container for all the fundamental insights people discussed.
This ratio will evolve as you progress from your first-ever concept test (the "directional bet" stage) to mid-stage design refinement, to the pre-ship "make the design right" phase. A Keynote presentation with big empty color blocks might suffice for directional betting, but you'll need a functioning prototype with real data for later stages.
"Falsify" Over "Validate"
When executing research, adopt a proactive mindset to "falsify" yourself instead of "validating." This isn't just about design or wireframes—concept tests shed light on business strategy, marketing, positioning, and sales processes too.
Instead of asking leading questions like "How would you use this product?" or "Is this something you'd pay for?", challenge yourself to explore the other side:
- "Even though this concept seems useful, are there any barriers to adoption?"
- "Would you need to switch from an existing solution? What concerns might you have?"
- "Is the reason people aren't doing X purely because of missing tools, or are there other factors?"
The goal isn't to seek validating evidence—it's to paint a rich picture showing the nuances and complexity of human decisions and mental models. We're exploring not just ideas, but their boundaries and associated risks.
Beyond Yes or No: The Art of Contextual Synthesis
Here's where many teams expect concept testing to provide clear-cut signals. They want to know: "Does the concept resonate?" or "Which features do people like?"
If you've found concept testing insights ambiguous or difficult to act on, consider whether you had such binary expectations. The reality is that a handful of positive or negative reactions cannot definitively answer whether you're on the right track. True validation requires a holistic approach combining user research with market analysis, design iterations, and behavioral observations over time.
Instead of seeking simple answers, focus on what user research does best: uncovering nuances and contexts in which your product might succeed or fail. Rather than cataloging likes and dislikes, pay attention to contextual differences:
- How do novice creators differ from established ones?
- How might features need adaptation for fitness creators versus news creators?
- What distinguishes TikTok-primary users from Instagram-primary ones?
You're dealing with multiple strategic questions at this stage—evolving product directions, shifting problem definitions, fluid target markets. This is why you need expansion before contraction, where contextualized insights become invaluable.

The Real Secret Weapon
Having something concrete to present in user research is indeed a milestone. But the real good news isn't that we can throw this onto the wall hoping it sticks, or expect magic feedback giving us unwavering confidence that "this is it."
The real good news is that you finally have a secret weapon to make research conversations—and your investigation of human problems—ten times more effective.
In the early 0→1 phase, your MVP or design concept isn't the focal point—it's a research tool. Running concept tests is fundamentally about expansion, not contraction. The 50/50 rule maintains this balance, ensuring we don't lose sight of user needs while exploring solutions. By adopting a "falsification" mindset, we challenge assumptions and push understanding boundaries. Most importantly, synthesis should never reduce to simple "yes" or "no" answers—it should paint a rich, multifaceted picture of your product's potential across various user segments and contexts.
Even though building is more tempting, the truly important task in the early stage is thinking through which battlefield to pick. Consider concept testing not as reactive validation attempts, but as continuous discovery. Open it up and broaden your line of inquiry before you commit and build.