Prototype improvement
Learning Prioritization Canvas
Helps teams decide which questions, assumptions, or hypotheses matter most right now. Instead of testing everything at once, it creates focus by making importance, uncertainty, and evidence visible, so learning efforts are intentional and decision-driven.
Why use this tool?
Focus learning where it actually moves decisions
After ideation or early testing, teams often face too many open questions. This tool helps you step back and ask: What must we learn next for this idea to move forward (or stop)? It reduces wasted testing and aligns the team on what really matters.
What you should know
Start With: A concept, solution, or direction with open questions or assumptions
End With: A prioritized set of learning questions or hypotheses to test next
Time Needed: 30 – 60 minutes
Difficulty: ⭐⭐⭐☆☆ (3 out of 5 – requires discussion and synthesis)
People: 2 – 6 core team members (optional facilitator)
A quick guide to start
1. Gather your open questions. List the assumptions and unknowns behind your concept.
2. Make them explicit. Phrase them as hypotheses or learning questions.
3. Choose a prioritization lens. Importance, evidence, risk, or dependency.
4. Discuss and map. Use evidence from research, tests, or experience.
5. Select priorities. Focus on what blocks decisions or progress.
6. Define next steps. Decide what to test, explore, or pause.
Helpful tips
- This tool prioritizes learning, not ideas or solutions.
- Revisit it often; priorities change as evidence grows.
- If everything feels important, ask: “What would block us from moving forward?”
Sub-Techniques for Learning Prioritization
Use these when you need to decide what to test, validate, or explore next. You can apply one or combine several, depending on where you are in the process.
Importance × Evidence Matrix
When to Use It
Use when you want to decide which assumptions or hypotheses are most critical to test first, especially before designing experiments.
What It Is
A 2×2 matrix that maps assumptions based on:
- How important they are for the idea to work
- How much real evidence you already have
High-importance / low-evidence items become top priorities.
How to Run It
- Write assumptions or hypotheses on sticky notes.
- Ask: How important is this for success?
- Ask: How much evidence do we have?
- Place each item on the matrix.
- Circle the top-right quadrant (high importance, low evidence).
Example
“We assume users trust this recommendation.” → very important, little evidence → test first.
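The quadrant logic above can be sketched in a few lines of Python. The assumption names and 1–5 scores below are hypothetical; a team would supply its own:

```python
# Hypothetical assumptions, each scored 1-5 for importance and existing evidence.
assumptions = {
    "Users trust this recommendation": {"importance": 5, "evidence": 1},
    "Users will switch from their current tool": {"importance": 4, "evidence": 4},
    "The team can ship a pilot in a month": {"importance": 2, "evidence": 2},
}

def quadrant(scores, threshold=3):
    """Map importance/evidence scores onto the 2x2 matrix."""
    important = scores["importance"] >= threshold
    evidenced = scores["evidence"] >= threshold
    if important and not evidenced:
        return "test first"      # top-right: high importance, low evidence
    if important and evidenced:
        return "monitor"         # important but already supported
    if not important and not evidenced:
        return "park"            # low stakes, no data yet
    return "deprioritize"        # well-evidenced but unimportant

for name, scores in assumptions.items():
    print(f"{quadrant(scores):14} | {name}")
```

Anything labeled "test first" corresponds to the circled top-right quadrant.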
Risk-Based Question Ranking
When to Use It
Use when you have many open questions and need to sequence learning over time.
What It Is
A ranking exercise that orders questions based on learning risk and dependency.
How to Run It
- List 6–10 learning questions.
- Discuss which ones:
- Could kill the idea if false
- Must be answered before others
- Are expensive to get wrong
- Rank them from highest to lowest priority.
Example
“Will users pay?” ranked above “Which pricing tier works best?”
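The ranking discussion can be made concrete by scoring each question on the three criteria above and sorting by the total. The questions and 1–5 scores here are hypothetical placeholders:

```python
# Hypothetical learning questions, scored 1-5 on each risk criterion.
questions = [
    ("Will users pay?",
     {"kill_risk": 5, "blocks_others": 5, "cost_if_wrong": 5}),
    ("Which pricing tier works best?",
     {"kill_risk": 2, "blocks_others": 1, "cost_if_wrong": 3}),
    ("Do users prefer dark mode?",
     {"kill_risk": 1, "blocks_others": 1, "cost_if_wrong": 1}),
]

def learning_risk(scores):
    """Total learning risk: could it kill the idea, does it gate other
    questions, and how expensive is it to get wrong?"""
    return sum(scores.values())

# Highest-risk questions first.
ranked = sorted(questions, key=lambda q: learning_risk(q[1]), reverse=True)
for question, scores in ranked:
    print(f"{learning_risk(scores):2} | {question}")
```

The numbers matter less than the conversation they force; equal totals are a cue to discuss dependencies explicitly.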
Desirability / Feasibility / Viability Lens
When to Use It
Use when teams are over-focusing on one dimension (usually desirability).
What It Is
A way to cluster learning questions across the three classic design thinking lenses.
How to Run It
- Label questions as Desirability, Feasibility, or Viability.
- Check for imbalance (e.g., all desirability questions).
- Prioritize across lenses, not within just one.
Example
Desirability validated → shift focus to feasibility risks.
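The imbalance check above amounts to counting questions per lens and flagging lenses with no coverage. A minimal sketch, with hypothetical question tags:

```python
from collections import Counter

LENSES = ("desirability", "feasibility", "viability")

# Hypothetical learning questions, each tagged with one lens.
questions = [
    ("Do users want proactive alerts?", "desirability"),
    ("Would users recommend this to a colleague?", "desirability"),
    ("Can we deliver alerts in real time?", "feasibility"),
]

counts = Counter(lens for _, lens in questions)
for lens in LENSES:
    print(f"{lens:12} {counts[lens]}")

# Lenses with zero questions signal a blind spot.
missing = [lens for lens in LENSES if counts[lens] == 0]
if missing:
    print("No questions cover:", ", ".join(missing))
```

In this example the check would surface viability as the neglected lens.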
Learning Status Mapping
When to Use It
Use after running tests to decide what still needs validation.
What It Is
A lightweight status check on assumptions based on current evidence.
How to Run It
- List assumptions again.
- Mark each as:
- Untested
- Partially validated
- Strongly supported
- Invalidated
- Decide which ones still block decisions.
Example
Feature appeal validated → usability still unclear → next test focuses on flow.
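The status map can be kept as a simple dictionary; filtering for statuses that still block decisions yields the next test candidates. The assumptions and statuses below are hypothetical:

```python
# Hypothetical assumptions and their current learning status.
statuses = {
    "Feature appeal": "strongly supported",
    "Usability of the main flow": "untested",
    "Willingness to pay": "partially validated",
    "Works offline": "invalidated",
}

# Statuses that still block a confident decision.
STILL_BLOCKING = {"untested", "partially validated"}

next_to_test = [a for a, s in statuses.items() if s in STILL_BLOCKING]
print("Still blocking decisions:", next_to_test)
```

Invalidated items leave the test queue too, but as input to a pivot-or-stop conversation rather than another experiment.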
RACU meets AI
Test Card
How Can AI Make RACU Easier?
AI can be your creative partner and research assistant, ready to help you move faster and think deeper at every step of the RACU process.
For each RACU tool, we’ll share a ready-to-use AI prompt. Just copy the prompt into your favorite AI tool (like ChatGPT or Copilot) and it will guide you through the method step by step.
The AI becomes your facilitator, asking the right questions so you can build your thinking as you go. No need to fill out a blank form; the prompt starts the conversation and adapts to your answers in real time.
PROMPT – COPILOT, CHATGPT
You are a facilitator helping me complete a Research & Discovery Card for a design thinking challenge.
Guide me step-by-step by asking the following questions one at a time, and wait for my answer before moving on. You can ask follow-up questions if needed to clarify or improve my responses.
Start with general context:
1. What is the challenge, project, or topic you’re working on? (Briefly describe the scope or goal.)
Then go into Research (existing data):
2. What existing information do we need to gather to better understand this challenge?
3. Where can we get that information? (e.g., internal reports, dashboards, previous research, public sources)
4. What specific questions will this data help us answer?
5. Who on the team will be responsible for gathering this information?
Then move to Discovery (new research):
6. Who should we learn from? (e.g., users, clients, collaborators, stakeholders)
7. Where can we find or reach them?
8. What topics, needs, or behaviors should we explore in the research?
9. What discovery methods could work best for this challenge? (Examples: interviews, shadowing, observation, journaling, immersing yourself in the experience, etc.)
10. How many people should we involve or study?
11. When will this research happen?
12. Who on the team will lead or coordinate this discovery work?
At the end, summarize my answers as a Research & Discovery Plan with two sections:
- Research (existing data)
- Discovery (new fieldwork)
Use bullet points and keep it simple enough to copy into a worksheet.