RunItBack Hanoi:
How a Wrong Hypothesis Led to the Right Question

Phase 1: User Research & Chatbot Prototype - In Progress

How the problem appeared in practice

Hanoi's basketball scene has grown over the past few years. There are now hundreds of courts across the city, along with established academies like HNBA and Thang Long Warriors. On the surface, it looks like access to basketball should be easy. But for many students, especially in central districts, playing regularly still isn't.

I live in Ba Dinh and played for Nguyen Trai High School. Organising competitive practice games was consistently difficult. Most of us were tied to school schedules: classes during the day, mock exams, and university preparation in the evenings. Our free time rarely aligned. Even when we managed to gather enough players, it was often close to 9 p.m., and the available courts were far away, sometimes nearly ten kilometers from where we lived.

At first, this felt like a personal inconvenience. Over time, I began to see it differently, not as a lack of courts, but as a coordination problem. The resources existed, but they were not reaching the people who needed them at the right time.

Who did what, and why it mattered

The AI Young Guru competition gave us a clear structure to examine the problem more seriously. I formed a team of three: myself, Long, and Ky Anh.

We divided responsibilities based on how each of us naturally worked.

Ky Anh

Team Lead

Ky Anh acted as the team lead. He kept track of timelines, coordinated tasks, and led our presentations. This helped the team stay organised and move forward steadily.

Nam (me)

Product Owner

I took on the role of product owner. I was responsible for defining the problem framing, leading the pivot from SmartCourt AI to RunItBack Hanoi, and deciding the overall direction of the chatbot approach.

Long

Tech Lead

Long focused on the technical side. He handled coding, early AI integration, and turning our ideas into a working chatbot prototype.

This division allowed each of us to contribute where we were strongest. More importantly, it shaped how we handled disagreement. When the data challenged our initial assumptions, having clear roles made it easier to question ideas without questioning people. Instead of defending what we had already built, we focused on adjusting the product direction based on what the evidence actually showed.

SmartCourt AI: the assumption we started with

At the start, I believed the problem was mainly about matching. If players could find the right court, at the right time, with others at a similar skill level, then games would naturally happen. This felt reasonable. It matched my own experience and fit neatly into a resource-allocation mindset.

I assumed that enough players already wanted to play. The real barrier, I thought, was coordination: aligning location, schedule, and skill level.

Hypothesis 1.0: SmartCourt AI

Assumption: There are enough players who want to play. The problem is matching the right court, time, and skill level.

Method: We designed a survey focused on preferences, where people liked to play, when they were free, how far they were willing to travel, and what level they felt comfortable playing at. Based on this, I started thinking about an AI-based matching model.

The early results seemed to validate this approach. When we asked players about their preferred courts and typical playing times, the responses formed clear patterns. Most players in Ba Dinh preferred certain courts, most were free on weekends, most wanted competitive matches. The survey data looked promising; we had identified clear preferences that an algorithm could match. What we missed was that these preferences meant nothing if players weren't actually available when a match was proposed. The survey confirmed our hypothesis, but only because we had designed it to do so.

Looking back, the gap is clear. I asked what people preferred, but not what they were actually able to do in real time. I didn't ask whether they were available that day, that week, or at all. That missing question would later force me to rethink the entire problem.
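
The gap between what we asked and what mattered can be made concrete. As a minimal sketch (the field names and players here are hypothetical, not from our actual survey), a preference question and a readiness question select very different candidate pools:

```python
# Hypothetical survey records: each player states a preferred slot,
# but only some are actually free when a game is proposed.
players = [
    {"name": "A", "prefers": "sat_am", "available_now": True},
    {"name": "B", "prefers": "sat_am", "available_now": False},
    {"name": "C", "prefers": "sat_am", "available_now": False},
    {"name": "D", "prefers": "sun_pm", "available_now": True},
]

def by_preference(slot):
    # What our original survey measured: who *likes* this slot.
    return [p["name"] for p in players if p["prefers"] == slot]

def by_readiness():
    # What it missed: who can actually commit right now.
    return [p["name"] for p in players if p["available_now"]]

print(by_preference("sat_am"))  # ['A', 'B', 'C'] -- looks like a game
print(by_readiness())           # ['A', 'D'] -- only two can show up
```

Matching on the first list produces games on paper; only the second list predicts whether anyone walks onto the court.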

The pattern we didn't expect

We conducted surveys during the school's Spring Festival and through basketball-related social networks, collecting responses from more than 150 students. Alongside the surveys, I spoke directly with many players. Those conversations began to challenge how I had framed the problem at the start.

When asked about their biggest frustrations, the answers were simple and consistent:

"Sân đẹp nhưng thiếu người."

(The courts were fine, there just weren't enough people to play.)

"Hẹn 6h, 6h15 mới đủ đội."

(We agreed on 6 p.m., but by 6:15 we still didn't have enough players.)

The data reflected the same pattern. 67% of games were cancelled because not enough people showed up, not because the matchups were poor. 89% of respondents said that coordinating through group chats was their single biggest source of frustration.

The pattern became clear through direct conversations, not just statistics. When I asked players why games kept failing, the answers pointed to something different from what I expected. It wasn't about finding better matches. It was about decision-making at the moment of commitment. Players needed to know: Who else is going right now? Is the court actually available today? Can I commit in the next 30 minutes, not next week?

This was the variable our survey missed: user readiness. We measured preferences, but not real-time availability. We designed for optimization, but people needed agency at the point of decision: the ability to see current options and commit immediately, without depending on a prescheduled match.

The Missing Variable

One thing was missing from our survey: user readiness. We asked who wanted to play, but not who was actually available to play at a given moment. That gap affected how we interpreted the data and forced us to rethink the direction of the project.

From SmartCourt AI to RunItBack Hanoi

The data forced me to rethink the problem. It wasn't about a lack of courts or imperfect skill matching. It was about coordination friction: how hard it is to get enough people to show up at the same place, at the same time.

That shift led us to change the project name from SmartCourt AI to RunItBack Hanoi. The choice was intentional. "Run" is the word players actually use for a pickup game; it comes from the court, not from tech terminology. "Hanoi" keeps the project grounded in a specific place, rather than presenting it as a generic platform.

More importantly, the project direction changed as well.

Before — SmartCourt AI

Efficiency

Treated the problem as one of efficiency: if players were matched optimally, games would naturally happen.

After — RunItBack Hanoi

Friction

Treats the problem as one of friction: if coordination becomes simple enough, people are more willing to show up.

The data consistently supported the second framing.

The shift in framing changed what we were solving for. An optimization system assumes the problem is efficiency: finding the best match from available options. But if players aren't available when the match is made, efficiency is meaningless. A coordination system assumes the problem is friction: reducing the effort required to turn intent ("I want to play") into action ("I'm going now"). The chatbot addresses this directly: it lives inside existing conversations, shows who's available now, and lets players commit in real time without prescheduling or profile matching.
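
One way to see why friction dominates: treat each interested player's attendance as an independent commitment, so a run only happens if a quorum actually commits. A toy calculation (the numbers here are illustrative, not from our survey data) shows how sensitive the outcome is to each player's commitment probability:

```python
from math import comb

def p_game_happens(n_intenders, p_commit, quorum=10):
    # Probability that at least `quorum` of the interested players
    # actually commit, assuming independent commitments (binomial model).
    return sum(
        comb(n_intenders, k) * p_commit**k * (1 - p_commit)**(n_intenders - k)
        for k in range(quorum, n_intenders + 1)
    )

# 12 interested players, 10 needed for a full-court run. A modest rise
# in per-player commitment flips the game from unlikely to likely:
print(round(p_game_happens(12, 0.70), 2))  # 0.25
print(round(p_game_happens(12, 0.85), 2))  # 0.74
```

Under this toy model, an optimizer that finds two more "interested" players helps far less than a tool that makes each already-interested player slightly more likely to commit, which is exactly what the cancellation data suggested.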

Why a chatbot, not an app?

We chose a chatbot as the product format, and this choice followed directly from the same insight that led to the pivot. The main barrier was not a lack of information, but the effort required to organise a game. Group chats with 20 people often went quiet. Many games fell apart even when nine out of ten players had confirmed, simply because one person never replied.

A chatbot fits this situation because it operates where players already are. There is no new app to download and no new habit to build. A player can post a game, see who is available, and coordinate details within a conversation they are already using.

Product Direction: Chatbot

A conversational interface that allows players to announce pickup games, check real-time availability, and coordinate participation with less friction than a large group chat, and without the commitment required by a full standalone application.
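
The core coordination loop is small. Here is a minimal sketch of the logic such a chatbot could run inside an existing group chat (the class, court names, and quorum threshold are hypothetical, not our actual implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    """One announced pickup game, tracked inside an existing group chat."""
    court: str
    time: str
    quorum: int = 10                       # players needed for a run
    committed: list = field(default_factory=list)

    def join(self, player: str) -> str:
        # Record the commitment and answer with live status,
        # instead of letting a 20-person thread go quiet.
        if player not in self.committed:
            self.committed.append(player)
        missing = self.quorum - len(self.committed)
        if missing <= 0:
            return f"Run is ON at {self.court}, {self.time}: " + ", ".join(self.committed)
        return f"{len(self.committed)}/{self.quorum} in, need {missing} more for {self.time}"

run = Run("Nghia Tan court", "19:00", quorum=3)   # hypothetical court
print(run.join("Nam"))      # 1/3 in, need 2 more for 19:00
print(run.join("Long"))     # 2/3 in, need 1 more for 19:00
print(run.join("Ky Anh"))   # Run is ON at Nghia Tan court, 19:00: ...
```

A player posts a run, others reply that they are in, and every reply returns the current count, so the "who else is going right now?" question answers itself at the moment of commitment.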

What the pivot taught me about assumptions

The shift from SmartCourt AI to RunItBack Hanoi was more than a change in name; it reflected a change in how I understood the problem itself. I moved from starting with what technology could do, to paying closer attention to what users were actually struggling with.

Noticing the missing "user readiness" variable made this clear. Data does not automatically point to the truth. Its usefulness depends on what is measured and, just as importantly, what is left out. Our survey was carefully designed, but it was designed around an assumption that had not yet been tested. As a result, it collected precise answers to the wrong questions.

This project taught me to treat early hypotheses as provisional rather than fixed. When 67% of reported cancellations pointed to a problem we had not explicitly asked about, the appropriate response was not to defend the original framing, but to revise it. The data did not fail; our assumptions did, and the project improved once we acknowledged that.

Beyond the technical lesson about survey design, this experience taught me something broader: a system that looks efficient on paper can still fail if it doesn't reflect how people actually make decisions. Real-world behavior often doesn't match theoretical models. Players don't want the optimal match three days from now; they want to know if a game is happening tonight, in the next hour, with whoever is free. Designing for that reality, rather than for algorithmic elegance, made the difference between a product that solved the problem I wanted to exist and one that solved the problem people actually had.

Project Log: Learning Through Iteration

Week 1 (Jan 22): Team Formation
Formed team, registered for AI Young Guru. Agreed on topic based on personal experience with basketball coordination in Hanoi.

Week 2: Survey & Preparation
Designed and launched survey on Tally.so. Identified need for additional learning beyond competition resources. Began exploring supplementary materials independently.

Week 2-3 (Jan 31): Field Research
Conducted in-person survey at school Spring Festival. Received Coursera access confirmation from organisers the same day.

Week 3: Learning & Revision
Started Coursera learning track. Reviewed early survey responses. Identified limitation: survey did not capture user readiness dimension. Began revising.

Week 3-4 (Feb 15): Pivot Discovery
Direct user interviews revealed: problem is coordination (missing players), not court availability. 67% of games cancelled due to insufficient players. Rebranded from SmartCourt AI to RunItBack Hanoi.

Week 4+: Ongoing
Shifted focus to chatbot-based coordination. Data collection continues with revised framing. Currently building early prototype. Estimated completion: April 2026.

Project status: In Progress. Estimated completion: April 2026.

Project Images

Spring Festival survey at Nguyen Trai High School, Ba Dinh, January 31, 2026

Field research session where in-person conversations with basketball players revealed coordination challenges that shaped the project pivot.