GO/ON (Hackathon)
2-day hackathon language-practice prototype. Scenario-based conversations with text and voice, translation toggle, custom scenario creation, and a responsive chat flow wired to backend endpoints.
- Last significant update: Aug 2025
- Status: Archived
- Tech: React, TypeScript, Ollama, APIs (Web Speech & ElevenLabs)
Overview
This was a 2-day hackathon prototype built to test a simple idea: language learners do not just need more content; they need an easy way to practice real conversations in the languages they are learning, in situations that feel relevant, such as ordering food, making introductions, or asking for directions.
The app is centered around scenarios. Users can pick a scenario or create their own, then practice by having a conversation in that context through text or voice. To make speaking practice smoother, it supports speech-to-text for turning spoken input into messages, and text-to-speech so learners can hear replies and practice listening and pronunciation. Every scenario is handled by a different Ollama persona or model prompt, so the assistant’s tone and responses match the situation and the target language.
As a hackathon prototype, this version prioritizes demonstrating the idea and user experience quickly. Some technical details are simplified compared to how I would build a longer-term project.
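As a rough sketch of how per-scenario personas can be wired to Ollama (the scenario names, prompt text, and model choice below are illustrative, not the exact hackathon code; Ollama's `POST /api/chat` endpoint is real):

```typescript
// Sketch of scenario-to-persona routing. Scenario data and the "llama3"
// model name are assumptions for illustration.

interface Scenario {
  id: string;
  title: string;
  targetLanguage: string; // e.g. "Chinese"
  persona: string;        // system-prompt fragment describing the role
}

// Hypothetical built-in scenarios.
const SCENARIOS: Scenario[] = [
  { id: "cafe", title: "Ordering food", targetLanguage: "Chinese",
    persona: "You are a friendly café server." },
  { id: "directions", title: "Asking for directions", targetLanguage: "Spanish",
    persona: "You are a helpful local giving street directions." },
];

// Pure helper: build the system prompt that shapes the assistant's tone.
export function buildSystemPrompt(s: Scenario): string {
  return [
    s.persona,
    `Hold the conversation in ${s.targetLanguage}.`,
    "Keep replies short and conversational, matched to a learner's level.",
  ].join(" ");
}

// Ollama's chat endpoint takes a model name and a message list.
export async function chat(s: Scenario, userText: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // assumed model; the prototype may have used another
      stream: false,
      messages: [
        { role: "system", content: buildSystemPrompt(s) },
        { role: "user", content: userText },
      ],
    }),
  });
  const data = await res.json();
  return data.message.content;
}
```

Routing every scenario through one prompt-builder keeps the "persona per scenario" idea to a single data change: adding a scenario is adding an entry, not new code.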
Demo
Short walkthrough of the hackathon prototype: browse a scenario → start a conversation → use translation support when needed.
This demo uses footage captured from the original hackathon submission build.
1) Browse the homepage and available scenarios
The homepage introduces the scenario-based practice flow and lets users choose from built-in scenarios or navigate through the available learning paths.
2) Start a conversation in a scenario
This example shows a conversation in Chinese, demonstrating the core idea of practicing language in context rather than through generic prompts.
3) Use translation support during the conversation
When needed, the user can reveal translations to better understand the exchange without completely breaking the practice flow.
What I built
- Scenario-based language practice: Built a language learning prototype where users can practice conversations in realistic contexts instead of relying on isolated prompts.
- Text and voice chat flow: Created a chat experience that supports both text and speech, making it possible to practice speaking, listening, and written responses in one place.
- Scenario-specific assistant behavior: Used different Ollama personas and prompts so the assistant responds in ways that better fit each practice situation.
- Translation support: Added a translation toggle that helps users understand messages without breaking the overall conversation flow.
- Custom scenario creation: Built a flow that lets learners define their own situations and immediately use them for practice.
- End-to-end concept build: Tied together scenario selection, conversation, translation, and voice support into a fast prototype that validates the full flow.
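The translation toggle can be sketched as a small pure function over the message model (the field names here are assumptions, not the actual data shape):

```typescript
// Sketch of the per-message translation toggle; field names are illustrative.

interface ChatMessage {
  role: "user" | "assistant";
  text: string;          // target-language text
  translation?: string;  // English translation, fetched on demand
}

// Pure helper: decide what to render given the toggle state. The original
// stays visible so revealing a translation does not break the practice flow.
export function displayText(m: ChatMessage, showTranslation: boolean): string {
  if (showTranslation && m.translation) {
    return `${m.text}\n${m.translation}`;
  }
  return m.text;
}
```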
Key decisions
- Make scenarios the core interaction model: I centered the app around practical situations rather than generic chat so the learning experience feels more grounded and relevant to how people actually use a language.
- Support both text and voice: I included speech-to-text and text-to-speech so practice is not limited to typing. This makes the prototype closer to real conversation and lets learners practice speaking and listening as well.
- Use scenario-specific Ollama personas: Different scenarios route to different prompts or models so the assistant’s responses better match the context instead of relying on a single generic chatbot for every situation.
- Treat custom scenarios as part of the core loop: I wanted learners to go beyond fixed content, so I made it possible to create scenarios and feed them back into the same browsing and practice flow.
- Use translation as optional support: Translation is available when needed, but the main experience still encourages users to stay in the target language first and only fall back to help when necessary.
- Prioritize hackathon speed: Since this was built in two days, I prioritized proving the user flow end to end over deeper architecture or production-grade technical decisions. The goal was to test whether the concept works, not to present this as the correct long-term implementation.
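The browser side of the voice decision can be sketched as below. The language-tag mapping is an assumption, `SpeechRecognition` is webkit-prefixed in Chrome, and the actual build also used ElevenLabs for voices; this sketch shows only the Web Speech path:

```typescript
// Sketch of the browser speech layer. Accessed via globalThis so it degrades
// gracefully outside the browser; the tag mapping below is an assumption.

const LANG_TAGS: Record<string, string> = {
  Chinese: "zh-CN",
  Spanish: "es-ES",
  French: "fr-FR",
};

// Pure helper: pick a BCP 47 tag for recognition/synthesis, with a fallback.
export function langTagFor(targetLanguage: string): string {
  return LANG_TAGS[targetLanguage] ?? "en-US";
}

// Speech-to-text via the Web Speech API (webkit-prefixed in Chrome).
export function listenOnce(targetLanguage: string, onText: (t: string) => void): void {
  const w = globalThis as any;
  const SR = w.SpeechRecognition ?? w.webkitSpeechRecognition;
  if (!SR) return; // browser without speech recognition support
  const rec = new SR();
  rec.lang = langTagFor(targetLanguage);
  rec.interimResults = false;
  rec.onresult = (e: any) => onText(e.results[0][0].transcript);
  rec.start();
}

// Text-to-speech fallback via speechSynthesis.
export function speak(text: string, targetLanguage: string): void {
  const w = globalThis as any;
  if (!w.speechSynthesis) return;
  const u = new w.SpeechSynthesisUtterance(text);
  u.lang = langTagFor(targetLanguage);
  w.speechSynthesis.speak(u);
}
```

The feature-detection guards are what makes "support both text and voice" safe: when speech is unavailable, the text chat flow still works unchanged.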
Constraints & limitations
- Hackathon-speed implementation: This project was built in two days, so the focus was on validating the core idea and getting an end-to-end prototype working quickly. As a result, some technical decisions were made for speed rather than long-term maintainability or scalability.
- Scenario behavior is prompt-driven: Different conversation behaviors are created by routing users to different Ollama personas or prompts depending on the selected scenario. This works well for prototyping, but it also means the quality and consistency of responses depend heavily on prompt design rather than a more robust underlying system.
- Speech features depend on external and browser capabilities: Speech-to-text relies on browser speech recognition support, and text-to-speech depends on external voice configuration and API availability. This makes the experience less predictable across browsers and environments.
- Translation is a support layer, not a full learning system: Translation helps users follow the conversation, but it is still a lightweight assistive feature rather than a deeply designed feedback system. Over-reliance on translation could also reduce the challenge of staying in the target language.
- Custom scenarios are lightweight and locally stored: User-created scenarios are persisted in local storage and merged back into the app, which was enough for a hackathon prototype. However, this means they are device-specific and not designed for syncing, collaboration, or long-term content management.
- Limited depth of language-learning logic: The app demonstrates a useful interaction loop for practice, but it does not yet provide deeper learning features such as personalized feedback, progress tracking, adaptive difficulty, or strong evaluation of learner performance.
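The device-local custom-scenario persistence described above can be sketched like this (the storage key and scenario shape are assumptions; storage is injected so the same logic runs against `localStorage` in the browser):

```typescript
// Sketch of device-local custom-scenario persistence. Key name and
// scenario fields are illustrative, not the actual hackathon code.

interface CustomScenario {
  id: string;
  title: string;
  targetLanguage: string;
  persona: string;
}

// Minimal storage interface: localStorage in the browser, a stub in tests.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const KEY = "goon.customScenarios"; // hypothetical storage key

export function loadCustomScenarios(store: KVStore): CustomScenario[] {
  try {
    return JSON.parse(store.getItem(KEY) ?? "[]");
  } catch {
    return []; // corrupt entry: fall back to no custom scenarios
  }
}

export function saveCustomScenario(store: KVStore, s: CustomScenario): CustomScenario[] {
  // Replace by id so editing a scenario does not duplicate it.
  const merged = [...loadCustomScenarios(store).filter((x) => x.id !== s.id), s];
  store.setItem(KEY, JSON.stringify(merged));
  return merged;
}

// Built-ins plus device-local custom scenarios, as one browsable list.
export function allScenarios(builtIns: CustomScenario[], store: KVStore): CustomScenario[] {
  return [...builtIns, ...loadCustomScenarios(store)];
}
```

Merging custom scenarios into the same list as built-ins is what feeds them back into the normal browsing and practice flow, at the cost of the device-specific limitation noted above.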
Future considerations
- Explore stronger scenario handling beyond the current prompt and persona setup, with more dedicated conversation behaviors that can better adapt to different situations, teaching styles, and learner needs.
- Evaluate whether separate specialized roles or models would work better than the current lightweight scenario routing, especially for cases where different situations need clearly different behavior.
- Explore making the application more agentic, so the system can manage a fuller learning loop instead of only responding turn by turn: for example, tracking the learner’s goal, deciding when to encourage, when to correct, and when to adjust difficulty.
- Extend custom scenarios so they carry more structure, such as context, difficulty, learning goals, and expected conversation style, instead of acting only as lightweight prompt setups.
- Improve the voice experience so speech input and output feel more reliable and natural across languages, browsers, and devices.
- Evolve translation from a simple support layer into something more helpful for learning, such as contextual hints, partial guidance, or explanation-based assistance.
- Add longer-term learning support, such as progress tracking, conversation history, and review loops based on repeated mistakes or weak vocabulary areas.
- Revisit the overall technical design with a stronger architecture if the concept proves worth pursuing beyond hackathon validation.