Scout
Scout generates personalised learning roadmaps. You give it a goal and your current skill level; it produces a structured path with resources, projects, and milestones. The hard part wasn't the generation - it was making it affordable.
The first model I tried was GPT-4. The output quality was noticeably better than the alternatives, but the cost was a problem: at free-tier limits, a single complex roadmap could eat 30% of the daily quota. Fine for testing, not fine for a real product. I switched to Gemini Flash, which delivers comparable quality for this use case, and its free tier is generous enough to handle actual usage.
Rate limits still mattered. The frontend can submit requests faster than the backend can process them, so I added per-IP rate limiting with Redis. Not sophisticated, but it stopped the accidental hammering during development.
Mock mode was important. The AI backend takes 2-3 seconds to respond; the frontend shouldn't have to wait for that during UI iteration. I added a MOCK_MODE env flag that returns a pre-generated fixture response instantly, so the frontend and backend could be developed independently.
Supabase stores the generated roadmaps so users can come back to them. The schema is simple: roadmaps belong to users, each roadmap has a JSON payload containing the structured output. I didn't try to normalize it into relational tables - the LLM output is opaque enough that storing it as JSON made more sense.
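A schema along these lines captures that design (table and column names are my guesses, not Scout's actual migration): one row per roadmap, the LLM output dropped into a `jsonb` column as-is.

```sql
-- Hypothetical schema sketch; names are illustrative.
create table roadmaps (
  id         uuid primary key default gen_random_uuid(),
  user_id    uuid not null references auth.users (id),
  payload    jsonb not null,                      -- the structured LLM output, stored whole
  created_at timestamptz not null default now()
);
```

`jsonb` still allows querying into the payload later if a field turns out to matter, without committing to a relational model of output that may change with the prompt.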
The one thing I'd change: the current backend waits for the full generation before returning. Adding streaming responses would drop perceived latency significantly. It's on the list.
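The core of that change is yielding sections as they are produced instead of buffering the whole generation. A framework-agnostic sketch (with FastAPI this generator would be wrapped in a `StreamingResponse`; the SSE framing and names here are assumptions, not existing Scout code):

```python
def stream_roadmap(sections):
    """Yield roadmap sections one at a time with SSE-style framing.

    Each chunk reaches the client as soon as it exists, so perceived
    latency drops to the time-to-first-section rather than the full
    generation time.
    """
    for section in sections:
        yield f"data: {section}\n\n"

chunks = list(stream_roadmap(["milestone 1", "milestone 2"]))
assert chunks[0] == "data: milestone 1\n\n"
```

The catch is upstream: this only helps if the model call itself streams tokens, which Gemini supports, so the backend can forward partial output instead of waiting on the complete roadmap.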
Stack: Python, FastAPI, React, Google Gemini, Supabase, Upstash Redis.