BLR Neighborhood Explorer
A data-driven neighborhood comparison engine for Bengaluru
The problem
Moving to a new city is painful. Existing tools are outdated, scattered across 5 tabs, or behind paywalls. You need one place to understand neighborhoods — livability, rentals, amenities, commute times, weather — before signing a lease.
How it works
- Aggregates 4+ live data sources: NoBroker rentals, the Overpass API (OSM), OpenWeatherMap, and custom livability scoring
- Scores 100+ Bengaluru neighborhoods algorithmically on proximity to schools, hospitals, supermarkets, and commute zones
- Renders interactive maps with MapLibre GL: click any neighborhood to drill into rental listings and scores
- Refreshes nightly via a scheduled GitHub Actions data pipeline
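The proximity scoring described above can be sketched as a weighted blend of normalized amenity counts. The weights, amenity categories, and function name below are illustrative assumptions, not the project's actual coefficients:

```python
# Hypothetical sketch of a proximity-based livability score.
# Weights and categories are assumptions for illustration only.
AMENITY_WEIGHTS = {"school": 0.3, "hospital": 0.3, "supermarket": 0.2, "transit": 0.2}

def livability_score(amenity_counts: dict, max_counts: dict) -> float:
    """Normalize each amenity count against the city-wide max, then blend into 0-100."""
    score = 0.0
    for amenity, weight in AMENITY_WEIGHTS.items():
        norm = min(amenity_counts.get(amenity, 0) / max(max_counts[amenity], 1), 1.0)
        score += weight * norm
    return round(score * 100, 1)

print(livability_score(
    {"school": 8, "hospital": 2, "supermarket": 5, "transit": 3},
    {"school": 10, "hospital": 4, "supermarket": 10, "transit": 6},
))  # → 59.0
```

Normalizing against the city-wide maximum keeps every neighborhood on the same 0–100 scale regardless of how amenity-dense the city is overall.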
100+ neighborhoods scored · 4+ live data sources · 200+ commits · Live (deployed on Vercel)
Architecture
NoBroker       ┐
OpenWeatherMap ├─► Python pipeline ─► Supabase ────► Next.js API ─► MapLibre
Overpass (OSM) ┘   (nightly cron)    (PostgreSQL)    (REST)         (frontend)
Stack
What this shows
- ✓ Full-stack data pipeline → backend → frontend, end to end
- ✓ Operational thinking: scheduled jobs, data freshness, pipeline reliability
- ✓ Consumer product instinct: what someone actually needs when moving cities
- ✓ Shipping mindset: deployed and live, not a prototype deck
Fitness Progress Coach
A Telegram-native AI coaching agent that knows your training history
The problem
Generic fitness apps don't know your programme. Manually tracking sets and weights is tedious, and none of it connects to coaching that actually references what you did last week. There's no tool that combines natural language logging with contextual AI feedback based on your specific history.
How it works
- Text a keyword on Telegram (chest · back · shoulder · legs) and receive your pre-filled workout template instantly
- Fill in sets, reps, weight, and RPE and reply; a code node parses the log and writes every exercise as a row in Google Sheets
- GPT-4o-mini fetches your last 4 sessions per exercise from Sheets, detects plateaus and PRs, and sends coaching feedback as Marcus, a direct, data-driven coach persona
- Anti-hallucination architecture: session counts and plateau/trend detection flags are computed in code nodes before the LLM sees the data, so the model can only reference what it is explicitly given
- 3-workflow n8n architecture: Router (webhook + switch), Template Sender, and Log & Coach; each workflow is independently maintainable
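The anti-hallucination pattern above boils down to: compute the flags in plain code, then let the prompt reference only those precomputed facts. A minimal sketch, where the field name `top_set_weight` and the plateau rule are assumptions rather than the workflow's actual schema:

```python
# Sketch of "compute flags before the LLM": the prompt is built only from
# this dict, so the model cannot invent session counts or trends.
# `top_set_weight` and the plateau rule are illustrative assumptions.
def build_coach_context(history: list) -> dict:
    """Reduce the last 4 sessions of one exercise to verified facts."""
    recent = history[-4:]
    weights = [s["top_set_weight"] for s in recent]
    return {
        "sessions_seen": len(recent),
        # PR: latest top set beats every earlier one in the window
        "is_pr": len(weights) >= 2 and weights[-1] > max(weights[:-1]),
        # plateau: no session in the window exceeded the first one
        "is_plateau": len(weights) >= 3 and max(weights) == weights[0],
    }
```

Because the LLM only ever sees `sessions_seen`, `is_pr`, and `is_plateau`, a prompt like "mention the plateau only if is_plateau is true" stays grounded in data the code actually verified.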
3 n8n workflows · 4 workout splits tracked · Live (active on Telegram) · GPT-4o-mini coaching model
Architecture
Telegram msg → Router Workflow → Switch
  keyword (chest/legs/…) → Template Sender → Telegram reply
  workout log → Log & Coach Workflow
                  ↓
                OpenAI parse → Google Sheets (log)
                  ↓
                Sheets (fetch history) → format + flags
                  ↓
                OpenAI "Marcus" coach → Telegram reply
Stack
What this shows
- ✓ AI agent design: parse-then-reason pattern, anti-hallucination via data validation in code before the LLM
- ✓ Product thinking under constraints: the Marcus persona has a strict coaching priority order to prevent generic output
- ✓ No-code + code hybrid: n8n for orchestration, JavaScript nodes for logic that needs precision
- ✓ Real usage: built to solve a personal problem, runs daily, with known bugs documented honestly
For Job Hunt
Automated job search assistant for HR/talent roles in India
The problem
Job hunting in India means manually checking 5+ job boards every day — Naukri, LinkedIn, Indeed, SmartRecruiters, Workday. That's 2–3 hours of repetitive, soul-destroying work before you've even applied to anything.
How it works
- Scrapes 5+ job boards daily using Selenium (for JS-heavy sites) and BeautifulSoup
- Filters results by resume keywords, location preference (Bengaluru/Remote), and experience level
- Ranks output: 60% keyword match + 40% GPT-3.5 semantic confidence score
- Sends curated HTML email alerts at 9 AM and 6 PM IST, color-coded by relevance
- Deduplicates across a 3-day rolling window: no spam
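The 60/40 ranking blend and the rolling dedup window above can be sketched as two small functions. The keyword-overlap metric, the confidence value, and both function names are illustrative assumptions:

```python
# Sketch of the ranking blend and dedup check described above.
# Inputs are assumed to already be normalized to [0, 1].
from datetime import datetime, timedelta

def rank_score(keyword_overlap: float, gpt_confidence: float) -> float:
    """Blend: 60% resume-keyword match + 40% GPT semantic confidence."""
    return 0.6 * keyword_overlap + 0.4 * gpt_confidence

def within_dedup_window(first_seen: datetime, now: datetime, days: int = 3) -> bool:
    """A listing first seen inside the rolling window is suppressed as a duplicate."""
    return (now - first_seen) <= timedelta(days=days)
```

Keeping the blend linear makes the color-coded email tiers trivial to derive: bucket listings by score thresholds rather than re-ranking per email.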
5+ job boards monitored · 2× daily email delivery · 3-day deduplication window · GPT-3.5 semantic ranking
Architecture
Naukri   ┐                            ┌─► Top Matches (email)
LinkedIn ├─► Selenium/BS4 ─► Filter ──┤
Indeed   │   + dedup (Supabase)       └─► Other Openings (email)
Workday  ┘
             GPT-3.5 semantic re-ranking (optional)
Stack
What this shows
- ✓ Automation thinking: scheduled reliability, failure handling, deduplication
- ✓ Data engineering: multi-source scraping, transformation, structured storage
- ✓ Problem-first: built to solve a real personal pain, then abstracted to work for others
- ✓ Operational logic: handling freshness, ranking, and volume at scale
Claude Token Efficiency
Analytics tool for measuring and optimizing Claude Code usage
The problem
Every Claude Code session generates token usage data — but it's buried in local JSON files. No one knows their cache hit rate, how much of the context window they're burning, or when they're being inefficient. If you can't measure it, you can't optimize it.
How it works
- Reads local ~/.claude/ session history (no API calls, completely offline)
- Computes cache hit rate, context window utilization, and wasted-token analysis
- Surfaces session statistics: average tokens, productivity patterns, activity heatmaps
- Ships as both a Claude Code skill (/token-efficiency) and a standalone CLI
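The cache-hit-rate computation above can be sketched as a fold over usage records. The JSONL layout and the field names (`cache_read_input_tokens`, `cache_creation_input_tokens`, `input_tokens`) follow Anthropic's usage schema but are an assumption about the local session file format:

```python
# Sketch: cache hit rate = cached reads / all input tokens, aggregated
# across one session file. Field names are assumptions about the local layout.
import json
from pathlib import Path

def cache_hit_rate(session_path: Path) -> float:
    cache_read = fresh = 0
    for line in session_path.read_text().splitlines():
        usage = json.loads(line).get("usage", {})
        cache_read += usage.get("cache_read_input_tokens", 0)
        fresh += usage.get("input_tokens", 0) + usage.get("cache_creation_input_tokens", 0)
    total = cache_read + fresh
    return cache_read / total if total else 0.0
```

Everything here is stdlib, which is what keeps the tool at zero external dependencies and fully offline.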
0 external dependencies · 200K context window analyzed · Offline (no network calls) · CLI + skill dual distribution
Architecture
~/.claude/sessions/*.json ─► Python analyzer ─► Console report
                             (zero deps)         cache rate
                                                 context % used
                                                 session stats
Stack
What this shows
- ✓ Measurement mindset: you can't improve what you don't track
- ✓ Developer empathy: tools that solve real friction in the dev workflow
- ✓ Pragmatic implementation: zero dependencies, ships as a one-liner install
- ✓ Observability thinking: instrumentation as a first-class concern