Gaurav Patwardhan

Bengaluru, India

Product Manager / Builder. I identify real problems and ship working products — from data pipelines to AI agents. Four projects below.


Consumer Product

BLR Neighborhood Explorer

A data-driven neighborhood comparison engine for Bengaluru

The problem

Moving to a new city is painful. Existing tools are outdated, scattered across 5 tabs, or behind paywalls. You need one place to understand neighborhoods — livability, rentals, amenities, commute times, weather — before signing a lease.

How it works

  • Aggregates 4+ live data sources: NoBroker rentals, Overpass API (OSM), OpenWeatherMap, custom livability scoring
  • Scores 100+ Bengaluru neighborhoods algorithmically: proximity to schools, hospitals, supermarkets, commute zones
  • Renders interactive maps with MapLibre GL — click any neighborhood to drill into rental listings and scores
  • Refreshes nightly via scheduled GitHub Actions data pipeline
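A minimal sketch of what proximity-based livability scoring can look like. The amenity categories, weights, and decay distances below are illustrative assumptions, not the production algorithm:

```python
from math import exp

# Hypothetical amenity weights and decay distances (km) — illustrative only.
WEIGHTS = {"school": 0.3, "hospital": 0.3, "supermarket": 0.2, "metro": 0.2}
DECAY_KM = {"school": 2.0, "hospital": 3.0, "supermarket": 1.0, "metro": 1.5}

def amenity_score(distance_km: float, decay_km: float) -> float:
    """Score in [0, 1] that decays exponentially with distance."""
    return exp(-distance_km / decay_km)

def livability(nearest: dict) -> float:
    """Weighted 0-100 score from nearest-amenity distances (km)."""
    total = sum(
        WEIGHTS[kind] * amenity_score(dist, DECAY_KM[kind])
        for kind, dist in nearest.items()
        if kind in WEIGHTS
    )
    return round(100 * total, 1)

# A neighborhood with everything close by scores higher than a remote one.
close_by = livability({"school": 0.5, "hospital": 1.0, "supermarket": 0.3, "metro": 0.8})
remote = livability({"school": 5.0, "hospital": 8.0, "supermarket": 4.0, "metro": 6.0})
```

The exponential decay is one plausible choice; it rewards near amenities strongly without letting a single distant one zero out the score.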

100+

neighborhoods scored

4+

live data sources

200+

commits

Live

deployed on Vercel

Architecture

NoBroker      ┐
OpenWeatherMap ├─► Python pipeline ─► Supabase ─► Next.js API ─► MapLibre
Overpass (OSM) ┘     (nightly cron)     (PostgreSQL)   (REST)      (frontend)

Stack

Frontend: Next.js, React, MapLibre GL, Tailwind CSS
Backend: Node.js API routes, Supabase PostgreSQL
Data: Python scraping, Overpass API, OpenWeatherMap
Ops: GitHub Actions (nightly refresh), Vercel
Languages: TypeScript 63% · Python 27% · JS 10%

What this shows

  • Full-stack data pipeline → backend → frontend, end to end
  • Operational thinking: scheduled jobs, data freshness, pipeline reliability
  • Consumer product instinct: what someone actually needs when moving cities
  • Shipping mindset: deployed, live, not a prototype deck

AI Agent

Fitness Progress Coach

A Telegram-native AI coaching agent that knows your training history

The problem

Generic fitness apps don't know your programme. Manually tracking sets and weights is tedious, and none of it connects to coaching that actually references what you did last week. There's no tool that combines natural language logging with contextual AI feedback based on your specific history.

How it works

  • Text a keyword on Telegram (chest · back · shoulder · legs) and receive your pre-filled workout template instantly
  • Fill in sets, reps, weight, RPE and reply — a code node parses the log and writes every exercise as a row in Google Sheets
  • GPT-4o-mini fetches your last 4 sessions per exercise from Sheets, detects plateaus and PRs, and sends coaching feedback as Marcus — a direct, data-driven coach persona
  • Anti-hallucination architecture: session counts and plateau/trend detection flags are computed in code nodes before the LLM sees the data — the model can only reference what it's explicitly given
  • 3-workflow n8n architecture: Router (webhook + switch), Template Sender, and Log & Coach — each workflow is independently maintainable
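The "compute before the LLM sees it" idea can be sketched roughly as follows. The production logic lives in n8n JavaScript code nodes; this Python version, including its row shape and thresholds, is an illustrative assumption:

```python
def progression_flags(history: list) -> dict:
    """Derive plateau/PR flags in code so the LLM can only cite
    precomputed facts, never inferred ones.

    history: newest-first rows like {"weight": 60.0, "reps": 8}.
    """
    if not history:
        return {"sessions": 0, "plateau": False, "new_pr": False}
    top_weights = [row["weight"] for row in history[:4]]  # last 4 sessions
    best_before = max(top_weights[1:], default=0.0)
    return {
        "sessions": len(history),
        # Plateau: top weight unchanged across the last 3+ sessions.
        "plateau": len(top_weights) >= 3 and len(set(top_weights[:3])) == 1,
        # PR: most recent top weight beats everything prior.
        "new_pr": top_weights[0] > best_before,
    }

# These flags are serialized into the prompt alongside the raw rows,
# so "you hit a PR" can only appear if the code already decided it.
flags = progression_flags([
    {"weight": 62.5, "reps": 8},
    {"weight": 60.0, "reps": 8},
    {"weight": 60.0, "reps": 7},
])
```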

3

n8n workflows

4

workout splits tracked

Live

active on Telegram

GPT-4o-mini

coaching model

Architecture

Telegram msg → Router Workflow → Switch
  keyword (chest/legs/…) → Template Sender → Telegram reply
  workout log            → Log & Coach Workflow
                              ↓
                         OpenAI parse → Google Sheets (log)
                              ↓
                         Sheets (fetch history) → format + flags
                              ↓
                         OpenAI "Marcus" coach → Telegram reply

Stack

Automation: n8n (3-workflow agent architecture)
AI: OpenAI GPT-4o-mini (parse + coaching)
Interface: Telegram Bot API (input + output)
Storage: Google Sheets (one row per exercise per session)
Logic: JavaScript code nodes (routing, parsing, anti-hallucination flags)

What this shows

  • AI agent design: parse-then-reason pattern, anti-hallucination via data validation in code before LLM
  • Product thinking on constraints: Marcus persona has strict coaching priority order to prevent generic output
  • No-code + code hybrid: n8n for orchestration, JavaScript nodes for logic that needs precision
  • Real usage: built to solve a personal problem, runs daily, has known bugs documented honestly

Automation Tool

For Job Hunt

Automated job search assistant for HR/talent roles in India

The problem

Job hunting in India means manually checking 5+ job boards every day — Naukri, LinkedIn, Indeed, SmartRecruiters, Workday. That's 2–3 hours of repetitive, soul-destroying work before you've even applied to anything.

How it works

  • Scrapes 5+ job boards daily using Selenium (JS-heavy sites) and BeautifulSoup
  • Filters results by resume keywords, location preference (Bengaluru/Remote), and experience level
  • Ranks output: 60% keyword match + 40% GPT-3.5 semantic confidence score
  • Sends curated HTML email alerts at 9 AM and 6 PM IST, color-coded by relevance
  • Deduplicates across a 3-day rolling window — no spam
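The 60/40 blend can be sketched like this. The function names and normalization are assumptions, and the GPT-3.5 relevance score is mocked rather than fetched:

```python
def keyword_score(job_text: str, resume_keywords: list) -> float:
    """Fraction of resume keywords found in the posting, in [0, 1]."""
    text = job_text.lower()
    hits = sum(1 for kw in resume_keywords if kw.lower() in text)
    return hits / len(resume_keywords) if resume_keywords else 0.0

def blended_rank(job_text: str, resume_keywords: list,
                 semantic_confidence: float) -> float:
    """60% keyword match + 40% GPT semantic confidence (both in [0, 1])."""
    return 0.6 * keyword_score(job_text, resume_keywords) + 0.4 * semantic_confidence

# semantic_confidence would come from a GPT-3.5 relevance call; mocked at 0.9 here.
score = blended_rank(
    "HR Business Partner, Bengaluru. Talent acquisition and HRBP experience.",
    ["HRBP", "talent acquisition", "Bengaluru"],
    semantic_confidence=0.9,
)
```

Keeping the keyword component dominant means an API outage degrades ranking quality gracefully instead of breaking it.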

5+

job boards monitored

2×

daily email delivery

3-day

deduplication window

GPT-3.5

semantic ranking

Architecture

Naukri          ┐
LinkedIn        │
Indeed          ├─► Selenium/BS4 ─► Filter ─┬─► Top Matches (email)
SmartRecruiters │   + dedup (Supabase)      └─► Other Openings (email)
Workday         ┘
                    GPT-3.5 semantic re-ranking (optional)

Stack

Scraping: Selenium, BeautifulSoup4, LXML
Intelligence: OpenAI GPT-3.5 (semantic re-ranking)
Data: Supabase PostgreSQL (dedup + retention)
Automation: GitHub Actions (9 AM + 6 PM IST)
Delivery: SMTP via Gmail, HTML email templates
Languages: Python 55% · HTML 45%

What this shows

  • Automation thinking: scheduled reliability, failure handling, deduplication
  • Data engineering: multi-source scraping, transformation, structured storage
  • Problem-first: built to solve a real personal pain, then abstracted to work for others
  • Operational logic: how to handle freshness, ranking, and volume at scale

Developer Tool

Claude Token Efficiency

Analytics tool for measuring and optimizing Claude Code usage

The problem

Every Claude Code session generates token usage data — but it's buried in local JSON files. No one knows their cache hit rate, how much of the context window they're burning, or when they're being inefficient. If you can't measure it, you can't optimize it.

How it works

  • Reads local ~/.claude/ session history (no API calls, completely offline)
  • Computes cache hit rate, context window utilization, and wasted token analysis
  • Surfaces session statistics: average tokens, productivity patterns, activity heatmaps
  • Distributes as both a Claude Code skill (/token-efficiency) and standalone CLI

0

external dependencies

200K

context window analyzed

Offline

no network calls

CLI + skill

dual distribution

Architecture

~/.claude/sessions/*.json ─► Python analyzer ─► Console report
                                  (zero deps)       cache rate
                                                    context % used
                                                    session stats

Stack

Language: Pure Python 3.7+ (zero external dependencies)
Data source: Local ~/.claude/ JSON session files
Distribution: Claude Code skill + standalone CLI
Architecture: Fully offline, no API calls or network access

What this shows

  • Measurement mindset: you can't improve what you don't track
  • Developer empathy: tools that solve real friction in the dev workflow
  • Pragmatic implementation: zero dependencies, ships as a one-liner install
  • Observability thinking: instrumentation as a first-class concern

About

Why I build this way

I'm a product manager who builds the things I spec. Not because I have to — because shipping something end-to-end is the fastest way to learn what actually matters. The judgment calls you make at 2 AM debugging a data pipeline are different from the ones you make in a document.

The projects above aren't portfolio pieces. They solve problems I ran into, tools I wanted to use, data I wanted to see. BLR was me trying to find a flat. For Job Hunt was me drowning in browser tabs. Token Efficiency was me wondering if I was using Claude Code well.

I'm most useful to founders who need someone who can both think about a product and build pieces of it — someone who understands that a great feature and a broken data pipeline are the same problem.