How I replaced a 1:1 coaching bottleneck with an AI-supported system that served 6x more teams, improved outcomes, and cost $7 to run.
Every semester, over 650 teams applied to NYU's Entrepreneurial Institute programs. Most were turned away before receiving any structured coaching, not because they lacked potential, but because our capacity couldn't keep up with demand. The existing model funneled everyone through 1:1 sessions with staff coaches, which meant we were simultaneously over-stretched and under-selecting.
The consequences were significant: we were rejecting over 75% of applicants before getting to know them, spending disproportionate coaching time on teams that weren't advancing, and leaving the majority of interested students without a meaningful entry point into entrepreneurial education.
"We weren't running a coaching program. We were running a filtering program. The real opportunity was to redesign the funnel so that more teams got value, not fewer."
The original structure was straightforward: attend a 4-hour Bootcamp workshop, then get matched with a staff coach for 1:1 sessions. Simple, but it hit a hard ceiling. Every additional team meant more staff hours with no leverage in the system.
How It Works
Leslie retrieves each team's application record from our master spreadsheet with a deterministic exact-match lookup on email address. No AI interpretation, no creative gap-filling: the data is read precisely as written.
Each session has a single defined objective tied to the team's current stage. Leslie guides founders through that objective using all workshop transcripts and coaching guidelines, without scope creep or generic advice.
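One way to picture that constraint is a fixed stage-to-objective map that the session cannot wander outside of. The stage names and objective wording below are hypothetical; the actual coaching guidelines aren't reproduced here.

```python
# Hypothetical stage-to-objective map; labels are illustrative only.
SESSION_OBJECTIVES = {
    "ideation": "Articulate the problem and the customer who has it.",
    "validation": "Design and run one customer-discovery experiment.",
    "traction": "Define the single metric that proves early demand.",
}

def session_objective(stage: str) -> str:
    """Return the one objective for this session.

    Raises instead of improvising, mirroring the no-scope-creep rule:
    a session never gets a generic or invented objective.
    """
    if stage not in SESSION_OBJECTIVES:
        raise KeyError(f"No defined objective for stage {stage!r}")
    return SESSION_OBJECTIVES[stage]
```

The lookup failing loudly for an unknown stage is the design choice: better no session than a session with a made-up goal.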
After each session, the system auto-generates follow-up emails and CRM notes via Gemini, and logs every interaction into a tracking spreadsheet for real-time utilization monitoring.
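A sketch of that close-out step, with the LLM call injected as a plain function so the example stays runnable. In production `generate` would wrap a Gemini call; the field names in the log entry are assumptions, not the real tracking-sheet schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class SessionLog:
    """Stands in for the tracking spreadsheet used for utilization monitoring."""
    rows: list[dict] = field(default_factory=list)

def close_out_session(
    team_email: str,
    transcript: str,
    generate: Callable[[str], str],  # injected LLM call (e.g. Gemini)
    log: SessionLog,
) -> dict:
    """Draft the follow-up email and CRM note, then log the interaction."""
    followup = generate(f"Write a follow-up email for this session:\n{transcript}")
    crm_note = generate(f"Summarize this session as a CRM note:\n{transcript}")
    entry = {
        "email": team_email,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "followup": followup,
        "crm_note": crm_note,
    }
    log.rows.append(entry)  # real system: append a row to the tracking sheet
    return entry
```

Keeping the generation step injectable means the logging and record-keeping logic can be tested without network access, and the model behind `generate` can change without touching the pipeline.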
Latest Feature
The most-requested feature from founders was the ability to continue where they left off. Leslie now stores a memory object in MongoDB at the end of every session and retrieves it at the start of the next. Teams no longer restart from their original application. The system knows where they are.
Redesigning the Program
The AI system was only part of the work. The harder challenge was redesigning the entire program structure around it: doubling Bootcamp frequency, creating group coaching formats to complement AI sessions, defining clear KPIs for what "quality" meant at each stage, and turning an initially skeptical coaching staff into advocates.
Change management was the real constraint. Coaches had built their identity around 1:1 relationships with founders. The pitch wasn't "AI replaces you." It was "AI handles the repetitive early-stage work so you can focus on the teams that actually need you." That reframe took time, iteration, and visible evidence that cohort quality wasn't dropping.
By the end of Fall '25, the same staff were describing the system as a force multiplier. Cohort quality held across both semesters: completion rate improved from 80% to 85%, and teams advancing to later-stage programs showed no regression in scoring benchmarks versus the prior model.