NYU Entrepreneurial Institute 2024–2025 AI Systems · Change Management

AI Coaching Copilot: Scaling a Venture Program 6x Without Adding Headcount

How I replaced a 1:1 coaching bottleneck with an AI-supported system that served 6x more teams, improved outcomes, and cost $7 to run.

Teams served per semester: 66 to 422
NYU students reached: 80 to 371
Staff coaching sessions required: 75 to 8
Total system operating cost: $7

The Problem

Every semester, over 650 teams applied to NYU's Entrepreneurial Institute programs. Most were turned away before receiving any structured coaching, not because they lacked potential, but because our capacity couldn't keep up with demand. The existing model funneled everyone through 1:1 sessions with staff coaches, which meant we were simultaneously over-stretched and under-selecting.

The consequences were significant: we were rejecting over 75% of applicants before getting to know them, spending disproportionate coaching time on teams that weren't advancing, and leaving the majority of interested students without a meaningful entry point into entrepreneurial education.

"We weren't running a coaching program. We were running a filtering program. The real opportunity was to redesign the funnel so that more teams got value, not fewer."

Before and After

The original structure was straightforward: attend a 4-hour Bootcamp workshop, then get matched with a staff coach for 1:1 sessions. Simple, but it hit a hard ceiling. Every additional team meant more staff hours with no leverage in the system.

Before
4-hour Bootcamp workshop
1:1 coaching with staff coaches
2 Bootcamps per semester
66 teams served
75 staff coaching sessions

After
4-hour Bootcamp workshop
"Leslie" AI Copilot sessions
Group coaching sessions
4 Bootcamps per semester
422 teams served
8 staff sessions + 173 AI sessions

How It Works

The "Leslie" AI Copilot

01
Zero-hallucination team lookup

Leslie uses a traditional database query against our master spreadsheet to retrieve each team's exact application record via email match. No AI interpretation, no creative gap-filling. The data is read precisely as written.

02
Narrowly scoped per session

Each session has a single defined objective tied to the team's current stage. Leslie guides founders through that objective using all workshop transcripts and coaching guidelines, without scope creep or generic advice.

03
Automated follow-through

After each session, the system auto-generates follow-up emails and CRM notes via Gemini, and logs every interaction into a tracking spreadsheet for real-time utilization monitoring.
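The follow-through step is an orchestration pattern more than a model feature. A simplified sketch with the Gemini call stubbed out (the real email generation runs through the Institute's Gemini account inside the n8n pipeline):

```python
from datetime import datetime, timezone

def draft_followup(session_summary: str) -> str:
    """Stand-in for the model call; in the live system this is a Gemini request."""
    return (
        "Hi team, a quick recap of today's session:\n"
        f"{session_summary}\n"
        "Your next step is listed above."
    )

def log_session(log: list[dict], team_email: str, summary: str) -> dict:
    """Append one row per interaction so utilization can be monitored in real time."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "email": team_email,
        "followup": draft_followup(summary),
    }
    log.append(entry)  # in production: a row in the tracking spreadsheet + a CRM note
    return entry
```

Logging every interaction as structured rows is what later makes the utilization and quality metrics in the Results section cheap to compute.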

Latest Feature

Persistent Memory Across Sessions

The most-requested feature from founders was the ability to continue where they left off. Leslie now stores a memory object in MongoDB at the end of every session and retrieves it at the start of the next. Teams no longer restart from their original application. The system knows where they are.

Target Customer Nuances
Profile updates, pivots, and new details discovered through customer discovery are stored and recalled in subsequent sessions.
Problem Definition
Pain points, validation levels, and hypothesis changes are tracked so each session builds on the last rather than repeating ground.
Interview Plan Progress
Interview guides, target interviewees, and session counts are remembered so coaching stays calibrated to where the team actually is.
Coaching Continuity
Founder challenges and coaching dynamics are retained, so Leslie can adapt its approach rather than treating every session as a cold start.
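The memory mechanics above reduce to an upsert-by-email pattern: merge new findings into the stored object at session end, read it back at session start. A minimal sketch using an in-memory dict as a stand-in for the MongoDB collection (in production this would be a pymongo upsert; field names here are illustrative):

```python
# In-memory stand-in for the MongoDB collection, keyed by team email.
_store: dict[str, dict] = {}

def save_memory(email: str, memory: dict) -> None:
    """End of session: merge new fields into the stored memory object (upsert)."""
    existing = _store.get(email, {})
    existing.update(memory)  # pivots and updates overwrite; untouched fields persist
    _store[email] = existing

def load_memory(email: str) -> dict:
    """Start of session: retrieve prior memory. Empty dict means a first session."""
    return dict(_store.get(email, {}))
```

The merge-rather-than-replace choice is what lets a pivot update the problem definition while interview progress and coaching notes carry forward untouched.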

Tools and Stack

What I built it with

n8n
Workflow automation layer. Orchestrates the full session pipeline from team lookup through follow-up generation and usage logging.
Claude Sonnet
Primary coaching model. Replaced an earlier Gemini integration for significantly better conversation flow and contextual reasoning.
MongoDB
Persistent memory store. Saves and retrieves session memory objects so each coaching conversation builds on prior interactions.
ElevenLabs
Voice layer for session delivery, enabling a more natural coaching experience beyond a standard chat interface.
Notion
Houses coaching guidelines, workshop transcripts, and program documentation used to train and instruct Leslie's behavior.
Gemini (Institute account)
Backend model for follow-up email generation and CRM note creation, running on the Institute's own API account.

What I Actually Built

The AI system was only part of the work. The harder challenge was redesigning the entire program structure around it: doubling Bootcamp frequency, creating group coaching formats to complement AI sessions, defining clear KPIs for what "quality" meant at each stage, and turning an initially skeptical coaching staff into advocates.

Change management was the real constraint. Coaches had built their identity around 1:1 relationships with founders. The pitch wasn't "AI replaces you." It was "AI handles the repetitive early-stage work so you can focus on the teams that actually need you." That reframe took time, iteration, and visible evidence that cohort quality wasn't dropping.

By the end of Fall '25, the same staff were describing the system as a force multiplier. Advancement rates and scoring benchmarks held across both semesters.

Results (Fall '24 vs. Fall '25)

6.4x increase in teams served (66 to 422)
4.6x increase in students educated (80 to 371)
89% reduction in staff coaching sessions needed
$7 total AI system operating cost for the semester

Cohort quality held. Completion rate improved from 80% to 85%. Teams advancing to later-stage programs showed no regression in scoring benchmarks versus the prior model.

Lessons Learned

01 The system redesign mattered more than the AI. Doubling Bootcamp frequency and restructuring how coaching time was allocated drove as much of the outcome as the copilot itself. The AI enabled the redesign; it didn't replace it.
02 Adoption required proof, not persuasion. Coaches became advocates once they saw that team quality wasn't declining. Getting to that proof point quickly, by measuring the right things from the start, was the unlock.
03 Traditional lookup beats AI search for retrieval. Replacing AI-powered knowledge search with a precise database query eliminated hallucination at the data layer entirely. For factual record retrieval, deterministic systems outperform probabilistic ones every time.
04 Scope constraints are a feature. Limiting Leslie to information relevant at each specific stage made the coaching better, not worse. Narrowly defined tools outperform general-purpose ones in structured workflows.