impactAI

An AI-powered conversational search tool that lets anyone find Hollywood crew in plain language.


The Context

The platform was working.
The front door was closed.

By late 2024, Impact had a functioning network, a hiring system, and on-set tools keeping crew engaged through production. The ecosystem worked, but it was a closed environment. You had to already be inside it to understand why it was valuable.

Acquisition was the bottleneck. Getting a Hollywood professional to sign up for a new platform without experiencing it first was a hard ask. So instead of asking first, we built impactAI, a conversational search tool powered by an LLM connected directly to the Impact database, available to anyone without logging in. 

The conversion mechanic was deliberately ungated. Anyone could search and browse real results. The wall appeared only at the moment of intent — connecting, messaging, or viewing a full profile triggered a sign-up prompt. Value demonstrated before the ask.

Business Goal

Test AI as a growth mechanism. Make the Impact network legible to outsiders — specifically to coordinators and production companies at major studios — without requiring account creation upfront.

Where it lived

Two surfaces: embedded on the Impact homepage for existing visitors, and as a standalone public link that could be shared with anyone — no login required to access.

From MVP to Product

What the developer built.
What the product needed to be.

Engineering shipped a working prototype called "Listmaker." The LLM connected to the database, queries returned results, candidates appeared. But it communicated nothing about why you should trust it, how to use it, or what it was for. The name said it all — it was conceived as a utility, not a product.

[Screenshot: the Listmaker MVP]


Four decisions that changed
what the product communicated

FRAMING

"The Key To Hollywood" → "Who Do You Need?"

"The Key To Hollywood" was aspirational — positioning impactAI as an all-in-one network solution. The ambition was right but it overpromised relative to what the tool actually did: conversational search within a defined database. The reframe was narrower but honest — and it doubled as onboarding. Users read "Who Do You Need?" and immediately understood what kind of input the system needed from them. The headline became part of the tutorial.

ONBOARDING

Empty input → Suggested prompts

An empty search box communicated nothing about what the system could do. Suggested prompts modeled three things simultaneously: the range of query types, the level of specificity the system needed to work well, and the kinds of questions Hollywood professionals actually ask when they're hiring. They weren't just shortcuts — they were a tutorial disguised as UI. The pattern was borrowed from tools like Perplexity, where users were already building intuition around conversational search — then adapted for a much higher-stakes professional context.

[Screenshot: example prompts]

TRUST

"Based On" tags + Edit Filters + Report an Issue

After the AI parsed a query, we surfaced exactly what it had extracted as editable tags — role, location, experience level, genre, production type — mapped directly to Impact's existing filter infrastructure. Users could see the reasoning and correct it. "Edit Filters" opened a full panel. "Report an Issue" was built into results — partly as a trust signal (we know the system isn't perfect), partly as a real feedback loop into the model. In an industry where a bad hire damages your reputation, transparency about AI reasoning wasn't optional. It was the product.
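The tag surface can be sketched minimally: a parsed query whose extracted fields become editable tags, and only the fields the model actually inferred appear. Everything below (`ParsedQuery`, `to_filter_tags`, the label names) is an illustrative assumption, not Impact's real schema.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical sketch of the "Based On" row. Only fields the model
# actually extracted become tags, so users see exactly what was
# inferred and can correct it before trusting the results.

@dataclass
class ParsedQuery:
    role: Optional[str] = None
    location: Optional[str] = None
    experience_years: Optional[int] = None
    genre: Optional[str] = None
    production_type: Optional[str] = None

# Display order and labels for the tag row (illustrative).
LABELS = [
    ("role", "Role"),
    ("location", "Location"),
    ("experience_years", "Experience"),
    ("genre", "Genre"),
    ("production_type", "Production Type"),
]

def to_filter_tags(q: ParsedQuery) -> List[Tuple[str, str]]:
    """Return (label, value) pairs only for fields the model extracted."""
    return [(label, str(getattr(q, field)))
            for field, label in LABELS
            if getattr(q, field) is not None]
```

In this sketch, each tag maps one-to-one onto an existing filter, so editing or removing a tag is just a filter change that re-runs the search.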


AVATAR

Not a wizard. Not a database. Something in between.

The avatar went through significant deliberation across three territories: wizard/magic (intelligent but opaque), spiral/abstract (AI-coded but cold), generic profile silhouette (familiar but misleading about what the tool was). What we landed on was neutral without being clinical — visually focal, almost target-like, matching the Impact brand while feeling intentional rather than robotic. The icon wasn't decoration. It was a positioning decision about how users should relate to the AI: as a precision tool you control, not a magic box or a database query.

[Image: the avatar]

Two Experiences

Unauthenticated shows enough.
Authenticated earns the account.

The unauthenticated experience was intentionally open, good enough to demonstrate real value without a sign-up. The authenticated experience was meaningfully richer across three dimensions:

Mutual connections — candidates one or two degrees from you surface higher and are flagged visually, changing who appears at the top of results entirely

Search history — previous queries inform current results, so the system gets more useful the more you use it

Full network graph — ranking draws on your complete professional relationship map, not a generic database query


This gap was intentional product strategy. The unauthenticated version created desire; the authenticated version made the value of having an account visible the moment you signed in.
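The ranking gap between the two experiences can be illustrated as a score with additive signals: base relevance, plus boosts that only exist when the user is signed in. The weights and signal names below are assumptions for illustration, not Impact's actual model.

```python
from typing import Optional

# Illustrative sketch of authenticated ranking: base relevance plus a
# boost for network proximity (1st/2nd-degree connections) and a small
# boost when a candidate matched the user's past queries. Weights are
# made up for the sketch.

def rank_score(relevance: float,
               degrees_away: Optional[int] = None,
               matched_past_queries: int = 0) -> float:
    proximity_boost = {1: 0.3, 2: 0.15}.get(degrees_away, 0.0)
    history_boost = min(matched_past_queries, 5) * 0.02  # capped
    return relevance + proximity_boost + history_boost
```

Unauthenticated requests pass no graph or history signals, so the score collapses to pure relevance; signed in, a slightly less relevant first-degree connection can outrank a stranger, which is what reorders the top of the results.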


Shipping It

Manual testing, studio feedback,
and what happened next

How We Tested Reliability

The gap between "the model works in general" and "the model works for the specific queries real users type" is wide. We closed it manually — me, the PM, and the CX coordinator working through a Google Doc list of test queries together for hours before each release, checking whether results were consistent and accurate.
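The manual pass above could be sketched as a golden-query regression check: a list of known-good queries and the candidates each release must still surface. The queries, IDs, and `run_query` stand-in below are illustrative; the real process was a shared doc, not code.

```python
# Hypothetical golden-query check mirroring the manual release testing.
# run_query is a stand-in: the shipped system would hit the LLM and the
# Impact database here.

GOLDEN = [
    ("Producers in LA with four years of experience in horror",
     {"candidate_17", "candidate_42"}),
]

def run_query(query: str) -> set:
    # Stand-in for the real conversational search call.
    return {"candidate_17", "candidate_42", "candidate_99"}

def check_release() -> list:
    """Return the queries whose expected candidates went missing."""
    failures = []
    for query, must_include in GOLDEN:
        if not must_include <= run_query(query):
            failures.append(query)
    return failures
```

A release passes when `check_release()` comes back empty, which is the coded version of "results were consistent and accurate" from the doc-driven sessions.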

The ML engineer and I also worked together on the UI directly — iterating on how to surface the exit paths, filter logic, and reasoning states. Design and the model evolved together, not independently.

The Specificity Problem

"Producers in LA with four years of experience in horror" — accurate results. "Producers in LA with some experience in horror" — results appeared but weren't reliably relevant. "Some experience" gave the model no signal to score against. Discovering this in testing, not through user complaints, meant we could design around it before launch.
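The failure mode has a simple shape: "four years" parses to a number the ranker can score against, while "some experience" leaves nothing to score. A guardrail like the one below, which is hypothetical and not the shipped code, could catch vague quantifiers before the query reaches the model and prompt the user to be specific.

```python
import re
from typing import Optional

# Sketch of a pre-check for the specificity problem found in testing.
# Vocabulary and thresholds are illustrative assumptions.

WORD_NUMS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
VAGUE = ("some", "any", "a bit of", "a little")

def parse_experience(query: str) -> Optional[int]:
    """Extract a scorable years-of-experience number, if any."""
    m = re.search(r"\b(\d+|one|two|three|four|five)\s+years?\b",
                  query.lower())
    if not m:
        return None
    tok = m.group(1)
    return int(tok) if tok.isdigit() else WORD_NUMS[tok]

def needs_clarification(query: str) -> bool:
    """True when experience is mentioned vaguely but can't be scored."""
    ql = query.lower()
    return (parse_experience(query) is None
            and any(f"{v} experience" in ql for v in VAGUE))
```

Routing `needs_clarification` queries to a follow-up question instead of a weak result set is one way to design around the gap before launch rather than after user complaints.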

What It Achieved 

impactAI didn't become a public product — but it did something a pitch deck couldn't. It became the team's most effective sales tool, letting studio contacts experience Impact's network firsthand in under two minutes. The relationship graph it was built on continues powering search, profile cards, and onboarding across the core product today.