Ginger: A voice-led AI app

AI · Product · SaaS

Designing AI interview experiences for first-round recruitment.

Ginger is an AI-driven voice recruiter used by companies to screen candidates through natural, human-like conversations. I joined as the founding designer to lead the end-to-end product design from research to UI, prototyping and conversational design.

Timeline: 6 months

Role: Founding Designer — LLMs, Product Design, Animation, SaaS

Tools: Figma, Cursor



THE PRIMARY CHALLENGE

First-round phone screens are one of the biggest bottlenecks for recruiters. They’re repetitive, hard to scale, and highly inconsistent from one interviewer to another. Meanwhile, candidates often feel anxious, rushed, or judged in the first five seconds.

PRODUCT PLACEMENT

  • No more back-and-forth: Reach out to candidates, invite them to a voice screen, and follow up.

  • Higher engagement: Allow candidates to take assessments at any time of the day.

  • Instant insights: Get interview summaries based on role-specific probes.

  • Scale effortlessly: Screen hundreds of candidates daily without adding recruiters.

DESIGN SOLUTION (MY BRIEF)

I designed a voice-led interview experience that blended an LLM with ElevenLabs to create natural, human-like conversations. My focus was on crafting an interface and flow that helped candidates feel comfortable, understood and at ease while speaking to an AI.

We conducted research with 22 recruiters and hiring managers across Europe and North America. Here's what we learned.

RECRUITERS CAN'T SCALE MEANINGFUL CONVERSATIONS

They spent 60%+ of their time on repetitive early screening calls but felt pressured to make decisions based on thin information.

VOICE AI CAN FEEL UNCANNY OR UNCOMFORTABLE IF POORLY DESIGNED

When tone, pacing, or conversational flow doesn’t align with human expectations, users experience friction rather than support.

EARLY INTERVIEWS FEEL UNNATURAL AND CHALLENGING

Candidates said they often over-prepare, freeze, or feel judged. Traditional ATS tools feel cold or robotic, causing drop-offs or shallow answers.

“I spent 60%+ of my time on repetitive early screening calls but felt pressured to make decisions based on thin information.”

“I don’t like note-taking while listening to candidates speak, and I often forget key points if I wait to take them down later.”

“We handle a large volume of applicants, which makes it difficult to screen.”

THESE INSIGHTS NARROWED OUR SCOPE

Design a voice experience that feels like talking to a thoughtful, patient human, not an algorithm.

MY FOCUS AREA

THE VOICE INTERVIEW EXPERIENCE

I intentionally scoped down to ONE product surface:
The real-time voice interview flow.

This required orchestrating three layers simultaneously:

Human-centered conversational UX

Goal: Make the AI feel calm, safe and natural to talk to.

LLM reasoning + adaptive questioning

Goal: Make the AI ask smart, relevant, non-scripted questions

Natural voice output via Eleven Labs

Goal: Make the AI sound human without being uncanny.
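The three layers above can be sketched as a single conversational loop. This is a minimal, illustrative sketch — names like `nextQuestion` and `synthesize` are hypothetical, not Ginger's real API, and the LLM and ElevenLabs calls are stubbed with placeholders.

```typescript
type Turn = { speaker: "candidate" | "ai"; text: string };

// Layer 2 (LLM reasoning), stubbed with a simple adaptive rule:
// short answers get a gentle follow-up, fuller answers move on.
function nextQuestion(history: Turn[]): string {
  const last = history[history.length - 1];
  if (last && last.speaker === "candidate" && last.text.split(/\s+/).length < 8) {
    return "Could you tell me a bit more about that?";
  }
  return "Great. What drew you to this role?";
}

// Layer 3 (voice output): in production this would stream text to a
// TTS provider such as ElevenLabs; here it just passes the text through.
function synthesize(text: string): string {
  return text;
}

// Layer 1 (conversational UX): one calm, unhurried turn of the interview.
function takeTurn(history: Turn[], candidateUtterance: string): Turn[] {
  const updated: Turn[] = [...history, { speaker: "candidate", text: candidateUtterance }];
  const question = nextQuestion(updated);
  return [...updated, { speaker: "ai", text: synthesize(question) }];
}
```

The adaptive rule here stands in for the real prompt-driven reasoning; the point is the shape of the loop, where each layer can be swapped independently.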


VOICE AI - UX ENHANCEMENTS

Implemented: real-time listening cues such as animated waveforms, avatars, and dynamic “listening…” indicators to show attentiveness.
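One small piece of those listening cues — mapping recent microphone amplitude samples to bar heights for an animated waveform — can be sketched as below. The function name and bucketing approach are illustrative assumptions, not the production implementation.

```typescript
// Map amplitude samples (0–1) to integer bar heights for a waveform
// indicator. Averaging each bucket smooths the motion so the cue reads
// as calm attentiveness rather than jitter.
function waveformBars(samples: number[], bars: number, maxHeight: number): number[] {
  const bucketSize = Math.ceil(samples.length / bars);
  const heights: number[] = [];
  for (let i = 0; i < samples.length; i += bucketSize) {
    const bucket = samples.slice(i, i + bucketSize);
    const avg = bucket.reduce((sum, s) => sum + s, 0) / bucket.length;
    heights.push(Math.round(avg * maxHeight));
  }
  return heights;
}
```

In a browser, the samples would come from a Web Audio `AnalyserNode`; the rendering layer only ever sees the smoothed bar heights.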

Suggested that developers add: backchannel feedback like “uh-huh,” nods, and subtle visual signals to mimic natural human listening.

Introduced: progressive response display (as transcripts) so users could see AI responses unfolding as they were generated.
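The progressive display boils down to accumulating streamed chunks into partial transcripts the UI re-renders on each update. A minimal sketch, assuming chunks arrive in order (the function name is hypothetical):

```typescript
// Build the sequence of partial transcripts shown to the candidate as
// LLM output streams in, so the response appears to unfold live.
function progressiveTranscript(chunks: string[]): string[] {
  const frames: string[] = [];
  let shown = "";
  for (const chunk of chunks) {
    shown += chunk;    // append the newly streamed chunk
    frames.push(shown); // each frame is what the UI renders at that moment
  }
  return frames;
}
```

For example, chunks `["Tell ", "me ", "more."]` produce the frames `"Tell "`, `"Tell me "`, `"Tell me more."` — the candidate never stares at a blank screen while the full response is generated.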

Designed: interruption-friendly controls allowing users to speak, pause, or redirect the AI mid-response.

WHITE-LABEL PRODUCTS CUSTOMISED FOR OUR CLIENTS