Articulate
An AI-powered interview practice app

Role
Product design, AI interaction design, and mobile UX
End-to-end ownership from concept through high-fidelity prototype
Team
Solo designer (AI-assisted prototyping)
Scope
- Product concept & experience strategy
- AI feedback model design (signals → insights → UI)
- Mobile-first interaction design
- Prototyping using AI-assisted tools (Replit Agent)
- Error handling, confidence signaling, and edge states
Timeline
2–3 weeks (part-time)

The Problem
Interview candidates struggle to practice effectively because feedback is vague, delayed, or unavailable — especially for spoken communication skills like clarity, pacing, and confidence.
Why existing solutions fall short
- Static prep lists don’t adapt to how someone actually speaks
- Generic feedback lacks specificity
- Users don’t see progress across attempts
Design Opportunity
Use AI to evaluate spoken responses in real time, translate signals into understandable insights, and support deliberate improvement through repetition.
Constraints
- Solo designer
- Short build timeline
- AI-powered logic needed to be explainable, not opaque
- Product needed to feel credible, not “experimental”
Role
- Product strategy
- UX / UI design
- AI interaction design
- Mobile-first prototyping
Why AI was the right solution
Spoken delivery generates rich signals (timing, pauses, phrasing)
Unlike written answers, spoken responses contain pacing, hesitation, and structural cues that are difficult to assess consistently without automation.
AI can translate those signals into feedback humans struggle to give consistently
AI enables repeatable analysis of speech patterns, reducing subjective bias while surfacing insights that would otherwise require expert coaching.
Iteration loops (attempt → insight → retry) benefit from automation
Automating feedback allows users to practice repeatedly without delay, reinforcing learning through fast, focused iteration.

AI's role in the product
Speech-to-text (transcript generation)
Transcripts provide a concrete reference for feedback, enabling users to see how their spoken response translates into language and structure.
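A minimal sketch of how transcript capture could work in a browser-based prototype, assuming the Web Speech API; the case study does not specify the actual transcription service, and `startTranscription` and its callback are illustrative names, not the shipped implementation.

```ts
// Illustrative transcription via the browser's Web Speech API.
// The prototype's real speech-to-text backend is not specified here.
const Recognition =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

export function startTranscription(onTranscript: (text: string) => void) {
  const recognizer = new Recognition();
  recognizer.continuous = true;      // keep listening across natural pauses
  recognizer.interimResults = false; // only emit finalized segments

  recognizer.onresult = (event: any) => {
    // Join finalized segments into a running transcript.
    const text = Array.from(event.results)
      .map((r: any) => r[0].transcript)
      .join(' ');
    onTranscript(text);
  };

  recognizer.start();
  return () => recognizer.stop(); // caller ends the session
}
```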
Pattern detection (filler words, pacing, transitions)
The system identifies recurring speech patterns—such as filler words or long pauses—to surface behaviors that impact clarity and confidence.
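A sketch of the kind of pattern detection described above, assuming a timestamped transcript; the `Word` shape, the filler list, and the 2-second pause threshold are assumptions for illustration, not the prototype's tuned values.

```ts
// Detect filler words and long pauses in a timestamped transcript.
interface Word { text: string; startMs: number; endMs: number; }

const FILLERS = new Set(['um', 'uh', 'like', 'so', 'basically']);
const LONG_PAUSE_MS = 2000; // assumed threshold for a "long pause"

export function detectPatterns(words: Word[]) {
  // Filler detection: single-token match against a known list.
  const fillers = words.filter(w => FILLERS.has(w.text.toLowerCase()));

  // Pause detection: gap between the end of one word and the start of the next.
  const longPauses: Array<{ afterIndex: number; durationMs: number }> = [];
  for (let i = 0; i < words.length - 1; i++) {
    const gap = words[i + 1].startMs - words[i].endMs;
    if (gap >= LONG_PAUSE_MS) longPauses.push({ afterIndex: i, durationMs: gap });
  }
  return { fillers, longPauses };
}
```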
Structured evaluation (STAR / PREP)
Responses are analyzed against selected frameworks, allowing feedback to align with common interview expectations rather than generic scoring.
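One way this evaluation could be approximated, sketched as a naive cue-matching heuristic; the cue phrases and coverage scoring below are illustrative assumptions, and a production version might delegate this judgment to a language model instead.

```ts
// Naive STAR-coverage heuristic: check whether each section of the
// framework is cued by characteristic phrases in the transcript.
const STAR_CUES: Record<string, string[]> = {
  situation: ['when i was', 'at my previous', 'the context was'],
  task:      ['my goal', 'i needed to', 'was responsible for'],
  action:    ['i decided', 'so i', 'i implemented'],
  result:    ['as a result', 'the outcome', 'we achieved'],
};

export function starCoverage(transcript: string): number {
  const text = transcript.toLowerCase();
  const covered = Object.values(STAR_CUES)
    .filter(cues => cues.some(cue => text.includes(cue))).length;
  return covered / Object.keys(STAR_CUES).length; // 0..1 coverage score
}
```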
Feedback confidence estimation
Each insight is paired with a confidence level, helping users understand when feedback is strongly supported versus when signals are less certain.
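A hedged sketch of how those confidence levels could be derived: the more independent evidence supports an insight, the higher its label. The evidence-count thresholds and the short-response guard are assumptions for illustration.

```ts
type Confidence = 'low' | 'medium' | 'high';

// Confidence grows with the amount of evidence behind an insight;
// very short responses give weak signals regardless of evidence.
export function confidenceFor(
  evidenceCount: number,
  transcriptWords: number,
): Confidence {
  if (transcriptWords < 30) return 'low';
  if (evidenceCount >= 4) return 'high';
  if (evidenceCount >= 2) return 'medium';
  return 'low';
}
```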

Designing the Practice Flow
Key UX Decisions
Minimal Cognitive Load During Recording
During recording, the interface is intentionally sparse to reduce performance anxiety and prevent users from multitasking while speaking.
One clear primary action at a time
Each screen exposes a single dominant action, helping users stay focused and reducing hesitation about what to do next.
Reassuring, neutral language (“This is practice mode”)
Language was carefully chosen to remove judgment and reinforce that the experience is low-stakes and exploratory.

Practice Screen Highlights
Large, single recording CTA
The primary recording control is visually dominant, ensuring users can begin speaking without scanning the interface.
Live timer for pacing awareness
A visible timer provides gentle feedback on response length without interrupting delivery or enforcing hard limits.
Clear state transitions (recording, paused, finished)
Distinct recording states confirm system behavior and give users confidence that their input is being captured correctly.
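These states can be modeled as a small finite-state machine so that illegal transitions are unrepresentable; the `idle` state and the transition table below are assumptions layered onto the three states named above.

```ts
type RecordingState = 'idle' | 'recording' | 'paused' | 'finished';

// Allowed transitions from each state.
const transitions: Record<RecordingState, RecordingState[]> = {
  idle:      ['recording'],
  recording: ['paused', 'finished'],
  paused:    ['recording', 'finished'],
  finished:  [], // terminal; a retry starts a fresh session
};

export function next(from: RecordingState, to: RecordingState): RecordingState {
  if (!transitions[from].includes(to)) {
    throw new Error(`Illegal transition: ${from} -> ${to}`);
  }
  return to;
}
```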
Translating AI Signals into Understandable Feedback
AI Signal → Interpretation → UI Representation
- Pause duration → Pacing hesitation → “Long pause” transcript highlight
- Word frequency → Filler overuse → Highlighted transcript tokens
- Structure detection → Response clarity → STAR / structure score
- Attempt comparison → Improvement trend → History and progress indicator
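The same mapping, sketched as a TypeScript discriminated union so each signal carries exactly the data its UI representation needs; all field names are illustrative, not the prototype's actual schema.

```ts
// Each AI signal maps to one UI representation with its own payload.
type Insight =
  | { kind: 'longPause'; atMs: number; durationMs: number }          // transcript highlight
  | { kind: 'fillerOveruse'; tokens: string[]; count: number }       // highlighted tokens
  | { kind: 'structure'; framework: 'STAR' | 'PREP'; score: number } // structure score
  | { kind: 'trend'; direction: 'improving' | 'steady'; attempts: number }; // progress indicator
```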
Designing the Results Screen
Turning AI analysis into clear, actionable insight
Design Principles
Neutral tone (no red/green shaming)
Feedback avoids pass/fail language or color semantics to prevent judgment and reduce anxiety, especially in high-stakes practice scenarios.
Explain why feedback was given
Each insight is tied to observable behavior (e.g., pauses or repetition) so users understand the reasoning behind the feedback.
Show confidence level of insights
Confidence indicators communicate how strongly the system supports each insight, helping users calibrate trust in the feedback.
Key UI Patterns
Circular score indicator (neutral color)
A non-judgmental score provides orientation without dominating the experience or implying success or failure.
Expandable performance breakdown
Detailed feedback is progressively disclosed, allowing users to explore insights without overwhelming the initial view.
Transcript highlights tied to feedback
Highlighted excerpts connect abstract feedback directly to moments in the response, grounding analysis in evidence.
“Insight confidence” label
Confidence labels reinforce transparency and acknowledge uncertainty in automated analysis.
AI feedback is framed as guidance, not judgment.

Designing the Retry Loop
Turning feedback into focused, repeatable improvement
- Side-by-side comparison (Attempt 1 vs. Attempt 2)
- One primary improvement focus (sketched below)
- Before → after guidance
- Clear CTA to record again
This design encourages iteration without overwhelming users, reinforcing progress through small, intentional changes.
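A sketch of how the single improvement focus could be chosen from two attempts; the `AttemptMetrics` shape and the "surface the weakest delta" heuristic are assumptions for illustration.

```ts
interface AttemptMetrics {
  fillerCount: number;
  longPauses: number;
  structureScore: number; // 0..1
}

// Pick the metric that improved least (or regressed most) as the one
// primary focus for the next attempt.
export function primaryFocus(prev: AttemptMetrics, next: AttemptMetrics): string {
  const deltas: Array<[string, number]> = [
    ['filler words', prev.fillerCount - next.fillerCount],
    ['long pauses', prev.longPauses - next.longPauses],
    ['structure', next.structureScore - prev.structureScore],
  ];
  deltas.sort((a, b) => a[1] - b[1]); // smallest improvement first
  return deltas[0][0];
}
```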

Designing for Progress Over Time
Supporting long-term improvement without gamification
Progress Design Goals
- Make improvement visible over time
- Reinforce habit formation
- Avoid gamification pressure
Key Features
- Session history grouped by date
- Attempt counts per question
- Trend indicators (improving / steady; sketched below)
- Delivery confidence levels
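A minimal sketch of the trend computation behind those indicators, assuming per-attempt scores normalized to 0–1; the split-half comparison and the 0.05 threshold are illustrative assumptions.

```ts
// Compare the mean score of recent attempts against earlier ones.
export function trend(scores: number[]): 'improving' | 'steady' {
  if (scores.length < 4) return 'steady'; // too little data to call a trend
  const mid = Math.floor(scores.length / 2);
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  return mean(scores.slice(mid)) - mean(scores.slice(0, mid)) > 0.05
    ? 'improving'
    : 'steady';
}
```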
AI-assisted workflow
Focused design effort on decision-making, not boilerplate
By offloading repetitive implementation tasks, more time was spent refining interaction quality, feedback clarity, and AI UX patterns.
Iterated UI and logic rapidly using Replit Agent
Replit Agent enabled fast iteration on interaction flows, error states, and edge cases, allowing design decisions to be tested immediately.
Used AI tools to scaffold navigation and state management
AI assistance was used to quickly establish routing, screen structure, and basic state handling, reducing setup time.
Outcome
AI accelerated execution, allowing more time for UX refinement, edge cases, and interaction quality.
This approach enabled end-to-end ownership while maintaining a high bar for product thinking and user experience.
What I Would Explore Next
Extending the system beyond the initial prototype
- Real-time coaching mode (optional)
- Personalized benchmarks by role
- Multilingual analysis
- Exportable progress reports
Final Takeaway
Designing human-centered AI products with speed and intent
This Project Demonstrates
- Ability to design AI-powered products responsibly
- Strong mobile UX fundamentals
- Clear thinking about AI explainability and confidence signaling
- Speed from concept to interactive prototype using AI-assisted tools
- End-to-end ownership across product strategy, UX, and execution
Overall, this work reflects a product-first approach to AI—using automation to support learning, transparency, and thoughtful interaction design rather than novelty.