Andela - Centralized Feedback

Currently at Andela, all Talent feedback is spread across different screens and even different platforms. This makes vetting a talent more difficult for a matcher, forcing constant context switching and delaying their Time to Match.

Why is this important?

Centralizing feedback in a single, accessible location is essential for informed decision-making, improving efficiency, and enhancing the matcher experience. It minimizes context-switching, reduces client rejections, and empowers matchers to identify and select the right talent more effectively.

What problems are we trying to solve?

  • Fragmented & Hard-to-Find Feedback

  • Inefficiencies in the Closed & Active Pools

  • Decision-Making Barriers for Matchers

  • Context Switching & Workflow Disruptions

  • Inconsistent Feedback Storage & Prioritization

What are the high-level project goals?

↓ Reduce Time to Shortlist & Interview Talent

↓ Reduce Time to Match

After scoping the problem, I established bi-weekly check-ins with key stakeholders (PM, PM Lead, and Product Design Director) to share initiative progress and design proposals. I conducted four interviews with matchers to understand their feedback-visualization process and identify pain points and needs. The initiative took three weeks to complete before handoff to the Engineering team.

Research

Context

Most Matchers are engineers who transitioned into sales roles. They're highly technical individuals who prefer having comprehensive information readily available. They constantly switch contexts while managing up to 8 jobs simultaneously, typically working with large monitors and multiple open tabs.

To learn more about Talent feedback, we interviewed four IC Matchers who specialize in Enterprise businesses.

💡 Fun fact about our users: Each Matcher has developed their own unique workflow. When interviewing them, it's crucial to consider some key factors:


  • How long have they been working at Andela? Longer-tenured Matchers may be more reluctant to adopt new features and carry a negative bias toward change

  • What is their matching specialty? (niche jobs, design, enterprise, SMBs) Different specialties often require different internal processes and platform needs

  • What is their role? (Match Lead, IC) Depending on their role, they may prioritize different features—for example, Match Leads tend to focus more on reporting capabilities

Questions

Our research focused on understanding how and when feedback was being used, as well as which types of feedback were most relevant. We structured our interviews around these key questions:

  • To understand the importance of feedback timing versus type, we asked:

    • Do you find it more important to discover critical feedback or the most recent feedback when evaluating talent? Why?

  • To determine optimal feedback placement, we asked:

    • At what stage of the matching process do you look for feedback on a talent?

  • To establish an information hierarchy, we asked:

    • When reviewing [feedback type], how would you rank the following data points in order of importance: [data points]?

    • When talents have multiple types of feedback (e.g., Interview, Engagement, Rejection, Matcher Notes), which is more critical for making your decision?

Insights

Here are some of the key insights we gathered from our research:

  • One of the most relevant things to know about Talent is if they’re shortlisted for another job

  • Context is key when looking at feedback from past jobs:

    • What was the role of the job?

    • What were the skills considered?

  • Feedback is most valuable when it’s recent

  • Feedback ranked from most important to least:

    • Client feedback

    • Engagement feedback

    • Client Interviews

    • Matcher feedback

Solution

Scope

Before working on this initiative, the product team met to define what we wanted to address immediately and what changes we would make in the long term. We decided to focus on two key Jobs to be done:

  • Users should be able to check if talent is shortlisted for another job

  • Matchers (main users) should be able to view relevant profile feedback to make informed decisions about adding talent to the pool or rejecting them

Design Process

My design process for this project and most initiatives at Andela followed these steps:

  • Aligned with PMs (Match PM and Lead PM) to define the problem and scope

  • Interviewed multiple internal users to identify pain points, needs, and opportunities for improvement

  • Created multiple proposals, listed pros and cons, and reviewed them bi-weekly with stakeholders

  • Once confident about a solution, created a prototype and shared it with users to gather final feedback

  • Conducted a detailed handoff session with engineers

  • Once in staging, encouraged our users to test and provide feedback (this was straightforward since we have internal users)

  • Post-launch, monitored and quickly addressed any issues that arose

User Flows

Since this initiative involves straightforward user interactions (as shown below), creating detailed user flows would not add significant value.

Design Proposal

For the final design proposal, I focused on incorporating key insights from our user interviews:

  • Implemented a timeline UI to display feedback chronologically, since recent feedback was identified as the most valuable

  • Added visual indicators to highlight different feedback types, with Client Rejections given the highest priority, followed by Interviews, Matcher Notes, and Engagement Feedback. This prioritization also guided the organization of filters

  • Created a consistent UI across all feedback types to help matchers recognize patterns and optimize their workflow. For example, links to specific feedback instances are always placed below the client's name

  • Clearly distinguished between internal and external feedback

  • Included Required Skills for each job where available, reflecting our key learning that "context is key when it comes to feedback"
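The ordering logic behind the timeline and its filters can be sketched roughly as follows. This is a minimal illustration with hypothetical type names and fields, not the production implementation:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical feedback types, ranked per the proposal above:
# Client Rejections first, then Interviews, Matcher Notes, Engagement.
FILTER_PRIORITY = ["rejection", "interview", "matcher_note", "engagement"]

@dataclass
class Feedback:
    kind: str          # one of FILTER_PRIORITY
    created: date      # when the feedback was left
    is_internal: bool  # internal (Matcher) vs. external (client) feedback

def timeline(items: list[Feedback]) -> list[Feedback]:
    """The timeline displays feedback chronologically, most recent first."""
    return sorted(items, key=lambda f: f.created, reverse=True)

def filter_options(items: list[Feedback]) -> list[str]:
    """Filters are ordered by type priority, listing only types present."""
    present = {f.kind for f in items}
    return [k for k in FILTER_PRIORITY if k in present]
```

Note that recency drives the timeline order while type priority drives only the visual indicators and filter ordering, matching the two insights from research.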

For the styleguide, I used our existing design system, which we maintain and scale across all designers on the team.

Prototype

Handoff

For design handoff, we share the Figma file with engineers before the meeting and encourage questions about use cases, edge cases, and any behaviors not covered in the file. When preparing the Figma file for handoff, I cover these key areas:

  • Why is this initiative important?

  • What problems are we trying to solve?

  • What are the high-level project goals?

  • Summary of our research & design process

  • E2E flow

  • Empty screens, error screens, and unexpected behaviors

  • Small resolutions

  • Component variables & behaviors

  • Scroll behavior

  • Changes across different instances

It's vital that engineers feel involved in the projects and commit to raising the quality bar. We encourage engineers to challenge the designs—this helps us improve and aim for excellence.

Testing in Staging

Working with internal users gives us the advantage of testing in a staging environment with minimal effort. This allows us to validate that we're delivering real value while keeping the feedback loop as short as possible. While this is an ideal approach, it isn’t always feasible due to time constraints.

Measuring success

Since launching this feature at Andela a couple of weeks ago, we’re tracking several key metrics to gauge its effectiveness:

  • Time to Shortlist & Interview Talent: We aim to cut this time from 3 days down to 1 day.

  • Overall Time to Match: Our goal is to reduce the overall matching time from 14 days to 12 days.

  • User Feedback: We're monitoring for positive responses specific to the new feature.

  • SUS Score: Our target is to boost our System Usability Scale score from 68 to 72 within this quarter.

These metrics will help us ensure that our improvements are delivering real value and enhancing efficiency for our users.
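For context on the SUS target above: the System Usability Scale is scored out of 100 (it is not a percentage), computed from ten questionnaire responses on a 1–5 scale. A minimal sketch of the standard scoring formula:

```python
def sus_score(responses: list[int]) -> float:
    """Standard System Usability Scale scoring.

    `responses` holds the ten questionnaire answers in order, each 1-5.
    Odd-numbered items are positively worded and contribute (score - 1);
    even-numbered items are negatively worded and contribute (5 - score).
    The sum is scaled by 2.5 to yield a 0-100 score.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

A score of 68 sits right at the commonly cited SUS average, which is why it is a natural baseline to improve on.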

Looking Back

  • One of the biggest challenges was establishing a clear information hierarchy and standardizing the look and feel across all feedback. Engaging users early in the process helped us determine which data points were essential and which could be revealed progressively.

  • One of the biggest lessons I've learned from my design experience is that I'm always humbled when speaking with users. We often assume we know what’s best, but user research consistently challenges those assumptions and reveals the true needs of our audience. This insight applies not only to design but to life in general.

  • For future iterations, I proposed an advanced search feature to filter feedback by type, status, sentiment (negative or positive), and even similar roles. This option remains on the table if we discover that users struggle to find specific feedback.

Made with ♥ by Ivan • Built in Framer
