This post sits within our series on Growth & Development.
While we have a centralized recruiting operation that ensures the trains are running on time, teams here at Clipboard generally own what to assess for, how to assess, and who assesses.
Internally, an invitation to help interview is considered recognition. We train new interviewers using several tactics. First, you’re paired with a buddy who walks you through the basics of case review (read here to learn about our case-based process) and of using our applicant tracking system. Then, you’ll work with your buddy on a calibration exercise where you grade ~10 or more cases and write feedback on each. Your buddy will then countergrade each of them and provide their own written feedback on your reasoning. Lastly, we curate a list of example recorded interviews that we ask you to watch (each curated interview has written commentary on what to look out for).
When team members take their first few interviews, we provide written feedback on those interviews, along with a grade. Interview recordings are our game tape. We strive to ensure new interviewers can assess effectively and gather sufficient signal from the live call. While it’s not formally tracked, those who interview regularly should receive at least one piece of written interview feedback every two to three months (to give you a sense of volume).
See below for real examples of interview feedback. The timestamps refer to specific points in the interview, making it easier for the recipient to scrutinize their own game tape. We’re sharing these knowing full well they look like gibberish if you don’t have the necessary context. Having said that, we hope sharing them (combined with the other posts in this series) convinces readers that our talk on feedback is not just lip service. Feedback like this is only one of the many types of feedback incoming team members should expect.
Feedback #1 – an interview reviewing our Toledo case
Excited to have you actively involved in the S&O Loop. I’m looking forward to working with you on interviewing and will emphasize that a lot of interview improvement comes through feedback and repetition.
The primary things I’ll look for you to do on future Toledo interviews are to 1) dig deeper into their reasoning, and 2) get into their ability to execute by having them talk through what actions they would take, how they would measure success, and how they would iterate.
I also want to thank you for pushing me to give you feedback on an interview. I’m excited to watch more in the future; if you have interviews where you’re stuck on how to iterate, or ones you think went well, let me know and I’ll give them a watch!
Interview Feedback:
Prior to watching the interview, I read [candidate’s name]’s case to get an idea of how he approached the problem and to build an “at least as good as” mental model of how I’d tackle his case interview. It’s a unique case; I’d want to press on why rider bins matter and how he’d test things in the real world. Both of these should provide some surface area. I don’t think rider bins should matter (what if it’s 50-50 between 0 and 2 rides and those cohorts alternate every month, for example? That would be the extreme; how does that change the outcome in pricing?). He makes a big assumption about the relationship between match and take; how can he prove or disprove it?
Interview Notes:
1:47 - You did well with the small talk / lead-in to the actual interview. Minor thing, but you should expect that you’ll be the first person they speak with in the S&O Loop (which will be the case pretty much every time).
Backgrounds: You spent 4-5 minutes here; I think you can skip over this for the most part. There’s nothing too enlightening I’d expect to get from a candidate, and we really don’t care much about background. Early in the interview, I’d optimize for the specific places we can dive into with the time we have; since we’re more focused on the case, I’d start there.
8:56 - Good question about the bins; it’s the same one I would have asked.
13:38 - I’m glad you’re still digging on this; good job. You’re not letting him off the hook.
14:52 - You did a good job of walking him through this, but I’d want to press on how you want to use this to evaluate him. If you were looking for him to grab hold of what you said and propose something else, I think he missed here. For a candidate we’re still evaluating, I’d caution against leading them too much unless you think it will help reset the conversation or put you on a different path.
18:44 - I’d steer harder into the case here instead of reorienting around Clipboard; I want to dive as deep as we can into how he’s thinking about this problem. I think asking him why bins matter at all could work: what if every rider is simply requesting one ride per month? Why doesn’t that work for his model?
22:35 - I like the line of questioning the take / match relationship takes you down. One thing I’d press on is getting them to talk about what they’d do in reality first, which should lead them here. If you ask “you can modify take however you want, with a couple clicks, for whoever you want; what do you actually do?”, they should then state that they made this assumption and talk about how they’d validate it. If they don’t realize they made this large assumption, it’s an issue (“we should make the price $21.50 because the model says so, work is done” is a bad answer).
25:33 - I’m not sure what a good answer would have looked like for this question (I think maybe “that’s exactly what we should be doing”, but he’s not there).
Overall, I don’t think we got that deep into how [candidate] thinks about solving this problem. I’d advocate for a couple of things here in the future: 1) bring the case into the real world, then walk through each step they’d take: designing experiments to validate assumptions, rolling out to users, measuring success metrics, and iterating (rinse, repeat). While you do 1, you should also do 2) take their points to their logical extremes / pressure-test them. For example, “why does binning matter at all?” would be an interesting question for [candidate]. You can also throw assumptions he makes out the window: let’s say you move to the price point you proposed and the match rate is 62%; it barely moved. Where do we look? What do we do next?
The main thing I want to avoid with Toledos is the lack of a dispositive read. I am uncertain whether [candidate]’s WBD will be strong or weak, and I want you to walk out of a Toledo thinking “his WBD is going to be great” or “he’s not a fit, let’s reject now” and save both of us time going forward.
…
Feedback #2 – an interview reviewing our Toledo case
Had a chance to watch your recent interview.
Overall
I thought you did well and would peg it at 6/10, which means I think you did well enough to get the signal you needed to make a decision. This really was a tale of two interviews: your first half needs work, but your case interview was great. You’re clearly in your comfort zone probing case details and pushing for specificity, which I loved. On the behavioral side, you seem a bit hesitant to demand a similar level of fidelity. You shouldn’t be!
Running notes:
7:51 --> you let him off the hook
Benchmark for excellence --> you can close your eyes and envision the literal data visualizations that he's referencing. "see how much they're braking" doesn't do it for me. Same goes for "using machine learning"
11:29 --> wouldn’t hesitate to say “Oh interesting, tell me more about what your customer conversations taught you about how a good driver should behave. What questions did you ask? What decisions were made differently because of those conversations?”
[edit -- your Q handled this well, I jumped to a conclusion]
~13:00 --> agree with your assessment that this was a good answer, but you should deblur --> what systems specifically?
Also, how was this tied to a customer insight? Sounds like it was from him riding the bus (which isn't bad, but imo doesn't answer your question)
~13:50 --> you’re filling in the gaps instead of deblurring :)
~14:45 --> fuzzy answer from him, not sure what this means
~20:00 --> going back, I don’t think he answered your original “insight --> data --> build” question at all
~29:00 --> good stuff digging into the code and deblurring his code/assumptions (you did this well in the Excel too)
But… doesn’t match rate go from 93% --> 100% as you go from $3 take to $2?
Would press him on why that makes sense
~31:00 --> don’t know if I love the “throw something in a python library and see what happens” approach…
~32:50 --> love this question, but you create risk in telling him what you’re looking for re: rollout
35:38 --> good questions -- just a tip, I like to probe on selection into control/treatment
36:17 --> :chefskiss: exactly the right question
~40:45 --> "the graph can have many shapes" hmm
graph doesn't seem to be answering your question re: guidance for his specific experiment
~44:30 --> great work being persistent
~47:00 --> again, great work pushing
We look forward to sharing the other types of feedback candidates should expect if they join.
I have no idea how teams reliably improve their interview skills without recordings.