Alooba Objective Hiring

By Alooba

Episode 41
Disha Gupta on Leveraging Automation and Authenticity in AI-Driven Hiring

Published on 12/14/2024
Host
Tim Freestone
Guest
Disha Gupta

In this episode of the Alooba Objective Hiring podcast, Tim interviews Disha Gupta, Director of AI at a mission-driven edtech company.

In this episode of Alooba’s Objective Hiring Show, Tim interviews Disha and delves into the impact of AI and automation tools in the hiring process. They discuss practical experiences with tools, highlighting the importance of automation for managing large applicant pools. Disha shares insights on their approach to incorporating AI in hiring, such as permitting candidates to use ChatGPT during interviews while stressing the importance of honesty. The conversation also covers the challenges of bias in AI, strategies for equitable hiring, and the evolution of their hiring methodology, including the significance of job descriptions, mission alignment, and the use of job scorecards. They debate the balance between gut feeling and data-driven approaches, and the nuances of equitable representation and qualitative assessment in the hiring landscape.

Transcript

TIM: Welcome to the Objective Hiring Show. It's great to have you with us today.

DISHA: Thank you for having me, Tim.

TIM: Disha, I'd love to start by discussing everyone's favorite topic at the moment: AI. It's the buzzword of buzzwords, but maybe it's also justified, because it's making waves through, it feels like, every bit of society at the moment. Where I'd love to drill down is AI in hiring in particular. From your experience in hiring, have you started to dabble with any AI hiring tools, or even some automation tools, whether or not they involve AI? I'd love to get your experiences and thoughts there.

DISHA: Yeah, so we have not necessarily dabbled in very AI-driven hiring tools, not at all LLMs or text-based tools, but a lot of automation. We use this tool called Breezy. It really helps us organize our candidates, the whole pipeline when they come in, and the different stages that they are at in the hiring process. It also helps us send emails that are not super personalized but also not generic: if this is a rejection due to lack of mission alignment, or a rejection due to lack of technical skills, or just bad timing, whatever it is, there is a way to organize them and send the same email to, let's say, 50 candidates. So it is a good automation tool to help us navigate and manage, I don't know, the thousands of candidates that apply. We also use CoderPad for the technical interviews, which is also a really helpful tool to be able to grade candidates and make sure the tests are passing, et cetera. So we use a lot of automation tools. The only use of LLMs and AI has been to draft those emails and get the right language, et cetera, but that's more manual: just using that language, not directly filtering candidates with it.

TIM: That's probably a helpful reminder for people listening who maybe haven't got any automation tools in the hiring process yet. It doesn't necessarily have to be AI to help you; there's a lot of software out there to automate different steps of the hiring process. You've mentioned skills testing, but also writing those job descriptions and then the organization of those candidates. So if you're in, like, an early-stage startup and you don't have any HR tech at all, I could imagine hiring would be very difficult without some of these tools in place that you've got at the moment.

DISHA: Yeah, and we have also been integrating AI into hiring in a way that is not disregarding its existence. We have started to allow candidates to search things when they want while they're interviewing, as long as they're honest and upfront about it. We allow for that because it kind of replicates how they would be operating in the job anyway. So we're acknowledging its existence in hiring that way, without necessarily doing it too hardcore.

TIM: So that's an interesting philosophy, and yeah, I feel like that's where most people are going to get to, because we can't put our heads in the sand and pretend that candidates aren't using these tools anyway. So I'd be interested to hear more about how that's gone: candidates even in an interview would be maybe leveraging ChatGPT, and how do you chat with them about that?

DISHA: Yeah, so it's basically making sure that the questions that we ask and the points that we hone in on during the interview process are not something that ChatGPT can necessarily get right. They have to put in their own thought; they have to make their calls and judgments as to why they are making a decision. So even if they use the code provided by ChatGPT, asking them why we are doing this, et cetera, gets at the thought process behind it and not just whether something is working or not.

TIM: And is it the case then that the candidates are encouraged to use a large language model during the interview, or is it up to them? Practically speaking, what are your actual expectations around their usage of those tools?

DISHA: It's up to them. We basically say, you're welcome to use it; just do be upfront about it. So if they are using it, they let us know: Can I do a quick search on ChatGPT? Et cetera. And that's okay.

TIM: Yeah, and as you say, I think that's sensible because it replicates what they do in the job anyway. Why would you want them to not use these groundbreaking technologies? They're going to make them much more productive.

DISHA: Yeah, it really impacts efficiency, and we don't want them to necessarily be judged on that anymore, given that they can leverage that help.

TIM: And you mentioned designing the interview questions almost in a way that ChatGPT can't easily answer them, and so is that because it involves the candidate providing real examples of what they've actually achieved, like how did you come up with those questions that weren't, let's say, easily answerable by ChatGPT?

DISHA: It's more about caring for the why behind the solution than the solution itself. For example, you must have heard of prompt engineering. Prompt engineering is so important for taking care of the edge cases, for getting the right solution, for really having those nuances in place. So the question in itself is not so generic as a HackerRank question or anything like that; it's more tied to our context. We'll take our data into context, our problems into context, and frame a question around that, so while they can use the help of it, they cannot necessarily solve the problem using ChatGPT, because it's all very context-specific.

TIM: I wonder then, like I've seen some interview techniques, especially for consulting data roles, that almost like deliberately initially give a vague requirement to the candidate, just like a client might, where they haven't filled in all the details and haven't given all the context, and then it's up to the candidate to almost replicate what they do in real life with a client, where they ask more questions and probe and use their experience. I wonder if that kind of approach would work well in an era of ChatGPT because then it forces the candidate to know what questions to ask, which is maybe more important than anything else.

DISHA: Yeah, exactly. It's very similar to that. Like, it's not about the solution; it's more about the thought process, and you can get to the thought process even if you're using ChatGPT.

TIM: And for candidates, I'm just thinking now: if I were a candidate in an interview and I didn't tell the interviewer I was using ChatGPT, personally, I feel like I would find it extraordinarily difficult to listen to the interviewer, have an engaging conversation, look at a ChatGPT output, interpret it, put it into my own real words, and do all that at the same time. I feel like that would be incredibly nerve-wracking, so candidates are probably better off either not using it or using it very transparently in the way that you encourage.

DISHA: That's why we got to this approach, because we had started to notice that someone would be looking at a completely different screen during the interview and not making eye contact and things, and in the kind of questionnaires that we would ask as a part of the application, they would write such convoluted, complex language with 10 ands in one sentence that it would be very evident they'd used the help of an LLM. And then it's just, let's be real: basically, what we were trying to get to is, do what you have to do and just be honest about it.

TIM: Is there any limitation on where candidates, in your view, should not be using a large language model in the hiring process? I'll give you one particular example. We were recently hiring salespeople, and in our initial application process, one of the questions we asked them was: Imagine it's your first day at Alooba. What are the three most important things you would need from us to be successful in your role, or to give you the best chance of success? It's a question designed very personally for those people, so that if they were hired, I knew exactly where to focus their onboarding. But it also allowed me to create a really nice onboarding document, because I had the opinions of 50 different candidates on what they wanted day one to look like. So it's a data-driven way of creating an onboarding document, which I found very helpful. But the issue was that at least half of all the candidates had clearly used ChatGPT to answer even that question, which I found at the time quite frustrating, because I don't care what a large language model thinks about this. This is someone I'm going to have a business relationship with; I care about what you think personally. And so that, for me, was a bit of a bridge too far. Is there any bit of the hiring process from your end where you think, I would really rather not have candidates use ChatGPT?

DISHA: Yeah, you hit the nail on the head there. Anything that is a personal question. For example, I was telling you about how, for us, mission alignment is very critical, and to get a sense of mission alignment even before starting interviews, we had candidates fill out a questionnaire along with other things. One of the questions was related to how you relate to Sown to Grow's mission, and with those responses, it's not that hard to tell if it's a self-written response or an LLM-generated response. If we could determine it was LLM-generated, we would just not interview those candidates. In personal questions, it's non-negotiable to be authentic; the reason we are allowing for this is so that you can be authentic. For example, even if you're allowing for it and someone is using it and for whatever reason they're not being honest about it, and we could tell one way or the other, then it's a red flag, because it's a value that we're trying to screen for.

TIM: Yeah, there's a trust element there, an authenticity element. I've had a similar experience myself with someone I know who, as part of an application process, had basically said to me: Oh, look, I put in some extra work. I've even got started on building a sales strategy for the business, and I spent some of the weekend putting together the sales strategy. They were pitching it as if they'd put in a lot of work and that was the value, the effort they put in, which would be admirable if they'd gone out of their way to spontaneously create this document; that would be a fairly impressive amount of extra effort to go through. But they'd just whacked it into ChatGPT and exported its answer, so the effort wasn't really there, and they weren't really being genuine, which I found distasteful, personally.

DISHA: Exactly, it's such a big red flag.

TIM: Back to the use of AI in hiring. I feel like we've seen probably more of it used so far on the candidate side, as you've mentioned: using it in an interview, using it as part of the testing process, using it to create CVs, and using it to apply for jobs. Candidates are probably naturally the earliest adopters of using this at scale; companies will start to use it more and more. The first place, personally, I think that's most likely to happen would be the screening stage, some kind of CV screening tool, which I suspect most companies will end up using quite soon because they're being inundated with hundreds of applications they have to automate somehow, and a large language model, I think, is a no-brainer for more objective screening of CVs, personally. But a lot of people would talk about the potential for bias, either AI introducing its own bias or maybe reinforcing existing biases. Do you have any thoughts on how that should be prevented? Is this a real threat, or have we overestimated it? What are your thoughts there?

DISHA: As someone who builds AI for my day-to-day job, I think about this a lot, and it's really important for the AI-driven hiring tools to build features that don't reinforce bias. I don't think we are there yet, and that's one of the reasons that we're not even inclined to be using those tools yet. But I can totally imagine a world in which these tools are adapted in a way that there are categorizations, in the sense that there is a categorization for: here are 10 candidates that are from the most underrepresented communities; here are 10 candidates that are most mission-aligned to you; here are 10 candidates that are really good in Python. So whatever it is, not necessarily making the call for you about who you need to interview, but being able to categorize candidates so you could look at someone from here, someone from there; maybe someone is in both of these categories. If they start to make that kind of distinction, I can imagine it being more helpful and more useful. So the concerns are very real, and one of my biggest concerns right now with LLMs generally is that while there are generic LLMs that are really good out of the box, for one to work in a specific context, you have to fine-tune it. And fine-tuning for hiring, for my company, for my team, in my context: a team is 10, 20, 30 people at most, I would say, even in big organizations, so you cannot really fine-tune with that kind of limited data; it's hard to get it right. You may end up overfitting; you may end up underfitting. So yeah, I feel like there is promise if they start to categorize in certain ways, but I'm curious to see how it evolves from the point of view of fine-tuning to specific contexts for smaller teams. Until that is possible, it's really hard to make it not reinforce biases for a specific team.

TIM: One angle that I'd like to run past you, to see if it would chip away at some of the bias question marks, is this. Let's take the CV and application screening step, and say there's a desire to implement some kind of AI system to decide from these 500 CVs: either, as you say, categorize them, or stack rank them, or give them a score out of a hundred based on some criteria. Let's say we're going to do something like that, but we're worried about accidentally selecting on the wrong thing, like "find other candidates like who we've already hired" or "find other candidates like these data analysts". We want to avoid that problem because we might think there's an existing bias, and we don't want to reinforce it. What if we have a set of criteria that we normally would have for a job, like we expect this person to have strong Python, SQL, and communication skills, whatever the criteria are, and then do a pass over the CV with a large language model to extract just that information and remove the noise? So it just says: I'm looking for SQL; does this CV have SQL? Does this CV have Python? Something similar to that, not an exact text match but a bit more cleverness. Once it's extracted all that data and all that's left is information, then it's just aggregating how strong a match it thinks the CV is. If we did it that way, would that not eliminate the chance of any bias around accidentally selecting this gender or this ethnicity or this age or what have you, because we've lifted out only the information we care about, and then we do the scoring? What are your thoughts on that approach?

DISHA: So I think what you're leading to is filtering on the basis of essentials or minimum requirements. Maybe it's Python, maybe it's two years of minimum experience, maybe it's having worked with cross-validation, classification problems, et cetera. So looking for some specific criteria for minimum qualifications, and you're saying, after that, applying LLMs to be able to score them.

TIM: Two steps. So basically, take the CV, use an LLM to extract the relevant information, and put it in a structured format based on whatever you're looking for: those criteria, those soft skills, experiences, whatever, based purely on what's on the CV. And then, once you have that set, it's: okay, cool, yes, it's got X years of experience; yes, it's got these skills. It could be aggregating a lot of yes/nos, or maybe there's a strength or weight, like on a scale of one to five, but ultimately aggregating all that and saying, based on what you said you wanted from a CV, this CV matches at a rate of 80%. So there's not really any decision layer in there about who the best candidate is, no "find me other similar candidates like this one". It's just extracting the data, putting it in a structured format, and then aggregating it, thereby mitigating the question marks around bias.
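
To make Tim's two-step idea concrete, here is a minimal Python sketch of what it could look like. This is not a tool either company actually uses: the criteria, the weights, and the prompt wording are invented for illustration, and the LLM call is left as a stub rather than tied to any specific API.

```python
import json

# Hypothetical criteria and weights a hiring manager might define up front.
CRITERIA = {
    "python": 0.3,
    "sql": 0.3,
    "communication": 0.2,
    "relevant_experience": 0.2,
}

EXTRACTION_PROMPT = """From the CV below, extract ONLY these criteria as JSON,
rating each 0 (no evidence) to 5 (strong evidence) based strictly on what the
CV states. Ignore names, photos, dates of birth, and anything else that could
signal gender, ethnicity, or age.

Criteria: {criteria}

CV:
{cv_text}
"""

def call_llm(prompt: str) -> str:
    """Stub: swap in whichever LLM client is actually in use."""
    raise NotImplementedError

def extract_cv_profile(cv_text: str) -> dict[str, int]:
    """Step 1: lift only the information we care about out of the CV."""
    raw = call_llm(EXTRACTION_PROMPT.format(
        criteria=json.dumps(list(CRITERIA)), cv_text=cv_text))
    return json.loads(raw)  # e.g. {"python": 4, "sql": 3, ...}

def match_score(profile: dict[str, int]) -> float:
    """Step 2: aggregate the structured ratings into a 0-100 match rate.
    No 'find candidates like our last hire' layer: just weighted evidence
    against the stated criteria."""
    total = sum(w * profile.get(c, 0) / 5 for c, w in CRITERIA.items())
    return round(100 * total, 1)
```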

DISHA: I think so, a little; depending on how we are filtering, what you're suggesting could be possible. However, there is a lot of information out there on how different races use different language; they write things differently, and the same for different genders. I guess it's not the same for different college backgrounds, but at least for different races and different genders it is. And I think the challenge with bias is that we're not trying to make everyone equal but equitable. So when we're talking about being equitable, it's more like: let's say my team right now is 10 men, and I'm trying to hire for the 11th role, the 11th team member. I want to make sure my team has a good representation of different demographics, so I would want to look at the resumes of women. Even if, after all the filtering, we are giving all of them a fair chance, if the top 10 scores are of men and I only interview them, and the 11th person is a woman whose score was not comparable to those 10 men, she just doesn't get to interview. So our approach to being equitable is to have at least a good representation in our first round of interviews, from the different points of view of how we want to diversify our team. After that first round of interviews, it's all skills-based, but we do want to have at least that first chance at a conversation and see how it goes, because whatever the scoring is, it can't score for soft skills; it can't score for empathy in a meeting; it can't score for bringing a different perspective into a meeting. So even if the first candidate in this scoring scored 90 and, I don't know, the 12th candidate scored 85, and we only got to the first 11 because they were between 90 and 85: 85 is not that bad or far off from 90 if we are able to diversify our team and be more equitable in terms of our opportunities. So it's tricky. I think I would rely on these tools for categorization, so that I can make my calls, but not to give me top candidates by a score.

TIM: You make an interesting point there when you say that even what's written in the CV by the person itself might be biased. I guess an example might be that, on average, men might inflate their achievements a little bit more. The research I've seen suggests that if a male does five hours on Udemy, it's "I'm now great at this skill," whereas a female might not even put that on the CV. So if that's all you've got to go off, then yeah, I can see how, even if you then scored that against a criterion, what you're scoring on the basis of is a bit flawed to begin with, because people aren't starting on a level playing field of explaining their achievements properly or in the same way. So that makes sense.

DISHA: Yeah, and a lot of things are not measurable, and to be able to capture that, you have to read the language, the surrounding language of the resume, to see how someone writes about their achievements; it says a little something about them. Many times you can't tell the gender of the person by reading the name on the resume; for example, Kristen can be both male and female. So it's not really about that, but about being able to find the right fit beyond the general score based on whatever structured data criteria you have established.

TIM: So this sounds like there's an element of intuition, you're saying, that can't be captured by a metric. The interview is very different because it's an in-person, evolving thing, fair enough. But even at the CV stage, you feel like it requires a level of intuitive gut feel that is hard to codify, or that would be hard, for example, to get a large language model to replicate.

DISHA: Yes, and it's so hard. Obviously, it's hard for an LLM to do that; it's hard for you to do that by just reading the resume. That's why we want the first round of interviews to draw from diverse populations, so that through conversations we are able to pick up on some of those skills that are not measurable.

TIM: Yeah, absolutely. And when we were chatting last time, you'd mentioned iterating through a few different hiring processes, learning from what worked well and what didn't. You mentioned, I think, hiring for a machine learning role in particular and then changing the process over time. I'd love to hear a bit more about that experience: what you were doing originally, what you learned, and what prompted you to change the hiring process to where it is today.

DISHA: So there were a lot of changes, I would say, between the process at that time and the process today, to add a little bit of measurement to some of the skills that are not so easily measurable, along with other things. The very first thing that we changed was the job description itself. We realized that hiring starts with the job description, so you want to be explicit in it, not just to attract the right people but also so that people who are not a good fit self-select out. That's where it starts. The second stage is when we get the CVs, but in between we included this questionnaire for them to answer, with a couple of free-form questions related to what your job experience looks like, et cetera. There was a mission alignment question, and this is where that intuitive part comes in; this is why we have this questionnaire, so that we can get a sense of the fit for this role. One easy filter is that it automatically eliminates people who are not as interested: as soon as they see there is a form they have to fill out, they just don't apply, so it automatically reduces the number of applications. The second piece is when we read responses that are clearly written through ChatGPT or are not really authentic responses; that's an easy filter too. And then there are responses in which the technical experience is a good fit and there is also mission alignment, and those are the resumes that move forward, so being able to check for both of those things made it easier to determine who moves forward. And then this last piece that turned out to be a significant one, which I was sharing with you, was the job scorecard. We started this concept of the job scorecard in which we had three main parameters, which are technical skills, culture fit, and mission alignment, and for each of those we had 10-ish specific criteria that we were looking for. We shared that job scorecard with all interviewers to rate candidates: what do you think is the culture fit on this metric, on that metric? And at that point, objectivity starts, so basically we look at the cumulative score of candidates on each of those metrics across all of the interviewers, and then we start to compare who is a better fit. One of the biggest mistakes that we made in the previous round was that we just went into it with the mindset that we had to hire someone, not necessarily the right person: whosoever we interviewed, this one seems to be the best of these, so let's just hire them. The job scorecard helped us realize that, oh, this person does not even meet five out of these 15 criteria. It made it easy for us to see that this is not it, and it goes back to the job description: we have to be very clear and explicit in our job description to attract the people who are right for it, so that the rest of the process flows smoothly.
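
To picture the job scorecard Disha describes as data, here is one minimal Python representation. The three parameters come from her description, but the individual criteria, the dataclass, and the averaging logic are illustrative assumptions, not Sown to Grow's actual scorecard.

```python
from dataclasses import dataclass, field

# Three parameters per Disha's description; the criteria listed under each
# are hypothetical placeholders, not the real scorecard.
SCORECARD = {
    "technical_skills": ["python_proficiency", "ml_fundamentals", "data_modeling"],
    "culture_fit": ["collaboration", "openness_to_feedback"],
    "mission_alignment": ["prioritizes_student_voice", "motivation_for_edtech"],
}

@dataclass
class InterviewerRatings:
    interviewer: str
    # criterion -> rating on whatever scale is in use; None means "not tested"
    ratings: dict[str, int | None] = field(default_factory=dict)

def cumulative_scores(all_ratings: list[InterviewerRatings]) -> dict[str, float]:
    """Average each criterion across interviewers, skipping 'not tested'."""
    by_criterion: dict[str, list[int]] = {}
    for sheet in all_ratings:
        for criterion, rating in sheet.ratings.items():
            if rating is not None:
                by_criterion.setdefault(criterion, []).append(rating)
    return {c: sum(v) / len(v) for c, v in by_criterion.items()}
```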

TIM: Was it also a case then of changing other bits of the hiring process? Because if a candidate got to the end stage and, oh, they were only really hitting five out of these 15 criteria, maybe the screening stages were also not where you wanted them to be. Was it not just about the sourcing but also the other, earlier steps?

DISHA: I think so. The only earlier step was, I would say, this form, the questionnaire that we had them fill out, and that one didn't really need changing. There are free-form questions, and we are trying to get a sense of what their technical experience looks like and what their mission alignment looks like, and the third question was: do they have any concerns right off the bat? That never felt like a step that needed to be rethought; it was, in fact, a very helpful step to weed out totally bad-fit candidates. So far nothing has surfaced there; maybe it might one day, but not yet.

TIM: And so you've moved to this more quantitative, more objective process, but one thing that I think is almost like a hidden value of getting these scorecards going is that it forces you to even think about these things in the first place, get them down on paper, and get everyone to agree to them. Maybe that is already a lot of value in itself, even if you didn't end up using the scorecards, just agreeing on the criteria very clearly. Because what I've often seen happen in some hiring processes is there'll be someone involved who has their own view of what should be hired for, and they'll inevitably introduce that in their interview, where they'll be looking for X, whereas X was not part of what you might've been thinking about. So getting that out on paper, I think, is so valuable. Did you find that going through the exercise helped give clarity over what exactly you were looking for anyway?

DISHA: Yes, exactly, 100%. I think what you're saying is one of the biggest values, and what I was trying to get to with the job description and scorecard is that doing the homework on our end and being really clear about what we want and what we're looking for, within the job description and within the scorecard, is important, because only then will we be able to find the right fit. We're not necessarily trying to attract the top talent out there; our culture is very different from Silicon Valley tech bro culture. The intention behind both of these things is to be clear about what we're looking for, and especially with the scorecard, it helps because in our interview process we try to have not just the technical folks interview them but also the sales folks, the school-facing folks, because there is so much interaction. For them it's really helpful to see, oh, these are the five things that I have to hone in on, and get a sense of how good the candidate is, versus going off their own heads and not really being aligned in the process; then it's not very helpful when you debrief, because that's not what you were looking for in the first place. For example, I think I shared with you that I recently interviewed for a product role, and it helped me so much to look at that job scorecard and think: okay, this is what I have to hone in on in my questions to get a sense of how the candidate is doing, how good of a fit the candidate is.

TIM: And I assume in that scorecard there was a combination of more technical and softer skills. Was it the case that if you were, let's say, a stakeholder interviewer, you were only evaluating the softer skills, and it wasn't expected that you would start digging into the technical skills?

DISHA: Yeah, different folks were judging different skills, and there could be overlap: a particular skill could be judged in all interviews, while another skill won't be judged in certain interviews. So yes, they would see the whole job scorecard, but some of the technical skills could be marked not applicable for them.

TIM: And I'm sure what would have happened when you started this process is you might have seen some deviations in how candidates were scored by different interviewers on the same criteria. Were there sometimes big differences, where, I don't know, two interviewers had given very different scores for communication skills for a candidate or something like that? And if so, how did you reconcile those differences?

DISHA: Yeah, so this is where, yes, there have been big differences. We didn't have too many categories; it was like super green, yellow, red, and not tested, so just five, like double thumbs up, single thumbs up, okay, nah, things like that, so very basic. And because there were so many interviews, we basically just consolidated all the scores, and then we looked at that: okay, this person has an overall higher score on, let's say, Python efficiency versus mission alignment or culture fit and things like that. We have criteria like "prioritizes student voice in anything and everything they do," and then we see what the overall score for that candidate is. For example, if it's a school-facing team person, we would prioritize the mission alignment a little bit more versus the technical side of skills, as long as the technical side exists and is in the green range; for a more technical role we would prioritize familiarity with ML algorithms more. But eventually it's looking at the cumulative scores, not necessarily reconciling "oh, I think A; you think B," but: what does the overall score look like?
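
Below is a rough sketch of how a rating scale like this, together with role-dependent weighting of the three scorecard parameters, might be consolidated into cumulative scores. The numeric mapping of the thumbs-up levels and the per-role weights are assumptions made up for illustration.

```python
# Map the rated levels onto numbers; "not tested" (None) is excluded from
# averages rather than counted as zero. The numeric values are an assumption.
SCALE = {"double_thumbs_up": 2, "thumbs_up": 1, "okay": 0, "nah": -1}

# Hypothetical per-role weights: a school-facing role weights mission
# alignment higher, while an ML role weights the technical side higher.
ROLE_WEIGHTS = {
    "school_facing": {"technical_skills": 0.2, "culture_fit": 0.3, "mission_alignment": 0.5},
    "ml_engineer": {"technical_skills": 0.5, "culture_fit": 0.25, "mission_alignment": 0.25},
}

def category_average(labels: list[str | None]) -> float | None:
    """Average one parameter's ratings, skipping 'not tested' entries."""
    rated = [SCALE[label] for label in labels if label is not None]
    return sum(rated) / len(rated) if rated else None

def weighted_total(category_scores: dict[str, float], role: str) -> float:
    """Combine the parameter averages with role-specific weights."""
    weights = ROLE_WEIGHTS[role]
    return sum(w * category_scores.get(cat, 0.0) for cat, w in weights.items())
```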

TIM: And again, probably almost a hidden benefit of this, let's say, objective approach is how easy it is then to compare candidates across the different criteria you're looking for, because you've measured them in a consistent way against the same metrics. So you're not sitting there wondering after 10 interviews with different candidates: oh, hang on, who was better at this? Who was better at that? You can just look at the information: oh, okay, cool, this candidate's got a 10 in Python; someone's got a 7; that's their strength; whereas the other candidate's got, I don't know, a 10 in communication and an 8 in Python, or vice versa.

DISHA: Yeah, exactly. And this also gives the candidate multiple opportunities to be judged fairly. It could be that one interview didn't go so well, so I may think the communication was off, but someone else may think it was great. The averages are more accurate than individual scores; it just makes what we are trusting more reliable.

TIM: And I'm interested in the final decision. Was it the case that you were ultimately hiring the candidate who scored best? Or, let's say you're the hiring manager and ultimately it's your call: if you'd rated another candidate slightly higher than the one who scored best overall, would you hire that second-place one because it was ultimately your call? How did you make the final decision?

DISHA: Yeah, so it is ultimately the hiring manager's call; it's just a more informed call. I know that I am making this call because I'm prioritizing technical skills over mission alignment, or I'm prioritizing communication skills over culture fit. Whatever it is, it is at the end of the day the hiring manager's call, but not a vague decision, more an informed decision: I know what I'm getting into. And then what it does is help me learn what I need to coach them on as I onboard them. We have had conversations with candidates right after hiring them: you know, you did really well in these parts of the interview, not so well in this one, but we can work on this together. It's just so easy to know where to start in terms of coaching and where their strengths will be, and things like that.

TIM: That's a great point you make: the hiring process doesn't stop when they sign the contract or on their first day. This data can be used to inform the onboarding, because you already have a sense of their strengths and weaknesses; as you say, you can already start to plan their learning and development. And actually, in some markets, speaking to a few people in Germany recently, the average notice period there is three months, so even once a candidate has signed a contract, it's still going to take them three months to join, during which time I imagine this data could be used for that candidate to upskill and fill in any gaps, especially if you're like, oh, I'm not quite sure about their SQL skills or some other skill. Three months is a huge amount of time in which, if you had that insight and coached them and they were up for it, they could certainly fix those issues.

DISHA: Yeah, exactly.

TIM: Now that you've gone through this objective process, I've spoken to a lot of hiring managers, particularly in data, who would not take this approach. They would still persist with a very loosey-goosey gut feel. I just trust my instinct; I'll just go and have a coffee chat with the candidates, and I'll figure it out with a kind of vibe. Having now gone through and seen all the improvements with this approach, what would you say to someone who has that mentality who's going for that just pure wing-it gut feel? What would you say to them? What are they missing out on by not having a bit more of an objective, thought-out, considered approach like you've established?

DISHA: I would make the case to them that both are important; you don't have to be on one extreme. Even though I lead data work and have been leading it for years, I'm not a completely data-driven person nor a completely gut person; I believe in an informed gut, and I think this is what it helps with: it makes you take more informed decisions. What would I say to them? I don't know. I feel like it's fair to the candidate and fair to yourself to take a combined approach. We're not by any means discrediting gut at all; there is a connection between our gut and brain, as has been proven scientifically now, so it's not stupid at all. But you have to, like, hyper-tune or fine-tune your gut by using these data points and being able to see them clearly.

TIM: What about thinking bigger picture? Now I think this is an apt time to ask this because AI at times feels like magic, and it feels like technology is improving so quickly. If you had a magic wand, how would you fix the hiring process?

DISHA: I think I've been talking about that throughout the conversation, but: how can we make finding the right fit easy, so authenticity meets authenticity in some ways? Today's culture is such that the way candidates apply is: I'm going to apply for everything and see where it hits; if they call me back, then I'll interview, et cetera. So there is just absolutely no authenticity, either in applications or in hiring. If companies really do the hard work to try to attract the right candidates, and if candidates do the hard work to really look for jobs that fit their needs, fit their skills, and fit their values, it would make everyone's life so much easier. Like the approach that Warren Buffett takes for investing: values-based investing, only investing in things that resonate with your values, is a long-term game, and it's the same thing here. I don't know what tool could fix this, but just apply for the right things, look for the right things, and it should be easy. We don't need a thousand applications; we just need ten right applications.

TIM: Yeah, I wonder if the unlock then is going to be some new data sets, because I feel like one of the barriers at the moment is candidates apply to a lot of jobs because all they've got to go off is a job description and a company name. They don't really know the details; they don't know who's in the team, a day in the life, how they're measured, or what their first week's going to look like. That data normally isn't shared until you're in the hiring process, and so I wonder: if that data were more easily compilable and shareable, and candidates could search and browse on it, imagine almost a job-description-plus kind of data set, then maybe candidates could self-select a little better. And then on the other side, I wonder if again the problem is that at the early stages we normally just have a CV to go off, which is very little to really judge a candidate on, especially because it's written by them or by ChatGPT. It's very biased, sometimes bullshit, sometimes it's got lies; also, people aren't great at selling themselves; some people exaggerate, and some people underestimate. So it's got such issues. I wonder if we need some more authentic data set around what candidates have actually done in their jobs, like real work from a task system or from a Git repository, something where it's more real data on each side. I wonder if that would help with the matching.

DISHA: Yeah, more real data on both sides, exactly. On our end we have started to do that through our job descriptions: we also share what they'll be evaluated on if they join, the parameters that they'll be evaluated on, so we have started to take that approach and try to be really explicit. Unfortunately, the truth of today's job market is such that candidates are still applying for roles that they are not necessarily a good fit for and then quitting nine, ten months down the line. So it's a pretty bad situation right now, but a lot of it is just economics-driven.

TIM: Yeah, exactly. Candidates are competing, and they'll be looking at those applications and going: wow, there are a thousand applicants already for this job; that means I'm going to have to apply to a lot of jobs to have a chance of getting an interview. Which means they apply to more, which then makes everyone apply to more, so there's some kind of explosive feedback loop going on. And you're right: if you're out of a job and you just need a job at the end of the day, what else can you do? It's a tricky market for candidates, for sure.

DISHA: Yeah.

TIM: One final question today, Disha. I'm wondering, if you could ask our next guest one question, what would that question be?

DISHA: I don't know. I thought about this a lot, so maybe what is your number one non-negotiable quality, and how do you screen for that?

TIM: That's a good one. What is that for you? I'm interested, actually, now.

DISHA: I think for us the non-negotiable quality is mission alignment, and that's why we screen for that through a questionnaire. It's not very efficient, but that's how we do it today, and it really helps us out. Of every 20 resumes, we select just one, so it's a good elimination process for us.

TIM: Mission alignment is an excellent place to finish on, Disha. It's been a great conversation today. Thank you so much for coming on and sharing all of your insights and wisdom on hiring for data roles.

DISHA: Thank you for inviting me.