In this episode of the Alooba Objective Hiring podcast, Tim interviews Aaron Swaving, Head of Data Science at SkillLab. Aaron shares insights on navigating the complexities of the hiring process, including the challenges of sifting through CVs, the role of AI in hiring, and the importance of a skills-based approach. He explains SkillLab's mission to connect people with employment opportunities through skill assessment and offers a glimpse into their hiring practices. The conversation delves into the nuances of technical and culture-fit interviews, the impact of AI on hiring, and maintaining a balance between efficiency and respect for candidates' efforts. Aaron also shares his thoughts on live problem-solving interviews and the future integration of prompt engineering in the interview process.
TIM: We are live on the Objective Hiring Show with Aaron. Welcome! Thank you so much for joining us.
AARON: Thanks a lot, Tim. Thanks for having me.
TIM: It is absolutely our pleasure, and I'm wondering if we can kick things off by just getting a little bit more information about yourself and your role leading the SkillLab data science team.
AARON: Yeah, sure. I've been at SkillLab for about two and a half years, and I was tasked with building the team, actually from one person to, I think, five full-time now, and we usually have a couple of interns with us. Data science at SkillLab is quite broad: we're responsible for the AI in our system, we do all the standard data science work around reporting and monitoring, and we do a lot of work on occupation taxonomies, which is really important for SkillLab. Our vision is that everyone has a pathway to employment, and our mission is to connect people with jobs and education through the universal language of skills. So we're really looking at using skills to help people get to employment. Essentially, we're all about skills and how you can use them to find a job.
TIM: That's awesome, and it would be interesting to unpack that in a bit more detail, particularly for me, because we're also about skills, but probably from quite a different angle. Where does SkillLab play exactly, and how does it help job seekers? What's the business model? I'd love to understand a little bit more about the company.
AARON: Yeah, sure. Our main client, you would say, is the public employment service: more government-based organizations, or international organizations as well. We work with the counselors who are helping people find jobs. Basically, we provide a way to scale them up, to help the counselor help people find jobs, and we do that with a two-sided application, two sides of the same coin: the counselors get to see how their users' accounts are progressing, and the users get a nice, easy-to-use, I-hope-fun interface to learn about themselves. We basically collaborate with the user to help them understand themselves and what skills they have, because when you ask someone, "Can you write down 20 skills that you have?", people struggle. Even people who are really well aware of their skills and are on the job market a lot. And if you talk about mothers in Mexico who are trying to understand what job they could find, whether they can find a job at all, what skills they have, they need some kind of assistance and guidance. So we guide people through this process: we recommend skills, we get immediate feedback from them, and we essentially build a skill profile of the person, which we can then match to potential careers, jobs, and education, through skills. So everything is skill-based.
TIM: That's great, and again, it's great to hear about a business taking the same skills-based approach, just on a slightly different side of the coin; it's always interesting to hear about other companies doing that. Thinking more broadly about roles, I imagine you're hiring data scientists and data analysts into your team. In your view, in the current hiring market, what is the biggest challenge? What are the biggest pain points for you personally as a data leader?
AARON: Yeah, there are a number, obviously, and previous guests of yours have talked about many problems. Maybe apart from the sheer number of people applying for a job and the number of completely irrelevant applications, even within the pool of potentially relevant people it's hard to pick out the real gems from the pack just based on the CV, and that is always the hardest thing. I have in my mind a good way to do a CV and a bad way, and for me the good way is skill-based: it says, "Hey, I have these skills related to this experience, and this is how I used them." Then I know that you have an understanding of your role, you have some self-reflection on how you did it, and you have read the job description and tailored the CV to this job because you want this job. It's not just some blanket CV that you've sent out. The really hardest part is that most CVs are pretty poor, and I haven't seen too many AI-generated ones yet, but I'm sure they're coming in force soon. So that's probably the toughest thing. The hiring manager and I talk together at the beginning of the process: before we even put out the job, we predefine what the role is, the roles and responsibilities, what we're looking for, and then they sample the incoming applications that they get,
and then we discuss them together so that they get a better feel for what the filtering should look like, and we go from there. But when I sit down and look at the CVs I get, I always struggle to say that this person is going to be able to do the job, because they're never informative enough for me, basically.
TIM: Yeah, it's so tricky. Our current system of hiring, at least through job boards, has a few big flaws. One is that it's an open system; anyone can apply, which is good in that it's at least liberated: everyone has an opportunity, in theory, to click apply and get their CV in front of the hiring manager, as opposed to some kind of closed, referral-based system where none of the jobs are actually advertised. I feel like we could end up going down that path, which might be worse. But at the moment it's open, and that creates the problem of noise and spam: anyone can apply with any CV, no matter how relevant it is. And the sense we're getting at the moment is that candidates might be doing that automatically, so the marginal cost to the candidate of applying is almost zero if they can use a tool to automate it. I suspect that's going to cause even more spam and irrelevant applications. What do you think?
AARON: Yeah, I think so. I think it's inevitable that we're going to have some kind of AI filtering right at the beginning to try to stem the flow. I feel like it will come to us, but that in itself is also dangerous. I personally don't want to devalue someone's effort when they truly put effort into an application, and I don't want us to be able to just auto-reject them because their CV looked a bit wrong for whatever reason. If we were to add some filtering level in front, it would have to be a high-recall kind of process, not worried so much about precision, just catching the really obvious cases. Maybe that's one way we could go in the future, but I would be very cautious about it, because when someone applies for a job, this is a very personal thing, and a lot of people put a lot of effort into applying. Like I said, I just don't want to devalue their effort. I think we should treat everyone who's applying with some respect, and I guess on the flip side, I also expect that they treat the company with respect and value our time at the same time.
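[Editor's note: the "high recall, not precision" pre-filter Aaron describes, one that only auto-rejects obviously irrelevant applications and sends everything borderline to a human, could be sketched roughly like this. The field names and rules are hypothetical, purely to illustrate the idea.]

```python
# Sketch of a high-recall pre-filter: never auto-reject a potentially
# relevant candidate. Only reject when *every* hard requirement is absent
# (an obviously irrelevant application); all partial matches go to a human.

def prefilter(application: dict, hard_requirements: set[str]) -> str:
    """Return 'reject' only for obvious mismatches, else 'human_review'."""
    stated_skills = {s.lower() for s in application.get("skills", [])}
    missing = {r.lower() for r in hard_requirements} - stated_skills
    # High recall for genuine candidates: a single matching requirement
    # is enough to keep the application in the pipeline.
    if missing == {r.lower() for r in hard_requirements}:
        return "reject"
    return "human_review"

applications = [
    {"name": "A", "skills": ["Python", "SQL"]},      # partial/full match
    {"name": "B", "skills": ["Carpentry"]},          # no overlap at all
]
decisions = [prefilter(a, {"python", "sql"}) for a in applications]
print(decisions)  # ['human_review', 'reject']
```

The asymmetry is the point: the cost of wrongly rejecting a good candidate is treated as much higher than the cost of a human reading one more CV.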
TIM: Yeah, it's such a tricky equation, because with such high application volumes a lot of companies simply can't read every CV manually even if they wanted to, so I agree there's probably going to be an AI screening layer introduced eventually. For companies, the tricky bit is that because it's AI and because it's people data, it sits in two high-risk, highly sensitive areas, which I think will make adoption quite slow, even though at the moment you could clearly do an AI CV screening with Claude or ChatGPT from a photo of the CV. Lots of companies have hacked together their own little skunkworks versions, but for an official product, I feel like there's quite a high barrier to get through before it actually gets used and implemented.
AARON: Yeah, it's interesting when you talk about all the personal data and the responsibility component to it, because I suspect, with the EU's new AI Act out as well, you would absolutely have to tell the candidate that they were filtered out by AI, which is only fair, I feel, and it would be a responsibility for any EU company to do that. I think that might be a good thing, because we then have to take a more cautious approach; you're forced to. But I agree that it is a scaling problem indeed. For some company that offers a filtering service, it would be a challenge, but I'm sure someone's out there doing it.
TIM: One thing I was thinking about recently is the thresholds, almost the standards, that we expect from AI versus what we expect from humans. As an analogy, as far as I'm aware, and I'm not an expert, driverless cars are already way safer than human-driven cars, yet anytime a driverless car crashes and someone dies, which is terrible, it's front-page news, while there are literally a hundred thousand human crashes a day that are not magnified at the same level. So I feel like sometimes we're holding technology to almost too high an expectation, and I would argue that's already happening with AI in hiring, because the way hiring is currently done by humans is so biased and unfair that it's hard for me to imagine how AI could make it worse. I'll give you one particular example: there have been a lot of experiments done at the application stage, in different countries, to test whether names from certain ethnic backgrounds are discriminated against. One in Australia tested whether Chinese people were discriminated against in the labor market here. The way they do these experiments is by taking thousands of CVs and grouping them into buckets based only on the name: one bucket had an Anglo first and last name, the second had an Anglo first name and a Chinese last name, and the third had a Chinese first and last name. They then applied to thousands of different jobs in different industries and different cities, and measured the rate at which those three sets of CVs got a callback from the employer. This study from a few years ago from Sydney Uni found that the first group, with Anglo first and last names, got a 12 percent callback rate; the third group, with Chinese first and last names, got only a 4 percent callback rate.
So, like, a third as high a callback rate. That's the current way hiring is done with humans. Maybe we shouldn't be so risk-averse with AI, because the system is already so flawed and unfair; how could we make it worse? That's partly what I'm thinking; maybe there's something I'm missing.
AARON: Yeah, I think it's again about the right tool for the right job, right? I get your point: yes, humans have biases, but AI has plenty of bias in itself, because it's just learned from us. It knows what we did, and it's replicating it; that's what AI does. You could do a lot of work to try and fix the biases in AI, but whenever you fix one, you introduce another, or there are unseen consequences. There are a lot of studies and a lot of people looking at bias in human recruiting, and I think it's pretty clear what those biases are. So in your example around names, you could introduce a tool that just blanks out the name, right? At each stage of the process, a tool redacts the name, and maybe the picture if they've included one, and only once the candidate has been accepted or rejected at that stage is the redaction removed. Or maybe only at the first stage, because usually HR needs to know who the person is, and the second stage is usually more of a technical interview, where of course you will need to know who you're interviewing, but only after you've accepted that this person is right. So removing the ability to have that bias is maybe a more appropriate way: making sure we use the right tool. I really don't feel like a blanket use of AI is a good idea; it's just as flawed as us, potentially more, in ways we don't even know. I'd just be careful about it, is what I'm saying.
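[Editor's note: the redaction tool Aaron sketches, stripping identifying fields before a screening stage and restoring them only after the decision, could look something like this in miniature. The field names are hypothetical.]

```python
# Toy name-redaction flow: split an application into a redacted view
# (what the screener sees) and the held-back identifying fields, then
# restore them only after the stage decision has been made.

IDENTIFYING_FIELDS = {"name", "photo", "email"}

def redact(application: dict) -> tuple[dict, dict]:
    """Split an application into a redacted view and the held-back fields."""
    redacted = {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}
    held_back = {k: v for k, v in application.items() if k in IDENTIFYING_FIELDS}
    return redacted, held_back

def reveal(redacted: dict, held_back: dict) -> dict:
    """Recombine the two halves once the screening decision is recorded."""
    return {**redacted, **held_back}

app = {"name": "Li Wei", "email": "li@example.com", "skills": ["Python"]}
view, vault = redact(app)
assert "name" not in view          # the screener never sees the name
assert reveal(view, vault) == app  # nothing is lost after the decision
```

The design choice mirrors the name-discrimination studies Tim cites: if the name never reaches the screener, that particular bias has nowhere to act.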
TIM: One thing I would say is that, at least in theory, I'm in favor of AI or any kind of programmatic way to evaluate candidates in the hiring process, whether at the CV stage, the interview stage, or the test stage. If you do it in a structured way, using a large language model, a program, some kind of definable method, at least it can be transparent. For example, say you have a CV screening system where it's: "Hey LLM, take the bits of this CV that are relevant, match them against the job description, give one point per match or a match strength, and come up with some number: this CV scored 70 percent, here are its strongest points, here are its weakest points." At least that's data now that could be interrogated and potentially shared with the candidate as feedback. At the moment we don't have that at all, because it's just someone clicking a button to reject on a system. Who knows why they rejected them? There's no real feedback loop or any kind of oversight at all. There could be some racist recruiter rejecting anyone whose name they don't like, and we would not know. At least, in theory, with a well-built AI system or any kind of programmatic system, you'd have the data there to go back and say, "You know what, it was consistently rejecting this kind of person." There's an opportunity for transparency, at least, I think.
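[Editor's note: Tim's "one point per match" scoring idea can be sketched in a few lines. This is an illustration of the transparency argument, not a real screening product; the skill lists and function name are invented.]

```python
# Minimal transparent CV scoring: match CV skills against job
# requirements, one point per match, and report the strongest and
# weakest points alongside the score.

def score_cv(cv_skills: list[str], required: list[str]) -> dict:
    cv = {s.lower() for s in cv_skills}
    matched = [r for r in required if r.lower() in cv]
    missing = [r for r in required if r.lower() not in cv]
    score = round(100 * len(matched) / len(required)) if required else 0
    # Every decision leaves an auditable record that could be shared
    # back with the candidate as feedback, or interrogated for bias.
    return {"score": score, "strengths": matched, "gaps": missing}

result = score_cv(["Python", "SQL", "Docker"], ["Python", "SQL", "Spark", "dbt"])
print(result)  # {'score': 50, 'strengths': ['Python', 'SQL'], 'gaps': ['Spark', 'dbt']}
```

Contrast this with a human clicking "reject": here the why is recorded by construction, which is exactly Tim's point about oversight.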
AARON: I don't think you need AI to have that oversight, right? Because if you have a system that records who has been accepted or rejected by a recruiter, you have the metrics; you have the data there to look back and say, "Okay, we can clearly see that they've rejected this type of person consistently and disproportionately to everyone else." By the sheer number of people applying and CVs being read, you potentially have all that data anyway to say this recruiter is not doing a good job, so I don't know how AI helps you in that way.
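[Editor's note: Aaron's point, that plain record-keeping is enough for oversight, amounts to a simple disparity check over the decision log. A sketch, with hypothetical group labels and field names:]

```python
# Oversight without AI: given accept/reject decisions tagged with a
# group label, compare rejection rates per group to spot a recruiter
# rejecting one type of person disproportionately.

from collections import Counter

def rejection_rates(decisions: list[dict]) -> dict[str, float]:
    totals, rejects = Counter(), Counter()
    for d in decisions:
        totals[d["group"]] += 1
        if d["outcome"] == "reject":
            rejects[d["group"]] += 1
    return {g: rejects[g] / totals[g] for g in totals}

log = [
    {"group": "A", "outcome": "reject"},
    {"group": "A", "outcome": "accept"},
    {"group": "B", "outcome": "reject"},
    {"group": "B", "outcome": "reject"},
]
print(rejection_rates(log))  # {'A': 0.5, 'B': 1.0}
```

A large, persistent gap between groups is the signal Aaron describes; no model is needed, only the discipline of logging every decision.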
TIM: I feel like the difference would be the why, because the system, be it driven by AI or not, says, "Here's the scoring; the logic for how we score is X; here's the score Y." Whereas at the moment, "CV rejected by this person at this time" is the most data we would have; we wouldn't know why they chose that. And it's clearly a subjective decision that even they themselves probably couldn't explain a week or two later, having reviewed 500 CVs.
AARON: Sure, yeah, okay. If we were to introduce some kind of filtering at the beginning, you're right that it's fair there be an explanation for the rejection, and with automatic rejection filtering up front we could present that back to the applicant: "Sorry, you didn't have Python in your CV, therefore we have rejected it." But I think there is still a lot of space where AI doesn't understand nuance. And I get what you're saying about qualitative versus quantitative: shouldn't we just be able to have a metric for everything? I don't think we can. Any metric we come up with, apart from checking what's explicitly mentioned on the CV against what's needed in the role, will have some qualitative component in working out what that value is before we assign it. You could do that, but it's not going to help the applicant. If you said, "You got two out of five for suitability," okay, what does that mean? Then you have to provide a sheet explaining what these metrics are, and you get into trouble again over how you really defined them. I don't think you can get around that; in the hiring process there is always going to be some subjective component. Adding quantitative values as objective guardrails, making sure things are reproducible and that you're making the same types of decisions for every applicant, that's where it's useful, but not as hard thresholds where it must be above this value or below that value. I don't think that works fully.
TIM: Part of the issue, even if we went down this track of AI-based CV screening, which does seem inevitable to me in the next year or so, is that it's still the old garbage-in, garbage-out problem, because it's still a CV. Even if we automate it and make it faster, and arguably more objective, though maybe with new biases introduced, it's still a pretty crap dataset to make a decision on. I'm sure you've experienced interviewing people who look amazing on paper, and the AI would also think they look amazing on paper, but who then sadly disappoint you in that first interview. So I wonder if we're going to need a new screening dataset. Do we need to know more about the candidate, to validate something earlier in the hiring process? And if so, what would that be?
AARON: The validation is the interviews, right? That's what it is. I really would steer away from using AI to validate people; as a checklist of things that must be there or not there, fine, but when it comes to validation, it's the interview process. Are they a reasonable person? For the HR or hiring-manager first stage: do they meet the bare minimum requirements for this job? And so on. Then in the technical interview you're validating their skills, and to some level their meta-skills too, right? Not just the technical ones, but also critical thinking and all these other things that are valuable. That's where the validation happens. The validation process is just hiring; it's just interviews, I feel.
TIM: Yes, I see what you mean. I guess the challenge is that with such a high volume of candidates, the CV might become increasingly inaccurate, and it was already a poor predictor. If people just start using ChatGPT to bullshit a CV, it might end up being so inaccurate that you have to interview, I don't know, 50 people to hire one, which would be a complete waste of time.
AARON: Yeah, I honestly don't know if I have an answer for that. There's probably a component of what industry you're in and what type of work you're doing that matters. For a lot of tech jobs, everyone's CV is on LinkedIn, and it's validated in some sense by the fact that you've publicly listed where you worked. But for other types of jobs, LinkedIn is not where you find a job or where people are active, and then the validation becomes tougher. You could use other sources to do validation in that sense, of course, but that's not a blanket method, I feel.
TIM: One aspect of hiring that, personally, I feel is especially on the subjective end of the spectrum is the cultural fit interviews, so I'd love to get your thoughts on those. Do you use them? If so, what are you asking? How are you trying to evaluate candidates? Have you ever tried to measure cultural fit, or is it something that's just inherently a bit more intuitive and a bit more subjective?
AARON: Yeah, we definitely use them; that's usually the third interview in the process, and over my career I've seen various versions of this. What's important is that there are some values in the company that you can tangibly name, that everyone knows. As an impact business it may even be a little easier to have really defined values; for us it's purpose-driven, collaboration, and learning mindset, the three core values. Then it's about asking the questions in the least personal way possible. You want to avoid people interpreting the answers as, "Oh, I just didn't really like the way they answered that," as a justification for why someone didn't pass the culture-fit interview. Rather, you need to be able to justify to the hiring committee, which we have at the end, why you feel they wouldn't fit, and ground that in the answers they gave. So it's not vague "I don't know if they'll fit in the company," but really defined questions around our values. Around learning mindset, for instance: "What are you really excited to learn?" Then you can say this person didn't have a very convincing answer, didn't seem enthusiastic about anything, didn't have anything on their mind, whereas another candidate had a list of things they wanted to learn and could explain why they think they can build that skill, and there's a clear contrast. And at the hiring-committee stage, when we're looking at culture fit, I've seen the hiring manager call people out for talking only in terms of vaguely not feeling like a person will fit,
and so we take it very seriously that we try to be objective in how we look at this, based on whether they can do the things we asked them about, and it's worked really well for us. SkillLab is very diverse; I think we're 25 nationalities, maybe more, and we have about 60 people in the company at the moment. It might sound like everyone based in the Netherlands must all have the same culture, but no, that's not what SkillLab is. We have people from around the world, and yet everyone has the same values; they're all working towards the same things, and we all get along very well. It's a very nice environment, so I feel like culture fit, done this way, is a winner.
TIM: The way you described it made me think that maybe we shouldn't call it cultural fit, because anytime I hear that term it sounds like the opposite of diversity-based hiring; the two seem like an oxymoron to me. Maybe it's more like values fit: I don't care where you come from as long as you have the same core values. Maybe "cultural fit" is almost a misnomer.
AARON: Yeah, but "values" is also a word you could argue sounds a bit charged, depending on how you think of it. I agree "culture fit" sounds a little leading too, but I don't know if there is a better way to describe it. I think it's more that you take it seriously and you align it to your values as a company, not as people, so to speak. Yeah, I can't think of a better name.
TIM: One thing I was thinking about recently when it comes to culture-fit interviews is that maybe we don't give candidates enough credit for being adaptable. What I mean is that all we've got to go off is their CV and how they come across in an interview, and we have to make a hiring decision, so we're dealing with a kind of limited-sample-size problem: we basically have to guess whether they're going to fit the culture based on that interview performance. I've recently been traveling, and I went to three very different cities: Amsterdam, Bangkok, and Riyadh, and they could not be more different from where I live in Sydney. What I noticed is that you go to a new place and you quickly get a feel for how people act and behave. I arrived in Riyadh wearing gym shorts, and my friend who picked me up from the airport said, "Come on, put on some pants; let's at least get those skinny white legs covered up." And in Bangkok, when you buy a coffee, after they take your money and hand you the coffee, you can't just immediately walk away; there's almost a moment of reflection, bowing, saying thank you properly, and looking the person in the eyes. It would be rude to just dart off like you're in a rush. You wouldn't know that unless you went there and experienced these cultures, and you'd have to be quite disagreeable not to start acting like everyone else around you; most people would change their behavior a little, I think.
So is there something to be said for maybe thinking that if the candidates are adaptable, then maybe even if it feels like they're not the right fit in an interview, maybe we should give them the benefit of the doubt? What do you reckon?
AARON: Yeah, so I think it's really about trying to avoid even that decision. Asking questions about whether they can do this, rather than "they were acting differently than we would expect, therefore they weren't a fit," tries its best to avoid that situation altogether. Like I said, we have 25 different nationalities, maybe more, I'm not 100 percent sure, and everyone has a different way of acting and different ideas of what's acceptable, and we have still meshed very different people into a really cohesive team. So I think it's about avoiding it rather than adapting to it, so to speak.

TIM: If you think back to the candidates you've interviewed who have fallen down at the culture-fit or technical interview stages, are there recurring patterns? If you had to group common themes as to why candidates ultimately don't get hired, can you think of any?

AARON: I think that from the technical interview side it's usually a mismatch, right? That's always the obvious one: what they said on the CV and what they can actually do are very different, and even if you could come back from that, it's really a disappointment; I don't think you're going to win out in the end there. Beyond that, from the technical interviews, it's people not listening: they want to do it themselves, and that's already a red flag, right? There's no collaboration; they haven't really understood collaboration. We always say upfront, "Please ask us, please talk to us if you don't understand something or you're getting stuck, and we will help you," but they ignore that, so these people would not likely make it through, right?
So I don't want someone to go away and solve something on their own, because collaboration is super important; at any company, you can't solve problems on your own. As a company you need to work together. So that's one of those attributes that, on the technical side, would definitely mean they're not going to make it through.
TIM: And just to jump in, you evaluate that because in the interview you're giving them a live problem to solve on the spot, and if they don't ask you for help, that's where you get that uncollaborative kind of vibe from them?

AARON: Yeah, exactly, so I really like the live coding, the live problem-solving interview. It's not for everyone, but like I said, I tell them really upfront, "Please ask me anything; I'm really here to help you solve this thing." I do have a predefined set of things I'd like them to get through, but I don't tell them what those are or present them in the interview, because this is a live interview; for some people it may be the first time they've coded in front of others, and I don't want to add the extra pressure of "Oh, I'm only 10 percent of the way through this thing," because people freeze up or worry they took too long on something. I'm not so interested in whether they can get through everything, but rather in how they did it: that they can collaborate and learn. So yeah, I'm a big fan of that kind of interview; that's where you see it for sure.
TIM: Yeah, it must be a little anxiety-inducing for some candidates to do live coding. If I think about myself, the last time I worked in an office, if someone came over to my desk and asked me to start chopping out some SQL, I think I would lose the ability to use my fingers pretty quickly. But I guess the alternative is really the take-home test, isn't it? Which has its own pros and cons, because it normally takes the candidate way longer to do, and it opens up the opportunity for cheating. So is the live interview, for you, just the most efficient way to get an understanding of their skills?
AARON: Yeah, I think so. It becomes very clear very quickly who will work out and who will not, basically. And with the take-home assignment, especially around coding, there's no point these days; there's no point in sending a take-home coding assignment whatsoever, because you're just going to get a bunch of ChatGPT results, right? Fine, you can use ChatGPT; that's almost a minimum requirement for a job these days anyway. But, for example, I might give a task like: can you pick a model on Hugging Face and give me a model card for it? Even if they just copy-pasted information from Hugging Face into ChatGPT and produced something for me, I then ask follow-up questions during the interview, and even if they'd never seen the model before, I would expect that they've been able to reflect on it and understand it. So you can get around it in those kinds of cases, but on purely technical things, I don't find any value in the take-home anymore.
TIM: What is your view more broadly than of candidates using AI in the hiring process? So you've basically said one whole step is, in your view, null and void because it's just too impacted by AI. What about, for example, if they were trying to use AI during your interview, like if they were doing it remotely? Is that fair game? Would you have a problem with that? What's your view?
AARON: Yeah, obviously I don't have a problem with people using AI to code, because that is how you should do it these days, quite frankly. But in the interview I ask them not to use generative AI of any kind, because I just want to understand their level, where their understanding is, and what their critical thinking is. If you're using some AI tool, it removes a lot of those components, essentially. That's fine when you're doing the job and you need to get it done quickly, but not when I want to see what you are capable of and what you understand. That's why I ask that they not use generative AI. Sometimes we do have suspect interviewees who are looking offscreen, saying, sorry, I'm having a problem doing this or that, and so on. Okay, I don't know if they're doing it or not, but usually those people are not quick enough, and you can get a sense of it. The whole interview becomes a bit too awkward, so they're not doing themselves any favors by doing it.
TIM: Yeah, until we get Neuralink or some direct brain access to ChatGPT, it's going to be too slow, isn't it, to listen to interview questions, look at ChatGPT's response, and put that back into your own words to answer the question? Surely that's too complicated; you may as well just try to do the interview yourself rather than relying on an LLM.
AARON: Yeah, exactly. I have thought about maybe integrating some kind of prompt engineering component into the interview: here is the problem; how do you get the best thing out of ChatGPT to solve it? Just to see what people's experience is with it, because it is important to know how to prompt successfully; natural language on its own is not always going to give you the best answers. That could be an extra component in future interviews, as we're really moving into this new era of coding.
TIM: Yeah, when I've spoken to hiring managers who are doing some kind of project take-home test, they've by and large said, Yeah, I know they're going to use AI; that's fine. I want to know how they're using AI, so they almost go past the first level of all this weird game where one person's pretending like they're not using it and the company's hoping they're not. They're going to know you're going to use it. It's fine. How have you used it exactly in detail? Let's start scrutinizing that, and then you can almost get down an extra layer and, yeah, start evaluating this new skill that I guess everyone needs to have at the moment.
AARON: I think it's a really interesting topic in itself, whether or not generative AI is going to ruin critical thinking for the next generations and so on. I'm not a forecaster, but in the foreseeable future, the next five years or so, generative AI is probably still going to hallucinate plenty. It doesn't have reasoning; it doesn't reflect on what it's giving you in any way, so we're always going to need a human in the loop, so to speak, to review the output, right? And to get the best outputs, you also need to understand it; even if it's just on a larger scale, not in all the specific details, you still need to have a good understanding. And for junior candidates, I think it's a shame if they rely on generative AI to do these things for them, because they're losing out on learning. We need to learn; we need experience; we need to fail; we need to go through the mundane, painful stuff to actually get that experience, right? It's tough, and if you don't do that, then you're not going to be able to reflect on what generative AI gave you. It looks right, okay, I'm just going to copy-paste this into this report. They're not going to think, okay, this is not quite right; this is interesting, but maybe we should reframe this in a different way, and so on. I think potentially we're going to lose that ability if we just rely on these tools.
TIM: Yeah, I've thought something similar, and I know a lot of software engineers would say they don't really know what the solution is until they start writing it; the thinking is the writing; it's all intertwined. But then part of me thinks, I don't know, 50 years ago people had to solder their own motherboards together and write machine code, then low-level code, then high-level code, and we keep adding these layers of abstraction, where now it's just natural language converted to code. Maybe that process could become so accurate that it's null and void for us to understand it in any detail, but I guess...
AARON: Yeah, I agree. I don't know if these current LLMs are ever going to solve hallucination; the underlying technology doesn't allow it, or at least we haven't come up with that solution. So until you can be really confident that the output is right, as if you'd just hired a data analyst to do it, you need someone to reflect on it. And even when you give a junior analyst a task, you probably also check it before it goes to a client or something, right? There's probably another check anyway from someone else. So I don't think we can rely on it anytime soon, and I would advise more junior people: you can use generative AI maybe as a guide, but then go back and try it yourself, at the very minimum, basically.
TIM: Yeah. I've noticed a tendency of some people to use ChatGPT and almost go, This is what it gave me, as opposed to, I've used it as a tool to solve my task; it's my responsibility, and the fact that I've made it more efficient is good for me, but I still have to own it, whereas I get the sense from some people like, Oh, here's the answer it gave me. Oh no, it's still your job to figure out whether or not it works; it's on you.
AARON: Exactly, yeah.
TIM: Aaron, as one final question, I'm wondering if there is anyone, or any company, you've seen who does hiring in a really interesting or good way that you've learned a lot from and perhaps incorporated into your own process.
AARON: Yeah, I would say, well, one, I'm happy with SkillLab's hiring process in general, and also from my own side, right? I was hired by SkillLab, and I really enjoyed the hiring process; I really got a sense of them valuing me, and that was very different, to be honest, from many other interviews I've been through. I really liked that people are really respectful of candidates' time and energy; I think that's something you must have in your hiring process. In terms of specific things: in the past I worked at a bank, and interviewing for banking roles is pretty intensive. You go through many rounds in a day, maybe even multiple days, and what I actually took away from it, and I think it's used everywhere in the big tech companies as well, of course, is the brain teaser. I have a particular brain teaser that I like to ask, and it's not that there is a right answer. It's open. It's just, how would you solve this very broad problem that no one's worked out the answer to? It's not something feasible; you just ask them to describe how they would solve it. What assumptions are you making? All these types of things, right? And it's a really good indicator to me, not really of yes or no for hiring, but rather of how I will work with this person. Is this person someone who is detail-oriented, who needs the details of the task spelled out, or are they someone who can go away and work on their own a bit more, who's more independent? Things like that. It's been a hundred percent reliable indicator for me, basically, and it's also a little bit of fun, because people answer in very interesting, very different ways. So yeah.
TIM: I experienced a few of those brain teasers myself during my junior years going for consulting and banking roles, and yeah, I can remember a few of them. We won't ask you to leak your favorite one, because then candidates could prepare, and it's better to hear their answer for the first time, I'm sure. So there you go: a tip there for people evaluating candidates. Get those brain teasers out, those "How many balls can fit into a 747?" styles of questions, and get those creative juices flowing from the candidates.
AARON: Yeah, yeah.
TIM: Aaron, it's been a great conversation today. Thank you so much for joining us and sharing all of your insights with our audience.
AARON: No, thanks a lot, Tim. It was fun.