In this episode of Alooba's Objective Hiring Show, Tim interviews Paul Viefers, Analytics Specialist, about the challenges of hiring top talent in the data and AI sectors across Europe. They explore the current tightness of the job market, the impact of economic uncertainty, and the ongoing competition for talent. The conversation delves into the disparity in application volumes between North America and Europe, bias in the hiring process, and the potential for AI to enhance recruitment. Paul shares insights on leveraging AI for better candidate identification, the importance of transparency, and the need for a balanced approach between data-driven and human-centered hiring practices.
TIM: Paul, welcome to the Objective Hiring Show. Thank you so much for joining us.
PAUL: Thank you, Tim. Great opportunity. Let's talk some hiring.
TIM: Absolutely, let's do it. Let's jump in. From your side, I'd love to get your perspective on the lay of the land of the current hiring market in Europe for data-related roles, because I'm hearing a lot of pain points from different people in different countries. I'd just love to get your perspective on things.
PAUL: Yeah, I think it's still very much a tight market in my view, at least for the roles I have been recruiting for throughout my career. It was definitely an even tighter market before. With the economic softening that we expect, and that we already see, people tend to cling to their current roles and their current employers. Fewer people being willing to move and be hired by other players definitely didn't help the situation. Great talent in the data and AI space has always been scarce, and it is still a challenging situation to be in here in Europe.
TIM: What about in terms of application volume? So one thing I've heard a lot recently, maybe more from North Americans if I think about it now, is that they're just being inundated with applications—sometimes hundreds, even thousands, of job ads—and these are for individual contributor data roles I'm talking about. Is that a pattern you've also experienced?
PAUL: The number of applications has not been in the thousands, but certainly we have seen great variation over the years. Recently, the number of applications we receive is definitely on the lower end, and I think in Europe it's the economic uncertainty that leads people to stay with their current opportunities and not rock the boat too much, because you don't know what might be coming over the next year or two. And then there's also the continuous competition from US players, especially for really the top-notch talent in the market; salaries there are very competitive. People want to stay there; they're not inclined to move over to Europe, purely due to the incentives. So it's a challenging situation if you're looking for great talent in the data and AI market, given the significant uncertainty.
TIM: Okay, so even though there's maybe a bit higher unemployment compared to a couple of years ago or that sort of peak pandemic period, even with that, the people that do have jobs are a little bit reluctant to even consider looking elsewhere because, as you say, there's this kind of growing uncertainty.
PAUL: Yeah, exactly. See, in Germany you have that probation period of a few months during which an employer can still end your new employment contract with four weeks' notice, and I think for some people that is enough uncertainty that they would rather stay with their current role, see what's coming, and in one or two years it might be a better situation to consider a new opportunity. Similar to other business decisions, the uncertainty is making people very much stick to where they are at the moment.
TIM: And what about AI? The two most said letters in probably the history of the world by this point. Have you noticed any impact that it's had on hiring, either on the candidate side of things, or perhaps have you dabbled with some of the tools yourself in a hiring context?
PAUL: Yeah, of course. On the one hand, there are all the applications everyone is talking about on the web: automated CV screening, et cetera. The same on the candidate side: applicants using AI to draft or improve their CVs. So there's a lot of automation going on, but I do think there are even more interesting applications or use cases for AI, where, for example, it can assist recruiters, be it in the corporate sector or elsewhere, to better understand the roles and the markets where to look for talent with a specific skill set. It's not used there as widely as for automating the boilerplate tasks, right? CV screening, et cetera. I think the most interesting applications are actually in the needs-identification stage of recruiting: where should we be looking for candidates? How should we position ourselves in the market? How should our job ad look? These are very interesting and, I think, very impactful applications of AI in the recruitment process.
TIM: And are these things you've already seen attempted, or do you feel like this is unexploited territory where there should be some change soon?
PAUL: At least from what I know and what I see in the corporate sector, but also in consulting, it is used for the automation part, yes, for handling recruitment documents along the life cycle. But before even receiving applications, for identifying where we should go and what the marketplaces are for the talent we're looking for, it is certainly underutilized. Let me give an example. Many tech players are looking for very niche talent in, for example, causal AI, and one thing I was surprised to learn is that many recruiters are not aware that there are, for example, economists who are very much trained in that space. There's an annual job market run by the American Economic Association, where PhD candidates go on the market and can be interviewed. Few recruiters and few organizations outside the big tech players like Amazon and Meta are aware of this market. If you go into a conversation with ChatGPT or any of the other language models out there and look for that specific skill set, that job market gets mentioned as one of the places you should be looking and where people recruit top talent. That kind of back-and-forth with an AI can be very helpful at that stage of the recruitment process.
TIM: This is really interesting to me because this is not something I know much about, so I'd like to drill down a little bit more here. So you're saying there's a submarket of economists who have a skill set that overlaps with a requirement in AI and data science, and they're not necessarily going to apply to a job ad through LinkedIn or a careers page. They're in some other network that could be tapped into.
PAUL: Exactly, and just being aware of who the people are in that network, where it exists, and what events are around it: that is something where AI can really give you a much broader view of what is out there, and few people use it for that.
TIM: That's really interesting. Thinking about it now, there are probably equivalent pockets of communities elsewhere. These could be Slack groups, maybe not as popular now as they used to be, and probably Telegram channels as well. Someone I spoke to recently has about 2,000 Telegram followers in a little analytics group in their part of the world. There are probably these little subgroups around the place that don't really fit into mass-market job boards, so it would be easy to neglect them. There's not a full ecosystem built up, but you could tap into it if you were clever and cunning. ChatGPT would be a way to identify them in the first place, but you could also customize the messaging to appeal to them, and all sorts of things, I suspect.
PAUL: Yeah, exactly, and especially in long-standing corporate organizations, recruiting and HR probably know the niche they have been recruiting in for one or two decades, right? Now, with data and AI, many of them need to break into new kinds of roles they have not recruited for before; they might not know where to look for that type of talent. That's a stage at which there could be much more use of AI than there currently is.
TIM: I personally have to say I've flipped over the past six months. I used to be a little bit skeptical of AI, feeling like it was overblown, and I've swung to the polar opposite: I'm bought into the hype. I feel like, in particular, there are so many things in hiring that could be drastically improved with current-state LLMs, so we'll just have to wait maybe the next six to twelve months for the application layer to catch up, some new wave of AI-driven ATSs or HR tech, because all the existing products are really built for what I now think is an old world where it's just a human sitting there clicking buttons and moving stuff around. That world has to end; I think we can move on from it. And when I think about each of the steps in hiring, so many of them could be either automated or greatly enhanced with AI. For example, I feel like feedback on a live human interview is a pretty good use case for AI, because at the moment most companies wouldn't provide feedback after an interview, for many reasons, one of which is that the notes and transcripts from those meetings are often hacked together. The last time we did research on this, which was only 18 months ago, 20 percent of all interviewers used pen and paper, like literal paper, to write their interview notes. The technology is already there to record and transcribe those meetings with near-perfect accuracy, and you can imagine sharing that with candidates and sharing it internally. That would seem like such a no-brainer to me.
PAUL: True. On the other hand, I'm probably still more of a traditionalist in the sense that I still call people after recruitment interviews to provide feedback, and I think a lot of the candidates I speak to and have interviewed appreciate that personal feedback. Of course you cannot scale that if you have thousands of applicants, but if the process allows, I am very much inclined to make it a personal exchange via phone or email. I appreciate it cannot always be scaled to that level, but if it can be done, I would always opt for personal feedback rather than feedback automated by AI. Especially in the consulting realm, for example, it's a people business, and candidates value this very much. Some players in the market even leave it to assistants to call back candidates they probably don't want to proceed with; that's something I haven't done in the past, and the feedback on that has always been very positive. So there's a level of human interaction that I think helps build long-term relationships. Maybe that changes for a new generation of people who are used to AI and are fine getting AI-generated feedback as a summary of a call, but my recent experience is that people are still very impressed if you manage to get back to them with personal feedback. That would always be my preference if I had to choose.
TIM: Yeah, for sure, I completely agree; that would be better from a candidate's perspective. And good on you for doing that, because you're definitely in the small minority of people who consistently give actionable feedback, and I like that you call them, because that's clearly the harder thing to do: it's more valuable, it's more effort, and it gives the candidate an opportunity to ask follow-ups. If you can deliver it tactfully, it's going to come across a lot better than something written, which you could immediately get defensive about. So kudos to you for doing that. Then I think about other processes and other companies where it's not happening, and for at least a section of those, I think it's just too much effort; they're drowning in interviews. Especially once they get to the end of a day and they've interviewed six people, they haven't written good notes in some system, and it's all over the place. The mechanism to share feedback also feels broken: in bigger companies, the hiring manager writes it, emails it to HR, and HR then eventually has to tell the candidate, with delays while they wait for others… This whole mess, I think, could be transformed completely with maybe good-enough feedback from AI.
PAUL: Totally. Every meeting being transcribed by Copilot, for example, is super helpful in every business context, and I think you're right: it would be brilliant and very valuable for candidates in cases where personal feedback just isn't feasible due to the sheer number of applicants you would have to get back to. That's still better than not letting them know anything or just sending them a standardized email, right? Yeah, totally.
TIM: I wonder, just thinking now, what else will be the barrier, because I feel like the technical barriers are now solved with large language models and accurate transcription. There will be an application built that can push this data around, with the logic for when to send feedback to a candidate and whether to synthesize it first. For example, I know that when I'm writing interview notes manually myself, they're a little bit incoherent and sometimes very direct and harsh, so you can imagine some kind of layer to soften the blow and make sure we don't send anything defamatory. I feel like that bit should be solvable right now, but what else? I wonder if some companies are still worried about giving feedback, like maybe there are legal repercussions; I've heard of companies in America that just say we don't give feedback at all. Are there going to be other barriers, do you think?
PAUL: From my experience, no; we have always been very transparent with that feedback. I think it could also help keep everyone in the process honest, right? If you're discussing a candidate in a group after the whole recruitment day and comparing candidates, having an AI-supported transcript of what they said, how they performed, and what their responses were can help keep things fair. Of course, there's also a human layer to this, where people might have an intrinsic preference for one candidate over another, but if you put that next to the transcript, it can help keep things honest. I don't see a barrier to using it; maybe it should be even more widely employed. It's definitely a great use case for making sure candidates receive tailored feedback that actually builds on the content of the recruitment interviews themselves, rather than being generic and one-size-fits-all.
TIM: I like your idea of keeping people honest, and also of tapping into a data set that at the moment is completely missed in the whole hiring process: those meetings where you discuss several candidates. You have a CV-screening stage and an interview, and those are quite discrete and specific, but often there are these offline conversations where you might unpack a whole series of candidates. Having a transcription of that, and an AI to look at it and, as you say, keep people honest, could be so valuable. In particular, I feel like a really common problem is that when you're discussing all these candidates, and I've experienced this as both a recruiter and a hiring manager, it's so easy to forget what you were looking for in the first place. If you always had the objective criteria there in your conversation, the AI could say: hey, you kept talking about this candidate not being extroverted enough as if that's a bad thing, but hang on, it's nowhere in your criteria that being introverted or extroverted matters. It's so easy for reasonable-sounding things to creep into the decision process that probably shouldn't be there, and some kind of feedback tool from AI, maybe not monitoring, which is a bit too aggressive, could be really helpful. That's a really interesting idea.
PAUL: Yeah, and summarization, right? If you interview a couple of candidates every day, you might not remember each and every good answer, bad answer, or red flag that came up in an interview, and having that handy, ready, and even interactive would be powerful. You can imagine systems you can prompt: hey, how did that candidate perform again? What were some of the highlights of the conversations we had? Those things are perfectly possible, so you could infuse them at various stages of the process, and they're underutilized there. You might know better than I do what recruitment tech is out there, but I have personally not seen it slotted into every single stage of the conversations, with transcripts made available and so forth.
TIM: To my knowledge, that certainly doesn't exist yet, and it's a really exciting angle to think about, because if the whole recruitment process were digitized, recorded, and transcribed, you could have that kind of holistic view of a candidate, who, of course, goes through several rounds. You might not remember what they said in the first round, and there are different interviewers involved, each with their own experiences. So a holistic picture of all those interviews, with summarization, synthesis, and recurring themes, would be really insightful and, I think, a huge value add compared to the way we currently do it.
PAUL: And I think it doesn't even end there. In an ideal world, you would probably build up a data set of candidates over time, right? And you can grade them on a curve; the LLMs are sometimes surprisingly good at really semantically interpreting what people said. I could imagine they would do reasonably well at grading, for example, a consulting case that we do with people in interviews: this candidate performed well, and that candidate performed even better. If you keep a track record of all these transcripts, then maybe you arrive at a system where you take a certain candidate's transcript, their solution, their answers, and you ask the LLM: how does this compare to the history of candidates we've had over the past one or two years? That would be really interesting as well, and it would feel more objective than some of the current grading, where different people make very subjective assessments of interview performance; at least there would be one single evaluation system standing behind it.
TIM: Yeah, there's just so much upside, and I'm really quite excited now to see what happens in the next few years, because, I don't know about you, but I feel like the concerns about AI are fair enough, especially in something as important as hiring. But the way hiring is done at the moment is so flawed in so many different ways that it's hard to imagine how we're going to make it worse by implementing a well-thought-through AI solution. There's just so much upside and not that much downside, because we're already near the bottom. Maybe I'm being especially cynical, but I see so many gaps and flaws in how hiring is approached that, on the spectrum from science to astrology, I feel like it's a lot closer to astrology.
PAUL: Yeah, it's similar to when people compare how AI performs relative to humans on creative tasks. Sometimes there's a lot of the classic Dunning-Kruger effect involved, where we as humans overestimate how creative, how capable, how original we are. In a similar vein, it might be that we underestimate how biased we are relative to an AI. I totally see the risks people point to around AI amplifying bias in training data, but to your point, I have not yet seen a clear comparison of how biased a human-only process is versus an AI-assisted one. Is it more or less biased? The reality is that a lot of hiring processes are not as good, not as unbiased, as we would like them to be, and I also think it can hardly get any worse; it probably would get better if we had some of these systems supporting the whole process. Totally, yeah.
TIM: I've got one particular example I'd like to share, because I feel it speaks quite poignantly to the current mess we're in. I'm sure you've seen studies like this, where researchers take a whole bunch of CVs and split them into groups where the only difference is the name. One recent study in Australia was done by the University of Sydney, with three groups of thousands of CVs: the first group had an Anglo first name and surname; the second group had an Anglo first name and a Chinese last name; the third group had a Chinese first and last name. They then applied to thousands of different jobs around Australia, across different industries and domains, senior and junior, a wide variety, and measured the rate at which each application got a callback for an interview. The first group, with Anglo first names and surnames, got a 12 percent callback rate; the third group, with Chinese first and last names, got 4 percent. From what I could tell, they did a very good job of controlling for all the other variables and obvious reasons why there might be a difference in callbacks, such that the only real difference was the names on the CVs. So if you're Chinese in Australia, you have one-third the chance of a callback for a similar CV compared to someone with a white name. That is dreadful. If I were Chinese, I would be pissed off, because call it what you like, unconscious bias or overt racism, it's not good enough. And if that's the current system, I personally fail to see how it could be more broken than a well-thought-through, transparently implemented bit of AI.
PAUL: Yeah, I can only second that. There have been similar studies by economists on racial discrimination based on names, et cetera. If I remember correctly, there was a Harvard professor, Sendhil Mullainathan, who published a paper investigating exactly that some years ago. Those discriminations have been in the market and have been documented here and there, and that's why I think the human process is not as ideal as we would like it to be. So what are we really comparing against when we compare this to an AI-assisted process? It isn't entirely clear to me. I think there are a lot of very heroic assumptions out there about how unbiased the human process is.
TIM: One of the difficulties with this is that, unlike, say, trying to improve the rate at which we can detect cancer in a patient, where we have a current human way of doing it with a certain accuracy and recall and those typical metrics, so we know the false positives and false negatives of a good doctor, and we can train an AI to try to be better than that, and it's very black and white, either they had cancer or they didn't, in hiring we don't really have an equivalent success metric. Who was the good candidate: the one who got hired, the one who got tested, or the one who had the best skills? If they got hired, is that success? What if they got fired? What if they got promoted? If they left for a better job, is that good or bad? The thing we would optimize for isn't as obvious. Is that part of the problem, maybe?
PAUL: Totally, but interestingly it gets us back to causal AI and causal inference. That's exactly why you have these experiments where people take CVs that are basically the same and just change the name to something that sounds foreign or is associated with a certain demographic group, and see whether people react to that change while all of the hard skills and all of the education are exactly the same, right? That randomization is what enables you to say: this is the only difference between these two CVs, yet this one got a substantially higher callback rate. You need to run those experiments in order to gauge the impact of these demographic or other features of a candidate, and to say: here we see a discrimination that we would not want to see in the process. Because it's not that someone has been to a more prestigious school or has a higher degree; it's just that they belong to a certain demographic group, and that's not okay. That's why these studies, some of which I mentioned earlier, became so prominent: they made it very clear that there isn't anything else at play other than the demographic background of the individuals.
TIM: One thing I've also seen recently is an extension of this type of analysis within an organization, on promotions, so not the hiring decision but whether or not someone gets promoted. Someone in my network recently did a very comprehensive analysis of 20 years of Australian public service promotions: 100,000 data points, the entire population. They found what I can only describe as catastrophic bias against people from English-as-a-second-language backgrounds, especially Asian Australians. If you're in that group, your odds of getting a promotion are far lower, controlling for so many other relevant factors: years of experience, performance reviews, education level, language skills, all the obvious things you'd think of. The researchers are economists, econometricians, so they know their stats, and having looked at all that, they still found this straight-up ridiculous level of bias. So it's not just in the hiring decision; unfortunately, it's sometimes inside the organization as well.
PAUL: I think there's a lot of politics involved in these decisions anyway; they're probably skill-based or performance-based to a smaller degree than we would like. Much of this is also a political decision: who you would like to see take the reins of a company or lead a certain group within your organization. That's not neutral, so I don't find it hard to imagine that's the case, and there's also, traditionally, a glass ceiling for women across organizations. We've documented many of these effects, so, like I say, no surprise. And AI could help counter this by providing a more neutral voice in the room, still maybe not unbiased, but more neutral than the other voices in the room making that decision, to the point where it might actually turn the decision toward a different candidate. It definitely sounds like a promising application because, like I say, it can hardly get worse.
TIM: One thing I've been thinking about a lot recently is that, at least from what I've seen, the people leading talent acquisition and HR functions in an organization would rarely come from a data, technical, or scientific background. I know, just from six years of working in this field, that proposing a more data-driven way of doing hiring is anathema to them; it's the polar opposite of what they think is the right way to go. They'll normally say something like: oh, we prefer a more human approach to hiring, as if that's the opposite of an objective, measurable, numbers-based approach. They almost view the numbers and metrics as unfair; there's a perception that the tests are the biased bits, as opposed to the humans being the biased bits, which I have always found a little bit interesting. Is there a sense that maybe there's a skills gap in talent acquisition and HR, because they don't view hiring through a scientific, data-driven lens, and their starting assumption is that AI adds bias rather than removing it? Is this a problem, do you think?
PAUL: I think it's a good question. Better skills, more skills, and more data-driven recruitment certainly don't hurt, but there's only so much you can quantify, and should quantify, when basing a hiring decision, or probably even a promotion decision, on it. In reality it will always be a mix, right? Because people also need to somehow fit into a team and be relatable in most jobs. Imagine you have a brilliant scientist, someone who is just not much of a team player, who is great at inventing stuff but has a very irritating personality. For certain roles it will be very hard for such an individual to really thrive in an organization. That person might hit all the top grades in a quantitative evaluation, but if you adopt the team lens, you already know it will create problems, because that person is likely to irritate a lot of people in the organization. So I think it's fair to blend both views. But I find it hard to say where in practice we are on the spectrum from completely not data-driven to fully automated. I would like things to be somewhere in the middle, but different organizations are on different ends of that spectrum: for some, more data-driven hiring would improve the situation and help them, whereas others might already do enough, and you can only go so far. Certainly for roles in consulting or other network-related positions, you need people who can relate to others and build relationships on a personal level, and that is hardly quantifiable, right? It's something you learn through interactive conversations, building a view of whether that person has what it takes to build a relationship and relate to someone. Does that make sense?
TIM: Yeah, it does, and I'd love to drill down a little further here. I appreciate that some of those softer skills, bordering on cultural fit, like imagining how this person would interact with a client, are clearly something you need to assess in some kind of face-to-face interview or video call. You couldn't really infer that from a CV, and you can't really test for it directly; you have to see it and make a human judgment. That said, what I feel is that when companies evaluate these subjective things, they sometimes do it in a way that makes them almost more subjective than they have to be. I'll give you an example. Imagine you have a cultural-fit interview. You could run it as a go-and-have-a-coffee-with-them-and-see-whether-you-like-them interview, where the person comes out with a yes or a no based on a kind of vibe; call it the vibe interview. Or you could say: here are the five values we're looking to hire for, because they're our company values; here are two questions for each value; here's an example of a great answer to each question and an example of a poor answer; score them on a scale of one to five and add it up. Whoever scores highest has the best cultural-value match, based on however your values are defined. I feel that's a more objective way to do something inherently subjective than the pub-test, coffee-chat vibe. What do you think of that approach and those kinds of trade-offs?
PAUL: I haven't seen or heard of these sorts of vibe interviews that may be out there; it definitely has not been part of any of the recruitment processes I have been part of. But in any case, I think there's only so much that you can quantify as part of the recruitment process and that you can learn at the end of the day, especially if you recruit not just at a super senior level but also for entry-level positions. You need to adopt a long-term perspective on people. You need to somehow trust and believe and place a bet on an individual to grow into the role that you're looking to fill, right? And I don't think you can see that right on the spot. You might find a curious person; they have been doing different things in the past and have shown a tendency to explore different areas. That's fine, but still, you're placing a bet on an individual to come into your organization, grow, find a place, and develop in a certain area. I don't think you can create certainty around that, so there will be some subjective assessment of whether you believe that person will reach that point and take that development path. And I think that's where the subjective part comes in: you need to extrapolate. Based on what I hear from the individual and what I see reflected in the team that we have, do I believe that will work out? It also puts pressure on those who sit on the other side, who recruit and make the decision, to be well calibrated: to place that bet on someone who will then take that trajectory as intended, because it's something you cannot be sure about after an interview or a recruitment day or whatever. And isn't that one of the challenges with hiring? Even if you have an elongated process, like four interviews, you still probably have only four hours of interacting with a candidate.
You could have different interviewers to get a sort of wisdom of the crowds; you could overlay AI; you could have a structured process, but at the end of the day, it's still a tiny sample size of data.
TIM: I wonder then if thinking about it now, another use of AI might be obtaining other unstructured data about the candidate, for example, if they have a YouTube channel, if they have done a few podcasts before, if they've written, I don't know, a Stack Overflow answer, if they've got a GitHub profile, if they've done whatever, a Medium blog. That kind of data might add some value, but it's just very difficult to hunt down, aggregate, and synthesize, so maybe that might give you almost like an interview plus some kind of extra data points. I guess we'd have to do it, though, in a fair way that isn't going and finding a Facebook post that they added when they were 15, which probably shouldn't be included. But what do you think?
PAUL: To a certain degree, these platforms already have some KPIs that you can use, right? Ones that are somewhat neutral: say, how many commits people have made on a GitHub profile, or how many people have actually used the code they have produced. Maybe these kinds of things are, quote unquote, somewhat objective, especially when it comes to how many people have actually used your code or your application, have forked it, have built on it, that kind of thing. I think that is a relatively good assessment of your skills in that particular area, but it comes down to very specific, very technical things, right? So in the whole coding and data science area, platforms like GitHub provide these kinds of metrics that you can track. Other than that, for how you have performed in previous internships, you probably have some evaluation of a candidate from a previous employer, and that's about it, right? And how neutral or unbiased that is, is hard to say. But I think you're raising a great point insofar as it's maybe worthwhile, as a decision-maker who hires, to capture your thoughts and some of the data points around a candidate and build a track record. Then, when you come to the point of deciding whether to hire someone, you look at what has worked in the past, what your beliefs were, and whether you were well calibrated. Have your predictions of whether someone will still be in the organization in two years been no better than a coin flip, or have you actually been very good at identifying those who thrived in the organization? Just having that track record of how well hiring has identified successful candidates would already help calibrate, and would reveal how well the hiring process even works for the organization.
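The "somewhat objective" platform KPIs Paul mentions (commits, forks, and usage of someone's code) could in principle be folded into one normalized signal. A minimal sketch follows; the caps and weights are entirely made up for illustration, and nothing in the conversation claims these numbers predict anything.

```python
# Illustrative only: combining public GitHub-style metrics into one
# normalized 0-1 signal. Caps and weights below are assumptions.

def normalize(value: float, cap: float) -> float:
    """Map a raw count onto [0, 1], capping so outliers don't dominate."""
    return min(value, cap) / cap

def github_signal(commits: int, forks: int, stars: int) -> float:
    # Weight adoption (forks, stars) above raw activity (commits),
    # echoing Paul's point that use by others says more than volume.
    return round(
        0.2 * normalize(commits, 500)
        + 0.4 * normalize(forks, 50)
        + 0.4 * normalize(stars, 200),
        3,
    )

print(github_signal(commits=300, forks=10, stars=60))  # 0.32
```

Even a toy score like this makes the weighting explicit and auditable, which is exactly what the coffee-chat vibe assessment lacks.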
TIM: Yeah, and I wonder if an improved, let's say, HR 2.0 system would view hiring more holistically, because at the moment these systems normally run from job ads through to offers, but really there's onboarding, there's training, there's a probation period, there are all the subsequent reviews, and there's all the work the person is doing. Surely something more holistic that can see further through time would be better. Even something as simple as: all the stuff we learn about a candidate in a hiring process, surely some of that should inform what their first day looks like or what their training looks like, because we should already have learned about some of their skills gaps or what they're interested in. And so having a centralized place where that data is synthesized might be a big help again.
PAUL: Yeah, I think you're raising a really good point that I have also made in some client conversations when we were talking about how to use AI in the HR process, and you're highlighting very rightly that it's not just about recruitment, right? HR, of course, is also about what happens in an organization with those who have been hired, who have been with the firm for many years, and probably also those who exit due to retirement or move on to the next opportunity. What I say is: why would HR not be the custodian of all the knowledge that has been generated over the course of all of those careers? I think it's up to HR to fill that gap and say, we can use all of these AI agents, we can use Copilot and Microsoft-style systems that tap into all of the documents, all of the things we have written, to help people onboard. You have some AI assistant that you can query before coming onto a new project or joining a new department, so you don't lose that great deal of experience someone in your organization has developed over the years when he or she leaves and moves on and it's gone, right? I think that's where much of the value sits, from my perspective, when using AI, and especially language models, to maintain knowledge repositories and take this whole knowledge management process to a new level.
TIM: That's a really interesting angle. I never thought about it. So you're saying even if someone has left the organization, we should still be able to get value out of what they contributed through time, and that knowledge shouldn't walk out the door with them? You retain it, basically.
PAUL: Yeah, I think with today's technical possibilities, you can certainly retain it to some level. It's not going to replace that person; it's not going to be like they were still around. But you can definitely make a lot more available from the captured, unstructured data that will be helpful for those who take on the role, or those who join the firm and should be acquainted with some specific area a previous employee has been working on. And there's not so much being done in that space currently; a lot more could be done. If you extrapolate several years into the future, maybe several decades, there might be a point where you can actually retain all of that knowledge and almost duplicate or spawn a surrogate of a previous employee and really retain that in the organization.
TIM: Exactly. Yeah. Especially if you've got a trailing commission: if your bot has been trained on you and all your intelligence, you get a clip of its value. There you go, there's a cool idea for someone to implement.
PAUL: It raises a lot of interesting questions, right? Like you say, around permission and being replaceable. But knowledge and people are two of the biggest assets that firms have; that's what we have been saying for decades now, and we now have very interesting new technological means to retain them in an organization, means that I think are not used that much.
TIM: Yeah, again, I'm just so excited about what we'll see in the next few years, because the large language models have improved so rapidly, and now we're just waiting a little while for all the applications to be built on top of them, to make them interface with other tools and be accessible to people. So it's super exciting. What about you, Paul? If you had the proverbial magic wand (AI at the moment feels like magic to me, but it doesn't have to be AI) and you could change one thing about the hiring process, what would that one thing be?
PAUL: I think it would definitely help if we could get around the situation where we select candidates who are just not a fit for the organization, right? That whole mismatch is like the Gordian knot to me, both for the organizations and for the candidates. If you could somehow improve the matching between a candidate and an organization, even just a little bit, every single gain would go a long way. It would reduce a lot of stress: I have seen a lot of people stress and suffer after finding out early on that they have made a wrong choice, be it on the organization's side or the candidate's side. So I think there's a lot of deadweight loss here, and if there was anything I could do about that, I would definitely take my magic wand and say it's always going to be a perfect match.
TIM: You mentioned deadweight loss, like the good economist I'm assuming you are, given your background. To continue that theme, I wonder if we already know enough about what predicts whether or not someone's going to be a good hire, and it's just a matter of measuring everything. So we have our regression. We've got our six factors that matter: their personality, their IQ, their skills, their soft skills, their cultural fit, and whatever the other variables are. Maybe at the moment the barrier is that it's just so expensive and time-consuming to collect all that data; you can't spend six months interviewing a candidate, and you can't give them ten hours of work to do. Another way to put it: everyone probably knows within a week if they've made the wrong hire, but those things often don't manifest themselves in the hiring process, because people are on guard; they're not really being themselves. And, as I said, you can't measure everything directly. So is it a case of just needing some intermediary that will measure all these things once for every candidate and then share them with every company, rather than a candidate going through the same laborious process again and again? I don't know. What do you reckon?
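Tim's hypothetical regression can be made concrete as a toy logistic model over his six factors. To be clear, the factor list comes from the conversation, but the weights, bias, and scores below are invented; nobody here claims these are the true predictors of a good hire.

```python
# Toy version of the "good hire" regression Tim imagines: six measured
# factors, assumed weights, and a predicted success probability.
import math

FACTORS = ["personality", "iq", "skills", "soft_skills", "culture_fit", "motivation"]
WEIGHTS = [0.4, 0.5, 0.9, 0.7, 0.6, 0.8]   # hypothetical coefficients
BIAS = -3.0                                # hypothetical intercept

def predicted_success(scores: list) -> float:
    """Logistic model: scores are 0-1 per factor, output is a probability."""
    z = BIAS + sum(w * s for w, s in zip(WEIGHTS, scores))
    return 1 / (1 + math.exp(-z))

strong = predicted_success([0.9, 0.8, 0.9, 0.8, 0.9, 0.9])
weak = predicted_success([0.3, 0.4, 0.2, 0.3, 0.4, 0.3])
print(round(strong, 2), round(weak, 2))
```

The expensive part Tim identifies is not the model but the measurement: filling in those six scores reliably for each candidate, which is exactly what a shared intermediary could amortize across companies.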
PAUL: Yeah, I think that's totally correct. That's why we have internships and the like, where people work for you for a little bit, and as they graduate out of university, for example, they already know whether that organization works for them, and the organization knows whether they're a viable candidate. Certainly that has been a vehicle that I have seen work very well for both sides, so I encourage people who are still at university especially to take every opportunity, because it's actually very cheap, it gives you a lot of information, and it can take much of the burden of making a wrong choice from you and provide a lot of certainty. So these kinds of possibilities exist, and I can only encourage people to make use of them, because that's the easiest way to get around that uncertainty, for both sides really: you have some limited time together, and if it's a match, that's great, but if not, you can both move on, and it's no big loss. Of course, once you're a young professional who has been in a job for a while, you would probably not want to take an internship for a couple of months before moving, so that stops being an option later down the line. But for those still at university or in their education or training, I would definitely encourage using it to get around a bit of the uncertainty you have if you just meet for a day or two and then have to place a bet on whether that's a good relationship or not.
TIM: Yeah, that was certainly helpful for me. My first internship told me I definitely was not cut out to work in corporates, and they certainly thought I wasn't cut out to work in corporates as well, so the feeling was mutual. That was helpful, and that's fine; you certainly have to have the maturity to be able to handle it, which is probably hard when you're young. But I feel like the more transparent hiring processes were, the better off we'd all be, if we could just come into the first meeting and go: this is everything about the job; cool, this is everything about me. And just get it out of the way rather than having these facades and masks on all the time.
PAUL: Yeah.
TIM: We'd be better off.
PAUL: I think many people adopt the perspective that in a recruitment process, as an employer, you should try to gloss over the parts of the job that might be less appealing, like long hours or certain activities that not everyone likes. I personally take the stance that it's better to be open about that at the recruitment stage and in the job interviews, so that you avoid the situation where people find out, "Oh, actually that's not for me; had I just known." And it's a loss, a big loss actually, one that is also financially quantifiable: if you had just been open about the job profile and what the day-to-day looks like, that person would probably not have joined the firm, and you would probably have selected someone else who was a better fit.
TIM: Exactly. I agree completely. It's just all about expectations management. That's the way to avoid disappointment in life.
PAUL: Yeah, and that's why I can only encourage both sides, candidates and firms, to be as transparent as possible. My experience is that it backfires if you try to gloss over certain aspects of a job, a firm, or a team. Also give a perspective on where things are moving and on the uncertainties, say, whether a certain part of the organization will still be there in two years. These kinds of prospects, as much as you're allowed to disclose them, you should talk about and give a view on, because it will pay back, and if you don't, it will definitely backfire at some point. That's my experience.
TIM: Transparency is key, and I hope we can remember that in the next couple of years as we implement AI and make sure that when we put that into hiring, we know what it's doing and keep everyone on board and on point. Paul, thank you so much for joining us for a great conversation with a wide range of topics. I had a great time, and I'm sure our audience did as well.
PAUL: Thank you, Tim. It was great, and thank you for having me.