Alooba Objective Hiring

By Alooba

Episode 35
Charles Shaw on Mastering the Art of Technical Interviews

Published on 12/5/2024
Host
Tim Freestone
Guest
Charles Shaw

In this episode of the Alooba Objective Hiring podcast, Tim interviews Charles Shaw, Data Science Director at T&P Group

In this episode of Alooba’s Objective Hiring Show, Tim interviews Charles, a Data Science Director at a media agency, who shares the refined approach to technical interviews he has developed over a decade. Charles stresses the importance of assessing candidates' quality of reasoning over their depth of prior knowledge and describes his three-question technique, which focuses on evaluating genuine understanding and problem-solving skills. He discusses the challenges and potential pitfalls of using AI in hiring processes, emphasizing the need for human discernment in evaluating candidates' true analytical capabilities. Additionally, Charles delves into the importance of adaptive thinking, methodological justification, and dealing with uncertainty, providing a detailed look at how he distinguishes between surface-level knowledge and deep analytical intuition. The episode offers a compelling exploration of effective interview techniques and reveals insights into Charles's philosophy on hiring and technical evaluation.

Transcript

TIM: Charles, thank you so much for joining us on the Objective Hiring Show. I'd love to start with everyone's favorite topic at the moment: AI. It's the buzzword of buzzwords, probably, but maybe it's justified, and I'd love to hear your thoughts on AI in hiring in particular. Have you had any experience using any kind of AI products in hiring? Have you seen candidates on the other side of the fence use any AI products? I'd love to hear your thoughts there.

CHARLES: Thanks for having me. It's a pleasure to be on your show, and yeah, it's great to be here. AI is a bit of a strange word, really. It means so many different things to so many different people; I'm not even sure it has any meaning anymore. Usually when we talk about AI, we talk about ML-type products, right? The statistical-optimization type of ML techniques have been around for a while, so it's not immediately obvious why we'd need a new buzzword for them, and unfortunately, a lot of the products and services that are labeled AI aren't very intelligent for a start, and sometimes it's just not quite clear what they mean. But if we talk about it in the traditional sense, say, ML techniques used in hiring, then obviously there are certain websites that specialize in this, CV aggregators and so forth, so I guess there are some algorithms running in the background. When it comes to my company, my firm, and, by the way, I should say that I'm a data science director in a media agency that's part of WPP, so we're fairly au fait with hiring statisticians and econometricians, we don't really use that much AI or ML or any automation in the hiring or recruitment process. As such, our recruitment processes are very traditional, very old school, face to face. When we review candidates, it's pretty much CVs. I'm guessing that maybe the recruiters we deal with or reach out to use intelligent searches for keywords and so forth to source candidates. That's probably true, but in my experience interviewing and talking to candidates, there's no AI or ML at all. So we hire people to build their own models, but when we talk about their abilities and so forth, it's just a standard conversation, so the answer is no. Having said that, we do expect our candidates to have a bit of understanding of these techniques, so we may talk about AI and so forth, but that's part of the process. I think back over the last 10 years, and I don't think it's been overused. It's not like I've got 30,000 CVs and I'm going to run a neural network to find the best candidate. That's just not a thing. It's very traditional, very face-to-face, very old school.

TIM: One thing we're hearing incessantly at the moment, especially from people in your markets, so the UK and Western Europe, is that they're being inundated with ridiculous volumes of applications, so hundreds or sometimes even a thousand applications per open role. If you generally use external recruiters, I guess you'd almost be shielded from this problem a little bit, but for companies that hire through their talent team, basically, they've got this pool of, I don't know, 500 applicants for one role, and now they are literally in a position where they cannot, even if they wanted to, even if they thought it was a good idea, manually screen through those with a human. And so what I imagine will emerge is some kind of initial AI screening on the application data, on the CV. What are your thoughts on that?

CHARLES: I'll be very surprised if they actually use AI, to be honest, because that's not a new problem. When I was in my final year of university, I was speaking to recruiters, and I wanted to get into analytics or finance, and I think at the time there was a bank, I won't say which one, but one of the main institutions, that had, I think the recruiter told me, 80 jobs, 80 vacancies, and 12,000 applications that year. And the way they did it, essentially, the recruiter said, look, the recruitment team at this place doesn't have time even to read all of these, so what they have is a staggered set of stages. Stage one is: did you go to a target university? If it's not one of the target universities, then you don't go to the second stage. The second stage is another rule, and so on. This kind of filtering approach has been around forever. So do you really need AI to filter candidates that way? Again, it's not obvious why there'd be a need to use AI or ML techniques for that. Maybe in the sense that if you were part of the HR department and you wanted some analytics done on candidates, what kind of candidates, and you could build out some really high-dimensional model based on features, maybe there I could see an application. But in terms of hiring and recruitment, I'd be very surprised. Having said that, I'm not an HR professional; there may well be a good, articulable case for using ML, but I'd be very surprised, to be honest.

TIM: I think there are probably a couple of angles here. One is you mentioned graduate and intern recruitment, and you're right; that's always been a massively high-volume game where you've had a hundred or a thousand times as many candidates as you had open positions, and so, yeah, they'd use a kind of binary logic: What was your weighted average mark? Was it above X? Did you go to a particular university? All those kinds of things to do that screening, which, as you say, is just a filter on a field, pretty simple. Probably the difference now is that these volumes are being seen in professional hires, where you can't really have that crude logic of what was your weighted average mark, what university did you go to, because by the time you're in your thirties or whatever, it's irrelevant. So then they're looking for other ways to do the screening that would normally be done by someone looking through and getting a quick sense of the skills and experiences that match what they're after. And because that's normally in an unstructured data format, which suddenly now can be read with an LLM, maybe that's where the opportunity lies. The other angle we're hearing about is that there are more applications per applicant because candidates are able to apply en masse to different jobs; that's part of the driving factor. They're using LLMs to apply as well as LLMs to optimize the CV, so the average CV matches a lot more than it used to because it's been optimized to match the JD. So now not only are they facing 500 CVs, but 400 of them look good, and it's a false dawn because there's just more lying and more bullshit on the CV, even more than there used to be, and so they're looking for a way to curb that. That's my current lay of the land.

CHARLES: Yeah, but isn't that the case for not using AI, though? If everything's going to become so formulaic and keyword-optimized that it essentially serves up the same old keywords, then yeah, I'm not sure AI is a meaningful prism to look through in this context anyway. Once you get down this rabbit hole, we start to talk about what kind of AI we're talking about. Are we talking about stuff that has been around since the sixties, like symbolic AI, or are we talking about more modern stuff? AI is not just about LLMs; far from it. And it can become really just another neologism, another buzzword. I can tell you how we recruit, essentially, or how I look at this problem, or techniques I've used in the last two or three jobs, if that helps; it might be helpful to illustrate my philosophy, my thinking about this, if you like.

TIM: Okay, so there have been a lot of experiments around the world in different markets. I'll give you a specific one in Australia: a couple of years ago, the University of Sydney took several thousand CVs and split them into three groups, and the CVs in each group were similar except for one factor: the names. The first group had CVs with an Anglo first name and surname; the second group had an Anglo first name and a Chinese last name; the third group had a Chinese first name and a Chinese last name. Those were the only substantial differences among the pools of CVs. They then applied en masse to thousands of different jobs in Sydney and Melbourne: senior jobs, junior jobs, different industries, different functions, a wide variety. Then they measured the rate at which those applications got a callback, either through an email or a call or whatever. Long story short, the first group, with the Anglo first and last name, got a 12 percent callback rate; the third group, with a Chinese first and last name, got a 4 percent callback rate. So if you had a Chinese name and you were applying for a job in Australia, you had one-third the chance of the equivalently talented candidate with an Anglo name. That is, I think, a pretty catastrophic scenario if you're from a background that's going to get discriminated against in those early screening stages, either through overt discrimination or some kind of unconscious bias where they're not really thinking about what they're doing. But I feel like humans doing that manual 10-second CV screen, maybe once a week or something, and bogging down the process, is such a huge area for optimization, because that's where 99 percent of the candidates currently fall down and get filtered out. I feel like that's where the huge unlock would be, because once you get to the interview stage, I feel like it's pretty good: you get someone into a hiring manager screen; they can really dig into their skills and understand their experience. But in those earlier stages, God, some of the best candidates are never even getting a look in the door, and I'm sure you've had the experience of lots of mediocre candidates clogging up the pipelines. I feel like the accuracy of that screening step at the moment is so low that surely even something simple would help: extract from the CV automatically just the bits of signal, eliminate the noise, get rid of their hobbies, and get rid of their name, so you don't even give a human the opportunity to do that biased screen, and then quantify, based on the paper, how well they match the job ad. Surely that would be an improvement on someone in talent reading a CV two weeks after the person applied because they didn't have enough time to screen it.

CHARLES: Okay, let's first discuss whether that's a problem and whether it's solvable; that's a separate conversation, and I'll get to it, but you could just filter out names, right? That's also, I think, standard practice. So let me just get this right: there was a study in Australia, but the study wasn't replicated in, say, China, for example, with Anglo-Saxon names versus Chinese names? It was only in Australia?

TIM: No, but there have been analogous studies in the United States, England, and Norway using different subpopulations. In Australia there was another one using, I think, Middle Eastern names, Jewish names, Aboriginal names, what have you, and then measuring the rate at which they were called back.

CHARLES: But there is such a thing as statistical discrimination, right? Say, for example, you were to replicate this study in China; you would also find some patterns. There's a sort of argument about fairness versus efficiency and so forth, but statistical discrimination itself is a recognized concept, which I think is highly relevant to this discussion: the use of observable characteristics, such as demographic attributes, as proxies for unobservable traits, such as skills or productivity or flight risk, for example, often due to perceived or actual statistical patterns in the population. So I appreciate that employers or hiring systems might rely on stereotypes and generalizations related to education, gender, ethnicity, and other factors, but, and I'm not defending either way, I'm just pointing out that there's a broader discussion to be had. In different countries, patterns of statistical discrimination might reflect local socio-economic dynamics. In China, a certain set of backgrounds might influence hiring decisions; urban candidates, for example, might be seen as having better access to education or resources. In Western countries, factors such as socioeconomic background may come into play. So there's a broader discussion here. I'm not necessarily saying that this is a cause for alarm and we should immediately say that this is globally unfair, totally unfair, but there's also a discussion to be had about what automated screening is. You can say automation can reduce bad biases, but if algorithms are trained on historical data, that might perpetuate existing biases, and that's also a problem. Even anonymized data can reflect systematic inequities, such as certain educational institutions being over-represented in certain high-paying jobs in certain industries. So it's a bigger issue than just pointing out a case study and saying you need to fix it by applying certain things. It's an interesting case study, but I'm wary of generalizations when it comes to these sorts of studies, right? There are efforts to mitigate some of this discrimination, for example, fairness metrics. My HR department, for example, incorporates fairness constraints to ensure that they don't disproportionately advantage certain groups, and that's fine. There's also, in the UK, the Equality Act 2010, which essentially mandates equality of opportunity, so it's enshrined in law, and I'm obviously a big proponent of that, not to be confused with equality of outcome, which is a different thing, of course. So yeah, it's a big, difficult issue, but in the global hiring context, tools must be adapted to local socio-economic dynamics, and there's certainly a discussion to be had.

TIM: So I'd love to hear your thoughts on how you developed your philosophy on interviewing and how it all works.

CHARLES: Yeah, the short answer is, so just for context and background: we're part of a media agency, and we do marketing econometrics, essentially, so we need to look for people who are competent econometricians or statisticians, and that usually involves hiring at the MSc or PhD level. We're receptive to graduates and experienced hires, and in my current role I serve as a technical director, which means I get involved in the third, usually final, technical round of the interview; the first two rounds are chemistry and business-type context, whatever. So the third round is technical, and we give them a take-home test: here's the data, here's the problem, try and solve it, come back in a week or two, and we'll talk about it and you'll do a presentation. And then my interview technique is basically three questions, and it hasn't changed over the last 10 years. Previously I worked in publishing for five years, and there I had to hire and build a team, so it served me well there, and it's certainly served me well here. The three questions are very straightforward. Question one is: talk me through your CV. Question two is: why have you applied to work here? And question three is something about analytical thinking. That's the short version. The longer version, and I'll expand on that final question, is: how do you assess analytical thinking? That's difficult, and in my experience hiring for professional roles, I've developed a technique that centers on asking questions about familiar business or economic concepts or topics that most business-aware people will have had some exposure to in their lives. I'll give you an example: ask somebody what happens if interest rates rise tomorrow, and, just to frame the answer, tell them you'll feel the effects across three timeframes, short term, medium term, and long term, with short term being less than 12 months, medium term one to five years, and long term five years plus. The question works well because most people have some familiarity with interest rates, from the papers, a credit card, a mortgage, a loan, whatever. Or you can replace interest rates with something else; just pick up a newspaper and open the front page; any common business topic will do. The aim isn't a test of specific knowledge about interest rates or economics, since that's not the point; we're trying to see how candidates think through complex problems and handle uncertainty and, most importantly, how they recognize the boundaries of their knowledge and work within those boundaries. And then we want to see how they build upon these ideas and reach deeper insights. So first you want to find out what I call the floor they're on. The basic level we're looking for is fundamental cause-and-effect understanding; any educated person might have this. Then, to me, an intermediate level might explore interconnections and second-order effects, and the advanced level might examine complex systemic relationships with theoretical frameworks in mind, graduate-level economics. But you want to find out what floor they're on, right?
It might be a history grad with no understanding of interest rates or that sort of economic theory, or they might have a master's or PhD in economics; so let's find what level we need to talk to them at. We first calibrate the candidate's knowledge level, and we need to listen carefully for the difference between memorized responses and genuine understanding. Once we identify the knowledge threshold, we enter the second stage: we engage with them and start pushing boundaries. And the point of this is that what I look for is understanding of a concept at an intuitive level. When I hire a statistician, I want to see understanding of statistics at the intuitive level; when I hired business analysts in my previous role, I wanted to see business analysis understood at an intuitive level. And that's quite difficult to test on an exam or whatever, so this is the right approach for it, right?

TIM: And when you say intuitive level, what does that mean exactly?

CHARLES: Okay, so we've asked the question, and, for the sake of argument, I'll give you three example answers, right? An entry-level graduate might say: when interest rates rise, several things happen in the short term; people and businesses borrow less because loans become more expensive, and savings accounts start paying more interest. In the medium term, the housing market slows down, usually because mortgages become more expensive, and companies have less to invest in projects or equipment because borrowing costs more. In the longer term, the economy slows down, which can help control inflation. Okay, that's a standard, basic-level sort of answer. Somebody more experienced, maybe an economics grad, might say that rising interest rates would trigger several market adjustments: in the short term, there'll be repricing of variable-rate debt instruments; consumer discretionary spending might decrease because debt-servicing costs rise; bond prices fall as yields rise; banks might see higher net interest margins; capital expenditure might be re-evaluated; the real estate market experiences double pressure on valuations because of higher mortgage costs; and in the long run, economic growth moderates as the cost of capital increases. And then you might have a more sophisticated professional with actual economics training, who'll start talking about monetary policy tightening cascading through the economy via transmission mechanisms: in the short term, an immediate repricing of the yield curve with expectations priced in; they might start talking about labor market conditions or whatever, and they might start talking about the credit channel, reduced growth in the broad money supply, interest-rate pass-through, that sort of framework. But you need to find the level at which to engage them. Fine, if they're an expert, let's talk to them at that level; if they have a basic understanding, likewise. And then, obviously, the red flags would be overconfident assertions, statements beyond their knowledge limits, heavy reliance on jargon; beware of candidates who can't explain concepts simply, that sort of thing. On the positive side, look for candidates who clearly articulate their reasoning and appropriately acknowledge their limitations. You want to prioritize quality of reasoning over depth of prior knowledge, because you don't want to hire somebody who's just spouting received wisdom. So the idea is to pay attention to how clearly they communicate, and this approach helps identify candidates who can not only handle the technical aspects but can also think critically about what they're saying and adapt to new situations. And then you just push the envelope a little bit and say, okay, let's say interest rates rise, and what happens if X, Y, Z? And see where they go from there. It's very important to find the level they're at and then push just beyond it. If you were hiring statisticians, for example, you could apply the same methodology: again, basic understanding, more advanced understanding, and expert understanding. So let's take a common statistical concept, stationarity, and say: explain stationarity to me.
The basic understanding would be talking about statistical properties, mean and variance, that don't change over time and so forth, right? So if you're looking at stock returns rather than stock prices, the series tends to fluctuate around a constant mean, and you need stationary data for the normal statistical techniques to be valid. That's a fine, acceptable explanation. Or you might have a more advanced, PhD-level understanding: they might talk about stationarity as a fundamentally economic rather than statistical concept; we often rely on tests like ADF and KPSS or whatever, but these really support our economic reasoning rather than drive it. Once you find out where they're at, just say: okay, fine, let's say you have a regression; what happens if you have non-stationary data on the left side of the regression? What about the right side? Can you mix them? And just see where they go. The idea is to test them in an area where they haven't been before. Let's say you're analyzing GDP and inflation: would you expect these series to be stationary? Why or why not? How would you approach testing if you suspect a structural break in your series, or a unit root in your series? And a unit root is not the same thing as non-stationarity, by the way, right? So again, do they know that? Do they rely on statistical tests, or do they show economic reasoning? Can they explain complex concepts in simple terms? Do they recognize the limitations of their knowledge? By the way, it's fine to say, I don't know. Can they connect theory with practical applications? Red flags would be overconfidence without consideration of limitations, where they just start spouting stuff, or an inability to understand why stationarity matters beyond passing a test. Green flags would be recognition that stationarity is an economic assumption, asking what kind of stationarity we mean, understanding the nuances, like near-unit roots and structural breaks, that sort of thing. The point is that there are certain benefits to this approach. One is that it reveals true analytical capability over memorized knowledge, because you want to effectively distinguish between candidates who memorized concepts in their course and those who, when pushed beyond their comfort zone, can demonstrate raw analytical ability rather than relying on pre-prepared answers or academic theory. This is valuable because in real-life situations, memorized solutions don't apply perfectly to unique business problems, and as a statistical consultant, you see all sorts of weird and wonderful problems in different industries with different data, and a lot of the stuff we do isn't in the textbooks. You might be talking to a transport company whose head of modeling is an expert in transport, and you need to quickly become an expert in transport modeling; spatial autoregressive data is difficult to model, for example, and it's not taught in the standard MSc economics course. The other benefit is that it creates a more level playing field. You can use familiar concepts like interest rates or supply chains as a starting point, and this allows candidates from diverse backgrounds to demonstrate their potential. So we might have a person without formal econometrics training but with strong analytical skills,
a math degree or whatever, who can potentially perform better than an economics grad who struggles to think beyond textbook scenarios, so that helps you identify talented people who might fall through traditional interview approaches. The third benefit is that, as I mentioned before, and I'll unpack this a bit, it simulates real-world client interactions. This method really mirrors actual client situations, where a professional might need to navigate conversations with clients who have deeper industry expertise than you will ever have, and admitting knowledge gaps while maintaining credibility is actually crucial. The hack to all this is just to say: I don't know, I'm really sorry, but please tell me; I'll learn. But you need to think through problems in real time with incomplete information, building on basic principles to understand complex situations. So this becomes a mini simulation of these real-world scenarios, and you get an insight into how candidates will perform in the actual role. Another benefit is that there's a clear differentiation between candidates. What I used to do is ask that question and keep notes, and with, say, 20 candidates, 20 interviews, I would compare notes on that question alone, because then I could judge them based on that. It creates a natural separation point between candidates: some will struggle beyond basic concepts, and some are not able to think of second-order effects, and these are not the people you want, obviously. Others will have a strong intermediate understanding of interrelated concepts but might struggle with complexity. The best candidates will show the ability to reason from first principles and the capacity to build on that to get further insights. You need good analysts to think through a problem, and that problem might be new. At a basic level, I want them to understand economic theory, econometric theory, as a foundation, but that's not enough; what I'm looking for is somebody I can think through a problem with. And the final benefit is that it reduces interview gaming. Traditional interview questions can be prepared for: you can get coached, you can do research, whatever. This approach focuses on thinking processes rather than specific knowledge, and that's just harder to game. I think it's impossible for candidates to simply memorize expected answers, because the value lies in the reasoning, the ability to think through the problem beyond their preparation. So that's it in a nutshell; as I say, it's only three questions, but the final question is where you find where they're at and then push them slightly beyond that knowledge comfort zone and see how they think through a problem. I should say that the motivating example I used first, hiring a general business analyst, is a little bit easier than hiring for technical roles, because with technical roles you may encounter what I call extreme or heterodox candidates. You might have people who use really excessive technical jargon, what I call technobabble, and just start spouting it at you.
It's rare, but it can happen. Or you might have a person who says: oh no, I'm not big on econometrics; I tend to focus on traditional economic reasoning. And that's also fine, as long as it's not a way to mask gaps in understanding. Quite famously, a previous governor of the Bank of England, Mervyn King, was anti-econometrics, right? He was a fiercely intelligent guy, and if you read his writing, he was able to think through almost any economic issue at a really deep level. So that can be a good sign. But if somebody is trying to mask their lack of understanding, obviously that's bad. So you have to be generally more technical than the person you're interviewing, but you have to be mindful to talk to them at the level they're at, find that level, push them beyond that boundary, and then explore their reasoning. That's what I mean by intuitive. It's a bit of a long-winded answer, but it's as simple as it is difficult.
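
To make the stationarity discussion above concrete, here is a minimal sketch of the kind of check Charles alludes to, using the ADF and KPSS tests. The tooling (Python with numpy and statsmodels) and the simulated series are illustrative assumptions on our part, not something prescribed in the episode.

```python
# Illustrative sketch (not from the episode): contrasting a stationary series
# ("stock returns") with a non-stationary random walk ("stock prices") using
# the ADF and KPSS tests from statsmodels.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(42)
returns = rng.normal(0, 1, 500)    # white noise: mean and variance constant over time
prices = 100 + np.cumsum(returns)  # random walk: a unit-root, non-stationary process

for name, series in [("returns", returns), ("prices", prices)]:
    adf_p = adfuller(series)[1]                             # H0: series has a unit root
    kpss_p = kpss(series, regression="c", nlags="auto")[1]  # H0: series is stationary
    print(f"{name}: ADF p-value = {adf_p:.3f}, KPSS p-value = {kpss_p:.3f}")

# Note the opposite null hypotheses: rejecting ADF's null suggests stationarity,
# while rejecting KPSS's null suggests non-stationarity. A candidate who can
# explain why both tests exist, and what a near-unit root does to them, is
# showing the intuition Charles is probing for, not just a memorized procedure.
```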

TIM: That's just a devastatingly well-explained philosophy you've got; it's so well thought out and clearly developed over a long period. I'd like to drill down on one specific area, which is that you've got this diverse set of candidates in terms of their knowledge base. As you said, someone could come in with a PhD in econometrics or with no economics training at all, and yet they're going through ostensibly the same interview; you're just picking their level at the start. How, at the end of the day, would you compare the performance of a candidate who's come in with a PhD in econometrics and one who's got, let's say, a history background? How does that come out, and how would you end up choosing one or the other to ultimately hire into a position as an econometrician, which I guess is what these roles mainly are?

CHARLES: This is where the concept of the elevator comes in: you've got to find what floor they're on. So you can go to, say, level three, and they might have a PhD in statistics, or we might be hiring a fresh grad who's got a good degree but just hasn't done any econometrics, right? The assessment indicators are like this. Obviously, you want the candidate to demonstrate adaptive thinking and some understanding, so red flags would be somebody who can't justify methodological choices, is unable to discuss limitations, is unable to connect the dots between methods, makes superficial use of technical terminology, can't simplify technical explanations, or relies on classroom examples, that sort of thing. Too many of those and they're clearly not suitable. Green flags would be that the person articulates key assumptions clearly and obviously understands them; you want them to link the methodology to the research question, show awareness of alternatives, recognize the value of simpler methods, and have a real understanding of techniques. So you want a good share of the green flags and not so many of the red flags, and how they perform is essentially whether or not they collapse at the first or second hurdle. One good thing to do is take a problem and just start relaxing assumptions. Okay, let's take a linear regression. Fine, now let's relax an assumption: let's say there are multiple regimes, for example, or the data is non-continuous; you have jumps in the data. What happens then? Let's say you have a non-stationary, non-continuous, multi-regime data set. You just keep relaxing these assumptions and go through what happens: how would you model it, and how would you do it? Everyone knows OLS, well, some people don't, but in the job I'm in you'd expect someone to be familiar with linear regression, so you just start relaxing assumptions and going through: okay, what happens then, and what happens then? It really depends on the context. If they're PhD-level candidates and we start talking about the whole stationarity idea, we might start talking about ergodic or non-ergodic processes; if we get into that rabbit hole, then we can talk about that. If it's a more junior candidate, it might not go there, but either way you want to explore their intuitive understanding of the space. That's the whole idea. Unless there's some kind of Star Trek-style mind meld, and that technology is not going to exist for a while, you want to see where they're comfortable, up to the point where their knowledge stops, and then start exploring how they think without the scaffolding of the classroom or the uni or whatever. So that's the idea behind this.
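
As an illustration of the "relaxing assumptions" exercise Charles describes, here is a hypothetical sketch (again in Python with numpy and statsmodels, assumed tools not named in the episode) of a series with two regimes and a jump, where a single pooled OLS fit gives a misleading slope:

```python
# Hypothetical illustration: data generated under two regimes (a structural
# break with a jump), where one pooled linear regression misleads.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
# Regime 1 (x < 5): slope +2. Regime 2 (x >= 5): slope -1, with a jump at the break.
y = np.where(x < 5, 2 * x, 15 - (x - 5)) + rng.normal(0, 1, x.size)

pooled = sm.OLS(y, sm.add_constant(x)).fit()
print(f"pooled slope: {pooled.params[1]:+.2f}")  # a blend of +2 and -1, true of neither regime

# Acknowledging the break and fitting each regime separately recovers the structure
for mask, label in [(x < 5, "regime 1"), (x >= 5, "regime 2")]:
    fit = sm.OLS(y[mask], sm.add_constant(x[mask])).fit()
    print(f"{label} slope: {fit.params[1]:+.2f}")
```

The point of the interview exercise is not the code, of course: it is whether the candidate notices that the pooled model's assumptions no longer hold and can reason about what to do next.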

TIM: And there would be scenarios, sorry, where a history grad with no knowledge at all of economics, who wouldn't know what supply and demand is, would be in the same cohort as a PhD in economics, and yet in theory the history grad could come out on top?

CHARLES: It depends on what job you're hiring for. At the previous company I hired business analysts, and you didn't really need a degree in economics, but you needed an understanding of business, and there you could have a history graduate, a PhD in economics, or a person with no degree at all. Actually, one of the best guys ever had a degree in French literature, a fiercely intelligent guy; another had no degree at all, but he'd been in business all his life. So that's perfectly possible. In the current role, I work in the data science team; we used to be called the economics team before they changed the job titles, so you expect people to have some sort of academic credential in the area, usually an MSc. So in this current role, a history grad might not apply; they wouldn't self-select into this role, and they probably wouldn't pass the technical test. But say they did pass the technical test and got to round three, and I'm in the room with them. Then it's: okay, level one, fine, you're comfortable there; let's go to level two; okay, you're comfortable there; level three, okay, maybe not so much; let's go back to level two; okay, this is where your knowledge stops, and now let's start probing into the more intuitive things you don't know about, but let's think them through, and I'll help guide you there. Okay, what happens if X, Y, Z, whatever. So that's the approach. The key to executing this technique effectively actually lies in the follow-up questions. You've got to ask the candidate how they arrived at their conclusions, probe their assumptions, explore how their analysis might change if key variables were different, and ask about potential second-order effects and key uncertainties in their analysis. And then, obviously, best practice is to allow some silence for thinking; don't rush to fill in quiet moments, and avoid leading questions at all costs, because they might suggest answers, and that's not the practice, obviously. Praise reasoning rather than conclusions, and encourage thinking aloud. And then you might need to lead the candidate out of familiar territory, because they'll want to stay in that familiar territory. I keep using this phrase, but we look for statisticians with an intuitive understanding of statistics, and that's actually quite an advanced level, I would say, because it means you've absorbed everything you studied, the side projects, you've thought things through, and now you're ready for the real industry, where data is messy, you have unique problems and very nonstandard situations, and it's always a trade-off. It'll be some kind of horrible combination of small sample sizes plus non-stationary data plus this and that, and you've got to find the right solution for all of it, and do it quickly. That's a skill. A big red flag, and I guess this relates back to CV screening, is a certain kind of candidate I call the CV technique collector. For example, you might have a candidate whose CV says: I've got experience with ARIMA, GMM, panel data, instrumental variables, RDD, diff-in-diff, machine learning, insert five or six different modern methods. And a recruiter might say: fantastic, this is the guy you want.
Get this guy, because on a keyword search he scores really high. But that usually indicates a toolkit mentality rather than an understanding of appropriate application, isolated learning without comprehensive understanding of the methods. What you have is a guy with surface-level exposure without deep implementation experience, really. So that's usually a red flag.

TIM: That is a great explanation, and one thing that's probably worth mentioning is that this approach would work very well but does require the interviewers themselves to be experts, to be able to really judge the answers and drill in and dig into those details. Is that a fair comment?

CHARLES: Yeah, absolutely. This only works if the interviewer is an expert, because you need to be able to converse at whatever level, whether you take the elevator to level one, two, or three, or go to the roof. Now, it might be the case, and it usually is the case, that somebody with a PhD in a very specialist area will have expertise in that certain area, and that's fine, but we're looking for evidence of broad general training, obviously, and also the ability to think through a problem. The goal is to understand their thought process and problem-solving abilities in a real-world context; that's what we test. Yeah, the main challenge when hiring data professionals for technical roles isn't finding people who know the technical tools, particularly not in the current environment, where data is essentially free and ubiquitous and there's a course for everything: online master's degrees, Coursera, whatever. It's finding people who can both understand the context and apply technical skills appropriately, and that's why you need a multilevel interview technique that assesses both technical depth and applied judgment, and why it's important to present candidates with realistic scenarios. Technical skills are necessary but not sufficient, I think that's what I'm trying to say; the key thing is to look for the ability to apply these skills intelligently, hence the technique of deliberately pushing candidates beyond their comfort zone to see how they respond when confronted with uncertainty. It's fine to say you don't know; in fact, it's better, because I'd rather have somebody say I don't know than talk nonsense in front of a client. A candidate who can reason from first principles and acknowledge their knowledge gaps is actually more valuable than somebody with perfect technical knowledge but rigid thinking. So yeah, I'm not sure there's a role for AI in this just yet. Now, I'm not talking about cultural fit or other things that are also important, but in terms of hiring for technical roles, that's definitely my philosophy. As I say, it's as simple as it is complex: basically just three questions. Do you agree? Do you think that's fair?

TIM: Yeah, I don't think AI is anywhere near replicating that, that's for sure: that level of depth of inquisition around the candidate's skills, in a way that could go down infinitely many paths. You must never have had an interview that was the same as any other interview, in a sense; even though they're structured, the way you delve into their thought process and the leveling you do means you must have had a thousand unique interviews in your career.

CHARLES: Every interview is a challenge; every interviewee is different, and therefore every interview is different. That's right, yeah; the last thing you want to be doing is ticking boxes, really. I think the box-ticking can be done at an earlier stage: has this person got a master's, have they got two years of experience? By the time they come to me, it's a case of trying to unpack how they think. As I say, this approach has served me through the last three roles, three jobs, and I think I'll probably stick with it. It's a difficult one; humans are complicated, right? So it's tricky, and there's a lot of noise in the process. Really, you want to see that the person has the ability to think through a problem on their own, with a client. For example, I was seconded to a client for 18 months, and they had particularly interesting data, really high-dimensional, and my boss trusted me to do a deep dive and understand their data, and I spent a lot of time trying to think deeply about it. Likewise, when we hire somebody, I want to be able to trust them too, whether they're seconded to a client, working with a client on a two-week project, or working in the team; it doesn't really matter. Are they able to do that? Because the thing about being a statistical consultant is that by the time a client comes to you, it means they can't do it themselves. And this is quite a serious point, because we've done work with blue-chip clients, the biggest names you'll have heard of, and they have data science teams, and they have PhDs from very good universities, and they have very clever people who know their industry, who know their company, and who know their data. So how do you provide a service? You have to go a little bit further; you have to work a little bit harder. A good consulting statistician will have exposure to all sorts of different industries and will have seen different problems: big data, small data, weird data, incomplete data; we've seen it all. And that's what makes a good statistician: being able to engage with those different problems, unique industries, unique problems, whatever the data set. You need to really commit yourself, and that's the kind of skill we look for.

TIM: Charles, you've given us a masterclass in interviewing and, at the same time, a masterclass in being an interviewee, so thank you so much for sharing all your insights and thoughts and wisdom with us today.

CHARLES: Thank you very much for having me. It's been a pleasure. Yeah, great to be on. Thank you.