In this episode of Alooba's Objective Hiring podcast, Tim interviews Faye Murray, Chief Data Officer, to discuss the pervasive impact of AI on hiring processes. They delve into the challenges and concerns surrounding AI-based screening, emphasizing the importance of ensuring these tools do not perpetuate biases inherent in historical data. The conversation touches on Amazon's failed AI recruitment tool due to gender bias and a University of Washington study revealing significant racial and gender biases in AI screening methods. They explore solutions like anonymizing CVs and combining AI with human oversight to mitigate these issues. Additionally, they reflect on the existing bias in human-based hiring processes and the potential benefits and drawbacks of quotas for achieving gender diversity in the workplace. The episode concludes with Faye sharing her hiring experiences across different global markets and giving shoutouts to individuals who have inspired her in the hiring and data spaces.
TIM: Faye, thank you so much for joining us on the Alooba Objective Hiring Podcast.
FAYE: Thanks, Tim. It's great to be on.
TIM: I would love to start with everyone's favorite topic at the moment: AI. It seems to be changing all sorts of things in the world and society at an alarming or breathtaking pace, depending on your perspective, and it'd be good to discuss it in terms of hiring. Have you managed to use AI at all in any bits of your hiring process? And in particular, I'd love to drill down on the screening step and your thoughts around using AI there.
FAYE: Yeah, so no, I haven't used it at my current place of work or previous places of work. I do appreciate the benefits, particularly when you're receiving a lot of CVs and applications and you want to focus more on the strategic and personalized hiring aspects rather than the administrative tasks. I guess my concern with this kind of technology is that, whether you're buying off the shelf or you're developing it internally at your organization, those kinds of tools could theoretically reinforce biases, and so you could end up missing out on great candidates. That is my primary concern, and there is some evidence to support it. I think it's fairly well known that back in 2014, Amazon started building an AI tool to try and automate its recruitment process, designed to review resumes, rank candidates, and identify top talent quickly and efficiently. By 2015 they basically realized it was heavily biased against women, so it was never put into production. Then about a month ago I read about a study at the University of Washington that was specifically investigating racial and gender bias when using AI to screen resumes. What they did was take three open-source LLMs and collate around 500 real-life resumes and a similar number of real-world job descriptions from across nine occupations at different levels of seniority. All they did to those real-life resumes was alter the first name: they swapped in 120 first names generally associated with some combination of Black or white individuals, male or female, and at the end of it they had over 3 million job, race, and gender combinations that they tested. Again, nothing but the first name was changed, so they knew that if the AI selected a white male candidate over a Black male candidate for a particular role, the only difference was the name, because the rest of the CV they'd submitted was exactly the same. The results were quite interesting. They found that, firstly, Black men fared the worst: other candidates were preferred by the models nearly 100 percent of the time, and that was across all three of those models. They also found that the models favored resumes with white-associated names 85 percent of the time and female-associated names only 11 percent of the time. Given those results, I think it's critical to ensure that any tools being used aren't reinforcing biases that are in the data they've been built from, because all those AI-based screening tools will be learning from historical data, and if they're trained on biased historical data, they'll be inadvertently perpetuating any existing biases. That could mean favoring candidates from particular demographic backgrounds, particular work experience backgrounds, or particular educational backgrounds, and what that can lead to is potentially qualified candidates being overlooked simply because they don't fit those historical trends or the historical mold of past hires. There are a few different things you could theoretically do. I think it's harder if you've got an off-the-shelf solution, so before you'd even employ that in your organization, you'd have to go through a test or trial period with it to see if it perpetuates bias. If your organization is building something like this internally, obviously you can audit the training data, so you can try and see the data that these models are being trained on. Is it diverse?
Does it reflect the current organization's values as opposed to, for instance, past biases in the old data? And if it isn't, can you get hold of more diverse data? Can you use things like synthetic data to compensate for that? The other thing is, if you're using these tools regularly, test them for bias: see what the outputs are if you try candidates from different educational backgrounds or different demographic groups, to make sure it's continuing to be fair. Something I've been thinking about a lot is that when I've read about these tools and heard about them, they seem to be very focused on just matching keywords from someone's CV with the job description, and obviously there's a lot that can be lost in that, particularly if someone has transferable skills or they miss a particular word or phrase. I have looked into some of these tools, and it seems like some of them do offer something called skills adjacency, so it's not just focused on rigid keyword matches. And then the final thing, which I think is true of any AI, is that you have to combine the AI with human oversight at the same time.
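As a rough illustration of the kind of name-swap audit described above, here is a minimal sketch. `score_resume` is a hypothetical stand-in for whatever screening model or tool is actually being tested (implemented here as a trivial keyword-overlap scorer so the example runs end to end), and the name lists are illustrative only.

```python
import re
from statistics import mean

# First names loosely associated with different demographic groups, as in the
# name-swap methodology described above (illustrative examples only).
NAME_GROUPS = {
    "white_male": ["Todd", "Brad"],
    "Black_male": ["Darnell", "Jamal"],
    "white_female": ["Claire", "Emily"],
    "Black_female": ["Keisha", "Latonya"],
}

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def score_resume(resume_text: str, job_description: str) -> float:
    """Placeholder scorer (simple keyword overlap). In a real audit this call
    would go to the screening model or tool under test."""
    jd_words = tokenize(job_description)
    return len(tokenize(resume_text) & jd_words) / max(len(jd_words), 1)

def name_swap_audit(resume_template: str, job_description: str) -> dict[str, float]:
    """Swap only the first name, keep the rest of the CV identical, and compare
    the average score each name group receives."""
    return {
        group: mean(
            score_resume(resume_template.format(first_name=name), job_description)
            for name in names
        )
        for group, names in NAME_GROUPS.items()
    }

if __name__ == "__main__":
    cv = "Name: {first_name} Smith. Skills: SQL, Python, dashboards, stakeholder reporting."
    jd = "Senior data analyst with SQL, Python and dashboard experience."
    # Large, consistent gaps between groups on otherwise-identical CVs would indicate bias.
    print(name_swap_audit(cv, jd))
```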
TIM: That's fascinating. What I feel like would be interesting to add to the mix of the conversation is the fact that the existing human-based approach is also so drastically biased and unfair, and I can think of similar experiments where organizations have taken a group of CVs and split them into different buckets, the only difference between which are the names. One that was done in Australia, I think three years ago, by people from the University of Sydney, used three sets of CVs: the first set had Anglo first names and Anglo last names; the second set had Anglo first names and Chinese surnames; the third set had Chinese first and last names. They then applied en masse to tens of thousands of roles across Australia in different organizations, different industries, and different domains, so they controlled for all these other variables, and then measured the rate at which those applications got callbacks. For the first group there was a 12 percent callback rate, and for the last group a 4 percent callback rate, so basically, all else equal, if you applied to a job in Australia with a Chinese first and last name, you only had one-third the chance of getting a callback. So that's the existing, real, actual discrimination, and I feel like that's worth adding to the mix because it's not like AI is coming into an existing level playing field and then could make it worse. I feel like it's already so dreadfully bad; could it really get worse? You know what I mean? And I feel like also, at least in theory, with an AI or some kind of data-driven system, at least there's some level of transparency that could be there, where you can say, this is who got rejected and why, and here are the metrics. At the moment we just have someone in HR or talent clicking a rejection button, with no audit trail at all, no rationale for why that rejection happened; it isn't even a reported metric. There's nothing at all, so I feel like surely an AI-based system at least has the opportunity to be successful; maybe it's also got an opportunity to be dreadful at scale as well. I don't know.
FAYE: Yeah, yeah, no, I completely agree, and again, it's humans ultimately working on these things and developing these things, so whoever's working on the AI needs to be aware of these issues when they're developing the model and putting together the data on which the model is going to be trained. Otherwise, like you say, you can end up in exactly the same place. It could be worse; it could be slightly better, but it's not going to be great.
TIM: I would have thought, if we just think about that initial application CV screening step, in theory surely you could do a first call to ChatGPT: hey, anonymize this CV, get rid of the names and whatever else identifies the person. Then the next step is, based on this anonymized CV, rank it against this job description. There's no training on looking for candidates like this; it's just here's this set of words against this set of words. I feel like something like that is surely going to end up working in some kind of fairer way, I would have thought.
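A minimal sketch of that two-step idea, under assumptions: `call_llm` is a hypothetical wrapper around whichever LLM API is actually in use, and the prompts and function names are illustrative rather than a tested recipe.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper: send the prompt to an LLM (ChatGPT or otherwise)
    and return its text response. Deliberately left unimplemented here."""
    raise NotImplementedError

def anonymize_cv(cv_text: str) -> str:
    # Step 1: strip anything that could identify the candidate, keep the substance.
    prompt = (
        "Rewrite this CV with the name and any other identifying details "
        "(gender markers, nationality, age, photo references) removed. "
        "Keep skills and experience otherwise unchanged.\n\n" + cv_text
    )
    return call_llm(prompt)

def rank_against_job(anonymized_cv: str, job_description: str) -> str:
    # Step 2: rank the anonymized CV against the job description only,
    # and ask for reasons so there is an audit trail for the decision.
    prompt = (
        "Score this anonymized CV against the job description from 0 to 100 "
        "and explain the score.\n\n"
        f"Job description:\n{job_description}\n\nCV:\n{anonymized_cv}"
    )
    return call_llm(prompt)

# Usage: rank_against_job(anonymize_cv(raw_cv), job_description)
# The ranking step never sees the name, though, as noted in the next turn,
# word choice in the CV itself can still carry demographic signal.
```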
FAYE: Yes, that would make sense, although I have seen some studies suggesting that sometimes, even just from the language someone's used on their CV or resume, you can tell certain things about the background or demographics of that individual, so that might be something else that has to be considered. But I agree there must be some way through this.
TIM: You reminded me of something someone mentioned to me just last week, actually, about a really interesting use case I'd never thought of: recruitment agencies. When they post a job ad, they typically anonymize the company that's doing the hiring so that competitors don't come in and try to take their clients, but they still have to give enough information that the candidate is interested, so they might mention the industry or the history of the business or something. Candidates now can put that into ChatGPT and say, tell me which is the hiring company, and apparently it has an alarmingly high accuracy in inferring who the company really is.
FAYE: Yeah. Just based on the language used
TIM: What about this screening stage at the moment? One bit of feedback we're getting from a lot of people, especially in Europe and North America, is that they're getting a very high volume of applicants, which is probably partly due to suppressed hiring conditions but also seems to be because candidates might be using tools to apply to roles en masse. So there are just a lot of applicants, but also there's a sense that the applications seem to not necessarily be written by humans, or they're optimized by ChatGPT, and a sense that maybe the CV is now becoming even less predictive, even less representative of the person, than it used to be. Where are we going to go with screening? Is the CV just going to be useless soon because it doesn't really paint a picture of who the candidate is?
FAYE: No, that's true. Well, I can't think of an easy alternative starting point, right? What else would you have? You need something that's relatively condensed so you can get a picture of the individual and know whether it's worth having those initial conversations or initial interviews, so I'm not sure what the solution is. I've heard people say to me that this is just going to make the exaggeration situation with CVs even worse, and I don't know if I necessarily agree with that, because I think there is a group of people who are going to be prone to exaggeration on their CVs, and all tools like ChatGPT do is make that process a little bit quicker, maybe a little bit more accurate. But I know people who are the opposite: they've undercooked their CVs. I've interviewed people who've then come to work with me, and actually their CV was not a representation of all the things they'd done or achieved. I've looked over friends' CVs where, actually, they've really undercooked their achievements, stuff that I've worked with them on. So I guess, yeah, it's just going to exacerbate some of the existing issues, but like our previous point about the AI, these aren't new challenges; they're existing challenges in the market. So I don't think it's going to make the CV situation any better. People already can pay someone to look over their CV and rewrite it for them; this is just a way to effectively do that for free, and if companies are using these kinds of AI tools to do things like basic keyword matches, or perhaps some more complex stuff, it's becoming more and more difficult for applicants as well.
TIM: What about this idea: let's say a new way of writing a CV would be, instead of opening a blank Word document and unpacking all of your achievements, you go into ChatGPT or an LLM and have a discussion with it. It might ask you some back-and-forth questions, and then it goes, cool, I've got enough, I'm now going to create your CV for you. Is there not something around the fact that the CV has not actually been written by you? Therefore, any lies or misrepresentations on it you can almost disassociate from: it's not that I've lied, it's that ChatGPT has hallucinated. Some kind of reframing like that might cause a bigger portion of a CV to be lies.
FAYE: That's true, but then you're still submitting your CV, and you're signing off on it. Like most things, when you submit an application it has a disclaimer, right? I'm not quoting it exactly, but it basically says what you've put here is true and a good representation of what you've done or delivered and so on. So I think you still have a responsibility, even if you've used other tools to come up with your CV. Even if you use a CV writer, like people have been doing for years, you still have a responsibility to make sure it's an accurate reflection, or a relatively accurate reflection, of what you have and haven't done and of you as an individual at work. But I do think the CV question, thinking about it a little bit more, does just highlight the importance of that kind of comprehensive, skills-based assessment in hiring, the kind of thing that the Alooba platform is designed to do. That will just come more to the forefront of the hiring process, but you still have that tricky bit, which is you're getting hundreds of applications: how do you process those to get to that point?
TIM: Yeah, it's a tricky one. When we've hired in the past, because we've always been a remote-first business, we would have a very high volume of applicants; we'd have thousands, typically, for a role, because we had the luxury of an infinitely large market of candidates to choose from, unlike if you're hiring into an office where you've got maybe plus or minus 30 kilometers from your office. We had the world, which suddenly changes the equation. So we would always do a screening test as a first step; we wouldn't look at the CV at all. Now that has benefits for us and also some difficulties for candidates, because if there are a thousand candidates taking a test, maybe five of them are going to get to the next gate. It's quite unfair that they would commit, I don't know, 30 minutes of their time to doing a test where they've got such a small chance of getting through. So I feel like there are a lot of things in hiring that are a bit of a balancing act, a bit of a trade-off: we set up something that's maybe a bit better for us but maybe not as good for some of those candidates who didn't get an interview. They've wasted their time, you could argue, maybe.
FAYE: And I've also heard of people not doing tasks and tests anymore in the recruitment process because they're saying, Look, I'm out of a job; I'm having to apply to so many jobs to even get a look in because hundreds of other people are applying. I don't have time to do so many tasks every week, taking an hour or two at a time.
TIM: Yeah, I wonder whether companies should be paying people for that kind of work, like those actual in-depth take-home assessments. Five to ten hours is almost a consulting engagement; there should be a consultancy fee just to make sure the person isn't completely out of pocket by doing that work. I don't know.
FAYE: Yeah, no, I agree. I've thought the same thing. It's not necessarily fair if someone's spending that amount of time to not compensate them for it, but then you're going to have to rethink the whole hiring process. In my ideal world, if you had a really strong applicant you wanted to hire, you'd have them in the office, or remotely, working with you for two or three weeks, fully paid, because then they can see, does this job actually suit me? Am I enjoying the work atmosphere, the culture, and so on? And you can see, okay, can they do the job? How is it working out? But how many organizations would, A, pay for that and, B, resource it and support that kind of setup?
TIM: Yeah, what did McKinsey used to call that? Was it the 40-hour interview, where they got you in for a week to work? That's right. But I don't think they do that anymore. Are they still
FAYE: doing that?
TIM: I guess it's challenging for some candidates because it would work better if you were out of a job currently and then you worked for two weeks as opposed to having to quit a role or take leave or something. Yeah.
FAYE: Exactly.
TIM: But yes, I feel like typically the closer the hiring process steps are to the real job, surely the better an indicator that is of whether or not they're going to be successful.
FAYE: I completely agree.
TIM: Back to the bias question: we've talked about existing CV screening bias with humans and potential new or exacerbated biases with AI. One thing that some companies do is try to anonymize candidates' details to do a bit of blind hiring. I'd love to hear your experience with that and what impact it had on bias, but also maybe some unexpected consequences of that approach.
FAYE: Yeah, sure. I've trialed it once, and it was probably not a good case, actually, to trial it on. I trialed it with two roles; there were two senior data engineer roles in a previous job I was in a while back. The reason it was not a good one to try it on was because that team was already extremely diverse demographically, educationally, and work history-wise. So if I inherited a team that was not diverse in those kinds of ways, or was not functioning, and I had some roles to recruit and felt that we were lacking diversity, it would be something that I'd trial again. The way it generally works is you anonymize the CVs. You can't take all the details out, obviously, but you would take off things like names, anything that points to demographic information, even educational backgrounds; you might leave on the degree and so on, but you wouldn't necessarily put where they got the degree from, and you would take off where the individual was born, or is living, potentially, in some circumstances. Those are the primary things. I know people who might take it a little bit further; I've even seen examples where they've kept the names of the jobs but taken off the organizations the person worked at previously, because there might be some bias associated with those. We didn't do that; we left the work experience pretty much as it was. It was challenging because we had over a hundred applications, and we had to get someone who wasn't on the hiring and recruitment team to manually anonymize them. Since then, because this was a few years ago, I think there are actually tools that will do that for you. But yeah, that was one challenge. In terms of who we actually interviewed, we filled one of those roles pretty much immediately; there was a clear winner from the interview process, really excellent, and they'd also done a task later on. Then for the second post we had to go out again, and we used the same process. The idea is that it takes out some of that initial bias when you're doing either your long listing or your short listing at the start, so you can only really use it in your initial screening process, right? Because once you move into interviews, you're obviously meeting the person, whether it's virtually or in person. The research showed, and I remember checking this at the time, that the main thing it protects against is something called affinity bias, which is that all of us, on an unconscious level, will be looking to hire people who are more like us, whether it's educational background, work experience, or demographics. Reflecting on that when I was reading the research, I thought, yeah, I've probably done that in the past without even realizing it. Blind hiring practices and anonymization of CVs have been shown to be positive in terms of fighting against that kind of bias. Would it have necessarily changed anything in those two roles we recruited to? I don't think so. The one question mark I always had is that one of those individuals actually didn't live in the country and didn't have the right to work in the country at the time, because they'd just finished a master's degree. The organization I was at had a license and was able to give visas relatively easily, so it didn't impact it that much, but if we'd seen more of the CV, we might have deduced that earlier on. I don't know if that would have impacted my or the panel's long listing or short listing decisions.
I doubt it would have, because of the organization we worked at, but overall it was a positive thing. Like I said, though, it took a lot of time without using a tool to anonymize the CVs. You lose some of the context from the candidate's background, which I mentioned vis-à-vis the visa situation, and obviously it doesn't address bias in the later interview stages, but otherwise I didn't think it took anything away from the process. So if you've got a tool to do it, I don't think you lose anything by going through that CV anonymization process, particularly if you've been struggling to hire or create a more diverse team.
TIM: Yeah, and hopefully now with large language models that first step is a walk in the park. That anonymization piece should certainly be drastically easier than finding someone, as you say, who's not involved in the hiring process to go and do it manually, so that's a huge win. You touched on a few things there that are interesting. There's some stuff on a CV that some people would consider signal and others would consider noise, so there's probably some gray area over which bits you should and should not remove. For example, the university name: is it helpful to know this person went to Oxford? That means they're probably smart; as a rule of thumb, that's a fair assumption. But then others would say, oh, then there's a halo bias, because now straight away I feel like they're smart just because they've gone to Oxford. So which is right and which is wrong? I don't know.
FAYE: I know, and I don't know either, because you could even say people from lower socioeconomic backgrounds are far less likely to go to Oxford. They might be super smart, but they just don't have the opportunity to go and study there, or maybe they've done a degree apprenticeship because they couldn't afford to go to university. When you read the research, those are some of the arguments for taking off some of those details around educational qualifications. But you still obviously need to know about certifications, what degree they achieved, and so on. It's a difficult one because, like I said before, if you start removing too much stuff, you lose a lot of the context and information about what makes that candidate unique.
TIM: Yep, it's a tricky trade-off. What about hobbies? Do you feel like they add value to a CV, or are they just more reasons for you to be biased in favor of or against a particular candidate? I was told a few years ago to start taking hobbies off my CV; that was recommended by a CV person, so I took them off and I've kept them off. I actually quite like reading about a person's hobbies; it gives you a little bit more of a full picture of the individual, but I guess, as you said earlier, that could also feed into that kind of affinity bias. If you see someone with a load of hobbies on there, let's say you've got two very similar candidates, you've anonymized them, but you've left the hobbies on, then with that affinity bias, if you see one who enjoys a lot of similar things to you, maybe that makes you slightly more inclined toward that person on some kind of unconscious level. Or I'll give you an example at a more conscious level: I can remember a few years ago hiring a product analyst, and we were going through the CVs, and one of the candidates was Brazilian, which got me straight away. I'm pro-Brazilian; I love football, so already he was elevated in my head just based on the fact that he was Brazilian, which is ridiculous. But then in the hobbies it said he was a semi-professional footballer in Brazil, and our five-a-side team at work kept losing the grand final each semester when we played on Tuesday lunchtimes at Sydney Uni, and we just needed one more player to take us to that next level. So as soon as I saw his CV, I was like, oh my God, perfect, and I was even telling my colleagues, I hope this guy's good; he's coming in for an interview tomorrow. So that's fine for this guy, but what about the 599 candidates who didn't have football in their hobbies? I was clearly biased against them, which is unfair, I think, because being a footballer had nothing to do with his ability to be a product analyst, obviously. I feel like that's the critical difference: is it part of the criteria for the role or not? It isn't, so I feel like I probably shouldn't be seeing that information.
FAYE: No, and that's probably going to be true of most hobbies, right? Unless you've got someone who, in my case, enjoys doing data analytics on the weekend or doing data science projects, which they might put on to bolster their CV, most hobbies are probably not that relevant to the job.
TIM: No, and if I play devil's advocate further, some people would argue likeability matters. At the end of the day, you're working with someone for eight or nine hours every day, for weeks, for years on end; you have to be able to get along with them. So does it fall into that cultural fit bucket, where there are lots of people who can technically do the job and I need to find someone who's going to get along with everyone else? But then someone else has said to me, yeah, but then you can imagine a startup with a bunch of 20-something-year-olds, and they get a candidate who's 50, and they think, oh, they're going to go home to their family, they've got their kids, they've got their partner, they're not going to be the Friday night binge drinking kind of 20-year-old vibe. So is that fair? Because then that's just discrimination.
FAYE: Yeah, yeah, that's true, that's true, but there's so much of that that goes on. I can't tell you the number of job descriptions I've seen that say we want 7 to 9 years of work experience. If you're putting that on your job description, that's discounting people immediately, and it's not really illegal, right? So you can't really do anything about that, but you see it constantly in job descriptions, or at least I do.
TIM: Yeah, you're right. I've never thought of that before; that implicitly indicates the person's age.
FAYE: Yeah. Yeah, it could be someone who shifted careers later on, so they've only got 7 to 9 years of experience in that area and they're in their 40s, 50s, whatever, but generally it means they're looking for someone younger, yeah.
TIM: It's a minefield, that's for sure. What about thinking now about different markets? You've had the opportunity to hire across a variety of different countries, different bits of the world: Australia, Singapore, the UK, India. What have been some of the different challenges across those different markets?
FAYE: So I would say the UK and Australia are actually very similar in terms of recruiting environments. I did find, and this is recent, that Australian tech salaries tend to be a bit higher than the UK, but I think that may be cost of living; I'm not sure. I was hiring with someone who'd been living in Australia far longer than I had at that point, and they said to me, you'll find there's a skills lag here in Australia compared to the UK. I actually didn't find that when we were recruiting, but it's something I've heard a couple of times. The main challenges are things we've discussed already: the sheer numbers of applications you're getting, and that it's sometimes very difficult to distinguish between applicants because CVs can be really similar, especially at those numbers. The other challenge I've had, with a few recruiters in both Australia and the UK, is where I've given them a really clear job description, really clearly defined and simplified, and I've met with them to discuss what we're looking for and given examples. I can give an example: we went to a recruiter in the UK. We were looking for a senior data analyst; it was a relatively generalist senior data analyst role, quite a standardized job description, and two weeks later they came back with a load of candidates who had only ever worked in preparing statutory and regulatory reporting, just preparing it, not doing any analytics or insights work. I had a similar experience in Australia, actually, so I've had that a few times, but other than that I haven't seen a lot of differences between the UK and Australian environments in terms of recruiting. Singapore is interesting, because in the last couple of years there were some changes to international hiring laws there. Previously companies would hire lots of people from other countries in Singapore, and it was relatively easy to do, but they've locked that down now, so there are very strict criteria, and you have to be pretty highly skilled to be able to work in Singapore. What's happened, particularly with tech talent and data talent, is that because you are basically limited to the local market, salaries have absolutely skyrocketed in Singapore. Not that long ago I was recruiting for a couple of posts there, and the salaries were significantly higher than in the UK and Australia, notably higher, and that's a challenge if you've got set budgets to recruit to. But otherwise Singapore is again not that different from the UK and Australia in terms of the market and the kind of talent you're able to find. India was probably the most challenging market I've had to recruit in, for a number of different factors. One was really high applicant volume and a huge amount of uniformity in the CVs, which made it nearly impossible to differentiate between candidates. It was also probably the one market where, when we were going through interviews, I came across really exaggerated CVs; that was by far the biggest example of that I've seen. Time and time again we had to interview multiple times, multiple rounds, for roles, and it was also the only market where we had multiple applicants that myself and the hiring panel could see were using real-time online assistance during the interviews to answer questions. That's the only place I've experienced that, and it happened a few times.
That said, after quite a lengthy search, there were about four or five posts we were recruiting to, and we secured some really excellent hires, some great talent, but it did take almost a year for some of those roles to hire the right people, so that was really challenging.
TIM: Yeah, that's something I've heard quite a lot recently from some of our customers who are hiring in that market: that, and I'm going to put this delicately, cheating prevention in the hiring process is top of their mind when hiring there. I wonder if it's actually connected to the volume of applications, because as a candidate, if you're competing with literally thousands of other people, that is a very cutthroat environment, and I imagine you're probably looking for any kind of leg up you can get, any kind of advantage you can somehow gain, because otherwise, well, you need a job. So I guess maybe you would then think, okay, I may as well get ChatGPT or my friend or someone to help me out. Maybe it's almost like it's part of the game; in a sense, it could be viewed that way.
FAYE: Yeah, and I asked some of the recruiters out there, and they said there were a few big companies that lots of people in India would use to prepare their CVs and make them stand out, which might be why so many of them just looked very similar.
TIM: Yeah, I hadn't come across that until the last couple of years. I feel like there's a product called Europass; I've seen that used in Europe, where the format of the CV is standardized, whereas before that I'd never seen that kind of standardization, so I wonder if there's a similarly common product in India.
FAYE: That would make complete sense, yeah.
TIM: Switching gears a little bit, there's been a push, obviously in the last few years, to have more gender-balanced teams in tech, especially those senior leadership roles. Some companies do that through kind of strict quotas; they say we have to have X percent, and that's going to be our target. So we're going to hit that target. Others would do it more implicitly by maybe having the anonymization and trying to make the process fairer. What are your thoughts on this push, and also what are your thoughts on quotas in particular?
FAYE: Yeah, so I think the push for gender diversity makes sense to me, and that's borne out by the research. There have been numerous studies; there's quite a famous 2020 McKinsey study, and they've done numerous since, which showed that companies in the top quartile for gender diversity were more likely to have above-average profitability compared to companies in the bottom quartile, and the difference was quite stark. I've seen other research showing that companies where the board in particular is more gender diverse have enhanced decision-making and innovation, improved performance on ESG-type metrics, better risk management, better organizational reputation, and fewer scandals, and that's pretty consistent across the different research and studies that have been conducted, so for me it does make sense. Then there was a whole load of research that came out after COVID showing that companies that were more gender diverse tended to fare better during COVID as well, which is interesting because it links to some of those findings from the other studies and reports. And actually in the EU it's really interesting: they've recently passed a law that requires, by mid-2026 I think, that companies with 250 or more people have at least 33 percent of their board represented by women, and I think it specifies that at least 40 percent of non-executive director board seats have to be filled by women. Those are quotas they've got to hit by mid-2026, across the EU, so that's quite a difficult target. So on the one hand, I feel like quotas can definitely help normalize female representation, and they can drive faster change, because those organizations have less than two years to hit those targets. My concern with quotas is always that they feel like a quick fix. They're not addressing the more systemic issues: things like bias, the lack of internal talent development, and company culture, all those things that take time, that require introspection, that require real change being enacted at the organization, and that require authentic representation. It seems like a bit of a quick fix, and my fear is always that it will lead to tokenism and to individuals being selected just because they fit the mandate rather than on merit, and I don't think that benefits anyone, male or female; it actually could be harmful. Then, I think it was literally only about a week ago, I was reading a study in a management science journal; it was available online last year, but it's only just been published. It was looking at one very specific thing, which was the impact of a French law passed a while back on academic hiring committees, requiring that each gender represent at least 40 percent of the committee. You would assume that would mean a woman's probability of being hired by those committees increased; what the study actually found was that it was significantly lowered, and that's because they hadn't done any of the cultural change piece whatsoever. So with quotas, I understand the thinking behind them and the necessity, but they always concern me.
TIM: What is the marginalized or underprivileged set of people that the quota is for? At the moment we've split it by sex and said, okay, let's promote more women, which is fine, but it could be that socioeconomic background or race is a more important factor, that there's a greater underprivilege there, or what have you. And it's interesting: there was some really fascinating research done by someone I know here around promotions in the Australian public service over the past 20 years, and they had this amazing longitudinal data set of literally tens of thousands of promotions, every promotion in the last 20 years, as well as a lot of demographic information about each of the different people. They found the main bias was really against people from a non-English-speaking background; that was the biggest underprivileged set. There were already quite a lot of women in leadership, but they were all white Australians; there were no Australians from a non-Anglo-Saxon background in leadership. That was the main factor. So I feel like with quotas, yeah, who are the quotas for, and how many would you have to have? It becomes quite a complicated mess if you actually looked at all the different segments.
FAYE: Yeah, and it probably differs by country and culture, who's doing well and who's doing less well. Across the whole of the EU, there are probably specific things in different countries where it's less about gender and more about some of the issues that you've mentioned, like socioeconomic background, race, and so on.
TIM: Yes, and it must vary through time as well; as populations change, it's a dynamic factor. I also remember there is a university, in Queensland I should say, that as of this year explicitly removed merit-based hiring from its overall university hiring policy, and so they now explicitly hire people based on, I think they listed a few, but gender, race, and sexuality were three of the variables. I was fascinated to see that technically it doesn't look like this breaks the racial or other discrimination acts in Australia, because there's a carve-out for positive discrimination. So, and I'm not a lawyer, it seems that's probably legal, but is that going too far? Merit now isn't a factor at all; I feel like this is a road to a very dark place, personally.
FAYE: Yeah, I think, like you said, that's almost the nth degree of positive discrimination, or at least it feels like that, and not considering other factors, just that kind of demographic side of things, I think could backfire, a bit like the French law example I gave you. Yeah.
TIM: I think so, and my biggest concern would be who decides which of these groups is now getting hired, because you could easily imagine what is basically cronyism being presented as something that's all nice and good. If it's disconnected from who actually is the best person, that could go anywhere. If you're in charge at that university, you could just start picking whatever subgroup you want to hire, which seems perverse to me, personally.
FAYE: Interesting, I hadn't heard about that.
TIM: No, I'm surprised it didn't cause bigger waves, because it was just this year. Maybe six months ago I did a bit of LinkedIn stuff about it, but it just didn't really get much press coverage; it was viewed, I'd say, largely in a positive light, but I feel like it's a slippery slope, personally. What about skills? When it comes to hiring, are you someone who favors softer skills and believes the technical skills of the data people you're hiring can be improved, or are you looking for a balanced set of skills? And how do you evaluate the soft skills versus the technical skills, and maybe versus the kind of behavioral or values-based things you're looking for?
FAYE: Yeah, that's a good question. I definitely look for a balance of both when you're talking about technical and soft skills. In terms of which I would value more, I generally say softer skills, because I feel like technical skills you can generally teach, or you can help the individual upskill and develop. That's true of softer skills too, but generally it's much harder and takes much longer, and I think it was Herb Kelleher who said you don't hire for skills; you hire for attitude, because you can always teach skills, and I think that's probably true. In terms of technical skills evaluation, and these are my technical data roles, obviously, usually we would send a task, for the reasons I said earlier. It won't be too long, because I don't want to take up too much of people's time; we try to make it interesting, use real-life data, and not make it too arduous. Yes, it's about their technical skills, but for me personally it's less about getting to the right answer or presenting their dashboard in exactly the way that I want; it's more about their thought processes and the way they've approached it, which is why, when we then interview them, the initial question will actually be an opportunity for them. We'll invite them to talk about their approach to the task and why they did things certain ways, and often, if they've run out of time, or they've thought about it since and wanted to do it a different way, that's an opportunity for them to share that, because it's an artificial situation; maybe if they were in the office they would have approached it a little bit differently. Then we might follow up with a couple of questions to probe a little bit more and better understand their approach and why they've done it a certain way. During the interview I will generally have a section of technical questions. It won't be too long, a maximum of four or five questions, just to understand, okay, do they have some of the basics, but also how far their knowledge and understanding go and how they apply it in practice. Then I'll generally also have a section of more behavioral-type questions, asking how they'd approach specific scenarios or situations based on real-life ones that I or other people in the team have experienced, or, if they haven't experienced them, how they would approach a particular kind of challenge if it came up. We'll have a few of those, again not too many, never more than four, and then the final kind of question will be more around problem solving, because that's actually key for me. If someone has good problem-solving skills, even if they don't check all the technical boxes, and they can show that they are able to think through a problem in a logical way, handle the situation, and understand what the core issue actually is, then that's probably going to be a person I want to hire. So problem-solving skills are usually front and center for me personally. And then around those softer skills, I think you can actually tell a lot from how they interact with the panel and how they answer questions, and not just in the first 15 minutes, because you've got to wait for them to warm up. So yeah, I hope that answers your question.
TIM: That does, and the last thing you said is really fascinating, because so many people would say the polar opposite, and I feel like that's just a bias, as in, people would say, oh, I know within five minutes whether or not they're the right candidate, but that's quite presumptive. And as you say, people take time to warm up and be less nervous and get into it, so if you're judging it based on the first five minutes, you're not giving them a fair shot, are you?
FAYE: Yeah.
TIM: You also mentioned something interesting, which was, for that take-home case study, having the next stage be them basically unpacking and discussing it in an interview. I think that's a really great way to do it, and it reduces any fears or concerns around them doing it at home. Previously people might have been worried that maybe it wasn't the candidate doing the assessment, that they might have gotten a friend to take it; now people would say they could use ChatGPT, which is fair enough. But if you have that full interview and you can probe into what they've done and how they've approached it, there's no way they're going to get through that interview unscathed if they haven't done it themselves, because they're going to fall apart and crumble pretty quickly, I would have thought.
FAYE: Yeah, exactly, so yeah, that is, I didn't verbalize that, but that's one of the other benefits for sure.
TIM: And it gives them a chance to expand as well. When we've done them in the past, we've said to them, okay, do this challenge, spend about this amount of time on it, and try not to spend more than X hours on it, because we don't want you to go crazy. It's a practical thing where we get them to do something, but we don't want all the bells and whistles. Then in the interview they can explain, if this were a real work challenge and they were actually in the job, how they would do it differently, because we'd probably want them to put in more effort if they were getting paid to do it, but we don't want them to do all of that work at home for free. So if they've got a chance to expand on that front, that's really helpful.
FAYE: No, exactly, and I've had cases where, for whatever reason, they didn't have as much time as they wanted to allocate to it, and so they were able to explain that and say, if I'd had time, I would have done these things. I've also had people who've said, actually, after I did it, I reflected, and if I went back, I would have done it these ways, and often it's to do with how they transformed the data, or something around the prep of the data before they produced their visualization.
TIM: Yep, what about an interesting phenomenon you mentioned you'd noticed last time we chatted: this strange situation in the market right now for leadership roles, where it seems as though there are a lot of great data leaders struggling to find a role, but then a lot of companies complain they can't find great senior data leaders?
FAYE: Yeah.
TIM: Any thoughts about why this disconnect exists at the moment?
FAYE: Yeah, so I can probably only speak about the UK market in that respect. I think one big factor in the UK, and I don't know if this applies to other countries, is that over the past 18 to 24 months there definitely seems to have been a dampening of salaries for data leadership; they're very stagnant or actually going significantly lower. I think that means some organizations are obviously looking for cheap talent in that space, and people who are secure in those data leadership roles, who've got a good salary, are not going to be applying for some of those roles, so that's one issue. I also know that some of those lower-salary roles, when I look at them on LinkedIn, aren't getting that many applicants, because the pay is just way below the market, right? So I think that's definitely one thing. Another factor is that I see a lot of job descriptions and role descriptions for roles in the data, analytics, and AI spaces that are asking for really specific tools, skill sets, and experiences, sometimes extremely niche, and in a lot of cases I read through these and question whether those requirements are actually needed. Who's written it? Do you really need those specific things? Because you're going to have a tiny pool of talent who have actually done them. I've also seen things where people are asking for ten years working on AI products; I don't know many people who've been working in that space that long, because most of those things haven't really been around for ten years unless you're talking about basic machine learning models. So some of it is just what they're asking for and requiring in job descriptions; it's almost like they're looking for a unicorn, someone who just doesn't exist, and so they're not able to fill those roles. Another one is something I've discussed before: poorly instituted AI systems being used, filtering out strong candidates before a human even sees the applicant pool. I don't know how big that effect is compared to a couple of the other things I've mentioned, but I think it is starting to potentially have an impact. And then what I've also seen over the past two or three years, certainly since COVID, is that companies have stopped developing a robust pipeline of talent internally. They've not been training people, or have been training them insufficiently; there's not a lot of succession planning that I've seen going on, and that lack of focus on talent development within companies has left a gap, so they often don't have internal candidates who can fill those kinds of leadership roles. But then on top of that, when people are looking to move to another organization because they want to build those leadership skills or move into a more senior role, those organizations want people who already have those things, and so there's become a massive kind of gap, I think, between people who are very good technically and could probably be very good in leadership roles but just haven't been given those opportunities for development, growth, and so on. I think we're seeing that, and for me, with COVID a lot of companies and organizations shrank or got rid of development budgets, and they just never came back.
TIM: I wonder if another bit of the equation is that if companies are trying to hire the senior-most person in the analytics function, that role doesn't exist yet, so who's actually figuring out what that role should be? It's almost a circularity problem: if there's no chief analytics officer or whatever, then unless they've got some external advisor engaging with them on the search, which normally they don't, they might use a recruiter, but the recruiter themselves also doesn't know what to ask for because they're not a data expert. So it's this weird circularity problem, perhaps, a little bit.
FAYE: No, I agree, and I think I've seen some where they've clearly just borrowed from different job descriptions, and it's like a mishmash of different things they're asking for, you know, various different tools; you probably only need one or two of those. I think it's exactly for the reasons that you've mentioned: they don't know what they want, or know about what's available out there and what they should be looking for.
TIM: Yeah, I think in a fast-moving market like data they could be forgiven a little bit for being out of their depth in that regard. As one final question, is there anyone you'd like to give a shout-out to, anyone in the data space or hiring space who you think has done a really good job?
FAYE: Yeah, so I've got a slightly random list. In the hiring space, it's someone that, personally, I learned a lot from. His name's Oliver Wormsley, and he's currently the director of business partnerships at the University of Warwick. It's not specifically for data roles but for his general approach to hiring. I learned a lot from being on panels with him, and this is years ago, but he would really challenge hiring panels on biases and so on, and we're talking 10 years ago that I saw him doing this. He would really challenge why someone felt a particular way about a candidate, and he was also really serious about hiring and supporting people from nontraditional backgrounds who maybe hadn't gone to university and so on. So he's one person I learned a lot from in the field of hiring. Joseph Lanassa is another person I've learned a lot from, both in data and hiring. He's the director and founder of Advanced Data Strategy, which is a US company. On the hiring side, he genuinely looks to hire talent and develop them internally and mentor them to grow, which I think is a great approach and something it would be nice to see from more data leaders. At the same time, what I've learned from him on the data side is his approach to partnering with organizations and working with them to effectively leverage data; the way he does that with organizations that are not particularly data savvy is really impressive, as is his company. I've got two other people I want to mention. One is Nicholas Averseng, who's founder and CEO of UE in France, and his approach to helping enterprises gain and derive meaningful value from their data and AI initiatives is really impressive, particularly in terms of being able to demonstrate that value, because often companies are very concerned with their bottom line. He gets them to think about it in a slightly different way and shows how you can always tie it back to the organizational strategy, so he's someone I've learned a lot from, and he does a lot of talks online that you can find. And the final person I want to mention is someone called Rob Howes. He's currently working at Smarsh; I think he's a strategic account executive. I met him a few years back, and his approach to networking and career advice in the data space is just second to none. I learned a lot from him in terms of how he navigated conferences and so on. Yeah, those would be my four people.
TIM: Some shout-outs to some amazing people there; we'll make sure they hear about these. Faye, it's been a great conversation today. I've really appreciated hearing your insights. Thanks so much for coming on the show.
FAYE: No, thank you for having me. It's been really interesting. Thanks, Tim.