In this episode of the Objective Hiring podcast, Tim sits down with Sami Alsindi, a seasoned data leader with extensive experience in hiring and data consultancy.
Companies are facing unprecedented challenges when it comes to hiring the right talent. From the rise of generative AI to economic shifts and the increasing use of automation, the hiring landscape is more complex than ever. In a recent podcast interview hosted by Alooba founder Tim Freestone, Sami Alsindi, Director of Data Science and MLOps at an automation consulting company, shared his perspective on the current hiring storm and where it's headed.
This discussion delves into the pressing challenges hiring managers in data science face today. As generative AI, rapid skill specialization, and evolving economic pressures transform hiring, Sami sheds light on how to maintain fairness and objectivity in recruitment processes.
The conversation revolves around practical insights for data hiring leaders, from dealing with the surge of AI-generated applications to revamping interview techniques to ensure real-world skills are evaluated accurately. This post captures key moments and actionable advice from the conversation, offering a view of the complex challenges and strategies to achieve fairer, more efficient hiring in data-focused roles.
TIM: Sami, what are your biggest hiring challenges right now?
SAMI: It's a bit of a perfect storm. There are a few different factors that, in isolation, might each be somewhat manageable, and I think a lot of them have been bubbling under the surface for quite a long time, but they've all hit at once, and it's becoming a bit of a nightmare. I can speak both as a candidate and as a person who's likely to recruit staff in the future.
SAMI: So first you have the fake or auto-generated jobs. I know LinkedIn was doing this type of thing for a long time, where it would just churn out fuzzy job descriptions. It caused problems for us at my previous company.
SAMI: We had a hiring freeze, and we saw this job description for a data scientist. I was head of data science at that company, and we hadn't authored it. We were like, what's going on here?
TIM: Oh well.
SAMI: People in my team were unhappy that there was a salary figure attached to a job that didn't exist. I spoke to the recruitment team, and it turned out to be an auto-generated LinkedIn posting that even they hadn't authorized.
SAMI: I was thinking, this is awful. It's one of those things done to make the company look like it's doing really well, or to bump up investor interest. I wasn't even aware of it being treated as a good thing.
SAMI: I can't see it being a good thing. But you have that, and then of course it doesn't take much to write a prompt that says, spit out 20 job descriptions that are slightly different, based on these criteria.
TIM: Hmm.
SAMI: If you're a recruitment agency — the good ones won't do this, but the bad ones could make it look like they have a lot of jobs on their books and then try to pull in lots of interested candidates that way. Then of course you have the applicants, and I think this is where the problem really lies. If you're really dedicated and you can accept rejection, you could apply for maybe 10 or 20 jobs a day. You're supposed to tailor the cover letter and CV to each job, and the Gatling-gun approach doesn't work out as well. But now, with ChatGPT, it's "here's a job description, write my CV as if it were the perfect CV for this job description." You can automate that, and there are people applying for thousands of jobs.
SAMI: And obviously there's some person who's going to review all of this, and of course the problem is that there are too many now, so that review is being automated away as well. That old trick of writing MIT, Harvard, Caltech in white text on your CV is probably not going to work anymore. I never used it — I like the idea — but it's probably not going to work anymore either. So you have the worst aspects of gen AI
SAMI: already kicking in there. You then have this interesting economic situation, which looks different in different countries, but even in the US, which historically rode out COVID a bit better than the rest of us, you have a lot of layoffs from very high-paying companies, putting very experienced people and skill sets into the same market as people who don't have those skills.
SAMI: So while there's a lot of hiring demand for really experienced AI engineer or MLOps engineer type positions, there aren't enough people with that specific expertise.
SAMI: And there are really, really experienced people who don't quite have that expertise but are applying for those jobs anyway. So you have a lot of demand in the wrong places, in a sense, and a lot of candidates over-competing for it.
TIM: Mm.
SAMI: So yes, there's really specific expertise, such as AI engineering, that's emerged really quickly, and you now have to address it. An AI engineer, from what I understand, is basically someone who can do the plumbing to get a response out of a ChatGPT-like interface. This used to be an ML engineer or data scientist position, someone with that experience. You don't really need those anymore — you should, because it's still an ML black box and you need to understand how to get the best out of it.
SAMI: But you could take a risk and say, no, actually I'm just going to plumb these in and put the output straight in front of a person. That's gotten rid of three or four very specialized technical jobs. So those people aren't qualified now, and they're facing competition from people who are, in a different way. It's a really interesting trend.
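(To make the "plumbing" Sami describes concrete: below is a minimal, hypothetical sketch of that kind of integration work — wrapping a hosted LLM behind a small function an application can call. It assumes the OpenAI Python SDK and a ticket-summarisation use case; neither is something Sami specifies.)

```python
# Hypothetical sketch of the "plumbing" an AI engineer might write:
# wrap a hosted LLM behind a small function the rest of the product can call.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_ticket(ticket_text: str) -> str:
    """Send a support ticket to the model and return a one-paragraph summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarise the user's ticket in one paragraph."},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_ticket("My export job has been stuck at 90% since yesterday."))
```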
SAMI: And obviously, during the pandemic and slightly before, money was in an odd place: there was lots of cheap money for companies, they went on hiring sprees, and that led to the sort of glut that's caused a lot of these problems at the moment. So all of those things together cause a huge bunch of issues. I work in consultancy — I've worked in consultancy for most of my career.
SAMI: Most companies, and consultancies too, understand now that they need a data-first offering, or they need to be a data-driven company, and they need to embrace generative AI, which is in this really odd position where it's sort of here and also not here. It's built on the promise of what it could be in a few years' time, and that's led to a massive push to integrate and incorporate a technology that isn't quite ready and requires some bespoking. If you can imagine
SAMI: what it took to get ML into your company five years ago compared to today — it's a bit like that, except it's evolving much faster, because there's so much money involved at the moment. Three or four years ago I worked in a voice-synthesis generative AI startup. Creating a copy of someone's voice took a team of five people a few months — we got it down to a few days towards the end — but you needed our expertise and ability.
SAMI: You can now do it with a few minutes of your voice, maybe even a few seconds if you go to the right websites, and it will sound quite a lot like you. So you don't need that domain expertise anymore, the turnaround is much faster, and the product is better. But the point being, if you had invested in that team,
SAMI: that would now be a bad investment, because you no longer need that team. Could you have waited another year? Are you willing to ride out that storm? Where should you allocate those resources? It's really challenging to optimise for that. And as a consultancy, we have to have an offering. My view of generative AI is that you need to get your data in order — to set up your cards correctly so that you have the right hand to play when the technology is ready, when the use case you need it for is ready, and when your customers are interested. We're a bit far away from that. But doing that allows the client to understand what they actually have available — or, as an internal company, what data you have — making it available for everyone to consume and play with, and getting the
product owners, data owners, data governors, and the individual boots-on-the-ground folks to understand that they all have a role and an interest in this succeeding. And if they can become more data-driven as well, they can suggest the use cases that a data expert wouldn't know about, because the data expert isn't the subject matter expert or the end user who understands what the thing is for.
SAMI: And all of that is to say that you need to have a position on this, but no company is ready, and no individual person in that company is ready. So the competition to build these types of offerings is drawing on the same pool of people who are building those offerings and all the connected pieces as well. That's another massive hiring burst — again, sort of in the wrong places at the wrong time.
SAMI: If you believe that the available skill set is a finite resource, then you can't train enough people with this skill set. If this is the new paradigm we want to embrace, there will be retraining and things like that, but that's going to add to the pressure on the people who are technical experts.
SAMI: Because if an experienced developer who's not an AI engineer, plus GitHub Copilot, can do the same as an experienced AI engineer, you've just added another person who can compete for that job. And it's probably better for society that we don't just have tech bros being the people who run all business from now on. I'll mention one comment I heard. I was in a meeting recently, and somebody talking about the future of gen AI mentioned the words "zero-employee business". I think they meant one-employee business, but the idea of a business being run just by the bots I found quite funny. So maybe that's the problem. Anyway, so you have that second point — sorry, I've rambled a little. The last, extremely important point is that,
SAMI: with COVID having pushed a lot of people remote, teams aren't gathering together in person anymore. That magic — I can't think of a better word — that you get from people working together in person, collaboratively, the "fifth conference room", the conversation everyone's having in the corridor outside the meeting, isn't happening as much anymore. And you have that coupled with the viewpoint that gen AI will replace interns or low-level positions.
SAMI: So the low-level positions aren't in person anyway, and they may not need to exist anymore. That's going to cause a huge dearth of opportunity and inflow over the long term. So I'm not sure where the future talent is actually going to come from. I think it will be the people who embrace the tools the best, but they may not be the best people, in a sense, if that makes sense. That might be the more important thing in the long term. Those three factors together are like a perfect storm.
SAMI: And it's going to make hiring really difficult, which is why things are going back to the opposite approach — and we may well cover this later in the podcast — but everything's going to go back to in-person interviews, because I don't know if you're using ChatGPT unless you're talking to me face to face. And if you've got an implant, good on you; if you're using some smart glasses, I might not know about that either. But I want to know what you know. We all know you can cheat on a test if you have a calculator next to you, but I'd like to know how you think, your conceptual understanding.
SAMI: It's one of those things where it has made finding good candidates more difficult. Despite everything I said, there are loads of good people looking for jobs and loads of good jobs available, but actually matching them together is quite complicated at the moment. I hope it resolves, but honestly, I'm not optimistic.
TIM: Yeah, it seems like it's going to be a race to the bottom, or a fight over who can use AI the most, on either side.
SAMI: Yes, absolutely. It's an arms race. The first one to work out how best to use it will get a leap — a generational advantage — over the next, and then everyone will copy them, and it will always be about who's first. And it may not be — it will not be — the players you think. Just using the generative AI models as an example: companies like Anthropic and Mistral and
SAMI: OpenAI — you might have heard of some of them, but they aren't Microsoft and Google and Amazon. It's companies you wouldn't have heard of, and it's going to happen again.
TIM: Yeah.
SAMI: And there's a little bit of — it's a new... what do you call it? I'm trying to think of the expression. A new person's game? That's not the right expression, but something like that. The thing is, if you're thinking, right,
SAMI: we have a team of a thousand that can do this, you won't face the same intellectual challenges as a team thinking, we only have three people, we have to solve it a different way. So it wouldn't surprise me if, for recruitment specifically, it's the one-person recruitment agency that embraces these tools best, not the established player with tens of thousands of candidates and four thousand recruiters. It might well be.
TIM: Yeah. Yeah. It does seem like we're on the precipice of some major change, but it's just not quite there yet. Every time I use a large language model, I'm always sort of initially impressed. But then once I keep digging, I'm like, Oh, hang on. This isn't good at all.
SAMI: This is exactly the problem, and I've seen very different analyses of it. The one I believe — because it feels intuitive based on my experience, and it sounds like you've had a similar experience — is this. If you know nothing about a topic, it can leapfrog you a little above the rest, but you don't know what you don't know, so it might give you false confidence. If you have expert-level knowledge, the types of tasks you'd use it for — to fill the gaps in your knowledge — aren't solvable by the current generation. So it's the middle tier, the people who are mid-experienced, who know how to do something and just want it done for them, who have the ability to sense-check the output. And that's not where most people are, in any field. So the idea that this will
just be this personal expert, this "digital Aristotle" — I've heard it called that — that will teach you everything you need to know: great, but there's a lot of falsehood in there. And it's not because people are ill-meaning. There's a lot of falsehood on the internet too, but you can generally trust Wikipedia and then do your own research from there. Where's the equivalent for this? There isn't one; it's a new thing you need to do. You just have to do the task again properly and then confirm it's the same output — you see the problem. This will change, but it's not like there's going to be a point where you suddenly know the answer. It's like humans: you might think you know the answer the whole time, somebody tells you a different answer, and you don't know whether they're right. It's all based on fuzziness, and there's no certainty. One of the things that's terrifying — and I'm not going to talk much about my scientific background unless you really want to go into it — but one of the things that's terrifying about the bare metal of science is that nobody knows anything. It's all based on fuzz. And it's hard to maintain sanity when you think,
but we don't know anything — it's all assumptions, all the way down. The lights are on and we're not fighting each other; this doesn't make any sense, this isn't supposed to happen, the odds of it happening are so low. And I hated that. The reason I left science is because I wanted to know things, and you can't know anything. Anyway, sorry, we're going off on a tangent. But this is very similar to that. As a former neuroscientist, it's beautiful to see these things. Fundamentally,
when one tries to understand something like consciousness — what makes us different from the apes, right? — it's a bunch of connections. That's it. That doesn't make any sense; how could that lead to anything? And then you see this thing, which is a bunch of connections. It is cheating, because it's using all the human text and images ever generated, so it's going to learn the statistical trends that our brains have learned and evolved to produce. It's still cool to see, but it isn't trustworthy. I've started to use the expression "unpaid, untrained intern". Would you hire an unpaid, untrained intern and just give them your job? No, you wouldn't, and you wouldn't expect that to succeed. So why would you expect this to succeed?
TIM: Yeah, I think the key thing is we need to be very cognizant of its limitations, and everyone using these tools has to be questioning them in a way they're probably not expecting — especially if I think about, say, teenagers.
SAMI: Yes.
TIM: So let's call them almost like AI natives. If this is all they're used to now, this is just normal.
SAMI: yeah Yes.
TIM: I want to find something out. I go to chat GPT.
SAMI: Yes.
TIM: And if they're not consciously thinking, oh, this has got a, I don't know, 30% chance of being complete bollocks or a 10% chance of being partly wrong, that's a big problem.
SAMI: Yes, this is exactly the problem. And I'll be completely frank and say that those critical thinking skills are difficult to find in broader society.
TIM: Hey, the world's greatest spam engine, maybe.
SAMI: It's the sad rise of things like fake news, the susceptibility of people to being tricked, and the mistrust of the people and websites trying to rectify that.
SAMI: Because, "oh no, I read ten people who said this, but this website says I'm wrong, but it was written by the other side," and things like that. You've just amplified that problem a million-fold with this extra — as you said — massive spam generation machine on steroids.
SAMI: Those skills are difficult to find, and they're difficult to develop as well, speaking personally. It's not easy to do; it's hard, actually. So I'm going to go back to science. Back in my undergraduate days I had a really interesting, transformational lecture series about the politics of science.
SAMI: It was about how you get a project funded when you need nation-state-level funding and you don't know if it will succeed, because politicians don't fund that type of research. They talked about the Human Genome Project and the over-promising and under-delivering around it — a very, very important project. But the scientists knew — it's not that they were ill-meaning — they knew it wasn't going to produce the things that presidents of the US were saying on TV, but they needed the money, they knew it would be important, and they were willing to play a game that I was unwilling to play as a scientist. That's one of the other reasons I left. But that sort of thing —
intentionally adding a new term to your paper so it looks like cutting-edge research when it's actually the same thing that existed before, that type of stuff. I felt really slimy when I realised I was part of this, but you need to do it to succeed to some extent, and I don't like that. And again, it's hard. I remember properly reading full academic papers for the first time
SAMI: and assuming the person was right, because they were a learned scientist. Then I read another paper by an equally learned scientist with the exact opposite viewpoint, and I thought, these can't both be true.
SAMI: And I have no way of knowing which one is true. I could ask my boss, but he's read the same papers as me. Unless he's read a third paper he can give me, there's nothing to disambiguate them.
SAMI: I have to decide, but I have no information.
TIM: Yeah.
SAMI: Yeah, this is really tricky. You can spend years training yourself to do this, and then somebody just says, "here's the answer" — as all of the startups and big economic players are doing. They're not going to say, actually, you need to study this for a few years to understand how to tease apart information. I'll share one more anecdote. A friend of mine asked me to prepare a presentation on what AI means for gaming — it was for a gaming company, and they were interested in those aspects. This was just as ChatGPT was starting to show promise, and I was going to joke that I'd just typed "what is AI in gaming in 2024?" into ChatGPT and printed out the answer. So I tried to do it. I said, write me a literature review about AI and video games in 2024.
SAMI: And it said, I can't do that, that would be unethical, whatever. And I swear to God, I opened a new tab — not even incognito, just a new tab — asked the same question, and it just spat it out. I thought, great, fantastic, this is amazing. The guardrails do not work. Isn't this brilliant?
TIM: Oh, that's perfect. Now, if I'm not mistaken, when the printing press was first created, it was used as a massive propaganda tool, despite our remembering it more positively.
SAMI: Yeah, of course.
TIM: So I guess this is what these tools will be.
SAMI: All technology can be used for good or for evil, and it's much easier to use it for evil.
TIM: Yeah. Yeah.
SAMI: It's much more straightforward. That's always what comes to people's minds first — the atomic weapon versus atomic energy. It's just a shame. Yeah.
TIM: Right now, in light of all these changes — this kind of perfect storm happening in hiring, as you put it — has this already impacted how you think about hiring, how you have run your hiring process, how you will run your hiring process?
SAMI: In short, yes. One has to be mindful of the stuff we just discussed. So I think we'll return to in-person interviews, which is a problem: what if we're hiring international candidates? What if we can't get candidates to travel in, or can't afford the travel expenses? It restricts the pool of candidates we can work with. It's probably also going to lean more on networking, or people we've met or spoken with before. That's not a bad thing, but it does lessen — you hope hiring will be a meritocracy, not based on who you know or on awareness of one person versus another.
SAMI: And it immediately puts people off. I love working from home, I love going to the office, but I like the mix. If you say, I need you to come in for an interview, that person now needs to come in an extra day, if that makes sense. It's a shame, but I think it's a necessity. And even putting aside things like the deepfaking aspect, quite simply, I want to know what the person knows, not what they can ask ChatGPT. And it's hard. I remember I had the advantage once that the candidate had not planned it very well — I asked a question about... actually, maybe I should first explain how the hiring process goes. What I've established before, and we'll probably replicate a similar process here: we work in consultancy, so we want people who have a good understanding of what they don't know, but who can speculate about the areas they don't know, who have some instinct or gut feeling. They can have an informed conversation with an expert — say a data engineer, if they're a data scientist —
SAMI: without completely noping out of it or just deferring everything to the other person. They make an informed effort to understand that person's world. At GlobalLogic we call this a T-shaped individual: you have your central expertise, and you have fringe areas, ideally two of them. For data science that might be data engineering and business intelligence or data visualization. So I'd ask questions about data engineering, and I remember seeing a candidate whose eyes you could literally see darting back and forth, because he was clearly googling the answer. I had to tell him, please, I'm not asking you about what you don't know; I want you to tell me what you do know. And if the answer is "I don't know", that's completely fine — there's no problem there. So that's the first aspect. The second aspect, whether it's in person or not, is
SAMI: live interview tests — not allowing the candidate to prep ahead of time, which again is a bit of a nasty thing to do; I've been told it's a harsh interview tactic. One of the things I've done as a consultant is interview people as if I were a client. I'd ask them really obtuse, poorly defined questions — I'm not going to name any names, but I've had similar questions from clients before. I'll give an example: "Hi, I'm a fashion business and I have 10,000 data points. What can I do?"
TIM: Well, yeah.
SAMI: And I'd wait for the person to ask questions. The first thing that happens — every time; we live at a specific end of a particular spectrum, let's put it that way — is that they talk to me as the interviewer rather than as the client and say, well, I don't have enough information. And I say, right, you can ask questions of this mock client. So they gradually get out of the box and ask those questions, and I'll fill in the gaps for them. The intention is that for 20 minutes we're going to do this.
SAMI: And I want you to solution on this. I want you to convince me, as this skeptical pseudo-client. And I try to explain to people that the personal skills involved are the element that gen AI won't do very well for a while. You might be talking to somebody who is not cooperating with you, and you don't know why. It could be because they had a bad day, or because they've got issues in their personal life that they shouldn't be bringing to work — but we're all human. Well, some of us are, for now. You don't know whether their job is at risk, whether there's a redundancy process going on. You don't know if the thing you're building is going to make a core part of their workflow obsolete. They don't believe you, they don't trust you, they don't like the technology you're involved in. You need to work with all of that, and you need to learn your strategy for dealing with it — which is not "I can't work with this guy", because then you've just lost the project. It goes to the first company or person that can work with that person.
SAMI: I've had a client that had all of those issues simultaneously — and I wish I was joking; it was a nightmare. I didn't know any of it, so I couldn't help, I couldn't do anything about it. I've had clients run — excuse my French — shit tests, where basically they want to see if you think the right way, so I had to replicate something they already had. And I was like, well, that's silly — why did you ask me to duplicate work? You could have just talked me through it. Sorry, again, that's a tangent, apologies. But the thing is: I want to know, can you handle that situation?
SAMI: Because I know that, ChatGPT or no ChatGPT, you can overcome any technical obstacle. If you're an interested, interesting, motivated person, you can probably learn a new technology; you can probably do the right Stack Overflow queries to get the job done. What we do in consulting data science is not very difficult from a programming perspective. It's: what's in this data? What do I think I could find if I did something with it? And how do I communicate that to the client? That communication aspect is number one. It's the hardest bit to interview for and the hardest bit for someone to prepare for, so we might lean more on that. The one other thing — and I'm not sure what we'll do in the future — is that I have used technical tests before. Usually it's just asking something like: Amazon has a deep learning toolkit available, what's it called? I'd ask that sort of thing because it's less likely to come out of someone's general knowledge. If they've used the tool, they know: oh, Amazon SageMaker lets you build machine learning models. That type of stuff.
SAMI: So I'll do that frequently. I'll say, pick your cloud of choice and we'll focus on that, but I'd like to know if you know Amazon's cloud. And we'll come up with a rough, talked-through solution architecture for a specific problem they wouldn't have faced before — and if they have faced it, they talk through what they've done in the past. That type of thing, to try and make it a bit more live. In the past I've also done the take-home interview test type of thing.
SAMI: And the problem with that is it turns off a lot of people, and I understand why — I've been on the receiving end of it. It's a huge amount of preparation. I got criticized by a company for bad time management because I didn't get it back to them by their deadline, and I couldn't get them to understand that you are hiring for a senior position and I have a demanding job.
SAMI: I don't know what you were expecting: you gave it to me on a Monday and expected it back by the Tuesday. I couldn't do it — you didn't tell me I'd only have one day, so I told you a different day, that type of thing. That kind of disrespect for the person on the other side comes across badly, so I'll be as mindful as I can. What I've done instead — and this is the change — is to say, hey, if you have a portfolio of code you've written on your GitHub or somewhere, send me a link. I can take a look at that and get an understanding of whether you have some ability. Now, that might become a bit more challenging,
SAMI: because if you can spit out something that isn't actually yours — it's not someone else's either, but it could plausibly be yours — that's going to be harder to catch. And you can't run the code, which is usually the thing that tells you whether it's AI-generated or not: it doesn't work. So that's going to be trickier for actually assessing technical ability.
SAMI: And if you have a rapid interview process — we've had candidates in the past that we needed within a few days for a short-term engagement — spinning up a development environment or a LeetCode-style exercise isn't really feasible. So you sort of have to take them on trust. And I've had people in my team before — at my previous company, I should stress — assume someone had more knowledge than they did because of how they came across. When someone is an expert, they often explain things really badly — and that's not a good trait in an expert; you should be able to explain things properly — but we interviewed someone who was perhaps the worst candidate I've ever interviewed, and he was so bad at expressing himself. I remember speaking to my team and saying, you wasted my time, guys, what the hell? And they said, we thought he just had no experience with interviews.
SAMI: And I said, I spoke to him — he told me he'd literally come off an entire day of interviews on the day I interviewed him. So that wasn't it. He just did not present well, unfortunately. And it shouldn't be this way, because
SAMI: society should be more inclusive, tolerant, whatever — but it is this way. If the client speaks to someone and doesn't like the person, or the person doesn't come across well, or they don't warm to them, it's not great. And if I, as the interviewer, don't get a good impression, it's probably not going to go well with clients either. So — again, sorry, a bit of a ramble — but the dream of hiring is a standardized process that anyone can go through, and we're getting further and further away from that, because now it needs to be bespoke to getting the candidate in. Some of them want to do certain steps, some of them don't. I'm not actually sure anymore. And I feel like generative AI could in future be a really good tool — like a free screening tool — but not now. And you can't count on the responses coming back not being generative AI themselves, which is the big problem.
TIM: What about stepping back now into your past? If you think about all the different candidates you've interviewed, some ultimately got the job; the vast majority didn't.
SAMI: Yep.
TIM: Are there any patterns you've recognized between those two groups that typically differentiate the successful from the unsuccessful candidates?
SAMI: I have a great story. The short of it is: if your CV states your experience accurately, and you apply for a job you have the qualifications for, and you interview, you should get the job. You might be worthy of the job but miss out to someone else with higher qualifications or whatever else. So the number one rule is: don't apply for a job you can't do, and don't lie on your CV. There's some embellishment people do — I personally don't — but don't take it to a ridiculous extent.
SAMI: So my story is that I was interviewed for a big data engineer position about five years ago, over the phone. And I didn't know why they were interviewing me — actually, it was one of those platforms where you get matched to jobs, so they picked me, if that makes sense. And I thought, well, you can clearly see on my CV that I don't have experience with big data engineering, but maybe it's an aspirational posting, or maybe they mean data science, because job roles aren't always clearly defined. And about ten minutes in, on the phone, I could hear them say, why are we interviewing this guy?
SAMI: This is one of the funniest things. I could have taken such offence — why are you interviewing me? Why are you wasting my time? You're the ones who picked me. But I just laughed, thinking, you didn't even hang up the call or go on mute, or say, hey, I'm sorry, you don't have the skills we're looking for.
TIM: Yeah.
TIM: Yeah.
SAMI: I thought it was so funny.
TIM: ah
SAMI: So that's a good example. If you're looking for a data engineer, don't hire a data scientist, maybe, and that would be number one.
TIM: yeah
SAMI: The other main one — and we talked about this a bit earlier — is that if they're bad at presenting and articulating in an interview, they will be bad at presenting and articulating in front of customers. Unsurprisingly, for a consulting company. I'll share a story about a company from many years ago. We were part of an interview panel, and this person was very, very bad at expressing themselves. It is unfortunate that if English is not your first language you have a higher hurdle to clear here,
but we recommended a hard no. We didn't know whether they had the technical experience and expertise, because they did not have the communication skills to articulate what their ability was. We were overruled by our head of delivery and they were hired anyway. One reason may have been that the company was growing like this while our team was here, so we needed to match the growth rate — it's always a painful problem when you're in that sort of explosive
growth phase. And this company was also very, very restrictive on the use of contractors — which is fine, I understand the premise, but you shouldn't bring somebody bad in as a permanent hire. It's much harder to let a bad permanent hire go, and it doesn't work out for either side. This person was hired against everyone's recommendation,
and on client calls would literally say, "I don't know how to answer this, I don't know how to solve your problem," that type of thing. And a big unwritten rule in consultancy is that, at the very least, you should say, "let me get back to you," or, "I'm not entirely sure I know the answer to this, but I will look into it."
SAMI: It's like improvisation — if you've ever seen improv comedy, you shouldn't ever flatly say no to someone's improvised suggestion; you immediately shut down all the creativity and the thought process, and it comes across really badly. This person did not last at the company very long, so we did them no favors bringing them in. But that's an example of an unsuccessful candidate. So generally, those are the patterns — nothing fancy or sophisticated. My conclusions shouldn't surprise anyone.
SAMI: I think that's a different matter.
TIM: So, sorry, you're almost implying that most of the time the right candidate gets the job, and that each stage is reasonably accurate — the recruiter looking at the CV for five seconds, the screening, and so on.
TIM: The first-round interview as well. You're implying there's almost an existing meritocracy. Is that what you're saying? Or perhaps not?
SAMI: That's how it should be. I can't say whether it is, but that's how it should be.
TIM: Okay.
SAMI: I guess I would ask the opposite question — because it sounds like you're asking me a leading question — which is, in what cases does it not work? So you have people who misrepresent themselves on their CV:
SAMI: if they have misrepresented to the extent that they can't answer the technical questions, that should come out in the technical interview; otherwise something is wrong with the process. So I've not come across a candidate who has done well at all of those stages, got the job, and then turned out to be a bad candidate.
SAMI: I'll put it that way, if that makes sense. Not yet, I must admit.
TIM: So you've found that performance in the hiring process generally correlates quite well with performance on the job. That's really fascinating. I wonder, actually, thinking about it now, whether in consulting interview performance is quite a strong measure of on-the-job performance, because you're always kind of in front of clients.
TIM: Whereas if you're, say, a data analyst in a tech company and 90% of your work is in a visualization tool or SQL, you don't have to be on the ball, thinking of things as you go, the way you do in an interview.
SAMI: yeah Yep.
TIM: Maybe that makes more sense of why it tends not to correlate as much.
SAMI: Yeah, possibly. It's interesting to hear that. What I would say is, in the cases where I have hired, it's been into a team where you're going to be surrounded by technical experts more senior than you — or, if it's just me, fair enough — who can help you along that journey. In the case where it's, say, a marketing company doing B2B marketing and you are the one data hire, you don't have that fallback. So if that person had the technical expertise and all that sort of stuff — and let's say another firm was brought in to do the technical interview, as I would probably suggest, because if you don't have technical skills, how can you assess whether the person you're hiring is the right hire? — and they still don't do well, I would put that down to the situation around the person rather than the person. If someone has passed the interview process and just doesn't do their job, that's a different story, and obviously I have seen that. It's very obvious. It's the sort of thing where —
SAMI: okay, I'm going to share one story. Again, I was not involved in the interview process, and I wouldn't have said this until now: if you meet a technical person who types like this, ask some questions.
SAMI: It's happened twice in my career. Both of them weren't very good, and both were let go very shortly after they joined. They were both quite senior hires. I wish I was joking; I'm not.
SAMI: The CEO of one of my previous companies typed like this, and I thought, fine, I understand.
TIM: oh
SAMI: But when a senior consultant or a principal consultant is typing like this, I have some problems with that.
TIM: What an interesting proxy. Yeah. I used to think about some of those little things in hiring — little indicators you'd get in the interview process, or even in the first week. Like, can they turn up on time?
SAMI: Yeah.
TIM: Did they answer my question when I asked them a question?
SAMI: I mean, can they join a Google Meet?
TIM: You know, those kinds of little things.
SAMI: Can they join a Google meet?
TIM: Right. Yeah.
SAMI: Jesus Christ. We were interviewing three different candidates for a really short-notice hire recently, and one of them just couldn't join the meeting. And I'm like, okay, if you're on a client meeting and you can't join the meeting, are you going to do well?
TIM: Yeah.
TIM: Can you maybe figure it out? Yeah.
SAMI: Yeah.
TIM: Yeah. I remember someone who used to hire data scientists, and they had a take-home case study type of thing.
SAMI: Yeah.
SAMI: Yes. Cool.
TIM: And they had a step zero, which was: they sent the candidate the data set — it was in a Google Drive folder or Dropbox or whatever — and they explained that step zero is to be able to unlock the data set.
TIM: It was password-protected or something.
SAMI: Right.
TIM: but I think it was, they had to unhash it first.
SAMI: Okay. Okay.
TIM: to then get the hash, sorry, the unhashed password to then enter that. And they made it abundantly like, trust me, this works. Like there's no issue with this, you will be able to do it. But as you said, it was amazing the number of people who would drop off at that point thinking that it was just an issue with how he'd set it up.
TIM: But I was like, well, no, that's step zero, is you just have to be able to solve these little problems that are inevitably going to come up.
SAMI: Yep.
TIM: I thought that was quite an interesting approach.
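(Tim doesn't spell out the exact mechanics of that "step zero", so the following is only an illustrative sketch under assumed details: the candidate is given a SHA-256 digest of the archive password plus a hint, and has to recover the password before they can open the data set. All names and values here are hypothetical.)

```python
# Hypothetical illustration of a "step zero" gate; the real exercise Tim
# describes isn't specified, so this just shows the general shape.
# The candidate receives a SHA-256 digest and a hint ("a lowercase fruit name")
# and has to recover the archive password before they can touch the data.
import hashlib

TARGET_DIGEST = hashlib.sha256(b"papaya").hexdigest()  # what the hiring team would publish

def recover_password(candidates: list[str], target_digest: str) -> str | None:
    """Return the first candidate word whose SHA-256 digest matches the target."""
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_digest:
            return word
    return None

if __name__ == "__main__":
    hint_list = ["apple", "banana", "papaya", "cherry"]
    password = recover_password(hint_list, TARGET_DIGEST)
    print(f"Archive password: {password}")  # -> papaya, used to unlock the dataset
```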
SAMI: Something came to mind, actually, when you mentioned that. I interviewed at a company a number of years ago. They gave me a take-home task, and I did the task exactly as described. It was something like completing the code for a particular step that wasn't working properly — a diagnosis-type exercise.
SAMI: What they had sent was supposed to work, but it didn't. So I corrected it, fixed it, talked about it in my write-up, and then did the test exactly as they'd asked. And I didn't pass the interview. I looked at the feedback, and it said things like, well, this person didn't use this library.
SAMI: And I looked at the task, and it specifically said: complete this without using this library. So I responded and said, excuse me, are you serious? If you'd told me, use best practices to fix this however you want, fine. But you told me to fix it the way you specified, and then marked me down for not thinking outside the box. It came across quite offensively, so I sent quite a scathing email saying, I'm glad I don't work for you, because if that's how you treat people — are you expecting someone to summon up some magic that doesn't exist?
SAMI: That's how you get the wrong answer to the question. I've been part of many bad interview processes, but that was the first technical test where I thought, I don't know what you want me to do.
TIM: Yeah. What about, if you think over your experience hiring people, are there any big regrets you have, any big fails you've had as a hiring manager — or perhaps ones you've seen a colleague have when they've gone to hire someone?
SAMI: I mean, the main one was that candidate that we shouldn't have hired. I think.
TIM: And of course, if it's too personal, we don't want to drop anyone in it either.
SAMI: No, no, it's not that — I'm just trying to think of the right way of putting it. I've been blessed not to have many on my side. I think the main ones were the person who typed like this, and the person we shouldn't have taken on because of the bad interview feedback. I think the main problem is when you get a bad job specification, or a bad understanding of why you're hiring or who you're replacing, and you set the person up not to succeed. For me, that's the biggest issue.
SAMI: But that hasn't happened in a process I've administered; I've seen it happen at other companies. Oh — sorry, there is one that comes to mind; I'd forgotten about it. It wasn't a failure of mine, but it almost was, because it was a process I put together, assisted by some team members, and I won't say which company it was. We were hiring internally and externally for data scientists, so it was going to be really exciting and we wanted to get everyone interested.
SAMI: I set up the job description, set up the interview process and so on, and we wanted to advertise on our internal jobs board. Our recruiting function very kindly wrote an advert that said: join our exiting data science team.
SAMI: And I thought — exiting? Exiting data science team?
TIM: Exiting.
SAMI: A few months later there were some redundancies, so it came truer than expected. But the point was that they hadn't proofread what they'd written, and I thought: "exiting" — are you serious?
TIM: ah
SAMI: The first thing I saw was exciting.
TIM: Well, they're in HR — maybe that was a Freudian slip, some inside information.
SAMI: I mean, you never know, right? You never know. So maybe they were correct all along. But I said, maybe we should say "exciting" rather than "exiting". I don't know. Yeah, that's probably the biggest fail I've been part of, though it wasn't really mine. But again, this isn't a hard thing to do, right?
SAMI: If you hire well, write a good job description, and the candidate has represented their experience correctly, they should succeed. If something breaks in that chain — yeah,
TIM: ah
SAMI: it's not going to go well. But yeah.
TIM: Yeah. What about — you've just painted this picture of a bit of a perfect storm, as you put it, in hiring, and this kind of AI arms race where candidates are applying with AI, companies maybe are screening with AI, and there's this massive volume of applications. And you've also said we're probably going to have to get back to more of a human-based approach — more face-to-face,
TIM: more live interviews, those types of things. A lot of this sounds like it could be at odds with a more objective, fairer approach, which in theory is a more standardized approach. So in this new world we're imagining, of a more one-on-one, high-touch approach, can you think of any ways we could still keep it objective and fair, even if it is a little bit inconsistent between candidates?
SAMI: Yeah, we talked a bit about this already, but as I said, a lot of the things that would be good to do are a bit more difficult nowadays. The number one would be having more diversity on the interview panel. That might be harder now that not everyone is in the office every day, if you have an in-person interview;
SAMI: it could be easier with remote interviews. But regardless, it's really important, and it will naturally act as a remedy to the sorts of things like Amazon's automated AI screening only hiring men, because you have representation from other people, and they'll notice things that you don't. That has helped in the past.
SAMI: Stop trying to hire more of the same people who already exist in your company — that again goes to the Amazon AI hiring thing. One of the great things about the team I put together at my previous company was that, without selecting specifically for it, it was one of the most gender- and culturally diverse teams I've ever worked in. And the real benefit wasn't the DEI-points thing — I was proud of the team being the best candidates — it was that the ideas people suggested were the most diverse, because it was a diverse team. So when you had ideation sessions to workshop things, you naturally got a really great mix of ideas that I wouldn't have come up with on my own. And I remember interviewing at another company previously who said, we want to hire people with PhDs from the great universities — such as me, I guess. And I thought, OK, if you only have a bunch of Oxford PhDs in a room, you're not going to get the best ideas; you're going to get a bunch of Oxford PhD ideas. And if you already have one person like that, you've ticked that box; you don't need any more.
SAMI: I thought, why are you narrowing your pool of candidates to this?
SAMI: The only people who care about this stuff are the wrong people.
TIM: yeah
SAMI: I'll put it that way. If you've been through the experience, you'll understand how unpleasant it can be, and it's not going to forge the best candidates ever in the fire, that type of thing. Also: stop faking jobs.
SAMI: Sorry — no, please, go ahead. Yeah, sorry.
TIM: So you're saying the diversity of people directly, causally creates the diversity of ideas — which again, perhaps in consulting, is even more valuable, because you can go to the client with multiple ways of thinking.
SAMI: Yes.
SAMI: Yes, I think so — and not just in consulting. Though I'd say it's probably not direct; I would put it that way.
TIM: Okay.
SAMI: But if it were that direct, people would already be doing this, right? That's kind of the problem. But it is measurable. I'll give you a silly example — an extreme one. Women in tech is obviously a big problem — or rather, the number of women not in tech is the problem.
SAMI: So the number of products that get ideated and workshopped without any consideration of women at all is silly — and that's 50% of the economy. If you're designing a product and you've never spoken to a woman about it, or there isn't a woman on your team, you can expect the ideas to be bad, right? So it's not surprising that self-evidently stupid stuff makes its way into the market, because nobody ever spoke to someone who would have caught that particular bias. There are scientific examples of this too: a lot of laboratory testing used to be done on 100% albino male rats,
SAMI: and then the first time a drug might make it into a female body would be a human woman about to take the drug for the first time, after it had gone through all the approvals.
TIM: Jesus Christ.
SAMI: And you're thinking, are you serious? Nobody thought about that in testing? Yeah — nobody thought about it, because there weren't any women there. Should it require having women in the room to catch that? No. But are we where we are? Yes. So we should probably just do it.
SAMI: So I would say that's probably the number one way. And as a nice byproduct of having a diverse team, you get this — it's a nice benefit to have. So I wouldn't say it's that direct, unfortunately, because otherwise, again, people would already be doing it.
TIM: We've probably got time for maybe one more quick question, because you need to leave in a few minutes, just for the upload. What about a hiring hero?
SAMI: Do you think?
TIM: Is there anyone, if you think back over your whole career so far, who sticks out to you as having done hiring in a particularly good, innovative, or interesting way?
SAMI: Yeah, I'm probably going to talk about my previous manager, Alan Logan from GlobalLogic. When we talked at the interview stage — this was in the cultural-fit interview — we went far past just "what are you coming in to do, job-wise?" We discussed the long term: what would you do if you had more latitude? What would you do if you had more team members? That type of thing.
SAMI: And I talked about the fact that I thought there should be a data science team. We were a 500-person UK consultancy called ECS back then, and we weren't doing data science. I was surprised by that, thinking: we have consultancy engagements with some of the biggest companies in the UK, and a lot of those engagements are digital transformation projects, so there's massive scope for "we've put your data onto the cloud — now what?"
SAMI: type discussions. So I was surprised by that, I talked about it in the interview, and when I joined and saw the opportunity I said, can we do what we discussed in the interview? He said, yeah. So beyond taking things forward — and let's put aside for a moment that he was a fantastic people manager — what you got when you joined was: oh, this company is filled with people who have that kind of bubbling motivation.
SAMI: Filled with non-traditional, interesting, quite diverse candidates — a lot of the stuff we've talked about — but again, it wasn't selected for. It was more that he'd get a CV and he wouldn't be thinking, oh, sorry, it doesn't say "Doctor" at the beginning. He'd be looking at the rest of it, the bits beyond the five-second scan that some hiring managers might give it.
SAMI: So you got a really amazing pool of candidates coming in, and then you'd interview them and have absolutely fascinating discussions about what they'd accomplished in their careers, because these were people who were not out of the box. And that was great. So I'd say that team. That said, consultancies are interesting places — you're not necessarily working with people in the same practice as you; most projects might be one or two people, and if it's a data science project, it's probably just going to be a data scientist. But the people I met — the data architects, the data engineers; he ran the data practice — those people are still lifelong friends, because they were just really cool people. So I'd say it's probably that.
SAMI: It says a lot: regardless of what the specific process is, if those are the people who make it through and they become lifers, I think you've done your job right.