In this episode of Alooba's Objective Hiring podcast, Tim interviews Stuart Davie, VP of AI and Data Science at Peak, to discuss evolving trends and challenges in data hiring across different geographical markets, particularly in India and the UK. They delve into the impact of generative AI tools like ChatGPT on the application and interview processes, examining both the opportunities and difficulties these technologies present. Stuart shares insights on Peak's data-centric approach to hiring, emphasizing the importance of leveraging internal development frameworks to create job descriptions and assess candidates effectively. They also highlight the need to modernize recruitment practices using AI for better efficiency and reduced bias, while acknowledging the complexities and ethical considerations involved. This conversation provides a comprehensive look at how AI is reshaping hiring practices and the potential for future advancements in the recruitment industry.
TIM: Stuart, welcome to the Objective Hiring Podcast. Thank you so much for joining us.
STUART: No worries. Thanks for having me, Tim.
TIM: It's our absolute pleasure, and what I'd love to kick off with is just a bit of a lay of the land from your perspective in terms of data hiring. What are the main trends you're seeing? What are the main challenges you're seeing, especially given you have a lens across different geographical markets?
STUART: Yeah, it'd probably help if I introduced what Peak does and the sort of data roles we hire for. We're a SaaS product company, but we sell AI applications, and data is obviously central to anything to do with AI, so we have a very large data team, which is a mix of data scientists, data engineers, data analysts, and machine learning engineers. So we're probably stretching the definition of data roles here, but together the people in these roles build our applications, deploy our applications, and provide services for our customers. Today we have probably 50 to 55 people in a data role, and by far the majority of them are in what I'd call an externally impactful role within Peak, either adding directly to our product or helping our customers by providing data science or data analytics services.
TIM: Yeah, and what about the biggest challenges at the moment? I'll give an example: one thing I've heard quite a lot recently from people, particularly in North America, is that they're getting inundated with applications, a lot of them ChatGPT-optimized and tending to look similar to one another. Are these the sorts of things you're seeing, and does it differ by market? Because I know you hire in a few different geographical markets.
STUART: There are a lot of applications out there at the moment. We have roles open in the UK and in India, and both get flooded with applications straight away. We opened up a machine learning engineer role in the Pune region of India, and within seven days we had 700 applications. A grad scheme we've opened up in India, which also includes data roles, already has over a thousand applications, and that was in under a week. In the UK it's not quite as much, but it's still pretty much too many to easily process. What we've found in the UK, actually, is, like you say, heavy use of generative AI and large language models as part of the application, which makes it very hard to tell the difference between applications. I was talking to our data science hiring managers before this, and they think a conservative estimate is that about 50 percent of the CVs they're getting appear to have been at least partially written with generative AI, which is a lot.
TIM: That is a lot, and I'm really interested in how you've guesstimated that. Is it because it's using very classic ChatGPT language? Is it because it looks strikingly similar, word for word, to the job description? How did they do that evaluation?
STUART: Yeah, so they have questions as part of the application process, just a few questions we added a couple of years ago, when we revamped some of our recruitment processes, to act as a bit of a filter. Previously you could just submit an application, and we found a lot of people were submitting applications without really taking the time to see what was different about Peak and what the role really meant. So we added just a couple of questions. A lot of people would just skip them altogether, and we figured, okay, we'll qualify those out, and that helped us reduce the volume. Now people seem to be putting those questions straight into large language models and probably copy-pasting the results back in. It's very much in the language: things sound ChatGPT-ish, with very similar responses, responses that don't have the context of somebody's background, so they're very generic. It's very hard for us to accuse people of using these tools; it's entirely possible they're just natural responses that these people have come up with, but it makes it very hard to differentiate. If you have hundreds of applications, you can't realistically interview all of those people. You need to make a shortlist somehow, and if you have a hundred people who sound exactly the same, there's not going to be much space for them on the shortlist.
TIM: And I'm interested in your and the hiring managers' perception of candidates using AI, in this case ChatGPT, to answer those questions. Do you look upon that dimly? Do you feel it's fair game? Are there bits of the process where you think it's good for candidates to use ChatGPT, and parts where you'd rather they didn't? What are your general views on it at the moment?
STUART: Working in an AI company, I think I have to be supportive of people using AI wherever they can; it would be a little hypocritical if I weren't. I think as a society we just need to adapt to the power of these tools and figure out how best to use them. That said, people just copy-pasting these responses completely undermines what we're trying to do from a recruitment perspective, which is to understand who the best candidate is. And it's undermining their chances as well. I think we'll probably reach some maturity as both sides of this equation adjust to the tools. It'll be interesting to see what the recruitment process looks like to handle that.
TIM: Yeah, it's really interesting to see where this goes. From my end, I was hiring not that long ago for some salespeople, and as part of our application and testing process we had some open-ended questions, maybe not dissimilar to some that you had. Ours were quite specific. One I remember was: imagine you got hired into this role, and it's your first day at Alooba. What are the three things you would like from us to give you the best chance of being successful in your role? I was a little bit amazed initially when I saw that the majority of candidates just copied and pasted something from ChatGPT for that, because that was a question where I cared about that specific person's actual opinion and answer. I don't give a shit what an LLM thinks about this; I'm not hiring the LLM; I want to hire you. So I found it strange they would give me that for something that's just their opinion. It's as if I asked them, how's your day going? and they asked ChatGPT to answer. I just found that depersonalization quite frustrating because it limited how much I knew about that person. That said, when I said this to someone, they pointed out that the candidate might have thought that was really part of the evaluation process and there was a right or wrong answer to that question, even though I literally just wanted to know their thoughts. So maybe if we'd put it into a different bit of the process, we might have gotten more honest answers. But you've just told me that you've got these application questions, and candidates are whacking them into ChatGPT too.
STUART: It's even worse. Because we're located across different countries, a lot of our interviews, at least at some stage, are over video calls, and we suspect some candidates in the first stage of our interviews use ChatGPT to answer questions on calls with us. You ask them a question, they pause, and then they say something that sounds very ChatGPT-ish while their eyes are scanning the screen. It's tough. From the employer's perspective, the recruitment process is about getting the best person; you're not so interested in the competition aspect, because you just want somebody who can do the job well enough and fit into the team. From the candidate's perspective, though, they see it as a competition: there are only so many open roles, hundreds of people have applied, and they want to beat those people. I think it's natural to want to use the tools available to increase your chance of success. People just don't realize they're undermining it, because the business is looking for a person. If you could use an LLM to automate all of your responses in every part of the interview process, then the business could probably use a generative AI model to automate the job entirely, and they wouldn't need to hire you in the first place. What they're looking for is deeper than that, and I think candidates undermine their own application by relying too much on the technology. But using it a little to help find weaknesses, or to brainstorm, is different; you can use an LLM as an interview training partner. If you don't have someone you can practice your interviews with, you can ask an LLM to interview you and give you feedback. So I think there are lots of ways you can use these tools productively.
TIM: Yeah, they're certainly being used in a brute-force, crude way at the moment, as a kind of blunt instrument, and I do empathize with candidates, because they're probably looking at these job ads going, oh my God, there are how many applications already? So whatever I was doing before, I now have to 10x it because there are so many other applicants. Which then creates this vicious cycle where you're getting more and more spammy applications from candidates who have applied on a whim, sometimes maybe even automatically through some of these tools. It's some kind of weird race to the bottom, where it feels like that inbound channel is a little bit broken at the moment, especially at the screening stages. How would you receive a poignant, personalized bit of outreach from a candidate who was relevant to you? Let's say you had an open role, you're the hiring manager, or it's one of the hiring managers in your team, and someone sent you their CV through email with a personalized, obviously non-AI-generated message and a couple of bullet points, and they were a good fit. How would you perceive that? Would that be helping you or hindering you? Is this an avenue that maybe candidates should consider?
STUART: It's tough to answer that because if I say that I think it would be helpful, I'm worried I'm going to get a thousand emails tomorrow.
TIM: ha
STUART: ...applying for these roles, at which point it would stop being helpful, because I wouldn't be able to actually read that many. Yeah, I don't have a good answer for how best to differentiate at the moment.
TIM: I'll reframe slightly and make a general suggestion, nothing to do with you or your roles specifically: I feel like that's a weirdly under-tapped channel for candidates at the moment. I get incessantly spammed with sales pitches for various software products, but if I think back over the last four or five years, when we've been hiring various roles for ourselves and for other companies, I've almost never gotten a personalized, poignant, specific message from a relevant candidate trying to get their foot in the door somehow. I've gotten very generic LinkedIn connection messages written in a dubious level of English, with a generic CV, for a job I'm not even hiring for. So I've gotten spam, basically, but I haven't gotten a nice personalized bit of outreach. And what I've noticed recently in our own sales approach, because so many companies are using LLMs to do mass email outreach campaigns, a spray-and-pray method, is that the polar opposite actually seems to work well: hyper-personalized, obviously not AI-generated, obviously written by a human, because that at least has a chance of cutting through a channel that's so busy. I feel like at the moment, for candidates, that channel probably isn't busy, but I wonder if it will be, because they're all going to face the same mess going through the job boards, through the front door, and maybe they need to go through the back door. Here's another way of putting it: imagine you were going for a role yourself in six months; how would you approach the job hunt, knowing what you now know from the hiring manager's side?
STUART: Yeah, I would definitely want to be able to leverage my network rather than make a cold application. As you say, you see it on LinkedIn: there are already hundreds of applications for this role, so why would I even be seen at this point? I would want to use my network, and if there's a role I'm interested in and my network doesn't really extend into that organization, I'd be looking to find connections and a way in. It also helps with the application itself: the more I can understand about an organization before applying for a job there, the better I understand whether it's somewhere I could bring value and how well I would fit.
TIM: There's probably another good use case for an LLM there, actually, which is just: hey, I'm thinking of going for this role at this company; can you give me some more insights about them? Here are the kinds of things I'm interested in, and here's the kind of organization I like to work for; how well do you think I might match them? That could be a use case. Someone mentioned yesterday that they were using a large language model for interview prep, like you mentioned, but with a twist: here's the LinkedIn profile of my interviewer; tell me what you think they're going to ask me, based on their profile, my CV, and this job. It's almost trying to guess what the interview questions might be and what might resonate with that specific interviewer. I wonder how accurate those predictions are; I guess you've only got a LinkedIn profile to go off, but it could be another interesting use case.
STUART: Yeah, I think the space is going to evolve; there's a lot of room for evolution. These technologies are still so new, relative to how fast we're used to these things progressing, and they're evolving very quickly as well. It goes all the way from how you can better prep for interviews, to how we're going to better manage candidates, to essentially automating every part of the application process and understanding the differences there. I think the recruitment industry is going to see some changes over the next few years.
TIM: I think so, and I'm really excited, to be honest, because I feel like recruitment is still pretty similar to what it was 10 or 20 years ago. It's still a job ad, a manual CV screen, some interviews, some tests, and an offer. Surely we can do better than that. I think the gap now is not the technology; it's the application layer waiting to catch up, building out the tools to leverage the large language models we already have, which I think are good enough to solve lots of these endemic issues. Application screening, interview summarization, and feedback are no-brainers, especially if you're in an organization that can't manage to give feedback to candidates. Why not have an AI listening to the conversation and summarizing it, potentially even scoring candidates on their answers, but at least summarizing and giving some feedback to the candidate? That's already possible; there are tools that summarize meetings pretty accurately. So I just can't wait to see what happens in the next 18 months. Is there any bit of the hiring process in particular that you think is ripe for AI, for a large language model, to improve?
STUART: AI in general, I think. It'll be interesting to see how AI scoring is adopted. It's a controversial area. There was the big furor around Amazon, probably eight years or so ago now, over how they were using machine learning models to score candidates, and then they stopped using the models because they found all of this bias in them. Which I actually thought was a backwards approach to solving the problem, because the models were just predicting who the best candidates were based on the bias that already existed in Amazon's processes. Retiring the model doesn't solve the problem; it doesn't make the bias go away; it's just turning the lights off so you don't need to see it anymore. If anything, rather than worrying about machine learning models introducing bias into processes, we should see them as an amazing tool for revealing the biases that are already in processes and that we can't easily measure today. Nobody thinks that they're biased, and it's only when you look at a lot of data, the exact same sort of data set you would use to train a machine learning model, that you see those systemic biases revealed. But it's interesting to me that whilst that was very frowned upon and everyone said it was wrong, machine learning models do seem to be used for automated CV scoring in a lot of applicant tracking systems: applications come in and the system scores the CVs, which to me is the same thing. I don't know if it's come in stealthily somehow, but if it's an ATS doing it, nobody seems to be upset, whereas if it's an individual company doing it, then that's a problem.
TIM: I'd love to drill down on this in a bit more detail. I remember the vague details of that Amazon experiment: they built a model, and it just kept replicating the same biases they already had, because it was selecting the same people they'd always selected before. I wonder if, with a large language model, taking that CV screening step as an example, there's a way we could set it up so we don't give it an opportunity to be biased. Imagine the CV and the application data come in, and the first pass, the first prompt to the LLM, is just to extract the skills, the experience, the relevant bits of information we're looking for. The second step is: based on this, match it to the requirements from the job description or whatever else, and come up with some kind of match score. If we did it like that, we're not really selecting on who the person is; we're basing it purely on the stated skills and experience. Would that approach eradicate those bias concerns, or at least minimize them?
STUART: You need to be careful with language in the particular case study we're referring to. They tried to take away the gender-identifying parts of the data, things like the name, or just the gender explicitly, and I think it was an NLP model, and there was still a correlation between the gender of the applicant and things like educational institution, or even just the language used in a CV. So if you have a business that's biased against a particular gender, then the language that gender more commonly uses in CVs, or the places they studied, or the roles they previously had, can carry that bias as well. I do think what you say should be helpful, though: if you can have very clear, almost objective categories to score within, and use an LLM to help accelerate the bulk processing of CVs into those scores, I think that would be helpful, and it's something I've done on the side myself, just to help process the number of CVs we've got. I'll take this opportunity, though, to say that if we want to improve diversity, equity, and inclusion in the industry, then sometimes we have to rethink the criteria we're looking for as well. It's easy to base what we think we want partly on the people we've seen do it before, whereas we could have a bit more of an open mind about what sort of skills would actually be successful in the business or help us get where we want to go. Sometimes someone says they need a candidate with at least six months or a year of experience, but then they spend six months to a year recruiting for the role, whereas if they just hired someone really talented who was a grad or new to the industry and spent that time developing them, they could end up with somebody even better than whoever they'd have found after waiting that long to make the hire. So I think there are a few things you'd really need to do to properly reduce the bias we're talking about.
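A minimal sketch of the two-pass screening idea Tim and Stuart describe above, assuming a generic call_llm helper standing in for whatever model client is used; the category names, prompts, and JSON format are all illustrative, not Peak's actual rubric:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM client (OpenAI, Anthropic, a local model).
    Assumed to return the model's text completion for the prompt."""
    raise NotImplementedError

# Hypothetical scoring categories; a real rubric would come from the
# role's own requirements or development framework.
CATEGORIES = ["statistics", "software_engineering", "cloud_pipelines", "communication"]

def score_cv(cv_text: str, requirements: dict[str, str]) -> dict[str, int]:
    # Pass 1: extract only skills and experience, explicitly discarding
    # name, gender, age, and institution, which can carry proxy bias.
    profile = call_llm(
        "Extract this candidate's skills and experience as JSON with keys "
        f"{CATEGORIES}. Ignore name, gender, age, and educational institution.\n\n"
        + cv_text
    )
    # Pass 2: score the anonymized profile against each requirement, 1-5.
    scored = call_llm(
        "Score this profile against each requirement from 1 (no evidence) to 5 "
        "(strong evidence). Respond only with JSON mapping category to score.\n"
        f"Requirements: {json.dumps(requirements)}\nProfile: {profile}"
    )
    return json.loads(scored)
```

As Stuart notes, this narrows rather than eliminates bias: word choice and job history can still act as proxies, so the scored output is a shortlisting aid, not a decision.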
TIM: One thing I'd like to get your thoughts on is this: we're both data people, viewing hiring through a data lens at some level, and we're seeing a lot of opportunity with things like large language models to make hiring less biased and to highlight the existing biases, like you've just mentioned. In my experience working in this field with talented HR people, the vast majority of leaders in that space don't come from a technical, data, or scientific background; they come more from humanities and maybe the softer sciences. And I feel like, on average, they don't really view things through a data lens. If anything, when they're presented with, let's do some data-driven hiring to make things more objective, that feels worse to them, and their starting point is often that a test is what's biased, as opposed to their more human-based approach. So I feel like talking about AI-based hiring would engender more fears in them that we're going to make things worse, even though I feel the current situation is so bad we can hardly make it worse. Is there a sense of a skills gap in HR and talent, or some new hat they could wear to learn a little more from the AI, data, and science people in their organization? What do you think?
STUART: I think so, but I can really only speak from Peak's perspective here. We've been fortunate to have a people team that has been really enthusiastic about the potential of data-driven hiring and really on board with it, but they haven't had the skills to know how we should do it, and they had lots of concerns. And I think the skills gap goes both ways. When COVID hit, we stopped hiring, as a lot of businesses did, and then in the second half of the year we started hiring quite quickly, like a lot of companies looking for data scientists and data engineers, and we weren't set up at the time for that scale of recruitment. So my team and I took on some of those responsibilities ourselves, and we wanted to be a lot more data-driven: we wanted Google Sheets with the candidates and the scores for all the different questions they were asked, and things like that. Our people team was worried about the security of that data, and about candidates' right to be forgotten, to be taken off the system, and all of those sorts of things. So the learning went both ways: they were supporters, but they had lots of fears around the legal and ethical side, which were things we didn't actually know much about at first, so it took some time to reach a way of working that both sides were comfortable with. From that point on they were data advocates, even if they didn't have the skills to do a lot of the data analytics and processing themselves; we were there to help. Where I've seen more problems is where you have a senior member of a team who's very used to the world that was, and who wants to hire on judgment alone, because that's how it's always happened, and there are a lot of people out there still like that. Sometimes you can see that's not the best way of going about it, and it's interesting when the data doesn't bear it out. The people team can see in the data that certain interviewers, or the people they hire, have a shorter tenure or run into problems. But how does the people team have that conversation with a VP who is adamant that they not only know best but are also responsible for that business function? It's an interesting question: at what point, if you're the leader of a business function, do you say, actually, the data says I'm not very good at these decisions; I'll let other people make them for me? I'm not sure.
TIM: Yeah, that would require a level of humility they might not have if they're adamant they're right, so it's a slightly circular problem. What I'm really struck by is that, thinking back over the last five and a half years of speaking to hundreds, maybe close to a thousand, data leaders around the world, I'd say even among data leaders, 80 percent would just go, nah, I just trust my gut. And I find that really mind-boggling: in a day-to-day role where your job is to do product analytics and marketing analytics and help your sales team be more effective using data, when it comes to hiring, which is surely the most important thing, you dismiss even the attempt to measure anything and just gut-feel it the whole way. Do you have any thoughts on why that might be the case? Why would we abandon all data when it comes to people?
STUART: I think the scale of hiring influences this. If you have one role opening up every quarter or so, then given market changes and all the other factors, you might not feel that collecting data and trying to use it to guide your decisions would be meaningful. Once you reach a point where you're hiring several of a similar role every quarter, I think you can really start using data more effectively, so the scale of the problem probably means it's not that useful for a lot of businesses. But I think it's also just habit. We at Peak have been quite fortunate because we've generally been a fairly young team, and part of our mission is to build a company that everyone loves being part of. Within the data science team we had our own version of that mission: we wanted to be the world's best data science team, so we were always thinking of ways to use data and data science to do everything better. Whereas if a data leader is used to working in a certain way around recruitment, one that hasn't been data-driven before, it's easy to rest on well-trodden paths while you focus on other things.
TIM: And measuring success is hard anyway. You could say someone not quitting and not getting fired means the hire wasn't a bad one. Or you could say someone you hired stayed in the same job for five years; maybe that's a failure, because they should have been getting promoted. If they left the company for a promotion, is that a good or a bad thing? It depends on your perspective; you could interpret these things positively or negatively. And if you didn't hire many people, maybe a few a year, the sample size is so small that you can't really judge your decision-making either way against all the other factors that come into it. If your business has a downturn and 20 percent of people get let go, including one of your hires, what does that mean about your hiring decision?
STUART: Definitely; that connection to long-term success seems to be missing. People want to measure it, but they don't know how, because, like you say, staying in the same role for five years, is that good or bad? The business needs to really think about what success looks like and then try to measure it, but then there are the timescales. If you come into a role as some sort of business leader, your first year involves a lot of work just getting up to speed before you start making an impact on the business. If we're talking about data-driven hiring, and we're measuring success even a year into someone's tenure, do we think they're going to be a star or not, and that's what we base it on, it's so far away. And I don't know what the average tenure is for data leaders, or leaders of business functions in general, but if it's, say, three years, then on the timescale of bringing this in, they're probably not even going to be around to see it pay off properly.
TIM: Yeah, and the outcome that matters, did I hire the right person or not, is a long way off. But I guess there are a lot of other funnel metrics you could measure that most companies don't. A quick example is Agoda: their marketing team approaches hiring like any marketer would, as just a funnel like anything else. They're really good at interviewer analytics, so they have a leaderboard of which interviewers are committing the most time, because they share the interviewing pool across all the different directors in marketing, and you're meant to contribute enough interview hours each week to make sure you're putting in your shifts. So they have that as a metric, and they have conversion rates, because they want to make sure an interviewer is an effective filter and gate. Imagine if you passed everyone through, and everyone you passed failed at the next stage with a random set of interviewers; that would be a negative signal. They came up with these different metrics, which is pretty impressive, and they just hacked it together as a skunkworks within their own team; it's not necessarily across the whole organization. So yeah, maybe the further up the funnel you go, the more data there is to give you a chance, even with a couple of hires a year, to do something meaningful that could improve the process.
STUART: And we did something like that, though again our scale was a bit bigger, so we had plenty of data flowing through, and I think there are some good metrics there. Another one is the criteria you're trying to hire for. Earlier we spoke about the different factors you could be scoring: if people score strongly on some of these factors, how does that correspond to success in future rounds? We found that some interviewers would let people through at their round who would usually fail later, and we would feed that back to them, and they'd get better. But we also found that some types of questions, or areas we thought were important, just didn't actually matter as much when you followed it all the way through.
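A toy illustration of the kind of interviewer analytics Tim and Stuart describe, using pandas on made-up data; the column names and the tiny sample are purely hypothetical:

```python
import pandas as pd

# Illustrative interview-round export; any real ATS data will differ.
rounds = pd.DataFrame({
    "candidate": ["a", "b", "c", "d", "e", "f"],
    "interviewer": ["kim", "kim", "raj", "raj", "raj", "kim"],
    "stats_score": [4, 2, 5, 3, 4, 1],              # 1-5 rubric score this round
    "passed_this_round": [1, 1, 1, 0, 1, 0],
    "passed_next_round": [1, 0, 1, None, 0, None],  # None if never progressed
})

# Per-interviewer pass rate: an interviewer who passes everyone isn't filtering.
pass_rate = rounds.groupby("interviewer")["passed_this_round"].mean()

# Of the candidates each interviewer passed, how many survived the next stage?
passed = rounds[rounds["passed_this_round"] == 1]
next_stage_survival = passed.groupby("interviewer")["passed_next_round"].mean()

# Does the stats score actually predict next-round success? A correlation
# near zero suggests that question area isn't carrying much signal.
signal = passed["stats_score"].corr(passed["passed_next_round"])

print(pass_rate, next_stage_survival, f"stats-score signal: {signal:.2f}", sep="\n\n")
```

A pass rate near 1.0 combined with poor next-stage survival flags an interviewer who isn't filtering, and a near-zero score correlation flags a question area that may not matter, which is exactly the pattern Stuart describes finding.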
TIM: That's really interesting. Do you remember some of those areas? Were they more subjective things or more objective things that you were evaluating?
STUART: All of our interview questions were quite subjective; we didn't want to give people a test, we just wanted to talk to them. But we had areas, depending on the role: if it's a data science role, there are their maths and stats skills, their software engineering skills, and those sorts of things, and we would ask unstructured questions until the interviewers felt comfortable giving a score for each of them. The reason we did it this way was that we saw the interview process as a tool we're using to find the right candidate for the role, and asking the same interview question of every candidate to try and keep it fair isn't necessarily fair, because there could be bias in the question, or one person could have prepared for that specific question while someone else prepared for a different one. So I don't think it's the level playing field it might appear to be, and it also doesn't necessarily give you the information you need to be confident in how you're assessing someone. So we just had these categories and let the interviewers ask whatever questions they needed to get to a confident answer.
TIM: So you have this interview approach where the interviewers are trying to get comfortable that the person has skills in a particular area, and to determine that, they can ask basically whatever they want. Did that not introduce difficulties in comparing performance, because each candidate got asked different things, and the interview would have meandered in very different directions without a structured set of questions?
STUART: The scoring we used to grade candidates was based on our internal grade framework. So, to go back to the software engineering skills of a particular data scientist: based on how they've answered these questions, would they be an A1-grade data scientist on that line of the required skills, or would they be B grade, or C grade? And it meant that direct comparison of specific skills was, I think, probably easier, because the goal was to come away with an understanding of where they would fit on our development framework, and you would hope that if the interviewers are doing a good job, someone who gets a higher score would be better. But they were able to ask whatever questions they needed to determine what that level is.
TIM: That's a really interesting way of doing it, because you have a very clear understanding of the leveling of the roles, and you're just fitting candidates to those levels by asking questions to evaluate them. I guess the gap for companies trying to do that is they might not have that framework built out in enough detail to begin with, but then that's a plus one for building it. That's certainly something that took us a while to introduce. Our first round of hiring engineers was hit and miss, and part of what we changed was thinking in real detail about the values of our business and what we were trying to hire for, but also coming up with a very clear framework of engineer, senior engineer, lead engineer, and all the different skills and values we expected at each level. That made it a lot easier to understand what we were looking for. We still kept a structured interview, but thinking about it now, the way you've laid it out would also be interesting to try, for sure.
STUART: Yeah, I've been thinking about this space a little lately, and I've started to think that the values of a business, its development framework, and its recruitment should all be very strongly linked. It sounds obvious when you say it: what you're looking for when you're recruiting could almost come straight from the framework. For example, every time we want to create a new role to recruit for, we have to fill out a job description, and we have to write what skills we're looking for and that sort of thing. If the role already exists in the business, all of those skills should already be defined; what level those skills need to be at should already be well understood for the different grades, and they should be what the people currently in the job understand they need. So you should be able to create job descriptions, with the specific skills on them and the criteria for assessing candidates, very quickly from your internal development frameworks. And if you don't have that sort of development framework internally, then people internally aren't going to know what great looks like for their roles anyway, so how are you going to hire great people to fill those roles?
TIM: So the starting point should be developing those frameworks, and then the job description is an output of that, rather than starting with the job description, in a sense.
STUART: I think you don't want to get too hung up on it; you can't say, we need people, but we can't start recruiting until we've built out these development frameworks. And the size of the business matters too: if you're a small business, you probably want something a bit more light-touch than a bigger business would.
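A small sketch of the framework-to-job-description link Stuart describes, with an entirely hypothetical development framework; Peak's real grade definitions and skill lines aren't public:

```python
from dataclasses import dataclass

@dataclass
class SkillLevel:
    skill: str
    grade: str         # e.g. "A1", "B", "C", matching the internal grade framework
    description: str

# Hypothetical framework entries, keyed by (role, level).
FRAMEWORK = {
    ("data_scientist", "senior"): [
        SkillLevel("statistics", "A1", "Designs and critiques experiments independently"),
        SkillLevel("software_engineering", "B", "Writes production-quality code with review"),
        SkillLevel("communication", "A1", "Explains model trade-offs to non-technical stakeholders"),
    ],
}

def job_description(role: str, level: str) -> str:
    """Derive the skills section of a JD directly from the framework, so the
    ad and the interview rubric are generated from the same definitions."""
    lines = [f"{level.title()} {role.replace('_', ' ').title()}", "", "We're looking for:"]
    for s in FRAMEWORK[(role, level)]:
        lines.append(f"- {s.skill.replace('_', ' ')} (target grade {s.grade}): {s.description}")
    return "\n".join(lines)

print(job_description("data_scientist", "senior"))
```

Because the job ad and the assessment criteria come from the same source of truth, they can't drift apart, which is the link between values, framework, and recruitment that Stuart argues for.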
TIM: Stuart, it's been great chatting with you this evening. It's been really insightful hearing all your thoughts on hiring, and thank you so much for sharing them with the audience.
STUART: Thanks for having me, Tim. It's been a good chat.