In this episode of Alooba's Objective Hiring podcast, Tim interviews Vadim Radinsky, Director of Data Science and Product Analytics, to discuss the challenges and opportunities in hiring for data roles in an era where candidates increasingly use AI to tailor their resumes. They explore whether the hiring process is fundamentally broken or simply a work in progress. The conversation dives deep into the impact of AI on resume submissions and the potential ethical issues around using AI in hiring. They also examine the importance of structured hiring processes in making recruitment fairer and more efficient. Vadim shares insights on managing bias in AI applications, the future skill sets needed for data roles, and the value of a team-driven candidate selection process. The episode concludes with a discussion about balancing structure and flexibility in hiring to ensure the best outcomes.
TIM: Vadim, thank you so much for joining us on the Objective Hiring podcast. I'm really looking forward to our conversation today.
VADIM: Absolutely, thank you, Tim, for having me. I'm excited to be here.
TIM: I'd love to start with just an in-your-face question: Is the hiring for data roles fundamentally broken? If so, how do you think we could solve this broken mess?
VADIM: In my opinion, fundamental changes like the ones happening in the industry are an impetus for quick adjustment on our end rather than a sign that something is broken — it's an unfinished project, a work in progress. I think recruitment is similar. It has definitely changed a lot, both in the process I've seen — we started using AI to screen resumes faster — and on the applicant side, where people use software to apply to about a hundred positions at a time, quickly distributing their resumes to hiring teams. So in my opinion, efficiency is increasing dramatically. Now, whether it's broken or not is probably more sensitive on the candidate side, because candidates get the brunt of practices like companies advertising positions that aren't actually open internally, and in that way it might seem very broken. So while the process keeps getting more efficient, what's happening in the market can definitely seem daunting — especially to those in the job market for the first time: recent graduates, folks trying to find internships, people new to how recruitment now works. They look into best-advice columns and find that things are very different now. There will be an adjustment period, but hopefully it's not as broken as some people may think.
TIM: Yeah, I think that's a nice reframing of the problem anyway—to think of it as imperfect and we can make it better rather than it's fundamentally flawed and we need to burn it down and start again. That's probably a healthier way to think about it anyway.
VADIM: and you guys have probably seen it too: things that you've done differently in the past, now they are more efficient, but they are also maybe unfamiliar at times, and that's part of the process. I'm sure you've faced that too.
TIM: Yeah, absolutely, and I'm interested in drilling down on one thing you mentioned: candidates applying en masse to different roles. One consequence of that approach, I wonder, is that for each role a candidate has applied to, they have less skin in the game, because they've applied at almost zero marginal cost. That's certainly some feedback we're getting from companies: they feel that candidates who apply to a job are a little less engaged than they probably would have been in the past. Is that something you've found yourself?
VADIM: I've seen a few resumes recently that were so specifically tailored to our needs, our position, and our job description — we typically look for either specific software or a specific subset of skills that fills the exact need we have — that when I review them, the effort still shows.
TIM: So in a sense, it's harder to fake.
VADIM: Maybe in that regard there's still effort in the resumes I'm reviewing. Now, if you're a generalist applying more broadly, you can lay your experience out in a way that's more palatable to recruiters, as long as you're not lying — as long as you're being honest on your resume. And it comes out in the first few minutes of the conversation, even with HR: in our case we give HR technical questions, and they ask them from the get-go, so that information surfaces even before we begin the internal interviews for the position. All in all, I'd say it's much harder to fake it if the recruiting team knows what they're looking for. That said, what I've heard outside of my team is that a lot of resumes have started to look very similar. I can't personally attest to that — maybe because what we look for is a little different — but friends and colleagues in the industry say resumes are becoming carbon copies of each other. They're harder to distinguish, and they all look more or less perfect, especially when candidates use AI to tailor them, so it's getting harder. I'd say the remedy is a very tailored position — a very tailored list of requirements — if you're able to do that and you're in a technical field. At least that's what we've found effective.
TIM: Because then, for any candidate who did fit that very specific set of requirements, you felt it was more likely to be real — you can't just apply en masse to a generic-looking role when the role is so specific.
VADIM: One example: we recently hired for a tag manager role, and the posting was highly tailored to code-level collection of data on the page. We put very specific questions on our website related to a specific tag-management platform, so unless you're in the field and have actually done the work, you wouldn't know how to answer them. And if you claim you have but haven't, you're lying, and that would be found out in the first five minutes of the call — which is also a waste of everyone's time.
TIM: Yeah, I'd be interested to see what comes out in the metrics around conversion rates in that first interview, because one thing I'm hearing is exactly what you've said: CVs are now looking more uniform, and a lot of candidates are probably using ChatGPT to optimize their CV at some level. But I wonder about this: if part of the process of writing or updating a CV has been outsourced to a large language model — which will make up whatever it needs to make up to fit the CV to the job description, if that's its task — then the odds of it inventing something untruthful go up. And I wonder if a candidate can now disassociate from that a little bit, because it's not their lie; they haven't sat down and written it on the CV themselves. It's a "hallucination" — I love that euphemism — that the large language model came up with. So I wonder whether the truth score of a CV might be about to drop a lot, because candidates no longer have to commit to writing the lie or the exaggeration themselves; they've outsourced the morality of it to a large language model. What do you reckon?
VADIM: Totally, and there's evidence of that in little tiny things — especially folks adding SQL and Python when they're not necessarily versed in them, because the resume looks better tailored with those skills included. There's some resume inflation; I've seen candidates include skills they don't actually have. It's something small — a skill or a platform can just slip in — and it happens a lot. With that said, I'm not against using AI to tailor resumes, not at all. I'm never a hundred percent sure whether a resume was tailored or doctored by AI, but even if we see signs of AI in a resume, to me it's not necessarily a red flag. If anything, it shows they can use the tool — they're capable of using AI for work-related tasks. Fantastic! I'm all for it; please go ahead. But checking AI's work after the fact is a very important part of working with AI, and if they're not diligent enough to do that, you end up seeing these resumes with slightly inflated skills.
TIM: Is there any bit of the hiring process that you would feel uncomfortable with candidates leveraging AI? So we've mentioned CV optimization; let's say it's fine as long as they, as you say, validate the output and they still own it, but are there any limits to where they should or should not be using these tools in the hiring process, in your opinion?
VADIM: I love this question. I think the most frightening scenario is a live responder: you're asking the candidate questions about something technical, and it's AI answering, so they aren't real answers. That's pretty frightening, because everything we're trying to suss out — whether the person has critical thinking, whether they can solve the problem — is defeated if AI can do it for them. It defeats the purpose of the interview.
TIM: So that's the part of the process you're uncomfortable with.
VADIM: To me, yes — though I haven't seen people use it successfully. I've had candidates who, when we asked a very technical question, would just say, "I'm going to share my screen and do it live," and they'd show it. That's probably the strongest thing you can do as a job seeker: when you're asked a technical question, share your screen, whip out your keyboard, and type away — show that you can do it, or even parts of it. That's impossible to fake, at least to date; maybe in a few months that will be a little different, but something like that is definitely not produced by AI. Anything else outside of a live interview I don't personally have a problem with — unless a candidate is trying to get information unethically, which is probably the exception to this rule. I'd say if it's not deceiving, it's assisting: go ahead and use AI to enhance your resume. Go ahead and use AI to look through your LinkedIn page. Ask AI to generate practice interview questions, give it your answers, and solicit feedback. Use it. I personally think it's an amazing tool that can help in so many ways. And in a way, if everyone is going to use AI, it's going to raise the standard of interviews overall. I see a lot of positives in it.
TIM: Yeah, and for candidates competing with other candidates, you wouldn't want to be the one person not using this borderline magical tool that everyone else is using to their advantage. You'd better use it at some level where it helps you. I think maybe there are some scenarios where it might actually dilute quality in some situations because I find that the kind of tone and general type of content it would produce would be quite similar to each other. And so if you wanted to have a really aggressive creative approach to break out from the norm, then maybe using the tool that everyone else is using might not work, but that's probably again a minority of use cases where you'd need to do that.
VADIM: Have you faced something like that yourself — something that stuck out to you, where you felt cautious about it and stepped away?
TIM: Yes — two scenarios I can think of recently, very specifically. One was when we were hiring sales development people at Alooba. As part of our evaluation process we asked a question: "Imagine it's day one at Alooba — you got successfully hired. What are the three things you would need from us to give you the best chance of being successful in your role?" It's a very personal question; I wanted this individual person's opinion. And because we asked it of many candidates, I could extract that data, aggregate it, and end up with a data-driven onboarding document: oh wow, 80 percent of people wanted to know what their goals were in week one — great, we should include that; 50 percent of people wanted a one-on-one — so we'll have that. It was such an interesting information-collecting exercise for building an onboarding document. But what I found strange was that so many candidates had clearly just copied and pasted the question into ChatGPT. So I wasn't getting their views; I was getting ChatGPT's views, and I do not give a shit what ChatGPT wants to do on day one or week one at Alooba — I care what the person thinks. I found that quite weird. Though to be fair, when I mentioned this to someone, their pushback was that the candidates might have thought this was part of the evaluation process and that they were being graded on their answers — which they weren't — but given where we used it in the hiring process, I can see how they might have thought, "I'd better not just be truthful; I'd better get ChatGPT to answer this for me."
VADIM: No, I can see that — maybe they were just trying to be at their best and make sure they didn't say anything out of pocket with you.
TIM: Okay, great. The other area that left a slightly bitter taste in my mouth was someone passing off ChatGPT's work as their own original effort and thought — like, "Oh, I took the weekend and put together this sales strategy for you, just because I'm extra keen and motivated." That would be an amazing, unrequired flex for a candidate who actually put in that level of effort. Because what they were pitching was the effort: "I've gone and spent hours doing this thing you didn't ask me to do, to show how motivated and keen I am" — which is fair enough. But if you've just put in one prompt to ChatGPT to produce the output, it's not really the same thing. The end outcome might be the same, but if the thing you're pitching is the effort, then I feel that's a bit disingenuous. So it wasn't really about the use of the tool; it was the misrepresentation of how they'd used the tool versus their own effort. That was a bridge too far, personally.
VADIM: Yeah, I'm totally hearing you — I guess it goes back to the same point about honesty, yes.
TIM: For the live interview itself, I feel like at the moment candidates trying to use ChatGPT will probably come unstuck, because I'd have thought it's a lot more cognitively difficult to read the output of a large language model, listen to what the interviewer is saying, and present your thoughts all at once — who can do all those things at the same time? If you actually have the skills and experience, you're probably better off just answering the interviewer's questions directly yourself, especially because it's very difficult to read the output of a large language model and relay it in natural-sounding English without sounding like a robot. So for candidates trying to hack a live interview, I feel like that would make life more difficult, not easier.
VADIM: It would be hard, but it could help with highly specific questions if you're doing it live — not just copying and pasting, but actually taking the information in as you go. Then, in a sense, you know it.
TIM: Yeah, that's a fair use case — that wouldn't be so hard to translate back into normal English. And like the candidate who proactively shares their screen and works through the problem live — oh cool, they actually know what they're doing. I wonder if we'll get to a point pretty soon where it's just accepted that candidates are going to use a large language model — especially because we need them to use these tools on the job, since they're such amazing efficiency enhancers — so it becomes part of the evaluation: "Okay, I know you're going to solve this problem with ChatGPT to begin with and then edit the output. Share your ChatGPT screen with me. What's your prompt? Why did you write the prompt this way? How did you then verify and validate the output?" Maybe it should just be brought to the fore, given that candidates are probably using it anyway, instead of them trying to pretend they're not and hiding those kinds of things. Can you imagine that taking place? What are your thoughts?
VADIM: To me, what's important is: can you solve the problem? Can you find the answer? If you can — where there's a will, there's a way — that's perfectly fine. One of the stages of our interview process includes assessments; we give homework rather than exercises candidates do live, and we let them work at their own pace. If they can find the answer, that's great. And usually with coding, ChatGPT and other AI tools are not always great, especially with small, specific problems, so you can usually see the logic — or see whether it's AI-produced. But then, in the past you could ask a friend to help you with a home assignment, or students and teachers would solve something together, even live — that was always there. As long as you can solve the problem on the job, day in and day out, not only during the interview, then using an AI tool is perfectly fine. As a matter of fact, we used one recently with the data science team: we were picking apart the formula for a statistical test, and we used ChatGPT for it. It saved us a lot of time and a lot of research — it told us it was the binomial distribution, and we went from there. It's a time saver and a huge helper.
TIM: I wonder also, thinking about the use of these tools, whether the expected skill set of candidates is going to change over time — in the sense that, let's say, writing SQL and Python code from scratch may now be a bit old hat: you feed in a prompt, it writes the code for you, and you're left to unpack and validate it. Maybe that will change too. I wonder if the blend of skills a data professional needs is going to be different. Will some of the basic technical things be AI-driven, so you need more of the soft skills, or vice versa? How do you imagine the skills a data person actually needs will develop?
VADIM: There are courses now on prompt generation. The quote I like the most is that the most common programming language of the future is English. I think a lot of prompts, calculations, and even ETL code pieces are going to be written in English; therefore comprehension and the ability to express yourself clearly will be paramount, which in turn means, yes, soft skills are going to be very useful for highly technical positions — and that was not the case for a while. Conversely, if you're able to express yourself, if you're able to describe the problem accurately, you'll get your answer faster. So I think the gap between soft skills and hard skills is going to blend over time, and in the future you'll make just as much use of soft skills as hard skills in those highly technical jobs.
TIM: and I wonder whether maybe the slight caveat to that would be if now people have access to what is a technical product like ChatGPT or other large language models or any other AI products, whether now some people who are more on the soft skill side of things maybe they need to understand at least at some level how these tools work to use them properly, so maybe there's like an emerging technical skillset as well, perhaps.
VADIM: Yeah, I think so — and there's also the filtering skill. Every once in a while, AI is going to just spit out garbage, and you need to stay in the habit of critical thinking, always analyzing: does this make sense? Does it make logical sense? Do I need to throw in another prompt — "Hey, can you break it down step by step?" — so I can actually suss out the logic? Critical thinking is still going to be there for all of us; AI probably will not solve every equation, at least not yet.
TIM: Not in our jobs at the moment, anyway — time will tell what AI can do. Hopefully it can automate away the shitty bits of our roles so we can do something higher value-add. In hiring, I feel there's a lot of opportunity for AI to improve various bits of the process. Personally, I feel the way traditional hiring is done can be very biased and open to a lot of potential discrimination, yet in the narrative around AI, one of the main question marks people have is, "Oh, AI could be used in a very biased way, or it could be trained on biased data." But in hiring in particular, the current way we do things is already so biased that I wonder how AI could make it any worse. I'll give you an example. There have been a lot of experiments done around the world, in different markets, where researchers investigated whether there's bias against people from a certain background when they apply for roles. One was done in Australia a couple of years ago by the University of Sydney: they created something like 10,000 CVs, split into three buckets. The CVs were similar across buckets in experience, background, and skills. The only real difference was the name: in the first group, the first and last names were Anglo; in the second, an Anglo first name with a Chinese surname; in the third, Chinese first and last names. They then applied to tens of thousands of roles in Sydney and Melbourne, across different seniorities, domains, and industries — controlling for many variables — and measured the rate at which those applications got a callback, whether a literal phone call or an email.
The first group — Anglo first and last names — got a 12 percent callback rate, and the third group — Chinese first and last names — got only a 4 percent callback rate, all else equal. So if you applied to a role in Australia with a Chinese name, you had only one third the chance of getting a callback. You could probably quibble with the exact mechanics of the experiment, but it seemed legitimate to me: they controlled for the obvious variables around years of experience, education, and so on. And I've seen similar results in other markets — the United States, England — with different subpopulations as well. So I feel like there's already a lot of bias in the process. Could AI, in your view, improve this and get rid of some of these systemic issues we've seen for so long?
VADIM: We've seen studies like this here in the States too — specific names get fewer callbacks, and that's been a problem for a while. There have been a few attempts at remedies, including in Australia: blind resumes, removing names and presenting only skills and a set of paragraphs about education, rather than the traditional resume PDF. With AI, what's been reported — what I've read — is a tendency to replicate bias: what we put in is what we get out, and using AI here can be pretty dangerous, because these models are doomed to make the same mistakes that people make unconsciously. That means recruiters need to be especially careful using AI for these kinds of assessments, because the AI may be replicating the same biases and the same mistakes humans make. What we try to do is isolate both the postings and the assessments to specific skills, and that requires a human approach — probably until we reach a point when all humans are clear of implicit bias, which I don't know is going to happen; probably not. It's just human nature to generalize and label. Until then, I think it's very important for recruiters to check manually that their postings don't have gender-specific language. And then, during the interview stage, everyone takes the same assessments with the same time limits — and if someone needs to be accommodated for a disability, that's fine; that also needs to be provided. So I do think this is a place where a lot of scrutiny is still necessary.
TIM: So you're saying this is still a place where scrutiny is very much warranted, basically.
VADIM: Totally — mostly because we don't want AI to repeat the mistakes that humans make, and that area will probably keep lagging until humans become perfect, which is never going to happen.
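As an aside, the arithmetic behind the audit-study figures quoted above is easy to check. In this sketch the per-group application counts are illustrative assumptions; only the 12 percent and 4 percent callback rates come from the conversation.

```python
# Back-of-the-envelope check of the callback-rate gap described above.
# The per-group application counts are made up for illustration; the
# 12% and 4% callback rates are the figures quoted in the conversation.

applied = {"anglo_name": 4000, "chinese_name": 4000}   # assumed counts
callbacks = {"anglo_name": 480, "chinese_name": 160}   # gives 12% and 4%

# Callback rate per group
rates = {group: callbacks[group] / applied[group] for group in applied}
print(rates)  # {'anglo_name': 0.12, 'chinese_name': 0.04}

# All else equal, the relative chance of a callback for the
# Chinese-name group versus the Anglo-name group:
relative_chance = rates["chinese_name"] / rates["anglo_name"]
print(round(relative_chance, 3))  # 0.333 -> one third, as stated
```

The ratio of the two rates is what "one third the chance of getting a callback" refers to.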
TIM: No, it's not. I wonder whether, for the CV screening case in particular — a stage that might itself be almost redundant now if, as you say, all these CVs are starting to look the same, with more hallucinations and exaggerations making the CV even less valuable than before (and in my opinion it was never very valuable) — if we had an AI doing that CV screening step, we could solve the bias problem. It wouldn't be that challenging, in the sense that we wouldn't be asking the AI, "Hey, give me other CVs like our current employees," or "Give me CVs like these thousand LinkedIn profiles of current data analysts who look good" — because you're right, that would inherently be biased. Instead it could be a two-step process. Step one: "Here's the CV — extract all the skills and experiences relative to this job description." Now I've got structured data from unstructured data. Step two: "Rank this — how strong does this skill seem? How strong does this experience seem?" — and come up with some kind of CV score. Maybe that would be the way to avoid the bias, because by the point it's doing the evaluation, it doesn't have any other information to base it on. What do you think?
VADIM: Totally, Tim. I think it depends on what kind of questions you're asking. If your questions are around specific skills and just extracting the data, I think that's a perfect use case for AI. But if you were to ask, "Hey, ChatGPT, do you think this candidate would be a good fit for the role?" — you might want to be a little careful with questions like that. Your prompts are good examples of what you could use AI for, so I agree with you there.
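The two-step screening idea from the exchange above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not a production screener: `extract_skills` stands in for a hypothetical LLM extraction prompt (implemented here as a plain keyword matcher so the sketch stays runnable), and the scoring step only ever sees the structured skill set, never the raw CV, so name-based signals cannot reach it.

```python
# Two-step screening sketch: (1) turn unstructured CV text into structured
# data, (2) score only that structured data against the job requirements.
# The extraction step is a stand-in for an LLM call; here it is a simple
# keyword matcher so the example runs without any external service.

KNOWN_SKILLS = {"sql", "python", "tableau", "statistics", "etl"}

def extract_skills(cv_text: str) -> set[str]:
    """Step 1: unstructured CV -> structured skill set.

    In a real pipeline this would be a prompt like 'Extract the skills
    from this CV as JSON'; the matcher below is a runnable stand-in.
    """
    words = {w.strip(".,;:!").lower() for w in cv_text.split()}
    return KNOWN_SKILLS & words

def score_candidate(skills: set[str], required: set[str]) -> float:
    """Step 2: rank the structured data against the job description.

    The scorer never sees the raw CV, so names and other bias-prone
    signals cannot influence the score.
    """
    if not required:
        return 0.0
    return len(skills & required) / len(required)

cv = "Jane Doe. 5 years as an analyst. Strong SQL, Python and Tableau; some ETL."
required = {"sql", "python", "statistics"}

skills = extract_skills(cv)
print(sorted(skills))                               # structured data from step 1
print(round(score_candidate(skills, required), 2))  # 2 of 3 required -> 0.67
```

The same shape works with a real model: step one returns JSON, step two receives only that JSON plus the job description, which is what keeps the evaluation blind to everything else on the CV.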
TIM: Yeah, and I guess the take-home is learning how to write these prompts well, because in theory that prompt might work, but the nature of large language models is that I might try it and it might get derailed for one reason or another. And this changes with different models and different updates — you can see the output change quite dramatically.
VADIM: Totally. There are probably going to be guides on what kind of questions to ask and not to ask once AI is fully a part of the recruitment process, but I'm with you, and probably a lot of filtering of that information once it is spit out by AI is needed.
TIM: So we've talked about AI CV screening and maybe some ways to make it unbiased, but you also touched on existing ways to make the hiring process fairer that have nothing to do with AI. You mentioned a structured approach where candidates get asked the same questions over the same amount of time. Can you unpack why you take that approach and how it helps make the hiring process fairer?
VADIM: Totally — that's the approach I take with candidates. Typically I structure the interview as a case study, so that it tests, A, your critical thinking and, B, whether you can solve a problem from point A to point B. Asking every candidate the same questions also lets me vet my own questions: they're sometimes not perfect, and if candidates keep asking very similar follow-ups to clear things up, it means something is wrong with my question — it isn't clear. That signal disappears if I throw different questions at different candidates, where maybe some of them just get lucky with easier ones. It also makes my notes — especially my notes to HR — comparable.
TIM: And why the same questions at the screening stage?
VADIM: The reason I use a consistent set of questions is that during the screening process, the HR team that passes candidates to me asks those technical questions at the initial stage of the interview. They collect a set of three to five questions from me and put them to the candidate, and if the candidate is completely lost, that saves me time — I won't have to interview that person, because HR can screen not just for cultural fit but technically as well. So it's efficient and easy to compare. Then, with the technical assessment, we also try to be as fair as possible. We have — or I like to think we have — a democracy on the team, where we vote on candidates. We had candidates complete SQL assignments, and we literally looked at them side by side. SQL can be written in different ways — shorter code or something more intricate — but the goal is to solve the problem, and the team votes on which assignment is stronger. Team voting on candidates is probably one of the best things we do in our recruitment process: the team gets to know the candidates, and whichever candidate gets the highest number of votes gets the offer. That's pretty much the process, and making sure the assignments are comparable is what allows the voting system to work.
TIM: That's such a fascinating way to do it because, yeah, not only have you made it, as you say, comparable because you've had consistent stages and timing and questions and tasks, but then you've added this kind of data-driven element and a democratic element, as you say, and you must also then get buy-in because that person's joining the team based on their peers having selected them, at least at some level. So they can't really complain, can they? They've been invested in the process. I wonder if that also makes them receive the candidate more warmly rather than if they had no skin in the game at all. What do you reckon?
VADIM: Totally — and it was different before. There were times when I'd send a candidate to the team and say, "Hey, run your own interview," and they'd reject the candidate, and I'd have to find someone new because the team didn't like them. And there have been times when we voted on candidates and I got outvoted — I'd vote for one candidate, the team would vote for a different one, and they'd win. And you know what? Ten out of ten times, the decision of the majority of the team was better than mine. There've been a couple of times when I thought, "I don't know if this candidate is going to do well," but I was in the minority, and I'd still hire them, because that's the process and the promise of the democracy. And then I'd think, hell yeah, I'm glad we did that, because the team had a better nose and a better screening process than maybe I would have had. So it works — which is maybe unusual, but I've found it very effective to trust the team. If I already have a strong team and they know what they're doing, why wouldn't I trust them to make the right decision about a candidate? Somehow it works, and I've been pretty happy about it.
TIM: Somehow it works. I feel like that's the tagline for democracy that they probably came up with a few thousand years ago.
VADIM: It works; somehow I love it.
TIM: Yes, okay, that's great. Yeah, it's a great method you've come up with, and I feel like I should also throw at you the devil's advocate argument I would normally hear from people who prefer a very unstructured approach. You've laid out this set of steps a candidate goes through that is consistent, and you mentioned asking questions in a consistent way; something as simple as asking different questions to different candidates means, of course, they've had a different experience. Some people, if I mention the structured approach to them, would say, "Oh, it's too rigid; interviewing can become like a box-ticking exercise." A particular example someone gave was that the structured approach had maybe gone slightly too far when they'd applied to a famous company six months or so ago: the questions were predefined in advance, I think the candidates even got emailed them, and then they'd leaked on the internet, so each interviewer figured the candidates had already prepared for all these questions and thought, "Therefore I'm going to ask whatever I want." It's this weird circularity where the structure has gone too far, and then the interviewers have gone, "This is stupid." So, yeah, the general pushback I get is that it's too rigid, and if it's too rigid, you don't give a candidate an opportunity to really shine; that's the perception I get. What do you make of that devil's advocate argument? Does it have merit, or are they missing the upside of having the structure?
VADIM: It has some merit. I think a lot of companies now have five or seven stages, and that's the kind of example people give of where the recruitment process has gone too far. Why? What are you going to learn at stage six that you didn't learn at stage five? To me, if you're overcomplicating the process in the name of structure, then you've probably gone too far. But if it's completely unstructured, I would personally lose my mind; that sounds insane. How would I compare the candidates? How would I have my notes for HR?
TIM: Don't give a
VADIM: shot just not like that, not my style. I cannot do that. It needs to have a process. but in a way checking boxes is a part of it. I think these are maybe boring things but very important if you want to try it. If you want to find the right candidate, it's mundane things like brushing your teeth and making your bed. It sucks; we all do that, and I don't think it's good enough of an excuse to thwart that process in the name of it being boring or too rigid. I also, when I'm looking for a very specific candidate It's boring sometimes to interview the same folks over and over again and ask similar questions and have the same conversation sets of assessments, but you sit down and you do it, and you do it right, and then you get the best candidates that way. Maybe it's a very orthodox way of thinking about it, but that's the way I approach that at times; otherwise, I feel like everything is in the air and you are making wrong decisions because, again, maybe my question was structured unfairly or just plain wrong, and I missed that good candidate who was It's just unlucky, and we want to avoid that as much as possible.
TIM: Yeah, so embrace the boredom; embrace the sort of atomic habits of just doing each of the small steps really well. And approach it in at least a basic scientific way: "I'm going to try to measure some things, set up some objective criteria, and then select the candidate who does best." That is going to end up in a better result than the complete chaos of just winging it throughout the whole process and picking a random winner.
VADIM: That's my way of thinking about it. Maybe there are others who would prefer something else, but I think you review the process first: if you have ten steps, toss out the steps that don't make any sense or that historically haven't made any difference in the number of candidates who got weeded out of the process. And once you have a good process in place that everyone has agreed on, just follow it to a T. Again, maybe it would be different if we were in a creative field, but in the technical field, that's how it works.
TIM: Vadim It's been a fascinating and wide-ranging conversation today. Thank you so much for joining us on the Objective Hiring Show.
VADIM: Thank you so much. Thanks for having me. That was a lot of fun. Thank you, Tim. Talk to you soon.