In this episode of the Alooba Objective Hiring Show, Tim interviews Jon Morra, Chief AI Officer at Zefr, and explores the increasing role of AI in the hiring process, including its applications in job description generation, resume screening, and candidate matching. Jon discusses the current challenges faced by hiring managers, such as the influx of applicants due to tech layoffs and the biases introduced by AI models. He emphasizes the importance of networking for candidates to stand out and highlights the need for authenticity verification in resumes. Additionally, he shares insights on the hiring sequence for building data teams, the skill-set trade-offs between junior and senior hires, and the impact of AI on technical skills. Finally, Jon offers advice to new managers and reflects on potential improvements in the hiring process, such as earlier compensation discussions and the importance of thorough reference checks.
TIM: Jon, welcome to the Objective Hiring Show. Thanks so much for joining us.
JON: Thanks for having me.
TIM: And maybe we can drill down into the different stages of the hiring process later, but I'd love to get your overall thoughts to begin with.
JON: Sure. Large language models in hiring are going to become a necessity. I don't think they're quite a necessity now, but they're going to become one. They're going to become one for job description generation, for resume generation, and for resume checking, and ultimately for matching. I know back in my time at eHarmony we worked briefly on a jobs matching product. So I have a little bit of experience directly in the industry, but today it's just so different, and you have, especially with the massive tech layoffs in the States, this influx of quality candidates, or not quality candidates, right? In general, AI is really useful in creating a prioritized set of things to do; in this case, resumes to review. So if you're a hiring manager, I think it'd be almost foolish to not use AI to at the very least surface resumes to review and invest more time in. Now, the one thing I'll say is this does introduce bias, right? Every model has its own bias, especially if you didn't train it and you don't even know what the bias is. So you should think about different ways, if you're a hiring manager, to at least observe the bias in the sourcing and the prioritization and then figure out what to do about it.
TIM: And so you mentioned the confluence of factors: this AI technology taking off, and layoffs, and so just a greater available supply of employees. I feel like this is a perfect storm, because not only are candidates applying using large language models, which allows them to apply to more roles; it sounds like they're also adjusting their CVs using a large language model; and then there are just more candidates in the candidate pool. So, from what I hear, a lot of companies are just being inundated with applicants, as you say, of varying levels of quality. Is that how you see the market?
JON: Yeah, definitely. When we post jobs we get way more applicants than we used to, and just figuring out who to interview is hard. And look, I don't envy being on the candidate side right now, because in a lot of ways you could be a great candidate, and if your resume doesn't match algorithmically with what the hiring manager is doing, you might opaquely be lost. And I actually don't even know what to do about that, because with the influx of resumes, that is just a necessity; that's just an artifact of what's happening when you're doing this many-to-one process, right? Many resumes to one actual hire. I was thinking, in preparation for this, about what you could do, and I think the idea of authenticity is something that may be introduced into the hiring process at some time. It's something that I think would be really cool. I don't see it actually happening right now.
TIM: When you say authenticity, can you unpack that a little?
JON: So yeah, authenticity would be like: okay, here are the companies you worked at, here are the schools you attended, here are the certifications you have. Each of those is possible to independently verify, and I think it would be really great if your application could actually contain that information, so that I'm not just getting an interview because I said I went to school XYZ on my document, over somebody who didn't say it but actually went, when I didn't go. I would think that there's a way to do some kind of authenticity checks, stuff like watermarking or whatever, to ensure that candidates who actually have relevant skills are in fact at the front of the line, not candidates who just say they have relevant skills.
TIM: Yeah, I feel like that's such a structural problem with hiring: in those early screening stages, you basically, at the moment, have to take the candidate's word for it. You're looking at this CV and going, You said you have advanced X and advanced Y and experience in these places; I have to just take that as a given and then interview you. Unfortunately, any hiring manager knows that is catastrophically inaccurate, because candidates exaggerate, misclaim, or maybe just don't know themselves that well, not to mention all the bias angles on the other side, from the person doing the screening. God knows how many bits of noise there are in a CV that could lead us to make the wrong decision in selecting that candidate. It's such a mess. I wonder whether the accuracy of a CV, meaning how well it represents the actual person, is going to be at an all-time low if candidates are outsourcing the writing or optimization of it to a large language model. Have you noticed a deterioration in how accurately CVs map to the person?
JON: Oh, definitely. We've noticed this in some of the roles that I hire for. It's not uncommon for me to interview people and, in the first five minutes, be like, Why are we even speaking? You are not who you say you are on the resume, or if you are, I don't even know what happened. And this is what I'm talking about: how much could the industry programmatically verify ahead of time? Because, look, it's not a good experience for the candidate if they're just going to get on and be totally outmatched by the interviewer; that's not good. So I don't know how to fix it; those are my ideas, but I do see it as a problem in the industry.
TIM: and I'm interested in when you've noticed this increasing discrepancy. Is it in that they've inflated their skill level in a certain area? Is it completely that they've made stuff up and they're really not the same person? Like, where do you see the main gaps?
JON: It's usually inflation. I have not personally experienced it, although I've heard stories of it, where somebody says, I know how to write Python, and then you say, Okay, let's write a Python function, and they literally don't know how to write anything. It's usually more of, Hey, I have X, Y, and Z years of experience writing Python, and then I, as somebody who actually does have that experience, say, Okay, that's great; let's do this, and I would think that somebody with two years should know how to do this. You claim you have six, yet you don't know how to do it. So there's a mismatch between what I thought you would be able to do and what you're clearly capable of doing. And the problem is, I feel like this is a self-fulfilling prophecy for one of the most stressful parts of technical hiring, maybe the most stressful part, which is the technical interview. Anyone that's gone through a technical interview knows how stressful it is to sit down with somebody, okay, we're going to code for the next 45 minutes, because that's not how you work, but yet you have to do it. And I don't know how to get around it, because too often I can't just take the word of the CV. And from the candidate's side, it's, I'm applying to way more jobs than I normally would because the competition is rife, but I'd better make sure my CV is one of the ones that looks best, because it's going to be so hard for me to stand out.
TIM: So it's almost like this natural motivation to inflate, or to use the right words that are going to get them past that first screening layer, however that's done. But then it's not the flex they think it is, because they're going to come undone pretty quickly, either in a technical interview with someone like you, or, if they applied to a really immature company where there's, let's say, no data team at all and they just fluff their way through, they're going to be screwed on the job. They can't do what they said they could do, so I don't think it's a long-term strategy, even though I understand the desperation in the short run.
JON: Oh, I get it, and you still go back to the classic techniques that work, like networking. It still works. Find somebody at a company, or figure out a way in; that is the best way to get to the top of the list. And when I'm a hiring manager, if somebody gets to me and says, Hey, I'm Joe Schmo; here are my qualifications; I'd really like to be interviewed, I almost always say yes, because I like to reward the initiative. I can't speak for other hiring managers, but I know that goes a long way for me.
TIM: That's really interesting, and I feel like it's worth unpacking a bit of the nuance here just so candidates don't get the wrong idea. I'm assuming if they just spam you with generic emails or LinkedIn messages without really respecting your time, that's not going to resonate, so it has to be relevant candidates in the right location who fit the criteria of what you're looking for, because that's solving your problem, not giving you a problem.
JON: Yeah, exactly right. It's usually if you have connections on LinkedIn and you can get somebody else to make you an introduction, just any kind of meaningful connection. And it doesn't guarantee anything; there are plenty of LinkedIn requests I get that I just don't respond to, and I'm sure that's true of every hiring manager. But my point is that if you find the job you want, you're going to go through your network and do the extra work: who do I know that knows somebody who works at the company? That goes a long way to getting past that low-pass filter of, Hey, I have a CV that's great, but there are a thousand other great CVs as well, and you just might not be at the top of the list.
TIM: Yeah, I feel like we might then see a switch in how hiring's done, if we had some metric of the proportion of people hired through inbound channels versus a direct behind-the-scenes channel. I feel like that's going to switch, because that inbound channel is now so noisy, with 70 candidates applying with dubious-looking CVs that look the same as each other, CVs that have been written with AI, which is posing this huge problem for companies. So if I were a candidate right now, yeah, I would just leverage my networks as hard as I could to try to weasel my way in somehow, basically.
JON: Yep.
TIM: And so it sounds like that's what you'd recommend to candidates in this current market, if they can?
JON: Yeah, that's exactly what I'd recommend. Don't get me wrong; even if you get to me and I listen and I'm ready, the first thing I'm going to do is tell you to apply, but then what I'll probably do is tell my recruiter, Hey, make sure this resume, after they're in the ATS, gets back to me, and then we can have an official interview.
TIM: So they still end up going through the pipeline to keep everything nice and organized and clean. They're not going through some complete back channel; it's still well organized.
JON: That's right. That's right. Yeah.
TIM: I feel like this is probably an especially important lesson for more junior candidates, maybe those who started their careers or went to university during COVID, who have almost a deficit of connections and networks, I would imagine, compared to, let's say, my cohort. As we've moved into this more digital world, it becomes easier to lose some of those face-to-face things: more working from home, all those kinds of things. It sounds like you would generally recommend building those networks as well, because they can become valuable at some point.
JON: Oh yes, I have. I literally got this job because I weaseled my way into a network. I got invited to speak at something called the LA CTO Forum. I was at eHarmony at the time, and I spoke on topics like machine learning ops. They had this thing where they went around the room, and everyone raised their hand and said, What's your name? What's your title? What company do you work at? It's, I'm this person; I'm the CTO of this company. I'm this person; I'm the VP of Eng at Disney. And I'm thinking, every one of these people is someone I should know. So I presented myself to the group administrator, whose name is Tony, and I was like, Tony, I want to be in the group. He goes, You're not a CTO. I'm like, Tony, I'm a data scientist; this is the hot new thing; you want data scientists in your group. And he rejected me a couple of times before he finally admitted me. Then, lo and behold, the CTO of Zefr at the time, one of the guys I'd go on to work with, was part of the group, and he said, Oh, I really liked you. So that was my own story of how I got this job.
TIM: And yeah, it's not surprising then that you appreciate a bit of hustle from candidates trying to get to you, because that's how you've managed to get an advantage yourself as well. That's really interesting, and an underappreciated skill for sure, especially in data and tech, where maybe the average person is a bit more on the introverted side of the spectrum, maybe not as comfortable with putting out the ask or, I don't know, pinging a cold message. I feel like if data candidates had done a year as a sales development rep or something like that, and they knew, Okay, my job is to call people, email them, send LinkedIn messages, come up with the right narrative, understand who to reach out to, how, why, and the right way, that would be such a valuable skill set for the rest of their career, I would have thought.
JON: And you know the irony of bringing it full circle: AI is great at that. You don't know what to say to a person on LinkedIn; great, you type in, Here's my profile, here's their profile, here's what I want to do: make a message. Haha.
TIM: They don't even need to know all the skills themselves, and there are tools these days that would give you a free month of a subscription with, say, a thousand contacts' information, like emails and stuff. You can go and get that, so it actually isn't that hard. And I don't know about you, but it's not like I get inundated with carefully crafted, poignant, personalized messages from candidates asking for a specific job. It's not a busy channel at all at the moment, so I feel like there's a huge opportunity for candidates.
JON: Definitely not, yeah.
TIM: I'd love to get your thoughts now about what you look for when you're hiring, particularly around the kind of trade-off between technical skills and soft skills and the trade-off between current skills and almost future potential and how you think about those dimensions when hiring into your team.
JON: It depends on level. The more junior you are, the more important hard skills are; the more senior you are, the more important soft skills are. And for potential it's the same: the more junior you are, the more potential matters; the more senior you are, the more experience matters. When I think about hiring these people, I always, first of all, prefer to hire juniors, because I feel like hiring juniors does a couple of things: one, you generally get hungrier candidates, and two, it is what it is; they're less expensive, and it's less costly to the business if I, as a hiring manager, make a mistake. If I hire a frontline dev and it doesn't work out after a couple of months, I let him or her go; that's not nearly the chaos of, Oh, I hired a VP; it takes them six months to really understand the lay of the land and all these other things, and then it doesn't work out after nine months. That's really disruptive to the company. So I would always prefer to promote from within when I can. That doesn't mean I don't hire senior; I do, but it's usually because I look at my current team and I say, the skill set doesn't exist here, and I don't see someone developing it here. And I have seen this as well, and I think this is good advice for any employee: if you're not making yourself redundant all the time, you're going to get stuck. Right now I have two people on my team who are so good at their jobs, and I so don't see a way that anyone else on the team can do them, that I'm almost scared to throw them into another job.
TIM: And it sounds like then when you hire people, you already have a sense of where they could go. If you're hiring reasonably junior, you could have that conversation with them in those interviews around how they could transition and how they could move in your team.
JON: Yeah, so the way that we do it at Zefr is we have an IC track and a manager track, right? Individual contributor or manager, and that's not uncommon. The lower rungs are all IC, and then you get to a certain level and you can stay IC and become more senior, or you can move to manager. Depending on the level I'm hiring for, it's either appropriate or not to go into that level of detail with the candidate, but the way I talk about future plans is less about the candidate in the role and more about the company. The way that I like to look at it is: I'm in the C-suite; my job is to set the vision and then make sure that everyone understands the vision and is driving towards it. That is literally what I think of as my job. So when I'm talking to a candidate, I like to say, Here is the vision of where we are going as a company. I'm hiring you, regardless of the level, to fill this part today. As the company moves forward, we are going to need these other things, right? We just have to have them. We don't need them today; we need them tomorrow. The implication is that the candidate could maybe do them, right? But I don't feel like my job, unless it's my direct report, is to map out this overarching progression from the candidate's level; it's the candidate's job, and eventually the employee's job, to figure that out. My job is to build the space where the skill is needed.
TIM: And so it's up to them to move into it or not and put their hand up for it or not.
JON: When I promote, I really don't promote people who haven't asked. I don't want to say never, because that's probably not fair, but it's rare that I promote somebody who hasn't asked for a promotion.
TIM: Again, a theme there of showing that hustle and that determination to go and get it. And do you find, so I'm based in Australia and I speak to people from all over the world, and I'm not sure of your experience across different cultures, but do you find that maybe Americans are a little bit better at that on average, a little bit more comfortable in practically going for it? I feel like the culture in America is more aligned with giving it a crack than maybe somewhere like Australia or the United Kingdom.
JON: It's tougher for me to comment on Australia and the UK, but I have a number of non-native, non-American-born people on the team, and I will say that is something that does occasionally happen. But a good manager, and I'd like to think of myself as a decent manager, clearly communicates the expectation. If I'm like, Hey, in order to get a promotion, I need you to show you're ready and ask for it, then whether or not it culturally feels awkward, I'm the manager, and I'm telling you exactly what to do, and if you choose not to do it, totally fine, but okay.
TIM: It's ultimately their responsibility, as you say. I'm wondering if you have any thoughts about this with the rise of large language models: Have you noticed a change in your trade-off between needing someone with a certain level of technical or soft skills or that potential versus the current skill set? Does the advent of these LLMs change your trade-offs or your thought process there at all?
JON: No, unequivocally no. Look, I've been studying machine learning since 2007. I've seen lots of progressions. This current incarnation I think is so big more because of the accessibility of the models than anything else. The history goes back a long time, but in 2017 there was "Attention Is All You Need" and the transformer models, and everything since then has been growing from that; it's just that in the last couple of years it's become incredibly accessible, with something like ChatGPT, where you can just type in a question and get an answer. So I don't believe that, with this growing accessibility, anything that was important for the job has stopped being important. Now, the skills, the actual tools you use, definitely change, no question, but you need to have curiosity; you need to have ingenuity; you need to be able to clearly communicate; you need to be able to follow through on your commitments. All those things were true before and are still true now. The difference is you might be more effective, or you might be left behind if you're not using the right tools. You still have to use the tools, but what I look for is the same.
TIM: What about the idea of maybe a junior candidate being more malleable, almost like an AI native, where their first inclination is to use ChatGPT to solve a problem, which at some point would be the norm for many things, whereas a more senior candidate has habits formed over 10 years of a career and is maybe slightly less automatically inclined to reach for ChatGPT? I'm thinking about myself, even in some of the problems I've been solving: I would start to do it the way I've always done it, and then, hang on, no, surely I could do this probably 99 times faster by starting with a large language model.
JON: So I think that's true, but I don't think that's incongruent with what I said. I'll go with the first attribute that I love more than anything else, which is just curiosity, right? You're curious; you're saying, Hey, okay, I used to do it this way, but now I have these other tools. You wouldn't have that thought if you weren't innately curious, right? So that's where, when you ask what skills I look for, they're the same, but the manifestation of those skills is different.
TIM: And what about the soft-versus-technical trade-off? Because I've heard some people say the barrier to entry to coding, for example, seems much lower, because you can get a first pass of a SQL query or whatever with a large language model. You still have to validate and curate it, and you still have to know your data set and all those kinds of things, but maybe it's now a bit easier. Would you almost take someone who's a bit more on the softer side of the spectrum, because now they've got this tool that could do a bit of the technical stuff on their behalf, or not really?
JON: I understand the question, but when I think about it: I was a developer for well over a decade, and I've been a data scientist for almost two decades at this point, okay? And one of the hardest parts of my job has always been verification that I did something right. I wrote some code; how do I know it works? I trained some model; how do I know it works? I wrote some SQL queries; how do I know I didn't miss some rows in doing my analysis, or whatever it is? Generation is hard, but proving that what you generated is correct is almost always hard too, and LLMs are just not really set up to do that second part, to prove correctness, right? So generating becomes a lot easier, but proving correctness is still hard. When I think about hiring somebody who has years of experience as a programmer versus somebody who's like, Hey, I took a boot camp and I'm really good at prompting and this kind of stuff: if you can prove that the solution you've generated is correct, I don't care how you got to the right answer. Well, I care, but I don't care. So I think you still need to be able to do that. Now, can you do that if you're softer on skills? I think it's a little harder, because at the end of the day, if you don't understand why the code works the way it works when it was generated by an LLM, then if I ask you why you believe it's correct, you're going to say, Because ChatGPT said so, and I'm going to say, That's not a good reason; I reject your reason; try again. I think that skill is harder to pick up, but it's not impossible to learn.
TIM: Is there also an angle here where, as I know particularly software engineers would say, when they're creating a feature or solving a problem, it's in the act of actually writing the code that sometimes the solution comes to the fore? It's not like they're just following a Jira ticket to the letter and creating code out of it; there's some iterative process they go through in the writing of the code, so it's almost like the writing is thinking, in a way. What happens now if we're using a large language model and prompting it? Are we losing something there in terms of the thought process, or is it just a different type of thinking we might end up doing?
JON: I don't think we're necessarily losing anything, if you come back to that verification piece, right? If you don't do the verification piece, yes, I think you're losing something, but if you do, then I don't think you are. It's funny: I have a fifth-grade daughter, and we were sitting down last night doing long division, and we had to do five problems. On each one of the problems she kept saying, I'm done, and I'm saying, Did you multiply the numbers back to prove that you get the original? And she's saying, No, I don't need to do that, and I'm like, Let's just do it anyway. And sure enough, it was wrong one time, and because it was wrong, we figured out why. She got all frustrated, but she went back through and said, Oh my gosh, I forgot to bring down one of the numbers when doing my subtraction. And I would assume that this is a much better teaching aid, because she discovered the error on her own. She figured it out, and now she's probably less inclined to make that mistake in the future.
TIM: So we still need that verification at the end of the feedback loop to know that what we're doing is right. Does that then mean that we should be generally using large language models in areas that we understand to make ourselves more efficient as opposed to extending ourselves to areas that we have no clue about where we're not really able to independently verify the output?
JON: I think using it where you already understand is an amazing use. I generate code all the time; I generate SQL queries a lot with these models, but this is my area of expertise, right? Maybe I don't take the time to understand every line, but I can look at it and quickly say, Yeah, this makes sense. I don't use it when I'm doing net-new research on something that I don't understand, other than as a way to generate other sources to read, and that's, again, coming back to this idea of trust: if I'm not the expert in this, why should I trust this model to be an expert in it? I'd rather have verified sources. But it is really good at idea generation, right? So I use it for idea generation and idea refinement frequently. I'd just be really wary about using it to learn something new. For simple facts it's fine, but for more complex stuff, I'd want verified sources.
TIM: One interesting use case I've seen myself recently is in delivering feedback in a nonthreatening way. I pumped a bunch of transcripts from these podcasts into it and got it to tell me what I was doing right, what I was doing wrong, stack-rank them, and tell me how to fix it, and it was amazing. I thought it was reasonably accurate, and it felt personalized, but what impressed me was that I didn't feel threatened; it wasn't someone else telling me, Here are all the things you're doing wrong, and nobody had to deliver that feedback either, so it wasn't awkward for them. So I wonder whether this will end up being one of the interesting examples of it softening the blow: communicating with an AI might not be as intimidating as communicating with a person.
JON: My gosh, I think this is a great use case. In fact, I had somebody who was writing a product requirements document (PRD), and when you use these models to generate the PRDs, I think they come out really flat and lackluster and all these kinds of things. But when you write the PRD yourself and then ask the model what the holes in it are? Oh my, yeah, even if only half of them are right, that's still an amazing use of time. And I think your point about it being impersonal is really good. I have asked it to be really harsh on my own writing: I'll write something and I'll say, Tell me all the reasons this is a piece of crap; really go hard, because I'd rather get it out now than get it from somebody else.
TIM: Yeah, yeah, yeah, I feel the same way, and I'm going to keep using the models for that kind of use case. I'd love to switch gears a little bit now and hear more about your work on the advisory side of things in helping other companies build out their data analytics functions, so I'd love you to just explain that in broad strokes initially in the kind of work that you're doing.
JON: Yeah, so in addition to my time at Zefr, I advise a private equity firm, and in that work I sometimes set up data teams. I also advise a venture capital fund, and we talk about everything from first hires to rebuilding data teams, all that kind of stuff. In all of this work, especially when you're building from scratch, I have a very specific order of titles I like to hire, and that would be data engineer, then machine learning engineer, and then data scientist, and only maybe a data scientist, right? I find that when companies go out and hire a full-time data scientist, and I'm thinking of a researcher-type person, as their first hire, it usually doesn't work out well, and I think that's even more true now that somewhat good AI models are available for free.
TIM: And it doesn't work out well because, so you mentioned your order of operations is data engineer first: you want the pipeline and the warehouse available first, for anyone to be able to do anything with them, rather than getting a researcher to have to hack together their own pipelines. Is that part of it?
JON: That's exactly right. So, for instance, and this is not as true anymore, but when I was doing consulting a couple of years ago, the idea of immutability was not a given. Immutability means the data doesn't change. So if you're developing, I don't know, a website for shopping, right, an e-commerce website, you might say, I'm going to keep only the current status of the order. So if I go and buy something and then I return it or exchange it, and you overwrote that first transaction in your database, your future data scientists, when they're hired, are going to be really upset: Where's all my data? I can't tell what the order history is. But maybe for the website function itself you don't need that piece of data, right? The customer just needs to know the current status, so you throw the history away because, oh, I don't need it, and then your data scientists lose their minds when they come in. This is a great example, and I've actually seen it before. If you have a data engineer, they can say, Hey, I understand what my data scientist counterparts are going to need in the future; I can build pipelines through all this data that are immutable and have nice clean data types. Without that, you're not set up for success.
TIM: And again, without wanting to beat the drum too hard on AI, has your view on how that first team would look, or that order of operations, changed with the accessibility of large language models and things like ChatGPT?
JON: Not at all, because the main reason you're using ChatGPT is one of two things. One is as a straight conversational bot, in which case anyone can set that up; it's an API, and anyone can call an API. But you're most likely using it to augment data you already have, or feed in data you already have, or do something with your own data in order to arrive at some outcome. So you again have to have good data governance, good data hygiene, lineage, and all that kind of stuff you'd expect a data engineer to develop before you can do anything.
TIM: Right, yeah, you need the fundamentals there; otherwise, it's going to be messy. I'm just having sudden flashbacks to a business I worked at 10 years ago. To be fair to them, it was an Excel-hell environment where I spent my entire one-year tenure, which was one year too long, trying to get access to a mythological warehouse that I don't think actually existed. So it was a lot of VBA code and macros. Yeah, I feel like I've got business trauma from that.
JON: I've looked at bad data more times than I've looked at good data, so...
TIM: Now, in your layout of these three roles in that sequence, I noticed you didn't say you'll hire a VP of analytics first. Are you getting a leader first and then they're hiring out the team, or are you going literally for the individual contributor data engineer, and if so, who would they be reporting to? How would you hire them without a data leader in place, do you think?
JON: So it's very specific to the company. If you're a two- or three-person startup, you're probably not hiring a leader, because your founder is often quite technical. If you're a much larger company, then maybe. But the way I like to think about it is this: the vision for why you exist obviously emanates from the CEO or somebody like that, and then there's some leader responsible for prioritizing the work to be done. I'll assume you have those people in place. I would rather go IC first, because a good IC is hands-on keyboard. Maybe you spend a little more, whether that's cash or equity, but they shouldn't need that translation layer, that VP layer. In fact, we talked before about the idea of promotion; this is a great way to hire somebody in and say, Hey, you're going to build the first version, and if it goes well, then we could hire people underneath you. That's a very reasonable thing to talk about, and again, you don't need to frame it through the candidate's lens. You can say, This is the data analytics department we have; now we're going to grow to here, and the implication follows. But if you hire leaders first who don't have hands-on keyboard experience, that's usually a recipe for trouble. Now, what I will say, and this is somewhat tooting my own horn, is that doesn't mean you don't use consultants, right? I've definitely worked in a consultative capacity where it's like, Hey, you're going to pay for some amount of time for me to come in and derive a data architecture and all these different things that somebody else executes on. I've also played a consultative role where something already exists, and they say, Okay, imagine you're our first data scientist. Don't actually build the model, but how would you frame up the problem you're going to solve given the data that we have?
The idea is that if I can't do it, then they need to make changes before they hire a data scientist.
TIM: One angle you touched on there was for that first, let's say, individual contributor data hire for a startup for typical kinds of software stuff. Where is that person fitting in the order of operations? Let's say you've got a two- or three-person founding team; maybe their first hire is going to be an engineer, normally a software engineer. When is the first data hire? When should they be coming in, do you think? Is that person number one, two, three, or ten? When would you commit to that normally?
JON: That's such a hard question to answer, because I've founded a company before, and when you found a company, you do every job from data engineer to CEO to cleaning the bathrooms, right? There's no job you're above or below. The way I would think about it is: given the skill sets you have in the company, at what point do you look around and say, My data lineage, my data hygiene, and most importantly my data availability are problems; they've been lower in prioritization than these other problems, but now I could solve them, and I have the money to hire somebody? That's when you hire.
TIM: How about this? With all this advisory work that you've done, all this consulting, do you find there's any bit of advice that you give to companies in terms of hiring data people that you don't follow yourself? Is there any kind of disconnect you've noticed?
JON: We talked about verification, and I view verification as the hardest part of most problems; that's certainly true of data problems. An area I know I can improve on is this idea of going fast to market with unknown or lesser-known-quality solutions and letting market feedback guide you, as opposed to sitting in the lab and spending a lot of time really understanding the quality of something you've built. And I'm talking about probabilistic models here. That's hard, because I would give the advice to get it to market and look bad if you need to look bad, but sometimes I definitely spend too much time in the lab thinking, Oh, I can't let it out yet. I can't let it out. Perfect is the enemy of done, that type of stuff.
TIM: Yeah, and is that because it's your work, and there's some sense of ego or pride involved? If this gets decimated, that's an attack on me personally in a sense, something like that?
JON: This is a very common thing I've talked to other builders about, professional model builders. We can all sit around the bar, have a drink, and recognize that no model is perfect. But the real problem is that unless you're in a field where a good prediction directly generates money, and those fields are very few and far between (think finance: if I'm trying to predict the markets and I make good predictions, I'm literally making more money), you don't get a sense of reward when you do a good job. I can tell you I've never worked in a field like that, and I know a lot of friends who never have either. You only get a sense of dread when you do a bad job, right? It's like, how could the model mispredict this? The business doesn't want to hear that you made 10 million predictions that day and this was the only mistake they found. How could you get this wrong? And then you look at it and think, Oh yeah, this is obvious; the model got it wrong; shit happens. But I have just been scarred by that so much. So yes, it's a reflection, not on me personally, but on my work product, right?
TIM: And yeah no, no model is perfect. I remember now the model I built in the online travel industry to predict whether or not someone was going to cancel a hotel booking. That model was working pretty well. I left before COVID started. I imagine it didn't work so well throughout a pandemic where there were closed borders. I don't know what decimation occurred, but I certainly didn't factor that one in.
JON: probably not
TIM: Yeah, that's their problem, not mine. I'd love to get your thoughts on one particular area: let's say you've got a senior individual contributor, maybe a senior data analyst or senior data scientist, someone who is stepping up into a managerial role for the first time. You must have seen this throughout your career; you must have promoted lots of people in that scenario. Are there common mistakes they make, or common challenges they have bridging that gap between contributor and manager?
JON: Definitely, and I've seen it, and I've felt it the first time I was promoted to manager. I think the first and most important thing is to give yourself some grace, right? Managing people is a separate skill set from doing IC work, and everyone who's a manager for the first time should just take a breath and say, This is hard, and this is new. Giving yourself grace allows you to fail and not internalize it, all that kind of stuff. The other thing I would say is that overcommunication is your friend, especially if you're now managing people who used to be your peers; overcommunication is so important. I have always had weekly one-on-ones with my direct reports. Maybe it's biweekly; maybe it's, gosh, I'm going to have a five-minute check-in every day for the first month. Whatever it is, you need to overcommunicate. And the hardest thing, and I have yet to see a person get promoted who doesn't succumb to this, myself included, is how to let go of some of the knowledge you had and let your team's success be your success. When you're an IC and you write code, or whatever it is you do, you know exactly how every single line works; if it breaks, you probably know exactly why, and you go in and fix it yourself. As a manager, you don't know that. Every first-time manager tries to go read all their ICs' GitHub and know everything, and it never works. They also very commonly still want to generate code themselves: Oh, my contribution is this code, and these other people I manage have their own independent contributions, which are great, instead of the idea that every single one of their lines of code is a shared success. Every first-time manager goes through this.
TIM: Yeah, I've certainly experienced this myself as a founder, which I feel is especially tricky because, like you mentioned before, when you found a company, you do everything from the glamorous to the unglamorous, and mainly the unglamorous. We don't have a bathroom to clean because we're a fully remote company, but if we did, I'd be in there on my hands and knees.
JON: Fair, and a lot of founders, I would have thought probably all founders, pride themselves on output and getting shit done and being a conscientious kind of person; otherwise, why would you start a company? It would be miserable if you weren't that kind of person.
TIM: And so I feel like that transition is especially challenging, because it almost feels like part of your identity to be doing stuff and getting shit done. Realizing that it's the company that has to get shit done, and that you're a part of that even if you literally aren't the one doing each individual thing, I feel is a very hard thing to overcome in your head.
JON: It's not uncommon for really good managers to know when to do nothing, and sometimes I'm good at it, and sometimes I'm less good at it. Say a problem is identified, I have a team, it's their top priority, they know how to solve it, and they have all the tools. All I can do as a manager is make it slower: Hey, give me a status update. Hey, did you think of this idea? What about this? If I have the right team focused on the right problem, all of that should be unnecessary. The only thing I should be doing is staying away and buying them pizza, right? Which is really uncomfortable for somebody who used to develop code.
TIM: Okay, so there's this reframing that needs to happen in people's heads, and I imagine for some people it's a dramatic change; if you get promoted, it's a sudden change as well, where really half of your job should be changing overnight, or something like that. I wonder if it's sometimes trickier when you're in that manager role where you're still slightly hands-on and then gradually move out, as opposed to, say, being a senior analyst at one company, changing to another, and arriving as a VP, where there's a clean break: you haven't been involved as an IC in the same company; you haven't been in the weeds. I wonder if that would be a little bit of an easier transition in some ways.
JON: Maybe, but I also think it would be incredibly hard, because all your intuitions at the new company would be wrong. I don't know if I have ever hired a first-time manager; I've never hired somebody into a manager role who hasn't managed somewhere else, right? It's just super risky. But I will say, one of the things you do have to start doing as a manager from day one, especially when you manage technical folks, comes back to the verification piece I mentioned before: at the end of the day, you're not writing the code, but you are responsible for the output. So I feel the best thing you can do as a manager, especially if you used to be technical, is to ask the tough questions when the solution is presented to you, because that's when the team needs to hear, Did you consider this? Did you consider that? If you don't do that, you're being derelict as a manager, but figuring out that balance is hard.
TIM: So you're doing that kind of quality control; you're the head chef just before the food goes out, making sure it's as good as it's meant to be. Does that mean, though, that you are still part of the process, like a code review step in the development pipeline? Or is that too intricately involved, almost a bottleneck, do you think?
JON: I think it varies depending on what you need to do. Right now at Zefr we have an imminent problem. I'm not going to get into the details, but let's just say it's a very glaring customer problem, right? If I were to go out into the bullpen right now, and I'm literally pointing to the bullpen, and say, Hey guys, have you solved this problem? (we have a code word for it), I can guarantee you three people would be able to tell me exactly where they are in the problem, exactly what's being done, and exactly what's not being done. So I'm not going to do that, because I have confidence that this team is working on it, right? At some point they're going to say it's solved. And then what I'm going to do is open up my SQL terminal and verify that it's solved myself. If I can verify that it is, I don't say anything; if I cannot, I'm going to reconvene that group and say, Hey guys, when I did this and this, I don't see the results I expected. What's going on? That's the point at which I feel like I'm doing my job as a manager, because what I can't have is the solution go out, us broadcast to people that it's fixed, and then somebody else come in and say, What do you mean it's fixed? I personally, as the manager, look like an idiot then, and I won't let that happen.
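Jon's verify-it-yourself step could look something like the following sketch. Everything here is a hypothetical stand-in (the schema, the imagined bug of rows left with a NULL classification, and the query itself); it just illustrates running an independent check rather than taking "it's fixed" at face value.

```python
import sqlite3

# Hypothetical data after the team reports the fix: no video should be
# left with a NULL classification. (Illustrative schema, not Zefr's.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE videos (id INTEGER, classification TEXT)")
conn.executemany(
    "INSERT INTO videos VALUES (?, ?)",
    [(1, "safe"), (2, "unsafe"), (3, "safe")],
)

# The manager's independent verification query: count rows that would
# still exhibit the reported bug.
remaining = conn.execute(
    "SELECT COUNT(*) FROM videos WHERE classification IS NULL"
).fetchone()[0]

if remaining == 0:
    print("verified: fix holds")              # say nothing to the team
else:
    print(f"{remaining} rows still broken")   # reconvene the group
```

The point is the asymmetry Jon describes: a passing check requires no action, while a failing one triggers the follow-up conversation before anything is broadcast to customers.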
TIM: Yeah, that's the kind of founder-level detail thinking that really helps. We track this in our system, and at various points in our company's history I was leading the leaderboards on bugs discovered and those kinds of things, because you just get a sixth sense for tiny details that seem innocuous but can actually be pretty catastrophic.
JON: Yeah, yep, yep.
TIM: One final question I have for you: imagine you had the proverbial magic wand; you could call it AI, or it could be something else. If you could just fix hiring or change hiring in some way with a click of your fingers, what would that be?
JON: So I have three thoughts on this. The first is something we haven't even talked about yet, but at least in my experience it's important: have compensation discussions earlier, because compensation mismatches are just a waste of everyone's time. Now, some laws in the States, and it varies by state, are requiring more disclosure, but even those disclosures can be incredibly wide. I've seen job postings that say, Oh, we'll pay you between a hundred thousand and a million dollars for this job. Great! So having those ballpark discussions upfront, earlier, and not in a taboo way; we're not trying to negotiate; we're asking, Are we, company and candidate, in the same range? I think that's really important to do first. The next one, and this is a little contradictory to what I said before, but not once I explain it: I wish technical assessments were less important, right? And again, you asked me to wave a magic wand. I wish we could look at the CV, look at the relevant experience, and say, Hey, this person has spent years solving problems like the one I have now; I don't need to independently verify that they have this experience, because it's obvious from their resume and some light conversations. But unfortunately I do have to independently verify it, which sucks. It's a crappy candidate experience, and I don't like doing it as a manager, because I'm going to set up a technical problem, I have a way to solve it, you solve it differently, maybe I don't think that's as good a solution, and all of a sudden I pass on you because we coded it differently.
That's what makes sense. And the last one I'd say is I wish the industry spent a little more time on background and reference checks, because I've always had good experiences when I've gotten independent or third-party references and referrals, with somebody I trust vouching for the candidate. So I wish that played a larger role than it does now.
TIM: It's interesting. I wonder if the problem with the second area you mentioned, the verification of skills, is that the data available at those screening stages is unvalidated or not validatable. I wonder if the unlock will be some kind of new data set coming out of, I don't know, the task management tools a candidate uses every day in their job, GitHub, these different profiles; some way to aggregate all of that, at scale, that automatically pre-does this verification in a way that's unbeatable or ungameable.
JON: Maybe, but show me the company that's going to expose their private GitHub data to the world.
TIM: Jon, it's been a fascinating discussion today, with a wide-ranging set of topics. Thank you so much for joining us on the Objective Hiring Show.
JON: Thanks for having me.