Alooba Objective Hiring

By Alooba

Episode 89
Ben Yi on The Intersection of AI, Career Switching, and Data Science

Published on 2/4/2025
Host
Tim Freestone
Guest
Ben Yi

In this episode of the Alooba Objective Hiring podcast, Tim interviews Ben Yi, Staff Data Scientist at Intuit and pricing expert.

In this episode of Alooba’s Objective Hiring Show, Tim engages in a wide-ranging conversation with Ben, a staff data scientist at Intuit. Ben shares his rich experiences, from starting in management consulting at Boston Consulting Group to transitioning into data science roles across various industries including telco, fintech, and healthcare analytics. The discussion delves into the nuances of career switching into data roles, highlighting the strengths and challenges such candidates face. The conversation also explores the evolving role of AI in coding and hiring processes, the potential for biases, and the importance of continuous learning and adaptability in the rapidly changing tech landscape. Ben provides insights into specialized fields like pricing analytics and emphasizes the multidisciplinary nature of data roles. The episode concludes with an intriguing question for future guests: Should chief data officers become CEOs?

Transcript

TIM: We are live on the Objective Hiring Show with Ben. Ben, thank you so much for joining us today.

BEN YI: Thank you. Pleasure is all mine.

TIM: It is absolutely our pleasure as well. And Ben, it would be great if you could kick things off by telling the audience just a little bit about yourself, just so they can learn a little bit about you and contextualize everything that we talk about today.

BEN YI: Absolutely. I'm currently a staff data scientist at Intuit, which is the company behind Mailchimp, QuickBooks, and TurboTax. You might've heard of some of these products. Prior to that, I actually started my career as a management consultant at the Boston Consulting Group, and I specialize in pricing. After that, I've had a series of jobs ranging from large corporates like Telstra all the way to small nascent startups, and in a number of industries: telco, fintech, healthcare analytics. But the one thing that is a constant is working with data, working with machine learning, and now with AI. And that's where my true passion lies.

TIM: That's a great intro. And we'll certainly look to tap into that wide breadth of experience you've got in our conversation today. One area I'd love to start with is something I don't think we've spoken much about yet on the Objective Hiring podcast, and that is career switches: people coming from outside the data field into the data field. And I've certainly noticed this trend, especially among leadership positions; maybe because data expanded so quickly, maybe there wasn't enough talent available to lead functions. And so I saw a bit of a trend of people coming, maybe from marketing or some other field, in to manage data teams. But I'd love to get your thoughts on this kind of career switch into data. Maybe also some perspective around what strengths and weaknesses these candidates might typically have for data roles.

BEN YI: Yes, of course. And first of all, I love career switchers, being one myself. I've had a, you know, very nontraditional switch from management consulting into a data role, and a very, very technical one these days. So career switchers, what they tell me is: they have a true passion for working with data. And career switching is not easy. Coming from my own experience, I can tell you that you're always being judged against other candidates who have had a longer history in this profession. The people who do it, most people I've come across, do it because they really love doing it: working in data and working with statistics and mathematics. And that's why I do it. And that is why these candidates are attractive to me as well, because I know that they have the passion and the drive to want to work in this space, and they're going to really get things done.

TIM: So they almost know your background and know you'd be receptive to someone also making that kind of career switch. And so they view you in a mentorship capacity in some sense, do you think?

BEN YI: I don't think that's the case when candidates come to me. I mean, there are usually a number of filtering stages before that happens, so I doubt that they are attracted to apply to the role because of that. But maybe they do end up taking up the offer because of that. And of course, as always, top candidates have many options. Certainly I've had situations where candidates have been made multiple offers, and I don't necessarily have the highest dollars to be able to entice them. But the culture of the company, the management, and the team then become elements of how I can try to sweeten the case without necessarily having extra money or a budget to throw at it.

TIM: As you mentioned, one of the strengths of these candidates is that it maybe indicates a true passion. If they're willing to kind of drop everything and go into data, then they must really want to do it. Is there anything else that has stuck out about these candidates? Thinking back to the ones that you might've hired: any other learnings they've taken from their old domain that were applicable, or some kind of different mindset they might have had compared to people who'd grown up in the domain?

BEN YI: Yes, indeed. Career switchers will have another role or function under their belt, and therefore they can often see from the user's perspective, the users of the information, the users of the data, and therefore they can better understand what is the problem that we're trying to solve, and therefore what is going to be truly value-added. We don't do data analytics or machine learning for the sake of doing it. There are just not enough hours in the day to be able to fulfill a fantasy world of, like, building every model and trying and testing things out. And therefore, how do we get to the value of the outcome as quickly as possible? If you have a good understanding of the context of the problem, that helps set you apart.

TIM: I wonder if we're in this world where some of the super technical elements of some of these roles, like the coding itself, are being, let's say, automated away using large language models, and there's probably a point in the next six or 12 months where a Claude instance or ChatGPT might be able to write code with 100 percent accuracy. Maybe; it's debatable, but we'll see. And maybe the coding component, as an example, might now be taken care of by you prompting an LLM to write it. Is there something then to be said for some of these data roles becoming relatively less technical, therefore almost lending themselves to a career switcher a bit more, someone who maybe has the domain expertise but not as much technical skill? Could you see these roles becoming slightly less technical in that respect?

BEN YI: I think it's going to become both more and less technical at the same time. It will become less technical to the point that you're talking about: maybe familiarity with the syntax and language becomes less important. But what will be more important is asking the right questions. And I already talked about asking the right questions from the context of the business application, but there's also asking the right questions from the technical aspect. Machine learning and AI are now an extremely complex area, and of course you can just chuck things in, and there's AutoML that will do things for you, and it will get you a really, really good result. But sometimes you've got to strip back and say: I'm trying to achieve something that's different. The results that you're giving me are not going to be good enough. What else can I do to improve the performance of the model? And that requires an understanding of statistics, mathematics, computational efficiency, and the algorithms, and you need to have that knowledge in order to actually make the models perform at a different level. We're talking about 10x better, not a little bit, not 1 or 2 percent improvements. So in order to make those kinds of leaps, you actually have to become more technical. And hence why I said it's becoming both more and less technical at the same time. That's how I see the world heading.

TIM: Hmm, it's an interesting perspective. I've been thinking about this almost yin-and-yang battle a little bit recently and thinking, yeah, are these roles going to become really people-oriented, you know, less technical, more stakeholder engagement, or not? But then the other side of the spectrum is almost also trying to think about it in terms of: if I were, I don't know, 18 or 20 now, what would I be doing? Would I be trying to smash the people skills or smash the technical skills? And part of me thinks, well, actually, the combination of really strong machine learning, stats, and AI knowledge, plus the programming ability to be able to automate things, is probably now a lethal combination that has almost an explosive value that's very, very scalable, in a way that maybe the people stuff isn't as much. What do you think of that kind of spectrum?

BEN YI: Well, until machines overtake all of our jobs, I think there are always going to be elements of stakeholder management and people management in every profession. And of course the degree to which you have to do that differs depending on the role itself and its level of seniority, but I don't see that going away anytime soon. And the other thing I want to emphasize is: yes, LLMs are getting really good at writing code and syntax. I still find that having a sufficient level of understanding and being able to review the code is a very important and very valuable skill, at least until machines can write perfect code, which might happen very soon. It might happen this year; we don't know. But at least for me, I often get that aha moment when I look at code and go, okay, well, this is why it's not giving me the right result. And the ability to debug, actually, that's another really important one. Can you identify issues? Can you smell the errors in the data? Because even though the code could run and execute successfully, it doesn't mean that the numbers are right. And it's not always the case that the code had an error; the error could have come from the underlying data, and there is no way for the LLM to know that. That's where all the business context and knowledge and everything in your experience becomes very important: to know something is not quite right here, I need to go deeper. It could be the code. It could be the data. It could be any one of these things. And what do I need to do in order to address that?

TIM: That's a great point you make, that kind of data savviness. If you've grown up doing analytics and you've just been there in the raw, row-level data, checking these things and knowing, you know, having the sense of where things are likely to go wrong, which bit of the stack might have broken, as you say, then you develop this sixth sense for testing things. You know: I'm pretty sure we forgot to filter out nulls, or I reckon there could be some negative values in there, or it looks like there's an outlier and I know where this is probably coming from, or whatever that kind of toolkit is that you get. What do you think then about the really junior wave of candidates, almost AI-native candidates, for whom, in their world, writing SQL is going to ChatGPT and saying, hey, write this, or whose idea of analytics is saying, here's a data set, tell me some trends, where they've kind of missed the grunt work of actually, almost, feeling the data at the row level? I feel like in the intermediate period, these AI systems are kind of good but not amazing. Is this something that's lost among those junior candidates, or any candidate who's coming to it without that foundational skill? What do you think?

BEN YI: Well, that's not so different from the invention of calculators taking away our ability to do arithmetic, really. Has it made an impact on society as a whole? It definitely has, but it's not necessarily a bad thing. I personally still value the ability to write SQL because, you know, I'm frequently writing SQL queries that are pages and pages long, and they are really complex, because you have to pull data from multiple sources, and you have to massage them and then combine them and then massage them again and spit out answers in a different way. And I learned how to do that by struggling through it, like you said. So I now have the skills and the process to be able to dissect a query, break it into really tiny pieces, and know what to look out for at each step before I assemble the whole thing together. And a really key thing is knowing what to test and what to look for at each step. I do wonder what will happen to career data analysts who are starting their careers right now, because you're right: they might not need to write SQL the same way, struggle through it the way that we used to, but they might still encounter the same data issues and the same errors in the outcomes, and therefore still have to go through this debugging thing. And one other element that isn't there yet is that the LLMs don't know about the data structure, and therefore you still have to feed them information: this is what my schema looks like, these are the tables, and these are the columns within the tables. It still doesn't know anything about the quality of the data that lies within the warehouses. It can't tell you. I actually came out of a work meeting today where we were looking at some results of an experiment, and they somehow didn't make sense. It took someone very experienced to tell us: well, actually, that table might be missing certain data from a period of time, because there was an error on the front end and some logs were not recorded. The LLMs are not going to know about any of these things.
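
The step-by-step decomposition Ben describes can be sketched in miniature. Everything here is invented for illustration, the toy tables, the negative-amount check, the CTE, but it shows the pattern: validate each piece in isolation before assembling the full query.

```python
import sqlite3

# Hypothetical toy warehouse; tables and values are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10, 50.0), (2, 10, -5.0), (3, 11, 20.0);
    CREATE TABLE customers (customer_id INTEGER, region TEXT);
    INSERT INTO customers VALUES (10, 'AU'), (11, 'US');
""")

# Step 1: test each piece in isolation -- do any rows look wrong?
negatives = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount < 0").fetchone()[0]
print(f"orders with negative amounts: {negatives}")  # flag before joining

# Step 2: only once each piece is trusted, assemble the full query with CTEs.
query = """
    WITH clean_orders AS (
        SELECT * FROM orders WHERE amount >= 0
    )
    SELECT c.region, SUM(o.amount) AS total
    FROM clean_orders o
    JOIN customers c USING (customer_id)
    GROUP BY c.region
"""
for region, total in conn.execute(query):
    print(region, total)
```

The same habit scales to the pages-long queries Ben mentions: each CTE gets its own sanity check (row counts, nulls, negatives) before the pieces are combined.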

TIM: Yeah, I'm really interested to see how things develop, and hopefully we get this kind of new wave of AI-driven BI tools that are, I guess, sitting in the warehouse as a user and maybe have all this extra context about where the data has come from. And maybe that's going to need, like, a lot of extra documentation to be written about the data, because I imagine a lot of companies wouldn't even have this stuff documented: information about the different bits of the stack, as you say, like, here's the data collected on the front end, here's how it's collected, here's the back end, this and that. But I feel like we're still a way off from that kind of technology coming about.

BEN YI: Way off. I mean, yes, you can fine-tune your LLM to generate SQL that is relevant to your organization, because you can feed it historical SQL, so it knows about your data structures and the tables, et cetera. But institutional knowledge like that is not necessarily embedded in the SQL itself. It will also not know about any of the new tables or new schemas that are being created. So yes, you can feed it context and, through techniques like RAG, say: here are the new tables. But it's not going to come out of the LLM by default. I'm pretty sure there are some really smart people out there that are solving these exact problems, so let's wait a few months and see.
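
A minimal sketch of the idea Ben describes: because the model knows nothing about your warehouse, you retrieve the relevant schema snippets and prepend them to the prompt. The table names, the keyword-overlap "retrieval", and the data-quality note are all invented for illustration; a real RAG setup would use embeddings and a vector store rather than string matching.

```python
# Hypothetical schema documentation; in practice this would come from a
# catalog or docs store, and retrieval would be embedding-based.
SCHEMA_DOCS = {
    "fact_orders": "fact_orders(order_id, customer_id, amount, created_at)",
    "dim_customer": "dim_customer(customer_id, region, signup_date)",
    "event_logs": "event_logs(event_id, page, ts) -- gap in 2024-03 (frontend bug)",
}

def retrieve_context(question: str) -> list[str]:
    """Naive retrieval: keep docs whose table name shares a word with the question."""
    words = set(question.lower().replace("_", " ").split())
    return [doc for name, doc in SCHEMA_DOCS.items()
            if words & set(name.split("_"))]

def build_prompt(question: str) -> str:
    """Prepend only the retrieved schema snippets to the SQL-writing prompt."""
    context = "\n".join(retrieve_context(question))
    return (f"You write SQL for our warehouse.\n"
            f"Relevant schema:\n{context}\n\n"
            f"Question: {question}\nSQL:")

prompt = build_prompt("total orders amount by customer region")
print(prompt)
```

Note that the institutional knowledge Ben mentions (the logging gap, for instance) only reaches the model if someone has written it down next to the schema; retrieval can surface it, but it cannot invent it.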

TIM: Fingers crossed. Well, one thing I know from software engineers in particular is, you know, sometimes when they're trying to solve a problem, it's not that they have a perfect blueprint or this 100 percent articulated spec that they just have to go and mechanically create. It's in the writing of the code that the solution almost emerges. It's like the writing is the thinking. Is there another thought there around maybe us losing some of that? If we're no longer writing code, we're going to write a prompt to write code. Are we going to lose some ability to think? Or is it just, well, not really, because now we have to think really carefully about how we articulate it so that the LLM can write the code the way we want it to?

BEN YI: I think that is one possibility: that you have to articulate your design criteria better in order for them to generate the right outcome. But it also allows us to iterate faster. Being a data enthusiast, the first thing I do whenever I get a new data set is I play with it. I do all sorts of exploratory analysis, and I go: what about this? What about that? What if I cut it this way? What if I slice it that way? Can I do a scatterplot? Can I do a bar chart? Can I do a violin plot? If I had the time, I would actually spend hours and hours just purely looking at the data and thinking about it in different ways, so I can get a sense of it. And I think the LLMs just make it faster to be able to do all these things.
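
Ben's "cut it this way, cut it that way" loop, sketched with synthetic data and only the standard library; in practice this would be pandas plus plotting, with an LLM drafting each cut faster. All names and numbers are invented.

```python
import random
import statistics as stats
from collections import defaultdict

# Synthetic data set standing in for whatever just landed on your desk.
random.seed(0)
rows = [{"region": random.choice(["AU", "US", "UK"]),
         "plan": random.choice(["free", "pro"]),
         "revenue": random.uniform(0, 100)} for _ in range(500)]

def summarise(by: str) -> dict[str, float]:
    """Mean revenue per group -- one 'cut' of the data."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[by]].append(r["revenue"])
    return {k: round(stats.mean(v), 1) for k, v in groups.items()}

# Iterate through the cuts quickly, looking for anything that stands out.
for dimension in ("region", "plan"):
    print(dimension, summarise(dimension))
```

The value of the loop is speed of iteration, not any single cut; an LLM that drafts each `summarise`-style slice on request shortens every lap.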

TIM: Just a more efficient way for you to even explore things in those initial stages.

BEN YI: Precisely.

TIM: One thing you mentioned earlier was the analogy between the development of AI and the development of the personal calculator, and so, you know, we were discussing: does that make mental arithmetic almost redundant? I can remember going for grad roles 14 years ago in consulting and investment banking, and in banking in particular they loved to throw mental math problems at you, like, what's 47 divided by 19 to the nearest three decimal places, or some bullshit. And I felt some level of dread in having to do these, I can tell you, and I had to teach myself how to do three-by-three multiplications in my head while having a complete mental breakdown. That was 14 years ago, when already, you know, I felt like that was a fairly redundant skill. Is there anything that we're currently asking in interviews now that's going to look as ridiculous in, I don't know, 10 years' time? Something that's going to be completely pointless and completely redundant, a skill that we no longer need because AI just wiped the floor with us?

BEN YI: That's a good question, because I went through very similar interviews when I got my job in consulting. And I won't say how many years ago, because that just makes me look really, really old. The other funny thing I should say is: the more advanced you are in mathematics, the worse you get at arithmetic, because at some point numbers get replaced by letters, and the English letters get replaced by Greek letters. And at that point, you really don't have any idea how to do any mental arithmetic anymore. And therefore, going back to those interviews, people are going to ask you, you know, what is 49 divided by 38? And it's like, I have to think about it. How do you write it in the Greek alphabet? And then maybe I'll answer that question. So yes, it was a very interesting experience. Good question: what skills will become redundant 10, 15 years from now with the presence of LLMs? I think writing SQL will probably be one of the first to go, because it is the easiest to standardize; you can really get the code to run and execute. And, interestingly enough, I wonder if the ability to create PowerPoint presentations, at least simple ones, will go away, because LLMs will be better at structuring communication and therefore synthesizing information and spitting it out into pretty-looking charts. Maybe that skill becomes obsolete as well. So those are the potentials. Let me just ask ChatGPT to see if it agrees with me on those ones.

TIM: Well, yeah, ChatGPT often gives a very kind of politically correct answer; it likes to stroke our ego a bit. I feel like if it was being really honest, it would say: listen, there's going to be no more human-to-human communication. AI is going to take over; my great-great-great-grandson is going to be your ruler. So don't worry about it. But we will see. I wanted to shift gears a little bit and think now about AI in the hiring context, because I personally have become very much an AI optimist. I feel like the way hiring has been done traditionally has a lot of issues: often very tedious, very manual, incredibly biased. I think AI could potentially solve a lot of these issues. Have you started to explore using AI in hiring at all? Have you seen candidates use AI on their side?

BEN YI: The most obvious one is the CV, or the resume. I have seen technologies that promise to craft and/or tailor your CV to each job that you apply to, or if not the CV, maybe a section of the CV or the cover letter itself. So that's an obvious one. Of course, the danger is that it hallucinates, and it's no longer representing you as the candidate. I think that's very dangerous. And, believe it or not, that is not the most tedious part of the hiring process. The tedious, time-consuming part, really, is the interview itself. Now, I know that AI can do interviews. I know there are companies out there that offer AI interview services. But as a candidate and also as a hiring manager, that is one step I haven't gone to, because I'm a human, and I am going to be working with other humans primarily. AI is only going to be a tool to me. I am not hiring an AI; I'm hiring other humans. So the biases that you talk about are part of being human. I have to know, beyond the technical skills of the candidate, that we can have good rapport, that we can communicate, and that we can understand each other; that I can assign tasks and deadlines and know that they will be adhered to; and that if there are issues, people will come to me and raise them, and we will solve them together. I don't know if AI is able to get to all those nuances just yet. Maybe it is. But that's a two-way thing; the bias could be me. Maybe I have a deficiency as a manager centered on one of these skills, and I'm therefore relying on the candidate to complement me. So I would actually say those biases, unfortunately or fortunately, exist in the world, but that is part of who we are. I will give an example, and this could be a very controversial one. Say I have a candidate who has a very strong accent, to the point that I have difficulty understanding them. They could have the best technical skills in the world, but I won't be able to hire them, because I can't communicate with them.

TIM: I feel like, yeah, now we're getting to some of the meat of the topic here, so I think this is a really good place to drill down. I feel like in that particular example, yes, you, or anyone else, will be biased against the candidate who has that accent. But if the criteria for the role include strong communication skills, and it's a fundamental bit of their job, you're being biased in the same sense as when you hire someone who already has SQL and Python skills; you're kind of biased against people who don't know SQL, you could say. But I feel like that's completely fair game, because it's part of the criteria for the role. Where I feel like AI could help a lot is with bias on things that are irrelevant to the actual role. So I'll give you an example of an experiment done in Sydney, actually by the University of Sydney, I think in 2023 from memory. The researchers got thousands of different CVs and bucketed them into three groups. The three groups of CVs were basically identical except for one thing: the names. The first bucket of CVs had an Anglo first and last name, the second bucket had a Chinese first name and an Anglo last name, and the third bucket had a Chinese first and last name. And they applied to thousands of different roles around Sydney and Melbourne: different industries, different domains, different seniorities. I think they did a pretty good job at controlling for many different factors. And then they measured the rate at which these buckets of CVs got a callback, a literal callback or an email back, whatever. And from memory, roughly, the rates were: first bucket 12%, third bucket 4%. So in other words, if you applied to a job in Australia with an Anglo first and last name, you had three times the number of callbacks as if you applied with a Chinese first and last name. Particularly relevant for yourself, actually, because you would fall into bucket two out of those three. So you would have an 8 percent chance of getting a callback versus a 12 percent chance if I applied for a role, which, given they controlled for so many other factors, looks like a pretty good natural experiment to me. I feel like that would be a good example of where AI could improve that kind of scenario. What do you think?

BEN YI: So yes, you're right. I remember that study very well. I was much younger then, and it had a really big impact on my career as well, because I have definitely been in those situations. But isn't AI a little bit of a sledgehammer for that part of the problem? All you need to do is not have the names shown to the recruiter, and you automatically remove that problem. AI is a good tool to solve many problems, but it doesn't need to be the tool for everything.
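
The blinding Ben suggests is, at its simplest, a text-processing problem. A deliberately simple sketch, with an invented name and CV; real redaction would need to handle many more identity signals (schools, dates, addresses, photos) than this, which is Tim's point in the reply that follows.

```python
import re

def redact(cv_text: str, name: str) -> str:
    """Strip the candidate's name and any email address from a CV.

    Minimal illustration only: matches the literal name (case-insensitive)
    and a simple email pattern; everything else passes through unchanged.
    """
    out = re.sub(re.escape(name), "[REDACTED]", cv_text, flags=re.IGNORECASE)
    out = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", out)
    return out

# Invented candidate and CV text for illustration.
cv = "Ana Souza\nana.souza@example.com\nProduct analyst, 5 yrs experience."
blinded = redact(cv, "Ana Souza")
print(blinded)
```
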

TIM: Well, yeah, I see what you're saying. Although even the process to remove a name successfully from a CV, I think, could be done quite well with ChatGPT now; I don't think it was a solved problem a couple of years ago. Because, yeah, you've got the name, but then you've got other things; that's just one example. There's, like: how old are they? You can kind of infer that from when they finished high school, if they've got that on there. In places like the United States, there's a saying that goes something like: tell me your postcode, and I'll tell you how rich you are. So the exact neighborhood or postcode of a city is very indicative of socioeconomic status, maybe in a way that isn't quite the same in Australia. And other things like that. So there are these things that I feel are clearly irrelevant, but then there are other things that are maybe in the grey area, borderline of what may or may not be relevant. For example, what about hobbies? For me, hobbies really don't come into it. Like, why do I need to know that someone likes to play piano if I'm hiring them for a software engineering role, or that they like to take their dog for a walk on the beach at Clovelly? Why does that matter? But then other people say it gives you a flavor of the candidate and a sense of their personality, and, you know, if they do all these other things, it indicates, like, a growth mindset. What do you think about hobbies and those things that fit in the grey area?

BEN YI: Oh, a very interesting one, because I did put playing piano on my CV as a hobby. I can't remember whether it's still there or not. But it's an interesting one. I don't know. When I review CVs, I tend to not place too much weight on the hobby section, to the point that it's really not that relevant to the hiring decision. I will look at the years of experience, though; that is something that we'll look into, and where they were educated and what roles they've had in the past, and in which industries. Those, to me, are important. So whether that's considered a bias or whether it's considered part of the evaluation criteria, I think that is up to the individual.

TIM: I remember trying to hire a product analyst a few years ago in Sydney when I was working at Hotels Combined, and reading hundreds of CVs; you're kind of looking for one to jump out at you. And I remember this one guy who was Brazilian, who straight away, like, I just love Brazilians. I don't know why; maybe it's the football thing, but whatever it is, I straight away elevated him in my head above others. Then I went to his hobby section, and it said he was a semi-professional footballer in Brazil. And we were in, like, the lunchtime indoor soccer comp at Sydney Uni every Tuesday, and we just kept losing the grand final by, like, a goal. Like, we were so close to having a winning team. And I saw this guy's CV, and my eyes lit up. I'm like: perfect, amazing, he's going to get us across the line. Which is pretty unfair, because what has that got to do with his product analytics and, you know, A/B testing and experimentation skills? Absolutely nothing. But that guy, at least in part, got an interview based on his football skills. Not that I asked him to do juggling in the office or anything, but he certainly got a foot in the door not really purely based on merit. In the end, we didn't hire him, so I feel like a slightly more meritocratic approach emerged, and we offered the job to what I think was the best candidate. But he certainly got in in a way that a candidate who wasn't Brazilian and didn't play football didn't get an opportunity. I feel like, you know, AI or some kind of tool could remove that kind of nonsense. What do you reckon? You could argue that by winning that championship, he'd boost the morale of the company. Oh yeah, maybe. Yeah.

BEN YI: Something along those lines, but I take your point. I think there are biases that are more dangerous. For example, people tend to have an affinity for hiring people with similar backgrounds, and that's a very unconscious and potentially unfair bias. And AI could also introduce bias, and a case in point would be career switchers. I have to bring it back to where we started, because you train the AI on historical patterns. In fact, I would also say a lot of recruiters have this particular bias as well, because what they do is they look at a number of experiences at whatever seniority and then form a judgment of how good this person is. And I can tell you that I have been rejected for roles because they didn't think I would have enough skills. They look at, you know, years of experience in management consulting, and they go: this person can't possibly know how to write a SQL statement. They were very surprised when I scored very highly on SQL tests, not dissimilar to your company's product. And I also had the complete opposite happen: people looked at my CV and went, this person's too technical. Does he have the managerial experience and leadership qualities and business acumen to be able to execute these roles? Completely ignoring the fact that not only did I graduate from one of the best business schools in the world, but I also worked at a top management consulting firm as a strategic advisor. So I feel like those biases can be exacerbated in the world of AI, because people assume the biases have been removed. In fact, if we get a hundred resumes screened by an experienced recruiter or an experienced hiring manager, at least they might go: this is very unconventional; it could be a dark horse. I'm not sure the AI will be able to do that.

TIM: Yeah. I feel like it could if it's been prompted to, but

BEN YI: But is it?

TIM: Well, I feel like maybe this comes almost a step earlier. It's like: what are some of the fundamental issues of hiring? I feel like one of them is that we kind of just jump into it and start hiring without really having thought through exactly who we want and why, and what those criteria are, and getting that down on paper. And it's in the process of doing that that anyone could be prompted, a human or an AI. But it's just the fact that we often make this stuff up as we go along.

BEN YI: Yes, but also, that process you described is fairly rigid, and almost by design, people who are coming from an unconventional background have career experience that doesn't fit exactly like that. I mean, I can guarantee you, if you look up the job descriptions for, you know, 20 data analyst or analytics manager roles, they're going to look fairly similar: bachelor degrees or maybe advanced degrees in statistics or engineering, followed by five to seven years of quantitative something, depending on the level of seniority. And in a managerial role, you'd have to have managed people and certain-sized teams. They always look like that, right? Plus technical skills. But if you throw a career switcher in there, they don't know how to react to that. It's like: okay, so they don't have seven years of experience, but they have five years of experience as a user of the output, or maybe something completely different. The easiest thing for the recruiters to do is just screen that out, because they don't know how to deal with it; they have doubts in their heads, and that's justified. But sometimes you lose a gem because of that.

TIM: Yeah, I see what you're saying. So it's almost slightly myopic to just be fixated on what exactly the criteria are, because you might miss those outlier, out-of-the-box candidates, which a recruiter who's too narrow-minded could miss, or an AI. It's kind of hard to have it both ways, isn't it? Either you have clear criteria, this is what we want and we're not going to deviate, or you're saying, well, this is what we want, but actually, if you see anyone that stands out for any reason, anyone that looks amazing, then also let us know, and they should get an interview. In our product, we have a similar concept. We have this structured interview plan: ask them all these questions, here's the score of the interview. But then there's an X-factor section, which is for something that's, I don't know, I can't categorize it, but they did something special.

BEN YI: Indeed.

TIM: That kind of concept, maybe. I feel similarly. Like, if we were hiring a software engineer, but they'd just spent 18 months as the founder of a software product where they did a bit of everything. Yeah, that's not in the criteria for an engineering role, but I think, well, they probably learned a bunch of stuff. They probably have a bunch of other skills that we weren't even looking for that actually would be very valuable. And so that kind of thing would definitely be an X-factor bonus, I think.

BEN YI: Yeah. And another example I can give you is that right now I'm working with accounting software, and knowledge of accounting is not on the hiring criteria at all. But I happened to run the finance function of a small business before, and that is the exact target profile that my company's product is supposed to serve. So having knowledge of how it works, and of all the struggles a user experiences on a daily basis, actually helps me make the product better. But that's not even in the hiring criteria because, to your point, those kinds of combinations of skills would be too rare, so they just didn't bother to put it on there.

TIM: I'd love to shift gears again a little bit and think about a topic that I feel doesn't get enough airplay in the data science and analytics space. It's almost been its own separate domain over the past years, and that is pricing. You've got a lot of experience and expertise in pricing and pricing analytics. If you were hiring someone into a pricing analytics kind of role, what kind of character would you look for? What kind of skills do you think they need? Who would you go for?

BEN YI: That's a great question. I've been working in the pricing domain for a very, very long time. I started as one of the global topic experts within BCG. We served everything from computer hardware to computer software, and hardware as in hammers and nails; we worked with bread, frozen food, consumer electronics, and anything and everything you can imagine. I've had some sort of interaction with, or helped to serve, those industries. And Telstra. By the way, please don't blame me for the price of your mobile phone bill; blame Optus. After that point, I actually helped a number of startups, tech startups, in Australia and overseas, especially with their pricing problems: how can they achieve their growth objectives with a good, well-structured pricing plan? So that's the history of me working in this domain. But going back to your question, what makes pricing interesting is that it is so multidisciplinary. It's not just about the numbers. You need to understand the product, because so much of product and pricing is how you align the features and value of each product or tier against the price. You have to understand the marketing. Are you trying to be a premium provider? Are you trying to be a value provider? That has a bearing on your pricing. You have to understand the psychology of the buyer. It could be a consumer product; it could be a B2B product. B2B products will have more stringent criteria, and some of them might have a tender process, and you need to have an understanding of that. But ultimately, decisions are made by humans, so you need to understand the psychology of the decision-makers behind those tenders if you want to be very successful with your pricing strategy. And of course you do have to understand the numbers, because ultimately you want to make a profit. The joke I tell is that you can always get people to spend 10 by throwing 100 at them.
And I've definitely seen promotions that do exactly that. The sales team and the marketing team go, yeah, we achieved our target, good on us, we achieved something amazing. And then the finance function goes, holy crap, what have you done? You also have to understand the market dynamics and the elements of game theory: after this particular round of promotions or pricing plans, how are your competitors going to respond? Are you going to trigger a price war? So it is extremely multidisciplinary, and a very good pricing analyst needs not necessarily all of that knowledge, because you can't gain that overnight, but the inquisitiveness to want to learn about all these other things, and the ability to communicate and work with your peers. Because, as I already mentioned, you're working with finance, with marketing, with sales, and with product, and you have to develop common ground with all of those people. I think that's what makes pricing really fascinating and also challenging at the same time.

TIM: Yeah, you've laid out a great explanation there. And you've just made me want to get into pricing in my next business or my next role. It is interesting.

BEN YI: Hey, if you need pricing help for your business, just give me a call.

TIM: What about adaptability and willingness to learn for these kinds of candidates? I mention those two things just because they're coming up so often in my conversations at the moment. Given the rate of technological change, you need to have that mindset, but it sounds like within pricing as well, because it's so multifaceted, you almost have to really lean into learning new things.

BEN YI: You have to, in any technology-related field, because whether you're an engineer, a product manager, or a data analyst, the technology is moving so fast. The joke in the industry is that the report I compile in the morning is already out of date by the afternoon. So you have to have the ability to keep learning and keep developing. But the interesting thing is, we all did that earlier in our lifetimes, right? In our education systems, everybody in Australia, and at least in most of the developed world, has to go through a certain number of years of schooling, so we have all learned basic skills. But somewhere along the line, when we became adults, some of us lost that ability. The ones who retain it continue to reinvent themselves, develop new skills, and grow in their careers. But some people lose that ability and become stagnant, which is not always a bad thing. Maybe they're at a stage of their personal life and career where they're very happy doing exactly what they do and excelling at that function or that level, and they don't need to develop. But I do sometimes wonder why this has become an issue, because we all had that skill when we were young.

TIM: Yeah, I wonder if part of it is that you get to adulthood and you've got all this stuff: you've got the mortgage, the kids, the dog, the job, and you're just sort of trying to hang on for the ride. That must be part of it. I feel like technology is stealing a lot of our mind space as well. It's so easy just to be watching endless videos on YouTube or Netflix and cramming your brain full of crap rather than sitting there and going, oh, what am I going to learn? For me, one of my favorite books, which I can see over there on the bookshelf, is Atomic Habits. So I've been trying to think a lot about this myself recently, trying to improve and whatnot. I've really tried to embrace the suck recently. I bought a guitar. I've never played an instrument before; I didn't have your upbringing with the piano, so I've got no idea what I'm doing with the guitar. I can barely put my fingers in the right position. So I'm just trying to improve and do something that I'm really terrible at. I'm taking up tennis as well. And I feel like that's part of the key: just accept the fact that when you do something for the first time, you're going to be shit at it. Maybe you don't even notice that when you're a kid; maybe that's part of it, this freedom that you have as a child that you don't have as an adult.

BEN YI: Yeah, I think consistency is certainly the key to learning any skill. You've got to give it a period of time to get anywhere. The bigger issue that you bring up is that everybody has 10,000 different things distracting us, and you can absolutely argue they are more important than learning another programming language or doing another instructional video on AI or prompt engineering; you're absolutely right about those. I think the issue is that the forcing function is not there anymore. Going through school, you have no choice. You have to sit in the classroom, you've got exams that you're trying to pass, you've got parents pushing you and teachers looking over you; you have no choice but to reach certain levels. But as an adult, that forcing function is removed from your life, and you have to rely on yourself if you want to learn, and then overcome all these other priorities, which are very important in life, and decide this is important enough that I have to spend time doing it. I think that is the real challenge.

TIM: And everyone has 24 hours in a day. The world spins in the same way for everyone. So yeah, "I don't have time," I think, is such an easy excuse for anything. Yeah, yeah.

BEN YI: Unbeknownst to me, before the book ever came out, I actually tried a few of those tricks myself.

TIM: Yeah, it's a really good one. I couldn't recommend it highly enough to anyone listening.

BEN YI: What's your favorite trick out of the book?

TIM: Well, the ones that spring to mind are the ones about making things harder. At the moment, I'm more focused on the bad habits than the good habits, so I'm trying to get rid of some mindless technology use. It's about making it difficult, making it invisible. So my iPad and iPhone are now off by default rather than on by default. That's one change I've made, because I'm only going to turn them on when I specifically have something I want to do, as opposed to them always being on, you see what I mean? Locking them in the cupboard during the day, these kinds of things. So yeah, make it annoying, make it invisible, make it harder. I can't remember what the exact phrases were, but they're the ones that I'm focused on at the moment. How about you?

BEN YI: So, many, many years ago, I did this to get myself to the gym in the morning before work. Of course, getting up at 5 or 5:30 in the morning is really challenging. So I used to go to bed wearing my gym clothes.

TIM: Nice one.

BEN YI: The rationale being: the alarm would go off, and I'd tell myself, all you need to do is brush your teeth and get out of the door.

TIM: Yeah.

BEN YI: And once you've done that, you're halfway to the gym. So that was the initiation I needed to get myself out of the door. And it took about two or three weeks before that bedtime and that alarm time became natural to my rhythm and it wasn't painful anymore. But to get there, that was the mental trick I needed: just remove everything that could possibly stop you from getting out of the door. And that's a very important one as well. The trick is thinking all you need to do is get out of the door. Don't think about going to the gym. Don't think about the workout you're

TIM: Yeah.

BEN YI: going to do at the gym, right? You're already wearing gym clothes. So that was my favorite one.

TIM: Yeah, that's a good one. Ben, if you could ask our next guest one question, what question would that be?

BEN YI: Should chief data officers become CEOs, and why, or why not?

TIM: That's a great one, and not a question I have asked any of our guests yet. So I'm looking forward to levelling that at whoever our next guest is next week, and I'll let you know what their response is. Ben, it's been a great conversation today. Very wide-ranging; we've covered a lot of ground. It's been really interesting, and thank you so much for sharing your insights, and a little bit about yourself, with the audience. I really enjoyed it. Thank you so much for joining us.

BEN YI: Thank you for having me, Tim. It's been a pleasure to be here.