Software systems that learn are on the cutting edge of practical A.I. But what’s the market for these mysterious technologies, and how will they transform the business economy? James Cham, a partner in the investment firm Bloomberg Beta, explains the machine learning market and how these technologies will change our world.
Mr. James Cham is a Partner at Bloomberg Beta L.P. At the firm, he focuses on investments in data-centric and machine learning-related companies. Mr. Cham was previously a Principal at Trinity Ventures, which he joined in 2010, where he focused on investments in consumer services, specifically e-commerce, social media, and digital media. Before that, he was a Vice President at Bessemer Venture Partners, where he focused on advanced web technologies and was instrumental in eight new investments in the consumer internet services, security, and digital media sectors, along with a number of seed investments; he was also part of the data security investment team. Earlier, he was a Consultant at Boston Consulting Group, where he developed marketing strategies for entertainment and information technology companies. Before that, Mr. Cham built web applications for startups as a Principal at Zefer, where he led teams that designed award-winning web applications, and developed information technology systems for Fortune 500 companies as a Software Developer at Andersen Consulting.
Michael Krigsman: Welcome to Episode #220 of CxOTalk. I’m Michael Krigsman. I am an industry analyst and the host of CxOTalk, and right now, there is a tweet chat going on, using the hashtag #cxotalk. I want to thank Livestream for being a huge supporter of CxOTalk. Livestream is great! And we love Livestream. If you need video for live streaming, come to Livestream.
And, our guest today is James Cham, who is a partner with Bloomberg Beta. We’re going to be discussing artificial intelligence, machine learning, and related topics, particularly from an investor’s perspective. James Cham, how are you? Thank you for being here!
James Cham: I’m doing well! Thanks for having me! It is always a miracle that live streaming works as well as it does!
Michael Krigsman: You know, this is Episode #220, and …
James Cham: Wow!
Michael Krigsman: … I never expect that the whole thing is going to go through to the end, but in fact, it does!
James Cham: [Laughter] That’s great!
Michael Krigsman: So James, tell us about yourself, and about Bloomberg Beta. What are you guys doing over there?
James Cham: Sure! I’m a seed-stage investor at Bloomberg Beta. We’re a $75 million fund, where we’re investing in the future of work. And, I’ll tell you about what that means in a moment, but first, let me tell you what I was like in the first grade.
Michael Krigsman: [Laughter] Perfect!
James Cham: You know, I’ve been a venture investor for the last ten years, and before that, I was a software developer. And, I think that there’s a sense out there right now that, you know, for all of the hype around the future of the world, there’s still a lot of hard work to be done. And on our side as venture investors, our goal is just to provide a little bit of space behind the people who are going to create the future. And I think as a venture investor, we’re most excited about finding people who are going to build the future.
Our specific core thesis is around the future of work. And the insight is that with any new technology, whether we’re talking about railroads or we’re talking about electricity, it takes a few generations of managers to figure out exactly what to do with the technology. Because even when something starts to work, it’s not that useful until people figure out what to do with it. And so, we are now here, 20-something years into the ubiquitous network computer, and we’re at the point now where managers and organizations are figuring out how to use this technology in order to actually improve the economy. And so, here we are, and we’re feeling quite excited.
Michael Krigsman: So, you are very focused on machine learning and artificial intelligence. You and your partner came up with a market landscape of machine learning, but you mentioned “future of work.” Can you maybe link together for us this notion of future of work, AI, machine learning, and similar technologies? How do those intersect?
James Cham: Sure! You know, it comes out of … When we first pitched this idea of investing in the future of work three or four years ago, there were a couple of key claims that we made, and I’ll just walk through a couple of the other ones before I end up with machine learning.
The first is that we live in a world now where everyone is a knowledge worker; and, in the case where the line cook down the street has an Android phone that is more powerful than the first 10 PCs you bought, then he’s a knowledge worker. He’s able to capture information and manipulate it in different ways. And when that’s true, we should look at the best knowledge workers in the world, and copy their tools and techniques.
Now, it turns out that there’s a class of people out there who use knowledge in interesting and clever ways. They’re quite lazy. They’re total introverts. They don’t like talking to other people, and they work on systems rather than practical applications. Those, of course, are software developers. And as a software developer, I recognized, when I worked as a management consultant, that I could easily be three or four times more productive than an average colleague, in part because I understood what could be automated, and I understood a way of thinking about the world that lent itself to a certain type of automation.
And so, we invest a lot in companies that are using software development methodologies in different industries. And so, imagine bug tracking for the construction world. Yeah.
Michael Krigsman: How did you end up getting involved in machine learning, and what’s the link to machine learning?
James Cham: So that was one key way that things are changing. And the other way that things are changing is we saw four years ago that machine learning, and AI in general, which was so out of fashion for so many years ─ we saw glimmers that as both the technology was getting better, and as teams were figuring out how to use it in the biggest, most advanced tech companies, there’s a key sense that, “Oh, something is changing and maybe these things will be available to the rest of the world.” So, we started poking around and started trying to make maps.
You know, my colleague Ron would say that the nice thing about making a map is it doesn’t have to be perfect. When you start making the map, everyone starts complaining to you and you end up becoming the center of the conversation. And when that happens, then that’s the best way to learn.
Michael Krigsman: So, you created this map, and you have an overview of the machine learning industry. What did you learn from creating that overview? What were the key themes?
James Cham: You know, what was interesting is that we’ve now done this exercise three times. The first time we did it, the terms “AI” and “machine learning” were out of fashion, so people would hide the fact that they were doing something that you might want to call “AI.” And so, even though Facebook and Google and Amazon were making all sorts of interesting investments, many of the startups that were working on AI-related things were kind of quiet about it. And so, when we created the first map, it was interesting to see, because we had to do a fair amount of digging in order to identify interesting companies.
And then, two years ago, when we put the map together, you saw the beginning of the feedback loop. A bunch of clever entrepreneurs realized that there was something interesting around machine intelligence. And so, as a result of that, they said to themselves, and they said to us, “You know, oh yeah! We’re an AI company.” And sometimes they were, and sometimes they weren’t. But, it started to make sense as a buzzword for early-stage investors.
And then, what was most interesting came last year, and this was sort of a big surprise, and it happened fairly quickly. Last year, when we made the map, all the corporates ─ all the big companies, and all the folks who have huge budgets ─ were suddenly interested in machine learning. And so, we found ourselves in a place where, although folks weren’t sure exactly what it meant, and they weren’t exactly sure how they were going to use it, everyone kind of wanted something around AI. So, puzzling that through, I think, is the interesting exercise we’re in right now.
Michael Krigsman: You talk about “machine intelligence.” Why do you prefer that phrase?
James Cham: So, I forget if we talked about the history of artificial intelligence as a term, but it’s a 56- or 57-year-old term. And it came out of the desire of a set of computer science researchers not to use the term “cybernetics,” because no one knew what it meant. And so, I think on our side, artificial intelligence is one of those beautiful concepts, because you feel like you unlock so many possibilities. But, that’s also a terrible thing from the point of view of a technology or a startup, in part because inside AI, you find embedded a whole set of metaphysical questions and concepts about what it means to be human and not human. And so, those sorts of questions ─ although I enjoy talking about them at the bar or over dinner ─ are less useful to me as we make decisions about how to invest and as organizations think about investments. [This is] in part because: what is artificial, and what is not artificial? The mind ends up drifting off into something much closer to science fiction.
Machine intelligence, by contrast, captures the sense that we are just trying to ask, “What are ways that both machines and systems can be slightly more intelligent today, and work with us in clever ways?” And so, we found the term “machine intelligence” to be much more helpful; it focuses the mind.
Michael Krigsman: So, the point is to help people, basically non-technical people as well as developers, to stay focused on the direction of practical applications of AI?
James Cham: You know, practical ─ five to ten-year questions, but also less metaphysical ones. I think questions of … Even developers love talking about: “Does this mean the end of the world is coming?” Or you know, “The Singularity represents the apocalypse.” I mean, these are important, genuine discussions. But in some ways, they are less helpful for our purposes day to day.
Michael Krigsman: So, there’s a tendency for people to want to dive into the metaphysical aspects of AI, which is a, can we say, a distraction from focusing on the applications of AI and the innovations, and where it might go during the next five or ten years, as you said?
James Cham: Yeah! You know, we are seeing so much innovation and advancement on the technical side. What’s lagging is clear thought and understanding on the economic and managerial side. I think that the biggest risk for most of us right now around machine intelligence is less that the machines will take over and you will no longer have a job. The biggest risk is that we as managers will make really bad decisions about where to invest, and we’ll end up wasting billions of dollars on stupid projects that nobody ends up caring about. That, in some ways, is the immediate, interesting, obvious question ahead of us for the next five to ten years.
Michael Krigsman: Okay. So, you left me the perfect opening to drive through, which is: What is the framework or approach for managers to think about the economic aspects and the organizational and managerial implications of AI, machine learning, and similar kinds of technologies?
James Cham: Gosh! I’m glad you asked! I’m glad you asked. I actually think that this is still a poorly understood and badly researched part of the question. You know, for the last couple of years, I’ve been wandering around talking to various economists asking them: “Tell me what is the right microeconomic framework for thinking about how to invest in machine learning or around AI?” And I think in general, most economists, and most business school types are still more focused on the large-scale economic implications. But, those larger scale economic implications don’t really matter unless we make good decisions, right? Or, unless we make interesting decisions at a micro level. And so, in general, the way that the pattern would work is I would ask someone, and they would say, “Oh! But it’s really obvious.” And then, they’d be quiet for a long, long time as they thought through exactly what this may, or may not mean.
James Cham: And then, three guys out of the business school at the University of Toronto ended up coming up with what I think is the best framework for thinking about machine learning in general. For most organizations, the right way to think about machine learning is to think about the cost of prediction. If you abstract computation at a certain level, the history of computation is about reducing the cost of arithmetic. And, when you make it really cheap to add and subtract at a certain scale, then you end up with digital cameras and whatnot. If you think about AI, or machine intelligence, as reducing the cost of prediction, then you can apply the same mental framework that you do for normal economic analysis. And you say to yourself: “If the cost of prediction goes down, then what are the complements and substitutes to me? And what are the ways that I could change my organization at its core?” So, that’s the microeconomic way of thinking about it.
And then, there’s sort of the meat and potatoes. What does this actually mean? What should I be counting and measuring? I think that most IT organizations, on the software side, now understand how to manage applications, and understand how to manage data stores. We have huge inventories. We have great policies around how to build applications and how to roll them out, and we have good policies around how to manage data. Although, obviously, we’re now at a point where there’s an awful lot of data and we don’t know what to do with half of it, right? So, that’s well understood.
Now, there’s this thing in the middle, which doesn’t really have a good name, that some people are calling “algorithms,” and some people are calling “predictive frameworks.” That thing in the middle ─ a program that is generated by another program by pulling in a bunch of data ─ which I would call a “predictive model,” is going to be the core of most IT organizations, right?
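For developers, that idea of “a program generated by another program by pulling in a bunch of data” can be made concrete in a few lines. The sketch below is an illustrative toy, not anything Cham describes: an ordinary-least-squares fit in pure Python, where the training step emits the parameters that the prediction step runs on.

```python
# A "predictive model" in miniature: a program whose parameters are
# generated by another program from data, rather than hand-written.

def train(xs, ys):
    """The 'program that writes the program': fit slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """The generated 'program': nothing but the learned parameters."""
    slope, intercept = model
    return slope * x + intercept

xs = [1, 2, 3, 4, 5]                # hypothetical historical inputs
ys = [2.1, 3.9, 6.2, 8.0, 9.9]      # hypothetical observed outcomes
model = train(xs, ys)               # the data determines the behavior
prediction = predict(model, 6)
```

Swap in different data and the “rule” changes with no code edits, which is exactly why such artifacts need management practices distinct from hand-written applications.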
So, it’s fine to have a data-centric organization. But if you have all this data and you don’t know what to do with it, it’s kind of useless, right? And it’s good to have better workflows, but if the workflows just generally help you do the same thing over and over again, that’s not that useful.
If, on the other hand, you as an IT organization thought of yourself as model-centric, then you would think about all the processes you have inside the organization, and you would say, “Which of these are valuable enough that I would want to make predictions and decisions without people involved on a day-to-day basis?” And I think the exciting thing about those models is that we’re going to have a lot of them, you know? They’re going to pervade the entire enterprise, and that’s the exciting part.
The scary part is, we have no idea how to build and manage them, because these models are… no, not totally different. They are subtly different from applications. In the case of applications, it’s always amazing to me that applications work at all, because building software is difficult. But I at least have some idea how to QA and test an application, and how to deploy it in some consistent way ─ I have lots of bruises. And as a culture, we figured out how to do that.
Models, on the other hand, we don’t really understand. For some of these newer models, we don’t understand how to think about them, or how to introspect on them. We don’t really understand how to test them, because if the model were totally testable, even theoretically, you wouldn’t really need the model. And we don’t know how to deploy them in a consistent way, right? And to me, that’s the heart of it. There’s all the great, sexy stuff, but for most organizations, the things they will have to build and manage themselves will be these models ─ and understanding where in the organization to make investments, and how to think about them.
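One way to make the testing problem concrete: you can’t assert a model’s exact outputs the way you unit-test a function, but you can gate deployment on aggregate behavior over held-out data. This is a minimal illustrative sketch; the stand-in “model” and the 0.9 accuracy bar are assumptions, not anything from the conversation.

```python
# You can't unit-test a model with exact assertions -- if its outputs
# were fully specifiable, you wouldn't need a model. You can, however,
# gate deployment on aggregate behavior over held-out examples.

def toy_model(x):
    """Stand-in for a learned classifier: predicts 1 when x >= 5."""
    return 1 if x >= 5 else 0

# Held-out (input, label) pairs; the last label is deliberately noisy.
holdout = [(1, 0), (2, 0), (4, 0), (5, 1), (7, 1), (9, 1), (3, 1)]

def holdout_accuracy(model, examples):
    correct = sum(model(x) == y for x, y in examples)
    return correct / len(examples)

acc = holdout_accuracy(toy_model, holdout)   # 6 of 7 correct
deployable = acc >= 0.9                      # a statistical gate, not a proof
```

The gate is probabilistic by design: a model that fails it isn’t “buggy” in the traditional sense, it simply hasn’t earned enough trust on data it hasn’t seen.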
Michael Krigsman: I want to remind everybody that you are watching Episode #220 of CxOTalk. We’re speaking with James Cham, and if you are watching on Facebook, come on over to Twitter, and join the – well, everybody should do this – join the tweet chat that’s going on right now, using the hashtag #cxotalk.
James, you were saying that machine intelligence lowers the cost of prediction. The question for managers (and we’re talking about decision-making around using these technologies in a meaningful way in business) is: when you say it lowers the cost of prediction, will managers really understand the full scope of the implications? How do you translate that into practical decision-making ability, I guess is the way to put it?
James Cham: I think that, of course, they won’t. None of us really understands it. You know, when IBM started selling mainframes to Ford, and redoing their accounting system, no one really understood exactly what that meant. They knew that it was better, and they knew that it was the future. And, I think it is important for vendors and visionaries to talk about the bright, shiny future, but it’s dangerous for managers to get ahead of themselves. We are at a point where there’s still a lot of learning to be done. And, as such, you don’t want to do a big bang, right? You’re still at the point where you should just go through your list of processes, start writing them down, and think about the places where you’re building workflow, or where you have interesting data just sitting there.
So, you start to get there, and you start saying to yourself, “Okay, in these processes, there are some decisions that, right now, are just made automatically.” Right? Either we make them automatically, or some person has to deal with them. And, you should take those decisions, which have oftentimes been characterized as business rules and heuristics. Those business rules are useful, but they don’t really change, right? In the new world, where you can build a model, you could have something that’s much more flexible than traditional business rules. And, I think that’s the exciting thing, you know?
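To make the contrast concrete, here is a toy sketch of a fixed business rule next to a rule whose cutoff is re-fit from labeled history, so it shifts as new outcomes arrive. The loan-approval setting, the numbers, and the brute-force threshold fit are all hypothetical.

```python
# A hand-written business rule next to a model-derived one.

def approve_rule(income, debt):
    """Hand-written heuristic: frozen until someone edits the code."""
    return income > 50_000 and debt / income < 0.4

def fit_threshold(history):
    """Re-fit the debt-to-income cutoff that best separates outcomes."""
    best_cut, best_score = 0.0, -1
    for cut in (i / 100 for i in range(10, 90)):
        score = sum((dti < cut) == good for dti, good in history)
        if score > best_score:
            best_cut, best_score = cut, score
    return best_cut

# (debt-to-income ratio, loan turned out fine) -- made-up outcomes.
history = [(0.20, True), (0.25, True), (0.35, True), (0.50, False), (0.60, False)]
cut = fit_threshold(history)

def approve_model(income, debt):
    """Model-derived rule: shifts whenever history is re-fit."""
    return debt / income < cut
```

Feed in new outcomes and the cutoff moves on its own, which is both the flexibility and the management problem Cham is pointing at.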
Down the street from where you are… So, I’m here in sunny San Francisco, but down the street from Boston, in Cambridge, at MIT, in the mid-nineties, there was a lot of talk about “learning organizations.” I don’t know if you remember this.
Michael Krigsman: Sure, yes.
James Cham: And, the “learning organization” was always kind of a lie. Right? Because organizations don’t learn. People learn. People pull together some set of insights, right? And then, if they’re good, they remember those insights. If they’re better, they might even write down those insights. And if they’re best, they might change the organization in some way, either in the way it’s organized, or in specific business rules that are captured, so that they can remember those insights.
But, the organization doesn’t really learn anything. The organization is this fiction of these people working together. But, in a model-centric company in which we’re able to actually systematically capture both inputs and outputs, and sets of decisions, and we try to build models that help automate those decisions, then you might actually end up with a learning organization where you actually could codify and capture the value that people get out of these decisions.
And so, that’s the exciting future, but I will not claim that I’ve got a … I don’t know. I would have stayed a management consultant if I were to try to claim to you that I had a systematic way to get there.
Michael Krigsman: So, right now, organizations that are … What advice do you have for organizations that are looking at AI, and how do you assess the state of adoption? I’m assuming that given that you’re investing in companies, machine intelligence companies, you must have a real interest in the uptake and adoption of AI in the enterprise; and in the consumer world as well.
James Cham: Uh hmm. So, maybe let’s talk about the adoption question first. You know, there’s a joke that AI researchers will make about the history of robots. They’ll say, “We’re constantly talking about how robots haven’t hit the mainstream.” And then, they’ll say something like, “Well, the truth is that robots have totally hit the mainstream. It’s just that at the moment they become mainstream, we no longer call them robots.” A researcher friend of mine showed me an advertisement from the 1930s for these amazing “toast-making robots,” right? And the moment you could actually make toast with these robots, you called them “toasters,” right?
And so, I think that the adoption of AI-related techniques, and machine learning-related techniques, the moment they become interesting, everyone talks about it as “AI,” and then they don’t quite work. And then the moment they become mainstream, nobody talks about them as AI anymore, right?
But, I now have a relatively simple test for organizations, for figuring out how much they’ve adopted a machine learning or machine intelligence mindset. And that is: if there are senior technical people who have threatened to quit because they’re so unhappy with the reliance that the organization is placing on machine learning, then that is a good sign. That is because, at its core, an awful lot of machine learning runs antithetical to all the things that I learned as a software developer. Right? Because I am now trusting a bunch of probabilistic models in ways that I never would have before, and I am now unable to dig into the actual instructions at their core ─ to understand what somebody’s actually working on. And, that is a huge cultural shift. It’s incredibly difficult.
And so, whenever I talk to a data science team at a big company, or a large corporation, if the corporation tells me, “Oh yes, things are going really well and everyone’s really happy!” then I immediately assume they’ve not done any of the actual hard work, because it really is a shift. If you assess the data science organization in most corporations, it’s either going to be engineers who don’t really understand statistics, or it will be statisticians who are, from the point of view of a software developer, really slow. Right? And, that synthesis is yet to happen in most places.
Michael Krigsman: So, what is it about machine intelligence and these various derivatives: machine learning, AI, deep learning, cognitive computing, and so forth? What is it about them that has these profound implications for society, culture, and organizations as you were just describing? Why, and in what ways, are these technologies different from traditional software?
James Cham: Hmm. So, you know, at their core, we, as software developers, are giving up an awful lot of control, right? Because rather than writing and codifying business rules in systematic ways, or understandings of schemas, we are giving up some of that control to other algorithms to generate decisions that we would think that we would want to make ourselves. And, I think that that is actually quite difficult. It reminds me, in some ways, of how you asked me earlier: “How do I assess machine learning startups?” Or “How do I think about that right now?” You know, it was true probably three or four years ago that I would be looking for people who came out of the best research labs, or people who came out of some of the companies that have actually been running the large-scale machine learning projects for a while. And, while that’s still true, the interesting shift on my side is that I am now looking more and more for people who are trying out different business models because …
You know, the first step is to write software that is probabilistic, and that you will trust to do a bunch of different things. That’s the first step. The second step, which is still poorly understood, is: “What are the actual business implications of this? How does it actually change the economics of my business? How should I be charging my customers differently? How do my moats get developed differently?” And on that side, I actually spend a fair amount of time talking to people who have been trying to run machine-learning-related businesses for the last few years. And, I think there are a lot of open questions around that.
One interesting example is if you think about traditional SaaS. It’s weird to say, “traditional SaaS,” right? SaaS has only been so popular for the last 10-15 years. But if you were to think about most great SaaS companies, they’ve spent a lot of their time convincing me, as a customer, that my data is safe in their cloud. Which really just means their servers, right? So, I now trust them. And they spent a lot of time convincing me that this is both cheaper and more effective, and also that my data is safe. It’s protected. This is also why you have an awful lot of technical architecture talk that really is just marketing to convince me that my data’s safe in their servers. So, that’s interesting, right?
But, in a machine learning world, where the data from different customers is actually all additive, traditional SaaS companies have a hard time telling me that my data is going to be used for another customer, right? And I think that that shift in thinking about how the business is run, and about my relationship with my vendor, is something that an awful lot of SaaS companies are going to have a hard time with, in part because their terms of service assume that a customer’s data was never designed to be folded into someone else’s model. And so, that’s an example of the changes in how businesses will be run for software companies: changes that we don’t understand yet and that we’re just figuring out.
Michael Krigsman: What about the changes that are implied for companies? I mean, for example, you’re so focused on the future of work, and yet the moment that you begin to connect machine intelligence; artificial intelligence; to the future of work, a lot of people become very concerned about their jobs.
James Cham: Yeah! I think there are two interesting angles on that. So, first, these models are not perfect, right? The interesting thing about these models is that there’s never the illusion that they’ll be perfect. The data that goes into them sometimes changes. The actual problems that they’re trying to solve change. As a result, the models constantly have to iterate. And, you know, when we call it “machine learning,” there’s this delusion that the models get better. They don’t necessarily get better. They just iterate. And when they iterate, some are hoping that they’ll get better, and we need to have ways to understand that, right? Think about a big bank that builds a better and better model. If you’re not careful, one day, you might find that you’re accidentally redlining, right? And then, you’re incredibly racist and you get in a lot of trouble.
And so, that understanding that these models shift is a big open question. And so, there’s going to be a lot of work. There’s going to be a lot of work around managing models and building new models. When you have lots of models, then the nature of competition also changes because you’re now building models to compete with other people’s models and things like that. So, that’s the first piece.
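The redlining worry above is checkable in code. A minimal monitoring sketch might compare a model’s approval rates across groups and flag big gaps for human review; the group labels, the data, and the conventional four-fifths ratio used as a threshold here are all illustrative assumptions.

```python
# Compare a model's approval rate across groups and flag large gaps.

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def needs_review(decisions, ratio=0.8):
    """True when the lowest group rate falls below ratio * highest rate."""
    rates = approval_rates(decisions)
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and lo / hi < ratio

# Made-up decisions from a hypothetical model iteration.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 4 + [("B", False)] * 6
flagged = needs_review(decisions)
```

The point is not this particular metric but that such checks have to be re-run every time the model iterates, because a model that was fine last quarter can drift into trouble.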
And the second piece is what my work will actually look like in the future. There’s this guy Hal Varian, who is currently the Chief Economist at Google, and who for a long time was at Berkeley. He once told me a story when I asked him about something very similar to this. You know, Hal’s from the Midwest; his grandfather was a farmer. He said, “My grandfather would look at my work and say, ‘What are you doing? You’re fiddling your fingers all day! You’re not actually working! You’re just twiddling your fingers on this chiclet thing that looks like maybe a keyboard.’” And that’s because what Hal’s doing now, or what we all do as knowledge workers, looks fundamentally different than what we would have thought of as work even fifty years ago.
I think the same thing’s going to be true for the next ten, twenty, thirty, forty years. That you’re going to go to talk to your grandkids, and you’re going to see whatever it is that they’re doing, and you’ll say, “Gosh! Are you guys working, or are you playing video games?” Right? And I think just the nature of work changes. The thing that’s hard to appreciate right now from where we sit is how dynamic, and how quickly the actual nature of work changes. So, I think that that’s the good news. The good news is that people will find things to do, and find new problems. We have so many things out in the world that need to be solved. So, that’s the good news.
The bad news is the shift is really fast, and I don’t think any of us have an idea of how fast or slow it’s going to be. And, if a shift is too fast – historically, people figure things out ultimately, but sometimes, we end up with huge upheaval and wars because people don’t know how to manage it. And so, I think that question is going to be outside the scope of this conversation, but I think that that is one of the more important questions of the age.
Michael Krigsman: Yeah. Clearly, there are going to be public policy implications, and we’ve been having some of those discussions with experts in ethics and public policy here on CxOTalk. How do you …
James Cham: … Let me just say one quick thing about that, though.
Michael Krigsman: Uh huh.
James Cham: It is hard, though, for the ethicists, and for the public policy people to talk about this right now, in part because even managers on the ground don’t totally understand how things are changing. And, in some ways, one of my biggest side projects is this effort to connect economists with actual practitioners right now in machine learning and MI [machine intelligence] because their understandings are totally different. Practitioners would actually benefit from talking to economists and people who have theory. But the folks who have theory, the theories need to change based on what’s actually happening on the ground and there’s actually still a huge disconnect at this point.
Michael Krigsman: You’re trying to connect economists who are focused on machine intelligence technologies to, when you say “practitioners,” how do you define the practitioner in this case?
James Cham: I’d say that in a lot of cases, the economists who are thinking about machine learning haven’t spent enough time with the guys and gals on the ground [who are] actually building models day to day, [to understand] how hard it is. What’s easy and what’s not easy. And, I think that there are lots of open questions about that right now.
Michael Krigsman: And, what is the …
James Cham: … As an example: You know, the algorithms are not magical. The thing that is magical is getting a bunch of data and cleaning it, and normalizing it in a way that’s usable. That’s magic. The other part that’s magic is there’s a lot of fiddling that you have to do with these models in order to get them to be effective. There are a lot of … When I say “fiddling,” oftentimes, it literally is fiddling: “How do I deal with this feature, or that feature, in order for my model to actually be predictive?”
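To make the "fiddling" concrete: here is a purely illustrative sketch (the data and feature are invented, not anything from the companies discussed), assuming scikit-learn, of how re-encoding a single skewed feature can change how predictive a simple model is:

```python
# Hypothetical sketch of feature "fiddling": the same model, the same
# underlying data, but two different treatments of one skewed feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
income = rng.lognormal(mean=10, sigma=1, size=n)            # skewed raw feature
y = (np.log(income) + rng.normal(0, 0.5, n) > 10).astype(int)

# Variant 1: feed the raw, skewed feature straight into the model.
raw_score = cross_val_score(
    LogisticRegression(max_iter=1000), income.reshape(-1, 1), y, cv=5
).mean()

# Variant 2: log-transform the same feature first.
log_score = cross_val_score(
    LogisticRegression(max_iter=1000), np.log(income).reshape(-1, 1), y, cv=5
).mean()

print(f"raw: {raw_score:.2f}  log-transformed: {log_score:.2f}")
```

The algorithm never changes; only the encoding of one feature does, and that choice is exactly the kind of hands-on decision Cham is describing.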
Michael Krigsman: I think this is a very important point. When you say that the models are not magic, the algorithms are not magic, can you elaborate on this? This is a very important point for people to understand.
James Cham: Yeah. Well, it’s a little bit worse than that. 1) They’re not magic because there are a number of well-understood approaches that people have had for a number of years around how to try to build these models. But 2) It feels like magic because we don’t understand it yet. It feels like magic because the theory has not caught up with the practice, right? If you were to look inside even academia for AI, there’s a huge disconnect between the people who are building new algorithms and new approaches, and the people who build the theory. And, the folks who build the theory still don’t really understand exactly what’s happening right now.
Michael Krigsman: But then you said that the data is magic ─ the collection of the data; the cleansing of the data; all of that. So why this distinction in this way between the algorithm and the data?
James Cham: Because, the data … You know, I imagine most of your audience has, at some point, put together some great software product that totally failed because the data that they stuck in it was bad, or someone misunderstood some column, or someone misunderstood the nature of the problem, right? And, that’s always a problem. But, it is a problem when you have a thousand rows. And it’s a bigger problem when you have ten thousand rows. And it’s a large problem when you have hundreds of thousands, or millions, of rows. And so, that process of cleaning that data is still quite difficult. And, understanding the problem you’re trying to solve, that’s still quite difficult. And then figuring out if the data you have actually relates to the problem you think you’re trying to solve? That’s even harder, right? And so, in a lot of these model-building cases, we’re still at that point.
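A toy illustration of the cleaning-and-normalizing step Cham describes (the column names and bad values here are invented, assuming pandas): before any model can be built, inconsistent entries have to be coerced into usable types, and every rule for doing so has to be discovered and justified.

```python
# Hypothetical sketch of cleaning and normalizing raw tabular data.
# The columns and the bad values are invented for illustration.
import pandas as pd

raw = pd.DataFrame({
    "amount": ["100", "2,500", "N/A", "-50", "300"],
    "date":   ["2017-01-05", "2017-01-06", "2017-01-07", "bad", "2017-01-09"],
})

clean = raw.copy()

# Normalize amounts: strip thousands separators, coerce junk ("N/A") to NaN.
clean["amount"] = pd.to_numeric(clean["amount"].str.replace(",", ""), errors="coerce")

# Parse dates, coercing unparseable entries ("bad") to NaT.
clean["date"] = pd.to_datetime(clean["date"], errors="coerce")

# Drop rows that failed either check. At five rows this is trivial;
# at millions of rows, each such rule is a decision someone must own.
clean = clean.dropna()
print(clean)
```

At this toy scale the problems are obvious by inspection; the point of the passage is that at hundreds of thousands or millions of rows, they are not.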
Michael Krigsman: What are some of the use-cases that you see that are most interesting around all of this?
James Cham: So, I try very hard as an investor not to get either too visionary, or too optimistic about things. But I don’t know. It hits everything, right? It hits everything from things as mundane as (I shouldn’t comment)… But, things as mundane as looking through people’s expenses in order to capture examples of lack of compliance. So, I’m an investor in this company called AppZen, which does this. And, on the one hand, you’d say, “Gosh, James! This is a really boring problem! Who really cares about this?” And then, I kind of said that to the founder first. But then, the moment they go through and do a little bit of analysis to look at how many cases of noncompliance you get ─ and when I say “noncompliance,” it’s either malicious or not. Right? You know, in an expense report, it’s tens of millions of dollars! And it’s one of those funny things where it’s just like this little problem sitting on the floor that was not practical to deal with before, just for organizational reasons, because you’d have to hire lots of people to deal with it, or you’d have to outsource it, which would be complicated. But now, when you can figure out what are the things you care about, and then rely on AppZen to come up with the little bots to scrape through all the data, the cost of prediction goes down dramatically. Suddenly, this thing which was one of those nagging little things you were worried about in the back, now becomes something in the immediate present to solve.
And then so it’s things like that. It is things like … There’s a company out there right now called Textio, that goes through and reads the job description that you write. And it goes through and says, “You know what? Statistically, when you write the job description this way, you’re less likely to get people to try to fill it out.” Right? And, that’s the sort of long-term conscientiousness and memory, and remembering every job description you’ve written in the past. No one [has been] able to do that, right? And so, by putting together the collective of everybody, it’s kind of amazing! So, just to say, I think we’ll be surprised by how much the enterprise can change as you start to understand the potential around machine intelligence. So, it’s like all the big strategic things, and all these little parts of the company that fundamentally will change.
The difficult part ─ so, this is the caveat. The hard part is that we don’t know, or we don’t have good ways yet of predicting, how much these models, or these bots, will help the organization. We don’t have good intuition for saying, “If I go after this problem, maybe I’ll save this much money.” There’s an MIT professor who I was talking to the other day, who’s been doing this for the last twenty years in a different field ─ in his case, compiler design. He said the hard part would be that he’d apply machine learning to one specific problem, [and] get almost no results. You know, it would improve by five percent. I really don’t care. And, those would be all the problems that he thought he knew about. Right? But the moment he started playing with it on larger and larger scales, he’d be surprised to see that, “Oh! This problem, which I didn’t think was a problem, I can now solve because it wasn’t in my head. I didn’t think about this thing as something that was even solvable.” And so, that’s the exciting part.
Michael Krigsman: So we don’t yet have enough experience with the long-term implications of these technologies, algorithms, and models to know what is going to be really important and make significant changes. It’s still too early, essentially.
James Cham: There’s a lot of playing that has to go on, right? I think there’s a lot of playing and a lot of sharing of knowledge and learning. There has got to be a lot of entrepreneurship, both inside big companies and outside in my part of the world, right? And I think that’s the interesting cutting-edge that goes under-discussed. Because, everything else is true, right? All the sexy techniques, all the new ways that people are coming up with to solve some of the model building problems; all those are really important. But, I do think that we need to spend more time asking the questions that I’m asking.
Michael Krigsman: We have just a few minutes left. One of the issues that comes up is there is just so much hype right now. Pretty much every technology company these days says, “We have AI in our product.” And, I’m sure sometimes it’s true, but probably there’s a lot of bullshit going on out there at the same time.
James Cham: Sure.
Michael Krigsman: So, how do we distinguish between the hype … And this effectively, in a sense, is a key part of your job, right? How do you distinguish between the hype and the reality?
James Cham: Those are two different questions. If I were a manager right now, what would I be doing to try to figure out what works, or doesn’t work? I would probably be doing three things. First, I think this is a great excuse to buy all the consumer electronics that your wife doesn’t want you to buy. I think this is a great time to be playing with Alexa, and to be playing with various tools, just to start building intuitions on how you feel about things, and what seems effective and not effective, because there’s a lot of uncertainty. That’s why.
This is also a great time to look on the smaller end around making little investments around products that claim that they’ll solve something for you, right? So I think that there are lots of tools that are in the $1,000 – $2,000 range per month. But you should just start playing with them to see whether or not you can build an intuition.
But, the thing that I spend the most time on is I would start getting to know the data people inside your organization, and figuring out what are the secret things that they know are not accurate, and things that they know that are accurate. And just sort of digging inside the organization to figure out what are the opportunities for us. Because there are going to be opportunities that are going to require hugely expensive models to build and there will be some very low-hanging fruit. And I think once you adopt this model-centric view, rather than an applications-centric view, or a data-centric view, then I think you see your company a little differently.
Michael Krigsman: So, become friends with the data scientist in your company.
James Cham: Well, whether they’re the data scientists, or just the database guys and the people who are managing various logs. I think you’d be surprised what’s available.
Michael Krigsman: So we live in a data world, increasingly so. And so, therefore, learn about data and talk with people who are involved with data of all kinds, essentially is what you’re saying?
James Cham: Yeah, because we’re also in this migration from a data world to a model world. And, I think that transition … I do think that the companies that figure that out soonest are going to be the ones that are going to be … I don’t know. Imagine all the buzzwords you love, like “agile,” or “dynamic,” or whatever ─ those good things. The ones that are model-centric, and are smart about being model-centric, are going to be the ones that are going to be successful.
Michael Krigsman: So, it’s no longer just about the business model, but it’s about understanding data models and algorithm models in connection with the business model.
James Cham: Right.
Michael Krigsman: That’s the new linkage: How business model and data connect.
James Cham: That’s right. That’s right.
Michael Krigsman: Well, that’s a pretty interesting topic, and unfortunately, we are out of time. How about you come back another time to CxOTalk, and let’s explore that one, because that’s an interesting one.
James Cham: Yeah! I think that’s a big question. I think that is a big question. So, I’ll have to at least leave you with my current favorite story, which everyone should now look up. Everyone should look up … Probably your entire audience has heard about AlphaGo, and it’s super impressive. I don’t really know how to play Go, so I have no deep insight there. But, the interesting angle to me is less that the machine beat some of the best players in the world. The interesting angle is if you asked the players, they’d say that “The machine introduced strategies that we had not yet thought about.” Right? “That we had not yet understood or considered.” And, in some ways, the better, enlightened version of machine learning, the world I hope we end up in, is one in which our small understanding of what the problem set is, or the solution space is, actually gets expanded, right? Because we’re now able to rely on machines that help us explore different types of solutions that we haven’t thought about before. I think that’s exciting.
Michael Krigsman: What’s new; what it enables us to do, that we couldn’t do and that we didn’t even think about.
James Cham: Right. What strategies, what different types of problems could we solve that we haven’t even thought through yet? And I think that the managers who start tackling the basic things now will, over the next generation, be able to tackle these bigger ones.
Michael Krigsman: Well, that is hugely valuable managerial advice. Thank you so much, James Cham, for being our guest today on CxOTalk!
James Cham: Great!
Michael Krigsman: You have been watching Episode #220 with James Cham, who is a partner at Bloomberg Beta. Thanks everybody for watching, and we have another great show next week, so please join us. Bye-bye!