AI Trends executive interview with Kumar Srivastava, Vice President for Product and Strategy at Bank of New York Mellon. Kumar serves on the executive advisory board of AI World. He has worked on big data analytics, machine learning, API and other products as part of his broad business experience. He and his team at BNY Mellon's Silicon Valley Innovation Center are focused on developing advanced products. He recently spoke with John Desmond, editor of AI Trends.
AI Trends: For a financial services company such as BNY, how can strategic goals be advanced using artificial intelligence (AI) and machine learning (ML) technologies?
Kumar: AI and machine learning are new capabilities that were not available even a couple of years ago. They offer the ability to process a lot more data, which means that there is a direct benefit of digitizing the entire enterprise. That means instrumenting every possible application system, every customer-facing interface and all internal systems interfaces to produce the data. Now we have the ability to understand the data, derive patterns and derive insights which can then be leveraged in adding customer value.
And because of this change in conditions, what's now possible is that you can get a better sense of what your clients and customers might want to do, are requiring, or are requesting the institution to perform on their behalf. But also how well the enterprise is doing in terms of delivering on the needs of its clients: being able to predict, but also to ensure that it can carry out these transactions successfully within the right quality constraints.
And this is the ability that lets you understand what the client might want to do, and ensure that you're providing the best quality of service. These two scenarios can really be enabled and enhanced using AI and ML, and I think this is a cross-industry phenomenon, because the ability to really understand and predict what clients might need leads to the industry achieving a much higher level of client satisfaction.
AI Trends: How can financial services companies incorporate AI and machine learning into the product development process?
Kumar: The way that AI and ML impact the entire product development process is fairly profound and comprehensive. There are impacts to how products are designed, how products are delivered and deployed to users, how products are monitored, and then how the feedback loop is utilized to enhance the client experience and the product itself. And the way it should be done, and the way it's more likely to succeed, is with training. We need to ensure that the various functions in the enterprise understand the significant shift in how product development should happen.
It’s not based on the gut feeling of one person. It’s not based on traditional methods of using market research to determine what that business logic should be in the product. What’s different now is that you want to build in a capability for AI and ML in your product so that it can actually learn what the patterns are, what the insights are, and then use those to deliver, or define, or derive business value for clients.
You want to take existing products and reconfigure them to be able to utilize AI and ML. You want to ensure they are being fed the right data, the entire data set, so that the system can learn what it needs to learn, and that can be used to then produce new types of value or solve existing problems. Also required is a change in the model for how these products are deployed into a production environment, where real users, enterprises and partners can interact and get value from these systems.
So how we do operations fundamentally also changes, because now the purpose of deploying a piece of software to deliver value changes from doing something deterministic, again and again, to being driven by this AI system. The AI model has an intelligence that is learning as it captures data; that learning has to be tracked to ensure that quality levels are consistent. Also, the people building these models, the data scientists or engineers, need to understand when the right times are to refresh the model, replace the model, or enhance the model with additional capabilities. So what you really need is a new kind of monitoring and tracking system so that data scientists can ensure that their models continue to perform at high quality levels.
And if products do not provide a high quality of service, they need to be taken offline, redone and redeployed.
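That monitoring loop can be sketched in a few lines. This is a minimal illustration only, assuming a simple rolling-accuracy metric and made-up thresholds; it does not describe BNY Mellon's actual tooling.

```python
from collections import deque

class ModelQualityMonitor:
    """Tracks a model's rolling accuracy in production and flags when it
    should be refreshed or taken offline. Thresholds are illustrative."""

    def __init__(self, window_size=100, refresh_below=0.90, offline_below=0.75):
        self.window = deque(maxlen=window_size)  # most recent outcomes only
        self.refresh_below = refresh_below
        self.offline_below = offline_below

    def record(self, prediction, actual):
        """Record one prediction/outcome pair as it is observed."""
        self.window.append(1 if prediction == actual else 0)

    @property
    def accuracy(self):
        if not self.window:
            return 1.0
        return sum(self.window) / len(self.window)

    def status(self):
        """Map rolling accuracy to an operational decision."""
        acc = self.accuracy
        if acc < self.offline_below:
            return "take-offline"   # quality too low: redo and redeploy
        if acc < self.refresh_below:
            return "refresh-model"  # drift suspected: retrain on fresh data
        return "healthy"

monitor = ModelQualityMonitor(window_size=10)
for pred, actual in [(1, 1)] * 8 + [(1, 0)] * 2:  # 80% correct recently
    monitor.record(pred, actual)
print(monitor.status())  # refresh-model
```

The point of the sketch is the decision boundary, not the metric: in practice the tracked signal might be precision, calibration, or input-distribution drift rather than raw accuracy.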
AI Trends: Very interesting. It seems like big changes could take a long time to come about. What is the impact in the short term?
Kumar: That’s a good question. It’s two things. Like any other technology, you go through the hype cycle and then you realize what can be really done and what cannot be done. So I think in the short term, two things are likely to happen. One, we’ll have a lot of enterprises trying to get started. Given this is a strategic initiative, we’ll see a lot of ML and AI activities where big enterprises try and get this expertise which doesn’t really exist in the world at a large scale. They’ll be forced to acquire these skills in-house, and then use that to get started.
Like any technology, the technology itself doesn’t solve any problem. So in the short term, we’ll see essentially two classes of organizations or enterprises that pretty much everyone will fall into. We’ll have companies like Baidu that have done really well, and have really understood how you productize something in AI and ML to churn out products that use very sophisticated technology internally. The scenarios that they deliver are very user-friendly; customers are able to use them with minimal training.
On the other hand, we’ll have enterprises that are focusing on the technology, focusing on building that expertise without a clear understanding of what problems they want to attack.
It could be very similar to the big data initiatives of a few years ago. Many enterprises were struggling to adopt it because they didn't have the business problem in mind. So I think in the short term, we'll see some successes and some failures. What we'll see more of is demand for product designers and product developers who are able to understand both sides, the users' problems and the technology from AI and ML, and be able to put them together. This will give users the value they are looking for; they don't care whether it's coming from AI or ML or some other technology.
The technologists — the data scientists, the machine learning professionals – need to be able to understand how to take this technical capability and craft it so that it becomes a value-adding experience for the users.
AI Trends: What do you see as the impact in the longer term?
Kumar: Over the longer period, the amount of data we’re collecting will accumulate more and more. The number of capabilities or systems that can be instrumented, that’s going up. So we are producing more data from existing systems, and we’ll have more information from new systems that will be heavily influenced by sensor designs. They’ll be decoding and understanding anything and everything that’s possible to be collected.
This is one of those technologies that is simply going to slowly take over almost every aspect of how value is generated across the world in every vertical. And as that happens, we'll see this expertise being built and acquired in every possible enterprise. We'll have an ecosystem of independent software vendors or other integration partners that help enterprises get these capabilities in place.
Ten years from now, AI and ML will be built into our product development tools or design tools. I'm expecting that these things will become so commoditized that developers out of college will be able to pull from a menu and insert a particular engine. This is sort of possible today with open source, although it's not so easy.
The hard part which takes time today is training and modeling and making sure the quality is at a high level. That eventually will also be automated, so at the click of a button, this will become as basic and as rudimentary as web design, HTML design.
AI Trends: How important is the open source model to the adoption of AI in the enterprise, do you think?
Kumar: Open source is huge. This is one of the few instances where the community that’s building and delivering and enabling all of this open source development and availability of software wants to collaborate to put this AI and ML stuff out there. So you have all the big enterprises — Facebook, Microsoft, Google, Baidu, and a lot of universities — all focusing on and ensuring that a lot of their value is open source.
From a coding perspective, from frameworks, from a tooling perspective, we'll have a lot of capabilities available. But things like managing the AI, managing a deep learning network, that capability will not be open sourced. So we'll have an environment where it's extremely easy to get started, because there's a lot of code available in open source. But it will be very hard to achieve massive scale, because that capability will not be open sourced.
This is similar to the big data space, or any other space for that matter. For example, Netflix open sourced some of its components to manage streaming of data on a large scale. So from open source you can get the core capabilities, but the management part of running these things at scale is never really open sourced. I've never seen that.
I think it will be the same thing for AI and ML. We'll have a lot of capabilities, which is great, so people can get started. Students at the universities can learn it. Existing professionals can also get these skills very quickly. But every enterprise will struggle, and will possibly need outside help from people who know how to scale up these systems to achieve the kind of internet scale you might need.
For these enterprises to be able to get a head start, they need access to data that was used to train successful models. In the self-driving area, a lot of data is being collected from many different self-driving car companies. What does it take for a new entrant in that market to get at least, you know, a good understanding of what that data looks like, to be able to produce a rudimentary model? The other alternative is for them to hire a bunch of drivers, and just drive around for a long time.
So I feel there's some value in building a model where the data or the models themselves can be open sourced. They don't have to be the latest and the greatest technology. They could be, you know, decades-old technology, or even just a few years old; given the rate at which things are changing, that would still be very useful for a lot of people that are starting to get into that area. So having some data and models be open sourced and made available to universities, so that students can actually get a head start, would lead to faster evolution and improvement.
“The biases of the data scientists that are doing feature engineering on these models are heavily influencing what is being learned from these observations in the real world.”
The other angle on open source that really interests me is how do we make sure that the models themselves are getting better over time? And I'll use the self-driving car example again to illustrate the point. Imagine a car company, let's say Uber, has been training in Arizona and now has a license to run its cars in San Francisco. Google has been running in the Bay Area for some time. Tesla has been training across all the highways in the nation. What worries me is that every one of these models, as they are being trained, is very subject to the biases and assumptions of the environments and the training. The biases of the data scientists that are doing feature engineering on these models are heavily influencing what is being learned from these observations in the real world.
So we end up where everyone gets sort of a piece of the picture, or a slice of the picture of what's happening in the real world. What that simplistically means is that every model will be really good in certain situations at predicting or inferring what needs to be done. And they will have areas where they are weak, because they haven't seen enough examples in volume to be able to learn the pattern there.
So in this fragmented world of models, sometimes it makes sense for competitors to share and cooperate. An example is in security. Fortune 500 companies attacked by phishing campaigns, malware, or denial-of-service attacks are sharing information. There are industry bodies where these big enterprises come together and share this attack data, so that together they have a broader and better understanding of how they can protect themselves and their users.
A similar thing is required for incorporating AI and ML, and I think the open source model needs to evolve to ensure that. Especially in the areas where the AI interacts with the real world, it can often be dangerous. The unpredictability can be very high if you have models that don’t really understand the entire set of patterns that are required to be successful in the real world. And so these large enterprises need to test the quality of their models, to figure out what are the weaknesses and what are the strengths. And then when the weaknesses are found, they’re able to collaborate to a certain bare minimum level that’s not hurting their competitive advantage. For example, some self-driving models might be much better at detecting pedestrians than others.
So I feel that we need some sort of standardization in how we measure quality of models from different enterprises for the same problem area, and then be able to use that to make products better or safer and useful for everyone.
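One way to picture that kind of standardized, per-scenario quality measurement is a shared, scenario-tagged benchmark that every vendor's model is scored against. The sketch below is purely illustrative: the scenarios, inputs, and the two toy threshold "models" are made up, standing in for real systems, but it shows how a common test set exposes where each model is strong or weak.

```python
def per_scenario_quality(model_fn, test_cases):
    """Score a model on a shared, scenario-tagged test set so that
    different vendors' models can be compared area by area."""
    results = {}
    for scenario, x, expected in test_cases:
        hit = model_fn(x) == expected
        correct, total = results.get(scenario, (0, 0))
        results[scenario] = (correct + int(hit), total + 1)
    # Per-scenario accuracy: fraction of cases each model got right.
    return {s: c / t for s, (c, t) in results.items()}

# Shared benchmark: each case is (scenario, input signal, expected label).
benchmark = [
    ("pedestrian", 0.9, 1), ("pedestrian", 0.8, 1), ("pedestrian", 0.2, 0),
    ("lane-change", 0.7, 1), ("lane-change", 0.4, 1),
]

# Two hypothetical models with different decision thresholds,
# i.e. different biases baked in by their training environments.
model_a = lambda x: int(x > 0.5)
model_b = lambda x: int(x > 0.3)

print(per_scenario_quality(model_a, benchmark))
print(per_scenario_quality(model_b, benchmark))
```

Here model_a matches model_b on pedestrian cases but scores lower on lane changes, exactly the kind of per-area weakness that a shared yardstick would surface and that firms could then collaborate on.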
AI Trends: How can financial services companies use AI and ML to help address cyber security?
Kumar: There are really two kinds of attack vectors. One is inbound attacks from outside, where somebody is trying to probe the network, trying to intrude into the network using various different technologies. And they're using many different mechanisms, such as phishing, social networking attacks, social engineering attacks, and denial-of-service attacks. The other scenario is internal, where malware or these enabling attacks occur through taking over employee machines or their workflows, and then using that access to intrude into the network.
AI and ML by themselves cannot solve the problem; a whole set of technologies is required to work together. You can imagine that a lot of information is being missed, patterns of attacks or the attacks themselves, because of the sheer amount of data that has to be combed through. And the response is slowed down because the enterprise cannot release code, release software, and release pattern detection capabilities at the speed at which these attacks are evolving and changing.
AI and ML provide a capability now where a system can learn these attacks over time. That enables the enterprise to scale up and look at many different types of attacks. Now you're able to look at more data and process it faster, given the right hardware. At the same time, it doesn't mean it's a silver bullet that will pick up everything, or that it won't make mistakes. So you still need an assembly of technologies working together, and good monitoring and remediation.
The more you can profile legitimate behavior and define illegitimate behavior, and the more you can classify and then better describe what unusual activity looks like, the better. And if you could do that at scale, that’s really how cyber security works. It’s being able to build the whitelists of behavior, of entities, of reputations, of profiles, and build blacklists, to recognize patterns that are suspicious and unknown, and have the right flows to remediate or understand those scenarios better.
So if you can imagine three classes — known-good, known-bad, and unknown — what AI and ML can do is increase the size of the known-good and known-bad sets, and thereby reduce the unknown set. That means users of those systems looking at the unknown sets are more productive and more efficient, because they have less noise to comb through and a smaller category of what they need to classify further.
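The three-class triage idea can be sketched in a few lines. The event names and hard-coded sets below are purely illustrative stand-ins for what a learned classifier would build up over time:

```python
def triage(events, known_good, known_bad):
    """Partition observed events into known-good, known-bad, and unknown.
    In a real system the first two sets would be grown by AI/ML models;
    here they are hard-coded to illustrate the shrinking-unknown idea."""
    buckets = {"known-good": [], "known-bad": [], "unknown": []}
    for event in events:
        if event in known_good:
            buckets["known-good"].append(event)
        elif event in known_bad:
            buckets["known-bad"].append(event)
        else:
            buckets["unknown"].append(event)  # analysts only review these
    return buckets

events = ["login-ok", "port-scan", "login-ok", "odd-payload", "backup-job"]
before = triage(events, known_good={"login-ok"}, known_bad={"port-scan"})
# After the system learns that nightly backup jobs are legitimate:
after = triage(events, known_good={"login-ok", "backup-job"},
               known_bad={"port-scan"})
print(len(before["unknown"]), "->", len(after["unknown"]))  # 2 -> 1
```

Every event the model moves into a known set is one fewer item a human analyst has to comb through, which is the productivity gain described above.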
AI Trends: A lot of money is being invested in AI companies. Are they good investments?
Kumar: It depends on the kind of companies that are being invested in. A lot of companies have AI technology that they are trying to productize. They often try to put a platform around stuff available as open source. They try to put some automation around the framework part of AI and ML. I believe they will struggle, because while they are making it easier to do AI and ML in an enterprise if somebody buys that platform, they're not really solving the core problem, which is twofold.
One, internal employees need to be trained, and this goes way beyond simple software engineering. It requires being trained to think at a different level, with a different thought process. It's not as simple as dropping in code libraries and calling functions; there's more to it. The other problem is that AI has to help generate some new product or service. That still has to happen. We've taken care of maybe 10% or 15% of the actual problem and delivered a solution for that, but the majority of the problem is still untouched.
So I feel these platforms will offer tools but enterprises will struggle to find value from them because that skill set is generally missing.
Then there are companies using AI and ML to solve existing known problems, or new problems that might have come up. So they're using AI and ML technologies to solve problems. They don't claim to be AI companies. They claim to be a company for X, using AI to solve it. And what they're delivering is real software, real products that solve the actual problem, not a technical platform for the user to solve the problem.
These companies, I think, will have a better future than the first set, because they offer direct, immediate value: they solve a known business problem in a much better way because they have AI and ML.
AI Trends: Thank you very much for your time. Is there anything you’d like to add or emphasize?
Kumar: I'd just like to close by saying that the skill set that's really missing is the part that takes these technologies and actually wraps them up in a workflow or scenario that solves the real problem. And that skill set is missing not because people haven't been doing product development, but because not enough people who do product development and product design understand this new technology. So I think, in addition to hiring the right machine learning data scientists who understand these technologies, companies should ensure that they're able to find and hire people who understand both sides. That way, once you have the right expertise, the right data, and the right models, you're able to leverage them in the real world and generate value by building the right scenario, the right use case, and the right support to run this stuff in the production landscape.