By AI Trends Staff
Organizations need to move from opportunistic, tactical decision-making around AI to a more strategic focus, leading business school professors suggest.
Two authors of a recent article in the MIT Sloan Management Review argue that the path to strategic AI for business rests on three pillars. Amit Joshi is a professor of AI, analytics, and marketing strategy at IMD Business School in Switzerland, who works with companies in telecom, financial services, pharma, and manufacturing. Michael Wade is a professor of innovation and strategy at IMD Business School in Switzerland. His most recent book is “Orchestrating Transformation” from DBT Center Press, 2019.
The three pillars of strategic AI for business the authors suggest are:
- A robust and reliable technology infrastructure;
- New business models designed to capture the largest benefits from AI; and
- Ethical AI practices.
AI relies on mathematical, statistical, and computer science techniques that depend heavily on a stable infrastructure and usable data, the authors note. “Without the support of well-functioning data and infrastructure, it is useless.”
This infrastructure support must extend through the entire data value chain, from data capture to cleaning, storage, governance, security, analysis, and availability of results. The AI infrastructure market is expected to grow from $14.6 billion in 2019 to $50.6 billion by 2025, according to MarketsandMarkets.
Feedback loops are needed to act on failures. When Ticketmaster wanted to reduce the opportunity for ticket scalpers to buy blocks of tickets and then resell them at higher prices, it used machine learning to build a defense system. The system combined real-time ticket sales data with a wide view of buyer activity to block the resellers while attempting to reward legitimate customers. Feedback loops helped Ticketmaster stay ahead of scalpers.
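The pattern described above can be sketched in miniature. The class below is a hypothetical illustration, not Ticketmaster's actual system: the features, risk score, and thresholds are all invented for the example. What it shows is the feedback loop itself, where confirmed outcomes (a missed scalper, a wrongly blocked customer) adjust the blocking threshold over time.

```python
# Hypothetical sketch of a feedback loop for bulk-buyer detection.
# All names, features, and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class Purchase:
    buyer_id: str
    tickets: int
    accounts_per_card: int  # distinct accounts seen on this payment card


class ScalperFilter:
    """Score purchases, and adapt the blocking threshold as
    confirmed outcomes feed back into the system."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def score(self, p: Purchase) -> float:
        # Crude risk score: large blocks of tickets and payment-card
        # reuse across accounts both look bot-like.
        return min(1.0, 0.1 * p.tickets + 0.2 * p.accounts_per_card)

    def blocks(self, p: Purchase) -> bool:
        return self.score(p) >= self.threshold

    def feedback(self, p: Purchase, was_scalper: bool) -> None:
        # The feedback loop: a missed scalper tightens the threshold,
        # a wrongly blocked customer loosens it.
        if was_scalper and not self.blocks(p):
            self.threshold = max(0.1, self.threshold - 0.05)
        elif not was_scalper and self.blocks(p):
            self.threshold = min(1.0, self.threshold + 0.05)


f = ScalperFilter()
bulk = Purchase("b1", tickets=12, accounts_per_card=3)   # risk score 1.0
legit = Purchase("b2", tickets=2, accounts_per_card=1)   # risk score 0.4
print(f.blocks(bulk), f.blocks(legit))  # True False
```

A production system would replace the hand-written score with a trained model, but the loop structure (score, act, learn from outcomes) is the same.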
New Business Models
Intelligence transformation refers to new business models that exploit AI’s potential to deliver improvements beyond human capabilities.
The authors pointed to OrangeShark, a Singapore-based digital marketing startup that uses machine learning to help automate advertising campaigns. The system covers media selection, ad placement, click-through monitoring and conversions, and even minor ad copy changes. OrangeShark is able to offer a pay-for-performance business model, whereby clients only pay a percentage of the difference between customer acquisition costs from a standard advertising model and the OrangeShark model.
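The pricing arithmetic behind such a pay-for-performance model is simple to work through. The function below is an illustrative sketch, not OrangeShark's actual fee structure: the 50% share and the dollar figures are hypothetical. The client pays a share of the savings versus their baseline customer-acquisition cost; if there are no savings, there is no fee.

```python
# Illustrative pay-for-performance fee calculation.
# The fee share and all dollar figures are hypothetical.

def performance_fee(baseline_cac: float, achieved_cac: float,
                    customers: int, share: float = 0.5) -> float:
    """Charge a share of the savings versus the client's baseline
    customer-acquisition cost (CAC); no savings means no fee."""
    savings_per_customer = max(0.0, baseline_cac - achieved_cac)
    return share * savings_per_customer * customers

# If a standard campaign acquires customers at $40 each and the
# automated one achieves $25, the fee on 1,000 customers at a
# 50% share is 0.5 * (40 - 25) * 1000 = $7,500.
print(performance_fee(baseline_cac=40.0, achieved_cac=25.0, customers=1000))  # 7500.0
```

The appeal of the model is that the vendor's incentive is aligned with the client's: revenue exists only when the AI system beats the baseline.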
In another example, Affectiva offers an “emotion measurement” service that leverages an enormous database of sentiment-analyzed human faces. The company analyzes and classifies a range of human emotions using deep learning models that can then be made available to clients, which include automakers for tracking the state of the driver. Thus, Affectiva has built a business model based on providing intelligence as a service in a way not possible before its AI capabilities were available. New business models should be considered a foundation of any AI strategy.
Ethical AI
Ethical AI needs to be factored into the business plan. A review of the facial recognition market illustrates the risks of treating ethics as an afterthought. Facial recognition systems incorporating AI have proved effective in applications such as policing. However, they raise ethical concerns, including documented biases: lower accuracy in identifying darker-skinned faces than lighter-skinned ones, and women than men.
In December 2018, Google announced it would suspend sales of its facial recognition software, citing concerns over ethics and reliability. Google’s competitors reached similar conclusions 18 months later. In June 2020, in response to the Black Lives Matter movement, IBM halted the sale of facial recognition software to police forces in the US. Two days later, Amazon announced a one-year moratorium on sales of its facial recognition software to police; Microsoft followed the next day. The episode was damaging to these companies, suggesting the technology had run ahead of the ethical considerations around the product. Such scenarios need to be anticipated.
“It is possible that an AI ethics office will need to be created within organizations to oversee AI activities, establish and implement ethical AI guidelines, and hold the organization accountable for its ethical practices,” the authors suggest.
Harvard Business School Professors Outline Strategic Use of AI in New Book
A new book by two Harvard Business School professors also encourages thinking about AI strategically. Marco Iansiti, the David Sarnoff Professor of Business Administration, and Karim R. Lakhani, the Charles Edward Wilson Professor of Business Administration, both at the Harvard Business School, are the authors of “Competing in the Age of AI” from Harvard Business Review Press, 2020.
The authors cite the example of the founder of Peloton, John Foley, who got frustrated with his local gym when he kept getting elbowed out of his preferred spin classes. He founded the company in 2012, offering stationary bicycles priced at $2,200 with integrated 21-inch tablet computers. For $39 per month, Peloton offers access to live-streamed classes where members can track their performance, connect with classmates and have their achievements called out by instructors. The business generated $700 million in revenue in its first fiscal year.
In a recent account from the Harvard Business School, Foley credited today’s technology including software, data and communication networks, as the basis for the company’s success. “We see ourselves more akin to an Apple, a Tesla, or a Nest, or a GoPro—where it’s a consumer product that has the foundation of sexy hardware technology and sexy software technology,” he stated.
The two authors have been studying the impact of AI on business for 10 years. They have worked on AI strategy with Amazon, Microsoft, Facebook, Disney, Verizon, and NASA among others.
In an interview, Iansiti stated, “A lot of people think of this as disruption, like the taxi industry is being disrupted by Uber. It’s not disruption. Rather, it’s a completely different kind of firm. This hasn’t happened in more than 100 years.” He referred to AI as “a fundamental change in the means of production” that is affecting every industry.
He noted that, “Everybody is still trying to figure it out,” and that the move to AI comes with risks, such as consumer privacy, cybersecurity, data bias, and algorithm bias.