By AI Trends Staff
Many job descriptions across organizations will require at least some use of AI in the coming years, creating opportunities for the savvy to learn about AI and advance their careers regardless of discipline.
New job titles have emerged, and more will emerge, to help organizations execute on AI strategy. Machine learning engineer has cemented a leading role on the AI team, for example, taking first place in Indeed's ranking of the best jobs last year, according to a recent report in CIO. And artificial intelligence specialist was the top job in LinkedIn's 2020 Emerging Jobs report, with 74% annual growth over the past four years, followed by robotics engineer and data scientist.
The number of AI-related jobs could increase globally by up to 16%, stated Ritu Jyoti, program vice president for AI research at the IT consultancy IDC. With AI generating productivity returns during the pandemic, interest is growing. “IDC believes that AI spending and employment will increase among healthcare providers, education, insurance, pharmaceutical companies and federal governments,” she stated.
Here are some new AI roles taking shape:
Chief AI Officers will understand how AI technology can be exploited by the business, will help develop the company’s AI strategy and explain it to the board, other executives, employees and customers, and will work with the CIO to implement that strategy.
One example is Nicole Eagan, chief AI officer at Darktrace, the cybersecurity firm. She splits her time working with in-house technology teams, talking to customers, and evangelizing the firm’s AI strategy.
“I work with the CTO and our AI lab to explore new areas for research and development,” stated Eagan, who has a background in strategic marketing at Oracle. She works with a highly qualified staff of AI experts. “We have over 35 PhDs with advanced math, machine learning and AI expertise who are working in our labs,” she stated.
The AI Ethics Officer covers risk and governance and may need to coordinate with government agencies, nonprofits, legal teams, users and privacy groups as well as technology teams.
Kathy Baxter is architect for ethical AI practice at Salesforce.com, with a background in user experience research at Google, eBay and Oracle. She combines a passion for technology with a healthy skepticism. “AI is not magic and is not appropriate for every challenge,” she stated.
AI ethics officers in her view do not need to be computer scientists or data scientists. “What is more important is to have a humanistic background like psychology, sociology, philosophy, or human-computer interaction,” she stated. “It is critical to focus on understanding everyone impacted by technology, their needs, context, and values.”
With a master’s degree in human factors engineering and an undergraduate degree in applied psychology, Baxter has an ability to de-escalate the emotional debates that sometimes ensue in discussions of AI and ethics, thus enabling healthy discussion. “AI regulation is coming, so creating an ethical AI practice now will better prepare you to be in compliance,” she suggested.
The AI data engineer helps prepare data for advanced analytics and machine learning. Kevin Brown, managing director of BT Security, an arm of the British multinational telecommunications company, oversees this function.
The role makes sense for large organizations with high data volumes. The cybersecurity side at BT sees millions of events per second and some 4,000 cyberattacks per day. “We have a vast amount of data that we quickly need to sift through to find the anomalies,” he stated. “We’re always looking for the needle in the haystack.”
Successful AI teams have certain characteristics in common, suggests a recent report in The Enterprisers Project. One of these is to have a clear strategy.
Recent research from McKinsey consultants identified high-performing companies in AI as having addressed business alignment and data requirements. Some 72% of respondents to a survey said the company’s AI strategy aligned with the corporate strategy, and 65% reported having a clear data strategy that supports and enables AI.
Companies that take a multidisciplinary approach to implementing AI, with team members having different backgrounds and concentrations, also have an advantage, suggested Seth Earley, CEO of Earley Information Science and author of “The AI-Powered Enterprise.”
He cited the example of Vodafone, which looked to build its AI capability by adding “cognitive engineers.” However, “The problem is that cognitive engineer is a new job role and there were none on the market,” stated Earley. “Instead, they built their own by assembling a team consisting of data scientists and programmers, as well as linguists, information architects, user experience experts, and subject matter experts from the business.”
The skill mix needed will vary based on the type of AI being pursued. “Predictive analytics would not likely require a linguist, for example,” he noted.
Successful AI projects secure executive sponsorship, from those with credibility and impact in the organization, by demonstrating positive business impact and including risk mitigation. “The more thorough the plan, the greater the likelihood of getting a strong sponsor who will risk their political capital for such a project,” stated Earley. “I have seen sponsors turn down funded projects because they did not want to take on the risk of failure even though many stakeholders wanted to move forward.”
An AI project may have the most impact when attached to an ongoing initiative rather than launched as a standalone effort. Strong candidate projects will be easy to use and available to a wide range of users.