By AI Trends Staff
AI is receiving a push from the race to find a vaccine, diagnostics, and effective treatments for COVID-19, and that push has also heightened awareness of the need to implement AI that is transparent and free of bias, AI that can be trusted.
The World Economic Forum is one organization that has responded. With ethics in mind, the organization’s AI and Machine Learning team recently announced its Procurement in a Box toolkit with concrete advice for purchasing, risk assessments, proposal drafting and evaluation.
To produce the toolkit, the Forum worked over the past year with many organizations, including the United Kingdom’s Office for AI in the Department for Digital, Culture, Media & Sport, along with Deloitte, Salesforce and Splunk, as well as 15 other countries and more than 150 members of government, academia, civil society and the private sector. The development process incorporated workshops and interviews with government procurement officials and private sector procurement professionals, according to a recent account in Modern Diplomacy.
The UK has used the guidelines in procurement processes with its Food Standards Agency. User testing was performed in workshops with the UK Department for Transport and the Defence Science and Technology Laboratory. The testing helped the government define step-by-step guidance documents now in use across departments. The guidelines were also picked up by NHSX, the UK government unit that helps develop best practices for the National Health Service, as guidance for AI purchases.
“The current pandemic has shown us more needs to be done to speed up the adoption of trusted AI around the world,” stated Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum. “We moved from guidelines to practical tools, tested and iterated them – but this is still just a start. Now we will be working to scale them to countries around the world.”
Founded in 1971 and based in Geneva, Switzerland, the World Economic Forum is a non-governmental organization whose mission is to “improve the state of the world by engaging business, political, academic, and other leaders of society to shape global, regional, and industry agendas.”
The guidelines were also tested in the United Arab Emirates for a project to develop a chatbot application to help the Dubai Water and Electricity Authority. “As the UAE’s shift towards a knowledge-based economy gathers pace, the country has become a reliable testbed and leader in the development and execution of guidelines and frameworks that enable the large-scale deployment of emerging technologies such as Artificial Intelligence,” stated Khalfan Belhoul, CEO of the Dubai Future Foundation, the host entity of Centre for the Fourth Industrial Revolution UAE.
Government of Finland Furthers Commitment to AuroraAI Program
The government of Finland is also joining the drive for fair AI with its recent announcement that its ongoing AuroraAI program will be made fully available by the end of 2022. The program lays the foundation for using AI to bring services and people together in a better way, according to a recent account in Interesting Engineering.
According to the Finnish Ministry of Finance, the main goal of the AuroraAI National AI program is to implement an operations model based on the needs of citizens, where the AI helps citizens and companies make use of services in a timely and ethically sustainable manner.
“The core idea of the AuroraAI program involves proactively offering services to people according to their own life-events. This is something new and unique,” stated Päivi Nerg, Permanent Under-Secretary for Governance Policy. “AuroraAI helps service users and service providers find each other. It also produces savings for the entire national economy by improving the cost-efficiency of services.”
The AuroraAI project brings these four perspectives to its effort:
- The effects of Artificial Intelligence on general economic and employment trends
- The transformation of work and the labour market
- Reforms on education and skills maintenance
- Ethics in AI
Pursuit of Ethical AI Can Be Competitive, Babson Shows
To further the pursuit of ethical AI, graduate students at Babson College in Wellesley, Mass., recently engaged in a competition to see who could produce the most ethical AI.
“I think the public is becoming more aware of the effect of algorithms and AI,” stated Ruben Mancha, assistant professor of information systems, who taught the course on Digital Transformation. “Digital transformation should be responsive to not only customer needs, but also to the consequences it has for society,” Mancha stated in an account from Babson Thought & Action.
Mancha compared current thinking about responsible ways to deploy and use AI to the sustainability movement, which 20 years ago prompted more thinking about the environmental impact of business and which today is mainstream.
The ethics competition asked student teams to propose strategies for how an AI software platform could be responsibly employed in various settings. Teams considered the use of AI in the food industry, government, and small and medium-sized enterprises.
“It went very well,” stated Mancha.
Dan Edlebeck, a candidate for an MBA in the Class of 2020, was on a team that looked at how AI may disrupt the growing, distributing, and selling of food. “I am full of excitement and am optimistic we can use AI for good. However, any technology is nothing more than a tool. It’s a double-edged sword that has the ability to enslave or empower humanity,” he stated.