Artificial Intelligence – specifically machine learning and deep learning – was everywhere in 2018, and don’t expect the hype to die down over the next 12 months.
The hype will die eventually of course, and AI will become another consistent thread in the tapestry of our lives, just like the internet, electricity, and combustion did in days of yore.
But for at least the next year, and probably longer, expect astonishing breakthroughs as well as continued excitement and hyperbole from commentators.
This is because expectations of the changes to business and society which AI promises (or in some cases threatens) to bring about go beyond anything dreamed up during previous technological revolutions.
AI points towards a future where machines not only do all of the physical work, as they have done since the industrial revolution, but also the “thinking” work – planning, strategizing and making decisions.
The jury’s still out on whether this will lead to a glorious utopia, with humans free to spend their lives on more meaningful pursuits rather than those to which economic necessity dictates they dedicate their time, or to widespread unemployment and social unrest.
We probably won’t arrive at either of those outcomes in 2019, but it’s a topic which will continue to be hotly debated. In the meantime, here are five things that we can expect to happen:
- AI Increasingly Becomes a Matter of International Politics
2018 has seen major world powers increasingly putting up fences to protect their national interests when it comes to trade and defense. Nowhere has this been more apparent than in the relationship between the world’s two AI superpowers, the US and China.
In the face of tariffs and export restrictions on goods and services used to create AI imposed by the US Government, China has stepped up its efforts to become self-reliant when it comes to research and development.
Chinese tech manufacturer Huawei announced plans to develop its own AI processing chips, reducing the need for the country’s booming AI industry to rely on US manufacturers like Intel and Nvidia.
At the same time, Google has faced public criticism for its apparent willingness to do business with Chinese tech companies – many with links to the Chinese government – while withdrawing, after pressure from its employees, from arrangements to work with US government agencies over concerns that its technology could be militarised.
With nationalist politics enjoying a resurgence, there are two apparent dangers here.
Firstly, that artificial intelligence technology could be increasingly adopted by authoritarian regimes to restrict freedoms, such as the rights to privacy or free speech.
Secondly, that these tensions could compromise the spirit of cooperation between academic and industrial organizations across the world. This framework of open collaboration has been instrumental in the rapid development and deployment of AI technology we see taking place today, and putting up borders around a nation’s AI development is likely to slow that progress. In particular, it is expected to slow the development of common standards around AI and data, which could greatly increase the usefulness of AI.
- A Move Towards “Transparent AI”
The adoption of AI across wider society – particularly when it involves dealing with human data – is hindered by the “black box problem”: to most people, AI’s workings seem arcane and unfathomable without a thorough understanding of what it’s actually doing.
To achieve its full potential AI needs to be trusted – we need to know what it is doing with our data, why, and how it makes its decisions when it comes to issues that affect our lives. This is often difficult to convey – particularly as what makes AI particularly useful is its ability to draw connections and make inferences which may not be obvious or may even seem counter-intuitive to us.
But building trust in AI systems isn’t just about reassuring the public. Research and business will also benefit from openness which exposes bias in data or algorithms. Reports have even found that companies are sometimes holding back from deploying AI due to fears they may face liabilities in the future if current technology is later judged to be unfair or unethical.
In 2019 we’re likely to see an increased emphasis on measures designed to increase the transparency of AI. This year IBM unveiled its AI OpenScale technology, developed to improve the traceability of AI decisions. It gives real-time insights not only into what decisions are being made, but how they are being made, drawing connections between the data used, decision weighting and the potential for bias in information.
The General Data Protection Regulation, which came into force across Europe this year, gives citizens some protection against decisions with “legal or other significant” impact on their lives that are made solely by machines. While it isn’t yet a blisteringly hot political potato, its prominence in public discourse is likely to grow during 2019, further encouraging businesses to work towards transparency.
Read the source article in Forbes.