Priorities should change for OpenAI, which has admirable intentions

Artificial intelligence is one of the hottest topics in both business and science. Developers and industry analysts are all-in, building castles in the sky with tales of an impending AI “awakening.”

In preparation for this sea change, Elon Musk and Sam Altman founded OpenAI, a nonprofit with the dual mission of ensuring that AI stays safe and that its benefits are as widely and evenly distributed as possible.

While it’s important to develop AI and harness its powers responsibly, it’s a mistake for OpenAI to focus solely on one or two types of AI, like reinforcement learning. Reinforcement learning is among the least used types of AI, and it poses few immediate safety threats and offers little immediate value to people and businesses. Instead, OpenAI should be homing in on the more widely used forms of AI that already pose significant risks (supervised learning) and astounding benefits (machine intelligence).

OpenAI is right to assume that potential dangers loom in AI should it go completely unchecked. Nick Bostrom’s famous “paperclip maximizer” thought experiment is a good example. Where OpenAI is missing the mark is in which subfields it has chosen to invest its resources. OpenAI’s primary focus — reinforcement learning — is a class of machine learning algorithms used for tasks like chatbots, video games and robots. Interestingly, it doesn’t typically start with data or try to learn from an existing data set. Rather, it attempts to learn to control an agent, like a robot, based purely on a set of actions it can take and its current state.
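To make that distinction concrete, here is a minimal, illustrative sketch in the tabular Q-learning style: the agent never sees a labeled dataset, only its current state, the actions available to it, and a reward signal. The toy corridor environment, the hyperparameters, and all the names below are assumptions made for illustration, not anything drawn from OpenAI’s own work.

```python
import random

# Toy 1-D corridor: the agent earns a reward only when it reaches the
# rightmost cell. There is no dataset to learn from -- just states,
# actions, and a reward signal.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: estimated value of taking each action in each state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for episode in range(200):
    state = N_STATES // 2
    done = False
    while not done:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Print the learned policy: the preferred action in each state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

After a few hundred episodes the table settles on “always move right,” which is the point of the example: the behavior emerges from trial-and-error interaction rather than from an existing data set.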

Read the source article at TechCrunch.