I have voraciously read endless pro and con scenarios about artificial intelligence since first writing about it years ago. At this point, there is no doubt that concerns about the dangers of runaway AI raised by Elon Musk, Stephen Hawking, Bill Gates, Bill Joy and others are genuine.
There is also no doubt whatsoever that the new organizations aimed at mitigating the dangers — OpenAI, the Future of Life Institute, the Machine Intelligence Research Institute and others — are extremely important developments.
Clearly, no sane person or organization wants to see, let alone encounter, runaway AI. However, a fundamental problem is that no one knows where the actual crossover point — the edge or tipping point — lies, and thus we mortals are unlikely to be able to prevent it from occurring. Said differently, there is a very high probability that we will misjudge where that crossover point is and will therefore go beyond the key threshold. Overshooting is the norm in biology and in most, if not all, evolving systems — especially man-made ones.