An MIT algorithm has produced sounds realistic enough to fool human listeners, passing a sound-based version of the Turing test for artificial intelligence.
Researchers at the Massachusetts Institute of Technology are using Alan Turing's test, developed in the 1950s, as a benchmark for whether machines can exhibit behavior that is 'indistinguishable' from that of humans.
The academic institution has already used Turing's work to develop a system that passes a 'visual' Turing test by producing handwritten characters that fool human judges.
Now, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has created a deep learning algorithm that passes a Turing test for sound.
MIT’s team spent several months recording approximately 1,000 videos containing at least 46,000 sounds produced by hitting, prodding and scraping various objects with a drumstick.
These videos were then fed into the algorithm, which deconstructed and analyzed the sounds' pitch, volume and other characteristics. The algorithm can then 'predict' the soundtrack of a silent video: for each frame it estimates the properties of the sound that should occur, then stitches together bits of audio from its database that match those properties.
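The matching-and-stitching step described above can be sketched as a nearest-neighbor retrieval over sound features. The sketch below is illustrative only, not MIT's actual implementation: the feature vectors, database contents and snippet lengths are all hypothetical placeholders standing in for properties like pitch and volume extracted per frame.

```python
import numpy as np

# Hypothetical database: each entry pairs a sound-feature vector
# (standing in for pitch, volume and other characteristics) with a
# short audio snippet. Real systems would extract these from the
# recorded drumstick videos; here we use random placeholders.
rng = np.random.default_rng(0)
db_features = rng.normal(size=(500, 8))                    # 500 stored feature vectors
db_snippets = [rng.normal(size=2205) for _ in range(500)]  # ~50 ms each at 44.1 kHz

def predict_audio(frame_features):
    """For each frame's predicted sound features, retrieve the
    closest-matching snippet and stitch the snippets together."""
    out = []
    for f in frame_features:
        # Nearest neighbor by Euclidean distance in feature space
        idx = np.argmin(np.linalg.norm(db_features - f, axis=1))
        out.append(db_snippets[idx])
    return np.concatenate(out)

# Predicted per-frame features for a 10-frame silent clip (placeholders)
clip_features = rng.normal(size=(10, 8))
audio = predict_audio(clip_features)
print(audio.shape)  # one stitched waveform: 10 snippets of 2205 samples
```

In practice the per-frame features would come from the deep network rather than being random, but the retrieval idea, matching predicted sound properties against a database and concatenating the best matches, is the same.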