Sunday, March 26, 2017

'Black box' technique may lead to more powerful AI


The system starts with many randomly generated sets of parameters, scores each guess, and then biases follow-up guesses toward the more successful candidates, gradually converging on a strong answer. You may start with a million candidate settings, but you’ll end up with just one in the end.
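The guess-and-tweak loop described above resembles a simple evolution strategy. A minimal sketch (the toy objective, population size, and step sizes here are illustrative assumptions, not from the article):

```python
import numpy as np

def evolution_step(objective, params, pop_size=50, sigma=0.1, lr=0.01):
    """One guess-and-tweak step: sample random perturbations of the
    current parameters, score each candidate, and nudge the parameters
    toward the perturbations that scored best."""
    noise = np.random.randn(pop_size, params.size)            # random guesses
    rewards = np.array([objective(params + sigma * n) for n in noise])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # rank-ish weighting
    # Move along the reward-weighted average of the perturbations
    return params + lr / (pop_size * sigma) * noise.T @ rewards

# Hypothetical toy objective: maximize -(x - 3)^2, optimum at x = 3
objective = lambda p: -float((p[0] - 3.0) ** 2)

np.random.seed(0)
params = np.zeros(1)
for _ in range(2000):
    params = evolution_step(objective, params)
# params drifts toward the optimum near x = 3
```

No gradients are computed anywhere: the objective is treated purely as a black box that returns a score, which is what lets the same loop train a neural network whose internals you never differentiate.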


It sounds a bit mysterious, but the benefits are easy to understand. The technique eliminates much of the traditional overhead of training neural networks, making the code both easier to implement and roughly two to three times faster. And because the ‘workers’ in this scheme only need to share tiny bits of data with each other, the method scales elegantly as you throw more processor cores at a problem. In tests, a large supercomputer with 1,440 cores could train a humanoid to walk in 10 minutes versus 10 hours for a typical setup, and even a “lowly” 720-core system could do in 1 hour what a 32-core system would take a full day to accomplish.
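The "tiny bits of data" point comes from a neat trick: if every worker knows the random seeds, it can regenerate everyone else's perturbations locally, so the only thing that ever crosses the wire is one scalar reward per worker. A sketch of that idea (the workers are simulated in a loop here, and all names and numbers are illustrative assumptions):

```python
import numpy as np

def worker_reward(params, seed, sigma, objective):
    """A worker regrows its perturbation from a shared seed, evaluates it
    locally, and reports back only a single scalar reward."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(params.size)
    return float(objective(params + sigma * noise))

def update(params, seeds, rewards, sigma, lr):
    """Every worker can compute the identical update, because the seeds
    let it reconstruct the others' noise without ever receiving it."""
    rewards = np.asarray(rewards)
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    step = np.zeros_like(params)
    for seed, r in zip(seeds, rewards):
        rng = np.random.default_rng(seed)
        step += r * rng.standard_normal(params.size)
    return params + lr / (len(seeds) * sigma) * step

# Hypothetical toy objective: maximize -(sum of squared distances to 1.0)
objective = lambda p: -float(np.sum((p - 1.0) ** 2))

params = np.zeros(3)
for it in range(500):
    seeds = list(range(it * 100, it * 100 + 100))   # 100 simulated workers
    rewards = [worker_reward(params, s, 0.1, objective) for s in seeds]
    params = update(params, seeds, rewards, sigma=0.1, lr=0.02)
# params converges toward the optimum at (1, 1, 1)
```

Since each worker's contribution to the shared state is just a seed and a reward, communication cost barely grows with model size, which is why adding cores keeps paying off.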


There’s a long way to go before you see the black box approach used in real-world AI. However, the practical implications are clear: neural network operators could spend more time actually using their systems instead of training them. And as computers get ever faster, it becomes increasingly likely that this kind of learning can happen in real time. You could eventually see robots that adapt quickly to new tasks and learn from their mistakes.



Copyright © Tech Visions