‘The finding has the potential to unlock new applications—some of which we can’t yet fathom—that could improve our day-to-day lives.’
New research suggests some AI capabilities can be achieved with much simpler processing. That would mean sophisticated models could run on smaller, less powerful devices, reports MIT Technology Review.

The researchers focused on how neural networks behave. Neural networks are structures of interconnected nodes, called neurons for their loose resemblance to the neurons of the brain.
How neural networks do what they do, for example how they ‘recognize’ faces, objects, or sounds, has never been well understood. Nonetheless, the proof was in the outcome: the research found that the same results could be achieved using much smaller networks.
If the finding holds up, sophisticated AI could be built at lower cost and with less computing power. Put another way, AI models now available only to large tech companies could become accessible to individual scientists.
MIT Technology Review says the reductions could be dramatic, by a factor of 10 or more.
The researchers made the discovery through ‘pruning’: by trial and error, they found they could reach the same accuracy using only small portions of a network, akin to working within neighbourhoods of the bigger network.
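The academic paper behind the article (the Lottery Ticket Hypothesis, linked under SEE ALSO) prunes by weight magnitude: train the full network, drop the connections whose weights end up smallest, rewind the surviving weights to their original starting values, and retrain the resulting sub-network. The Python sketch below illustrates only that masking-and-rewind step on a toy weight matrix; the function name, the 20% pruning rate, and the made-up weights are illustrative assumptions, not details taken from the paper or the article.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Return a binary mask that zeroes out the smallest-magnitude weights.

    `fraction` is the share of weights to remove, e.g. 0.2 removes roughly
    the smallest 20% by absolute value.
    """
    threshold = np.quantile(np.abs(weights).ravel(), fraction)
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Toy example: a single 4x4 weight matrix standing in for one layer.
rng = np.random.default_rng(0)
initial_weights = rng.normal(size=(4, 4))

# 1. "Train" the full network (here we just pretend training nudged the weights).
trained_weights = initial_weights + 0.1 * rng.normal(size=(4, 4))

# 2. Prune the ~20% of weights with the smallest magnitudes after training.
mask = magnitude_prune(trained_weights, fraction=0.2)

# 3. Rewind: the surviving connections go back to their *original* initial values,
#    giving the sparse sub-network (the "winning ticket") that is then retrained.
winning_ticket = initial_weights * mask

print(f"Kept {int(mask.sum())} of {mask.size} weights")
print(winning_ticket)
```

In the full procedure this prune-rewind-retrain cycle is repeated several times, which is how the sub-network shrinks to a small fraction of its original size.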
The research was done by two scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
OUR TAKE
- The findings captured widespread notice in the AI research community. The paper was one of two named ‘best paper’ out of 1,600 submissions to ICLR, a prestigious academic conference on machine learning.
- Reducing the amount of training data required is a research priority for large funders such as DARPA.
- Part of the promise is moving AI to ‘the edge,’ putting more AI capability on devices themselves and reducing the need to connect to centralized servers.
A new way to build tiny neural networks could create powerful AI on your phone | MIT Technology Review | May 10, 2019 | by Karen Hao
SEE ALSO
- Smarter training of neural networks – MIT News
- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | Frankle & Carbin – the academic paper detailing their findings