AI and Efficiency

openai.com

We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.

Emphasis added. This point is frequently under-appreciated: the past couple of years have seen fewer dramatic new benchmark results in image processing, but algorithms have kept improving along a different dimension, efficiency. This trend is important: the same level of performance keeps getting cheaper to reach.
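
The headline numbers are easy to sanity-check. Here is a minimal back-of-the-envelope sketch: the 16-month halving time comes from the OpenAI post, while the roughly 7-year window, the 24-month Moore's Law doubling time, and the helper function are my own assumptions. Applying the fitted rate over seven years lands in the same ballpark as the reported 44x (the 44x figure presumably comes from a specific model comparison rather than the fitted trend), while Moore's Law alone gives about 11x.

```python
def reduction_factor(months: float, halving_months: float) -> float:
    """Multiplicative drop in required training compute over `months`,
    assuming the compute needed halves every `halving_months`."""
    return 2 ** (months / halving_months)

PERIOD_MONTHS = 7 * 12  # assumed window: roughly AlexNet (2012) to 2019

algorithmic = reduction_factor(PERIOD_MONTHS, 16)  # fitted 16-month halving -> ~38x
moores_law = reduction_factor(PERIOD_MONTHS, 24)   # doubling every 2 years -> ~11x

print(f"Algorithmic efficiency: ~{algorithmic:.0f}x less compute")
print(f"Moore's Law alone:      ~{moores_law:.0f}x")
```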
