2 startups use flash memory to reduce AI data traffic jams and save power


Syntiant of Irvine, California, and Austin, Texas-based Mythic are each trying to speed up neural network calculations by performing operations directly in flash memory rather than on CPUs. Moving data to a CPU is a major bottleneck and power draw, and it is arguably unnecessary during inference, since a trained model's weights and biases don't change (the advantage applies to running networks, not training them).
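The core idea can be sketched in a few lines: because the weights are fixed at inference time, they can be programmed once into the memory array, and each output is just a sum of input-times-weight products computed in place. The sketch below is a conceptual simulation only, not either company's actual design; the weight values and function name are hypothetical.

```python
# Conceptual sketch of in-memory multiply-accumulate: weights are
# "programmed" once (in real hardware, as flash cell conductances) and
# never move to a CPU. Inference applies inputs (as voltages) and reads
# out the summed products (as column currents).

# Hypothetical 2x3 weight matrix, written into the array once.
WEIGHTS = [
    [0.5, -1.0, 2.0],
    [1.5,  0.0, -0.5],
]

def in_memory_matvec(inputs):
    """Each output is the in-place sum of input * weight products."""
    return [
        sum(v * w for v, w in zip(inputs, row))
        for row in WEIGHTS
    ]

print(in_memory_matvec([1.0, 2.0, 3.0]))  # → [4.5, 0.0]
```

Only the small input and output vectors cross the memory boundary; the much larger weight matrix stays put, which is where the traffic and power savings come from.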

Anything that leads to more efficient, lower-power machine learning is always welcome.
