One major issue facing artificial intelligence is the interplay between a computer's memory and its processing capabilities. When an algorithm is running, data flows rapidly between these two components. However, AI models depend on a vast amount of data, which creates a bottleneck.
A new study, published on Monday in the journal Frontiers in Science by Purdue University and the Georgia Institute of Technology, proposes a novel approach to building computer architecture for AI models using brain-inspired algorithms. The researchers say that designing algorithms this way could reduce the energy costs associated with AI models.
"Language processing models have grown 5,000-fold in size over the past four years," Kaushik Roy, a Purdue University computer engineering professor and the study's lead author, said in a statement. "This alarmingly fast growth makes it imperative that AI is as efficient as possible. That means fundamentally rethinking how computers are designed."
Most computers today are modeled on an idea from 1945 known as the von Neumann architecture, which separates processing and memory. That's where the slowdown occurs. As more people around the world use data-hungry AI models, the gap between a computer's processing and memory capacity could become a more significant problem.
Researchers at IBM called out this problem in a post earlier this year. The challenge computer engineers are running up against is known as the "memory wall."
Breaking the memory wall
The memory wall refers to the disparity between memory and processing capabilities. Essentially, computer memory is struggling to keep up with processing speeds. This isn't a new issue. A pair of researchers from the University of Virginia coined the term back in the 1990s.
But now that AI is prevalent, the memory wall issue is sucking up time and energy in the underlying computers that make AI models work. The paper's researchers argue that we could try a new computer architecture that integrates memory and processing.
Inspired by how our brains function, the AI algorithms referred to in the paper are known as spiking neural networks. A common criticism of these algorithms in the past is that they can be slow and inaccurate. However, some computer scientists argue that these algorithms have shown significant improvement over the past few years.
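To give a rough sense of what "brain-inspired" means here, the basic unit of a spiking neural network is often modeled as a leaky integrate-and-fire neuron: it accumulates input over time, slowly "leaks" charge, and emits a discrete spike only when a threshold is crossed. The sketch below is a minimal illustration of that idea; the parameters and function name are illustrative, not taken from the study.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a
# spiking neural network. Parameters are illustrative, not from the study.

def simulate_lif(input_currents, threshold=1.0, leak=0.9):
    """Return a list of 0/1 spikes for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:              # fire once threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A steady weak input accumulates until the neuron periodically spikes.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Because the neuron is silent most of the time and only communicates in sparse binary spikes, this style of computation can, in principle, move far less data around than conventional dense neural networks, which is part of its appeal for energy-efficient hardware.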
The researchers suggest that AI models should make use of a concept related to SNNs known as compute-in-memory. This concept is still relatively new in the field of AI.
"CIM offers a promising solution to the memory wall problem by integrating computing capabilities directly into the memory system," the authors write in the paper's abstract.
Medical devices, transportation, and drones are just a few areas where researchers believe improvements could be made if computer processing and memory were integrated into a single system.
"AI is one of the most transformative technologies of the 21st century. However, to move it out of data centers and into the real world, we need to dramatically reduce its energy use," Tanvi Sharma, co-author and researcher at Purdue University, said in a statement.
"With less data transfer and more efficient processing, AI can fit into small, affordable devices with batteries that last longer," Sharma said.

