Every modern processor contains a set of on-chip memory banks that act as a cache for data, speeding up everyday computing tasks. Even with this relatively fast cache at the ready, however, its fixed layout can hamper performance for some workloads, a bit like forcing a square peg through a round hole. Manufacturers have tried to get around this by enlarging these memory banks, but they all eventually run into the same problems that drag down efficiency.
Daniel Sanchez, an assistant professor in MIT's Department of Electrical Engineering and Computer Science, explained the concept as follows:
“What you would like is to take these distributed physical memory resources and build application-specific hierarchies that maximize the performance for your particular application.”
MIT"s Computer Science and Artificial Intelligence Laboratory have simulated a new process that replaces this fixed cache, with a more dynamic memory bank that changes to accommodate different application needs, and in turn, reduces so-called lag in the process. The method called "Jenga" was shown to improve the overall processor speed by up to 30% and used up to 85% less power. If it"s integrated into modern processors, this could be a boon for modern smart devices.
Sanchez continued:
“And that depends on many things in the application. What’s the size of the data it accesses? Does it have hierarchical reuse, so that it would benefit from a hierarchy of progressively larger memories? Or is it scanning through a data structure, so we’d be better off having a single but very large level? How often does it access data? How much would its performance suffer if we just let data drop to main memory? There are all these different tradeoffs.”
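To make that tradeoff concrete, here is a minimal, purely illustrative sketch in Python of the kind of decision such a system has to make: pick between one large cache level and a small-plus-large hierarchy, based on how big the application's data is and whether a hot subset gets reused. The latencies, bank sizes, and the `AccessPattern` fields are hypothetical placeholders, not figures or code from the Jenga work.

```python
# Toy cost model for the tradeoff described above: one big cache level
# versus a hierarchy of progressively larger memories. All numbers are
# made-up assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class AccessPattern:
    working_set_kb: int       # total data the application touches
    hierarchical_reuse: bool  # is a small "hot" subset reused heavily?
    hot_set_kb: int = 0       # size of that hot subset, if any

# Assumed access latencies (in cycles) and capacities for a near bank,
# a far bank, and main memory.
NEAR_LATENCY, FAR_LATENCY, DRAM_LATENCY = 10, 40, 200
NEAR_BANK_KB, FAR_BANK_KB = 512, 4096

def flat_cost(p: AccessPattern) -> float:
    """One large, farther level: every access pays the far latency,
    falling through to DRAM only if the working set does not fit."""
    return FAR_LATENCY if p.working_set_kb <= FAR_BANK_KB else DRAM_LATENCY

def hierarchy_cost(p: AccessPattern, hot_fraction: float = 0.8) -> float:
    """Small near bank backed by the far bank: pays off only when a hot
    subset fits in the near bank and absorbs most of the accesses."""
    if p.hierarchical_reuse and p.hot_set_kb <= NEAR_BANK_KB:
        return hot_fraction * NEAR_LATENCY + (1 - hot_fraction) * FAR_LATENCY
    # No reusable hot set: the extra level just adds misses before the far bank.
    return NEAR_LATENCY + FAR_LATENCY

def pick_layout(p: AccessPattern) -> str:
    return "hierarchy" if hierarchy_cost(p) < flat_cost(p) else "single level"

# A scan over a large structure favours one big level; heavy reuse of a
# small hot set favours a hierarchy of progressively larger memories.
print(pick_layout(AccessPattern(working_set_kb=3000, hierarchical_reuse=False)))
print(pick_layout(AccessPattern(working_set_kb=3000, hierarchical_reuse=True,
                                hot_set_kb=256)))
```

In this sketch the scanning workload comes out better with a single large level, while the workload with strong reuse prefers the hierarchy, which is exactly the kind of per-application choice Jenga is meant to make from the available physical memory.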
However, this was only a simulation; MIT has not built a working prototype yet. On top of that, the researchers simulated a rather large arrangement of 36 cores, and it remains to be seen whether the approach will translate to smaller mobile processors, which typically have two to eight cores, or deliver a similar performance boost there.
But as a proof of concept, it might interest companies like AMD and Intel, which face limits on how many transistors they can squeeze into a processor chip. Quantum computing, the next step in processor technology, is still in its infancy and will require far more research to become a viable alternative, so manufacturers will need all the help they can get to wring every drop of performance out of existing technologies.