Computers have advanced so far in terms of their power and potential that they rival or even surpass the human brain in their ability to store data, analyze, predict, and communicate. But there is one area where the human brain continues to dominate: energy efficiency.
“The most energy-efficient computers still require about four orders of magnitude – that is, 10,000 times – more energy than the human brain for certain tasks, such as image processing and recognition, although they outperform the brain in tasks such as mathematical calculation,” said Kaustav Banerjee, professor of electrical and computer engineering at UC Santa Barbara and a world expert in nanoelectronics. “Making computers more energy efficient is critical, especially as the worldwide energy consumption of on-chip electronics – which would rank fourth if compared with the energy consumption of entire countries – continues to grow exponentially each year, driven by applications such as artificial intelligence.” He added that the problem of energy-inefficient computing is particularly urgent in the context of global warming, “highlighting the urgent need to develop more energy-efficient computing technologies.”
Neuromorphic (NM) computing has emerged as a promising way to bridge this energy-efficiency gap. By mimicking the structure and operation of the human brain, where processing occurs in parallel across arrays of low-power neurons, it may be possible to approach brain-like energy efficiency. In a paper published in the journal Nature Communications, Banerjee and colleagues Arnab Pal, Zichun Chai, Junkai Jiang, and Wei Cao, in collaboration with researchers Vivek De and Mike Davies of Intel Labs, propose an ultra-energy-efficient platform based on two-dimensional (2D) transition metal dichalcogenide (TMD) tunnel field-effect transistors (TFETs). The researchers say their platform could bring energy requirements to within two orders of magnitude (about 100 times) of those of the human brain.
Leakage current and subthreshold swing
The concept of neuromorphic computing has been around for decades, but research into it has intensified only relatively recently. Advances in circuit design that enable smaller, denser transistor arrays – delivering more processing and functionality for less power – only scratch the surface of what can be done to enable brain-inspired computing. Add to this the appetite generated by potential applications such as AI and the Internet of Things, and it becomes clear that expanding the options for neuromorphic computing hardware platforms is essential to moving the field forward.
Enter the team’s 2D tunnel transistors. A product of Banerjee’s long-standing pursuit of high-performance, low-power transistors that satisfy the demand for greater processing power without greater power requirements, these atomically thin nanoscale transistors are highly responsive at low voltages and, as the foundation of the researchers’ NM platform, can mimic the energy-efficient operation of the human brain. In addition to their low off-state current, 2D TFETs also have a low subthreshold swing (SS), a parameter that describes how efficiently a transistor switches from off to on. According to Banerjee, the lower the SS, the lower the operating voltage and the faster and more efficient the switching.
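The subthreshold swing mentioned above can be made concrete with a short calculation. The sketch below is purely illustrative – the device currents are invented numbers, not measurements from the paper – but it shows how SS (in millivolts per decade of drain current) is computed from a gate-voltage sweep, and why a steep-slope device switches with less voltage:

```python
import math

def subthreshold_swing_mv_per_dec(v_gs, i_d):
    """Average subthreshold swing: the gate-voltage change (in mV)
    needed to change the drain current by one decade, over the sweep."""
    decades = math.log10(i_d[-1] / i_d[0])
    delta_v_mv = (v_gs[-1] - v_gs[0]) * 1e3
    return delta_v_mv / decades

# Hypothetical currents over the same 0.0-0.3 V gate sweep:
# a MOSFET-like device (~75 mV/dec) vs a steep-slope TFET (~30 mV/dec).
v = [0.0, 0.3]
i_mosfet = [1e-12, 1e-8]   # 4 decades over 300 mV -> 75 mV/dec
i_tfet   = [1e-14, 1e-4]   # 10 decades over 300 mV -> 30 mV/dec

print(subthreshold_swing_mv_per_dec(v, i_mosfet))  # 75.0
print(subthreshold_swing_mv_per_dec(v, i_tfet))    # 30.0
```

A lower SS means the same ten-fold current change is achieved with a smaller gate-voltage step, which is why steep-slope devices can operate at lower supply voltages.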
“Neuromorphic computing architectures are designed to operate with very sparsely firing circuits, meaning they mimic the way neurons in the brain fire only when needed,” said lead author Arnab Pal. Unlike the more common von Neumann architecture of today’s computers, in which data is processed sequentially and the separate memory and processing components draw power continuously throughout an operation, event-driven systems such as NM computers consume power only when there is an input to process, with memory and processing distributed across an array of transistors. Companies such as Intel and IBM have already developed brain-inspired platforms that deploy billions of interconnected transistors and achieve significant energy savings.
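The event-driven operation Pal describes is often modeled with a leaky integrate-and-fire neuron. The following minimal Python sketch is our simplification, not the authors’ circuit model: the neuron’s potential leaks between sparse input events and the neuron fires only when enough input arrives close together:

```python
def simulate_lif(input_events, n_steps, leak=0.9, weight=0.6, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    decays each step, jumps on input events, and resets after a spike."""
    v = 0.0
    spikes = []
    for t in range(n_steps):
        v *= leak                 # passive leak toward rest
        if t in input_events:     # event-driven input arrives
            v += weight
        if v >= threshold:        # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Sparse input: only 4 events in 50 steps; the neuron fires only when
# closely spaced events push its potential over the threshold.
print(simulate_lif({5, 6, 20, 21}, 50))  # [6, 21]
```

Between events the neuron does essentially nothing, which is the behavior that lets event-driven hardware idle instead of drawing power continuously.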
However, there is still room for improvement in energy efficiency, according to the researchers.
“In these systems, most of the energy is lost through leakage currents when the transistors are off, rather than when they are active,” Banerjee explained. A common phenomenon in electronics, leakage current is the small amount of electricity that flows through a circuit even when it is switched off (but still connected to a power source). According to the paper, current NM chips use conventional metal-oxide-semiconductor field-effect transistors (MOSFETs), which have high on-state current but also high off-state leakage. “The power efficiency of these chips is limited by off-state leakage, so our approach of using tunneling transistors with much lower off-state currents can significantly improve power efficiency,” Banerjee said.
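To see why off-state leakage dominates, note that static power is simply the off-state current times the supply voltage, summed over every idle transistor. The numbers below are hypothetical – not taken from the paper – and are chosen only to illustrate the scale of the savings when off-current drops by a few orders of magnitude:

```python
def static_power_watts(i_off_amps, v_dd_volts, n_transistors):
    """Static (leakage) power: off-state current times supply voltage,
    summed over all idle transistors."""
    return i_off_amps * v_dd_volts * n_transistors

# Hypothetical per-device off-currents: a conventional MOSFET at
# ~100 pA/device vs a tunnel FET at ~0.1 pA/device, at 0.7 V supply.
n = 1_000_000_000                               # a billion-transistor chip
p_mosfet = static_power_watts(100e-12, 0.7, n)  # 0.07 W of pure leakage
p_tfet   = static_power_watts(0.1e-12, 0.7, n)  # 7e-5 W
print(p_mosfet, p_tfet, p_mosfet / p_tfet)      # ratio: 1000x
```

Because this power is burned even when nothing is computing, cutting the off-current directly cuts the chip's idle energy floor.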
When integrated into neuromorphic circuits that emulate the firing and resetting of neurons, TFETs proved more energy efficient than state-of-the-art MOSFETs, including FinFETs (MOSFET designs that incorporate vertical “fins” for better control over switching and leakage). Although TFETs are still at the experimental stage, the performance and energy efficiency of neuromorphic circuits based on them make them promising candidates for the next generation of brain-inspired computing.
“Once realized, this platform could bring the energy consumption of chips – including the interface circuitry and memory storage elements – to within two orders of magnitude of that of the human brain,” said co-authors Vivek De, an Intel Fellow, and Mike Davies, director of the Intel Neuromorphic Computing Lab. “This represents a significant improvement over what is achievable today.”
Banerjee, widely recognized as one of the leading pioneers of 3D integrated circuits, which are now in widespread commercial use, added that 3D versions of these 2D-TFET-based neuromorphic circuits could eventually be realized to mimic the human brain even more closely.