Artificial neural networks can be trained and deployed only on hardware capable of performing data-intensive computations. For the last few decades, research teams around the globe have therefore invested time and resources in building such devices using a variety of approaches and designs.
The problem may finally be solved: researchers have unveiled a novel, energy-efficient memcapacitive device capable of implementing machine learning algorithms. The device is essentially a capacitor with memory, and because it is far more efficient than existing devices and can be easily scaled up, it could advance the neuromorphic computing market.
The team observed that, apart from conventional digital techniques, most hardware approaches to neural networks are memristive, with only a limited number of memcapacitive proposals. They also noted that all commercially available AI chips use digital or mixed-signal designs, and that only a few chips employ resistive memory devices. The researchers therefore set out to explore an alternative approach based on capacitive memory devices, drawing inspiration from the neurotransmitters and synapses of the brain.
The team’s goal was to develop memcapacitive devices that are both efficient and easy to scale up. Although a few memcapacitive devices already exist on the market, their poor dynamic range and the difficulty of scaling them up have prevented wide adoption.
Memcapacitive devices are inherently far more energy efficient than memristive devices because they are electric-field-based rather than current-based, which gives them a better signal-to-noise ratio. The team’s memcapacitor relies on charge screening, which gives it a remarkable ability to be mass-produced and also broadens its dynamic range compared with prior attempts at realizing memcapacitive devices.
The device works by manipulating the electric-field coupling between a bottom read-out electrode and a top gate electrode through an intermediate shielding layer. This layer is in turn adjusted by an analog memory that can store the different weight values of an artificial neural network. The idea is similar to the way neurotransmitters in the brain store and convey information.
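The role such devices would play in a neural network can be illustrated with a generic analog-crossbar sketch (an assumption for illustration, not the authors' actual circuit): each cell stores a weight as an effective capacitance, and applying input voltages induces a summed charge on each read-out line, which is exactly the vector-matrix product at the heart of a network layer.

```python
import numpy as np

# Illustrative sketch, assuming an idealized crossbar of memcapacitive
# cells (not the published device's circuit). Each cell's stored weight
# sets an effective capacitance C[i][j]; applying input voltages V[j]
# induces a total charge Q[i] = sum_j C[i][j] * V[j] on read-out line i.

def memcapacitive_mac(C, V):
    """Charge collected on each read-out electrode: Q = C @ V."""
    return C @ V

# Hypothetical 3x4 array of programmed capacitances (arbitrary units)
C = np.array([[1.0, 0.5, 0.0, 2.0],
              [0.0, 1.5, 1.0, 0.5],
              [2.0, 0.0, 0.5, 1.0]])
V = np.array([0.1, 0.2, 0.3, 0.4])  # input voltages

Q = memcapacitive_mac(C, V)  # analog multiply-accumulate in one step
```

In hardware, this summation happens physically in a single read operation, which is where the energy savings over digital multiply-accumulate units come from.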
The team revealed that their aim was for each device to carry a significant amount of AI functionality on-chip. They also envisage many further approaches based on the training or architecture of deep learning models, such as spiking and transformer-based neural networks. However, a vast amount of research is still required before these goals are accomplished.