Details: |
In an era of unprecedented connectivity, countless electronic devices form an intricate network linked to powerful cloud-based supercomputers via the internet. These advanced systems process immense volumes of data using sophisticated machine learning algorithms, delivering valuable insights and actionable feedback. However, despite these remarkable capabilities, there are still critical areas where these benefits are difficult to fully realise, particularly in edge devices that operate in close proximity to individuals and directly interact with their daily lives. Consider devices monitoring biological signals such as brainwaves, blood pressure, glucose levels, or heart rate. While such devices could issue vital alerts based on their observations, constantly uploading sensitive biometric data to the cloud raises significant privacy concerns. At the same time, edge devices face strict power limitations, making energy-intensive techniques like deep learning impractical. Moreover, many phenomena in human-centric environments, including biometric signals, exhibit characteristically long time scales. Processing such slow-evolving data using today’s fast digital systems leads to considerable energy inefficiencies.
To address these challenges, we propose the concept of “Slow Electronics.” This approach aims to enable ultra-low-power, efficient information processing of slow temporal data directly at the edge. Our research focuses on developing neuromorphic devices tailored to long time scales (10 ms to 100 s), designing neural network architectures (e.g., spiking reservoir circuits) to harness these devices, and exploring algorithms that autonomously extract determinism and periodicity from time-series data [1, 2, 3]. In this talk, I will share insights into our findings and discuss future directions in the realm of Slow Electronics.
[1] Inoue, H. et al. Taming prolonged ionic drift-diffusion dynamics for brain-inspired computation. Adv. Mater. 37, e2407326 (2025).
[2] Chen, X. et al. CMOS-based area-and-power-efficient neuron and synapse circuits for time-domain analog spiking neural networks. Appl. Phys. Lett. 122, 074102 (2023).
[3] Pati, S. P. et al. Real-time information processing via volatile resistance change in scalable protonic devices. Commun. Mater. 5, 1–9 (2024).
[4] Tanaka, G. et al. Recent advances in physical reservoir computing: A review. Neural Netw. 115, 100–123 (2019).
[5] Tamura, H. & Tanaka, G. Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing
models. Neural Netw. 143, 550–563 (2021). |