Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks

Title: Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks
Publication Type: Conference Paper
Year of Publication: 2024
Authors: Tahmid, T., M. Gates, P. Luszczek, and C. Schuman
Conference Name: 2024 International Conference on Neuromorphic Systems (ICONS)
Publisher: IEEE
Conference Location: Arlington, VA, USA
Abstract

By some metrics, spiking neural networks (SNNs) still fall short of other artificial neural networks on continuous control tasks. Even though SNNs are efficient by design and structure, they lack many of the optimizations known from deep reinforcement learning (DeepRL) algorithms. Hence, researchers have combined SNNs with DeepRL algorithms to tackle this challenge. However, the question remains as to how scalable these DeepRL-based SNNs are in practice. In this paper, we introduce SpikeRL, a scalable and efficient framework for DeepRL-based SNNs for continuous control. In the SpikeRL framework, we first incorporate population encoding from the Population-coded Spiking Actor Network (PopSAN) method for our SNN model. Then, we use the Message Passing Interface (MPI) through its widely used Python bindings, mpi4py, to achieve distributed training across models and environments. We further optimize model training by using mixed precision for parameter updates. Our findings demonstrate the scalability and efficiency potential of spiking reinforcement learning methods for continuous control environments.
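As an illustration of the population encoding the abstract refers to, the sketch below shows one common form of population coding (as used in PopSAN-style actor networks): each continuous state dimension drives a small population of neurons with Gaussian receptive fields whose centers tile the state range. The function name, population size, range, and width below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def population_encode(state, neurons_per_dim=10, v_min=-1.0, v_max=1.0, sigma=0.15):
    """Encode each continuous state dimension with a population of neurons
    whose Gaussian receptive-field centers evenly tile [v_min, v_max].
    Returns a flat vector of per-neuron stimulation strengths in (0, 1].
    NOTE: a hedged sketch of the general technique; parameter values are
    assumptions, not those used in SpikeRL/PopSAN."""
    state = np.atleast_1d(np.asarray(state, dtype=float))
    centers = np.linspace(v_min, v_max, neurons_per_dim)   # receptive-field means
    # Gaussian response of every neuron in a dimension's population
    diffs = state[:, None] - centers[None, :]
    activations = np.exp(-0.5 * (diffs / sigma) ** 2)
    return activations.reshape(-1)  # shape: (dims * neurons_per_dim,)

# Example: a 2-D observation mapped to 20 stimulation strengths
obs = np.array([0.3, -0.7])
stim = population_encode(obs)
```

In a spiking actor network, these stimulation strengths would then drive spike generation over a simulation window, so the downstream spiking layers see a distributed, smooth representation of the raw continuous state rather than a single analog value per dimension.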

URL: https://ieeexplore.ieee.org/document/10766542/
DOI: 10.1109/ICONS62911.2024.00057