04/20/2023
By Martin Margala

The Francis College of Engineering, Department of Electrical and Computer Engineering, invites you to attend a doctoral dissertation proposal defense by Uchechukwu Leo Udeji on “Spike Motion: A Spiking Neural Network Framework for Resource Aware Implementation of Spiking Transformer Network on FPGAs.”

Candidate Name: Uchechukwu Leo Udeji
Defense Date: Tuesday, May 9, 2023
Time: 10 a.m. to 12:30 p.m. EDT
Location: The defense will be held in person. Those interested in attending should contact the Ph.D. advisor (yan_luo@uml.edu) at least 24 hours prior to the defense to request access to the meeting.

Committee Members

  • Committee Chair, Advisor, Martin Margala, Ph.D., Professor, Electrical and Computer Engineering, University of Massachusetts Lowell
  • Jay Weitzen, Ph.D., Professor, Electrical and Computer Engineering, University of Massachusetts Lowell
  • Dimitra Papagiannopoulou, Ph.D., Assistant Professor, Electrical and Computer Engineering, University of Massachusetts Lowell
  • Michael Totaro, Ph.D., School of Computing and Informatics, University of Louisiana at Lafayette

Brief Abstract: Implementing an energy- and time-efficient spiking neural network (SNN) system requires effective software-hardware co-design. SNNs are event-based neural networks that encode information as spike trains, which makes them well-suited for neuromorphic hardware. Research on SNNs over the years has produced a variety of training techniques and neuron models; this study applies time-dependent backpropagation together with Izhikevich neurons. Field Programmable Gate Arrays (FPGAs) are reconfigurable hardware used for the verification and deployment of time-critical systems, and their reconfigurability makes them well-suited for both the training of neural networks and inference.
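For readers unfamiliar with the neuron model named in the abstract, the following is a minimal sketch of a single Izhikevich neuron emitting a spike train, using the standard regular-spiking parameters (a=0.02, b=0.2, c=-65, d=8) from Izhikevich's published model. It is an illustration only, not the candidate's implementation; the input current and integration step are arbitrary choices.

```python
import numpy as np

def izhikevich_spikes(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """Simulate one Izhikevich neuron; return a binary spike train.

    I: 1-D array of input current per time step (toy units).
    Defaults are the standard regular-spiking parameters.
    """
    v, u = c, b * c                    # membrane potential and recovery variable
    spikes = np.zeros(len(I), dtype=np.uint8)
    for t, i_t in enumerate(I):
        # Euler step of dv/dt = 0.04 v^2 + 5 v + 140 - u + I and du/dt = a(bv - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # threshold crossed: record spike, then reset
            spikes[t] = 1
            v, u = c, u + d
    return spikes

# Constant drive produces the regular spike train typical of this parameter set.
train = izhikevich_spikes(np.full(1000, 10.0))
print("spike count:", int(train.sum()))
```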

Conventional neural networks such as convolutional neural networks (CNNs) are not well suited to extracting meaning from sequential data. This limitation is addressed by recurrent neural networks (RNNs) and, most recently, by transformer architectures. Transformers are neural networks that incorporate a self-attention mechanism, which relates different positions of a single sequence to one another to compute a representation of the sequence, producing an attention map. This enables applications in natural language processing and video processing. Although transformers allow for significantly more parallelization, they consume considerable power on CPUs and GPUs.
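As a rough illustration of the self-attention mechanism described above (not the candidate's design), the sketch below computes single-head scaled dot-product attention for a toy sequence; the sequence length, embedding size, and random weights are placeholder assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: (d_model, d_k) projections.
    Each position attends to every position: softmax(Q K^T / sqrt(d_k)) is the
    attention map, which re-weights the values V into a contextual representation.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (seq_len, seq_len) attention map
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)      # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))                   # toy sequence: 8 tokens, d_model=16
Wq, Wk, Wv = (rng.standard_normal((16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)         # (8, 16)
```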

The novel contribution of this research is a proposed design methodology targeting an energy-efficient spiking transformer network for in-memory or near-memory processing on FPGAs.