03/13/2024
By Danielle Fretwell

The Francis College of Engineering, Department of Electrical and Computer Engineering, invites you to attend a doctoral dissertation defense by Uchechukwu Leo Udeji on "Spike Motion: A Spiking Neural Network Transformer Framework for Resource Aware FPGAs."

Candidate Name: Uchechukwu Leo Udeji
Degree: Doctoral
Defense Date: Tuesday, March 19, 2024
Time: 9 a.m.
Location: ECE Conference Room, Ball Hall

Committee:

  • Advisor: Martin Margala, professor of ECE, UML
  • Co-Advisor: Jay Weitzen, professor of ECE, UML
  • Dimitra Papagiannopoulou, assistant professor of ECE, UML
  • Michael Totaro, associate professor, School of Computing and Informatics, University of Louisiana at Lafayette

Brief Abstract:
Transformers are a class of neural networks that extend the capabilities of conventional models such as convolutional neural networks (CNNs) by incorporating the self-attention mechanism. Self-attention strengthens a transformer's ability to retain context, enabling it to make informed decisions based on prior data. While traditional neural networks focus mainly on learning weights, transformers also learn attention scores and attention maps, which align encoded inputs with decoder outputs. Selecting the most suitable compute resource for a given task is crucial because hardware options vary widely in compute capacity and power requirements. CPUs, for instance, are poorly suited to training transformers, particularly vision transformers, because of their sequential execution, limited number of compute cores, and long processing times. GPUs, by contrast, offer lower training latency but consume significantly more power. Field Programmable Gate Arrays (FPGAs) and neuromorphic devices offer additional options for transformer processing.
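To make the attention-score idea above concrete, here is a minimal sketch of scaled dot-product self-attention, the standard mechanism the abstract refers to. It is a generic illustration rather than code from the dissertation; the shapes, names, and toy inputs are assumptions chosen for clarity.

```python
# Minimal sketch of scaled dot-product self-attention (generic illustration,
# not the dissertation's implementation).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attended output and the attention map.

    Q, K, V: (seq_len, d_k) query, key, and value matrices.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax -> attention map
    return weights @ V, weights

# Toy usage: 4 tokens with 8-dimensional embeddings; self-attention sets Q = K = V.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out, attn = scaled_dot_product_attention(x, x, x)
print(attn.shape)  # (4, 4): how strongly each token attends to every other token
```

The attention map returned here is the "attention scores and maps" the abstract describes: a learned weighting over prior inputs that gives the model its context-retention ability.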

FPGAs, with their reconfigurable hardware, are well suited to both neural network training and inference. The objective of this research has been to propose a novel transformer framework capable of processing audio, text, image, and video data. The project has investigated the benefits of integrating energy-efficient spiking neurons into the transformer network, leveraging near-memory processing, and employing FPGA-based processing. Finally, the thesis compares the performance of transformer models on CPU, GPU, and FPGA, and recommends FPGA-based models for a range of tasks.
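For readers unfamiliar with the spiking neurons mentioned above, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, a common energy-efficient spiking unit. The parameter values and function name are illustrative assumptions, not details from the dissertation.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) spiking neuron
# (illustrative assumption, not the dissertation's model).
import numpy as np

def lif_neuron(input_current, v_thresh=1.0, decay=0.9, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    The membrane potential leaks each step, integrates the input, and the
    neuron emits a binary spike (1) when the potential crosses v_thresh,
    after which it resets.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = decay * v + i          # leak, then integrate the input
        if v >= v_thresh:
            spikes.append(1)       # fire a binary spike
            v = v_reset            # reset the membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

# Toy usage: a constant input current yields a regular spike train.
print(lif_neuron(np.full(10, 0.35)))
```

Because such neurons communicate through sparse binary spikes rather than dense floating-point activations, they map naturally onto low-power reconfigurable hardware, which is the motivation for combining them with FPGA-based transformer processing.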