11/17/2023
By Danielle Fretwell

The Francis College of Engineering, Department of Electrical and Computer Engineering, invites you to attend a Master's Thesis defense by Ryan McCann on "Analysis of Q-Learning Reward Functions for Adaptive Path and Core Assignment in SD-EONs."

Candidate Name: Ryan McCann
Degree: Master’s
Defense Date: Monday, Nov. 27, 2023
Time: 2:30 to 4 p.m.
Location: Ball Hall 302, or join via Zoom

Committee:

  • Advisor: Professor Vinod Vokkarane, EECE Department, University of Massachusetts Lowell
  • Professor SeungWoo Son, EECE Department, University of Massachusetts Lowell
  • Professor Ian Chen, CE Department, University of Massachusetts Lowell

Brief Abstract:
Reinforcement learning algorithms have become increasingly popular for their potential to enhance the management of software-defined elastic optical networks (SD-EONs). However, they are often applied without sufficient empirical evidence, particularly concerning the choice of hyperparameters and the design of reward functions. This thesis addresses that shortfall by conducting a comprehensive evaluation of various reward functions, termed reward function analysis, within the context of the Q-learning algorithm, with the goal of optimizing routing and core assignment in SD-EONs. Through a comparative assessment of reward functions, this research seeks a method that outperforms baseline algorithms, including Shortest Path First with First-Fit spectrum assignment (SPF-FF) and K-Shortest Path First with First-Fit spectrum assignment (KSP-FF).
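
For readers unfamiliar with the baselines, a minimal sketch of KSP-FF follows (SPF-FF is the special case k = 1). The topology, slot count, and helper names are assumptions for illustration; this is not the simulator used in the thesis.

```python
from itertools import islice
import networkx as nx

NUM_SLOTS = 64  # spectrum slots per link (assumed for this example)

def first_fit(g, path, width):
    """Return the first starting slot index free on every link of the path."""
    links = list(zip(path, path[1:]))
    for start in range(NUM_SLOTS - width + 1):
        if all(not any(g[u][v]["slots"][start:start + width]) for u, v in links):
            return start
    return None  # no contiguous free block on this path

def ksp_ff(g, src, dst, width, k=3):
    """K-shortest paths with first-fit: try candidate paths in length order."""
    for path in islice(nx.shortest_simple_paths(g, src, dst, weight="weight"), k):
        start = first_fit(g, path, width)
        if start is not None:
            for u, v in zip(path, path[1:]):  # mark the chosen slots occupied
                for s in range(start, start + width):
                    g[u][v]["slots"][s] = 1
            return path, start
    return None  # request is blocked

# Toy topology for demonstration.
g = nx.Graph()
g.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("A", "C", 3)])
for u, v in g.edges:
    g[u][v]["slots"] = [0] * NUM_SLOTS
print(ksp_ff(g, "A", "C", width=4, k=2))
```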

Using a dynamic SD-EON simulator, a state-action value table, known as a Q-table, was developed for the Q-learning algorithm. The reward functions were examined in progression, evolving from fixed values to dynamic ones attuned to congestion and fragmentation across candidate paths and core assignments. Whereas paths and cores were initially selected statically, they were later assessed in real time based on their congestion levels; this dynamic evaluation informed the Q-learning algorithm's decision-making by revealing how a particular path or core performs under specific congestion conditions.
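
The exact reward shapes and state encoding are the subject of the thesis itself; the sketch below shows only a generic tabular Q-learning update with one plausible congestion- and fragmentation-penalizing reward. All hyperparameter values and the reward coefficients are assumed for illustration.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # example values, not the thesis's tuned ones

# Q-table over (source, destination) states and (path_index, core_index) actions.
q_table = defaultdict(float)

def reward(blocked, congestion, frag):
    """One plausible dynamic reward: penalize blocking, congestion, and
    fragmentation on the chosen path/core (illustrative shape only)."""
    if blocked:
        return -1.0
    return 1.0 - 0.5 * congestion - 0.5 * frag

def choose_action(state, actions):
    """Epsilon-greedy selection over candidate (path, core) pairs."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def update(state, action, r, next_state, next_actions):
    """Standard tabular Q-learning update rule."""
    best_next = max((q_table[(next_state, a)] for a in next_actions), default=0.0)
    q_table[(state, action)] += ALPHA * (r + GAMMA * best_next - q_table[(state, action)])
```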

This thesis concludes that the strategic shaping of reward functions, alongside careful tuning of hyperparameters, is crucial for applying reinforcement learning to SD-EONs. This advancement not only promises a reduction in blocking probability (BP) but also lays the groundwork for more resilient and efficient EONs.

Future research should focus on exploring more complex algorithms, such as deep Q-learning, and assessing their performance under identical network conditions. Additionally, a more sophisticated state space that includes spectrum assignment directly within the Q-tables should be considered. Modifications to the adaptation of learning parameters over time, such as an exponential decay of epsilon rather than a linear one, could also lead to substantial improvements. Pursuing this direction has the potential to create more autonomous and adaptive optical networks, ensuring they are equipped to manage increasing traffic demands.
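
As one concrete reading of the epsilon-decay suggestion, the two schedules can be compared as below; the starting value, floor, decay rate, and episode count are assumed for illustration.

```python
# Linear vs. exponential epsilon decay; all constants are assumed.
EPS_START, EPS_MIN, EPISODES = 1.0, 0.01, 10_000

def epsilon_linear(t):
    """Linear decay: a fixed step per episode until the floor is reached."""
    step = (EPS_START - EPS_MIN) / EPISODES
    return max(EPS_MIN, EPS_START - step * t)

def epsilon_exponential(t, rate=0.9995):
    """Exponential decay: multiplicative shrink, exploring less sooner
    but approaching the floor more gradually."""
    return max(EPS_MIN, EPS_START * rate ** t)
```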