11/21/2022
By Md. Mahmudur Rahman

The Richard A. Miner School of Computer & Information Sciences invites you to attend a doctoral dissertation defense by Md Mahmudur Rahman on "Toward Developing Multi-Modal Deep Simultaneous Learning: Theory and Applications."

Ph.D. Candidate: Md Mahmudur Rahman
Date: Monday, Dec. 5, 2022
Time: 10 a.m.
Location: To be announced.

Committee Members:

  • Mohammad Arif Ul Alam (Advisor), Assistant Professor, Miner School of Computer & Information Sciences
  • Benyuan Liu, Professor, Miner School of Computer & Information Sciences
  • Yu Cao, Professor, Miner School of Computer & Information Sciences
  • Rameswar Panda, Research Scientist, MIT-IBM Watson AI Lab, IBM Research, Cambridge, MA

Abstract:
With recent advances in multi-modal Internet of Things (IoT) and commodity sensing technologies, modern scientists are bringing unprecedented capabilities to healthcare solutions, robotic vision, and digital-medicine automation. Aided by advanced deep learning techniques, commodity multi-modal sensing can now solve complex pervasive and ubiquitous computing problems that were previously impossible to address with low-performing single-modal sensing. Moreover, modern deep learning-based domain adaptation techniques add tremendous classification performance to ubiquitous systems, providing substantially more powerful services than ever before, albeit at the cost of complex, heterogeneous multi-modal setups in real-time systems. In this dissertation, we present a novel multi-modal simultaneous learning framework that enhances the power of deep domain adaptation, providing stable improvements over state-of-the-art domain adaptation models. Our framework achieves a strong distribution-matching property by training both source and target auto-encoders with a novel simultaneous learning scheme on a single graph, using an optimally modified maximum mean discrepancy (MMD) loss as the objective function. Additionally, we design a semi-supervised classification approach that transfers the aligned, domain-invariant feature space from the source domain to the target domain. To demonstrate the practical value of these mechanisms in real-time ubiquitous systems, we investigate the applicability of our proposed frameworks in (1) state-of-the-art computer vision; (2) high-fidelity point-cloud robotic perception; and (3) complex multi-modal, multi-inhabitant smart-home activity learning problems.
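For readers unfamiliar with the general setup, the sketch below illustrates the broad idea the abstract describes: two auto-encoders (one per domain) trained jointly in a single computation graph, with a Gaussian-kernel MMD term pulling the two latent distributions together. This is a minimal illustration of standard MMD-based alignment, not the dissertation's actual implementation; all layer sizes, kernel bandwidths, and loss weights are illustrative assumptions.

    # Minimal sketch: joint auto-encoder training with an MMD alignment term.
    # NOT the dissertation's implementation; all hyperparameters are assumed.
    import torch
    import torch.nn as nn

    def gaussian_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
        # Biased (V-statistic) multi-bandwidth Gaussian-kernel MMD^2
        # between two batches of latent vectors.
        def kernel(a, b):
            d2 = torch.cdist(a, b).pow(2)   # pairwise squared distances
            return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)
        return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

    class AutoEncoder(nn.Module):
        def __init__(self, in_dim, latent_dim=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
            self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

        def forward(self, x):
            z = self.enc(x)
            return z, self.dec(z)

    # Both domains flow through one graph, so the reconstruction and
    # alignment terms are optimized simultaneously.
    src_ae, tgt_ae = AutoEncoder(64), AutoEncoder(64)
    opt = torch.optim.Adam(list(src_ae.parameters()) + list(tgt_ae.parameters()),
                           lr=1e-3)
    mse = nn.MSELoss()
    lam = 1.0                                         # MMD weight (assumed)

    xs, xt = torch.randn(32, 64), torch.randn(32, 64) # stand-in batches
    zs, xs_hat = src_ae(xs)
    zt, xt_hat = tgt_ae(xt)
    loss = mse(xs_hat, xs) + mse(xt_hat, xt) + lam * gaussian_mmd(zs, zt)
    opt.zero_grad()
    loss.backward()
    opt.step()

Because both reconstruction losses and the MMD term share one backward pass, the encoders learn latent spaces that are simultaneously faithful to each domain and aligned across domains, which is the property a downstream semi-supervised classifier can then exploit.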

To further extend the capacity of the framework, this thesis aims to develop temporally coherent domain adaptation models that incorporate multi-modal temporal sensing in the presence of heterogeneity. Additionally, to enhance the ubiquity of the framework, we will address the complex heterogeneous domain adaptation problem in wearable-sensing-based health assessment and spectroscopy-based disease diagnosis.
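Since the temporal extension is proposed rather than described, the following is a purely speculative sketch of one plausible reading: encode each domain's sensor time series with a recurrent encoder and match latent distributions at every time step, so whole latent trajectories (not just marginal feature distributions) are aligned. The different input dimensions stand in for heterogeneous sensing modalities; none of this reflects the dissertation's actual design.

    # Speculative sketch of temporally coherent alignment; assumptions only.
    import torch
    import torch.nn as nn

    def gaussian_mmd(x, y, sigma=2.0):
        # Biased Gaussian-kernel MMD^2 between two batches of vectors.
        k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    class SeqEncoder(nn.Module):
        def __init__(self, in_dim, latent_dim=32):
            super().__init__()
            self.gru = nn.GRU(in_dim, latent_dim, batch_first=True)

        def forward(self, x):          # x: (batch, time, features)
            z, _ = self.gru(x)         # latent state at every time step
            return z

    # Heterogeneous domains: different input dims, shared latent dim.
    src_enc, tgt_enc = SeqEncoder(16), SeqEncoder(24)
    xs = torch.randn(8, 50, 16)        # stand-in source windows (e.g. wearables)
    xt = torch.randn(8, 50, 24)        # stand-in heterogeneous target windows
    zs, zt = src_enc(xs), tgt_enc(xt)

    # Average per-time-step MMD so entire latent trajectories are matched.
    temporal_mmd = torch.stack(
        [gaussian_mmd(zs[:, t], zt[:, t]) for t in range(zs.size(1))]
    ).mean()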