07/20/2022
By Sokny Long
The Francis College of Engineering, Department of Electrical & Computer Engineering (ECE), invites you to attend a doctoral dissertation defense by Li Zhou on “Deep Learning Based Human Motion Modeling via Multimodal Information.”
Ph.D. Candidate: Li Zhou
Defense Date: Thursday, Aug. 4, 2022
Time: 10 a.m. to noon
Location: This will be a virtual defense via Zoom. Those interested in attending should contact the student (Li_Zhou@student.uml.edu) and committee advisor (Yan_Luo@uml.edu) at least 24 hours prior to the defense to request access to the meeting.
Committee Chair (Advisor): Yan Luo, Professor, Electrical & Computer Engineering, University of Massachusetts Lowell
Committee Members:
- Hengyong Yu, Ph.D., Professor, Electrical & Computer Engineering, University of Massachusetts Lowell
- Yu Cao, Ph.D., Professor, Computer Science, University of Massachusetts Lowell
Human motion can be categorized into movements, activities, and actions. Movements are the most primitive components; activities involve spatio-temporal information; and actions typically require additional knowledge of causal relationships and of interaction with the external environment. We focus on three major areas of human motion modeling: activity intensity prediction, activity/action recognition, and movement tracking. Motion information is generated by multiple techniques, such as optical motion capture systems (Impulse), vision cameras, Kinect cameras, and wearable inertial sensors. The resulting representations are mainly video imagery and signal data, such as depth videos, RGB videos, joint positions, and inertial sensor signals. Each of these modalities has its own target tasks in human motion analysis, as well as its own limitations under realistic operating conditions. Moreover, supplementary information can benefit human motion modeling in many areas, such as energy expenditure (EE) estimation. In this work, we study deep learning (DL) based models that operate on different data modalities to model human motion, and we design a synchronization method to align multimodal information. Additionally, inspired by work on multimodal fusion of image and text data, we propose a spatio-temporal model for motion generation from joint positions and audio data.
All interested students and faculty members are invited to attend the defense remotely.