09/01/2021
By Sokny Long

The Francis College of Engineering, Department of Electrical & Computer Engineering, invites you to attend a doctoral proposal defense by Li Zhou on “Interpretation of Full-Body Human Motion via Multi-modality Perception.”

Ph.D. Candidate: Li Zhou
Defense Date: Monday, Sept 13, 2021
Time: 1 to 2:30 p.m. EDT
Location: This will be a virtual defense via Zoom. Those interested in attending should contact the Ph.D. advisor, Yan Luo (yan_luo@uml.edu), at least 24 hours prior to the defense to request access to the meeting.

Committee Chair (Advisor): Yan Luo, Professor, Electrical and Computer Engineering, University of Massachusetts Lowell

Committee Members:

  • Hengyong Yu, Professor, Electrical and Computer Engineering, University of Massachusetts Lowell
  • Yu Cao, Professor, Computer Science, University of Massachusetts Lowell

Brief Abstract:

This proposal presents several approaches to interpreting full-body human motion and discusses the perception techniques applied in each. Human motion is categorized into movements, activities, and actions: movements are the most primitive components, activities involve spatial-temporal information, and actions typically involve causal relationships and the external environment. We focus on three major areas of motion interpretation: activity intensity prediction, activity/action recognition, and movement tracking. Perception systems, such as an optical motion capture system (Impulse), vision cameras, Kinect cameras, and wearable inertial sensors, generate the datasets for motion analysis. The resulting information is represented mainly as video imagery and signal data, including depth videos, RGB videos, joint positions, and inertial sensor signals. Each of these datasets suits particular tasks in human motion analysis, yet each has limitations under realistic conditions; we posit that using them together provides synergy. In summary, we propose machine learning (ML) and deep learning (DL) based models that operate on these different data modalities to interpret human motion. We also present work on multimodal fusion of image and textual datasets. Lastly, we describe the plan for completing the research.
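For readers unfamiliar with multimodal fusion, the sketch below illustrates the general idea of combining two motion modalities (wearable inertial signals and skeleton joint positions) in one model. It is a minimal late-fusion example for illustration only; the abstract does not specify an architecture, so every layer size, modality pairing, and name here is an assumption, not the candidate's actual design.

    # Illustrative sketch only: the proposal does not describe its models.
    # All architecture choices and dimensions below are assumptions.
    import torch
    import torch.nn as nn

    class LateFusionActivityNet(nn.Module):
        """Toy late-fusion model: encode each modality separately,
        concatenate the feature vectors, and classify the activity."""
        def __init__(self, imu_channels=6, joint_dim=60, num_classes=10):
            super().__init__()
            # 1D conv encoder for inertial sensor windows (channels x time)
            self.imu_encoder = nn.Sequential(
                nn.Conv1d(imu_channels, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # pool over time -> (B, 32, 1)
                nn.Flatten(),             # -> (B, 32)
            )
            # MLP encoder for a flattened skeleton (e.g., 20 joints x 3 coords)
            self.joint_encoder = nn.Sequential(
                nn.Linear(joint_dim, 64),
                nn.ReLU(),
            )
            # Classifier over the concatenated (fused) features
            self.classifier = nn.Linear(32 + 64, num_classes)

        def forward(self, imu, joints):
            # imu: (B, channels, T); joints: (B, joint_dim)
            fused = torch.cat(
                [self.imu_encoder(imu), self.joint_encoder(joints)], dim=1)
            return self.classifier(fused)

    model = LateFusionActivityNet()
    imu = torch.randn(4, 6, 128)   # 4 IMU windows, 6 channels, 128 steps
    joints = torch.randn(4, 60)    # 4 skeleton frames, 60 coordinates
    logits = model(imu, joints)    # -> (4, 10) activity class scores
    print(logits.shape)

Late fusion, as sketched here, is only one of several common strategies; early fusion (concatenating raw inputs) and intermediate fusion are alternatives the proposal may also consider.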

All interested students and faculty members are invited to attend the defense remotely.