The rapid evolution of large-scale AI models has created both extraordinary opportunities and fundamental research challenges. This area focuses on advancing the core capabilities of AI systems — making them more efficient, more robust, and better able to generalize across domains from limited data.

Research spans large language models and their optimization, visual representation learning, graph-based reasoning, multimodal learning, and reinforcement learning for autonomous systems. A unifying theme is adaptability: how can we build AI models that transfer effectively to new tasks, domains, and data modalities without requiring massive retraining?

Application domains include scientific discovery, materials design, health informatics, data mining, and autonomous robotics. Faculty in this area have active collaborations with Amazon, Meta, Oak Ridge National Laboratory, and leading academic institutions.

Research Topics

  • Large language model development & optimization
  • Visual representation learning & domain adaptation
  • Few-shot and zero-shot learning
  • Graph neural networks & retrieval-augmented models
  • Temporal knowledge graph reasoning & time series analysis
  • Reinforcement and imitation learning for autonomous systems
  • Physics-informed AI & generative models for materials discovery
  • Uncertainty quantification for engineering applications