Chao Chen (University of Texas, Austin): Fast, Robust, and Scalable Linear Solvers for Scientific Computing and Data Analytics
January 25, 12:30-1:30 p.m., Virtual Meeting
Abstract: The solution of large sparse linear systems is an essential building block in many science and engineering applications. It is also often the main computational bottleneck. For large problems, direct solvers (based on, e.g., LU or Cholesky factorizations) can require a significant amount of computing resources. By contrast, iterative solvers (e.g., CG and GMRES) can be much more efficient when effective preconditioners are provided. In this talk, I will present a randomized approach to constructing preconditioners for symmetric diagonally dominant matrices that arise from applications in scientific computing, data science, and machine learning. The new method computes an incomplete factorization of a sparse input matrix. It leverages a randomized sampling scheme developed by Spielman and Kyng that prevents excessive fill-in during Gaussian elimination. Numerical experiments demonstrate that the randomized preconditioner outperforms classical deterministic methods, delivering faster convergence, shorter running times, and better scalability. Finally, I will discuss some exciting research opportunities related to the new method, with applications in high-performance computing and machine learning.
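For readers unfamiliar with where a preconditioner enters an iterative solver, the sketch below shows a generic preconditioned conjugate gradient (PCG) iteration applied to a model symmetric diagonally dominant matrix. This is only an illustration with a simple Jacobi (diagonal) preconditioner, not the speaker's randomized incomplete factorization; the matrix, names, and tolerances are assumptions for the example.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient.

    M_inv is a callable applying the preconditioner's inverse to a vector;
    a better preconditioner (e.g., an incomplete factorization) would be
    plugged in here in place of the Jacobi choice used below.
    """
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    z = M_inv(r)             # preconditioned residual
    p = z.copy()             # search direction
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# Model SDD system: 1D graph Laplacian shifted by the identity.
n = 100
A = 3 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)  # Jacobi preconditioner: M = diag(A)
```

The `M_inv` callable is the extension point: the talk's randomized method would supply an incomplete Cholesky-style factorization there instead of the diagonal scaling.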
Thomas Fai (Brandeis University): Lubricated Immersed Boundary Method with Application to Fiber Bundles
February 22, 12:30-1:30 p.m., Room: Shah Hall 308
Abstract: Fluid-mediated near contact of elastic structures is a recurring theme in biofluids. The thin fluid layers that arise in applications such as the flow of red blood cells through blood vessels are difficult to resolve by standard computational fluid dynamics methods based on uniform fluid grids. A key assumption of the lubricated immersed boundary method, which incorporates a subgrid model to resolve thin fluid layers between immersed boundaries, is that the average velocity of nearby boundaries can be accurately computed from under-resolved simulations to bridge between different spatial scales. Here, we present a one-dimensional numerical analysis to assess this assumption and quantify the performance of the average velocity as a multiscale quantity. We explain how this analysis leads to more accurate formulations of the method and present examples from two-dimensional simulations, including applications to filament bundles.
Thomas Fai's Brandeis University Bio
Bill Martin (Worcester Polytechnic Institute): Quantum Isomorphic Graphs from Association Schemes
March 1, 12:30-1:30 p.m., Room: Shah Hall 308
Abstract: Quantum Isomorphic Graphs from Association Schemes
Bill Martin's Worcester Polytechnic Institute Bio
Amalia Culiuc (Amherst College): Weighted estimates and matrix weights
March 22, 12:30-1:30 p.m., Room: Shah Hall 308
Abstract: For over two decades, the problem of proving sharp bounds for Calderón-Zygmund operators on weighted Lp spaces was an important area of study in harmonic analysis, culminating in 2010 with Tuomas Hytönen's proof of the A2 conjecture. Since then, many other proofs have been developed and with them a whole new set of tools, including, most notably, the principles of sparse domination and their many corollaries. However, in spite of all these developments, the problem is still open in the setting of vector-valued function spaces with matrix-valued measures. In this talk, we will give an overview of the matrix A2 conjecture, its current status, and the challenges that it poses. The central question remains: is 3/2 really greater than 1?
Amalia Culiuc's Amherst College Bio
Kasso Okoudjou (Tufts University): Recent progress on the Heil-Ramanathan-Topiwala (HRT) conjecture
April 19, 12:30-1:30 p.m., Room: Shah Hall 308
Abstract: Recent progress on the HRT conjecture
Kasso Okoudjou's Tufts University Bio
Duy Nhat Phan (UMass Lowell): Stochastic Variance-Reduced Majorization-Minimization Algorithms
April 26, 12:30-1:30 p.m., Room: Shah Hall 308
Abstract: In this talk, we focus on a class of nonconvex nonsmooth optimization problems in which the objective is a sum of two functions: one is the average of a large number of differentiable functions, while the other is proper, lower semicontinuous, and has a surrogate function satisfying standard assumptions. Such problems arise in machine learning and regularized empirical risk minimization. However, nonconvexity and the large-sum structure make such problems challenging to design algorithms for; consequently, algorithms that can be effectively applied in such scenarios are scarce. We introduce and study three stochastic variance-reduced majorization-minimization (MM) algorithms, combining the general MM principle with new variance-reduction techniques. We establish almost sure subsequential convergence of the generated sequence to a stationary point and prove that our algorithms achieve the best-known complexity bounds in terms of the number of gradient evaluations. We demonstrate the effectiveness of our algorithms on sparse binary classification problems, sparse multi-class logistic regression, and neural networks using several widely used, publicly available data sets.
Duy Nhat Phan's Website
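To illustrate the kind of scheme the abstract describes, the sketch below runs a standard proximal SVRG loop (a classical variance-reduced method, not the speaker's three algorithms) on a toy sum-of-squares-plus-L1 problem. The problem sizes, step size, and all names are assumptions for the example; the variance-reduced gradient and the prox (surrogate-minimization) step are the two ingredients the abstract's MM framework generalizes.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the surrogate/prox step for the
    # nonsmooth term in this toy problem).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam = 0.01  # L1 regularization weight

def grad_i(x, i):
    # Gradient of the i-th smooth term 0.5 * (a_i @ x - y_i)^2.
    return A[i] * (A[i] @ x - y[i])

def full_grad(x):
    # Average gradient over all n smooth terms.
    return A.T @ (A @ x - y) / n

def objective(x):
    return 0.5 * np.mean((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(d)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative 1/L step size
for epoch in range(30):
    x_ref = x.copy()            # snapshot point
    g_ref = full_grad(x_ref)    # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient: unbiased, with variance
        # shrinking as x approaches the snapshot.
        v = grad_i(x, i) - grad_i(x_ref, i) + g_ref
        x = soft_threshold(x - step * v, step * lam)
```

The snapshot/correction structure (`grad_i(x, i) - grad_i(x_ref, i) + g_ref`) is what keeps the per-iteration cost at one component gradient while retaining the convergence behavior of a full-gradient method.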