Harmonic Analysis and Signal Processing Seminar Schedule

Info: The Harmonic Analysis and Signal Processing seminar is coordinated by C. Sinan Güntürk.
Location and Hour: Warren Weaver Hall (Courant Institute); the time and room for each talk are listed on this page.

Fall 2022


Date
Time and Place
Speaker and Institution
Title and Abstract
Wed, Oct 12
11:00am, WWH 1302
Rayan Saab, UCSD
Quantizing neural networks


Fall 2019


Date
Time and Place
Speaker and Institution
Title and Abstract
Mon, Sep 23
1:30pm, WWH 1314
Palina Salanevich, UCLA
Random Vector Functional Link Neural Networks as Universal Approximators
Tue, Oct 1
2pm, WWH 1314
Olga Graf, TU Munich
On unlimited sampling and reconstruction


Fall 2018


Date
Time and Place
Speaker and Institution
Title and Abstract
Wed, Sep 26
3:30pm, WWH 1314
Felix Voigtlaender, KU Eichstätt-Ingolstadt
Approximation theoretic properties of deep ReLU neural networks


Fall 2017


Date
Time and Place
Speaker and Institution
Title and Abstract
Tue, Oct 3
11am, WWH 1314
Hassan Mansour, MERL
Online Blind Deconvolution for Sequential Through-the-Wall Radar Imaging
Tue, Nov 14
11am, WWH 201
Shuyang Ling, Courant
Bilinear inverse problems: theory, algorithms, and applications in imaging science and signal processing
Tue, Nov 28
11am, WWH 201
Weilin Li, Norbert Wiener Center, University of Maryland
Recent developments on the super-resolution problem for separation below the Rayleigh length


Spring 2017


Date
Time and Place
Speaker and Institution
Title and Abstract
Fri, Jan 27
4pm, CDS 650 (at 60 Fifth Ave)
Jonathan Weed, MIT
Optimal rates of estimation for the multi-reference alignment problem
Fri, Mar 3
10am, WWH 1302
Gitta Kutyniok, TU Berlin
Anisotropic Multiscale Systems on Bounded Domains
(joint with the Numerical Analysis and Scientific Computing Seminar)
Wed, Mar 22
2pm, WWH 1314
Tamir Bendory, Princeton
Multireference alignment, bispectrum inversion and cryo-EM
Wed, Apr 26
3:30pm, WWH 312
Bubacarr Bah, AIMS South Africa
Structured sparse recovery with sparse random matrices
Tue, May 2
2pm, WWH 201
Luis Daniel Abreu, ARI Vienna
Universality and hyperuniformity of Weyl-Heisenberg ensembles
Wed, May 17
2pm, WWH 1314
Augustin Cosse, NYU CDS and Courant
Semidefinite programming relaxations for matrix completion, inverse scattering and blind deconvolution


Fall 2016


Date
Time and Place
Speaker and Institution
Title and Abstract
Mon, Nov 7
11am, WWH 1314
Alex Wein, MIT
A message passing algorithm for cryo-EM and synchronization problems
Mon, Nov 28
11am, WWH 1314
Soledad Villar, UT Austin
Mathematical optimization for data analysis


Spring 2016


Date
Time and Place
Speaker and Institution
Title and Abstract
Wed, May 4
2pm, WWH 512
Palina Salanevich, Jacobs University Bremen
Phase retrieval with Gabor frames: reconstruction and stability

Spring 2015



Date
Time and Place
Speaker and Institution
Title and Abstract
Fri, Feb 13
10am, WWH 1314
Michele Pavon, Department of Mathematics, University of Padova, Padova, Italy
Optimal transport and Schroedinger bridges from a control and computational perspective
Wed, Apr 29
4pm, WWH 1314
Ulas Ayaz, Brown University, ICERM
Subspace Clustering with Ordered Weighted L1 Minimization


Spring 2014


Date
Time and Place
Speaker and Institution
Title and Abstract
Feb 25
2pm, WWH 1314
Michele Pavon, Department of Mathematics, University of Padova, Padova, Italy
A convex optimization approach to generalized moment problems in multivariate spectral estimation

Fall 2010


Date
Time and Place
Speaker and Institution
Title and Abstract
Sep 17
11:30am, WWH 1302
Stephane Mallat, Ecole Polytechnique
Classification by Invariant Scattering
(joint with the Applied Mathematics Seminar)
Nov 18
2pm, WWH 317
Franz Luef, UC Berkeley
Gabor Analysis as Noncommutative Geometry Over Noncommutative Tori
Nov 23
3:30pm, WWH 1302
Zuowei Shen, National University of Singapore
Wavelet Frames and Applications
(joint with the Applied Mathematics Seminar)


Spring 2010


Date
Time and Place
Speaker and Institution
Title and Abstract
Jan 27
2:00pm, WWH 1314
Luis Daniel Abreu, CMUC, University of Coimbra, Portugal
Gabor and wavelet (super)frames with Hermite and Laguerre functions
Feb 17
2:00pm, WWH 1314
Katya Scheinberg, Columbia University and CIMS
Efficiently recovering second-order models in derivative-free optimization
Feb 24
2:00pm, WWH 1314
William Wu, Stanford University
Discrete Sampling
Mar 3
2:00pm, WWH 1314
Felix Krahmer, Hausdorff Center for Mathematics, University of Bonn
Local Approximation and Quantization of Operators with Bandlimited Kohn-Nirenberg Symbols
Mar 10
2:00pm, WWH 1314
Rachel Ward, CIMS
Sparse Legendre expansions via l1 minimization
Mar 22
11:00am, WWH 202
Joel Tropp, Caltech
Finding Structure with Randomness
(Note the special time and location)


Fall 2009


Date
Time and Place
Speaker and Institution
Title and Abstract
Nov 18
3:30pm, WWH 517
Vikram Jandhyala, Associate Professor and Director, Applied Computational Engineering Lab, Dept. of Electrical Engineering, University of Washington, Seattle; Founder and Chairman, Physware Inc.
Open challenges in field-based microelectronics design and verification
(joint with the Special Applied Mathematics Seminar)

Spring 2009


Date
Time and Place
Speaker and Institution
Title and Abstract
March 6
1:00pm, WWH 1314
Mario A. T. Figueiredo, Instituto de Telecomunicações, Instituto Superior Técnico, Lisboa, Portugal
Iterative Shrinkage/Thresholding Algorithms: Some History and Recent Developments
March 25
3:30pm, WWH 201
Yoel Shkolnisky, Department of Mathematics, Yale University
Reference Free Cryo-EM Structure Determination through Eigenvectors of Sparse Matrices



Fall 2007


Date
Time and Place
Speaker and Institution
Title and Abstract
Oct 30
12:30pm, WWH 1302
Martin Raphan, CIMS and Laboratory for Computational Vision, Center for Neural Science, NYU
Unsupervised Regression for Image Denoising


Spring 2007

Mini Workshop on Sparsity and Approximation
Monday, April 30th 2007
  • Massimo Fornasier, Princeton University
Iterative Thresholding Algorithms for Inverse Problems with Sparsity Constraints
Abstract. Since the work of Donoho and Johnstone, soft and hard thresholding operators have been extensively studied for denoising of digital signals, mainly in a statistical framework. Usually associated with wavelet or curvelet expansions, thresholding makes it possible to eliminate those coefficients which encode noise components. The assumption is that signals are sparse in origin with respect to such expansions, and that the effect of noise is essentially to perturb the sparsity structure by introducing nonzero coefficients of relatively small magnitude. While simple, direct thresholding suffices for statistical estimation of the relevant components of an explicitly given signal, and for discarding those considered disturbance, computing the sparse representation of a signal given implicitly as the solution of an operator equation or an inverse problem requires more sophistication. We refer, for instance, to deconvolution and superresolution problems, image recovery and enhancement, and problems arising in geophysics and brain imaging. In these cases, thresholding has been combined with Landweber iterations to compute the solutions. In this talk we present a unified theory of iterative thresholding algorithms which includes soft, hard, and the so-called firm thresholding operators. In particular, we develop a variational approach to such algorithms which allows for a complete analysis of their convergence properties. As a matter of fact, despite their simplicity, which makes them very appealing to users, and their enormous impact in applications, iterative thresholding algorithms converge very slowly and might be impracticable in certain situations. By analyzing their typical convergence dynamics, we propose acceleration methods based (1) on projected gradient iterations and (2) on alternating subspace corrections (domain decompositions). For both of these latter families of algorithms as well, a variational approach is fundamental in order to correctly analyze the convergence.
  • Holger Rauhut, University of Vienna
Sparse Recovery
Abstract. I will give an overview of the topic of sparse recovery, with some emphasis on my own contributions. This emerging field, which is also referred to as compressed sensing or compressive sampling, was initiated in 2004 with pioneering work by Emmanuel Candès, Justin Romberg and Terence Tao, and independently by David Donoho.

A signal (vector, function) is called sparse if it has an expansion with only a small number of non-vanishing coefficients in terms of a suitable basis. Many real-world signals can be well-approximated by sparse ones. The basic idea of compressed sensing is that a sparse signal can be efficiently recovered from a number of linear non-adaptive measurements (inner products), which is much smaller than the ambient dimension of the signal (but of course larger than the sparsity, i.e., the number of non-vanishing coefficients). This surprising principle has many potential applications in signal and image processing. As reconstruction procedures, mainly l_1-minimization (basis pursuit) and greedy algorithms such as orthogonal matching pursuit have been investigated so far. All known good constructions of suitable linear measurements are of probabilistic nature.

As an important special case, it is possible to reconstruct a sparse trigonometric polynomial from random samples provided the number N of samples scales as N = O(M log(D)), where M is the number of non-vanishing Fourier coefficients and D the dimension of the corresponding space of trigonometric polynomials.

Extensions to multichannel (vector-valued) signals (distributed compressed sensing) and applications to the operator identification problem will be discussed briefly.
  • Vladimir Temlyakov, University of South Carolina
On the Lebesgue Type Inequalities for Greedy Approximation (Joint work with D.L. Donoho and M. Elad)
Abstract. We study the efficiency of greedy algorithms with regard to redundant dictionaries in Hilbert spaces. We obtain upper estimates for the errors of the Pure Greedy Algorithm and the Orthogonal Greedy Algorithm in terms of the best m-term approximations. We call such estimates the Lebesgue type inequalities. We prove the Lebesgue type inequalities for dictionaries with special structure. We assume that the dictionary has a property of mutual incoherence (the coherence parameter of the dictionary is small). We develop a new technique that, in particular, allowed us to get rid of an extra factor m^{1/2} in the Lebesgue type inequality for the Orthogonal Greedy Algorithm.
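The Landweber-plus-thresholding iteration described in Fornasier's abstract above can be sketched in a few lines. This is a minimal illustration only; the measurement matrix, data, and parameter values below are assumptions chosen for the demo, not anything from the talk:

```python
import numpy as np

def soft_threshold(x, lam):
    # Componentwise soft thresholding: shrink each entry toward zero by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam, n_iter=1000):
    # Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    # a Landweber (gradient) step followed by soft thresholding.
    # Step size 1/L with L = ||A||_2^2 guarantees convergence.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Illustrative setup: a 3-sparse vector observed through 80 noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista(A, y, lam=0.05)
```

As the abstract notes, the plain iteration converges slowly; the accelerated variants discussed in the talk (projected gradient steps, subspace corrections) reuse this same shrinkage operator inside a faster outer loop.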
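Rauhut's abstract names l_1-minimization (basis pursuit) and greedy methods such as orthogonal matching pursuit as the main reconstruction procedures. A minimal OMP sketch, with an illustrative random Gaussian measurement matrix (an assumption for the demo, not taken from the talk):

```python
import numpy as np

def omp(A, y, sparsity):
    # Orthogonal matching pursuit: greedily pick the column most correlated
    # with the current residual, then re-fit all chosen columns by least squares.
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Illustrative setup: far fewer measurements (80) than the ambient dimension (256),
# but more than the sparsity (3), matching the compressed-sensing regime.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x_true = np.zeros(256)
x_true[[10, 77, 200]] = [3.0, -2.0, 1.5]
y = A @ x_true                      # noiseless measurements
x_hat = omp(A, y, sparsity=3)
```

Because each least-squares re-fit makes the residual orthogonal to all selected columns, OMP never picks the same column twice; for random Gaussian measurements it typically recovers the true support in this regime.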
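The Pure Greedy Algorithm analyzed in Temlyakov's abstract is plain matching pursuit: unlike the Orthogonal Greedy Algorithm, it only updates the coefficient of the selected atom and never re-projects onto the span of previous selections. A minimal sketch over a random unit-norm dictionary (the dictionary and data below are illustrative assumptions):

```python
import numpy as np

def matching_pursuit(D, y, n_steps):
    # Pure Greedy Algorithm: at each step pick the dictionary element with the
    # largest inner product with the residual and subtract its 1-D projection.
    # Assumes the columns of D have unit norm.
    residual = y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_steps):
        c = D.T @ residual
        j = int(np.argmax(np.abs(c)))
        coef[j] += c[j]
        residual = residual - c[j] * D[:, j]
    return coef, residual

# Illustrative redundant dictionary: 150 unit-norm atoms in R^50.
rng = np.random.default_rng(2)
D = rng.standard_normal((50, 150))
D /= np.linalg.norm(D, axis=0)
y = rng.standard_normal(50)
coef, r = matching_pursuit(D, y, n_steps=30)
```

Each step removes the projection onto one atom, so the residual norm is nonincreasing and y = D @ coef + r holds throughout; the Lebesgue-type inequalities in the talk bound how this greedy error compares with the best m-term approximation under small coherence.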


Fall 2006



Date
Time and Place
Speaker and Institution
Title and Abstract
Nov 20
2:15pm, WWH 513
Albert Cohen, Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie
Nonlinear Approximation by Greedy Algorithms: Theory, Applications and Open Problems
(Note the special day, time and place)
Nov 20
3:30pm, WWH 1302
Wolfgang Dahmen, Institut für Geometrie und Praktische Mathematik, RWTH Aachen
Universal Algorithms for Machine Learning
(Note the special day, time and place)
Dec 12
2:00pm, WWH 613
David Hammond, CIMS
Image Denoising with an Orientation-Adaptive Gaussian Scale Mixture Model



Fall 2005/Spring 2006


Date
Speaker and Institution
Title and Abstract
Sept 28
Radu Balan, Siemens Corporate Research
Noncommutative Wiener Lemma and Tracial State on the Banach Algebra of Time-Frequency Shift Operators
Jan 18
Laurent Demanet, California Institute of Technology
Curvelets, Wave Atoms and their numerical implementation
Feb 8
Gabriel Peyré, Ecole Polytechnique
Multiscale Geometry for Images and Textures
Feb 15
Özgür Yılmaz, University of British Columbia
The role of sparsity in blind source separation




Fall 2004


Date
Speaker and Institution
Title and Abstract
Aug 26
Onur Güleryüz, Polytechnic University and DoCoMo USA Labs, Inc.
Signal Processing with Sparse Statistical Models and Nonlinear Approximation
Sept 15
S. Muthu Muthukrishnan, Rutgers University
Nonuniform Sparse Approximation via Haar Wavelets
Sept 29
Ivan Selesnick, Polytechnic University
Motion-based 3-D wavelet frames and probability models for video processing
Oct 13
Thomas Yu, Rensselaer Polytechnic Institute
Multiscale Refinement Subdivision in Nonlinear and Geometric Settings
Oct 20
Cynthia Rudin, NYU Center for Neural Science
The Dynamics of Boosting
Dec 1
David Hammond, CIMS
Image representation via local multiscale orientation
Dec 8 
M. Alex O. Vasilescu, Media Research Lab, CIMS
A Tensor Framework for Image Analysis (Vision) and Synthesis (Graphics)






Spring/Summer 2004

Date
Speaker and Institution
Title and Abstract
Feb 4
Ingrid Daubechies, Princeton University and CIMS
Recovering sparse expansions after noisy blur
Feb 11
Chai Wah Wu, IBM T. J. Watson Research Center
Error diffusion: recent developments in theory and applications
Feb 25
John J. Benedetto, University of Maryland
Fourier and wavelet frame constructions and applications
Mar 10
Eero Simoncelli, NYU Center for Neural Science and CIMS
Statistical models of photographic images with application to Bayes denoising
Mar 17  
Ron DeVore, University of South Carolina
Estimators for Supervised Learning
Mar 24
Martin Strauss, AT&T Labs and University of Michigan
Improved time bounds for near-optimal sparse Fourier representations
Mar 31
Zhou Wang, Laboratory for Computational Vision, NYU
Perceptual image quality assessment: from error visibility to structural similarity
Apr 7
Ann Lee, Yale University
The Intrinsic Geometry of Natural Image Data and Learning by Diffusion
Apr 14
Frederik J. Simons, Princeton University
Time-frequency and time-scale analysis in geophysics:
Multiwindow and multiwavelet methods in 1D, 2D and on the sphere

Apr 28
Gilad Lerman, CIMS
Multiscale Curve and Strip Constructions and Their Application to Microarray and ChIP-chip Data
May 5
Radu Balan, Siemens Corporate Research 
Density and Redundancy of Irregular Gabor-like frames
June 9
Stephane Lafon, Yale University
Diffusion maps and geometric harmonics