Spring 2015

Date | Time and Place | Speaker and Institution | Title and Abstract |

Fri, Feb 13 | 10am, WWH 1314 | Michele Pavon, Department of Mathematics, University of Padova, Padova, Italy | Optimal transport and Schroedinger bridges from a control and computational perspective |

Wed, Apr 29 | 4pm, WWH 1314 | Ulas Ayaz, Brown University, ICERM | Subspace Clustering with Ordered Weighted L1 Minimization |

Spring 2014

Date | Time and Place | Speaker and Institution | Title and Abstract |

Feb 25 | 2pm, WWH 1314 | Michele Pavon, Department of Mathematics, University of Padova, Padova, Italy | A convex optimization approach to generalized moment problems in multivariate spectral estimation |

Fall 2010

Date | Time and Place | Speaker and Institution | Title and Abstract |

Sep 17 | 11:30am, WWH 1302 | Stephane Mallat, Ecole Polytechnique | Classification by Invariant Scattering (joint with the Applied Mathematics Seminar) |

Nov 18 | 2pm, WWH 317 | Franz Luef, UC Berkeley | Gabor Analysis as Noncommutative Geometry Over Noncommutative Tori |

Nov 23 | 3:30pm, WWH 1302 | Zuowei Shen, National University of Singapore | Wavelet Frames and Applications (joint with the Applied Mathematics Seminar) |

Spring 2010

Date | Time and Place | Speaker and Institution | Title and Abstract |

Jan 27 | 2:00pm, WWH 1314 | Luis Daniel Abreu, CMUC, University of Coimbra, Portugal | Gabor and wavelet (super)frames with Hermite and Laguerre functions |

Feb 17 | 2:00pm, WWH 1314 | Katya Scheinberg, Columbia University and CIMS | Efficiently recovering second-order models in derivative-free optimization |

Feb 24 | 2:00pm, WWH 1314 | William Wu, Stanford University | Discrete Sampling |

Mar 3 | 2:00pm, WWH 1314 | Felix Krahmer, Hausdorff Center for Mathematics, University of Bonn | Local Approximation and Quantization of Operators with Bandlimited Kohn-Nirenberg Symbols |

Mar 10 | 2:00pm, WWH 1314 | Rachel Ward, CIMS | Sparse Legendre expansions via l1 minimization |

Mar 22 | 11:00am, WWH 202 | Joel Tropp, Caltech | Finding Structure with Randomness (Note the special time and location) |

Fall 2009

Date | Time and Place | Speaker and Institution | Title and Abstract |

Nov 18 | 3:30pm, WWH 517 | Vikram Jandhyala, Associate Professor and Director, Applied Computational Engineering Lab, Dept of Electrical Engineering, University of Washington Seattle; Founder and Chairman, Physware Inc. | Open challenges in field-based microelectronics design and verification (joint with Special Applied Mathematics Seminar) |

Spring 2009

Date | Time and Place | Speaker and Institution | Title and Abstract |

March 6 | 1:00pm, WWH 1314 | Mario A. T. Figueiredo, Instituto de Telecomunicações, Instituto Superior Tecnico, Lisboa, Portugal | Iterative Shrinkage/Thresholding Algorithms: Some History and Recent Developments |

March 25 | 3:30pm, WWH 201 | Yoel Shkolnisky, Department of Mathematics, Yale University | Reference Free Cryo-EM Structure Determination through Eigenvectors of Sparse Matrices |

Fall 2007

Date | Time and Place | Speaker and Institution | Title and Abstract |

Oct 30 | 12:30pm, WWH 1302 | Martin Raphan, CIMS and Laboratory for Computational Vision, Center for Neural Science, NYU | Unsupervised Regression for Image Denoising |

Mini Workshop on Sparsity and Approximation

Monday, April 30th 2007

- Massimo Fornasier, Princeton University

Iterative Thresholding Algorithms for Inverse
Problems with Sparsity Constraints

Abstract. Since the work of Donoho and Johnstone, soft and hard thresholding operators have been extensively studied for denoising of digital signals, mainly in a statistical framework. Usually associated with wavelet or curvelet expansions, thresholding makes it possible to eliminate those coefficients which encode noise components. The assumption is that the original signals are sparse with respect to such expansions and that the effect of noise is essentially to perturb the sparsity structure by introducing nonzero coefficients with relatively small magnitude. While simple, direct thresholding is used for statistical estimation of the relevant components of an explicitly given signal, discarding those considered disturbance, computing the sparse representation of a signal given implicitly as the solution of an operator equation or of an inverse problem requires more sophistication. We refer, for instance, to deconvolution and superresolution problems, image recovery and enhancement, and problems arising in geophysics and brain imaging. In these cases, thresholding has been combined with Landweber iterations to compute the solutions. In this talk we present a unified theory of iterative thresholding algorithms which includes soft, hard, and the so-called firm thresholding operators. In particular, we develop a variational approach to such algorithms which allows for a complete analysis of their convergence properties. Despite their simplicity, which makes them very appealing to users, and their enormous impact in applications, iterative thresholding algorithms converge very slowly and might be impracticable in certain situations. By analyzing their typical convergence dynamics, we propose acceleration methods based (1) on projected gradient iterations and (2) on alternating subspace corrections (domain decompositions). For both of these latter families of algorithms as well, a variational approach is fundamental in order to correctly analyze the convergence.

- Holger Rauhut, University of Vienna

Sparse Recovery

Abstract. I will give an overview of the topic of sparse recovery with some emphasis on my own contributions. This emerging field, which is also referred to as compressed sensing or compressive sampling, was initiated in 2004 with pioneering work by Emmanuel Candès, Justin Romberg and Terence Tao, and independently by David Donoho.

A signal (vector, function) is called sparse if it has an expansion with only a small number of non-vanishing coefficients in terms of a suitable basis. Many real-world signals can be well-approximated by sparse ones. The basic idea of compressed sensing is that a sparse signal can be efficiently recovered from a number of linear non-adaptive measurements (inner products), which is much smaller than the ambient dimension of the signal (but of course larger than the sparsity, i.e., the number of non-vanishing coefficients). This surprising principle has many potential applications in signal and image processing. As reconstruction procedures, mainly l_1-minimization (basis pursuit) and greedy algorithms such as orthogonal matching pursuit have been investigated so far. All known good constructions of suitable linear measurements are of probabilistic nature.

As an important special case, it is possible to reconstruct a sparse trigonometric polynomial from random samples provided the number N of samples scales as N = O(M log(D)), where M is the number of non-vanishing Fourier coefficients and D the dimension of the corresponding space of trigonometric polynomials.

Extensions to multichannel (vector-valued) signals (distributed compressed sensing) and applications to the operator identification problem will be discussed briefly.
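The reconstruction procedures mentioned above can be made concrete with a minimal sketch of iterative soft thresholding (a Landweber gradient step followed by shrinkage) for the l_1-regularized problem min_x 0.5 ||Ax - y||^2 + lam ||x||_1. The random measurement matrix, sparsity level, and regularization weight below are illustrative assumptions, not taken from the talks.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft-thresholding operator: shrinks each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    # a Landweber step on the quadratic term, then componentwise shrinkage.
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L the squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)
    return x

# Illustrative use: recover a 3-sparse vector from 40 random measurements
# of a 100-dimensional signal, far fewer than the ambient dimension.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

As the Fornasier abstract above notes, plain iterations of this kind converge slowly; in practice they are accelerated, for instance by projected gradient variants.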

- Vladimir Temlyakov, University of South Carolina

On the Lebesgue Type Inequalities for Greedy
Approximation (Joint work with D.L. Donoho and
M. Elad)

Abstract. We study the efficiency of greedy algorithms with regard to redundant dictionaries in Hilbert spaces. We obtain upper estimates for the errors of the Pure Greedy Algorithm and the Orthogonal Greedy Algorithm in terms of the best m-term approximations. We call such estimates the Lebesgue type inequalities. We prove the Lebesgue type inequalities for dictionaries with special structure. We assume that the dictionary has a property of mutual incoherence (the coherence parameter of the dictionary is small). We develop a new technique that, in particular, allowed us to get rid of an extra factor m^{1/2} in the Lebesgue type inequality for the Orthogonal Greedy Algorithm.
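The Orthogonal Greedy Algorithm studied above (known as orthogonal matching pursuit in the compressed-sensing literature) admits a short sketch: at each step, pick the dictionary atom most correlated with the current residual, then re-fit by least squares on all selected atoms. The random dictionary and 2-sparse signal below are illustrative assumptions, not from the talk.

```python
import numpy as np

def omp(D, y, m):
    # Orthogonal Greedy Algorithm / orthogonal matching pursuit:
    # greedily select the atom most correlated with the residual,
    # then re-project y onto the span of all atoms selected so far.
    support = []
    residual = y.copy()
    for _ in range(m):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Illustrative use: a 2-term combination of atoms from a random dictionary,
# whose unit-norm atoms are mutually incoherent with high probability.
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)
y = 3.0 * D[:, 4] + 2.0 * D[:, 21]
x_hat = omp(D, y, m=2)
```

The least-squares re-fit is what distinguishes the Orthogonal Greedy Algorithm from the Pure Greedy Algorithm, which only subtracts the single newest atom's projection at each step.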

Fall 2006

Date | Time and Place | Speaker and Institution | Title and Abstract |

Nov 20 | 2:15pm, WWH 513 | Albert Cohen, Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie (Note special day, time and place) | Nonlinear Approximation by Greedy Algorithms: Theory, Applications and Open Problems |

Nov 20 | 3:30pm, WWH 1302 | Wolfgang Dahmen, Institut für Geometrie und Praktische Mathematik, RWTH Aachen (Note special day, time and place) | Universal Algorithms for Machine Learning |

Dec 12 | 2:00pm, WWH 613 | David Hammond, CIMS | Image Denoising with an Orientation-Adaptive Gaussian Scale Mixture Model |

Date | Speaker and Institution | Title and Abstract |

Sept 28 | Radu Balan, Siemens Corporate Research | Noncommutative Wiener Lemma and Tracial State on the Banach Algebra of Time-Frequency Shift Operators |

Jan 18 | Laurent Demanet, California Institute of Technology | Curvelets, Wave Atoms and their numerical implementation |

Feb 8 | Gabriel Peyré, Ecole Polytechnique | Multiscale Geometry for Images and Textures |

Feb 15 | Özgür Yılmaz, University of British Columbia | The role of sparsity in blind source separation |

Date | Speaker and Institution | Title and Abstract |

Aug 26 | Onur Güleryüz, Polytechnic University and DoCoMo USA Labs, Inc. | Signal Processing with Sparse Statistical Models and Nonlinear Approximation |

Sept 15 | S. Muthu Muthukrishnan, Rutgers University | Nonuniform Sparse Approximation via Haar Wavelets |

Sept 29 | Ivan Selesnick, Polytechnic University | Motion-based 3-D wavelet frames and probability models for video processing |

Oct 13 | Thomas Yu, Rensselaer Polytechnic Institute | Multiscale Refinement Subdivision in Nonlinear and Geometric Settings |

Oct 20 | Cynthia Rudin, NYU Center for Neural Science | The Dynamics of Boosting |

Dec 1 | David Hammond, CIMS | Image representation via local multiscale orientation |

Dec 8 | M. Alex O. Vasilescu, Media Research Lab, CIMS | A Tensor Framework for Image Analysis (Vision) and Synthesis (Graphics) |