ECE 18-898G: Special Topics in Signal Processing: Sparsity, Structure, and Inference

The objective of this special topics course is to introduce students to algorithmic and theoretical aspects of sparsity and, more generally, low-dimensional structures in large-scale data science and machine learning applications. Students will develop expertise at the intersection of optimization, signal processing, statistics, and computer science to address emerging challenges in data science. The course will introduce a mathematical theory of sparse representation and will cover several fundamental inverse problems built upon low-dimensional modeling, including compressed sensing, matrix completion, robust principal component analysis, dictionary learning, super-resolution, phase retrieval, and neural networks. Throughout, we will focus on designing optimization-based algorithms that are effective in both theory and practice. There will be a final project based on the discussed topics.
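To give a concrete taste of the optimization-based algorithms the course studies, below is a minimal sketch (not taken from the course materials; all names and parameter choices are illustrative assumptions) of sparse recovery via iterative soft-thresholding (ISTA) applied to the LASSO formulation of compressed sensing, i.e., minimizing 0.5*||Ax - y||_2^2 + lam*||x||_1 over x:

    # Illustrative sketch only: ISTA for the LASSO, a prototypical
    # optimization-based approach to compressed sensing. Function names,
    # dimensions, and parameters below are assumptions for demonstration.
    import numpy as np

    def soft_threshold(z, tau):
        # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def ista(A, y, lam, num_iters=500):
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of
        # the gradient of the smooth term 0.5 * ||Ax - y||_2^2.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(num_iters):
            grad = A.T @ (A @ x - y)                      # gradient step
            x = soft_threshold(x - step * grad, step * lam)  # prox step
        return x

    # Toy example: n = 200 unknowns, m = 80 random Gaussian measurements,
    # and a ground truth with only k = 5 nonzeros -- far fewer measurements
    # than unknowns, the regime where sparsity enables recovery.
    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true
    x_hat = ista(A, y, lam=0.01)
    print("relative recovery error:",
          np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

The step size 1/||A||_2^2 is the standard choice that guarantees ISTA converges for this objective; in this noiseless toy setup the l1 penalty should recover a close approximation of the 5-sparse signal even though the linear system is highly underdetermined.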

Course Syllabus

Lecture Notes

Homework Problems

Suggested Readings for Presentations/Projects

You may also select another paper, subject to the instructor's approval.
  1. ‘‘The Dantzig selector: Statistical estimation when p is much larger than n,’’ E. Candes and T. Tao, The Annals of Statistics, 2007.

  2. ‘‘The convex geometry of linear inverse problems,’’ V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky, Foundations of Computational Mathematics, 2012.

  3. ‘‘Rank Awareness in Joint Sparse Recovery,’’ M. E. Davies and Y. C. Eldar, IEEE Transactions on Information Theory, 2012.

  4. ‘‘Convex recovery of a structured signal from independent random linear measurements,’’ J. A. Tropp, Sampling Theory, a Renaissance: Compressive Sensing and Other Developments, 2015.

  5. ‘‘Near-optimal adaptive compressed sensing,’’ M. Malloy and R. Nowak, IEEE Transactions on Information Theory, 2014.

  6. ‘‘A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers,’’ S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu, Statistical Science, 2012.

  7. ‘‘Localization from Incomplete Noisy Distance Measurements,’’ A. Javanmard and A. Montanari, Foundations of Computational Mathematics, 2012.

  8. ‘‘Tensor decompositions for learning latent variable models,’’ A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky, Journal of Machine Learning Research, 2014.

  9. ‘‘A geometric analysis of subspace clustering with outliers,’’ M. Soltanolkotabi and E. J. Candes, The Annals of Statistics, 2013.

  10. ‘‘Solving random quadratic systems of equations is nearly as easy as solving linear systems,’’ Y. Chen and E. J. Candes, Communications on Pure and Applied Mathematics, 2015.

  11. ‘‘Blind deconvolution using convex programming,’’ A. Ahmed, B. Recht, J. Romberg, IEEE Transactions on Information Theory, 2014.

  12. ‘‘Designing Statistical Estimators That Balance Sample Size, Risk, and Computational Cost,’’ J. J. Bruer, J. A. Tropp, V. Cevher, and S. Becker, IEEE Journal of Selected Topics in Signal Processing, 2015.

  13. ‘‘1-bit Matrix Completion,’’ M. A. Davenport, Y. Plan, E. van den Berg, and M. Wootters, Information and Inference: A Journal of the IMA, 2014.

  14. ‘‘Achieving Exact Cluster Recovery Threshold via Semidefinite Programming,’’ B. Hajek, Y. Wu, and J. Xu, IEEE Transactions on Information Theory, 2016.

  15. ‘‘The landscape of empirical risk for non-convex losses,’’ S. Mei, Y. Bai, and A. Montanari, arXiv preprint arXiv:1607.06534, 2016.

  16. ‘‘Implicit Regularization in Nonconvex Statistical Estimation: Gradient Descent Converges Linearly for Phase Retrieval, Matrix Completion and Blind Deconvolution,’’ C. Ma, K. Wang, Y. Chi, and Y. Chen, arXiv preprint arXiv:1711.10467, 2017.