l0 Sparse signal processing and model selection with applications

Access & Terms of Use
open access
Copyright: Seneviratne, Seneviratne
Abstract
Sparse signal processing has far-reaching applications including compressed sensing, media compression/denoising/deblurring, microarray analysis and medical imaging. The main reason for its popularity is that many signals have a sparse representation provided the basis is suitably selected. The difficulty, however, lies in developing an efficient method of recovering such a representation. To this end, two efficient sparse signal recovery algorithms are developed in the first part of this thesis. The first method, the L0LS-CD (l0 penalized least squares via cyclic descent) algorithm, is based on direct minimization of the l0 norm via cyclic descent. The second, the QC (quadratic concave) algorithm, minimizes smooth approximations of sparsity measures, including those of the l0 norm, via the majorization-minimization (MM) technique. The L0LS-CD algorithm is developed further by extending it to its multivariate (V-L0LS-CD, vector L0LS-CD) and group (gL0LS-CD, group L0LS-CD) regression variants. Computational speed-ups to the basic cyclic descent algorithm are discussed and a greedy version of L0LS-CD is developed. The stability of these algorithms is analyzed, and the impact of the penalty parameter and of proper initialization on algorithm performance is highlighted. A suitable method for comparing the performance of sparse approximating algorithms in the presence of noise is established. Simulations compare L0LS-CD and V-L0LS-CD with a range of alternatives on under-determined as well as over-determined systems. The QC algorithm is applicable to a class of penalties that are neither convex nor concave but have what we call the quadratic concave property. Convergence proofs for this algorithm are presented, and it is compared with the Newton algorithm, the concave-convex (CC) procedure, and the class of proximity algorithms. Simulations focus on the smooth approximations of the l0 norm and compare them with other l0 denoising algorithms.

Next, two applications of sparse modeling are considered. In the first, the L0LS-CD algorithm is extended to recover a sparse transfer function in the presence of coloured noise. The second uses gL0LS-CD to recover the topology of a sparsely connected network of dynamic systems. Both applications use Laguerre basis functions for model expansion.

The role of model selection in sparse signal processing is widely neglected in the literature. The tuning/penalty parameter of a sparse approximation problem should be selected using a model selection criterion that minimizes a desired discrepancy measure. Compared with commonly used model selection methods, SURE (Stein's unbiased risk estimator) stands out as one that does not suffer from their limitations. Most model selection criteria are based on signal or prediction mean squared error. The last section of this thesis instead develops a SURE criterion for parameter mean squared error and applies the result to the l1 penalized least squares problem with grouped variables. Simulations based on topology identification of a sparse network illustrate the criterion and compare it with alternative model selection criteria.
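
As a rough illustration of the cyclic-descent idea behind L0LS-CD, the sketch below (in Python) minimizes a generic l0-penalized least squares criterion, (1/2)||y - Xb||^2 + lam*||b||_0, by sweeping through the coordinates and hard-thresholding each one. The function name, initialization, sweep order and stopping rule are illustrative assumptions, not the thesis's exact algorithm.

import numpy as np

def l0_cyclic_descent(X, y, lam, n_iter=100, tol=1e-8):
    # Generic sketch of l0-penalized least squares via cyclic (coordinate)
    # descent; details differ from the L0LS-CD algorithm in the thesis.
    n, p = X.shape
    b = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)       # ||X_j||^2 for each column
    r = y - X @ b                          # current residual
    for _ in range(n_iter):
        b_old = b.copy()
        for j in range(p):
            if col_norm2[j] == 0.0:
                continue
            r_j = r + X[:, j] * b[j]       # residual with coordinate j removed
            z = X[:, j] @ r_j / col_norm2[j]   # unpenalized LS update for b_j
            # keep the coordinate only if doing so lowers the penalized cost,
            # i.e. hard-threshold the least squares update
            b_new = z if 0.5 * col_norm2[j] * z ** 2 > lam else 0.0
            r = r_j - X[:, j] * b_new
            b[j] = b_new
        if np.max(np.abs(b - b_old)) < tol:
            break
    return b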
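
The majorization-minimization step underlying a QC-type algorithm can be illustrated with a standard quadratic majorizer of a smooth l0 approximation. The Gaussian-type surrogate with width $\sigma$ below is an assumed example for illustration, not necessarily the approximation analyzed in the thesis:

\[
\rho_\sigma(\beta) \;=\; 1 - e^{-\beta^2/(2\sigma^2)} \;\approx\; \mathbf{1}\{\beta \neq 0\},
\]
and, since $u \mapsto 1 - e^{-u/(2\sigma^2)}$ is concave in $u=\beta^2$, the tangent inequality gives the quadratic majorizer
\[
\rho_\sigma(\beta) \;\le\; \rho_\sigma\!\big(\beta^{(k)}\big)
+ \frac{e^{-(\beta^{(k)})^2/(2\sigma^2)}}{2\sigma^2}
\Big(\beta^2 - \big(\beta^{(k)}\big)^2\Big).
\]
Each MM iteration replaces the penalty by this surrogate, so the penalized least squares step reduces to a weighted ridge problem whose solution becomes $\beta^{(k+1)}$.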
Author(s)
Seneviratne, Seneviratne
Supervisor(s)
Solo, Victor
Publication Year
2012
Resource Type
Thesis
Degree Type
PhD Doctorate
Files
whole.pdf (1.45 MB, Adobe Portable Document Format)