Abstract
The use of sparsity has emerged in the last fifteen years as an important tool for solving many problems in the areas of signal processing and statistical
inference. In this dissertation we pursue three significant applications of sparsity: sparse linear regression, low rank matrix completion, and sparse
inverse covariance selection. In the first and third topics, sparsity refers to having a small number of nonzero vector and matrix entries, respectively,
while in the second topic it is associated with low matrix rank.
A penalized approach is considered, involving the optimization of an objective function with two terms. One term measures the goodness of fit, i.e.
the error between the observed data and the estimated solution, while the other is a penalty responsible for inducing sparse solutions; hence the name
penalized problem.
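The generic shape of such a penalized problem can be sketched as follows; the notation below (f for the fidelity term, P for the penalty, lambda for the tuning parameter, A and b for regression data) is illustrative and not taken from the abstract itself.

```latex
% Generic penalized estimation problem (illustrative notation):
%   f measures goodness of fit, P induces sparsity, \lambda > 0 balances the two.
\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \, P(x)

% Example instance: sparse linear regression with design matrix A and data b,
%   f(x) = \tfrac{1}{2}\|Ax - b\|_2^2 ,
% paired with a sparsity-inducing penalty P such as \|x\|_0 or \|x\|_1 .
```

The choice of P is what distinguishes the approaches discussed next.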
It is well understood that the natural way of inducing sparsity is through the l0 ``norm'', i.e. the counting function or the discrete metric. Since the l0
function is non convex, a large volume of literature has instead resorted to using the convex l1 norm as the penalty. This reluctance to confront the
l0 penalized problem directly is the point of departure for this dissertation. In order to bridge the gap between the l0 and l1 penalties, the focus becomes the
development of non convex optimization methods for the lq (0 < q < 1) penalized problem. The methods developed apply for any q > 0, but the emphasis is on 0 < q < 1.
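The penalty family in question can be written out concretely; the display below uses standard notation not spelled out in the abstract. The l0 count is recovered in the limit q tending to 0, and the convex l1 norm is the case q = 1, which is the sense in which lq bridges the two.

```latex
% The lq penalty for a vector x (a quasi-norm for 0 < q < 1):
\|x\|_q^q = \sum_{i=1}^{n} |x_i|^q , \qquad 0 < q < 1 .

% Limiting cases that the lq family interpolates between:
%   \lim_{q \to 0^+} \|x\|_q^q = \|x\|_0 = \#\{\, i : x_i \neq 0 \,\}   (counting ``norm'')
%   q = 1 :\quad \|x\|_1 = \sum_{i=1}^{n} |x_i|                          (convex l1 norm)
```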