In system identification we model dynamical systems from measured data. This data-driven approach to modelling is useful since many real-world systems are difficult to model with physical principles. Hence, a need for system identification arises in many applications involving simulation, prediction, and model-based control.

Some of the classical approaches to system identification can lead to numerically intractable or ill-posed optimization problems. As an alternative, it has recently been shown beneficial to use so-called regularization techniques, which make the ill-posed problems ‘regular’. One type of regularization is to introduce a certain rank constraint. However, this in general still leads to a numerically intractable problem, since the rank function is non-convex. One possibility is then to use a convex approximation of rank, which we will do here.

The nuclear norm, i.e., the sum of the singular values, is a popular convex surrogate for the rank function. Since its introduction in 2001, the resulting heuristic has been widely used in, e.g., signal processing, machine learning, control, and system identification. The nuclear norm heuristic introduces a *regularization parameter* which governs the trade-off between model fit and model complexity. This parameter is difficult to tune, and the current thesis revolves around this issue.
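The relation between the rank function and its convex surrogate can be made concrete on a toy matrix. The sketch below (a generic illustration, not an estimator from this thesis) builds a rank-2 matrix and compares its rank, which counts the nonzero singular values and is non-convex, with its nuclear norm, which sums them and is convex.

```python
import numpy as np

rng = np.random.default_rng(0)

# Construct a matrix of rank 2 as a product of thin factors.
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))

s = np.linalg.svd(A, compute_uv=False)
rank = np.linalg.matrix_rank(A)   # counts singular values above a tolerance (non-convex)
nuclear_norm = s.sum()            # convex surrogate: the sum of the singular values
```

Because the nuclear norm is a norm, problems penalized by it remain convex, which is what makes the heuristic computationally attractive.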

In this thesis, we first propose a choice of the regularization parameter based on the statistical properties of fictitious validation data. This can be used to avoid computationally costly techniques such as cross-validation, where the problem is solved multiple times to find a suitable parameter value. The proposed choice can also be used as an initialization for search methods that minimize some criterion, e.g. a validation cost, over the parameter domain.
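To see why such a direct choice is attractive, consider the costly alternative it replaces. The sketch below (a schematic illustration under assumed data, not the thesis's procedure) tunes the parameter by re-solving a simple nuclear-norm-regularized problem on a grid and scoring each solution on validation data; here `svt` is singular value soft-thresholding, the closed-form solution of the denoising problem min_X ½‖X − Y‖²_F + λ‖X‖_*.

```python
import numpy as np

def svt(Y, lam):
    """Singular value thresholding: closed-form minimizer of
    0.5 * ||X - Y||_F^2 + lam * ||X||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(1)
truth = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 10))
train = truth + 0.1 * rng.standard_normal(truth.shape)   # estimation data
valid = truth + 0.1 * rng.standard_normal(truth.shape)   # validation data

# Grid search: solve once per candidate value, score on validation data.
lams = np.logspace(-2, 1, 20)
scores = [np.linalg.norm(svt(train, lam) - valid) for lam in lams]
best_lam = lams[int(np.argmin(scores))]
```

Each grid point requires a full solve; a statistically motivated one-shot choice of the parameter avoids this loop entirely, or at least provides a good starting point for it.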

Secondly, we study how the estimated system changes as a function of the parameter over its entire domain, which can be interpreted as a sensitivity analysis. For this we suggest an algorithm to compute a so-called approximate regularization path with error guarantees, where the regularization path is the optimal solution as a function of the parameter. We can then guarantee that the model fit, or alternatively the nuclear norm, of the approximation deviates from the optimum by less than a pre-specified tolerance. Furthermore, we bound the ℓ2-norm of the Hankel singular value approximation error, which means that in a certain subset of the parameter domain, the optimal Hankel singular values returned by the nuclear norm heuristic are guaranteed not to change by more (in ℓ2-norm) than a known, bounded quantity.
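The idea of a regularization path can be illustrated schematically. The sketch below (a plain grid-based illustration, not the adaptive algorithm with guarantees developed in the thesis) traces the solution of the same nuclear-norm denoising problem over a decreasing grid of parameter values and records how the singular values of the estimate, standing in for the Hankel singular values, move between neighbouring grid points.

```python
import numpy as np

def svt(Y, lam):
    """Singular value thresholding: solves 0.5*||X - Y||_F^2 + lam*||X||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(2)
Y = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 6))

# Geometric grid over the parameter domain (the thesis's algorithm instead
# chooses the grid adaptively so that a pre-specified tolerance is met).
lams = [2.0]
while lams[-1] > 0.05:
    lams.append(lams[-1] * 0.8)

path = {lam: svt(Y, lam) for lam in lams}
sv_path = {lam: np.linalg.svd(X, compute_uv=False) for lam, X in path.items()}

# l2-norm deviation of the singular values between neighbouring grid points.
gaps = [np.linalg.norm(sv_path[a] - sv_path[b]) for a, b in zip(lams, lams[1:])]
max_gap = max(gaps)
```

On such a path, bounding the change between neighbouring solutions is what allows guarantees on how much the optimal quantities can vary within each segment of the parameter domain.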

Our contributions are demonstrated and evaluated by numerical examples using simulated and benchmark data.