LEARN is an acronym for
It is a five-year project, running from January 1, 2011 to December 31, 2015, supported by a grant of 2.5 million Euros from the European Research Council (ERC). The project is carried out by a joint team from the Division of Automatic Control at Linköping University (LiU) and the Division of Automatic Control at the Royal Institute of Technology (KTH) in Stockholm.
An outline of the ideas in LEARN, along with some results from the first year, is given in
Lennart Ljung, Håkan Hjalmarsson, and Henrik Ohlsson: Four encounters with system identification. European Journal of Control, 2011, No. 5-6, pp. 449-471.
The Principal Investigator is Lennart Ljung (LiU), with Håkan Hjalmarsson (KTH) as co-PI. The project is organized into five themes:
The development of convex and semidefinite programming has been booming in recent years and has played a major role in several research communities. Convexification of estimation problems has been a very visible theme in the statistics community. However, such activities have not been particularly pronounced in the System Identification community, which has largely stuck to a maximum likelihood (or related) framework. It is perhaps symptomatic that some very recent and interesting applications of semidefinite programming techniques to system identification have their origins in optimization rather than identification research groups. To be fair, it must be said that research on subspace identification methods, and attempts to work with predictors that are linear in the parameters, such as LS Support Vector Machines and kernel-like techniques, can also be seen as part of a convexification trend. There is thus a clear link to the research in Theme IV. Another area that belongs to the state of the art in this context is model reduction. Model reduction is closely related to System Identification through its inherent system-approximation feature. It is therefore interesting to follow convexification attempts for model reduction problems and see whether they have implications for the identification problem.
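As one illustrative example of this convexification trend (a formulation from the literature, not a description of LEARN's own methods), a low-order linear model can be fitted by regularizing a least-squares criterion with the nuclear norm of a Hankel matrix formed from the impulse-response coefficients $g_1,\dots,g_n$:
\[
\min_{g}\;\sum_{t}\Big(y(t)-\sum_{k=1}^{n} g_k\,u(t-k)\Big)^{2}+\lambda\,\big\lVert \mathcal{H}(g)\big\rVert_{*}.
\]
The nuclear norm $\lVert\cdot\rVert_{*}$ is a convex surrogate for the rank of $\mathcal{H}(g)$, and hence for the model order, and the resulting problem can be cast as a semidefinite program.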
Using fundamental system limitations to establish benchmarks for what can realistically be achieved with given resources is a commonly used concept in engineering. The importance of understanding the limitations that data-based modeling imposes on engineering systems is accentuated as complexity and structural constraints grow. To this end, the Cramér-Rao lower bound (CRLB) is fundamental. It translates into performance bounds for model-based applications employing identified models; for example, it implies a lower bound on the regulation precision achievable in control applications.
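In its standard form, for an unbiased estimator $\hat\theta_N$ of a parameter vector $\theta_0$ based on $N$ data points, the bound reads
\[
\mathrm{Cov}\,\hat\theta_N \;\succeq\; \mathcal{I}_N(\theta_0)^{-1},
\qquad
\mathcal{I}_N(\theta_0)=\mathrm{E}\!\left[\nabla_\theta \log p(y^N;\theta)\,\nabla_\theta \log p(y^N;\theta)^{T}\right]\Big|_{\theta=\theta_0},
\]
where $\mathcal{I}_N$ is the Fisher information matrix and $p(y^N;\theta)$ is the likelihood of the observed data.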
The limitations discussed in Theme II imply that it is impossible to identify a complex system accurately with only a modest amount of (noisy) sensor information. The key to circumventing this curse of complexity is the observation that an application often requires only a modest number of system properties to be accurately modelled.
Various non-parametric methods have long been important tools in statistics. Nearest-neighbour, kernel, and local approximation techniques are successfully used to estimate surfaces in regressor spaces. We ourselves have experience in developing, analysing and testing such local approaches (Direct Weight Optimisation, DWO). Support Vector Machines (SVMs) can also be understood as kernel methods, although the formal treatment is via parameter estimation. The convergence of Machine Learning towards Statistical Learning (or the other way around) has stressed the role of kernel approximations. Gaussian Process regression (originally conceived in the 1950s), for example, has become a widely used tool for function approximation in Machine Learning, also in applications to dynamical systems. Some very recent contributions have shown that conventional parametric methods can be successfully combined with learning techniques, even for estimating standard linear models.
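As a minimal sketch of this idea (the simulated system, the lag structure and the kernel choice below are illustrative assumptions, not part of the project), a Gaussian Process can serve as a non-parametric one-step-ahead predictor built on NARX-style regressors:
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Simulate a simple nonlinear system (purely illustrative).
rng = np.random.default_rng(0)
N = 500
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = 0.8*y[t-1] - 0.2*y[t-2] + np.tanh(u[t-1]) + 0.05*rng.standard_normal()

# NARX regressors: phi(t) = [y(t-1), y(t-2), u(t-1)], target y(t).
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
Y = y[2:]

# Gaussian Process regression as a non-parametric one-step-ahead predictor.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(Phi[:400], Y[:400])
y_hat, y_std = gp.predict(Phi[400:], return_std=True)
print("held-out RMSE:", np.sqrt(np.mean((y_hat - Y[400:])**2)))
\end{verbatim}
The WhiteKernel term absorbs the measurement noise, so the predictive standard deviation returned by the GP provides a rough uncertainty measure alongside the point prediction.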
Manifold learning techniques are also related to this area. These are methods to identify regions (manifolds) in the regressor space that are of special interest for a given application. By focusing on the system's behaviour on that manifold, simpler and more effective models can be constructed.
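A hedged illustration of this idea (the helix data, the choice of Isomap and the nearest-neighbour model are assumptions made only for the example): first embed the observed regressors into a low-dimensional manifold coordinate, then fit a simple model in that coordinate.
\begin{verbatim}
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsRegressor

# Toy regressors: 3-D points that actually lie near a 1-D curve (a manifold).
rng = np.random.default_rng(1)
s = rng.uniform(0, 3, 400)                          # latent coordinate along the manifold
X = np.column_stack([np.cos(s), np.sin(s), 0.5*s])  # regressors on a helix
X += 0.01*rng.standard_normal(X.shape)
y = np.sin(2*s) + 0.05*rng.standard_normal(400)     # output depends only on the manifold coordinate

# Learn a 1-D coordinate on the manifold, then fit a simple local model there.
iso = Isomap(n_neighbors=10, n_components=1).fit(X[:300])
model = KNeighborsRegressor(n_neighbors=5).fit(iso.transform(X[:300]), y[:300])
pred = model.predict(iso.transform(X[300:]))
print("held-out RMSE:", np.sqrt(np.mean((pred - y[300:])**2)))
\end{verbatim}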
Theme II: Fundamental Limitations
State of the art
Objectives
While the basic expression for the CRLB is well known, understanding how it depends on system and model complexity, as well as on the experimental conditions during data collection, has been the subject of rather intense research for a range of problem settings. Recently it has been recognized that the variance of estimated frequency functions is subject to a water-bed effect, reminiscent of Bode's sensitivity integral.
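To fix ideas, a classical asymptotic expression (valid for open-loop data as the model order $n$ and the number of data $N$ grow) shows how such variance results depend on the problem data; the water-bed effect constrains how this variance can be traded off across frequencies:
\[
\mathrm{Var}\,\hat G_N(e^{i\omega}) \;\approx\; \frac{n}{N}\,\frac{\Phi_v(\omega)}{\Phi_u(\omega)},
\]
where $\hat G_N$ is the estimated frequency function and $\Phi_v$, $\Phi_u$ are the noise and input spectra.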
Results
Theme III: Experiment Design and Reinforcement Techniques
State of the art
Objectives
Thus, carefully designed but still ``inexpensive'' experiments may provide sufficient information for the intended application. Consider, for example, the very simple case of optimizing the yield of a product with respect to one parameter. The model of the yield can be very poor far away from the optimum (as long as it does not predict a better yield than the real optimum); it is only close to the optimum that the modeling accuracy becomes important. It is immediate from this example that obtaining such models requires access to actuation abilities: measurements should be concentrated in the vicinity of the optimal parameter value.
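A minimal sketch of this kind of sequential strategy (the quadratic yield curve and the simple refit-and-re-centre rule are illustrative assumptions only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def measure_yield(x):
    # Noisy measurement of the true (unknown) yield curve; its peak is at x = 1.3.
    return 2.0 - (x - 1.3)**2 + 0.05*rng.standard_normal()

# Crude screening experiment over the whole admissible range.
x = np.linspace(0.0, 3.0, 5)
y = np.array([measure_yield(xi) for xi in x])

for _ in range(6):
    # Fit a local quadratic model; it only needs to be accurate near the optimum.
    a, b, c = np.polyfit(x, y, 2)
    x_opt = -b/(2*a) if a < 0 else x[np.argmax(y)]
    # Spend the next measurements close to the estimated optimum instead of
    # spreading them uniformly over the whole range.
    x_new = x_opt + 0.2*rng.standard_normal(3)
    y_new = np.array([measure_yield(xi) for xi in x_new])
    x, y = np.concatenate([x, x_new]), np.concatenate([y, y_new])

print("estimated optimal operating point:", round(float(x_opt), 3))
\end{verbatim}
Each iteration spends its measurement budget near the current estimate of the optimum, which is where the model needs to be accurate, rather than distributing it uniformly over the operating range.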
There are two well-known issues that hamper the implementation of
this objective:
Convexification has proved to be a viable route to cope with Issue i) for
linear dynamical systems.
Corresponding methods for non-linear dynamical systems are still
lacking.
Results
Theme IV: Potentials of Non-parametric Models of Dynamical Systems
State of the art
Objectives
The goal of the research in this area is to provide methodology and algorithms for estimating complex (nonlinear) dynamical systems in a reliable and effective manner. We strongly feel that non-parametric methods are underutilised in the system identification research community and that powerful results can be obtained by adjoining manifold learning and machine learning strategies to conventional identification methods.
Results