%% Nonlinear State Estimation Using Unscented Kalman Filter
%
% This example shows how to use the unscented Kalman filter algorithm for
% nonlinear state estimation for the van der Pol oscillator. 
%
% This example additionally uses the Signal Processing Toolbox™.

% Copyright 2016 The MathWorks, Inc.

%% Introduction
%
% System Identification Toolbox offers two commands for nonlinear state
% estimation:
%
% * |extendedKalmanFilter|: First-order, discrete-time extended Kalman filter
% * |unscentedKalmanFilter|: Discrete-time unscented Kalman filter
%
% A typical workflow for using these commands is as follows: 
%
% # Model your plant and sensor behavior.
% # Construct and configure the |extendedKalmanFilter| or |unscentedKalmanFilter| object.
% # Perform state estimation by using the |predict| and |correct| commands with the object.
% # Analyze the results to gain confidence in filter performance.
% # Deploy the filter on your hardware. You can generate code for these
% filters using MATLAB Coder™ (see the code generation sketch at the end of
% this section).
% 
% This example uses the |unscentedKalmanFilter| command to demonstrate this
% workflow.
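%
% As a preview of step 5, a minimal sketch of a code-generation entry point
% is shown below. It wraps the filter in a function with a persistent
% variable so that the filter state carries over between calls. The wrapper
% name ukfCodegenWrapper is hypothetical; the helper functions it references
% are the ones constructed later in this example.
%
%   function xCorr = ukfCodegenWrapper(yMeas) %#codegen
%   % Process one measurement: correct with y[k], then predict x[k+1|k]
%   persistent ukf
%   if isempty(ukf)
%       ukf = unscentedKalmanFilter(@vdpStateFcn,...
%           @vdpMeasurementNonAdditiveNoiseFcn,[2;0],...
%           'HasAdditiveMeasurementNoise',false);
%       ukf.MeasurementNoise = 0.2;
%       ukf.ProcessNoise = diag([0.02 0.1]);
%   end
%   xCorr = correct(ukf,yMeas);
%   predict(ukf);
%   end
%
% You could then generate C code with a command such as
% |codegen ukfCodegenWrapper -args {0}|, assuming MATLAB Coder is installed.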

%% Plant Modeling and Discretization
%
% The unscented Kalman filter (UKF) algorithm requires a function that
% describes the evolution of states from one time-step to the next. This is
% typically called the state transition function. unscentedKalmanFilter
% supports the following two function forms:
%
% # Additive process noise: $x[k] = f(x[k-1], u[k-1]) + w[k-1]$
% # Non-additive process noise: $x[k] = f(x[k-1], w[k-1], u[k-1])$
%
% Here, f(..) is the state transition function, x is the state, and w is
% the process noise. u is optional and represents additional inputs to f,
% for instance system inputs or parameters, and can be specified as zero or
% more function arguments. Additive noise means that the state and the
% process noise are related linearly. If the relationship is nonlinear, use
% the second form. When you create the unscentedKalmanFilter object, you
% specify f(..) and also whether the process noise is additive or
% non-additive.
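%
% As an illustration of the two forms, the following sketches use
% hypothetical function names and a hypothetical scalar model; they are not
% part of this example:
%
%   % Additive form: w is not an argument; the filter adds it internally
%   function x = myStateFcnAdditive(x)
%   x = 0.9*x; % x[k] = f(x[k-1])
%   end
%
%   % Non-additive form: w is the argument immediately after x
%   function x = myStateFcnNonAdditive(x,w)
%   x = 0.9*x*(1 + w); % x[k] = f(x[k-1],w[k-1])
%   end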

%%
% The system in this example is the van der Pol oscillator with mu=1. This
% 2-state system is described with the following set of nonlinear ordinary
% differential equations (ODE):
%
% $$ \begin{array} {l} \dot{x}_1 = x_2 \\ \dot{x}_2 = (1-x_1^2)x_2-x_1 \end{array} $$
%
% Denote this equation as $\dot{x}=f_c(x)$, where $x=[x_1; x_2]$. The
% process noise w does not appear in the system model. You can assume it is
% additive for simplicity.
%
% unscentedKalmanFilter requires a discrete-time state transition function,
% but the plant model is continuous-time. You can use a discrete-time
% approximation to the continuous-time model. Euler discretization is one
% common approximation method. Assume your sample time is $T_s$, and denote the
% continuous-time dynamics you have as $\dot{x}=f_c(x)$. Euler
% discretization approximates the derivative operator as $\dot{x} \approx
% \frac{x[k+1]-x[k]}{T_s}$. The resulting discrete-time state-transition
% function is:
%
% $$\begin{array} {ll}
%    x[k+1]&= x[k] + f_c(x[k]) \; T_s \\
%          &= f(x[k])
%   \end{array}$$
%
% The accuracy of this approximation depends on the sample time $T_s$.
% Smaller $T_s$ values provide better approximations. Alternatively, you
% can use a different discretization method. For instance, higher-order
% Runge-Kutta methods provide higher accuracy at the expense of more
% computational cost for a given sample time $T_s$.
%
% Create this state-transition function and save it in a file named
% vdpStateFcn.m. Use the sample time $T_s = 0.05$ s. You provide this
% function to the |unscentedKalmanFilter| during object construction.
type vdpStateFcn
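%%
% For reference, a state transition function of this form, using the Euler
% step with $T_s = 0.05$ s, could look like the following sketch. The
% vdpStateFcn.m file displayed above is the authoritative version; this
% sketch only illustrates the structure.
%
%   function x = vdpStateFcn(x)
%   % Discrete-time approximation of the van der Pol ODEs for mu = 1
%   dt = 0.05; % [s] Sample time
%   x = x + vdpStateFcnContinuous(x)*dt; % Euler step: x[k+1] = x[k] + fc(x[k])*Ts
%   end
%
%   function dxdt = vdpStateFcnContinuous(x)
%   % Continuous-time van der Pol dynamics with mu = 1
%   dxdt = [x(2); (1-x(1)^2)*x(2)-x(1)];
%   end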

%% Sensor Modeling
%
% unscentedKalmanFilter also needs a function that describes how the model
% states are related to sensor measurements. unscentedKalmanFilter supports
% the following two function forms:
%
% # Additive measurement noise: $y[k] = h(x[k], u[k]) + v[k]$
% # Non-additive measurement noise: $y[k] = h(x[k], v[k], u[k])$
%
% Here, h(..) is the measurement function and v is the measurement noise.
% u is optional and represents additional inputs to h, for instance system
% inputs or parameters, and can be specified as zero or more function
% arguments. These inputs can be different from the inputs in the state
% transition function.
%
% For this example assume you have measurements of the first state $x_1$
% within some percentage error:
%
% $$ y[k] = x_1[k] \; (1+v[k]) $$
%
% This falls into the category of non-additive measurement noise because
% the measurement noise is not simply added to a function of the states.
% You want to estimate both $x_1$ and $x_2$ from the noisy measurements.
%
% Create this measurement function and save it in a file named
% vdpMeasurementNonAdditiveNoiseFcn.m. You provide this function to the
% |unscentedKalmanFilter| during object construction.
type vdpMeasurementNonAdditiveNoiseFcn
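%%
% For reference, a measurement function implementing $y[k] = x_1[k](1+v[k])$
% could look like the following sketch. The
% vdpMeasurementNonAdditiveNoiseFcn.m file displayed above is the
% authoritative version.
%
%   function yk = vdpMeasurementNonAdditiveNoiseFcn(xk,vk)
%   % Measure the first state with multiplicative (non-additive) noise
%   yk = xk(1)*(1+vk);
%   end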

%% Filter Construction
%
% Construct the filter by providing function handles to the state
% transition and measurement functions, followed by your initial state
% guess. The state transition model has additive noise. This is the default
% setting in the filter, hence you do not need to specify it. The
% measurement model has non-additive noise, which you must specify by
% setting the |HasAdditiveMeasurementNoise| property of the object to
% |false|. You must do this during object construction. If your application
% has non-additive process noise in the state transition function, set the
% |HasAdditiveProcessNoise| property to |false|.

% Your initial state guess at time k, utilizing measurements up to time k-1: xhat[k|k-1]
initialStateGuess = [2;0]; % xhat[k|k-1]
% Construct the filter
ukf = unscentedKalmanFilter(...
    @vdpStateFcn,... % State transition function
    @vdpMeasurementNonAdditiveNoiseFcn,... % Measurement function
    initialStateGuess,...
    'HasAdditiveMeasurementNoise',false);
%%
% Provide your knowledge of the measurement noise covariance
R = 0.2; % Variance of the measurement noise v[k]
ukf.MeasurementNoise = R;
%%
% The ProcessNoise property stores the process noise covariance. Set it to
% account for model inaccuracies and the effect of unknown disturbances on
% the plant. This example uses the true model, but discretization
% introduces errors. The example does not include any disturbances, for
% simplicity. Set the covariance to a diagonal matrix with less noise on
% the first state and more noise on the second state, reflecting the
% expectation that the second state is more affected by modeling errors.
ukf.ProcessNoise = diag([0.02 0.1]);

%% Estimation Using predict and correct Commands
% In your application, the filter processes measurement data from your
% hardware in real time as the data arrives. This example demonstrates that
% operation by first generating a set of measurement data and then feeding
% it to the filter one time step at a time.
%
% Simulate the van der Pol oscillator for 5 seconds with the filter sample
% time 0.05 [s] to generate the true states of the system.
T = 0.05; % [s] Filter sample time
timeVector = 0:T:5;
[~,xTrue]=ode45(@vdp1,timeVector,[2;0]);
%%
% Generate the measurements assuming that a sensor measures the first
% state with a multiplicative error whose standard deviation is
% approximately 45% (sqrt(R)).
rng(1); % Fix the random number generator for reproducible results
yTrue = xTrue(:,1);
yMeas = yTrue .* (1+sqrt(R)*randn(size(yTrue))); % sqrt(R): Standard deviation of noise

%%
% Pre-allocate space for data that you will analyze later
Nsteps = numel(yMeas); % Number of time-steps
xCorrected = zeros(Nsteps,2); % Corrected state estimates
PCorrected = zeros(Nsteps,2,2); % Corrected state estimation error covariances
e = zeros(Nsteps,1); % Residuals (or innovations)

%%
% Perform online estimation of the states x using the |correct| and
% |predict| commands. Provide generated data to the filter one time-step at
% a time.
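%
% The residual computation below uses vdpMeasurementFcn, a noise-free
% companion of the measurement function that returns the predicted
% measurement without the noise term. Save it in a file named
% vdpMeasurementFcn.m; a minimal sketch, assuming the sensor simply
% observes the first state, is:
%
%   function yk = vdpMeasurementFcn(xk)
%   % Noise-free measurement: the sensor observes the first state
%   yk = xk(1);
%   end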
for k=1:Nsteps
    % Let k denote the current time
    %
    % Residuals (or innovations): Measured output - Predicted output
    e(k) = yMeas(k) - vdpMeasurementFcn(ukf.State); % ukf.State is x[k|k-1] at this point
    % Incorporate the measurements at time k into the state estimates by
    % using the "correct" command. This updates the State and StateCovariance
    % properties of the filter to contain x[k|k] and P[k|k]. These values
    % are also produced as the output of the "correct" command.
    [xCorrected(k,:), PCorrected(k,:,:)] = correct(ukf,yMeas(k));
    % Predict the states at next time step, k+1. This updates the State and
    % StateCovariance properties of the filter to contain x[k+1|k] and
    % P[k+1|k]. These will be utilized by the filter at the next time-step
    predict(ukf);
end

%% Results and Validation
%
% Plot the true and estimated states over time. Also plot the measured
% value of the first state.
figure();
subplot(2,1,1);
plot(timeVector,xTrue(:,1),timeVector,xCorrected(:,1),timeVector,yMeas(:));
legend('True','Filter estimate','Measured')
ylim([-2.6 2.6]);
ylabel('x_1');
subplot(2,1,2);
plot(timeVector,xTrue(:,2),timeVector,xCorrected(:,2));
ylim([-3 1.5]);
xlabel('Time [s]');
ylabel('x_2');
%%
% The top plot shows the true, estimated, and measured values of the first
% state. The filter uses the system model and noise covariance information
% to produce an improved estimate over the measurements. The bottom plot
% shows the second state. The filter produces a good estimate of it.

%%
% The validation of unscented and extended Kalman filter performance is
% typically done using extensive Monte Carlo simulations. These simulations
% should test variations of the process and measurement noise realizations,
% plant operation under various conditions, and the initial state and state
% covariance guesses. The key signal of interest for validating the state
% estimation is the residuals (or innovations). In this example, you
% perform residual analysis for a single simulation. Plot the residuals.
figure();
plot(timeVector, e);
xlabel('Time [s]');
ylabel('Residual (or innovation)');
%%
% The residuals should have:
%
% # Small magnitude
% # Zero mean
% # No autocorrelation, except at zero lag
%
% The mean value of the residuals is:
mean(e)
%%
% This is small relative to the magnitude of the residuals. The
% autocorrelation of the residuals can be calculated with the xcorr
% function in the Signal Processing Toolbox.
[xe,xeLags] = xcorr(e,'coeff'); % 'coeff': normalize by the value at zero lag
% Only plot non-negative lags
idx = xeLags>=0;
figure();
plot(xeLags(idx),xe(idx));
xlabel('Lags');
ylabel('Normalized correlation');
title('Auto-correlation of residuals (innovation)');
%%
% The correlation is small for all lags except 0. The mean correlation is
% close to zero, and the correlation doesn't show any significant
% non-random variations. These characteristics increase confidence in
% filter performance.
%
% In reality, the true states are never available. However, when
% performing simulations where you have access to the true states, you can
% look at the errors between the estimated and true states as an extra
% sanity check. These errors must satisfy criteria similar to those for the
% residuals:
%
% # Small magnitude
% # Variance within the filter error covariance estimate
% # Zero mean
% # Uncorrelated
%
% First, look at the error and the $1\sigma$ uncertainty bounds from the
% filter error covariance estimate.
eStates = xTrue-xCorrected;
figure();
subplot(2,1,1);
plot(timeVector,eStates(:,1),...               % Error for the first state
    timeVector, sqrt(PCorrected(:,1,1)),'r', ... % 1-sigma upper-bound
    timeVector, -sqrt(PCorrected(:,1,1)),'r');   % 1-sigma lower-bound
xlabel('Time [s]');
ylabel('Error for state 1');
title('State estimation errors');
subplot(2,1,2);
plot(timeVector,eStates(:,2),...               % Error for the second state
    timeVector,sqrt(PCorrected(:,2,2)),'r', ...  % 1-sigma upper-bound
    timeVector,-sqrt(PCorrected(:,2,2)),'r');    % 1-sigma lower-bound
xlabel('Time [s]');
ylabel('Error for state 2');
legend('State estimation error','1-sigma uncertainty bound',...
    'Location','Best');

%%
% The error bound for state 1 approaches 0 at t = 2.15 seconds because of
% the sensor model (MeasurementFcn), which has the form $x_1[k](1+v[k])$.
% At t = 2.15 seconds the true and estimated states are near zero, which
% implies that the measurement error in absolute terms is also near zero.
% This is reflected in the filter's state estimation error covariance.
%
% Calculate what percentage of the points are beyond the 1-sigma
% uncertainty bound.
distanceFromBound1 = abs(eStates(:,1))-sqrt(PCorrected(:,1,1));
percentageExceeded1 = nnz(distanceFromBound1>0) / numel(eStates(:,1));
distanceFromBound2 = abs(eStates(:,2))-sqrt(PCorrected(:,2,2));
percentageExceeded2 = nnz(distanceFromBound2>0) / numel(eStates(:,2));
[percentageExceeded1 percentageExceeded2]
%% 
% The first state estimation errors exceed the $1\sigma$ uncertainty bound
% at approximately 14% of the time steps. Fewer than 30% of the errors
% exceeding the 1-sigma uncertainty bound indicates good estimation, and
% this criterion is satisfied for both states. The 0% value for the second
% state means that the filter is conservative: most likely the combined
% process and measurement noise specifications are too high. You can likely
% obtain better performance by tuning these parameters.
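%
% For instance, a possible next step is to reduce the assumed noise
% covariances and rerun the estimation. The values below are hypothetical
% starting points for such a tuning experiment, not validated settings:
%
%   ukf.State = initialStateGuess;        % Reset the state before rerunning
%   ukf.StateCovariance = eye(2);         % Reset the state covariance
%   ukf.ProcessNoise = diag([0.01 0.05]); % Smaller assumed process noise
%   ukf.MeasurementNoise = 0.1;           % Smaller assumed measurement noise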

%%
% Calculate the mean autocorrelation of state estimation errors. Also plot
% the autocorrelation.
mean(eStates)
[xeStates1,xeStatesLags1] = xcorr(eStates(:,1),'coeff'); % 'coeff': normalize by the value at zero lag
[xeStates2,xeStatesLags2] = xcorr(eStates(:,2),'coeff'); % 'coeff'
% Only plot non-negative lags
idx = xeStatesLags1>=0;
figure();
subplot(2,1,1);
plot(xeStatesLags1(idx),xeStates1(idx));
xlabel('Lags');
ylabel('For state 1');
title('Normalized auto-correlation of state estimation error');
subplot(2,1,2);
plot(xeStatesLags2(idx),xeStates2(idx));
xlabel('Lags');
ylabel('For state 2');

%%
% The mean value of the errors is small relative to the value of the
% states. The auto-correlation of state estimation errors show little
% non-random variations for small lag values, but these are much smaller
% than the normalized peak value 1. Combined with the fact that the
% estimated states are accurate, this behavior of the residuals can be
% considered as satisfactory results.

%% Summary
% This example has shown the steps of constructing and using the unscented
% Kalman filter for state estimation of a nonlinear system. You estimated
% states of a van der Pol oscillator from noisy measurements, and validated
% the estimation performance.

displayEndOfDemoMessage(mfilename)