
%% Lane Detection
% Lane detection is a critical processing stage in Advanced Driver
% Assistance Systems (ADAS). Automatically detecting lane boundaries from
% a video stream is computationally challenging, so hardware
% accelerators such as FPGAs and GPUs are often required to achieve
% real-time performance.
%
% This example shows FPGA acceleration of lane marking detection. An
% FPGA-based lane candidate generator is coupled with a software-based
% polynomial fitting engine to determine lane boundaries.
%
% Copyright 2016 The MathWorks, Inc.

%% System Overview
% The <matlab:LaneDetectionHDLExample LaneDetectionHDLExample.slx> system
% is shown below. The HDL Lane Detector subsystem represents the
% hardware-accelerated part of the design, while the Compute Ego Lanes and
% Fit Lanes subsystems represent the software-based polynomial fitting
% engine. Prior to the Frame to Pixels block, the RGB input is converted
% to intensity (grayscale). In addition, the Blanking Insertion block
% inserts additional blank pixels at the bottom of the frame, ensuring
% that the control signalling can accommodate the [700x640] output frame.
 
modelname = 'LaneDetectionHDLExample';
open_system(modelname);
set_param(modelname, 'SampleTimeColors', 'on');
set_param(modelname,'SimulationCommand','Update');
set_param(modelname, 'Open', 'on');
set(allchild(0),'Visible', 'off');


%% HDL Lane Detector
% The HDL Lane Detector represents the hardware-accelerated part of the design. This
% subsystem receives the input pixel stream from the front-facing camera
% source, transforms the view to obtain the birds-eye view,
% and then locates lane marking candidates from the transformed view.

set_param(modelname, 'SampleTimeColors', 'off');
open_system([modelname '/HDLLaneDetector'],'force');


%% Inverse Perspective Mapping
% The Inverse Perspective Mapping subsystem transforms the front-facing
% camera view to a birds-eye perspective. Working with the image
% in this view simplifies the processing requirements of the downstream
% lane detection algorithms. The front-facing view suffers from perspective
% distortion, causing the lanes to converge at the vanishing point. Because
% working with the perspective-distorted image is challenging, the first
% stage of the system corrects the distortion by transforming to the
% birds-eye view.
%
% The Inverse Perspective
% Mapping is given by the following expression:
%
% $$(\hat{x},\hat{y}) = round\left(\frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}}, \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}}\right)$$
%
% The homography matrix, *h*, is derived from four parameters of
% the physical camera setup, namely the focal length, pitch, height and
% principal point (from a pinhole camera model). Please refer to the Computer
% Vision System Toolbox(TM) documentation for further details.
%
% Direct evaluation of the source (front-facing) to destination (birds-eye) mapping in real time on
% FPGA/ASIC hardware is challenging. The requirement for division, along
% with the potential for non-sequential memory access from a frame buffer,
% means that the computational requirements of this part of the design are
% substantial. Therefore, instead of directly evaluating the IPM calculation
% in real time, an offline analysis of the input-to-output mapping has been
% performed and used to pre-compute a mapping scheme. This is possible because
% the homography matrix is fixed after factory calibration/installation of the camera,
% since the camera position, height and pitch are fixed.
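%
% The following MATLAB sketch, which is not part of the model, illustrates
% how this mapping could be evaluated offline for every destination pixel
% of the birds-eye image. The homography values shown here are
% placeholders; the real matrix comes from the camera calibration.
%
%   % Placeholder homography; in practice h is derived from the focal
%   % length, pitch, height and principal point of the camera.
%   h = [1.0  0.0    0.0;
%        0.0  2.5  -80.0;
%        0.0  0.002  1.0];
%
%   % Map every destination (birds-eye) pixel back to a source pixel.
%   [xDst, yDst] = meshgrid(1:640, 1:700);
%   w    = h(3,1)*xDst + h(3,2)*yDst + h(3,3);
%   xSrc = round((h(1,1)*xDst + h(1,2)*yDst + h(1,3)) ./ w);
%   ySrc = round((h(2,1)*xDst + h(2,2)*yDst + h(2,3)) ./ w);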


open_system([modelname '/HDLLaneDetector/InversePerspectiveMapping'],'force');


%% Line Buffering and Address Computation
% A full-sized projective transformation from input to output would result
% in a [900x640] output image. This requires that the full [480x640] input
% image is stored in memory, while the source pixel location is
% calculated from the destination location and the homography matrix. Ideally,
% on-chip memory should be used for this purpose,
% removing the requirement for an off-chip frame buffer. Analysis of the
% mapping from input lines to output lines reveals that in order to generate
% the first 700 lines of the top-down birds-eye output image, around 50 lines of the
% input image are required. This is an acceptable number of lines to store
% using on-chip memory.
%
% Because the on-chip memory requirements are reasonable for generation of a
% [700x640] output image, this is chosen as the output resolution. The
% HomographyBufferFSM controls storage of the input pixel stream such that
% the required lines are stored. There is now a mapping from input line to
% output line; however, the mapping of pixels at the row level still needs
% to be understood. Analysis of the mapping per row reveals that the
% mapping of pixels on a row consists of a linear stretching.
% Therefore, for each line that is to be mapped into the
% output image, a polyfit routine is performed offline to compute the
% gradient and offset. This example implements the IPM routine as a simple
% lookup table of these coefficients with online polynomial evaluation.
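%
% As a rough illustration of this scheme, the sketch below (not part of
% the model) fits a gradient and offset for each output line offline, and
% then evaluates the read address online as a linear function of the
% output column. It assumes xSrc, the per-pixel source column map, is
% available (for example from the mapping sketch above).
%
%   % Offline: fit a [gradient offset] pair per output line.
%   numLines = 700;  numCols = 640;
%   lineCoeffs = zeros(numLines, 2);
%   for row = 1:numLines
%       lineCoeffs(row,:) = polyfit(1:numCols, xSrc(row,:), 1);
%   end
%
%   % Online: evaluate the read address for a given output pixel.
%   row = 350;  col = 320;                        % example output coordinates
%   readCol = round(lineCoeffs(row,1)*col + lineCoeffs(row,2));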


open_system([modelname '/HDLLaneDetector/InversePerspectiveMapping/readAddressComputation'],'force');


%% Lane Detection
% With the birds-eye view image obtained, the actual lane detection can be
% performed. There are many techniques which can be considered for
% this purpose. To achieve an implementation which is robust, works well
% on streaming image data and which can be implemented in FPGA/ASIC hardware at
% reasonable resource cost, the approach described in [1] is employed. In
% this algorithm, a full-image convolution with a vertically oriented
% first-order Gaussian derivative filter kernel is employed, followed by
% sub-region processing.

open_system([modelname '/HDLLaneDetector/LaneDetection'],'force');

%% Vertically Orientated Filter Convolution
% Immediately following the birds-eye mapping of the input image, the
% output is convolved with a filter designed to locate strips of high
% intensity pixels on a dark background. The width of the kernel is 8
% pixels, which relates to the width of the lines that appear in the
% top-down birds-eye image. The height is set to 16, which relates to the size of the
% dashed lane markings that appear in the image.
% As the birds-eye image is physically determined by the height,
% pitch, etc. of the camera, the width at which lanes appear in this image
% is intrinsically related to their physical width on the road.
% The width and height of the kernel may need to be updated when
% operating the lane detection system in different countries.
%
% <<visionhdllanedetectfilterkernel.png>>
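%
% One plausible construction of such a kernel, shown as a sketch that is
% not part of the model, is a horizontal first-order Gaussian derivative
% replicated over 16 rows. The sigma value and the placeholder input image
% are assumptions for illustration only; the kernel used in the model was
% designed offline and may differ in detail.
%
%   kHeight = 16;  kWidth = 8;  sigmaX = 2;       % assumed kernel parameters
%   x = -(kWidth-1)/2 : (kWidth-1)/2;
%   gDeriv = -x .* exp(-x.^2/(2*sigmaX^2));       % first-order Gaussian derivative
%   kernel = repmat(gDeriv, kHeight, 1);          % extend vertically over 16 lines
%   kernel = kernel / sum(abs(kernel(:)));        % normalize
%
%   birdsEyeFrame = rand(700, 640);               % placeholder birds-eye intensity image
%   filtered = conv2(birdsEyeFrame, kernel, 'same');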
%
% The output of the filter kernel is shown below, using the jet colormap to
% highlight differences in intensity. Because the filter kernel is a
% general, vertically orientated Gaussian derivative, there is some response
% from many different regions. However, for the
% locations where a lane marking is present, there is a strong positive
% response located next to a strong negative response, which is consistent
% across columns. This characteristic of the filter output is used in the
% next stage of the detection algorithm to locate valid lane candidates.
%
% <<visionhdllanedetectfilteroutput.png>>
%
%% Lane Candidate Generation
% After convolution with the Gaussian derivative kernel, sub-region
% processing of the output is performed in order to find the coordinates
% where a lane marking is present. Each region consists of 18 lines, with
% a ping-pong memory scheme in place to ensure that data
% can be continuously streamed through the subsystem. 
%
open_system([modelname '/HDLLaneDetector/LaneDetection/LaneCandidateGeneration'],'force');

%% Histogram Column Count
% First, the HistogramColumnCount subsystem counts the number of
% thresholded pixels in each column over the 18-line region. A
% high column count indicates that a lane is likely present in the
% region. This count is performed for both the positive and the negative
% thresholded images. The positive histogram counts are offset to account for the kernel width.
% Lane candidates occur where the
% positive count and negative counts are both high. This
% exploits the previously noted property of the convolution output
% where positive
% tracks appear next to negative tracks.
%
% Internally, the column counting histogram generates the control
% signalling that selects an 18-line region, computes
% the column histogram, and outputs the result when ready. A ping-pong
% buffering scheme is in place which allows one histogram to be reading
% while the next is writing.
%
%
%% Overlap and Multiply
% As noted, when a lane is present in the birds-eye image, the convolution
% result will produce strips of high-intensity positive output located next to
% strips of high-intensity negative output. The positive and negative
% column count histograms locate such regions. In order to amplify
% these locations, the positive count output is delayed by 8 clock cycles
% (an intrinsic parameter related to the kernel width), and the positive
% and negative counts are multiplied together. This amplifies columns
% where the positive and negative counts are in agreement,
% and minimizes regions where there is disagreement between the positive and
% negative counts. The design is
% pipelined in order to ensure high throughput operation.
%
% <<visionhdllanedetecthistogramamplified.png>>
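%
% A behavioural sketch of the column counting and the delay-and-multiply
% step for a single 18-line region is shown below; it is not part of the
% model. It assumes the convolution output, filtered, from the earlier
% sketch, and the threshold value is an assumption.
%
%   thresh  = 0.3;                                % assumed threshold
%   region  = filtered(1:18, :);                  % one 18-line sub-region
%   posCnt  = sum(region >  thresh, 1);           % positive column counts
%   negCnt  = sum(region < -thresh, 1);           % negative column counts
%
%   % Delay the positive counts by the kernel width (8 columns) and
%   % multiply; columns where both counts agree are amplified.
%   posCntShifted = [zeros(1,8) posCnt(1:end-8)];
%   laneStrength  = posCntShifted .* negCnt;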
%
%% Zero Crossing Filter
% At the output of the Overlap and Multiply subsystem, peaks appear
% where there are lane markings present. A peak detection algorithm determines
% the columns where lane markings are
% present. Because the SNR is relatively high in the data, this example uses
% a simple FIR filtering operation
% followed by zero crossing detection. The Zero Crossing
% Filter is implemented using the Discrete FIR Filter block from DSP System
% Toolbox(TM). It is pipelined for high-throughput operation.
%
% <<visionhdllanedetectpeakfilterresponse.png>>
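%
% A simple behavioural equivalent of this step (not part of the model) is
% sketched below, assuming laneStrength from the previous sketch. The FIR
% coefficients are an assumption; any short antisymmetric kernel turns
% peaks in the input into zero crossings at the filter output.
%
%   firCoeffs = [1 1 1 0 -1 -1 -1];               % assumed antisymmetric FIR kernel
%   zcOut = filter(firCoeffs, 1, laneStrength);
%
%   % Columns where the filter output crosses from positive to negative.
%   zcIdx = find(zcOut(1:end-1) > 0 & zcOut(2:end) <= 0);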
%
%% Store Dominant Lanes
% The zero crossing filter output is then passed into the Store Dominant
% Lanes subsystem. This subsystem has a maximum memory of 7 entries,
% and is reset every time a new batch of 18 lines is reached. Therefore,
% for each sub-region 7 potential lane candidates are
% generated. In this subsystem, the Zero Crossing Filter output is streamed
% through, and examined for potential zero crossings. If a zero crossing
% does occur, then the difference between the address immediately prior to
% zero crossing and the address after zero crossing is taken in order to
% get a measurement of the size of the peak. The subsystem stores the zero
% crossing locations with the 
% highest magnitude.
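%
% A behavioural sketch of this selection (not part of the model) is shown
% below, assuming zcOut and zcIdx from the previous sketch. The peak size
% is taken here as the difference of the filter output across the
% crossing, which is an interpretation of the description above; the
% hardware streams the data and keeps a running store of the 7 strongest
% crossings rather than sorting a full batch.
%
%   peakMag = zcOut(zcIdx) - zcOut(zcIdx+1);      % size of each peak
%   [~, order] = sort(peakMag, 'descend');
%   laneCandidates = zcIdx(order(1:min(7, numel(order))));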

open_system([modelname '/HDLLaneDetector/LaneDetection/LaneCandidateGeneration/StoreDominantLanes'],'force');

%% Compute Ego Lanes
% The hardware portion of the design will produce 7 potential lane
% candidates for every 18 lines of the birds-eye image.
% A simple algorithm
% is used in the software portion of the design to find the two lanes that
% most closely match the ego lanes, i.e. the lanes within which the vehicle
% is contained. This algorithm assumes that the center column of the image
% corresponds to the middle of the lane when the vehicle is correctly
% operating within the lane boundaries. The lane candidates that are
% closest to the center are therefore assumed to be the ego lanes. To further
% reject outliers, a running average of the distance between the center of
% the image and the ego lanes, both from center to left and from center to right,
% is maintained. If the current candidate lane is not within 1.25 units of the
% average of the left and right width, it is rejected. Rejecting outliers
% early on makes the lane fitting more straightforward.
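%
% A simplified sketch of the left-lane selection (not part of the model)
% is shown below, assuming laneCandidates from the previous sketch; the
% right lane is handled symmetrically. The initial average width, the
% smoothing factor, and the interpretation of the 1.25 tolerance as a
% multiplicative bound are assumptions.
%
%   centerCol    = 320;                           % assumed image center column
%   avgLeftWidth = 160;                           % assumed running average width
%   leftCands    = laneCandidates(laneCandidates < centerCol);
%   [leftWidth, iL] = min(centerCol - leftCands); % candidate closest to center
%
%   if leftWidth <= 1.25 * avgLeftWidth           % reject outliers
%       egoLeft = leftCands(iL);
%       avgLeftWidth = 0.9*avgLeftWidth + 0.1*leftWidth;   % update running average
%   end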

open_system([modelname '/ComputeEgoLanes'],'force');

%% Fit Lanes
% The Fit Lanes subsystem runs a RANSAC-based line-fitting routine on the
% generated lane candidates. RANSAC is an iterative algorithm that builds
% up a table of inliers based on a distance measure between the proposed
% curve and the input data. At the output of this subsystem, a
% [3x1] vector specifies the polynomial coefficients found by the RANSAC
% routine.
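%
% The sketch below (not part of the model) shows the essence of such a
% RANSAC fit for one lane; the iteration count, tolerance and example data
% are assumptions for illustration only.
%
%   egoLaneY = (10:18:700)';                      % example candidate row coordinates
%   egoLaneX = 0.0002*egoLaneY.^2 + 300 + randn(size(egoLaneY));  % example columns
%
%   bestCoeffs = zeros(3,1);  bestInliers = 0;  tol = 3;
%   for iter = 1:50
%       idx = randperm(numel(egoLaneY), 3);       % minimal sample for a parabola
%       c   = polyfit(egoLaneY(idx), egoLaneX(idx), 2);
%       err = abs(polyval(c, egoLaneY) - egoLaneX);
%       nIn = sum(err < tol);
%       if nIn > bestInliers                      % keep the model with most inliers
%           bestInliers = nIn;
%           bestCoeffs  = c(:);                   % [3x1] polynomial coefficients
%       end
%   end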

%% Overlay Lane Markings
% The Overlay Lane Markings subsystem performs image visualization operations. It
% overlays the ego lanes and curves found by the lane-fitting routine.

open_system([modelname '/OverlayLaneMarkings'],'force');

%% Results of the Simulation
% The model includes two video displays that show the simulation output. The *BirdsEye*
% display shows the output in the warped perspective after lane candidates
% have been overlaid, polynomial fitting has been performed, and the
% resulting polynomial has been overlaid onto the image. The *OriginalOverlay*
% display shows the *BirdsEye* output warped back into the original
% perspective.
%
% Due to the large frame sizes used in this model, simulation can take a
% relatively long time to complete. If you have an HDL Verifier(TM)
% license, you can accelerate simulation speed by directly running the
% HDL Lane Detector subsystem in hardware using FPGA-in-the-Loop(TM).
%
% <<visionhdllanedetectBEoutput.png>>
%
% <<visionhdllanedetectoverlayoutput.png>>
%
%% HDL Code Generation
% To check and generate the HDL code referenced in this example, you must
% have an HDL Coder(TM) license.
%
% To generate the HDL code, use the following command.
%
%   makehdl('LaneDetectionHDLExample/HDLLaneDetector')
%
% To generate the test bench, use the following command. Note that test bench
% generation takes a long time due to the large data size. You may want to
% reduce the simulation time before generating the test bench.
% 
%   makehdltb('LaneDetectionHDLExample/HDLLaneDetector')
%
% For faster test bench simulation, you can generate a SystemVerilog DPI-C
% test bench using the following command.
%
%   makehdltb('LaneDetectionHDLExample/HDLLaneDetector','GenerateSVDPITestBench','ModelSim')
%
% 
%
%% Conclusion
% This example has provided insight into the challenges of designing ADAS systems
% in general, with particular emphasis on the acceleration of critical
% parts of the design in hardware.



%% References 
% [1] R. K. Satzoda and Mohan M. Trivedi, "Vision based Lane Analysis:
% Exploration of Issues and Approaches for Embedded Realization," 2013 IEEE
% Conference on Computer Vision and Pattern Recognition.
%
% [2] Mohamed Aly, "Real time Detection of Lane Markers in Urban Streets,"
% IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, June 2008.
% Used with permission.