%% Edge Detection and Image Overlay
% This example demonstrates how to detect and highlight object edges in a 
% video stream. The functionality of the pixel-stream Sobel Edge Detector 
% and Video Alignment blocks is verified by comparing the results with 
% those generated by the full-frame blocks from the Computer Vision System 
% Toolbox(TM). 
 
% Copyright 2014 The MathWorks, Inc.
 
%% Structure of the Example
% The <matlab:EdgeDetectionAndOverlayHDLExample EdgeDetectionAndOverlayHDLExample.slx> system is shown below.
 
modelname = 'EdgeDetectionAndOverlayHDLExample';
open_system(modelname);
set_param(modelname, 'SampleTimeColors', 'on');
set_param(modelname,'SimulationCommand','Update');
set_param(modelname, 'Open', 'on');
set(allchild(0),'Visible', 'off');
 
%% 
% The difference in the color of the lines feeding the *Full-Frame Behavioral Model* 
% and *Pixel-Stream HDL Model* subsystems indicates the change in the image 
% rate on the streaming branch of the model. This rate transition occurs 
% because the pixel stream must deliver an entire frame of pixels within 
% one frame period, so it is transmitted at a correspondingly higher rate.

%% Full-Frame Behavioral Model
% The following diagram shows the structure of the *Full-Frame Behavioral Model* 
% subsystem, which employs the frame-based *Edge Detection* block.

set_param(modelname, 'SampleTimeColors', 'off')
set_param(modelname, 'Open', 'off');
set_param([modelname '/Full-Frame Behavioral Model'], 'Open', 'on');

%%
% Since the frame-based *Edge Detection* block does not introduce latency, 
% image overlay is performed simply by weighting the source image and the 
% *Edge Detection* output image and adding them together. 
%
% One frame of the source video, the edge detection result, and the
% overlaid image are shown from left to right in the diagram below.
% 
% <<visionhdlsobel_vieweroutput.png>>
%
% It is good practice to develop a behavioral system using blocks that 
% process full image frames (the *Full-Frame Behavioral Model* subsystem in 
% this example) before moving on to an FPGA-targeting design. Such a 
% behavioral model helps verify the video processing design. 
% Later on, it can serve as a reference for verifying the implementation of 
% the algorithm targeted to an FPGA. Specifically, the *PSNR* (peak 
% signal-to-noise ratio) block at the top level of the model compares the 
% results from full-frame processing with those from pixel-stream processing.
  
%% Frame To Pixels: Generating a Pixel Stream
% The *Frame To Pixels* block converts a full-frame image into a pixel 
% stream. To simulate the effect of the horizontal and vertical blanking 
% periods found in real-world hardware video systems, the active image is 
% augmented with non-image data. For more information on 
% the streaming pixel protocol, click <matlab:helpview(fullfile(docroot,'visionhdl','ug','streaming-pixel-interface.html')) here>.
% The *Frame To Pixels* block is configured as shown:
%
% <<Frame2PixelsMask.png>>
%
% The *Number of components* field is set to 1 for grayscale image input, 
% and the *Video format* field is 240p to match that of the video source. 
%
% In this example, the Active Video region corresponds to the 240x320 
% matrix of the dark image from the upstream *Corruption* block. Six other 
% parameters, namely, *Total pixels per line*, *Total video lines*, 
% *Starting active line*, *Ending active line*, *Front porch*, and *Back 
% porch*, specify how much non-image data is added on the four sides of 
% the Active Video. For more information, see the Frame To Pixels 
% block <matlab:helpview(fullfile(docroot,'toolbox','visionhdl','visionhdl.map'),'visionhdlframetopixels') reference page>.
%
% Note that the sample time of the *Video Source* is determined by the 
% product of *Total pixels per line* and *Total video lines*. 
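%%
% As a sketch of this relationship (the 240p timing numbers below, 402 
% total pixels per line and 324 total video lines, are assumed defaults; 
% check the block mask in your model for the actual values), the number of 
% pixel ticks per frame is simply their product:
%
%   totalPixelsPerLine = 402;   % assumed 240p default
%   totalVideoLines    = 324;   % assumed 240p default
%   pixelsPerFrame = totalPixelsPerLine * totalVideoLines  % 130248 ticks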
 
%% Pixel-Stream Edge Detection and Image Overlay
% The *Pixel-Stream HDL Model* subsystem is shown in the diagram below. You 
% can generate HDL code from this subsystem.

set_param([ modelname '/Full-Frame Behavioral Model'], 'Open', 'off');
set_param([ modelname '/Pixel-Stream HDL Model'], 'Open', 'on');

%%
% Unlike the frame-based *Edge Detection* block used in the *Full-Frame 
% Behavioral Model*, the *Edge Detector* block from Vision HDL Toolbox(TM) 
% introduces latency, which is inherent to pixel-stream processing. This 
% latency prevents us from directly weighting and adding the two images to 
% obtain the overlaid image. To address this issue, the *Align Video* 
% subsystem synchronizes the two pixel streams before the sum.
%
% The structure of the *Align Video* subsystem is shown below. 
 
set_param([ modelname '/Pixel-Stream HDL Model'], 'Open', 'off');
set_param([ modelname '/Pixel-Stream HDL Model/Align Video'], 'Open', 'on');

%%
% To use this subsystem properly, pixelInDelayed and ctrlInDelayed must be 
% connected to the pixel and control bus associated with the delayed pixel 
% stream. In this example, due to the latency introduced by the *Edge 
% Detector*, the pixel stream coming out of the *Edge Detector* is delayed 
% with respect to the one feeding into it. Therefore, the upstream sources 
% of pixelInDelayed and ctrlInDelayed are the Edge and ctrl outputs of the 
% *Edge Detector*.
%
% The basic idea of aligning pixel streams is to push the valid pixels that 
% arrive earlier into a FIFO, and to pop individual pixels from this FIFO 
% based on the valid signal of the delayed pixel stream. Refer to the 
% <matlab:EdgeDetectionAndOverlayWithImpairedFrameHDLExample EdgeDetectionAndOverlayWithImpairedFrameHDLExample.slx> system for a 
% detailed treatment of how this model is implemented.
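%%
% As an illustrative behavioral sketch (plain MATLAB, not the HDL 
% implementation, with hypothetical variable names), the alignment idea 
% could be expressed as follows, assuming the early stream always leads 
% the delayed one:
%
%   fifo = [];                            % software stand-in for the FIFO
%   alignedPixel = zeros(size(pixelDelayed));
%   for k = 1:numel(validEarly)
%       if validEarly(k)
%           fifo(end+1) = pixelEarly(k);  % push early valid pixels
%       end
%       if validDelayed(k)
%           alignedPixel(k) = fifo(1);    % pop in step with delayed valid
%           fifo(1) = [];
%       end
%   end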

%% Pixels To Frame: Converting Pixel Stream Back to Full Frame
% As a companion to *Frame To Pixels*, which converts a full image frame to 
% a pixel stream, the *Pixels To Frame* block performs the reverse 
% operation, converting the pixel stream back to a full frame by using the 
% synchronization signals. Since the output of the *Pixels To Frame* block 
% is a 2-D matrix representing a full image, there is no need to carry the 
% bus of five synchronization signals any further. 
%
% The *Number of components* and *Video format* fields of both 
% Frame To Pixels and Pixels To Frame are set to 1 and 240p, respectively, 
% to match the format of the video source.   
 
%% Verifying the Pixel Stream Processing Design
% While building the streaming portion of the design, the *PSNR* block 
% continuously verifies results against the original full-frame design. The 
% *Delay* block on the top level of the model time-aligns the 2-D matrices 
% for a fair comparison. During the course of the simulation, the *PSNR* 
% block should give *inf* output, indicating that the output image 
% from the *Full-Frame Behavioral Model* matches the image generated from
% the stream processing *Pixel-Stream HDL Model*.
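%%
% If you capture one output frame from each branch, the same check can be 
% reproduced at the command line with the Image Processing Toolbox |psnr| 
% function (the frame variable names here are illustrative):
%
%   peaksnr = psnr(pixelStreamFrame, fullFrameResult)  % Inf when identical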

%% Exploring the Example
% The example allows you to experiment with different threshold and alpha 
% values to examine their effect on the quality of the overlaid images. 
% Specifically, two workspace variables $thresholdValue$ and $alpha$ with 
% initial values 7 and 0.8, respectively, are created upon opening the 
% model. You can modify their values using the MATLAB command line as follows: 
%
%   thresholdValue=8
%   alpha=0.5
%
% The updated $thresholdValue$ will be propagated to the 
% *Threshold* field of the *Edge Detection* block inside the *Full-Frame Behavioral Model* 
% and the *Edge Detector* block inside *Pixel-Stream HDL Model/Edge Detection*. 
% The $alpha$ value will be propagated to the *Gain1* blocks in the
% *Full-Frame Behavioral Model* and in *Pixel-Stream HDL Model/Image Overlay*, 
% and the value of $1-alpha$ goes to the *Gain2* blocks. Closing the model 
% clears both variables from your workspace.
%
% In this example, the valid range of $thresholdValue$ is between 0 and 256, inclusive.
% Setting $thresholdValue$ equal to or greater than 257 triggers a message 
% *Parameter overflow occurred for 'threshold'*. The higher you set 
% $thresholdValue$, the fewer edges the example finds in the video. 
%
% The valid range of $alpha$ is between 0 and 1, inclusive. It determines 
% the weights of the edge-detection output image and the original source 
% image before they are added. The overlay operation is a linear 
% interpolation according to the following formula: 
%
%     overlaid image = alpha*source image + (1-alpha)*edge image.
%
% Therefore, when $alpha=0$, the overlaid image is the edge detection 
% output, and when $alpha=1$ it becomes the source image. 
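%%
% In MATLAB terms (with illustrative variable names), one frame of the 
% overlay could be computed as:
%
%   overlaidFrame = alpha*sourceFrame + (1-alpha)*edgeFrame;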

%% Generate HDL Code and Verify its Behavior
% To check and generate the HDL code referenced in this example, you must
% have an HDL Coder(TM) license.
%
% To generate the HDL code, use the following command:
%
%   makehdl('EdgeDetectionAndOverlayHDLExample/Pixel-Stream HDL Model');
%
% To generate a test bench, use the following command:
%
%   makehdltb('EdgeDetectionAndOverlayHDLExample/Pixel-Stream HDL Model');
%
 
close_system(modelname,0);
close all;