% www.gusucode.com > RBM > RBM/RBM_TB.m
function [net] = RBM_TB(x, eps, Nneurons, maxittre, NG)
% Restricted Boltzmann Machine (RBM):
% RBM_TB trains an RBM net (restricted Boltzmann machine) using the
% contrastive divergence method.
% Note: training is carried out with binary visible (input) units and
% binary hidden units.
% Inputs:
%   x        : training set, L samples by M attributes
%   eps      : learning rate
%   Nneurons : number of neurons in the hidden layer
%   maxittre : maximum number of learning iterations
%   NG       : number of Gibbs sampling steps
% Outputs:
%   net      : structure holding the characteristics of the trained net
%
%%%% Authors: TAREK BERGHOUT
%%%% BATNA 2 TECHNOLOGICAL UNIVERSITY, ALGERIA
%%%% EMAIL: berghouttarek@gmail.com
%%%% date: 14/04/2019

% initialization:
I = scaledata(x, 0, 1); % normalize the input to [0,1] (scaledata is an external helper, e.g. from the MATLAB File Exchange)
I2 = I;                 % save a copy of the normalized training input
I = round(I);           % transform the input into binary units
W = (2*rand(size(I,2), Nneurons) - 1) * 4 * sqrt(3.0/16.0); % randomly generated input weights W
v_bias = zeros(1, size(I,2)); % initial biases of the visible (input) layer
h_bias = zeros(1, Nneurons);  % initial biases of the hidden layer

% training process:
for i = 1:maxittre
    errvec = [];                    % reset the error history for every iteration
    ordering = randperm(size(I,1)); % randomly reorder the rows of the input sample
    I = I(ordering, :);             % load the reordered sample
    for j = 1:NG % start Gibbs sampling using the energy function
        % Find the hidden units by sampling from the visible layer,
        % then round them into binary form:
        hidden_p = round(sigmoid(I*W + repmat(h_bias, size(I,1), 1)));
        % Find the visible units by sampling from the hidden ones:
        visible_p = sigmoid(hidden_p*W' + repmat(v_bias, size(I,1), 1));
        % Last step: find the hidden units from the last visible_p:
        bP = sigmoid(visible_p*W + repmat(h_bias, size(I,1), 1));
        pD = I'*hidden_p;         % positive divergence
        nD = visible_p'*bP;       % negative divergence
        W = W + eps*(pD - nD)/NG; % update the weights using contrastive divergence
        v_bias = v_bias + eps*(sum(I - visible_p));  % update the biases of the visible layer
        h_bias = h_bias + eps*(sum(hidden_p - bP));  % update the biases of the hidden layer
        errvec(j) = sqrt(mse(I - visible_p));        % estimate the error (RMSE); mse requires the Deep Learning Toolbox
    end
    errvecT(i) = mean(errvec); % training error history
end

% training accuracy with non-binary (scalar) units:
Trv_bias = repmat(v_bias, size(I2,1), 1);
Trh_bias = repmat(h_bias, size(I2,1), 1);
Tr_h = sigmoid(I2*W + Trh_bias);
Tr_v = sigmoid(Tr_h*W' + Trv_bias);
Tr_acc = sqrt(mse(I2 - Tr_v)); % estimate the error (RMSE)

net.input = I2;     % save the original normalized training data
net.regen = Tr_v;   % save the regenerated (reconstructed) input
net.W = W;          % save the updated weights
net.xbias = v_bias; % save the updated biases of the visible layer
net.hbias = h_bias; % save the updated biases of the hidden layer
net.Tr_acc = Tr_acc; % save the training accuracy
net.hist = smooth(errvecT, 13); % save a smoothed history of the training error (smooth requires the Curve Fitting Toolbox)
end

% Minimal local definition of the logistic function, assuming the standard
% form (the original relies on an external sigmoid.m being on the path):
function y = sigmoid(z)
y = 1./(1 + exp(-z));
end
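For reference, the update rule above can be sketched outside MATLAB. The following is a minimal NumPy sketch of the same contrastive-divergence step, not the author's code: the names `rbm_cd1`, `n_hidden`, and `max_iter` are ours, and it simplifies the routine to a single Gibbs step per iteration (NG = 1) instead of the NG-step inner loop.

```python
import numpy as np

def sigmoid(z):
    # standard logistic function, matching the MATLAB helper
    return 1.0 / (1.0 + np.exp(-z))

def rbm_cd1(X, n_hidden, eps=0.05, max_iter=30, seed=0):
    """Train a binary-unit RBM with one-step contrastive divergence.

    X: (samples, attributes) array already scaled to [0, 1] and rounded.
    Returns the weights and the visible/hidden biases.
    """
    rng = np.random.default_rng(seed)
    n_vis = X.shape[1]
    # same initialization scale as the MATLAB code
    W = (2 * rng.random((n_vis, n_hidden)) - 1) * 4 * np.sqrt(3.0 / 16.0)
    v_bias = np.zeros(n_vis)
    h_bias = np.zeros(n_hidden)
    for _ in range(max_iter):
        Xs = X[rng.permutation(X.shape[0])]       # reshuffle rows each iteration
        h = np.round(sigmoid(Xs @ W + h_bias))    # binary hidden units from the data
        v = sigmoid(h @ W.T + v_bias)             # reconstructed visible probabilities
        h2 = sigmoid(v @ W + h_bias)              # hidden probabilities from the reconstruction
        W += eps * (Xs.T @ h - v.T @ h2)          # positive minus negative divergence
        v_bias += eps * (Xs - v).sum(axis=0)
        h_bias += eps * (h - h2).sum(axis=0)
    return W, v_bias, h_bias

# toy usage on random binary data
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(20, 6)).astype(float)
W, vb, hb = rbm_cd1(X, n_hidden=4)
recon = sigmoid(np.round(sigmoid(X @ W + hb)) @ W.T + vb)
rmse = np.sqrt(np.mean((X - recon) ** 2))
```

Because both the data and the reconstruction lie in [0, 1], the reconstruction RMSE is bounded by 1 and should shrink as training progresses on structured data.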