
EMJ37403




04.05.2024

Today we will discover the back-propagation algorithm: how the weights are updated. We come out with a new equation for how the weights are updated in this algorithm. Okay, so the propagation is the same as in the other algorithms, and the architecture is also the same; the difference is in how the weights are updated. It is a little bit complicated, not very complicated, but there are many stages involved when we want to update the weights using the back-propagation training algorithm. We also call this algorithm a feed-forward neural network. Feed-forward means we put the data into the input, we update the weights, the data comes out at the output, and then we compare it to the target.

Then we can find the accuracy of the model. This algorithm can also be applied to a multilayer net, not only a single layer. Multilayer means we have a hidden layer, like this here. In the Hebb net or perceptron examples we had only a single layer: you have the input, then you have the weights, and then you have the output. But back-propagation can also be applied as a two-layer architecture, so here we have what we call a hidden layer. We have an input layer, a hidden layer, and then the output layer, and we still have biases here. This is what the back-propagation algorithm's architecture looks like. Another term for this type of algorithm is a multilayer perceptron.

MLP, multilayer perceptron, or feed-forward neural network: all the same. The architecture has an input layer, an output layer, and a hidden layer, as I just showed you in the figure. It also has a bias at the output layer. The type of input is still the same: we need either binary numbers or bipolar numbers. Binary is 0/1; bipolar is -1/+1. This means whatever data you have, image data, brain-signal data like my research is doing, or voice data, the data first comes in its own range, but in order to train it in this type of neural network we need to convert it all into either binary or bipolar form: binary means the data is 0 or 1, and bipolar means the data is between -1 and +1.

It also has an activation function, same as the perceptron, and we have many functions we can use as the activation function. The idea that comes out of the back-propagation algorithm is that it uses the delta rule in order to update the weights. So what is the difference between the perceptron, the Hebb net, and back-propagation? With the perceptron we update the weights until the new weights are the same as the weights before. In the Hebb net, the condition is that we just update once per input: if we have two inputs, we update the weights for the two inputs. But for back-propagation, we update the weights until the error between the target and the output is small.
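As a sketch of the conversion step described above (the min-max rescaling shown here is a common choice; the lecture does not specify the exact mapping):

```python
def to_bipolar(xs):
    """Rescale raw values into the bipolar range [-1, 1] with min-max scaling."""
    lo, hi = min(xs), max(xs)
    return [2 * (x - lo) / (hi - lo) - 1 for x in xs]

def to_binary(xs, threshold):
    """Threshold raw values into binary {0, 1}."""
    return [1 if x >= threshold else 0 for x in xs]

print(to_bipolar([0, 5, 10]))    # [-1.0, 0.0, 1.0]
print(to_binary([0, 5, 10], 5))  # [0, 1, 1]
```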

Whatever delta error we define ourselves: when we model the neural network, we need to define the error at which the network stops updating the weights. How the network stops updating the weights depends on the delta error, which is the difference between the target and the output. Let's say you train the data and you get the output 0.5 while the target is equal to 1, so you have a difference of 0.5. If we say the network needs to stop when the error reaches 0.05, then 0.5 has not reached 0.05, so it will update the weights again until the error reaches 0.05; then the training will stop and we will have the final weights. That is the difference between back-propagation and the simple algorithms that we have learned, which are the perceptron and the Hebb net.

So this is the training algorithm for back-propagation. First the weights are initialized; normally the computer makes them random to get the initial weight values. Then it puts the input into the input layer; at the hidden layer we calculate the Z values, and at the output layer we calculate the output value Y. When we get the output value Y, we calculate the difference between the target and the output. Let's say we set the training to stop when the delta is equal to 0.005: the training process keeps running as long as the delta has not reached 0.005, and when the delta reaches 0.005 the training process stops.
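The stopping rule described here can be sketched with a toy single-weight neuron (my own illustration, not the lecture's network; the values of the learning rate and the tolerance are invented):

```python
alpha, tol = 0.5, 0.005       # learning rate and error tolerance (invented values)
w, x, target = 0.0, 1.0, 1.0  # one weight, one input, one target

epochs = 0
while abs(target - w * x) >= tol:      # stop once the error is small enough
    y = w * x                          # forward pass
    w = w + alpha * (target - y) * x   # delta-rule update
    epochs += 1

print(epochs, w)   # number of updates needed, and the final weight
```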

Then we get the final weight values, and we use those weights to test our network to see whether it gets good accuracy or not. So the difference is that this time, how we stop updating the weights is based on the delta value, the difference between the target and the output, whereas for yesterday's simple algorithms: for the perceptron we updated the weights until the weights were the same as the weights before, and for the Hebb net we didn't compare to anything, we just updated the weights until all the inputs had been processed. That's the difference between these three algorithms. Okay, so this is the equation for how we update the weights: W_new = W_old + ΔW, where ΔW is given as α · δ_k · z.
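The update equation just stated, w_new = w_old + α · δ_k · z, in code (the numeric values are invented, just to exercise the formula):

```python
def update_weight(w_old, alpha, delta_k, z):
    """Back-propagation weight update: w_new = w_old + alpha * delta_k * z."""
    return w_old + alpha * delta_k * z

# Invented numbers, just to exercise the formula.
print(update_weight(0.4, 0.5, -0.2, 0.6))  # approximately 0.34
```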

We will see an example after this of how the back-propagation process runs. And of course each neural network also has an activation function, and it can be any activation function. Normally, for my case, when I want to model a neural network for my data (I use brain-signal data), I always do trial and error: sometimes one activation function gives good accuracy for my network, and sometimes a different activation function gives low accuracy for my model. So it depends on the activation function and it also depends on your data; there is no wrong and no right, it depends on your data. The common activation functions used for back-propagation are these: if the data is converted into binary, we always use the binary sigmoid, and if it is bipolar data, we always use the bipolar sigmoid activation function.

This slide just shows you how the learning rule is derived; you can read that by yourself, but we just want to use the final equation here in order to update the weights. This is just the derivation to show how we get the equation for updating the weights. So now we have the weight-update equation here, and for initializing the weights, as I said before, we normally get the initial weight values by random numbers: we set it up, and the computer gives the random numbers for the initial weight values.
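The binary and bipolar sigmoids mentioned here, together with the derivative forms that the delta rule uses later (these are the standard textbook definitions):

```python
import math

def binary_sigmoid(x):
    """Binary sigmoid: output in (0, 1), f(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_sigmoid_prime(x):
    """Derivative of the binary sigmoid: f(x) * (1 - f(x))."""
    f = binary_sigmoid(x)
    return f * (1.0 - f)

def bipolar_sigmoid(x):
    """Bipolar sigmoid: output in (-1, 1), f(x) = 2 / (1 + e^-x) - 1."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def bipolar_sigmoid_prime(x):
    """Derivative of the bipolar sigmoid: 0.5 * (1 + f(x)) * (1 - f(x))."""
    f = bipolar_sigmoid(x)
    return 0.5 * (1.0 + f) * (1.0 - f)

print(binary_sigmoid(0.0), bipolar_sigmoid(0.0))  # 0.5 0.0
```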

Starting from there, we then train the network, and the weight values are updated until the delta value is small enough for the network to stop. It also says here that the weights should not be too large and not too small; actually that depends on the data and also on the activation function, so it is based on those parameters. Sometimes we also use what we call Nguyen-Widrow initialization; this is one of the methods for initializing our weight values. There are a lot of initialization methods that have been introduced by researchers, because researchers want to come up with methods that make the initialization process faster.
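A sketch of Nguyen-Widrow initialization as it is usually described (β = 0.7 · p^(1/n) for n inputs and p hidden units; random weights in [-0.5, 0.5] rescaled so each hidden unit's weight vector has norm β; biases drawn from [-β, β]). Treat the details as my paraphrase of the textbook method rather than the lecture's slide:

```python
import math, random

def nguyen_widrow_init(n_inputs, n_hidden, seed=0):
    """Nguyen-Widrow initialization for input-to-hidden weights."""
    rng = random.Random(seed)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    weights, biases = [], []
    for _ in range(n_hidden):
        v = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        norm = math.sqrt(sum(x * x for x in v))
        weights.append([beta * x / norm for x in v])  # rescale row to norm beta
        biases.append(rng.uniform(-beta, beta))
    return weights, biases

W, b = nguyen_widrow_init(4, 2)   # 4 inputs, 2 hidden units, as in the example
```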

If you have very big data, then sometimes when we use one particular initialization method it can be very slow to update the weights; that's why many researchers have come up with new algorithms to initialize the weights, so that the learning process of updating the weights will be faster. But normally we just use whatever is available in MATLAB: we take an algorithm available in MATLAB, use it, and do trial and error to see whether it fits our data or not. Okay, so now I want to show you an example of how we update the weights for this type of network. This is the example here: consider the feed-forward network shown in the figure.

This network has four inputs, then a hidden layer here, and then one output layer here. The question asks us to train the network using the back-propagation algorithm. It does not ask us to train until the delta is small; it just wants us to show how the weights are updated, so it says: train the network using the back-propagation algorithm for only one epoch. An epoch means one cycle of updating the weights. If we say five epochs, that means we update the weights five times: first update, second update, third update, fourth update, and then the fifth, and that finishes five epochs; one epoch means one cycle. And the question says the input training pattern is given here, (1, 1, 1, 1), and the target is also given, -1.

So the input is (1, 1, 1, 1) and the target is -1, and we already know what it uses, bipolar or binary: it uses bipolar data, because the target is -1. This means we have only one output neuron here, and with one output neuron the possible values of the data are -1 and 1, so it is bipolar data. The question asks us to determine the new weights, meaning to show how we update the weights using the back-propagation algorithm, and to assume the learning rate is equal to 0.5. Assume the hidden and output layer neurons are activated, again, by the bipolar sigmoid activation function. So how do we start? This is the architecture of the back-propagation network.

Because the input data is (1, 1, 1, 1), we have four inputs here, 1, 2, 3, 4, then two hidden neurons, and then the output is either -1 or 1, which is why we have only one output neuron here. And if you see here, as I said before, how we can stop the network depends on the difference between the target and the output. You see here: the output y comes out, it is compared with the target t, and then we get the value of the error e. If we say that we want to train the network until e is equal to 0.05, then when we get the value of the target minus y, we check its value, and if it has not reached 0.05, the process of updating the weights goes on until the error reaches 0.05. But for this question it is already mentioned that we only want to update the weights for one epoch, which means we don't care what the value of e is; we just update the weights in one epoch.

Okay, so the initial weight values are given here. We have an input layer, a hidden layer, and an output layer, so we have two different sets of weights. One is the weights between the input layer and the hidden layer, which we name V: v11 is for this link, then v12, v21, v22, v31, v32, v41, v42; these values of the weights V are given. The weights between the hidden layer and the output layer we call W, that is w11 and w21, and these values of the weights W are also given. So our task is to update the weights for only one epoch. How do we do this?

Starting from here, first we calculate the z_in value. What is the z_in value? It is the inputs x times the weights. The total z_in1 is this plus this plus this: z_in1 = v11·x1 + v21·x2 + v31·x3 + v41·x4. So we get z_in1, and then we need to find the output of Z1, which is given as z1 = 2/(1 + e^(-z_in1)) - 1; this is the formula for the bipolar sigmoid activation function. So we get z1 = 0.05. First we find the output of Z1, and then we calculate the output of Z2 here. It is still the same operation, but this time for Z2: x1·v12 + x2·v22 + x3·v32 + x4·v42, summed up, and then again we calculate z2, the output of the hidden layer, and we get z2 = 0.1974. So far we have only reached the hidden layer, not yet the output.

Then, from the hidden layer, we want to calculate the output value y1 from the output layer. Again, the same concept, the same method: we have the weights times the outputs of the hidden layer, y_in1 = z1·w11 + z2·w21; this is the input into the output layer. Then y1 = 2/(1 + e^(-y_in1)) - 1, the same as before, but this time with -y_in1 instead of -z_in1, because here we use y_in. So we get y1 = -0.0665, and then we need to calculate the error e1 = t1 - y1. The question already says the target is -1, so -1 - (-0.0665) gives e1 = -0.9335. Then we need to calculate the delta values, because we have not updated the weights yet.
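The forward pass of the example's 4-2-1 network, step by step. The initial weights below are invented placeholders, since the slide's given values did not survive in the transcript; the structure of the computation, not the numbers, is the point:

```python
import math

def f(x):  # bipolar sigmoid
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

x = [1, 1, 1, 1]   # input pattern from the example
t = -1             # target from the example

# Placeholder initial weights (the lecture's given values v11..v42 and
# w11, w21 would go here; these numbers are invented for illustration).
V = [[0.2, 0.3], [0.1, -0.1], [0.7, 0.4], [0.5, 0.2]]  # V[i][j]: input i -> hidden j
W = [0.4, -0.3]                                         # W[j]: hidden j -> output

z_in = [sum(x[i] * V[i][j] for i in range(4)) for j in range(2)]
z = [f(zi) for zi in z_in]                 # hidden-layer outputs z1, z2
y_in = sum(z[j] * W[j] for j in range(2))  # input into the output layer
y = f(y_in)                                # network output y1
e = t - y                                  # error against the target
print(round(y, 4), round(e, 4))
```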

To update the weights, we need the delta values. The delta δ_k1 is what we use to update the weights w11 and w21, and the deltas δ_j1 and δ_j2 are what we use to update the weights V; that's why we need these values. δ_k1 is given as the error times the derivative of the activation function at y_in1. For δ_j, it is δ_k1 times w11 times the derivative of the activation function, this time at z_in1 (for δ_k1 it was the derivative at y_in1). So first we need to calculate these: without δ_k1 we cannot update the weights w11 and w21, and without δ_j1 and δ_j2 we cannot update the weights V. First we want to update the weights w11 and w21, so we calculate δ_k1 with this formula, and we get the value δ_k1 = -0.4647.

Then, using this value, we update the weights: w11(new), again using the formula given in the algorithm, is w11(old) plus α, the learning rate, times δ_k1 times z1; in the same way we get the new weight value for w21. Now we have updated the weights here, and next we want to update the weights V, so we need the values δ_j1 and δ_j2. δ_j1 = δ_k1 · w11 · 0.5 · (1 - z1)(1 + z1). If you see here, for δ_j1 you need all the values that come before it: we need the new w11 value, δ_k1, and z1, which we calculated before; without these values we cannot calculate δ_j1. The same goes for δ_j2: we need δ_k1, z2, and w21, the weight that was updated before; then we can get the value of δ_j2.

Then, when we have δ_j1 and δ_j2, we can update all the weights V here, 1, 2, 3, 4, 5, 6, 7, 8, with the same equation: the old weight plus α, the learning rate, times the delta times the input, and then we get the new weights. So this is the back-propagation algorithm; the process of updating the weights is a little bit complicated compared to the perceptron algorithm and also the Hebb net algorithm. And if you look at it, why do we call it back-propagation? Because the process of updating the weights is like going back: when we get the value of the output, we go back in order to update the weights; that's why we call it the back-propagation algorithm.

When we get y1, and the difference between the target t and y1, we first go backward: we calculate δ and update W, and then we go back again, calculate the hidden deltas, and then we can update V. It is like we back-propagate: first the data goes in feed-forward, and then to update the weights we go backward; that's why we call this algorithm the back-propagation algorithm. The process of updating the weights is in the backward direction. Okay, so the question only asks us to calculate the new weight values after one epoch: first we put in the input data, calculate the values until we get the output, compare with the target, and then update the new weight values by back-propagating; that finishes one epoch.
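The whole one-epoch update for the same 4-2-1 setup, following the order described in the lecture: compute δ_k1, update W, then form the hidden deltas from the already-updated W and update V (textbook presentations often use the old W for the hidden deltas instead). The weights are the same invented placeholders as in the forward-pass sketch, repeated so the snippet runs on its own:

```python
import math

def f(x):  # bipolar sigmoid
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def fprime_from_output(fx):  # derivative via the output: 0.5 * (1 + f) * (1 - f)
    return 0.5 * (1.0 + fx) * (1.0 - fx)

alpha = 0.5        # learning rate from the example
x = [1, 1, 1, 1]
t = -1

# Same invented placeholder weights as in the forward-pass sketch.
V = [[0.2, 0.3], [0.1, -0.1], [0.7, 0.4], [0.5, 0.2]]
W = [0.4, -0.3]

# Forward pass.
z = [f(sum(x[i] * V[i][j] for i in range(4))) for j in range(2)]
y = f(sum(z[j] * W[j] for j in range(2)))

# Backward pass, one epoch.
delta_k = (t - y) * fprime_from_output(y)              # output delta
W = [W[j] + alpha * delta_k * z[j] for j in range(2)]  # update W first
delta_j = [delta_k * W[j] * fprime_from_output(z[j])   # hidden deltas (updated W)
           for j in range(2)]
V = [[V[i][j] + alpha * delta_j[j] * x[i] for j in range(2)] for i in range(4)]

print([round(w, 4) for w in W])
```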

If the question says two epochs, that means you need to do it two times: first calculate everything and update the weights, one epoch; then again calculate everything, calculate the deltas, and update the weights, and that finishes the second epoch. So it's a bit complicated, but for research this is all done by MATLAB. I just wanted to show you the process of this algorithm for updating the weights; in MATLAB we just put in the data, put in the code, press run, and we come up with the answers, that's all. But this is what is actually being done inside the back-propagation algorithm. And normally we don't stop the training process for this neural network model based on epochs; normally we set the parameter e, as I said before, so that training runs until we get a very, very small error between the target

and the output y1; then the training is stopped, and we test whether our model can classify with high accuracy, maybe 100% accuracy, when we give it different data. Okay, so this is how the back-propagation algorithm works. If you don't know, or you want to refer back to how it works, you can go back through this lecture note; it shows the same process of updating the weights: go forward, and then go backward to update the weights. Any question on this type of algorithm? I will give you this lecture note so you can go back through all the notes to understand it better. For my research I still use this type of algorithm, because normally it can give higher accuracy for my neural

network model. Even though we have many types of machine learning algorithms, like the support vector machine (SVM), and the new one, deep learning, and all that, sometimes I still use the back-propagation algorithm because it can give better results. The perceptron and the Hebb net are very, very old; it is difficult if you want to classify complicated data, and it is very hard to get good accuracy for your model if you use the Hebb net or the perceptron. So, any question? If there is no question, I think maybe you want to look back at this lecture note first; I will send you the lecture note, so please study it, and any question you can ask me after the holiday. The task that I gave you yesterday, we can discuss it after the holiday during the face-to-face

lecture. So if you don't have any question, I think I will stop here. Happy holiday, and bye-bye. Thank you, thank you.
