Sunday, May 14, 2023

MDay23

 Quebec may feel mothers aren't necessarily women. Ha!! I'm

still getting pizza for dinner...


HAPPY MOTHER'S DAY!

                                              *     *     *

During the training phase of the model, the decision line moves all the time.

The appearance of this line on a 2D plane is actually a bit misleading.

Each point feeds two factors into the model, a weighted x1 and a weighted x2. The
point's x values fix where it sits on the plane; the weights give the decision
line its inclination, and the bias value shifts it up and down.
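
A rough sketch of that idea in Python (toy numbers of my own, not Machine Learnia's
code): the decision line is the set of points where w1*x1 + w2*x2 + b = 0, so the
weights tilt it and the bias slides it.

import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameter values, chosen arbitrarily.
w1, w2, b = 0.8, -1.2, 0.5

# The decision line is where w1*x1 + w2*x2 + b = 0; solve for x2 to draw it.
x1 = np.linspace(-3, 3, 100)
x2 = -(w1 * x1 + b) / w2

plt.plot(x1, x2, label="decision line")
plt.xlabel("x1")
plt.ylabel("x2")
plt.legend()
plt.show()

Changing w1 or w2 changes the slope of the plotted line; changing b shifts it,
which is exactly the movement we watch during training.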


The snippet below from Stanford University shows how the decision line can be
viewed as a truth-value separator:

                                                            


https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/Neuron/index.html


Back with Machine Learnia, we now look at the notion of likelihood: how well the
probabilities the model assigns to our various points agree with their actual labels.
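
As a rough sketch (my notation, not necessarily the course's): each point with
label 1 contributes its predicted probability p, each point with label 0
contributes 1 - p, and the likelihood of the whole data set is the product of
those contributions.

import numpy as np

# Made-up labels and predicted probabilities, just to show the computation.
y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.7, 0.6])

# Each point contributes p if y = 1 and (1 - p) if y = 0.
likelihood = np.prod(p**y * (1 - p)**(1 - y))
print(likelihood)   # 0.9 * 0.8 * 0.7 * 0.6 = 0.3024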





Each and every point has to contribute. If we multiply our probabilities, each a
fraction between 0 and 1, the product quickly shrinks toward zero. The better way:
take advantage of the properties of logarithms and add the log values instead.
This is what makes the cost function usable...
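
A small demonstration of why the logarithm matters (invented numbers, not from
the course): with thousands of points the raw product underflows to zero in
floating point, while the sum of logs stays finite, because log(a*b) = log(a) + log(b).

import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.5, 0.9, size=10_000)   # 10,000 fake per-point probabilities

print(np.prod(p))          # underflows to exactly 0.0
print(np.sum(np.log(p)))   # a perfectly usable finite negative number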


The next notion to master: gradient descent. Using tiny increments, we edge toward
the weights that best express the relationship between the two factors.
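
Here is a minimal gradient-descent loop for a two-feature logistic model, assuming
the usual log-loss gradients; the data, learning rate and iteration count are all
invented for illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: two features per point, binary labels.
X = np.array([[0.5, 1.2], [1.0, 0.3], [2.0, 2.5], [3.0, 1.8]])
y = np.array([0, 0, 1, 1])

w = np.zeros(2)   # the two weights
b = 0.0           # the bias
lr = 0.1          # learning rate -- the "tiny increment"

for _ in range(1000):
    p = sigmoid(X @ w + b)          # current predicted probabilities
    dw = X.T @ (p - y) / len(y)     # gradient of the log-loss w.r.t. the weights
    db = np.mean(p - y)             # gradient w.r.t. the bias
    w -= lr * dw                    # step a little downhill
    b -= lr * db

print(w, b)   # weights and bias after training

Each pass nudges w and b against the gradient, so the decision line from the
earlier sketch drifts toward the position that separates the two classes best.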




*     *     *

Why use the sigmoid function to begin with? The original activation function is
the Heaviside step function; one moves to the sigmoid as an improvement on it,
because the sigmoid is smooth and differentiable, which gradient descent requires.
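
A quick side-by-side, as I understand it: the Heaviside step jumps from 0 to 1
with no usable gradient, while the sigmoid traces the same overall shape smoothly.

import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-6, 6, 200)

heaviside = np.where(z >= 0, 1.0, 0.0)   # hard 0/1 step: gradient is zero almost everywhere
sigmoid = 1.0 / (1.0 + np.exp(-z))       # smooth version: differentiable everywhere

plt.plot(z, heaviside, label="Heaviside")
plt.plot(z, sigmoid, label="sigmoid")
plt.xlabel("z = w1*x1 + w2*x2 + b")
plt.legend()
plt.show()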

















