Quebec may feel mothers aren't necessarily women. Ha!! I'm
still getting pizza for dinner...
HAPPY MOTHER'S DAY!
* * *
During the training phase of the model, the decision line moves all the time.
The appearance of this line on a 2D plane is actually a bit misleading.
Two factors feed into each point, a weighted x1 and a weighted x2. The
x values place each point on the plane, the weight values give the line its
inclination, and the bias value shifts it up and down.
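To make this concrete, here is a minimal sketch in Python (the weight and bias values are made up for illustration) of how a point's two weighted inputs decide which side of the line it falls on:

    # A linear decision boundary in 2D: w1*x1 + w2*x2 + b = 0.
    # The weights set the line's inclination; the bias shifts it.
    # (These particular values are hypothetical.)
    w1, w2, b = 0.8, -1.0, 0.3

    def side(x1, x2):
        # Which side of the line does the point (x1, x2) fall on?
        score = w1 * x1 + w2 * x2 + b
        return 1 if score >= 0 else 0

    print(side(1.0, 2.0))  # 0: one side of the line
    print(side(2.0, 1.0))  # 1: the other side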
The link below, from Stanford University, shows how the decision line can be
viewed as a truth value separator:
https://cs.stanford.edu/people/eroberts/courses/soco/projects/neural-networks/Neuron/index.html
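On that page the neuron separates true inputs from false ones; as a rough sketch of the same idea, a perceptron with hand-picked weights can separate the truth table of AND (the weights below are one choice among many that work):

    # A perceptron as a truth value separator for logical AND.
    # Any line separating the corner (1,1) from the other three
    # corners of the unit square does the job; this is one choice.
    w1, w2, b = 1.0, 1.0, -1.5

    for x1 in (0, 1):
        for x2 in (0, 1):
            out = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
            print(x1, "AND", x2, "->", out)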
The next step is to measure the error our various points carry.
Each and every point has to contribute. If we multiply our
probabilities, expressed as fractions, everything starts moving toward zero. The
better way: taking advantage of the properties of logarithms and adding our
log values. This makes our error function usable...
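A quick sketch of why the logarithm helps (the probabilities are made up): the product of many fractions collapses toward zero, while the sum of their logs stays in a workable range. Taking the negative of that sum gives the usual cross-entropy error:

    import math

    # Made-up probabilities the model assigns to each point's correct class.
    probs = [0.9, 0.7, 0.8, 0.6, 0.75]

    product = math.prod(probs)  # shrinks toward 0 as points are added
    cross_entropy = -sum(math.log(p) for p in probs)  # stays well-scaled

    print(product)        # ~0.227
    print(cross_entropy)  # ~1.484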
The next notion to master: gradient descent, using tiny increments to find the
weights that best express the relationship between the two factors.
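As a minimal sketch of the idea, here is one weight being nudged in tiny increments down a made-up error curve (the learning rate and the error function are hypothetical):

    # Gradient descent in one dimension on error(w) = (w - 3)**2.
    def gradient(w):
        return 2 * (w - 3)  # derivative of (w - 3)**2

    w = 0.0
    learning_rate = 0.1  # the size of each tiny increment

    for step in range(50):
        w -= learning_rate * gradient(w)

    print(w)  # converges toward 3, the weight with the least error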