Tuesday, October 3, 2023

This morning

We will soon know who the new speaker of the House of Commons is.


*     *     *


Our new Speaker of the House:

                                                                                 

*     *     *

Pytorch 7 was about using with torch.no_grad(): to tell PyTorch not to track gradients, because we were going to run test - not training - data through the model. Below, Pytorch 8: actually running new test data.
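
Roughly, the pattern looks like this (a minimal sketch with a made-up model and made-up test inputs, not the notebook's actual code):

import torch
import torch.nn as nn

# Stand-in for whatever model was trained in the earlier posts.
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))

# Made-up "new" test data: 5 samples, 3 features each.
x_test = torch.randn(5, 3)

model.eval()              # put dropout/batch-norm layers into evaluation mode
with torch.no_grad():     # no autograd graph is built - we are only predicting
    predictions = model(x_test)

print(predictions.shape)  # torch.Size([5, 1])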

                                                           

Pytorch 9:

                                                                  


*     *     *
Bias in PyTorch, from Bard:

In machine learning, a bias is a learnable parameter that is added to the output of a linear function (y = Wx + b). It shifts the output of the function up or down, which lets the layer fit data whose best fit does not pass through the origin.

In PyTorch, setting bias to True when creating a linear layer gives the layer a bias parameter. By default this parameter is initialized to small random values (not zero), and it is updated during training along with the weights.
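
A quick sketch of what that looks like in code (the layer sizes here are arbitrary):

import torch.nn as nn

layer = nn.Linear(4, 2)            # a linear layer with 4 inputs and 2 outputs
print(layer.bias.shape)            # torch.Size([2]) - one bias value per output unit
print(layer.bias.requires_grad)    # True - the bias is learned during training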

There are a few reasons why you might want to set bias to True when creating a linear layer in PyTorch:

* To allow the model to learn a shift in the output of the layer. This can be helpful for fine-tuning the accuracy of the model.
* To work together with normalization layers elsewhere in the model. For example, if a convolution is immediately followed by batch normalization, the normalization cancels out the convolution's bias and supplies its own learnable shift, so bias is often set to False for those convolution layers while staying True for linear layers that are not followed by normalization. A sketch of this pattern appears right after this list.
* To make the model a little more robust to imperfect, real-world input data. A bias term lets the layer absorb a constant offset in the inputs instead of forcing the weights to account for it.
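
Here is a minimal sketch of the batch-normalization pattern mentioned above (the layer sizes and the dummy input are made up). Because BatchNorm2d subtracts the per-channel mean and then adds its own learnable shift, a bias on the convolution right before it would just be cancelled out:

import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),  # bias would be cancelled by the normalization
    nn.BatchNorm2d(16),   # has its own learnable shift (beta) and scale (gamma)
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 10, bias=True),  # no normalization after this layer, so keep the bias
)

x = torch.randn(2, 3, 8, 8)   # made-up batch of two tiny 8x8 RGB images
print(block(x).shape)         # torch.Size([2, 10])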

However, there are also some potential downsides to setting bias to True:

* It can make the model more complex and difficult to train.
* It can lead to overfitting, especially if the model is not properly regularized.
* It can make the model less interpretable, as it can be difficult to understand how the bias parameter is affecting the output of the model.

Ultimately, whether or not to set bias to True is a decision that should be made on a case-by-case basis, depending on the specific needs of the model.

Here are some examples of when you might want to set bias to True:

* When you are training a model on a dataset that has a strong bias towards one particular class.
* When you are using batch normalization, but you want to allow the model to learn a shift in the output of the layer.
* When you are training a model on a noisy dataset.

Here are some examples of when you might want to set bias to False:

* When you are training a model on a dataset that is well-balanced.
* When you are using batch normalization and you do not want the model to learn a shift in the output of the layer.
* When you are training a model on a dataset that is not noisy.

It is also important to note that the default setting for bias in PyTorch is True: if you do not explicitly pass the bias argument when creating a linear layer, the layer will have a bias parameter.
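
That default is easy to check (a tiny sketch):

import torch.nn as nn

print(nn.Linear(4, 2).bias is None)                  # False - a bias parameter is created by default
print(nn.Conv2d(3, 16, kernel_size=3).bias is None)  # False - same default for convolution layers
print(nn.Linear(4, 2, bias=False).bias)              # None - only when bias is explicitly turned off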

*     *     *

Convolution, from ChatGPT:
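
A minimal sketch of a 2D convolution in PyTorch (the sizes below are made up): a Conv2d layer slides small learned filters over the image and produces one output channel per filter.

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

x = torch.randn(1, 3, 32, 32)   # made-up batch: one 32x32 RGB image
y = conv(x)

print(y.shape)   # torch.Size([1, 8, 32, 32]) - 8 feature maps, still 32x32 thanks to padding=1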
                                                                               

                                                                                     

