Optimizing Model Parameters
Now that we have a model and data, it’s time to train, validate and test our model by optimizing its parameters on our data. Training a model is an iterative process; in each iteration the model makes a guess about the output, calculates the error in its guess (loss), collects the derivatives of the error with respect to its parameters (as we saw in the previous section), and optimizes these parameters using gradient descent. For a more detailed walkthrough of this process, check out this video on backpropagation from 3Blue1Brown.
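To make the gradient-descent update concrete, here is a minimal, self-contained sketch of a single update step. It is not part of the tutorial code; the parameter, target, and learning rate are made-up illustrative values:

import torch

# Illustrative only: a made-up scalar parameter and target
w = torch.tensor([2.0], requires_grad=True)
target = torch.tensor([5.0])
lr = 0.1  # made-up learning rate

loss = ((w - target) ** 2).mean()  # the "error in its guess"
loss.backward()                    # collect d(loss)/d(w)

with torch.no_grad():
    w -= lr * w.grad               # gradient descent: move opposite the gradient
    w.grad.zero_()                 # reset the gradient before the next iteration

In practice we never write this update by hand; PyTorch's optimizers (introduced below) perform it for all model parameters at once.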
Prerequisite Code¶
We load the code from the previous sections on Datasets & DataLoaders and Build Model.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor

training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)

test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

train_dataloader = DataLoader(training_data, batch_size=64)
test_dataloader = DataLoader(test_data, batch_size=64)

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork()
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/train-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw/train-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/train-labels-idx1-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-images-idx3-ubyte.gz to data/FashionMNIST/raw
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting data/FashionMNIST/raw/t10k-labels-idx1-ubyte.gz to data/FashionMNIST/raw
Hyperparameters¶
Hyperparameters are adjustable parameters that let you control the model optimization process. Different hyperparameter values can impact model training and convergence rates (read more about hyperparameter tuning).
We define the following hyperparameters for training:

- Number of Epochs - the number of times to iterate over the dataset
- Batch Size - the number of data samples propagated through the network before the parameters are updated
- Learning Rate - how much to update the model’s parameters at each batch/epoch. Smaller values yield slow learning speed, while large values may result in unpredictable behavior during training.
learning_rate = 1e-3
batch_size = 64
epochs = 5
Optimization Loop¶
Once we set our hyperparameters, we can then train and optimize our model with an optimization loop. Each iteration of the optimization loop is called an epoch.
Each epoch consists of two main parts:

- The Train Loop - iterate over the training dataset and try to converge to optimal parameters.
- The Validation/Test Loop - iterate over the test dataset to check if model performance is improving.
Let’s briefly familiarize ourselves with some of the concepts used in the training loop. Jump ahead to see the Full Implementation of the optimization loop.
Loss Function¶
When presented with some training data, our untrained network is likely not to give the correct answer. The loss function measures the degree of dissimilarity between the obtained result and the target value, and it is the loss function that we want to minimize during training. To calculate the loss we make a prediction using the inputs of our given data sample and compare it against the true data label value.
Common loss functions include nn.MSELoss (Mean Square Error) for regression tasks, and nn.NLLLoss (Negative Log Likelihood) for classification. nn.CrossEntropyLoss combines nn.LogSoftmax and nn.NLLLoss.

We pass our model’s output logits to nn.CrossEntropyLoss, which will normalize the logits and compute the prediction error.
# Initialize the loss function
loss_fn = nn.CrossEntropyLoss()
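As a quick illustration (the tensors below are made up and not part of the tutorial), the loss function takes raw logits of shape (batch, classes) together with integer class labels and returns a single scalar:

dummy_logits = torch.randn(3, 10)       # hypothetical batch of 3 samples, 10 classes
dummy_labels = torch.tensor([1, 0, 4])  # hypothetical true class indices
print(loss_fn(dummy_logits, dummy_labels).item())  # one scalar loss for the batch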
Optimizer¶
Optimization is the process of adjusting model parameters to reduce model error in each training step. Optimization algorithms define how this process is performed (in this example we use Stochastic Gradient Descent). All optimization logic is encapsulated in the optimizer object. Here, we use the SGD optimizer; additionally, there are many different optimizers available in PyTorch such as Adam and RMSProp, which work better for different kinds of models and data.
We initialize the optimizer by registering the model’s parameters that need to be trained, and passing in the learning rate hyperparameter.
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
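Switching to one of the other optimizers mentioned above only changes this single line; the commented alternatives below are illustrative, and the rest of this tutorial continues with SGD:

# optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)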
Inside the training loop, optimization happens in three steps (a single-batch sketch follows this list):

- Call optimizer.zero_grad() to reset the gradients of model parameters. Gradients by default add up; to prevent double-counting, we explicitly zero them at each iteration.
- Backpropagate the prediction loss with a call to loss.backward(). PyTorch deposits the gradients of the loss w.r.t. each parameter.
- Once we have our gradients, we call optimizer.step() to adjust the parameters by the gradients collected in the backward pass.
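Put together, one optimization step on a single batch looks roughly like the following sketch (using the model, loss_fn, optimizer, and train_dataloader defined earlier; the full implementation below wraps these steps in loops over the data):

X, y = next(iter(train_dataloader))  # one batch, for illustration only

optimizer.zero_grad()         # 1. reset parameter gradients
pred = model(X)               #    forward pass
loss = loss_fn(pred, y)       #    compute the loss
loss.backward()               # 2. backpropagate to compute gradients
optimizer.step()              # 3. update parameters with the collected gradients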
Full Implementation¶
We define train_loop that loops over our optimization code, and test_loop that evaluates the model’s performance against our test data.
def train_loop(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    # Set the model to training mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        # Compute prediction and loss
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * batch_size + len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")


def test_loop(dataloader, model, loss_fn):
    # Set the model to evaluation mode - important for batch normalization and dropout layers
    # Unnecessary in this situation but added for best practices
    model.eval()
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_loss, correct = 0, 0

    # Evaluating the model with torch.no_grad() ensures that no gradients are computed during test mode
    # also serves to reduce unnecessary gradient computations and memory usage for tensors with requires_grad=True
    with torch.no_grad():
        for X, y in dataloader:
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
We initialize the loss function and optimizer, and pass them to train_loop and test_loop. Feel free to increase the number of epochs to track the model’s improving performance.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

epochs = 10
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train_loop(train_dataloader, model, loss_fn, optimizer)
    test_loop(test_dataloader, model, loss_fn)
print("Done!")
Epoch 1
-------------------------------
loss: 2.298730  [   64/60000]
loss: 2.289123  [ 6464/60000]
loss: 2.273286  [12864/60000]
loss: 2.269406  [19264/60000]
loss: 2.249603  [25664/60000]
loss: 2.229407  [32064/60000]
loss: 2.227368  [38464/60000]
loss: 2.204261  [44864/60000]
loss: 2.206193  [51264/60000]
loss: 2.166651  [57664/60000]
Test Error:
 Accuracy: 50.9%, Avg loss: 2.166725

...

Epoch 10
-------------------------------
loss: 0.836395  [   64/60000]
loss: 0.910220  [ 6464/60000]
loss: 0.668506  [12864/60000]
loss: 0.874338  [19264/60000]
loss: 0.754805  [25664/60000]
loss: 0.758453  [32064/60000]
loss: 0.840451  [38464/60000]
loss: 0.806153  [44864/60000]
loss: 0.830360  [51264/60000]
loss: 0.790281  [57664/60000]
Test Error:
 Accuracy: 71.0%, Avg loss: 0.787271

Done!
Further Reading¶
Loss Functions
torch.optim
Warmstart Training a Model