This module contains the basics for building and training models.
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Used in notebook but not needed in package.
import numpy as np
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from htools import assert_raises
Optimizers
Optimizers like Adam or RMSProp can contain multiple "parameter groups", each with its own learning rate. (Other hyperparameters can vary between groups as well, but we ignore that for now.) The functions below let us create a new optimizer or update an existing one. This makes it easy to use differential learning rates, though that is not required: we can also use the same LR for every parameter group.
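As a rough illustration of the underlying idea (this is plain PyTorch, not the functions defined in this module; the model and LR values are made up), the snippet below builds an Adam optimizer with two parameter groups at different learning rates, then updates those rates on the existing optimizer.

import torch
import torch.nn as nn

# Toy model: treat the first Linear as "early" layers and the second as "late" layers.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

# Each dict becomes one parameter group with its own learning rate.
optim = torch.optim.Adam([
    {'params': model[0].parameters(), 'lr': 1e-4},  # earlier layers: smaller LR
    {'params': model[2].parameters(), 'lr': 1e-3},  # later layers: larger LR
])

# Updating an existing optimizer: overwrite the 'lr' key in each group.
for group, lr in zip(optim.param_groups, [3e-5, 3e-4]):
    group['lr'] = lr

[g['lr'] for g in optim.param_groups]  # [3e-05, 0.0003]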