4.6 Experiment on Regularization


Three regularization methods (dropout, L1 norm, and L2 norm) are compared on the Dog-vs-Cat dataset. As shown in Fig. 1, the model achieves better performance with dropout, although this result holds only for this dataset under the current model setting.
Dropout can be added with nn.Dropout() in PyTorch, and L2 regularization can be applied by setting the optimizer's weight_decay to a value larger than 0.

    import torch
    import torch.nn as nn

    # adding dropout after a linear layer
    self.classifier = nn.Sequential(
        nn.Linear(in_features, out_features),
        nn.Dropout(),
    )

    # applying L2 regularization to the weights: weight_decay must be > 0
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=1e-4)

    # computing the L1 norm of the network parameters
    net_para = [x.view(-1) for x in self.net.parameters()]
    l1_norm = torch.norm(torch.cat(net_para), 1)
Figure 1: Model performance on the Dog-vs-Cat dataset with different regularization methods: (left) validation accuracy; (right) training loss.
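Unlike weight_decay, the L1 norm computed above does not take effect on its own; it has to be added to the training loss as a penalty term. The sketch below shows one full update step combining dropout and an L1 penalty; the tiny model, random data, and the coefficient `l1_lambda` are placeholders for illustration, not the experiment's actual setup.

```python
import torch
import torch.nn as nn

# placeholder model with dropout after the linear layer
model = nn.Sequential(nn.Linear(4, 2), nn.Dropout())

# L2 regularization via weight_decay, as in the section above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

# random placeholder batch: 8 samples, 4 features, 2 classes
x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))

l1_lambda = 1e-3  # assumed penalty coefficient

logits = model(x)
# L1 norm over all parameters, added to the loss as a penalty
l1_norm = torch.norm(torch.cat([p.view(-1) for p in model.parameters()]), 1)
loss = criterion(logits, y) + l1_lambda * l1_norm

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the L1 term depends on the parameters, its gradient flows back through `loss.backward()` and shrinks the weights toward zero, which is what distinguishes it from simply logging the norm.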