How to optimize over all adjustable parameters of the neural network in PyTorch?

by Agnieszka Ulrich / Friday, 14 June 2024 / Published in Artificial Intelligence, EITC/AI/DLPP Deep Learning with Python and PyTorch, Data, Datasets

In the domain of deep learning, particularly when working with the PyTorch framework, optimizing the parameters of a neural network is a fundamental task: it is the process through which the model is trained to achieve high performance on a given dataset.

PyTorch provides several optimization algorithms, one of the most popular being the Adam optimizer, which stands for Adaptive Moment Estimation.

The Adam optimizer is an extension of stochastic gradient descent (SGD) that has gained popularity due to its efficiency and effectiveness in training deep neural networks. It combines the advantages of two other extensions of SGD: AdaGrad and RMSProp. Adam computes individual adaptive learning rates for different parameters from estimates of the first and second moments of the gradients.

To utilize the Adam optimizer in PyTorch, one typically employs the `torch.optim.Adam` class. The function `optim.Adam(net.parameters())` is used to initialize the Adam optimizer with the parameters of the neural network `net`. Here is a detailed explanation of how this process works and why it is used:

Understanding the Adam Optimizer

The Adam optimizer updates the network parameters based on the following formulas:

1. First Moment Estimate (Mean of Gradients):

    \[    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t    \]

where m_t is the first moment estimate, \beta_1 is the exponential decay rate for the first moment estimates, and g_t is the gradient at time step t.

2. Second Moment Estimate (Uncentered Variance of Gradients):

    \[    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2    \]

where v_t is the second moment estimate, and \beta_2 is the exponential decay rate for the second moment estimates.

3. Bias-Corrected Estimates:

    \[    \hat{m_t} = \frac{m_t}{1 - \beta_1^t}    \]

    \[    \hat{v_t} = \frac{v_t}{1 - \beta_2^t}    \]

4. Parameter Update Rule:

    \[    \theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{\hat{v_t}} + \epsilon} \hat{m_t}    \]

where \theta_t represents the parameters at time step t, \eta is the learning rate, and \epsilon is a small constant to prevent division by zero.
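
To make these formulas concrete, below is a minimal, framework-free sketch of a single Adam update written in plain Python with NumPy. This is an illustrative approximation rather than PyTorch's actual implementation; the variable names mirror the symbols above, and the values used for \beta_1, \beta_2 and \epsilon (0.9, 0.999 and 1e-8) are the commonly used Adam defaults.

python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # First moment estimate: exponential moving average of the gradients
    m = beta1 * m + (1 - beta1) * grad
    # Second moment estimate: exponential moving average of the squared gradients
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the early time steps
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative usage on a small parameter vector with a placeholder gradient
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 4):
    grad = np.array([0.1, -0.2, 0.3])
    theta, m, v = adam_step(theta, grad, m, v, t)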

Implementation in PyTorch

To use the Adam optimizer in PyTorch, you need to follow these steps:

1. Define the Neural Network:

python
   import torch
   import torch.nn as nn

   class Net(nn.Module):
       def __init__(self):
           super(Net, self).__init__()
           # Fully connected layers: 784 inputs (e.g. a flattened 28x28 image) -> 128 -> 64 -> 10 classes
           self.fc1 = nn.Linear(784, 128)
           self.fc2 = nn.Linear(128, 64)
           self.fc3 = nn.Linear(64, 10)

       def forward(self, x):
           # ReLU activations on the hidden layers; raw logits are returned from the output layer
           x = torch.relu(self.fc1(x))
           x = torch.relu(self.fc2(x))
           x = self.fc3(x)
           return x

   net = Net()
   

2. Initialize the Adam Optimizer:

python
   import torch.optim as optim

   optimizer = optim.Adam(net.parameters(), lr=0.001)
   

Here, `net.parameters()` returns an iterator over the parameters of the network, and `lr` specifies the learning rate.
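
To see exactly which tensors `net.parameters()` hands over to the optimizer, you can iterate over `net.named_parameters()`. This is an optional sanity-check snippet (not part of the original example); for the `Net` defined above it lists the weight and bias of each of `fc1`, `fc2` and `fc3`:

python
   # Optional: inspect every adjustable (learnable) tensor the optimizer will update
   for name, param in net.named_parameters():
       print(name, tuple(param.shape), param.requires_grad)
   # Expected entries: fc1.weight, fc1.bias, fc2.weight, fc2.bias, fc3.weight, fc3.bias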

3. Training Loop:

python
   criterion = nn.CrossEntropyLoss()
   for epoch in range(10):  # loop over the dataset multiple times
       running_loss = 0.0
       for inputs, labels in dataloader:
           optimizer.zero_grad()  # zero the parameter gradients
           outputs = net(inputs)
           loss = criterion(outputs, labels)
           loss.backward()  # backward pass to calculate the gradients
           optimizer.step()  # update the parameters

           running_loss += loss.item()
       print(f'Epoch {epoch+1}, Loss: {running_loss/len(dataloader)}')
   

In this loop, `optimizer.zero_grad()` clears the gradients of all optimized `torch.Tensor`s. The `loss.backward()` call computes the gradient of the loss with respect to the parameters (i.e., backpropagation), and `optimizer.step()` updates the parameters based on the computed gradients.
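
The loop above assumes a `dataloader` object already exists. As a minimal, self-contained illustration (the shapes here are assumptions chosen to match the 784-dimensional input and 10 output classes of `Net`, e.g. flattened 28x28 images), such a loader could be constructed from random tensors with `torch.utils.data`:

python
   from torch.utils.data import TensorDataset, DataLoader

   # Dummy data: 1000 flattened 28x28 "images" and random labels from 10 classes
   inputs = torch.randn(1000, 784)
   labels = torch.randint(0, 10, (1000,))
   dataset = TensorDataset(inputs, labels)
   dataloader = DataLoader(dataset, batch_size=32, shuffle=True)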

Advantages of Using Adam

1. Adaptive Learning Rates: Adam adjusts the learning rates of individual parameters, which can lead to faster convergence (see the configuration snippet after this list).
2. Efficient Computation: It is computationally efficient and has low memory requirements.
3. Combines Benefits of AdaGrad and RMSProp: Adam incorporates the benefits of both AdaGrad (adaptive learning rates) and RMSProp (squared gradients), making it robust and effective.
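
For completeness, the decay rates \beta_1 and \beta_2 and the constant \epsilon from the formulas above map directly onto the `betas` and `eps` arguments of `torch.optim.Adam`. The snippet below simply spells out PyTorch's default values for these arguments; in most cases they can be left untouched:

python
optimizer = optim.Adam(
    net.parameters(),
    lr=0.001,            # learning rate (eta)
    betas=(0.9, 0.999),  # (beta1, beta2): decay rates of the first and second moment estimates
    eps=1e-8,            # epsilon: small constant added to the denominator for numerical stability
    weight_decay=0.0     # optional L2 penalty, disabled by default
)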

Example of Practical Use

Consider a scenario where you are training a convolutional neural network (CNN) for image classification. Using the Adam optimizer can significantly enhance the training process:

python
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # Two convolutional layers followed by two fully connected layers
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        # 9216 = 64 * 12 * 12 features after the convolutions and pooling (assuming 1x28x28 inputs)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)   # 2x2 max pooling halves the spatial dimensions
        x = torch.flatten(x, 1)  # flatten everything except the batch dimension
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        return x

cnn = CNN()
optimizer = optim.Adam(cnn.parameters(), lr=0.001)

In this example, the CNN has two convolutional layers followed by two fully connected layers. The Adam optimizer is initialized with the parameters of the CNN. During training, the optimizer will adaptively adjust the learning rates of the parameters, leading to efficient training.
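
As a quick shape check (an optional snippet assuming, as the 9216-unit layer suggests, single-channel 28x28 inputs such as MNIST images), a forward pass on a dummy batch confirms that the network produces one logit per class:

python
dummy = torch.randn(8, 1, 28, 28)  # batch of 8 single-channel 28x28 images
out = cnn(dummy)
print(out.shape)                   # expected: torch.Size([8, 10])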

Using `optim.Adam(net.parameters())` in PyTorch is an effective way to optimize over all adjustable parameters of a neural network, since `net.parameters()` exposes every learnable tensor in the model to the optimizer. Adam's ability to adaptively adjust learning rates for individual parameters, combined with its computational efficiency, makes it a suitable choice for training deep learning models. By following the steps outlined above, one can use the Adam optimizer to achieve better performance and faster convergence when training neural networks.

