    What PyTorch Really Means by a Leaf Tensor and Its Grad



    This isn’t yet another explanation of the chain rule. It’s a tour through the weird side of autograd, where gradients serve physics, not just weights.

    I initially wrote this tutorial for myself during the first year of my PhD, while navigating the intricacies of gradient calculations in PyTorch. Most of the material out there is clearly designed with standard backpropagation in mind, and that’s fine, since that’s what most people need.

    However, a Physics-Informed Neural Network (PINN) is a moody beast and it needs a different sort of gradient logic. I spent some time feeding it, and I figured it might be worth sharing the findings with the community, especially with fellow PINN practitioners; maybe it’ll save someone a few headaches. But if you have never heard of PINNs, don’t worry! This post is still for you, especially if you’re into things like gradients of gradients and all that fun stuff.

    Basic terms

    Tensor, in the computer world, simply means a multidimensional array, i.e. a bunch of numbers indexed by one or more integers. To be precise, there also exist zero-dimensional tensors, which are just single numbers. Some people say that tensors are a generalization of matrices to more than two dimensions.

    If you have studied general relativity before, you may have heard that mathematical tensors have things like covariant and contravariant indices. But forget about that: in PyTorch, tensors are just multidimensional arrays. No finesse here.
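
    For illustration, here is a minimal sketch of tensors of different dimensionality (the variable names are mine, chosen just for this example):

    import torch

    scalar = torch.tensor(3.14)          # zero-dimensional tensor: a single number
    vector = torch.tensor([1., 2., 3.])  # one-dimensional tensor
    matrix = torch.zeros(2, 3)           # two-dimensional tensor

    assert scalar.ndim == 0 and vector.ndim == 1 and matrix.ndim == 2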

    A leaf tensor is a tensor that is a leaf (in the graph-theory sense) of a computation graph. We will look at these below, so this definition will make a bit more sense.

    The requires_grad property of a tensor tells PyTorch whether it should remember how this tensor is used in further computations. For now, think of tensors with requires_grad=True as variables, and of tensors with requires_grad=False as constants.

    Leaf tensors

    Let’s start by creating a few tensors and checking their properties requires_grad and is_leaf.

    import torch
    
    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    
    c = torch.tensor([5.])
    d = c * c
    
    assert a.requires_grad is True and a.is_leaf is True
    assert b.requires_grad is True and b.is_leaf is False
    assert c.requires_grad is False and c.is_leaf is True
    assert d.requires_grad is False and d.is_leaf is True  # sic!
    del a, b, c, d

    a is a leaf as expected, and b is not, because it is the result of a multiplication. a is set to require grad, so naturally b inherits this property.

    c is obviously a leaf, but why is d a leaf? The reason d.is_leaf is True stems from a specific convention: all tensors with requires_grad set to False are considered leaf tensors, as per PyTorch’s documentation:

    All Tensors that have requires_grad which is False will be leaf Tensors by convention.

    While mathematically d is not a leaf (since it results from another operation, c * c), gradient computation will never extend beyond it. In other words, there won’t be any derivative with respect to c. This allows d to be treated as a leaf.

    In a nutshell, in PyTorch, leaf tensors are either (both cases are checked in the snippet below):

    • directly created (i.e. not calculated from other tensors) and requiring grad. Example: randomly initialized neural network weights.
    • tensors that do not require gradients at all, regardless of whether they are directly created or computed. In the eyes of autograd, these are just constants. Examples:
      • any neural network input data,
      • an input image after mean removal or other operations that involve only non-gradient-requiring tensors.
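
    Here is a minimal sketch checking both cases; the tiny linear layer and the random input batch are just illustrative choices:

    layer = torch.nn.Linear(4, 2)  # weights created directly and requiring grad -> leaves
    x = torch.randn(8, 4)          # input data, requires_grad=False -> leaf by convention
    y = layer(x)                   # computed from grad-requiring tensors -> not a leaf

    assert layer.weight.is_leaf and layer.weight.requires_grad
    assert x.is_leaf and not x.requires_grad
    assert y.requires_grad and not y.is_leaf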

    A small remark for those who want to know more. The requires_grad property is inherited, as illustrated here:

    a = torch.tensor([5.], requires_grad=True)
    b = torch.tensor([5.], requires_grad=True)
    c = torch.tensor([5.], requires_grad=False)
    
    d = torch.sin(a * b * c)
    
    assert d.requires_grad == any((x.requires_grad for x in (a, b, c)))

    Code comment: all code snippets should be self-contained, apart from imports that I include only when they appear for the first time. I drop them later in order to minimize boilerplate code. I trust that the reader will be able to handle these easily.

    Grad retention

    A separate issue is gradient retention. All nodes in the computation graph, meaning all tensors used, have gradients computed if they require grad. However, only leaf tensors retain these gradients. This makes sense because gradients are typically used to update tensors, and only leaf tensors are subject to updates during training. Non-leaf tensors, like b in the first example, are not directly updated; they change as a result of changes in a, so their gradients can be discarded. However, there are scenarios, especially in Physics-Informed Neural Networks (PINNs), where you might want to retain the gradients of these intermediate tensors. In such cases, you will need to explicitly mark non-leaf tensors to retain their gradients. Let’s see:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.backward()
    
    assert a.grad is not None
    assert b.grad is None  # generates a warning

    You have probably just seen a warning:

    UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being 
    accessed. Its .grad attribute won't be populated during autograd.backward(). 
    If you indeed want the .grad field to be populated for a non-leaf Tensor, use 
    .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by 
    mistake, make sure you access the leaf Tensor instead. 
    See github.com/pytorch/pytorch/pull/30531 for more informations. 
    (Triggered internally at aten\src\ATen/core/TensorBody.h:491.)

    So let’s fix it by forcing b to retain its gradient:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()  # <- the difference
    b.backward()
    
    assert a.grad is not None
    assert b.grad is not None

    Mysteries of grad

    Now let’s take a look at the famous grad itself. What is it? Is it a tensor? If so, is it a leaf tensor? Does it require or retain grad?

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward()
    
    assert isinstance(a.grad, torch.Tensor)
    assert a.grad.requires_grad is False and a.grad.retains_grad is False and a.grad.is_leaf is True
    assert b.grad.requires_grad is False and b.grad.retains_grad is False and b.grad.is_leaf is True

    Apparently:

    – grad itself is a tensor,
    – grad is a leaf tensor,
    – grad doesn’t require grad.

    Does it retain grad? This question doesn’t make sense, because it doesn’t require grad in the first place. We will come back to the question of the grad being a leaf tensor in a moment, but for now let’s look at a few other things.

    Multiple backwards and retain_graph

    What will happen when we calculate the same grad twice?

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward()
    try:
        b.backward()
    except RuntimeError:
        """
        RuntimeError: Trying to backward through the graph a second time (or 
        directly access saved tensors after they have already been freed). Saved 
        intermediate values of the graph are freed when you call .backward() or 
        autograd.grad(). Specify retain_graph=True if you need to backward through 
        the graph a second time or if you need to access saved tensors after 
        calling backward.
        """

    The error message explains it all. This will work:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a
    b.retain_grad()
    
    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([6.])
    
    b.backward(retain_graph=True)
    print(a.grad)  # prints tensor([12.])
    
    b.backward(retain_graph=False)
    print(a.grad)  # prints tensor([18.])
    
    # b.backward(retain_graph=False)  # <- here we would get an error, because in 
    # the previous call we did not retain the graph.

    Side (but important) note: you can also observe how the gradient accumulates in a: with every call it is added.
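
    If this accumulation is not what you want, you have to reset the gradient by hand between calls. Here is a minimal sketch, resetting via a.grad = None, which is essentially what optimizers’ zero_grad() does for their parameters:

    a = torch.tensor([3.], requires_grad=True)
    b = a * a

    b.backward(retain_graph=True)
    print(a.grad)  # tensor([6.])

    a.grad = None  # drop the accumulated gradient (a.grad.zero_() works too)
    b.backward()
    print(a.grad)  # tensor([6.]) again, instead of tensor([12.])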

    The powerful create_graph argument

    How to make grad require grad?

    a = torch.tensor([5.], requires_grad=True)
    b = a * a
    b.retain_grad()
    b.backward(create_graph=True)
    
    # Here an interesting thing happens: now a.grad will require grad! 
    assert a.grad.requires_grad is True
    assert a.grad.is_leaf is False
    
    # On the other hand, the grad of b does not require grad, as before. 
    assert b.grad.requires_grad is False
    assert b.grad.is_leaf is True

    The above is very useful: a.grad, which mathematically is \( \frac{\partial b}{\partial a} \), is no longer a constant (leaf), but a regular member of the computation graph that can be used further. We will use that fact in Part 2.

    Why doesn’t b.grad require grad? Because the derivative of b with respect to b is simply 1.
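
    Continuing the snippet above, this is easy to verify directly:

    print(b.grad)  # tensor([1.])
    assert b.grad.item() == 1.0  # d b / d b is just 1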

    If backward feels counterintuitive to you now, don’t worry. We will soon switch to another method called, nomen omen, grad, which allows us to precisely choose the elements of the derivatives. But before that, two side notes:

    Side note 1: If you set create_graph to True, it also sets retain_graph to True (if not explicitly set). In the PyTorch code it looks exactly like this:

        if retain_graph is None:
            retain_graph = create_graph

    Side note 2: You probably saw a warning like this:

        UserWarning: Using backward() with create_graph=True will create a reference 
        cycle between the parameter and its gradient which can cause a memory leak. 
        We recommend using autograd.grad when creating the graph to avoid this. If 
        you have to use this function, make sure to reset the .grad fields of your 
        parameters to None after use to break the cycle and avoid the leak. 
        (Triggered internally at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\engine.cpp:1156.)
          Variable._execution_engine.run_backward(  # Calls into the C++ engine to 
        run the backward pass

    And we will follow that advice and use autograd.grad now.

    Taking derivatives with the autograd.grad function

    Now let’s move from the somewhat high-level .backward() method to the lower-level grad function, which explicitly calculates the derivative of one tensor with respect to another.

    from torch.autograd import grad
    
    a = torch.tensor([3.], requires_grad=True)
    b = a * a * a
    db_da = grad(b, a, create_graph=True)[0]
    assert db_da.requires_grad is True

    Similarly to backward, the derivative of b with respect to a can be treated as a function and differentiated further. In other words, the create_graph flag can be understood as: when calculating gradients, keep the history of how they were calculated, so that we can treat them as non-leaf tensors that require grad, and use them further.

    In particular, we can calculate the second-order derivative:

    d2b_da2 = grad(db_da, a, create_graph=True)[0]
    # Side note: the grad function returns a tuple, and its first element is what we need.
    assert d2b_da2.item() == 18
    assert d2b_da2.requires_grad is True

    As mentioned before: this is actually the key property that allows us to do PINNs with PyTorch.

    Wrapping up

    Most tutorials about PyTorch gradients focus on backpropagation in classical supervised learning. This one explored a different perspective, one shaped by the needs of PINNs and other gradient-hungry beasts.

    We learnt what leaves are in the PyTorch jungle, why gradients are retained by default only for leaf nodes, and how to retain them when needed for other tensors. We saw how create_graph turns gradients into differentiable citizens of the autograd world.

    But there are still many things to uncover, in particular why gradients of non-scalar functions require extra care, how to compute second-order derivatives without burning through all your RAM, and why slicing your input tensor is a bad idea when you need an elementwise gradient.

    So let’s meet in Part 2, where we will take a closer look at grad 👋


