    Introduction to Approximate Solution Methods for Reinforcement Learning

By Editor Times Featured · April 24, 2026 · 9 Mins Read


This post continues our series about Reinforcement Learning (RL), following Sutton and Barto's well-known book "Reinforcement Learning: An Introduction" [1].

In the previous posts we finished dissecting Part I of said book, which introduces fundamental solution methods that form the basis for many RL techniques: Dynamic Programming (DP), Monte Carlo methods (MC), and Temporal Difference Learning (TD). What separates Part I from Part II of Sutton's book, and justifies the distinction, is a constraint on problem size: while Part I covered tabular solution methods, we now dive deeper into this fascinating topic and include function approximation.

To make this concrete, in Part I we assumed the state space of the problems under investigation to be small enough that we could represent it, as well as the learned solutions, via a simple table (imagine a table denoting a certain "goodness" – a value – for each state). In Part II, we drop this assumption and are thus able to tackle arbitrary problems.

And this changed setup is dearly needed, as we could observe first-hand: in a previous post we managed to learn to play Tic-Tac-Toe, but already failed at Connect Four – since the number of states there is on the order of 10²⁰. Or consider an RL problem that learns a task from camera images: the number of possible camera images is larger than the number of atoms in the known universe [1].

These numbers should convince everyone that approximate solution methods are absolutely essential. Besides enabling us to tackle such problems, they also offer generalization: with tabular methods, two close but still different states are treated completely separately – whereas with approximate solution methods, we may hope that our function approximation detects such close states and generalizes across them.

With that, let's begin. In the next few paragraphs, we will:

• give an introduction to function approximation
• present solution methods for such problems
• discuss different choices of approximation functions.

Introduction to Function Approximation

As opposed to tabular solution methods, for which we used a table to represent e.g. value functions, we now use a parametrized function

v̂(s, w) ≈ v_π(s)

with a weight vector

w ∈ ℝᵈ.

v̂ can be anything, such as a linear function of the input values or a deep neural network. Later in this post we will discuss different possibilities in detail.

Usually, the number of weights d is much smaller than the number of states – which yields generalization: when we update our function by adjusting some weights, we don't just update a single entry in a table – the change (possibly) affects all other estimates, too.

Let's recap the update rules from a few of the methods we saw in previous posts.

MC methods assign the observed return G as the value estimate for a state:

S_t → G_t

TD(0) bootstraps with the value estimate of the next state:

S_t → R_{t+1} + γ v̂(S_{t+1}, w_t)

While DP uses:

s → E_π[R_{t+1} + γ v̂(S_{t+1}, w_t) | S_t = s]

From now on, we will interpret updates of the form s → u as input/output pairs of a function we wish to approximate, and for this we use methods from machine learning, namely supervised learning. Tasks where numbers (u) have to be estimated are called function approximation, or regression.

To solve this problem, we can in theory resort to any method for such tasks. We will discuss this in a bit, but should mention that there are certain requirements on such methods: for one, they should be able to handle incrementally growing datasets – since in RL we usually build up experience over time, which differs from, e.g., classical supervised learning tasks. Further, the chosen method should be able to handle non-stationary targets – which we will discuss in the next subsection.

The Prediction Objective

Throughout Part I of Sutton's book, we never needed a prediction objective or similar – after all, we could always converge to the optimal function describing every state's value perfectly. For the reasons stated above, this is no longer possible – requiring us to define an objective, a cost function, which we want to optimize.

We use the following, the mean squared value error:

VE(w) = Σ_s µ(s) [v_π(s) − v̂(s, w)]²

Let's try to understand this. It is a weighted average of the squared difference between predicted and actual values, which intuitively makes sense and is common in supervised learning. Note that it requires us to define a distribution µ, specifying how much we care about certain states.
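As a quick sanity check, this objective can be computed directly for a tiny problem. The sketch below uses made-up numbers for µ, v_π, and the current approximation v̂ – all hypothetical, purely to illustrate the formula:

```python
import numpy as np

# Hypothetical toy problem: 3 states with known true values v_pi,
# an on-policy state distribution mu, and some current approximation v_hat.
mu = np.array([0.5, 0.3, 0.2])       # how much we care about each state, sums to 1
v_pi = np.array([1.0, 0.0, -1.0])    # true state values (assumed known here)
v_hat = np.array([0.8, 0.1, -0.7])   # current approximate values

# Mean squared value error: VE(w) = sum_s mu(s) * (v_pi(s) - v_hat(s, w))^2
ve = np.sum(mu * (v_pi - v_hat) ** 2)
print(ve)  # ≈ 0.041
```

States with larger µ contribute more to the error, which is exactly the "how much we care" weighting described above.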

Usually, this is simply a measure proportional to how often states are visited – the on-policy distribution, which we will focus on in this part.

However, note that it is actually not clear whether this is the right objective: in RL, we care about finding good policies. Some method might optimize the above objective extremely well and still fail to solve the problem at hand – e.g. when the policy spends too much time in undesired states. Nonetheless, as mentioned, we need some such objective – and for lack of better alternatives, we simply optimize this one.

Next, let's introduce a method for minimizing this objective.

Minimizing the Prediction Objective

The tool we pick for this task is Stochastic Gradient Descent (SGD). Unlike Sutton, I don't want to go into too many details here and will focus only on the RL part – so I refer the reader to [1] or any other tutorial on SGD / deep learning.

In principle, SGD uses batches (or mini-batches) to compute the gradient of the objective and updates the weights a small step in the direction minimizing this objective.

For our objective, this update is:

w_{t+1} = w_t + α [v_π(S_t) − v̂(S_t, w_t)] ∇v̂(S_t, w_t)

Now the interesting part: assume that we do not know the true value v_π, but only some (noisy) approximation of it, say U_t:

w_{t+1} = w_t + α [U_t − v̂(S_t, w_t)] ∇v̂(S_t, w_t)

One can show that if U_t is an unbiased estimate of v_π, the solution obtained via SGD converges to a local optimum – convenient. We can now simply use e.g. the MC return as U_t and obtain our very first gradient RL method:

(Gradient Monte Carlo algorithm – image from [1])
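To make this concrete, here is a minimal sketch of gradient Monte Carlo prediction, assuming a hypothetical 5-state random walk (episodes start in the middle, step left or right uniformly, and terminate off either end with reward −1 or +1; all names and parameters are mine). One-hot features are used, so the linear v̂ reduces to a table lookup and its gradient to the feature vector itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: 5-state random walk; episodes start in the middle,
# move left/right uniformly, and terminate off either end with reward -1 or +1.
N_STATES = 5

def run_episode():
    """Return the list of visited states and the terminal reward."""
    s, visited = 2, []
    while 0 <= s < N_STATES:
        visited.append(s)
        s += rng.choice([-1, 1])
    return visited, (1.0 if s >= N_STATES else -1.0)

def features(s):
    """One-hot features -- the tabular case expressed as linear approximation."""
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x

# Gradient Monte Carlo: w <- w + alpha * (G_t - v_hat(S_t, w)) * grad v_hat.
# For linear v_hat, the gradient is simply the feature vector x(S_t).
w = np.zeros(N_STATES)
alpha = 0.05
for _ in range(5000):
    visited, reward = run_episode()
    G = reward  # undiscounted; reward only at termination, so G_t = reward
    for s in visited:
        w += alpha * (G - features(s) @ w) * features(s)

print(np.round(w, 2))  # should lie near the true values [-0.67, -0.33, 0, 0.33, 0.67]
```

With these one-hot features the update coincides with the tabular MC update; switching to richer features changes only `features`, not the algorithm.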

It is also possible to use other estimates for U_t, in particular bootstrapped ones, i.e. previous value estimates. When doing so, we lose the convergence guarantees – but, as so often, empirically this still works. Such methods are called semi-gradient methods, since they only consider the effect of changing the weights on the value to update, but not on the target.

Based on this, we can introduce TD(0) with function approximation:

(Semi-gradient TD(0) algorithm – image from [1])
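The same hypothetical toy random walk as above can be solved with semi-gradient TD(0); the only change is the bootstrapped target. A minimal sketch under the same assumed setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5-state random walk, now solved with semi-gradient TD(0).
# One-hot features keep v_hat linear: v_hat(s, w) = w[s].
N_STATES, alpha, gamma = 5, 0.05, 1.0

def v_hat(s, w):
    return 0.0 if s is None else w[s]  # terminal states have value 0

w = np.zeros(N_STATES)
for _ in range(5000):
    s = 2  # episodes start in the middle state
    while s is not None:
        s_next = s + rng.choice([-1, 1])
        if s_next < 0:
            r, s_next = -1.0, None   # fell off the left edge
        elif s_next >= N_STATES:
            r, s_next = 1.0, None    # fell off the right edge
        else:
            r = 0.0
        # Semi-gradient TD(0): the target R + gamma * v_hat(S', w) is treated
        # as a constant, i.e. we do not differentiate through it.
        w[s] += alpha * (r + gamma * v_hat(s_next, w) - w[s])
        s = s_next

print(np.round(w, 2))  # should lie near [-0.67, -0.33, 0, 0.33, 0.67]
```

Note that only the target changed compared to the Monte Carlo version: the update now uses the current estimate of the next state instead of the full return.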

A natural extension of this, and also of the corresponding n-step tabular method, is n-step semi-gradient TD:

(n-step semi-gradient TD algorithm – image from [1])

Methods for Function Approximation

In the remainder of Chapter 9, Sutton describes different ways of representing the approximate function: a large part of the chapter covers linear function approximation and feature design for it, and for non-linear function approximation artificial neural networks are introduced. We will only briefly cover these topics, as on this blog we mainly work with (deep) neural networks rather than simple linear approximations, and also suspect the astute reader is already familiar with the basics of deep learning and neural networks.

Linear Function Approximation

Nonetheless, let's briefly discuss linear approximation. Here, the state-value function is approximated by the inner product

v̂(s, w) = wᵀ x(s) = Σᵢ wᵢ xᵢ(s),

where the state is described by a feature vector

x(s) = (x₁(s), …, x_d(s))ᵀ

– and, as we can see, the result is a linear combination of the weights.

Due to the simplicity of this representation, there are some elegant formulations (and closed-form solutions), as well as some convergence guarantees.

Feature Construction for Linear Methods

A limitation of the naive linear function approximation introduced above is that each feature enters individually; no combination of features is possible. Sutton lists the cart-pole problem as an example: here, high angular velocity can be good or bad, depending on the context. When the pole is nicely centered, one should probably avoid fast, jerky movements. However, the closer the pole gets to falling over, the faster movements are needed.

There is thus a separate branch of research on designing efficient feature representations (although one could argue that, due to the rise of deep learning, this is becoming less essential).

One such representation are polynomials. As an introductory example, imagine the state vector is comprised of two components, s₁ and s₂. We could then define the feature vector

x(s) = (1, s₁, s₂, s₁s₂)ᵀ.

Then, using this representation, we could still do linear function approximation – i.e. fit four weights to the four newly constructed features, and overall still have a function that is linear w.r.t. the weights.

More generally, the features of the order-n polynomial basis can be represented by

xᵢ(s) = Π_{j=1}^{k} s_j^{c_{i,j}}

where the exponents c_{i,j} are integers in {0, …, n}.
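This construction is easy to sketch in Python: enumerating all exponent tuples c ∈ {0, …, n}ᵏ yields the (n+1)ᵏ polynomial-basis features (the function name is mine):

```python
import numpy as np
from itertools import product

def polynomial_features(s, n):
    """Order-n polynomial basis for a state vector s = (s_1, ..., s_k):
    one feature prod_j s_j ** c_j for every exponent tuple c in {0..n}^k."""
    return np.array([np.prod([sj ** cj for sj, cj in zip(s, c)])
                     for c in product(range(n + 1), repeat=len(s))])

# The two-component example from the text: s = (s_1, s_2) with order n = 1
# yields the four features 1, s_2, s_1, and s_1 * s_2.
x = polynomial_features([2.0, 3.0], n=1)
print(x)  # [1. 3. 2. 6.]
```

These features can then be fed into the linear approximation above – the model stays linear in the weights, even though it is no longer linear in the state.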

Other commonly used bases are the Fourier basis, coarse and tile coding, and radial basis functions – but as mentioned, we won't dive deeper at this point.

    Conclusion

In this post we made an important step beyond the previous posts towards deploying RL algorithms "in the wild". In the preceding posts, we focused on introducing the essential RL methods, albeit in the form of tabular methods. We saw that they quickly reach their limits when deployed to larger problems, and thus realized that approximate solution methods are needed.

In this post we introduced the fundamentals for this. Besides enabling us to tackle large-scale, real-world problems, these methods also introduce generalization – a strong necessity for any successful RL algorithm.

We began by introducing a suitable prediction objective and ways of optimizing it.

Then we introduced our first gradient and semi-gradient RL algorithms for the prediction objective – that is, learning a value function for a given policy.

Finally, we discussed different ways of constructing the approximation function.

As always, thanks for reading! And if you are interested, stay tuned for the next post, in which we will dive into the corresponding control problem.

Other Posts in this Series

    References

[1] Sutton, R. S., and Barto, A. G., Reinforcement Learning: An Introduction (2nd ed.), http://incompleteideas.net/book/RLbook2020.pdf

    [2] https://pettingzoo.farama.org/


