In previous posts, we explored Part I of the seminal book Reinforcement Learning by Sutton and Barto [1] (*). In that part, we delved into the three fundamental techniques underlying nearly every modern Reinforcement Learning (RL) algorithm: Dynamic Programming (DP), Monte Carlo methods (MC), and Temporal Difference Learning (TD). We not only discussed algorithms from each field in depth, but also implemented each of them in Python.
Part I of the book focuses on tabular solution methods: approaches suited to problems small enough to be represented in table form. For instance, with Q-learning, we can compute and store a complete Q-table containing every possible state-action pair. In contrast, Part II of Sutton's book tackles approximate solution methods. When state and action spaces become too large, or even infinite, we must generalize. Consider the challenge of playing Atari games: the state space is far too vast to model exhaustively. Instead, deep neural networks are used to compress the state into a latent vector, which then serves as the basis for an approximated value function [2].
While we'll venture into Part II in upcoming posts, I'm excited to announce a new series: we'll benchmark all the algorithms introduced in Part I against one another. This post serves both as a summary and as an introduction to our benchmarking framework. We'll evaluate each algorithm based on how quickly and effectively it can solve increasingly larger Gridworld environments. In future posts, we plan to extend our experiments to more challenging scenarios, such as two-player games, where the differences between these methods will be even more apparent.
To summarize, in this post we will:
- Introduce the benchmark task and discuss our comparison criteria.
- Provide a chapter-by-chapter summary of the methods introduced in Sutton's book, along with initial benchmarking results.
- Identify the best-performing methods from each group and deploy them in a larger-scale benchmarking experiment.
Introducing the Benchmark Task and Experiment Planning
In this post we'll work with the Gymnasium [3] environment "Gridworld". It is essentially a maze-finding task, in which the agent has to get from the top-left corner of the maze to the bottom-right tile (the present), without falling into any of the icy lakes:
The state is a number between 0 and N − 1, where N is the number of tiles (16 in our case). There are four actions the agent can execute in each step: going left, right, up, or down. Reaching the goal yields a reward of 1; falling into the lake ends the episode without any reward.
The nice thing about this environment is that one can generate random worlds of arbitrary size. Thus, for all methods, we will plot the number of steps / updates needed to solve the environment against the environment size. In fact, Sutton does this in some parts of the book too, so we can refer to that.
Preliminaries
I'd like to start with some general notes, hoping these will guide you through my thought process.
It is not easy to compare algorithms in a "fair" way. When implementing the algorithms I primarily aimed for correctness, but also simplicity; that is, I wanted readers to easily be able to connect the Python code with the pseudocode from Sutton's book. For "real" use cases, one would certainly optimize the code more, and also use the large array of tricks common in RL, such as decaying exploration, optimistic initialization, learning rate tuning, and more. Further, one would take great care to tune the hyperparameters of the deployed algorithm.
Applied RL Techniques
Due to the large number of algorithms under investigation, I can't do that here. Instead, I identified two essential mechanisms and measured their effectiveness on an algorithm known to work quite well: Q-learning. These are:
- Intermediate rewards: instead of only rewarding the agent for reaching the goal, we reward it for progress along the maze. We quantify this by the (normalized) difference in x and y coordinates between the current and the previous state. This exploits the fact that the goal in every Gridworld environment is always at the bottom right, so higher x / y coordinates are usually better (although one can still get "stuck" when an icy lake lies between the agent and the goal). Since this difference is normalized by the number of states, its contribution is small, such that it does not overshadow the reward for reaching the goal.
- Decaying exploration: throughout this post series, the exploration-exploitation dilemma came up frequently. It describes the trade-off between exploiting states / actions already known to be good, and exploring less-visited states, potentially finding even better solutions at the risk of wasting time in less promising regions. One common technique addressing this is to decay exploration: starting with a high amount of exploration early, and slowly decaying it to lower levels. We do this by linearly decaying ε from 1 (100% random exploration) to 0.05 over 1000 steps. A short code sketch of both tricks follows below.
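To make the two tricks concrete, here is a minimal sketch of how they might look in code. The helper names (`linear_epsilon`, `intermediate_reward`) are illustrative, not the exact ones from the repository:

```python
def linear_epsilon(step, eps_start=1.0, eps_end=0.05, decay_steps=1000):
    """Linearly decay epsilon from eps_start to eps_end over decay_steps."""
    fraction = min(step / decay_steps, 1.0)
    return eps_start + fraction * (eps_end - eps_start)


def intermediate_reward(state, prev_state, n_cols, n_states):
    """Small shaping reward for progress towards the bottom-right goal.

    States are flat indices, so we first recover (row, col) coordinates.
    Normalizing by the number of states keeps the contribution small,
    so it never overshadows the goal reward of 1.
    """
    row, col = divmod(state, n_cols)
    prev_row, prev_col = divmod(prev_state, n_cols)
    return ((row - prev_row) + (col - prev_col)) / n_states
```

During training, the shaped reward is then simply `reward + intermediate_reward(next_state, state, n_cols, n_states)`, and ε-greedy action selection uses `linear_epsilon(total_steps)`.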
Let's take a look at how Q-learning performs with these techniques:

As we can see, in the baseline setup the number of steps needed grows quickly and reaches the maximal number of allowed steps (100,000), meaning the algorithm did not solve the environment within the allotted number of steps. Decaying ε alone did not contribute much either. However, adding intermediate rewards proved to be extremely effective, and the combination of this and decaying ε performed best.
Thus, for most methods to come we start with the "naïve" setting, the baseline implementation. Later we show results for the "improved" setting, consisting of intermediate rewards and decaying exploration.
Comparison Criteria
As seen in the previous section, I chose the number of steps needed until the learned policy solves the Gridworld environment as the default way of comparing methods. This seems a bit fairer than just measuring elapsed time, since time depends on the concrete implementation (similar to the concept of Big O notation), and, as mentioned above, I didn't optimize for speed. However, it is important to note that steps can also be misleading: one step in DP methods, for example, contains a loop over all states, while one step in MC and TD methods is the generation of a single episode (actually, for TD methods one usually counts one step as one value update, i.e. generating an episode consists of multiple steps; however, I made this more comparable to MC methods on purpose). Because of this, we also show elapsed time in some cases.
Experiment Structure
To reduce variance, for each Gridworld size we run each method three times, and then save the lowest number of steps needed.
The code required to run all of the following benchmarks can be found on GitHub.
Recap and Benchmarking of All Algorithms
With the fundamentals out of the way, let's properly get started. In this section we'll recap all of the algorithms introduced in Part I of Sutton's book. Further, we'll benchmark them against one another on the previously introduced Gridworld task.
Dynamic Programming
We start with Chapter 4 of Sutton's book, describing methods from DP. These can be applied to a wide variety of problems, always building on the principle of iteratively constructing larger solutions from smaller subproblems. In the context of RL, DP methods maintain a Q-table which is filled out incrementally. For this, we require a model of the environment, and then, using this model, we update the expected value of states or state-action pairs depending on the possible successor states. Sutton introduces two methods that we picked up in our corresponding post: policy and value iteration.
Let's start with policy iteration. This consists of two interleaved steps, namely policy evaluation and policy improvement. Policy evaluation uses DP to evaluate (as the name says) the current policy: we incrementally update the state estimates by using the model and the policy. Next comes policy improvement, which employs a fundamental concept of RL: according to the policy improvement theorem, any policy gets better when we change the predicted action in a single state to a better action. Following this, we construct the new policy from the Q-table in greedy fashion. This is repeated until the policy has converged.
The corresponding pseudocode looks as follows:

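As a companion to the pseudocode, here is a compact Python sketch of policy iteration. It assumes a known model in the Gymnasium toy-text format, where `P[s][a]` is a list of `(prob, next_state, reward, done)` tuples; this is a simplified sketch, not the exact benchmark implementation:

```python
import numpy as np

def policy_iteration(P, n_states, n_actions, gamma=0.99, theta=1e-8):
    policy = np.zeros(n_states, dtype=int)
    V = np.zeros(n_states)
    while True:
        # Policy evaluation: sweep until the value function stabilizes.
        while True:
            delta = 0.0
            for s in range(n_states):
                v_new = sum(p * (r + gamma * V[s2] * (not done))
                            for p, s2, r, done in P[s][policy[s]])
                delta = max(delta, abs(v_new - V[s]))
                V[s] = v_new
            if delta < theta:
                break
        # Policy improvement: act greedily w.r.t. the evaluated values.
        stable = True
        for s in range(n_states):
            q = [sum(p * (r + gamma * V[s2] * (not done))
                     for p, s2, r, done in P[s][a]) for a in range(n_actions)]
            best = int(np.argmax(q))
            stable = stable and (best == policy[s])
            policy[s] = best
        if stable:
            return policy, V
```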
Let's come to value iteration. This is very similar to policy iteration, but with a simple yet essential modification: in every loop, only one step of policy evaluation is run. It can be shown that this still converges to the optimal policy, and that overall it does so faster than policy iteration:

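A corresponding sketch of value iteration (same assumptions and model format as above, continuing with the numpy import from the previous sketch) collapses evaluation and improvement into a single greedy sweep:

```python
def value_iteration(P, n_states, n_actions, gamma=0.99, theta=1e-8):
    V = np.zeros(n_states)
    while True:
        delta = 0.0
        for s in range(n_states):
            # One greedy evaluation step: back up the best action value.
            v_new = max(
                sum(p * (r + gamma * V[s2] * (not done))
                    for p, s2, r, done in P[s][a])
                for a in range(n_actions)
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < theta:
            break
    # Derive the greedy policy from the converged values.
    policy = [int(np.argmax([sum(p * (r + gamma * V[s2] * (not done))
                                 for p, s2, r, done in P[s][a])
                             for a in range(n_actions)]))
              for s in range(n_states)]
    return policy, V
```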
For more details, here's my corresponding post about DP methods.
Results
Now it's time to see what these first two algorithms are really made of. We run the experiment sketched above, and get the following plot:

Both methods are able to solve all created Gridworld sizes in the minimal number of steps, 100. Surprising? Well, this actually shows both a strength and a weakness of DP methods, and simultaneously of our methodology. DP methods are "thorough": they require a complete model of the world, and then iterate over all states, yielding a good solution within only a few passes over all states. However, this means that all states have to be estimated until convergence, even though some of them might be much less interesting, and this scales quite badly with environment size. In fact, one measured step here contains a full sweep over all states, indicating that for these methods, time is the better measure.
For this, we get the following graph:

Now we can see an increase in the compute needed with respect to the number of states. And we can also see that, as claimed, value iteration converges much faster and scales much better. Note that the x-axis labels denote n, with the Gridworld having a size of n x n.
Monte Carlo Methods
Next in our post series on RL we covered MC methods. These can learn from experience alone, i.e. one can run them in any kind of environment without having a model of it, which is a surprising realization, and a very useful one: often we don't have this model, and other times it would be too complex and impractical to use. Consider the game of Blackjack: while we can certainly model all possible outcomes and their corresponding probabilities, it is a very tedious task, and learning to play by just playing is a very tempting idea. Because they don't use a model, MC methods are unbiased, but on the downside their estimates have high variance.
One issue when implementing these methods is making sure that all state-action pairs are continually visited, and thus updated. Since we have no model, we cannot simply iterate over all possible combinations (compare e.g. DP methods), but instead (in a way) randomly explore the environment. If we missed some states entirely because of this, we would lose the theoretical convergence guarantees, which would translate into practice.
One way of satisfying this is the exploring starts assumption (ES): we start each episode in a random state and choose the first action randomly, too. Apart from that, MC methods can be implemented rather simply: we just play out full episodes, and set the expected value of state-action pairs to the average obtained return.
MC with ES looks as follows:

To remove the assumption of ES, we can resort to two classes of algorithms: on-policy and off-policy methods. Let's start with the on-policy one.
This is actually not too different from the ES algorithm; we simply use an ε-greedy policy for generating episodes. That is, we remove the assumption of ES and use a "soft" instead of a "hard" policy for generating episodes: the policy used in every iteration is not fully greedy, but ε-greedy, which guarantees that in the limit we see all possible state-action pairs:

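For illustration, here is a minimal Python sketch of on-policy MC control for Gymnasium-style environments. Note that it uses every-visit updates for brevity, whereas the book's pseudocode is first-visit; both converge:

```python
import random
from collections import defaultdict

def on_policy_mc_control(env, n_actions, episodes=10_000, gamma=0.99, eps=0.1):
    Q = defaultdict(lambda: [0.0] * n_actions)
    counts = defaultdict(lambda: [0] * n_actions)
    for _ in range(episodes):
        # Generate one episode following the current eps-greedy policy.
        episode, done = [], False
        state, _ = env.reset()
        while not done:
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, terminated, truncated, _ = env.step(action)
            episode.append((state, action, reward))
            state, done = next_state, terminated or truncated
        # Walk the episode backwards, averaging the observed returns.
        G = 0.0
        for state, action, reward in reversed(episode):
            G = gamma * G + reward
            counts[state][action] += 1
            Q[state][action] += (G - Q[state][action]) / counts[state][action]
    return Q
```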
Off-policy methods follow the idea of splitting exploration and learning into two policies. We maintain a policy π, which we want to optimize, and a behavior policy, b.
However, we can't simply use b everywhere in our algorithm. When generating an episode with b and computing returns, we obtain:

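(The equation referenced here should be the standard identity from Sutton and Barto for returns generated by following b:

$$\mathbb{E}\left[G_t \mid S_t = s\right] = v_b(s)$$

)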
That is, the resulting value is the expected value under b, not π.
This is where importance sampling comes in. We can fix this expectation with the appropriate ratio:

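In standard notation, the corrected expectation reads:

$$\mathbb{E}\left[\rho_{t:T-1}\, G_t \mid S_t = s\right] = v_\pi(s)$$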
This ratio is defined by:

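Following Sutton and Barto:

$$\rho_{t:T-1} = \prod_{k=t}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)}$$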
In our case, we obtain the following formula:

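This should be the weighted importance sampling estimator:

$$V(s) = \frac{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1}\, G_t}{\sum_{t \in \mathcal{T}(s)} \rho_{t:T(t)-1}}$$

where $\mathcal{T}(s)$ denotes the time steps at which s was visited and $T(t)$ the end of the corresponding episode.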
(Note that this uses weighted importance sampling, instead of "naïve" importance sampling.)
We could of course compute these ratios naively in every step. However, Sutton introduces a clever scheme that updates these values (denoted by W) incrementally, which is much more efficient. In fact, in my original post I showed the naive version too, as I believe it helps with understanding. However, since here we mainly care about benchmarking, and the "naïve" and the "incremental" versions are identical up to performance, we only list the slightly more complex incremental version.
In pseudocode, the corresponding algorithm looks as follows:

Note that, as opposed to our initial post introducing these methods, where the behavior policy was simply random, here we pick a better one, namely an ε-greedy policy w.r.t. the current Q-table.
For more details, here's my corresponding post on MC methods.
Results
With that, let's compare these three algorithms on small Gridworld environments. Note that one step here denotes one full generated episode:

We observe off-policy MC timing out already at a Gridworld size of 5×5, and, although MC with ES and on-policy MC perform better, they also start to struggle with larger sizes.
This may be somewhat surprising, and disappointing for MC fans. Don't worry, we'll manage to boost this. Still, it shows a weakness of these algorithms: in "large" environments with sparse rewards, MC methods essentially have to hope to stumble upon the goal by chance, the probability of which decreases exponentially with the size of the environment.
Thus, let's try to make the task easier for the model, and use the previously introduced tricks empirically found to help the performance of TD-learning: adding intermediate rewards, and ε-decay, i.e. our "improved" setup.
Indeed, with this, all methods fare much better:

However, now MC ES is causing problems. Thus, let's put it aside and proceed without it: ES was a theoretical concept on the way to developing MC methods anyhow, and clunky to use and implement (some might remember how I implemented having the environment start in random states …):

Here, at least, we get close to the results of DP. Note that I capped the maximal number of steps at 100,000, so whenever this number shows up in the graph it means that the algorithm could not solve the environment within the given step limit. On-policy MC actually seems to perform very well, with the number of steps needed barely increasing, but off-policy MC seems to perform worse.
Discussion
To me, MC methods perform surprisingly well, since they essentially stumble around the environment randomly in the beginning, hoping to find the goal through exploration alone. However, this is of course not entirely true: their performance (speaking of on-policy MC) becomes really good only after enabling intermediate rewards, which guide the model towards the goal. In this setup MC methods seem to perform very well, one potential reason being that they are unbiased, and less sensitive to hyperparameter tuning and the like.
Temporal-Difference Learning
Let's come to TD methods. These can be seen as combining the strengths of both approaches previously introduced: similar to MC, they don't need a model of the environment, but they still build upon previous estimates, i.e. they bootstrap, as in DP.
Let's recap how DP and MC methods update their estimates:
DP methods turn the Bellman equation into an update rule, and compute the value of a state based on the estimated values of its successor states:

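In Sutton and Barto's notation, this iterative policy evaluation update reads:

$$v_{k+1}(s) = \sum_a \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma\, v_k(s')\right]$$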
MC methods, on the other hand, play out full episodes and then update their value estimates based on the observed return:

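Expressed as an incremental update with step size α, this is:

$$V(S_t) \leftarrow V(S_t) + \alpha\left[G_t - V(S_t)\right]$$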
TD methods combine these two ideas. They play out full episodes, but update the value estimates after every step, using the observed reward and the previous estimate:

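This yields the TD(0) update:

$$V(S_t) \leftarrow V(S_t) + \alpha\left[R_{t+1} + \gamma\, V(S_{t+1}) - V(S_t)\right]$$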
Some of the most fundamental RL algorithms stem from this field, and we'll discuss them in the following.
Let's begin with Sarsa. First, we modify the above update rule to work with state-action pairs:

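That is:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\left[R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)\right]$$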
With this, Sarsa is introduced rather quickly: we play episodes, and update values following our current policy. The name comes from the tuples used in the updates: (S, A, R, S', A').

In pseudocode this looks as follows:

Next up we have Q-learning. This is very similar to Sarsa, with one key difference: it is an off-policy algorithm. Instead of simply following the executed transition during the update, we take the maximum Q-value over all actions in the successor state:

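In formula form:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\left[R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t)\right]$$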
You can picture this as creating a behavior policy b which is equal to π, except for being greedy in the transitions in question.
The pseudocode looks like this:

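For reference, a minimal tabular version in Python might look like this (a sketch for Gymnasium-style environments, not the tuned benchmark implementation):

```python
import random
from collections import defaultdict

def q_learning(env, n_actions, episodes=10_000, alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(lambda: [0.0] * n_actions)
    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # eps-greedy behavior policy.
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, terminated, truncated, _ = env.step(action)
            # Off-policy target: greedy value of the successor state.
            target = reward + (0.0 if terminated else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state, done = next_state, terminated or truncated
    return Q
```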
Another algorithm is Expected Sarsa, which (you guessed it) is an extension of Sarsa. Instead of following the single transition executed by the policy, we account for all possible actions in the successor state, and weigh them by how likely they are under the current policy:

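That is:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha\left[R_{t+1} + \gamma \sum_a \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t)\right]$$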
The last algorithm in this chapter is an extension of Q-learning. Q-learning suffers from a problem known as maximization bias: since it uses a maximum over estimated values, the resulting estimate can be positively biased. We can address this by using two Q-tables: in each update we use one for choosing a value-maximizing action, and the other for computing the update target. Which one is used where is determined by a coin flip. The algorithm is called Double Q-learning:

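With probability 0.5 the update (in Sutton and Barto's notation) is:

$$Q_1(S_t, A_t) \leftarrow Q_1(S_t, A_t) + \alpha\left[R_{t+1} + \gamma\, Q_2\!\left(S_{t+1}, \operatorname*{argmax}_a Q_1(S_{t+1}, a)\right) - Q_1(S_t, A_t)\right]$$

and otherwise the same update with the roles of $Q_1$ and $Q_2$ swapped.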
Results
Let's take a look at the results, starting with the naïve setting:

We can see that both Q-learning variants start running into problems at Gridworld sizes of 11 x 11.
Thus, let's apply our known tricks, yielding the "improved" setup:

All methods can now find solutions significantly quicker; only Expected Sarsa falls behind. This may well be plausible: it is used considerably less than Q-learning or Sarsa, and is perhaps more of a theoretical concept.
Thus, let's proceed without this method and see how large a world size we can solve:

Q-learning can now also solve grid sizes of 25 x 25 without problems, but Sarsa and Double Q-learning start to degrade.
More details can be found in my introductory post about TD methods.
Discussion
In the improved setup, TD methods generally perform well. We only eliminated Expected Sarsa early, which is not such a common algorithm anyhow.
"Simple" Sarsa and Double Q-learning struggle with larger environment sizes, while Q-learning performs well overall. The latter is somewhat surprising, since Double Q-learning should address some of the shortcomings of standard Q-learning, especially the high variance. Possibly we already reduce the variance by running each experiment n times. Another hypothesis could be that Double Q-learning takes longer to converge, since the number of parameters has doubled, which would indicate that the power of Double Q-learning shows better on more complex problems, given more time.
As mentioned, Q-learning performs better than Sarsa. This mirrors what one can see in research and the literature, namely that Q-learning is significantly more popular. This can probably be explained by it being off-policy, which usually yields more powerful solution methods. Sarsa, on the other hand, performs better on stochastic or "dangerous" tasks: since in Sarsa the actually chosen action is taken into account in the value update, it better understands the consequences of its actions, which comes in handy for stochastic environments and / or environments where one can, e.g., fall off a cliff. Despite the latter being the case here, the environment is probably not complex or large enough for this effect to come into play.
TD-n
TD-n methods in a way marry classical TD learning and MC methods. As Sutton so nicely puts it, they "free us from the tyranny of the timestep" [1]. In MC methods, we are forced to wait for a full episode before making any updates. In TD methods, we update estimates in every step, but are also forced to look only one step into the future.
Thus, it makes sense to introduce n-step returns:

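For state-action values, the n-step return is:

$$G_{t:t+n} = R_{t+1} + \gamma R_{t+2} + \dots + \gamma^{n-1} R_{t+n} + \gamma^n\, Q(S_{t+n}, A_{t+n})$$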
With that, we can simply introduce Sarsa-n:

We play episodes following the current policy, and then update the value estimates with the n-step return.
In my corresponding post, we also introduce an off-policy version of this. However, to not blow up this post too much, and given the negative experience with off-policy MC methods, we focus on the "classics", such as Sarsa-n, and on n-step tree backup, which we introduce next.
n-step tree backup is an extension of the previously seen Expected Sarsa. When computing the n-step return, the corresponding transition tree looks as follows:

That is, there is a single path down the tree corresponding to the actual actions taken. Just as in Expected Sarsa, we now want to weigh actions according to their probability under the policy. But since we now have a tree of depth > 1, the cumulative value of later levels is weighted by the probability of the action taken to reach those levels:

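Written recursively, as in Sutton and Barto, the tree-backup return is:

$$G_{t:t+n} = R_{t+1} + \gamma \sum_{a \neq A_{t+1}} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) + \gamma\, \pi(A_{t+1} \mid S_{t+1})\, G_{t+1:t+n}$$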
The pseudocode looks as follows:

Here's my corresponding post on n-step TD methods.
Results
As usual, we start with the "naïve" setting, and obtain the following results:

Sarsa-n starts to struggle already at smaller grid world sizes. Let's see if the improved setup changes this:

Now Sarsa-n indeed performs much better, but n-step tree backup does not.
Discussion
I found this finding unexpected and somewhat hard to explain. I'd love to hear your thoughts on this, but in the meantime I was chatting with my chat agent of choice, and came to this hypothesis: intermediate rewards likely confuse the tree algorithm, since it needs to learn a matching return distribution over all possible actions. Further, the more ε decays, the more the expected distribution may differ from the behavior policy.
Model-Based Reinforcement Learning / Planning
In the previous chapter we discussed the topic of "planning"; in the RL context, this mainly refers to model-based methods. That is: we have (or build) a model of the environment, and use this model to explore further "virtually", and in particular use these explorations for more and better updates / learning of the value function. The following image displays the integration of planning into learning very nicely:

In the top-right corner we see the "classical" RL training loop (also dubbed "direct" RL): starting with some value function / policy, we act in the (real) environment, and use this experience to update our value function (or policy, in the case of policy-gradient methods). When incorporating planning, we additionally learn a model of the world from this experience, and then use this model to generate further (virtual) experience, and update our value or policy function from that as well.
This is in fact exactly the Dyna-Q algorithm, which looks as follows in pseudocode:

Steps (a) to (d) are our classical Q-learning, while the rest of the algorithm adds the novel planning functionality, in particular the learning of the world model.
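A minimal Python sketch of Dyna-Q might look as follows. It assumes a deterministic environment, so the model can store a single outcome per state-action pair; the comments mirror the pseudocode steps:

```python
import random
from collections import defaultdict

def dyna_q(env, n_actions, episodes=1_000, n_planning=50,
           alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(lambda: [0.0] * n_actions)
    model = {}  # (state, action) -> (reward, next_state, terminated)
    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # (a)-(b): eps-greedy action in the real environment.
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, terminated, truncated, _ = env.step(action)
            # (c): direct RL, i.e. one regular Q-learning update.
            target = reward + (0.0 if terminated else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            # (d): model learning (deterministic world assumed).
            model[(state, action)] = (reward, next_state, terminated)
            # (e)-(f): planning, i.e. Q-updates on simulated experience.
            for _ in range(n_planning):
                (s, a), (r, s2, term) = random.choice(list(model.items()))
                t = r + (0.0 if term else gamma * max(Q[s2]))
                Q[s][a] += alpha * (t - Q[s][a])
            state, done = next_state, terminated or truncated
    return Q
```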
Another related algorithm is Prioritized Sweeping, which changes how we sample states for the "planning loop": we explore and play in the real environment while learning the model, and save state-action pairs with large expected value changes to a queue. Only from this queue do we then start the "planning loop", i.e. something akin to steps (e) and (f) above:

More details can be found in my earlier post on model-based RL methods.
Results
Let's start with the naïve setting:

Dyna-Q performs rather well, while Prioritized Sweeping struggles early on.
In the improved setting we see a similar picture:

Discussion
Prioritized Sweeping already performed poorly in the corresponding introductory post. I suspect there is either some issue or, more likely, this is simply a "tuning" matter, i.e. using a wrong sampling distribution.
Dyna-Q yields solid results.
Benchmarking the Best Algorithms
We have now seen the performance of all algorithms from Part I of Sutton's book by benchmarking them per chapter on Gridworlds of up to size 25 x 25. Already here we saw better- and worse-performing algorithms, and in particular already discarded a few candidates not suited for larger environments.
Now we want to benchmark the remaining ones, the best from each chapter, against one another on Gridworlds of up to size 50 x 50.

These algorithms are:
- Value Iteration
- On-policy MC
- Q-learning
- Sarsa-n
- Dyna-Q
Results
Here's how they perform on Gridworld, this time with a maximal step limit of 200,000:

Let's also plot the corresponding time needed (note that I plot unsuccessful runs, i.e. runs reaching the maximal number of steps without producing a feasible policy, at 500s):

We can observe several interesting facts from these figures:
- The number of steps and the time needed are highly correlated.
- Value Iteration performs exceptionally well, solving even Gridworlds of size 50 x 50 with ease, and doing so orders of magnitude faster than the next-best algorithm.
- The ranking of the remaining algorithms is (better to worse): On-policy MC, Dyna-Q, Q-learning, Sarsa-n.
In the next section we discuss these in more detail.
Discussion
1. Steps vs. Time
We started this post with a discussion of which metric to use, and, in particular, whether to use the number of steps or the time needed to solve the problem. Looking back, we can say that this discussion was not so relevant after all: somewhat surprisingly, these two numbers are highly correlated. This holds even though, as initially described, one "step" can mean different things depending on the algorithm.
2. Value Iteration Dominates
Value Iteration performed remarkably well, solving even large Gridworlds (up to 50×50) with ease, outpacing all other algorithms by a wide margin. This may be surprising, considering that DP methods are often regarded as theoretical tools rarely used in practice. Real-world applications tend to favor methods like Q-learning [2], PPO [4], or MCTS [5].
So why does such a "textbook" method dominate here? Because this environment is tailor-made for it:
- The model is fully known.
- The dynamics are simple and deterministic.
- The state space is relatively small.
These are exactly the conditions under which DP thrives. In contrast, model-free methods like Q-learning are designed for settings where such knowledge is not available. Their strength lies in generality and scalability, not in exploiting small, well-defined problems. Q-learning incurs high variance and requires many episodes to converge, disadvantages that are magnified in small-scale environments. In short, there is a clear trade-off between efficiency and generality. We'll revisit this point in a future post when we introduce function approximation, where Q-learning has more room to shine.
3. A Ranking Emerges
Beyond Value Iteration, we observed the following performance ranking: On-policy MC > Dyna-Q > Q-learning > Sarsa-n
On-policy Monte Carlo emerged as the best-performing model-free algorithm. This matches our earlier reasoning: MC methods are simple, unbiased, and well-suited to problems with deterministic goals, especially when episodes are relatively short. While not scalable to large or continuous problems, MC methods seem to be quite effective in small- to medium-sized tasks like Gridworld.
Dyna-Q comes next. This result reinforces our expectations: Dyna-Q blends model-based planning with model-free learning. Although the model is learned (not given, as in Value Iteration), it is still simple and deterministic here, making the learned model useful. This boosts performance considerably over purely model-free approaches.
Q-learning, while still powerful, underperforms in this context for the reasons discussed above: it is a general-purpose algorithm that cannot fully leverage the structure of simple environments.
Sarsa-n landed in last place. A likely explanation is the added bias introduced through bootstrapping in its multi-step updates. Unlike Monte Carlo methods, which estimate returns from full trajectories (unbiased), Sarsa-n uses bootstrapped estimates of future rewards. In small environments, this bias can outweigh the benefits of reduced variance.
Finally, let's compare our results with those from Sutton:

Note that Sutton lists the total number of states on the x-axis, while we list n, with the total number of states being n x n. For 376 states, Sutton reports ~100k steps before the optimal solution is found, while we report 75k for 400 states (20 x 20), considering Dyna-Q. The numbers are remarkably similar and provide a reassuring validation of our setup and implementation.
Conclusion
This post served both as a recap of our series on Part I of Sutton and Barto's Reinforcement Learning [1] and as an extension beyond the book's scope, by benchmarking all of the introduced algorithms on increasingly larger Gridworld environments.
We began by outlining our benchmarking setup, then revisited the core chapters of Part I: Dynamic Programming, Monte Carlo methods, Temporal-Difference learning, and Model-Based RL / Planning. In each section, we introduced key algorithms such as Q-learning, provided full Python implementations, and evaluated their performance on Gridworlds of up to size 25×25. The goal of this initial round was to identify the top performers from each algorithmic family. Based on our experiments, the standouts were:
Value Iteration, On-policy MC, Q-learning, Sarsa-n, and Dyna-Q. Python code to reproduce these results, and in particular implementations of all discussed methods, is available on GitHub.
Next, we stress-tested these top performers on larger environments (up to 50×50) and observed the following ranking:
Value Iteration > On-policy MC > Dyna-Q > Q-learning > Sarsa-n
While this result may be surprising, given the widespread use of Q-learning and the comparatively rare application of Value Iteration and MC methods, it makes sense in context. Simple, fully-known, deterministic environments are ideal for Value Iteration and MC methods. In contrast, Q-learning is designed for more complex, unknown, and high-variance environments where function approximation becomes necessary. As we discussed, there is a trade-off between efficiency in structured tasks and generality in complex ones.
That brings us to what's next. In upcoming posts, we'll push the boundaries further:
- First, by benchmarking these methods in more challenging environments, such as two-player games, where direct competition will expose their differences more starkly.
- Then, we'll dive into Part II of Sutton's book, where function approximation is introduced. This unlocks the ability to scale reinforcement learning to environments far beyond what tabular methods can handle.
If you've made it this far: thanks for reading! I hope you enjoyed this deep dive, and I'd love to have you back for the next installment in the series.
Other Posts in this Series
References
[1] http://incompleteideas.net/book/RLbook2020.pdf
[2] https://arxiv.org/abs/1312.5602
[3] https://gymnasium.farama.org/index.html
[4] https://arxiv.org/abs/1707.06347
[5] https://arxiv.org/abs/1911.08265
(*) Images from [1] used with permission from the authors.