    How LLMs Work: Reinforcement Learning, RLHF, DeepSeek R1, OpenAI o1, AlphaGo



    Welcome to part 2 of my LLM deep dive. If you haven't read Part 1, I highly encourage you to check it out first.

    Previously, we covered the first two major stages of training an LLM:

    1. Pre-training: learning from massive datasets to form a base model.
    2. Supervised fine-tuning (SFT): refining the model with curated examples to make it useful.

    Now, we're diving into the next major stage: Reinforcement Learning (RL). While pre-training and SFT are well-established, RL is still evolving but has become a critical part of the training pipeline.

    I've taken reference from Andrej Karpathy's widely popular 3.5-hour YouTube video. Andrej is a founding member of OpenAI, and his insights are gold; you get the idea.

    Let’s go 🚀

    What's the purpose of reinforcement learning (RL)?

    Humans and LLMs process information differently. What's intuitive for us, like basic arithmetic, isn't for an LLM, which only sees text as sequences of tokens. Conversely, an LLM can generate expert-level responses on complex topics simply because it has seen enough examples during training.

    This difference in cognition makes it difficult for human annotators to provide the "perfect" set of labels that consistently guide an LLM toward the right answer.

    RL bridges this gap by allowing the model to learn from its own experience.

    Instead of relying solely on explicit labels, the model explores different token sequences and receives feedback (reward signals) on which outputs are most useful. Over time, it learns to align better with human intent.

    Intuition behind RL

    LLMs are stochastic, meaning their responses aren't fixed. Even with the same prompt, the output varies because it's sampled from a probability distribution.

    We can harness this randomness by generating thousands or even millions of possible responses in parallel. Think of it as the model exploring different paths, some good, some bad. Our goal is to encourage it to take the better paths more often.

    To do this, we train the model on the sequences of tokens that lead to better outcomes. Unlike supervised fine-tuning, where human experts provide labeled data, reinforcement learning allows the model to learn from itself.

    The model discovers which responses work best, and after each training step, we update its parameters. Over time, this makes the model more likely to produce high-quality answers when given similar prompts in the future.
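
    To make this concrete, here is a toy Python sketch of the "sample many, reinforce the better ones" idea. The three canned answers, their rewards and the update rule are all made up for illustration; real pipelines work on token sequences with proper policy-gradient methods.

        import random
        from collections import Counter

        # Toy "policy": a probability distribution over three canned answers to one prompt.
        # A real LLM samples token by token; this is only an illustration.
        policy = {"answer_good": 0.2, "answer_ok": 0.3, "answer_bad": 0.5}

        def reward(answer: str) -> float:
            # Hypothetical reward: in practice this comes from a verifier or a reward model.
            return {"answer_good": 1.0, "answer_ok": 0.3, "answer_bad": 0.0}[answer]

        def rl_step(n: int = 1000, lr: float = 0.1) -> None:
            # Sample many responses, then nudge probability mass toward the ones
            # that scored above the group average.
            answers, probs = zip(*policy.items())
            samples = random.choices(answers, weights=probs, k=n)
            avg = sum(reward(a) for a in samples) / n
            counts = Counter(samples)
            for a in policy:
                advantage = reward(a) - avg                      # better or worse than average?
                policy[a] = max(1e-6, policy[a] + lr * advantage * counts[a] / n)
            total = sum(policy.values())
            for a in policy:                                     # renormalise to a valid distribution
                policy[a] /= total

        for _ in range(50):
            rl_step()
        print(policy)  # probability mass shifts toward "answer_good"

    The point is not the specific update rule; it's that the only supervision here is the reward signal, not labelled answers.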

    But how do we determine which responses are best? And how much RL should we do? The details are tricky, and getting them right is not trivial.

    RL is not "new": it can surpass human expertise (AlphaGo, 2016)

    A great example of RL's power is DeepMind's AlphaGo, the first AI to defeat a professional Go player and later surpass human-level play.

    In the 2016 Nature paper (graph below), when a model was trained purely by SFT (giving the model tons of good examples to imitate), it was able to reach human-level performance, but never surpass it.

    The dotted line represents the performance of Lee Sedol, the best Go player in the world.

    This is because SFT is about replication, not innovation: it doesn't allow the model to discover new strategies beyond human knowledge.

    However, RL enabled AlphaGo to play against itself, refine its strategies, and ultimately exceed human expertise (blue line).

    Image taken from the AlphaGo 2016 paper

    RL represents an exciting frontier in AI, where models can explore strategies beyond human imagination when we train them on a diverse and challenging pool of problems to refine their thinking strategies.

    RL foundations recap

    Let's quickly recap the key components of a typical RL setup:

    Image by author
    • Agent: the learner or decision maker. It observes the current situation (state), chooses an action, and then updates its behaviour based on the outcome (reward).
    • Environment: the external system in which the agent operates.
    • State: a snapshot of the environment at a given step t.

    At each timestep, the agent performs an action in the environment, which may change the environment's state to a new one. The agent also receives feedback indicating how good or bad the action was.

    This feedback is called a reward, and it is represented in numerical form. A positive reward encourages a behaviour, and a negative reward discourages it.

    By using feedback from different states and actions, the agent gradually learns the optimal strategy to maximise the total reward over time.
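
    As a toy illustration (not from any RL library), here is the agent-environment loop in Python, with a made-up one-dimensional environment and a purely random policy; learning would replace the random choice with something that uses the reward feedback.

        import random

        # Made-up toy environment: the agent starts at position 0 and wants to reach position 3.
        class LineEnv:
            def reset(self) -> int:
                self.pos = 0
                return self.pos                                   # initial state

            def step(self, action: int) -> tuple[int, float, bool]:
                self.pos = max(0, min(3, self.pos + action))      # action is -1 or +1
                reward = 1.0 if self.pos == 3 else -0.1           # positive at the goal, small penalty otherwise
                return self.pos, reward, self.pos == 3            # (next state, reward, done)

        env = LineEnv()
        state = env.reset()
        total_reward = 0.0
        done = False
        while not done:
            action = random.choice([-1, +1])        # a random policy, purely for illustration
            state, reward, done = env.step(action)  # the environment moves to a new state
            total_reward += reward                  # the feedback an agent would learn from
        print(total_reward)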

    Policy

    The policy is the agent's strategy. If the agent follows a good policy, it will consistently make good decisions, leading to higher rewards over many steps.

    In mathematical terms, it's a function that determines the probability of different actions for a given state: πθ(a|s).

    Value function

    An estimate of how good it is to be in a certain state, considering the expected future reward. For an LLM, the reward might come from human feedback or a reward model.

    Actor-Critic architecture

    It's a popular RL setup that combines two components:

    1. Actor: learns and updates the policy (πθ), deciding which action to take in each state.
    2. Critic: evaluates the value function (V(s)) to give the actor feedback on whether its chosen actions are leading to good outcomes.

    How it works:

    • The actor picks an action based on its current policy.
    • The critic evaluates the outcome (reward + next state) and updates its value estimate.
    • The critic's feedback helps the actor refine its policy so that future actions lead to higher rewards (a small numeric sketch follows below).
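
    Here is a sketch of a single actor-critic update with made-up numbers, just to show where the critic's feedback (the TD error) enters the actor's update; it is not a full training loop.

        # One actor-critic update on a single transition (all numbers are hypothetical).
        gamma = 0.99                        # discount factor
        V = {"s": 0.5, "s_next": 0.6}       # critic's current value estimates
        reward = 1.0                        # reward received after taking action a in state s

        # Critic: the temporal-difference (TD) error measures how wrong its estimate was.
        td_error = reward + gamma * V["s_next"] - V["s"]

        # Critic update: nudge V(s) toward the better target.
        critic_lr = 0.1
        V["s"] += critic_lr * td_error

        # Actor update (policy gradient): scale the action's log-probability by the TD error,
        # so actions that did better than expected become more likely.
        log_prob_action = -0.7              # log pi(a|s) under the current policy (hypothetical)
        actor_loss = -log_prob_action * td_error   # minimising this raises pi(a|s) when td_error > 0
        print(round(td_error, 3), round(V["s"], 3), round(actor_loss, 3))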

    Putting it all together for LLMs

    The state can be the current text (prompt or conversation), and the action can be the next token to generate. A reward model (e.g. human feedback) tells the model how good or bad its generated text is.

    The policy is the model's strategy for choosing the next token, while the value function estimates how useful the current text context is for eventually producing high-quality responses.
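
    Here is how those pieces map onto text generation in a toy Python sketch; the vocabulary, the random "forward pass" and the reward model are all stand-ins for the real thing.

        import math
        import random

        vocab = ["The", "sky", "is", "blue", "green", "<eos>"]

        def policy_logits(state: list[str]) -> list[float]:
            # Stand-in for the LLM forward pass: given the text so far (the state),
            # return unnormalised scores over the next token (the action).
            return [random.uniform(-1, 1) for _ in vocab]

        def sample_next_token(state: list[str]) -> str:
            logits = policy_logits(state)
            weights = [math.exp(l) for l in logits]              # softmax-style sampling
            return random.choices(vocab, weights=weights, k=1)[0]

        def reward_model(text: list[str]) -> float:
            # Stand-in for a learned reward model or human feedback:
            # this one simply prefers completions containing "blue".
            return 1.0 if "blue" in text else 0.0

        state = ["The", "sky", "is"]                 # state: the prompt / conversation so far
        while state[-1] != "<eos>" and len(state) < 8:
            state.append(sample_next_token(state))   # action: generate the next token
        print(state, reward_model(state))            # reward: scored once the response is complete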

    DeepSeek-R1 (published 22 Jan 2025)

    To highlight RL's significance, let's explore DeepSeek-R1, a reasoning model achieving top-tier performance while remaining open-source. The paper introduced two models: DeepSeek-R1-Zero and DeepSeek-R1.

    • DeepSeek-R1-Zero was trained solely via large-scale RL, skipping supervised fine-tuning (SFT).
    • DeepSeek-R1 builds on it, addressing the challenges encountered.

    Deepseek R1 is one of the most amazing and impressive breakthroughs I've ever seen — and as open source, a profound gift to the world. 🤖🫡

    — Marc Andreessen 🇺🇸 (@pmarca) January 24, 2025

    Let's dive into some of these key points.

    1. RL algo: Group Relative Policy Optimisation (GRPO)

    One key game-changing RL algorithm is Group Relative Policy Optimisation (GRPO), a variant of the widely popular Proximal Policy Optimisation (PPO). GRPO was introduced in the DeepSeekMath paper in Feb 2024.

    Why GRPO over PPO?

    PPO struggles with reasoning tasks due to:

    1. Dependency on a critic model.
      PPO needs a separate critic model, effectively doubling memory and compute.
      Training the critic can be complex for nuanced or subjective tasks.
    2. High computational cost, as RL pipelines demand substantial resources to evaluate and optimise responses.
    3. Absolute reward evaluations.
      When you rely on an absolute reward, meaning there is a single standard or metric to judge whether an answer is "good" or "bad", it can be hard to capture the nuances of open-ended, diverse tasks across different reasoning domains.

    How GRPO addresses these challenges:

    GRPO eliminates the critic model by using relative evaluation: responses are compared within a group rather than judged against a fixed standard.

    Imagine students solving a problem. Instead of a teacher grading them individually, they compare answers and learn from one another. Over time, performance converges toward higher quality.

    How does GRPO fit into the whole training process?

    GRPO modifies how the loss is calculated while keeping the other training steps unchanged:

    1. Gather data (queries + responses)
      – For LLMs, queries are like questions
      – The old policy (an older snapshot of the model) generates several candidate answers for each query
    2. Assign rewards: each response in the group is scored (the "reward").
    3. Compute the GRPO loss
      Traditionally, you'd compute a loss that shows the deviation between the model prediction and the true label.
      In GRPO, however, you measure:
      a) How likely is the new policy to produce the old responses?
      b) Are these responses relatively better or worse?
      c) Apply clipping to prevent extreme updates.
      This yields a scalar loss (see the sketch below).
    4. Backpropagation + gradient descent
      – Backpropagation calculates how each parameter contributed to the loss
      – Gradient descent updates those parameters to reduce the loss
      – Over many iterations, this gradually shifts the new policy to prefer higher-reward responses
    5. Occasionally update the old policy to match the new policy.
      This refreshes the baseline for the next round of comparisons.
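
    To make step 3 concrete, here is a simplified sketch of the group-relative part of the GRPO loss for one query, with made-up rewards and log-probabilities. It treats each response as a single unit and leaves out the per-token bookkeeping and the KL penalty to a reference model described in the paper.

        import math
        import statistics

        rewards  = [1.0, 0.0, 0.5, 0.0]      # step 2: scores for the group of responses
        logp_new = [-4.8, -6.1, -5.3, -5.9]  # log prob of each response under the new policy
        logp_old = [-5.0, -6.0, -5.5, -6.0]  # log prob under the old policy that generated them
        eps = 0.2                            # PPO-style clipping range

        # Group-relative advantage: compare each response to its own group,
        # rather than asking a separate critic model for an absolute baseline.
        mean_r = statistics.mean(rewards)
        std_r = statistics.pstdev(rewards) or 1.0
        advantages = [(r - mean_r) / std_r for r in rewards]

        loss = 0.0
        for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
            ratio = math.exp(lp_new - lp_old)            # (a) how much likelier the new policy makes this response
            clipped = max(min(ratio, 1 + eps), 1 - eps)  # (c) clip to prevent extreme updates
            loss += -min(ratio * adv, clipped * adv)     # (b) weighted by the relative advantage
        loss /= len(rewards)
        print(loss)   # the scalar that backpropagation (step 4) acts on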

    2. Chain of thought (CoT)

    Traditional LLM training follows pre-training → SFT → RL. However, DeepSeek-R1-Zero skipped SFT, allowing the model to explore CoT reasoning directly.

    Like humans thinking through a difficult question, CoT allows models to break problems into intermediate steps, boosting complex reasoning capabilities. OpenAI's o1 model also leverages this, as noted in its September 2024 report: o1's performance improves with more RL (train-time compute) and more reasoning time (test-time compute).

    DeepSeek-R1-Zero exhibited reflective tendencies, autonomously refining its reasoning. 

    A key graph (below) in the paper showed increased thinking during training, leading to longer (more tokens), more detailed and better responses.

    Image taken from the DeepSeek-R1 paper

    Without explicit programming, it began revisiting past reasoning steps, improving accuracy. This highlights chain-of-thought reasoning as an emergent property of RL training.

    The model also had an "aha moment" (below), a fascinating example of how RL can lead to unexpected and sophisticated outcomes.

    Image taken from the DeepSeek-R1 paper

    Note: Unlike DeepSeek-R1, OpenAI doesn't show the full exact reasoning chains of thought in o1, because it is concerned about a distillation risk: someone could imitate those reasoning traces and recover much of the reasoning performance just by imitation. Instead, o1 shows only summaries of these chains of thought.

    Reinforcement Learning from Human Feedback (RLHF)

    For tasks with verifiable outputs (e.g., math problems, factual Q&A), AI responses can be easily evaluated. But what about areas like summarisation or creative writing, where there's no single "correct" answer?

    This is where human feedback comes in, but naïve RL approaches are unscalable.

    Image by author

    Let's look at the naive approach with some arbitrary numbers.

    Image by author

    That's one billion human evaluations needed! This is too costly, slow and unscalable. Hence, a smarter solution is to train an AI "reward model" to learn human preferences, dramatically reducing human effort.

    Ranking responses is also easier and more intuitive than absolute scoring (a small sketch of a ranking-based loss follows below).

    Image by author
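
    A common way to turn rankings into a trainable signal is a pairwise preference loss; here is a small sketch with made-up scores, not any library's exact implementation.

        import math

        # Hypothetical reward-model scores for two responses to the same prompt,
        # where a human labeller ranked the first one higher.
        score_chosen = 1.3
        score_rejected = 0.4

        def sigmoid(x: float) -> float:
            return 1.0 / (1.0 + math.exp(-x))

        # Pairwise (Bradley-Terry style) loss: small when the preferred response
        # is scored well above the rejected one, large when the ranking is violated.
        pairwise_loss = -math.log(sigmoid(score_chosen - score_rejected))
        print(round(pairwise_loss, 3))   # ≈ 0.341; training pushes the gap wider

    Once the reward model has learned from enough human rankings, the RL loop can query it instead of asking humans every time, which is what makes RLHF scalable.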

    Upsides of RLHF

    • Can be applied to any domain, including creative writing, poetry, summarisation, and other open-ended tasks.
    • Ranking outputs is much easier for human labellers than producing creative outputs themselves.

    Downsides of RLHF

    • The reward model is an approximation: it may not perfectly reflect human preferences.
    • RL is good at gaming the reward model: if run for too long, the model might exploit loopholes, producing nonsensical outputs that still get high scores.

    Do note that RLHF is not the same as traditional RL.

    For empirical, verifiable domains (e.g. math, coding), RL can run indefinitely and discover novel strategies. RLHF, on the other hand, is more like a fine-tuning step to align models with human preferences.

    Conclusion

    And that's a wrap! I hope you enjoyed Part 2 🙂 If you haven't already read Part 1, do check it out here.

    Got questions or ideas for what I should cover next? Drop them in the comments; I'd love to hear your thoughts. See you in the next article!




