
    How to Benchmark DeepSeek-R1 Distilled Models on GPQA Using Ollama and OpenAI’s simple-evals

April 24, 2025


The release of the DeepSeek-R1 model sent ripples across the global AI community. It delivered breakthroughs on par with the reasoning models from Meta and OpenAI, achieving this in a fraction of the time and at a significantly lower cost.

Beyond the headlines and online buzz, how can we assess the model’s reasoning abilities using recognized benchmarks?

DeepSeek’s user interface makes it easy to explore its capabilities, but using the model programmatically offers deeper insight and more seamless integration into real-world applications. Understanding how to run such models locally also provides enhanced control and offline access.

In this article, we explore how to use Ollama and OpenAI’s simple-evals to evaluate the reasoning capabilities of DeepSeek-R1’s distilled models on the well-known GPQA-Diamond benchmark.

    Contents

    (1) What are Reasoning Models?
    (2) What is DeepSeek-R1?
    (3) Understanding Distillation and DeepSeek-R1 Distilled Models
    (4) Selection of Distilled Model
    (5) Benchmarks for Evaluating Reasoning
    (6) Tools Used
    (7) Results of Evaluation
    (8) Step-by-Step Walkthrough

Here is the link to the accompanying GitHub repo for this article.


(1) What are Reasoning Models?

Reasoning models, such as DeepSeek-R1 and OpenAI’s o-series models (e.g., o1, o3), are large language models (LLMs) trained with reinforcement learning to perform reasoning.

Reasoning models think before they answer, producing a long internal chain of thought before responding. They excel at complex problem-solving, coding, scientific reasoning, and multi-step planning for agentic workflows.


(2) What is DeepSeek-R1?

DeepSeek-R1 is a state-of-the-art open-source LLM designed for advanced reasoning, released in January 2025 alongside the paper “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning”.

The model is a 671-billion-parameter LLM trained with extensive use of reinforcement learning (RL), based on this pipeline:

• Two reinforcement learning stages aimed at discovering improved reasoning patterns and aligning with human preferences
• Two supervised fine-tuning stages serving as the seed for the model’s reasoning and non-reasoning capabilities.

To be precise, DeepSeek trained two models:

• The first model, DeepSeek-R1-Zero, a reasoning model trained with reinforcement learning, generates data for training the second model, DeepSeek-R1.
• It achieves this by producing reasoning traces, of which only high-quality outputs are retained based on their final results.
• This means that, unlike with most models, the RL examples in this training pipeline are not curated by humans but generated by the model.

The result is a model that achieved performance comparable to leading models like OpenAI’s o1 across tasks such as mathematics, coding, and complex reasoning.


(3) Understanding Distillation and DeepSeek-R1’s Distilled Models

Alongside the full model, DeepSeek also open-sourced six smaller dense models (also named DeepSeek-R1) of various sizes (1.5B, 7B, 8B, 14B, 32B, 70B), distilled from DeepSeek-R1 with Qwen or Llama as the base model.

Distillation is a technique where a smaller model (the “student”) is trained to replicate the performance of a larger, more powerful pre-trained model (the “teacher”).

Illustration of the DeepSeek-R1 distillation process | Image by author

In this case, the teacher is the 671B DeepSeek-R1 model, and the students are the six models distilled from open-source Qwen and Llama base models.

DeepSeek-R1 was used as the teacher model to generate 800,000 training samples, a mixture of reasoning and non-reasoning samples, for distillation via supervised fine-tuning of the base models (1.5B, 7B, 8B, 14B, 32B, and 70B).

So why do we perform distillation in the first place?

The goal is to transfer the reasoning abilities of larger models, such as DeepSeek-R1 671B, into smaller, more efficient models. This empowers the smaller models to handle complex reasoning tasks while being faster and more resource-efficient.

Moreover, DeepSeek-R1 has an enormous number of parameters (671 billion), making it challenging to run on most consumer-grade machines.

Even the most powerful MacBook Pro, with a maximum of 128GB of unified memory, is insufficient to run the 671-billion-parameter model.

As such, distilled models open up the possibility of deployment on devices with limited computational resources.

Unsloth achieved an impressive feat by quantizing the original 671B-parameter DeepSeek-R1 model down to just 131GB, a remarkable 80% reduction in size. Nonetheless, a 131GB VRAM requirement remains a significant hurdle.


(4) Selection of Distilled Model

With six distilled model sizes to choose from, selecting the right one largely depends on the capabilities of the local system hardware.

For those with high-performance GPUs or CPUs and a need for maximum performance, the larger DeepSeek-R1 models (32B and up) are ideal; even the quantized 671B version is viable.

However, if one has limited resources or prefers quicker generation times (as I do), the smaller distilled variants, such as 8B or 14B, are a better fit.

For this project, I will be using the DeepSeek-R1 distilled Qwen-14B model, which aligns with the hardware constraints I faced.


    (5) Benchmarks for Evaluating Reasoning

LLMs are typically evaluated using standardized benchmarks that assess their performance across various tasks, including language understanding, code generation, instruction following, and question answering. Common examples include MMLU, HumanEval, and MGSM.

To measure an LLM’s capacity for reasoning, we need harder, reasoning-heavy benchmarks that go beyond surface-level tasks. Here are some popular examples focused on evaluating advanced reasoning capabilities:

(i) AIME 2024 — Competition Math

• The American Invitational Mathematics Examination (AIME) 2024 serves as a strong benchmark for evaluating an LLM’s mathematical reasoning capabilities.
• It is a challenging math contest with complex, multi-step problems that test an LLM’s ability to interpret intricate questions, apply advanced reasoning, and perform precise symbolic manipulation.

(ii) Codeforces — Competition Code

• The Codeforces benchmark evaluates an LLM’s reasoning ability using real competitive programming problems from Codeforces, a platform known for algorithmic challenges.
• These problems test an LLM’s capacity to understand complex instructions, perform logical and mathematical reasoning, plan multi-step solutions, and generate correct, efficient code.

(iii) GPQA Diamond — PhD-Level Science Questions

• GPQA-Diamond is a curated subset of the most difficult questions from the broader GPQA (Graduate-Level Google-Proof Q&A) benchmark, specifically designed to push the limits of LLM reasoning on advanced PhD-level topics.
• While GPQA includes a range of conceptual and calculation-heavy graduate questions, GPQA-Diamond isolates only the most challenging and reasoning-intensive ones.
• It is considered Google-proof, meaning the questions are difficult to answer even with unrestricted web access.

In this project, we use GPQA-Diamond as the reasoning benchmark, as both OpenAI and DeepSeek used it to evaluate their reasoning models.


(6) Tools Used

For this project, we primarily use Ollama and OpenAI’s simple-evals.

    (i) Ollama

Ollama is an open-source tool that simplifies running LLMs on our own computer or a local server.

It acts as a manager and runtime, handling tasks such as downloads and environment setup. This allows users to interact with these models without requiring a constant internet connection or relying on cloud services.

It supports many open-source LLMs, including DeepSeek-R1, and is cross-platform compatible with macOS, Windows, and Linux. Additionally, it offers a straightforward setup with minimal fuss and efficient resource utilization.

Important: Ensure your local system has GPU access for Ollama, as this dramatically accelerates performance and makes subsequent benchmarking runs far more efficient than on CPU. Run nvidia-smi in the terminal to check whether a GPU is detected.
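
Once Ollama is running, it exposes a local HTTP API (port 11434 by default) that we can call from any language. Here is a minimal Python sketch of a single prompt round-trip; the deepseek-r1:14b tag assumes the model has already been pulled, a step covered later in the walkthrough.

import requests

# Ask the local Ollama server for a single, non-streamed completion
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:14b",  # any model previously pulled via `ollama pull`
        "prompt": "Briefly explain what a reasoning model is.",
        "stream": False,             # one JSON object instead of a token stream
        "options": {"temperature": 0.6},
    },
    timeout=600,
)
response.raise_for_status()
print(response.json()["response"])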


    (ii) OpenAI simple-evals

simple-evals is a lightweight library designed to evaluate language models using a zero-shot, chain-of-thought prompting approach. It includes well-known benchmarks like MMLU, MATH, GPQA, MGSM, and HumanEval, aiming to reflect realistic usage scenarios.

Some of you may know about OpenAI’s more well-known and comprehensive evaluation library called Evals, which is distinct from simple-evals.

In fact, the README of simple-evals specifically states that it is not intended to replace the Evals library.

So why are we using simple-evals?

The simple answer is that simple-evals comes with built-in evaluation scripts for the reasoning benchmarks we are targeting (such as GPQA), which are missing in Evals.

Moreover, I did not find any other tools or platforms, apart from simple-evals, that provide a straightforward, Python-native way to run key benchmarks such as GPQA, particularly when working with Ollama.


(7) Results of Evaluation

As part of the evaluation, I selected 20 random questions from the 198-question GPQA-Diamond set for the 14B distilled model to work on. The total time taken was 216 minutes, which is about 11 minutes per question.

The outcome was admittedly disappointing: it scored only 10%, far below the reported 73.3% score of the 671B DeepSeek-R1 model.

The main issue I noticed is that amid its extensive internal reasoning, the model often either failed to produce any answer (e.g., returning reasoning tokens as the final lines of output) or provided a response that did not match the expected multiple-choice format (e.g., Answer: A).

Evaluation output printout from the 20-example benchmark run | Image by author

As shown above, many outputs ended up as None because the regex logic in simple-evals could not detect the expected answer pattern in the LLM response.
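
To make the failure mode concrete, here is a simplified illustration of the kind of answer extraction involved. This is not the exact simple-evals regex, but it shows why a response with no final "Answer: X" line is scored as None.

import re

# Simplified stand-in for simple-evals' multiple-choice answer pattern
ANSWER_PATTERN = r"(?i)Answer\s*:\s*([ABCD])"

committed = "...long chain of thought...\nAnswer: C"
uncommitted = "...the model keeps reasoning and never picks a letter..."

for output in (committed, uncommitted):
    match = re.search(ANSWER_PATTERN, output)
    print(match.group(1) if match else None)  # prints "C", then None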

While the human-like reasoning logic was fascinating to observe, I had expected stronger performance in terms of question-answering accuracy.

I have also seen online users mention that even the larger 32B model does not perform as well as o1. This has raised doubts about the utility of distilled reasoning models, especially when they struggle to provide correct answers despite generating lengthy reasoning.

That said, GPQA-Diamond is a highly challenging benchmark, so these models may still be useful for simpler reasoning tasks. Their lower computational demands also make them more accessible.

Additionally, the DeepSeek team recommended conducting multiple tests and averaging the results as part of the benchmarking process, something I omitted due to time constraints.


    (8) Step-by-Step Walkthrough

At this point, we have covered the core concepts and key takeaways.

If you’re ready for a hands-on, technical walkthrough, this section provides a deep dive into the inner workings and a step-by-step implementation.

Check out (or clone) the accompanying GitHub repo to follow along. The requirements for the virtual environment setup can be found here.

(i) Initial Setup — Ollama

We begin by downloading Ollama. Visit the Ollama download page, select your operating system, and follow the corresponding installation instructions.

Once installation is complete, launch Ollama by double-clicking the Ollama app (on Windows and macOS) or by running ollama serve in the terminal.


(ii) Initial Setup — OpenAI simple-evals

The setup of simple-evals is somewhat unusual.

While simple-evals presents itself as a library, the absence of __init__.py files in the repository means it is not structured as a proper Python package, which leads to import errors after cloning the repo locally.

Since it is also not published on PyPI and lacks standard packaging files like setup.py or pyproject.toml, it cannot be installed via pip.

Fortunately, we can use Git submodules as a straightforward workaround.

A Git submodule lets us include the contents of another Git repository within our own project. It pulls the files from an external repo (e.g., simple-evals) but keeps its history separate.

You can choose one of two ways (A or B) to pull in the simple-evals contents:

(A) If You Cloned My Project Repo

My project repo already includes simple-evals as a submodule, so you can simply run:

git submodule update --init --recursive

(B) If You’re Adding It to a Newly Created Project
To manually add simple-evals as a submodule, run this:

    git submodule add https://github.com/openai/simple-evals.git simple_evals

Note: The simple_evals at the end of the command (with an underscore) is important. It sets the folder name, and using a hyphen instead (i.e., simple-evals) can lead to import issues later.


Final Step (For Both Approaches)

After pulling the repo contents, you must create an empty __init__.py inside the newly created simple_evals folder so that it is importable as a module. You can create it manually or use the following command:

touch simple_evals/__init__.py
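
As a quick sanity check, the GPQA evaluation class should now be importable from your project root (the module and class names below match the simple-evals repo at the time of writing):

python -c "from simple_evals.gpqa_eval import GPQAEval"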

(iii) Pull the DeepSeek-R1 model via Ollama

The next step is to locally download the distilled model of your choice (e.g., 14B) using this command:

    ollama pull deepseek-r1:14b

The list of DeepSeek-R1 models available on Ollama can be found here.


(iv) Define configuration

We define the parameters in a configuration YAML file, as shown below:
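
The embedded config file does not survive in this copy of the article, so here is a minimal sketch of what it might contain. The key names (MODEL_NAME, MODEL_TEMPERATURE, EVAL_N_EXAMPLES) mirror the parameters discussed below; the exact keys in the repo may differ.

# config.yaml (illustrative sketch; actual keys may differ)
MODEL_NAME: deepseek-r1:14b    # Ollama tag of the distilled model to evaluate
MODEL_TEMPERATURE: 0.6         # per DeepSeek's recommended 0.5-0.7 range
EVAL_N_EXAMPLES: 20            # questions to sample from the 198-question set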

The model temperature is set to 0.6 (as opposed to the typical default value of 0). This follows DeepSeek’s usage recommendations, which suggest a temperature range of 0.5 to 0.7 (0.6 recommended) to prevent endless repetition or incoherent outputs.

Do check out the interestingly unique DeepSeek-R1 usage recommendations, particularly for benchmarking, to ensure optimal performance when using DeepSeek-R1 models.

EVAL_N_EXAMPLES is the parameter that sets the number of questions from the full 198-question set to use for evaluation.


(v) Set up Sampler code

To support Ollama-based language models within the simple-evals framework, we create a custom wrapper class named OllamaSampler, saved in utils/samplers/ollama_sampler.py.

In this context, a sampler is a Python class that generates outputs from a language model based on a given prompt.

Since the existing samplers in simple-evals only cover providers like OpenAI and Claude, we need a sampler class that provides a compatible interface for Ollama.

The OllamaSampler extracts the GPQA question prompt, sends it to the model with the specified temperature, and returns the plain-text response.

The _pack_message method is included to ensure the output format matches what the evaluation scripts in simple-evals expect.
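
The original code listing is not reproduced here, so below is a minimal sketch of what such a wrapper might look like. It assumes the simple-evals convention that a sampler is called with a list of {"role": ..., "content": ...} message dicts and returns the response text; apart from _pack_message, which is named above, the details are assumptions.

import requests

class OllamaSampler:
    """Minimal simple-evals-compatible sampler backed by a local Ollama server (sketch)."""

    def __init__(self, model: str = "deepseek-r1:14b", temperature: float = 0.6):
        self.model = model
        self.temperature = temperature
        self.url = "http://localhost:11434/api/chat"

    def _pack_message(self, content: str, role: str = "user") -> dict:
        # simple-evals expects messages as {"role": ..., "content": ...} dicts
        return {"role": role, "content": content}

    def __call__(self, message_list: list[dict]) -> str:
        # simple-evals invokes the sampler with a list of messages and
        # expects the model's reply back as plain text
        response = requests.post(
            self.url,
            json={
                "model": self.model,
                "messages": message_list,
                "stream": False,
                "options": {"temperature": self.temperature},
            },
            timeout=1200,
        )
        response.raise_for_status()
        return response.json()["message"]["content"]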


(vi) Create evaluation run script

The following code sets up the evaluation execution in main.py, including the use of the GPQAEval class from simple-evals to run GPQA benchmarking.

The run_eval() function is a configurable evaluation runner that tests LLMs via Ollama on benchmarks like GPQA.

It loads settings from the config file, sets up the appropriate evaluation class from simple-evals, and runs the model through a standardized evaluation process. It is saved in main.py, which can be executed with python main.py.
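
As with the sampler, the original listing is missing here, so the following is a sketch of such a runner under the assumptions above. The GPQAEval constructor arguments (num_examples, n_repeats) match the simple-evals repo at the time of writing; the config keys and OllamaSampler come from the earlier sketches.

import yaml  # requires pyyaml

from simple_evals.gpqa_eval import GPQAEval
from utils.samplers.ollama_sampler import OllamaSampler

def run_eval(config_path: str = "config.yaml"):
    # Load run parameters from the YAML config defined earlier
    with open(config_path) as f:
        config = yaml.safe_load(f)

    sampler = OllamaSampler(
        model=config["MODEL_NAME"],
        temperature=config["MODEL_TEMPERATURE"],
    )

    # GPQAEval defaults to the Diamond variant; cap the number of questions
    gpqa = GPQAEval(num_examples=config["EVAL_N_EXAMPLES"], n_repeats=1)
    result = gpqa(sampler)  # runs prompting, answer extraction, and scoring

    print("GPQA-Diamond score:", result.score)

if __name__ == "__main__":
    run_eval()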

Following the steps above, we have successfully set up and executed GPQA-Diamond benchmarking on a DeepSeek-R1 distilled model.


    Wrapping It Up

In this article, we showcased how to combine tools like Ollama and OpenAI’s simple-evals to explore and benchmark DeepSeek-R1’s distilled models.

The distilled models may not yet rival the original 671B-parameter model on challenging reasoning benchmarks like GPQA-Diamond. Nonetheless, they demonstrate how distillation can expand access to LLM reasoning capabilities.

Despite subpar scores on complex PhD-level tasks, these smaller variants may remain viable for less demanding scenarios, paving the way for efficient local deployment on a wider range of hardware.

    Earlier than you go

I welcome you to follow me on GitHub and LinkedIn to stay updated with more engaging and practical content. Meanwhile, have fun benchmarking LLMs with Ollama and simple-evals!


