Palo Alto, CA – Generative AI company SambaNova announced last week that DeepSeek-R1 671B is running today on SambaNova Cloud at 198 tokens per second (t/s), “achieving speeds and efficiency that no other platform can match,” the company said.
DeepSeek-R1 has cut AI training costs by 10X, but its widespread adoption has been hindered by high inference costs and inefficiencies, until now, according to the company. “SambaNova has removed this barrier, unlocking real-time, cost-effective inference at scale for developers and enterprises,” the company said.
“Powered by the SN40L RDU chip, SambaNova is the fastest platform running DeepSeek at 198 tokens per second per user,” stated Rodrigo Liang, CEO and co-founder of SambaNova. “This will increase to 5X faster than the latest GPU speed on a single rack, and by year end we will offer 100X the capacity for DeepSeek-R1.”
“Being able to run the full DeepSeek-R1 671B model, not a distilled version, at SambaNova’s blazingly fast speed is a game changer for developers. Reasoning models like R1 need to generate a lot of reasoning tokens to come up with a superior output, which makes them take longer than traditional LLMs. This makes speeding them up especially important,” stated Dr. Andrew Ng, Founder of DeepLearning.AI, Managing General Partner at AI Fund, and an Adjunct Professor in Stanford University’s Computer Science Department.
“Artificial Analysis has independently benchmarked SambaNova’s cloud deployment of the full 671 billion parameter DeepSeek-R1 Mixture of Experts model at over 195 output tokens/s, the fastest output speed we have ever measured for DeepSeek-R1. High output speeds are particularly important for reasoning models, as these models use reasoning output tokens to improve the quality of their responses. SambaNova’s high output speeds will support the use of reasoning models in latency-sensitive use cases,” said George Cameron, Co-Founder, Artificial Analysis.
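As a back-of-the-envelope illustration of why output speed matters for reasoning models, the sketch below estimates how long a single reasoning-heavy response takes at different output rates. The token budget is an assumed example rather than a benchmark figure; only the roughly 198 t/s rate comes from the announcement.

```python
# Rough check on why output speed matters for reasoning models: generation time
# is approximately (reasoning tokens + answer tokens) / output rate.
# The token counts below are illustrative assumptions, not benchmark figures.

def response_time_s(reasoning_tokens: int, answer_tokens: int, tokens_per_s: float) -> float:
    """Approximate wall-clock generation time, ignoring prompt processing and network latency."""
    return (reasoning_tokens + answer_tokens) / tokens_per_s

budget = dict(reasoning_tokens=4000, answer_tokens=500)  # assumed per-query token budget
for rate in (30, 60, 198):  # 198 t/s is the reported figure; the others are for comparison
    print(f"{rate:>4} t/s -> {response_time_s(**budget, tokens_per_s=rate):6.1f} s")
```

At an assumed 4,500 generated tokens per query, the difference between 30 t/s and roughly 200 t/s is the difference between a two-and-a-half-minute wait and about 23 seconds.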
DeepSeek-R1 has revolutionized AI by collapsing training costs tenfold; however, widespread adoption has stalled because DeepSeek-R1’s reasoning capabilities require significantly more compute for inference, making AI production more expensive. In practice, the inefficiency of GPU-based inference has kept DeepSeek-R1 out of reach for many developers.
SambaNova has solved this problem. With a proprietary dataflow architecture and three-tier memory design, SambaNova’s SN40L Reconfigurable Dataflow Unit (RDU) chips collapse the hardware required to run DeepSeek-R1 671B efficiently from 40 racks (320 of the latest GPUs) down to a single rack of 16 RDUs, unlocking cost-effective inference at unmatched efficiency.
“DeepSeek-R1 is one of the most advanced frontier AI models available, but its full potential has been limited by the inefficiency of GPUs,” said Rodrigo Liang, CEO of SambaNova. “That changes today. We’re bringing the next major breakthrough, collapsing inference costs and reducing hardware requirements from 40 racks to just one, to offer DeepSeek-R1 at the fastest speeds, efficiently.”
“More than 10 million users and engineering teams at Fortune 500 companies rely on Blackbox AI to transform how they write code and build products. Our partnership with SambaNova plays a critical role in accelerating our autonomous coding agent workflows. SambaNova’s chip capabilities are unmatched for serving the full DeepSeek-R1 671B model, which provides much better accuracy than any of the distilled versions. We couldn’t ask for a better partner to work with to serve millions of users,” stated Robert Rizk, CEO of Blackbox AI.
Sumti Jairath, Chief Architect, SambaNova, explained: “DeepSeek-R1 is the perfect fit for SambaNova’s three-tier memory architecture. At 671 billion parameters, R1 is the largest open-source large language model released to date, which means it needs a lot of memory to run. GPUs are memory constrained, but SambaNova’s unique dataflow architecture means we can run the model efficiently to achieve 20,000 tokens/s of total rack throughput in the near future, unprecedented efficiency compared to GPUs given their inherent memory and data communication bottlenecks.”
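To put the memory point in rough numbers, the weights of a 671-billion-parameter model alone occupy hundreds of gigabytes before any KV cache, activations, or runtime overhead are counted. The serving precisions in the sketch below are assumptions chosen for illustration.

```python
# Rough sketch of why a 671B-parameter model stresses accelerator memory.
# Bytes-per-parameter values are assumed serving precisions; the totals cover
# model weights only (no KV cache, activations, or runtime overhead).

PARAMS = 671e9  # DeepSeek-R1 parameter count cited in the article

for label, bytes_per_param in [("FP16", 2), ("FP8", 1)]:
    weight_gb = PARAMS * bytes_per_param / 1e9
    print(f"{label}: ~{weight_gb:,.0f} GB of weights alone")
```

Even at an assumed 8-bit precision that is roughly 671 GB of weights, which is why a deployment must spread the model across many accelerators or lean on a tiered memory hierarchy of the kind SambaNova describes.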
SambaNova is rapidly scaling its capacity to meet anticipated demand, and by the end of the year it will offer more than 100X the current global capacity for DeepSeek-R1. This makes its RDUs the most efficient enterprise solution for reasoning models, the company said.
The full DeepSeek-R1 671B model is available now on SambaNova Cloud for all users to experience, and to select users via API.
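For developers with API access, the sketch below shows one plausible way to call the model, assuming SambaNova Cloud exposes an OpenAI-compatible chat completions endpoint. The base URL and model identifier shown are assumptions to verify against SambaNova’s API documentation.

```python
# Minimal sketch of calling DeepSeek-R1 671B on SambaNova Cloud, assuming an
# OpenAI-compatible chat completions API. The base URL and model name below
# are assumptions; confirm both against the SambaNova Cloud documentation.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.sambanova.ai/v1",   # assumed endpoint
    api_key=os.environ["SAMBANOVA_API_KEY"],  # key from your SambaNova Cloud account
)

# Stream the response so reasoning tokens appear as they are generated.
stream = client.chat.completions.create(
    model="DeepSeek-R1",  # assumed identifier for the full 671B model
    messages=[{"role": "user", "content": "Walk through 17 * 24 step by step."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

Streaming is the natural choice here: because reasoning models emit long chains of intermediate tokens before the final answer, streaming lets an application surface progress immediately rather than waiting for the full completion.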