
    AI Inference: Meta Teams with Cerebras on Llama API

    By FinanceStarGate | May 2, 2025


    Sunnyvale, CA — Meta has teamed with Cerebras to power AI inference in Meta’s new Llama API, combining Meta’s open-source Llama models with inference technology from Cerebras.

    Developers building on the Llama 4 Cerebras model within the API can expect speeds up to 18 times faster than traditional GPU-based solutions, according to Cerebras. “This acceleration unlocks an entirely new generation of applications that are impossible to build on other technology. Conversational low-latency voice, interactive code generation, instant multi-step reasoning, and real-time agents — all of which require chaining multiple LLM calls — can now be completed in seconds rather than minutes,” Cerebras said.
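    To make the “chaining multiple LLM calls” point concrete, here is a minimal sketch of a sequential multi-step pipeline, assuming an OpenAI-compatible chat-completions endpoint. The base URL, API key, and model name below are placeholders, not details from this article; the point is that each call waits on the previous one, so end-to-end latency scales directly with per-call inference speed.

    ```python
    # Hypothetical sketch: chaining several LLM calls, where each step's output
    # feeds into the next prompt. Total latency is roughly the sum of per-call
    # latencies, which is why faster inference shortens the whole chain.
    from openai import OpenAI

    # Placeholder endpoint, key, and model name -- assumptions, not from the article.
    client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

    steps = [
        "Draft a three-step plan to answer the user's question.",
        "Carry out step 1 of the plan above and report the result.",
        "Summarize the result in two sentences for the user.",
    ]

    context = ""
    for prompt in steps:
        response = client.chat.completions.create(
            model="llama-4-scout",  # placeholder model identifier
            messages=[{"role": "user", "content": f"{context}\n\n{prompt}".strip()}],
        )
        context = response.choices[0].message.content  # pass output to the next call

    print(context)
    ```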

    By partnering with Meta to serve Llama models through Meta’s new API service, Cerebras gains exposure to an expanded developer audience and deepens its business and partnership with Meta and its teams.

    Since launching its inference solutions in 2024, Cerebras has delivered the world’s fastest Llama inference, serving billions of tokens through its own AI infrastructure. The broad developer community now has direct access to a powerful, OpenAI-class alternative for building intelligent, real-time systems — backed by Cerebras speed and scale.

    “Cerebras is proud to make Llama API the fastest inference API in the world,” said Andrew Feldman, CEO and co-founder of Cerebras. “Developers building agentic and real-time apps need speed. With Cerebras on Llama API, they can build AI systems that are fundamentally out of reach for leading GPU-based inference clouds.”

    Cerebras is the fastest AI inference solution as measured by the third-party benchmarking site Artificial Analysis, reaching over 2,600 tokens/sec for Llama 4 Scout, compared with ChatGPT at ~130 tokens/sec and DeepSeek at ~25 tokens/sec.

    Developers will be able to access the fastest Llama 4 inference by selecting Cerebras from the model options within the Llama API. This streamlined experience will make it easy to prototype, build, and scale real-time AI applications. To sign up for early access to the Llama API and to experience Cerebras speed today, visit www.cerebras.ai/inference.
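    As a rough illustration of what “selecting Cerebras from the model options” might look like in practice, here is a hedged sketch assuming the Llama API exposes an OpenAI-compatible chat-completions interface and that the backend is chosen via the model name. The endpoint and model identifier are assumptions for illustration only, not documented values from this article.

    ```python
    # Minimal sketch, assuming an OpenAI-compatible Llama API endpoint where the
    # Cerebras-served Llama 4 model is selected by its model identifier.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.llama.example/v1",  # placeholder Llama API endpoint
        api_key="YOUR_LLAMA_API_KEY",             # placeholder credential
    )

    response = client.chat.completions.create(
        model="llama-4-scout-cerebras",  # hypothetical name selecting the Cerebras backend
        messages=[{"role": "user", "content": "Give me a one-line status update."}],
    )
    print(response.choices[0].message.content)
    ```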




