    AI Inference: Meta Teams with Cerebras on Llama API

By FinanceStarGate | May 2, 2025


Sunnyvale, CA — Meta has teamed with Cerebras on AI inference in Meta’s new Llama API, combining Meta’s open-source Llama models with inference technology from Cerebras.

Developers building on the Llama 4 Cerebras model in the API can expect speeds up to 18 times faster than traditional GPU-based solutions, according to Cerebras. “This acceleration unlocks an entirely new generation of applications that are impossible to build on other technology. Conversational low-latency voice, interactive code generation, instant multi-step reasoning, and real-time agents — all of which require chaining multiple LLM calls — can now be completed in seconds rather than minutes,” Cerebras said.
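For context, the “chaining multiple LLM calls” pattern in that quote is what makes per-call latency compound: each step waits on the previous one. A minimal sketch of such a chain is shown below, using the OpenAI-compatible chat-completions interface; the endpoint, API key, and model identifier are placeholders, not documented Llama API values.

```python
# Illustrative sketch of a multi-step (chained) LLM workflow, where each call's
# output feeds the next call. Endpoint, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-llama-api.invalid/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                           # placeholder key
)

MODEL = "llama-4-scout"  # placeholder model identifier


def ask(prompt: str) -> str:
    """One LLM call; an agentic workflow issues several of these in sequence."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# Step 1 drafts a plan; step 2 must wait for it, so latency adds up per step.
plan = ask("Outline the steps to summarize a quarterly earnings report.")
summary = ask(f"Follow this plan and produce a 3-sentence summary:\n{plan}")
print(summary)
```

With several such dependent calls per user interaction, faster inference per call is what turns a minutes-long chain into a seconds-long one.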

By partnering with Meta to serve Llama models from Meta’s new API service, Cerebras gains exposure to an expanded developer audience and deepens its business and partnership with Meta and their teams.

Since launching its inference solutions in 2024, Cerebras has delivered the world’s fastest Llama inference, serving billions of tokens through its own AI infrastructure. The broad developer community now has direct access to a powerful, OpenAI-class alternative for building intelligent, real-time systems — backed by Cerebras speed and scale.

“Cerebras is proud to make Llama API the fastest inference API in the world,” said Andrew Feldman, CEO and co-founder of Cerebras. “Developers building agentic and real-time apps need speed. With Cerebras on Llama API, they can build AI systems that are fundamentally out of reach for leading GPU-based inference clouds.”

Cerebras is the fastest AI inference solution as measured by the third-party benchmarking site Artificial Analysis, reaching over 2,600 tokens/s for Llama 4 Scout, compared with ChatGPT at ~130 tokens/s and DeepSeek at ~25 tokens/s.

Developers will be able to access the fastest Llama 4 inference by selecting Cerebras from the model options within the Llama API. This streamlined experience will make it easy to prototype, build, and scale real-time AI applications. To sign up for early access to the Llama API and to experience Cerebras speed today, visit www.cerebras.ai/inference.
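The announcement does not spell out how that selection looks in code. A minimal sketch, assuming the Llama API exposes an OpenAI-compatible models endpoint and that the Cerebras-accelerated variant is distinguishable by its identifier (the endpoint, key, and “cerebras” naming convention below are assumptions, not confirmed values):

```python
# Hypothetical sketch: list the available models on an OpenAI-compatible
# endpoint and pick out a Cerebras-accelerated Llama 4 variant, if present.
# Endpoint, key, and identifier naming are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-llama-api.invalid/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                           # placeholder key
)

# Inspect the model catalogue and filter for a Cerebras-backed option.
model_ids = [m.id for m in client.models.list()]
cerebras_ids = [m for m in model_ids if "cerebras" in m.lower()]

print("Available models:", model_ids)
print("Cerebras-accelerated options:", cerebras_ids)
```

Once the Cerebras-backed identifier is known, it would be passed as the `model` argument to the same chat-completions call shown earlier.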





