Sunnyvale, CA – May 8, 2025 – Rafay Systems, a cloud-native and AI infrastructure orchestration and management company, announced general availability of the company's Serverless Inference offering, a token-metered API for running open-source and privately trained or tuned LLMs.
The company said many NVIDIA Cloud Providers (NCPs) and GPU Clouds are already leveraging the Rafay Platform to deliver a multi-tenant, Platform-as-a-Service experience to their customers, complete with self-service consumption of compute and AI applications. These NCPs and GPU Clouds can now deliver Serverless Inference as a turnkey service at no additional cost, enabling their customers to build and scale AI applications fast, without having to deal with the cost and complexity of building automation, governance, and controls for GPU-based infrastructure.
The global AI inference market is expected to grow to $106 billion in 2025, and to $254 billion by 2030. Rafay's Serverless Inference empowers GPU Cloud Providers (GPU Clouds) and NCPs to tap into the booming GenAI market by eliminating key adoption barriers: automated provisioning and segmentation of complex infrastructure, developer self-service, rapid launches of new GenAI models as a service, billing data generation for on-demand usage, and more.
"Having spent the last year experimenting with GenAI, many enterprises are now focused on building agentic AI applications that augment and enhance their business offerings. The ability to rapidly consume GenAI models through inference endpoints is critical to faster development of GenAI capabilities. This is where Rafay's NCP and GPU Cloud partners have a material advantage," said Haseeb Budhani, CEO and co-founder of Rafay Systems.
"With our new Serverless Inference offering, available for free to NCPs and GPU Clouds, our customers and partners can now deliver an Amazon Bedrock-like service to their customers, enabling access to the latest GenAI models in a scalable, secure, and cost-effective manner. Developers and enterprises can now integrate GenAI workflows into their applications in minutes, not months, without the pain of infrastructure management. This offering advances our company's vision of helping NCPs and GPU Clouds evolve from operating GPU-as-a-Service businesses to AI-as-a-Service businesses."
By offering Serverless Inference as an on-demand capability to downstream customers, Rafay helps NCPs and GPU Clouds address a key gap in the market. Rafay's Serverless Inference offering provides the following key capabilities to NCPs and GPU Clouds:
- Seamless developer integration: OpenAI-compatible APIs require zero code migration for existing applications, with secure RESTful and streaming-ready endpoints that dramatically accelerate time-to-value for end customers.
- Intelligent infrastructure management: Auto-scaling GPU nodes with right-sized model allocation dynamically optimize resources across multi-tenant and dedicated isolation options, eliminating over-provisioning while maintaining strict performance SLAs.
- Built-in metering and billing: Token-based and time-based usage tracking for both input and output provides granular consumption analytics, integrates with existing billing platforms via comprehensive metering APIs, and enables transparent, consumption-based pricing models.
- Enterprise-grade security and governance: Comprehensive security through HTTPS-only API endpoints, rotating bearer token authentication, detailed access logging, and configurable token quotas per organization, business unit, or application satisfies enterprise compliance requirements.
- Observability, storage, and performance monitoring: End-to-end visibility, with logs and metrics archived in the provider's own storage namespace, support for backends such as MinIO (a high-performance, AWS S3-compatible object storage system) and Weka (a high-performance, AI-native data platform), and centralized credential management, ensures full infrastructure and model performance transparency.
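To illustrate what "OpenAI-compatible, zero code migration" means in practice, the sketch below builds a standard `/chat/completions` request that any OpenAI-client application could send after swapping only its base URL and API key. The endpoint URL, model name, and token here are placeholders for illustration, not published Rafay values; the actual values would be issued by the NCP or GPU Cloud running the platform.

```python
import json

# Hypothetical values: the real endpoint URL, model name, and bearer
# token come from the NCP/GPU Cloud hosting the inference service.
BASE_URL = "https://inference.example-gpu-cloud.com/v1"
API_KEY = "rfy_example_token"

def build_chat_request(prompt: str, model: str = "llama-3-8b-instruct") -> dict:
    """Assemble an OpenAI-compatible chat-completions request.

    Returns the URL, headers, and JSON body; any HTTP client (or the
    official openai SDK pointed at BASE_URL) can send it unchanged.
    """
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            # Rotating bearer-token auth over HTTPS-only endpoints,
            # per the security bullet above.
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,  # streaming-ready endpoint
        }),
    }

req = build_chat_request("Summarize the benefits of serverless inference.")
```

Because the request shape matches the OpenAI API, existing applications keep their client code and simply re-point the base URL, which is the "zero code migration" claim in the first bullet.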
Rafay's Serverless Inference offering is available today to all customers and partners using the Rafay Platform to deliver multi-tenant, GPU- and CPU-based infrastructure. The company is also set to roll out fine-tuning capabilities soon. These new additions are designed to help NCPs and GPU Clouds rapidly deliver high-margin, production-ready AI services while removing complexity.