As our Agentic Inference Cloud continues to grow, we’re excited to announce the availability of new, high-performance GPU Droplets powered by AMD Instinct™ MI350X GPUs. By integrating these cutting-edge GPUs with our platform, we’re continuing to deliver the power and scale that leading AI-native companies and builders need to run their most complex inference workloads.
The AMD Instinct™ MI350X Series sets a new standard for generative AI and high-performance computing (HPC). Built on the AMD CDNA™ 4 architecture, these GPUs are designed for the most demanding tasks: training massive models, high-speed inference, and complex scientific simulations.
These GPUs are optimized for the compute-bound prefill phase while sustaining low-latency, high-throughput token generation. Their larger memory capacity supports bigger models and longer context windows, enabling a higher inference request density per GPU. Paired with our optimized inference platform, AMD Instinct™ MI350X GPUs deliver lower latency and higher throughput.
We’ve already seen what’s possible when customers pair DigitalOcean’s optimized platform with AMD’s hardware. Earlier this year, we helped Character.AI achieve a 2X increase in production request throughput and a 50% reduction in inference costs.
Now, customers like ACE Studio are using DigitalOcean software paired with AMD hardware to push the boundaries of music creation. “At ACE Studio, our mission is to build an AI-driven music workstation for the future of music creation,” said Sean Zhao, Co-Founder & CTO. “As we expand our footprint on DigitalOcean, the next-generation AMD Instinct™ MI350X architecture, supported by close collaboration on inference optimization with AMD and DigitalOcean, provides us a strong foundation to push performance and cost efficiency even further for our customers.”
In addition to offering the latest AMD GPUs, we’re committed to transparency and simplicity, ensuring this powerful technology is easy to adopt for developers and emerging businesses:
Cost-effective, predictable pricing: We offer transparent, usage-based pricing with flexible contracts and no hidden fees.
Simple setup: GPU Droplets can be provisioned and configured with security, storage, and networking requirements in just a few clicks, drastically simplifying deployment compared to complex cloud environments.
Access to enterprise features: GPU Droplets come with enterprise-grade SLAs and observability features, and are HIPAA-eligible and SOC 2 compliant.
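For teams that prefer the command line over the control panel, the same few-clicks setup can be scripted with `doctl`, the DigitalOcean CLI. A minimal sketch follows; the size and image slugs below are placeholders for illustration, not confirmed MI350X slugs, so check `doctl compute size list` and `doctl compute image list` for the real values.

```shell
#!/bin/sh
# Hypothetical sketch of provisioning a GPU Droplet with doctl.
# The SIZE and IMAGE slugs are assumptions for illustration only.
REGION="atl1"            # Atlanta datacenter, per this announcement
SIZE="gpu-mi350x-1x"     # placeholder size slug; verify with `doctl compute size list`
IMAGE="gpu-amd-base"     # placeholder GPU-ready image slug

# Build the create command; printed rather than executed here,
# since actually creating a Droplet requires an authenticated API token.
CMD="doctl compute droplet create my-inference-node --region $REGION --size $SIZE --image $IMAGE"
echo "$CMD"
```

From there, storage, networking, and firewall settings can be attached with the corresponding `doctl` subcommands, mirroring the options available in the control panel.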
The new GPU Droplets are available now in our Atlanta (ATL1) datacenter. Next quarter, we’ll deploy AMD Instinct™ MI355X GPUs, marking the addition of liquid-cooled racks to our offering to support even larger datasets and models.
Ready to scale your AI production? Talk to our team to learn more about AMD Instinct™ MI350X on DigitalOcean and start building on the Agentic Inference Cloud today.