Run AI/ML, deep learning, high performance compute, and analytics workloads on simple, powerful virtual machines. Scale on demand, manage costs, and deliver actionable insights—without business-slowing complexity.
Zero to GPU in just two clicks. Get a GPU Droplet running in under a minute.
Save up to 75% vs. hyperscalers* for the same on-demand GPUs, with a bill you can actually understand.
The same easy-to-use platform that has met your cloud needs for over 10 years.
HIPAA-eligible and SOC 2 compliant products backed by enterprise-grade SLAs and the 24/7 Support Team you trust to keep you online.
*Up to 75% cheaper than AWS for on-demand H100s and H200s with 8 GPUs each. As of April 2025.
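Prefer to script those two clicks? The same launch works through the DigitalOcean v2 API. Here's a minimal Python sketch; the size and image slugs are illustrative assumptions, so check the control panel or API docs for the slugs available to your account.

```python
# Minimal sketch: launch a GPU Droplet through the DigitalOcean v2 API.
# The "size" and "image" slugs below are assumptions for illustration --
# check the control panel or API docs for the slugs on your account.
import os

import requests

token = os.environ["DIGITALOCEAN_TOKEN"]  # a personal access token

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "name": "gpu-worker-1",
        "region": "nyc2",            # GPU regions: nyc2, tor1, atl1, ams3
        "size": "gpu-h100x1-80gb",   # assumed slug for a single-H100 Droplet
        "image": "gpu-h100x1-base",  # assumed AI/ML-ready base image slug
        "ssh_keys": [],              # add your SSH key IDs or fingerprints
    },
    timeout=30,
)
resp.raise_for_status()
print("created Droplet", resp.json()["droplet"]["id"])
```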
**AMD Instinct™ MI325X**
Use cases: Large model training, fine-tuning, inference, and high-performance computing
Key benefit: High memory capacity to hold models with hundreds of billions of parameters, reducing the need for model splitting across multiple GPUs
Larger memory capacity and higher memory bandwidth vs. the MI300X
**AMD Instinct™ MI300X**
Use cases: Large model training, fine-tuning, inference, and high-performance computing
Key benefit: High memory bandwidth and capacity to efficiently handle larger models and datasets
Up to 1.3X the performance of the AMD MI250X for AI use cases
**NVIDIA HGX H200**
The NVIDIA HGX H200 is a successor to the NVIDIA H100, based on the same Hopper architecture but with significant improvements, especially in its memory subsystem.
Use cases: Training LLMs, inference, and high-performance computing
Key benefit: Fast inference speeds on LLMs with high memory capacity and bandwidth
Up to 2X faster inference and improved performance for memory-intensive HPC tasks vs. the H100
**NVIDIA HGX H100**
Use cases: Training LLMs, inference, and high-performance computing
Key benefit: Fast training speeds for LLMs
Up to 4X faster training than the NVIDIA A100 for GPT-3 (175B) models
**NVIDIA RTX 4000 Ada Generation**
Use cases: Inference, graphical processing, rendering, 3D modeling, video, content creation, and media & gaming
Key benefit: Versatile, cost-efficient capabilities for content creation, 3D modeling, rendering, video, and inference workflows
Up to 1.7X higher performance than the NVIDIA RTX A4000
**NVIDIA RTX 6000 Ada Generation**
Use cases: Inference, graphical processing, rendering, virtual workstations, compute, and media & gaming
Key benefit: Versatile, cost-efficient capabilities for content creation, 3D modeling, rendering, video, and inference workflows, with 2X more memory than the RTX 4000 Ada
Up to 10X higher performance than the NVIDIA RTX A6000
**NVIDIA L40S**
Use cases: Generative AI, inference & training, 3D graphics, rendering, virtual workstations, and streaming & video content
Key benefit: Versatile, cost-efficient capabilities for inference, graphics, digital twins, and real-time 4K streaming
Up to 1.7X the performance of the NVIDIA A100 for AI use cases
Benchmarks available at nvidia.com and amd.com.
Looking for more help on which GPU Droplet to choose? Review How to Choose the Right GPU Droplet for Your AI/ML Workload and How to Choose a Cloud GPU for Your Projects.
GPUs are currently available in our NYC2, TOR1, ATL1, and AMS3 data centers, with more data centers coming soon. All GPU models offer 10 Gbps of public and 25 Gbps of private network bandwidth.
| GPU Model | GPU Memory | Droplet Memory | Droplet vCPUs | Local Storage: Boot Disk | Local Storage: Scratch Disk | Architecture |
|---|---|---|---|---|---|---|
| AMD Instinct™ MI325X* | 256 GB | 164 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | CDNA 3™ |
| AMD Instinct™ MI325X×8* | 2,048 GB | 1,310 GiB | 160 | 2,046 GiB NVMe | 40 TiB NVMe | CDNA 3™ |
| AMD Instinct™ MI300X | 192 GB | 240 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | CDNA 3™ |
| AMD Instinct™ MI300X×8 | 1,536 GB | 1,920 GiB | 160 | 2,046 GiB NVMe | 40 TiB NVMe | CDNA 3™ |
| NVIDIA HGX H200 | 141 GB | 240 GiB | 24 | 720 GiB NVMe | 5 TiB NVMe | Hopper |
| NVIDIA HGX H200×8 | 1,128 GB | 1,920 GiB | 192 | 2,046 GiB NVMe | 40 TiB NVMe | Hopper |
| NVIDIA HGX H100 | 80 GB | 240 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | Hopper |
| NVIDIA HGX H100×8 | 640 GB | 1,920 GiB | 160 | 2,046 GiB NVMe | 40 TiB NVMe | Hopper |
| NVIDIA RTX 4000 Ada Generation | 20 GB | 32 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
| NVIDIA RTX 6000 Ada Generation | 48 GB | 64 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
| NVIDIA L40S | 48 GB | 64 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
* Contact sales to reserve capacity.
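The GPU memory figures above are the main sizing constraint for large models. As a rough, weights-only sketch (the 70B parameter count and 2-bytes-per-parameter precision below are illustrative assumptions), you can estimate whether a model fits on a single GPU:

```python
# Back-of-the-envelope sizing: do a model's weights fit on one GPU?
# Weights only -- real workloads also need room for the KV cache,
# activations, and framework overhead, so treat this as a lower bound.

GPU_MEMORY_GB = {  # per-GPU memory, from the table above
    "AMD Instinct MI325X": 256,
    "AMD Instinct MI300X": 192,
    "NVIDIA HGX H200": 141,
    "NVIDIA HGX H100": 80,
}

def weights_gb(params_billions: float, bytes_per_param: float = 2) -> float:
    """Memory needed for the weights alone (2 bytes/param = fp16/bf16)."""
    return params_billions * bytes_per_param

params = 70  # e.g., a 70B-parameter model in bf16
need = weights_gb(params)
for gpu, mem in GPU_MEMORY_GB.items():
    verdict = "fits on one GPU" if need < mem else "needs multiple GPUs"
    print(f"{params}B model ({need:.0f} GB) on {gpu} ({mem} GB): {verdict}")
```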
Don't need a full GPU Droplet? The Gradient AI Platform offers a serverless inference API and an agent development toolkit, backed by some of the world's most powerful LLMs. Add inferencing to your app within days, not weeks. And only pay for what you use.
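To illustrate how little code serverless inference takes, here's a hedged Python sketch of an OpenAI-style chat completion request; the endpoint URL and model slug are assumptions to verify against the Gradient AI Platform docs:

```python
# Hedged sketch of a serverless inference call. The endpoint URL and the
# model slug are assumptions -- confirm both in the Gradient AI Platform
# docs, and use a model access key generated from your account.
import os

import requests

resp = requests.post(
    "https://inference.do-ai.run/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['GRADIENT_MODEL_KEY']}"},
    json={
        "model": "llama3.3-70b-instruct",  # assumed model slug
        "messages": [
            {"role": "user", "content": "Explain GPU Droplets in one sentence."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```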
I just need some GPUs… I need a cost-effective, reliable Kubernetes solution that is easy for everyone on the team to access. And that's DO for us.
Richard Li
Amorphous Data, Founder and CEO
**What are GPU Droplets?**
GPU Droplets are virtual machines (VMs) powered by GPUs and optimized for AI/ML workloads. You can run model training, inference, large-scale neural networks, high-performance computing (HPC), and more. They integrate seamlessly with the rest of the DigitalOcean ecosystem.
**Where are GPU Droplets available?**
They're available in the NYC2 (New York), TOR1 (Toronto), ATL1 (Atlanta), and AMS3 (Amsterdam) data centers, offering low-latency access for developers across North America and Europe.
**How does billing work?**
Billing is per second, with a five-minute minimum, so you pay only for the time a Droplet exists. Powered-off Droplets still accrue charges because their resources remain reserved, so destroy Droplets you aren't using.
**How much do GPU Droplets cost?**
On-demand pricing starts at roughly $0.76 to $3.44 per GPU/hour, depending on the hardware (e.g., NVIDIA H100, H200, AMD MI300X). Reserved pricing on longer-term contracts can drop to $1.49 to $1.99 per GPU/hour, making sustained workloads more cost-effective.
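To make the billing rule concrete, here's a small worked example using the on-demand rates quoted above:

```python
# Worked example of the billing rule above: per-second metering with a
# five-minute minimum, priced per GPU-hour.

def session_cost(seconds: int, hourly_rate: float, gpus: int = 1) -> float:
    """Cost in USD for one Droplet session at a given per-GPU hourly rate."""
    billable = max(seconds, 5 * 60)  # short sessions round up to 5 minutes
    return billable / 3600 * hourly_rate * gpus

print(session_cost(90, 3.44))           # 90 s billed as 300 s -> ~$0.29
print(session_cost(2 * 3600, 3.44, 8))  # two hours on an 8-GPU config -> $55.04
```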
**What configurations are available?**
GPU Droplets range from single-GPU setups to powerful 8-GPU configurations. Every configuration includes a boot disk (for the OS and frameworks), and the larger accelerators also include a scratch disk for staging training data.
**Do GPU Droplets come with AI/ML tools pre-installed?**
Yes. You get pre-installed Python and deep learning tooling, such as PyTorch, CUDA, and other common frameworks, so you can start running AI workloads immediately.
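A quick way to verify the setup on a freshly created Droplet (a minimal sketch, assuming an AI/ML-ready image with PyTorch pre-installed):

```python
# Quick sanity check on a new GPU Droplet: confirm the pre-installed
# PyTorch build can see the GPU(s). ROCm builds of PyTorch expose the
# same torch.cuda interface, so this also works on the AMD Droplets.
import torch

print("PyTorch", torch.__version__)
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No GPU visible -- check the image/driver installation.")
```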
**Is there an uptime SLA?**
Yes, DigitalOcean backs GPU Droplets with a 99.5% uptime SLA.
**Do GPU Droplets integrate with Kubernetes and other tooling?**
Absolutely. GPU Droplets work seamlessly with DigitalOcean's Kubernetes service, CLI, API, and Terraform provider, giving you enterprise-grade flexibility for deploying containerized ML workloads.
**Why do developers choose DigitalOcean for GPUs?**
People like our simplicity and competitive pricing. We're a go-to for launching powerful GPU infrastructure without the complexity.
What is a Cloud GPU?
Scaling Gradient Platform with GPU Droplets and DigitalOcean Networking
Droplet Features
Getting Started with 1-Click Models on GPU Droplets - A Guide to Llama 3.1 with Hugging Face
Stable Diffusion Made Easy: Get Started on DigitalOcean GPU Droplets
Choosing the Right Offering for your AI ML Workload
Choosing the Right GPU Droplet for Your AI/ML Workload
What is GPU Virtualization?
Sign up and get $200 in credit for your first 60 days with DigitalOcean.*
*This promotional offer applies to new accounts only.