DigitalOcean Gradient™ AI GPU Droplets

Run AI/ML, deep learning, high performance compute, and analytics workloads on simple, powerful virtual machines. Scale on demand, manage costs, and deliver actionable insights—without business-slowing complexity.

Benefits of GPU Droplets with DigitalOcean

Simple

Zero to GPU in just two clicks. Get a GPU Droplet running in under a minute.

Cost Effective

Save up to 75% vs. hyperscalers* for the same on-demand GPUs, with a bill you can actually understand.

Flexible

The same easy-to-use platform that has delivered your cloud needs for over 10 years.

Reliable

HIPAA-eligible and SOC 2 compliant products backed by enterprise-grade SLAs and the 24/7 Support Team you trust to keep you online.

*Up to 75% cheaper than AWS for on-demand H100s and H200s with 8 GPUs each. As of April 2025.

Power AI training, inference, and high-performance computing (HPC) workloads with NVIDIA HGX H200

NVIDIA HGX H200 is a successor to the NVIDIA H100 and is based on the same Hopper architecture, but with significant improvements, especially in its memory subsystem.

Powerful GPU Solutions

AMD Instinct™ MI325X

  • Use cases: Large model training, fine-tuning, inference, and high-performance computing

  • Key benefit: High memory capacity to hold models with hundreds of billions of parameters, reducing the need for model splitting across multiple GPUs

  • Larger memory capacity and higher memory bandwidth vs. MI300X
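The memory-capacity benefit above comes down to simple arithmetic: raw model weights must fit in GPU memory before you consider splitting across devices. Here is a minimal sketch of that rule of thumb; `fits_on_gpu` is a hypothetical helper that assumes fp16/bf16 weights (2 bytes per parameter) and deliberately ignores activations, KV cache, and optimizer state, which add substantial overhead in practice.

```python
def fits_on_gpu(params_billions: float, gpu_mem_gb: float,
                bytes_per_param: int = 2) -> bool:
    """Rough check: do the raw model weights fit in GPU memory?

    Assumes fp16/bf16 weights (2 bytes/param); ignores activations,
    KV cache, and optimizer state, which need extra headroom.
    """
    weights_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return weights_gb <= gpu_mem_gb

# A 70B-parameter model in fp16 needs ~140 GB for weights alone:
# it fits on a single 256 GB MI325X, but not on a single 80 GB GPU.
```

This is why a single high-memory GPU can serve models that would otherwise require tensor- or pipeline-parallelism across several smaller cards.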

AMD Instinct™ MI300X

  • Use cases: Large model training, fine-tuning, inference, and high-performance computing

  • Key benefit: High memory bandwidth and capacity to efficiently handle larger models and datasets

  • Up to 1.3X the performance of AMD MI250X for AI use cases

NVIDIA HGX H200

  • Use cases: Training LLMs, inference, and high-performance computing

  • Key benefit: Fast inference speeds on LLMs and high memory capacity and bandwidth

  • Up to 2x faster inference and improved performance for memory-intensive HPC tasks vs. H100

NVIDIA HGX H100

  • Use cases: Training LLMs, inference, and high-performance computing

  • Key benefit: Fast training speed for LLMs

  • Up to 4X faster training over NVIDIA A100 for GPT-3 (175B) models

NVIDIA RTX 4000 Ada Generation

  • Use cases: Inference, graphical processing, rendering, 3D modeling, video, content creation, and media & gaming

  • Key benefit: Versatile, cost-efficient capabilities for content creation, 3D modeling, rendering, video, and inference workflows

  • Up to 1.7X higher performance than NVIDIA RTX A4000

NVIDIA RTX 6000 Ada Generation

  • Use cases: Inference, graphical processing, rendering, virtual workstations, compute, and media & gaming

  • Key benefit: Versatile, cost-efficient capabilities for content creation, 3D modeling, rendering, video, and inference workflows (with 2X more memory than 4000 Ada)

  • Up to 10X higher performance than NVIDIA RTX A6000

NVIDIA L40S

  • Use cases: Generative AI, inference & training, 3D graphics, rendering, virtual workstations, and streaming & video content

  • Key benefit: Versatile, cost-efficient capabilities for inference, graphics, digital twins, and real-time 4K streaming

  • Up to 1.7X the performance of NVIDIA A100 for AI use cases

Benchmarks available at nvidia.com and amd.com.

Looking for more help choosing a GPU Droplet? Review How to Choose the Right GPU Droplet for Your AI/ML Workload and How to Choose a Cloud GPU for your Projects.

Gradient AI GPU Droplet Specifications

GPUs are currently available in our NYC2, TOR1, ATL1, and AMS3 data centers, with more data centers coming soon. All GPU models offer 10 Gbps public and 25 Gbps private network bandwidth.

| GPU Model | GPU Memory | Droplet Memory | Droplet vCPUs | Local Storage: Boot Disk | Local Storage: Scratch Disk | Architecture |
|---|---|---|---|---|---|---|
| AMD Instinct™ MI325X* | 256 GB | 164 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | CDNA 3™ |
| AMD Instinct™ MI325X×8* | 2,048 GB | 1,310 GiB | 160 | 2,046 GiB NVMe | 40 TiB NVMe | CDNA 3™ |
| AMD Instinct™ MI300X | 192 GB | 240 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | CDNA 3™ |
| AMD Instinct™ MI300X×8 | 1,536 GB | 1,920 GiB | 160 | 2,046 GiB NVMe | 40 TiB NVMe | CDNA 3™ |
| NVIDIA HGX H200 | 141 GB | 240 GiB | 24 | 720 GiB NVMe | 5 TiB NVMe | Hopper |
| NVIDIA HGX H200×8 | 1,128 GB | 1,920 GiB | 192 | 2,046 GiB NVMe | 40 TiB NVMe | Hopper |
| NVIDIA HGX H100 | 80 GB | 240 GiB | 20 | 720 GiB NVMe | 5 TiB NVMe | Hopper |
| NVIDIA HGX H100×8 | 640 GB | 1,920 GiB | 160 | 2,046 GiB NVMe | 40 TiB NVMe | Hopper |
| NVIDIA RTX 4000 Ada Generation | 20 GB | 32 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
| NVIDIA RTX 6000 Ada Generation | 48 GB | 64 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |
| NVIDIA L40S | 48 GB | 64 GiB | 8 | 500 GiB NVMe | — | Ada Lovelace |

* Contact sales to reserve capacity.

Serverless Inference Has Arrived

Don't need a full GPU Droplet? The Gradient AI Platform offers a serverless inference API and an agent development toolkit, backed by some of the world's most powerful LLMs. Add inferencing to your app within days, not weeks. And only pay for what you use.

I just need some GPUs… I need a cost-effective, reliable Kubernetes solution that is easy for everyone on the team to access. And that's DO for us.

Richard Li

Amorphous Data, Founder and CEO

Frequently asked questions about GPU Droplets

What are GPU Droplets?

GPU Droplets are virtual machines (VMs) powered by GPUs, optimized for AI/ML workloads. You can run model training, inference, large-scale neural networks, high-performance computing (HPC), and more. They integrate seamlessly with the rest of the DigitalOcean ecosystem.

Where can I deploy GPU Droplets?

They’re available in our New York, Toronto, Atlanta, and Amsterdam data centers, offering low-latency access for developers in North America and Europe.

How are GPU Droplets billed?

Billing is per-second with a minimum 5-minute round-up, so you only pay for actual usage. Powered-off Droplets still accrue charges because their resources remain reserved, so destroy GPU Droplets when they're not in use.
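The billing rule above is easy to sketch: usage is metered per second, but anything under five minutes is rounded up to the 300-second minimum. `gpu_droplet_cost` is a hypothetical helper and the rate passed in is an example, not current pricing.

```python
def gpu_droplet_cost(seconds_used: float, hourly_rate: float) -> float:
    """Estimate a GPU Droplet charge under per-second billing with a
    5-minute (300 s) minimum round-up, as described in the FAQ.
    hourly_rate is an example value; check current pricing."""
    billable_seconds = max(seconds_used, 300)
    return round(billable_seconds / 3600 * hourly_rate, 4)

# 90 seconds of use is billed as the 300-second minimum, so it
# costs the same as a full 5 minutes.
```

For example, two hours at a $1.00/GPU/hour rate bills exactly $2.00, while a 90-second experiment bills the same as a 5-minute one.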

What pricing options are available?

On-demand pricing ranges from roughly $0.76 to $3.44 per GPU/hour, depending on the hardware (e.g., NVIDIA H100, H200, AMD MI300X). Reserved pricing with longer contracts can bring rates down to $1.49 to $1.99 per GPU/hour, making it cost-effective for sustained workloads.

What hardware configurations are offered?

GPU Droplets range from single-GPU setups to powerful 8-GPU configurations, each coming with a boot disk (for OS and frameworks) and a scratch disk (for training data staging).

Do they come ready for AI development out of the box?

Yes. You get pre-installed Python and deep learning tools—such as Torch, CUDA, and other frameworks—so you can get started immediately with AI workloads.
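A quick way to confirm the environment is ready is to check that PyTorch can see a CUDA device. This sketch degrades gracefully when Torch isn't installed; `cuda_ready` is a hypothetical helper name, not part of any DigitalOcean tooling.

```python
import importlib.util

def cuda_ready() -> bool:
    """Return True if PyTorch is installed and sees a CUDA device.

    On a GPU Droplet image with Torch and CUDA pre-installed this
    should return True; on machines without Torch it returns False
    instead of raising ImportError.
    """
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return torch.cuda.is_available()
```

Running this once after boot is a cheap sanity check before kicking off a long training job.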

Is there an uptime SLA for GPU Droplets?

Yes, DigitalOcean backs GPU Droplets with a 99.5% uptime SLA.

What use cases are GPU Droplets ideal for?
GPU Droplets are perfect for:
  • AI/ML model training, fine-tuning, inference pipelines
  • HPC workloads, data processing, simulations
  • Graphics & video rendering, 3D modeling
Can I integrate GPU Droplets with Kubernetes and other tools?

Absolutely. GPU Droplets integrate seamlessly with DigitalOcean’s Kubernetes service, CLI, API, and Terraform, giving you enterprise-grade flexibility for deploying containerized ML workloads.
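Because GPU Droplets are ordinary Droplets, the standard `POST /v2/droplets` API endpoint applies. The sketch below only builds the JSON request body; the size slug and image name shown are assumptions for illustration, so list the current GPU slugs (e.g., with `doctl compute size list`) before using them.

```python
import json

def gpu_droplet_request(name: str, region: str = "nyc2",
                        size: str = "gpu-h100x1-80gb",
                        image: str = "gpu-h100x1-base") -> str:
    """Build a JSON body for POST /v2/droplets.

    The default size slug and image are illustrative assumptions,
    not guaranteed current values; verify them against the API.
    """
    body = {
        "name": name,
        "region": region,
        "size": size,
        "image": image,
        "ssh_keys": [],  # add your SSH key fingerprints here
    }
    return json.dumps(body)

# Send the body to https://api.digitalocean.com/v2/droplets with any
# HTTP client, using an "Authorization: Bearer $DO_TOKEN" header.
```

The same payload maps one-to-one onto the Terraform `digitalocean_droplet` resource, so infrastructure-as-code workflows need no special handling for GPU sizes.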

Do developers like to use GPU Droplets?

People like our simplicity and competitive pricing. We’re a go-to for launching powerful GPU infrastructure without the complexity.

GPU Droplet Resources

What is a Cloud GPU?

Scaling Gradient Platform with GPU Droplets and DigitalOcean Networking

Droplet Features

Getting Started with 1-Click Models on GPU Droplets - A Guide to Llama 3.1 with Hugging Face

Stable Diffusion Made Easy: Get Started on DigitalOcean GPU Droplets

Choosing the Right Offering for your AI ML Workload

Choosing the Right GPU Droplet for Your AI/ML Workload

What is GPU Virtualization?

Get started for free

Sign up and get $200 in credit for your first 60 days with DigitalOcean.*

*This promotional offer applies to new accounts only.