Whether you’re new to AI and machine learning (ML) or a seasoned expert, looking to train a large language model (LLM) or run cost-effective inference, DigitalOcean has a GPU Droplet for you. We currently offer five different GPU Droplet types from two industry-leading brands, AMD and NVIDIA, with more GPU Droplet types to come. Read on to learn more about how to choose the right GPU Droplet for your workload.
AMD Instinct™ MI300X
Use cases: Large model training, fine-tuning, inference, and HPC
Why choose: AMD Instinct™ MI300X’s large memory capacity allows it to hold models with hundreds of billions of parameters entirely in memory, reducing the need to split a model across multiple GPUs (see the sizing sketch at the end of this section).
Key benefits:
Memory performance: High memory bandwidth (up to 5.3 TB/s) and capacity (192 GB of HBM3 memory) to efficiently handle larger models and datasets.
Value: Offered at a competitive price point ($1.99/GPU/hr on-demand) for an HPC GPU.
Key performance benchmark: Up to 1.3X the performance of AMD MI250X for AI use cases
Coming soon: AMD Instinct™ MI325X
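To make the memory claim above concrete, here is a rough back-of-the-envelope sketch in plain Python (no GPU required) that estimates how much GPU memory a model’s weights alone occupy at different precisions. The parameter counts are illustrative, and activation/KV-cache overhead is deliberately excluded, so treat the results as a lower bound rather than a sizing guarantee.

```python
# Rough estimate of GPU memory needed just to hold model weights,
# assuming 2 bytes per parameter for FP16/BF16 and 1 byte for FP8/INT8.
# Activation and KV-cache overhead varies widely and is excluded here.

MI300X_MEMORY_GB = 192  # per-GPU HBM3 capacity

def weights_gb(params_billions: float, bytes_per_param: int) -> float:
    """Return the approximate size of the weights in gigabytes."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params in (70, 180, 405):  # example model sizes, in billions of parameters
    fp16 = weights_gb(params, 2)
    fp8 = weights_gb(params, 1)
    print(f"{params}B params: ~{fp16:.0f} GB in FP16 "
          f"(~{fp16 / MI300X_MEMORY_GB:.1f} GPUs), "
          f"~{fp8:.0f} GB in FP8 (~{fp8 / MI300X_MEMORY_GB:.1f} GPUs)")
```

A 180B-parameter model in FP16, for example, needs roughly 360 GB for weights alone, which fits on two 192 GB GPUs before accounting for activations, while GPUs with far less memory would force much wider model sharding.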
NVIDIA H100
Use cases: Training LLMs, inference, and HPC
Why choose: NVIDIA H100 is based on the NVIDIA Hopper architecture, specifically designed for next-generation AI and scientific computing tasks.
Key benefits:
Computing power: Accelerates AI computation using mixed-precision formats (FP8 and FP16); a minimal training sketch follows this section.
Speed: Fourth-generation Tensor Cores and a dedicated Transformer Engine deliver high-throughput data processing for large-scale AI workloads.
Key performance benchmark: Up to 4X faster training over NVIDIA A100 for GPT-3 (175B) models
Coming soon: NVIDIA H200
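To illustrate the mixed-precision point above, here is a minimal PyTorch sketch that runs a single FP16 training step with automatic mixed precision. The model and data are toy placeholders, and it assumes a CUDA-capable GPU Droplet; FP8 typically requires additional tooling (for example, NVIDIA’s Transformer Engine), which is not shown.

```python
import torch
import torch.nn as nn

# Assumes a CUDA-capable GPU Droplet (e.g. H100). Model and data are toy placeholders.
device = torch.device("cuda")
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 1024, device=device)
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad(set_to_none=True)
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = loss_fn(model(inputs), targets)  # matmuls run in FP16 on Tensor Cores
scaler.scale(loss).backward()               # scale the loss to avoid FP16 gradient underflow
scaler.step(optimizer)
scaler.update()
print(f"loss: {loss.item():.4f}")
```

The same autocast-plus-GradScaler pattern drops into an existing FP32 training loop with only a few added lines, which is why mixed precision is usually the first optimization to try on Tensor Core GPUs.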
NVIDIA RTX 4000 Ada Generation
Use cases: Inference, graphical processing, rendering, 3D modeling, video, content creation, and media & gaming
Why choose: NVIDIA RTX 4000 Ada is a versatile GPU with cost-efficient inference capabilities.
Key benefits:
Graphics performance: 4th-generation Tensor Cores and next-gen CUDA cores with 20 GB of graphics memory and DLSS 3.0, which uses AI to boost frame rates while maintaining image quality.
Value: Offered at a competitive price point of under $1/hr ($0.76/GPU/hr on-demand).
Key performance benchmark: Up to 1.7X higher performance than NVIDIA RTX A4000
NVIDIA RTX 6000 Ada Generation
Use cases: Inference, graphical processing, rendering, virtual workstations, compute, and media & gaming
Why choose: NVIDIA RTX 6000 Ada Generation is a versatile GPU with cost-efficient inference capabilities.
Key benefits:
Graphics performance: 4th-generation Tensor Cores and next-gen CUDA cores with 48 GB of graphics memory and DLSS 3.0, which uses AI to boost frame rates while maintaining image quality.
Memory performance: 48 GB of graphics memory, more than double the 20 GB on the NVIDIA RTX 4000 Ada Generation.
Key performance benchmark: Up to 10X higher performance than NVIDIA RTX A6000
NVIDIA L40S
Use cases: Generative AI, inference & training, 3D graphics, rendering, virtual workstations, and streaming & video content
Why choose: NVIDIA L40S is a versatile GPU with cost-efficient capabilities for inference, graphics, digital twins, and real-time 4K streaming.
Key benefits:
Flexibility: 4th-generation Tensor Cores deliver strong performance across NVIDIA’s software stack, including CUDA libraries and TensorRT (see the inference sketch at the end of this section).
Value: Offers 40% of the inference performance of the H100 at ~50% of the cost.
Key performance benchmarks: Up to 1.7X the performance of NVIDIA A100 for AI use cases
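As a small illustration of the cost-efficient inference angle, the sketch below runs half-precision inference in PyTorch on a toy stand-in model; the same pattern applies to real models, and further gains are often possible by exporting to TensorRT, which is not shown here.

```python
import torch
import torch.nn as nn

# Assumes a CUDA-capable GPU Droplet (e.g. L40S). The model is a toy stand-in;
# the half-precision + inference_mode pattern carries over to real models.
# Optionally, model = torch.compile(model) on PyTorch 2.x can add further speedups.
device = torch.device("cuda")
model = nn.Sequential(nn.Linear(2048, 2048), nn.GELU(), nn.Linear(2048, 1000))
model = model.half().to(device).eval()

batch = torch.randn(64, 2048, device=device, dtype=torch.float16)

with torch.inference_mode():
    logits = model(batch)           # FP16 matmuls run on Tensor Cores
    preds = logits.argmax(dim=-1)   # one predicted class per input

print(preds.shape)  # torch.Size([64])
```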
No matter which GPU Droplet you require, when you choose GPU Droplets with DigitalOcean, you benefit from:
Scalable, on-demand GPU compute
Virtual instances to manage cost
Seamless integration with the broader DigitalOcean ecosystem, including access to our Kubernetes service
Pre-installed Python and Deep Learning software packages (see the verification snippet after this list)
HIPAA-eligibility and SOC 2 compliance (all GPU Droplets)
Flexible configurations from single-GPU to 8-GPU setups (select GPU Droplets)
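As a quick sanity check on a freshly created GPU Droplet, the snippet below confirms that the GPU and the deep learning stack are visible from Python. It assumes PyTorch is installed (install it with pip if your chosen image does not include it); ROCm builds of PyTorch report AMD GPUs through the same torch.cuda API.

```python
import torch

# Confirm the driver and runtime are visible to PyTorch on the Droplet.
print("PyTorch:", torch.__version__)
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
```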
With a $200 credit available for new users, there’s no reason to hesitate - spin up a GPU Droplet today!
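If you prefer to create a GPU Droplet programmatically rather than through the control panel, here is a hedged sketch using pydo, DigitalOcean’s official Python client. The region, size, and image slugs below are illustrative assumptions only; confirm the actual GPU Droplet slugs available to your account (for example, via the /v2/sizes and /v2/images API endpoints) before running it.

```python
import os
from pydo import Client

# Requires a DigitalOcean API token: export DIGITALOCEAN_TOKEN=...
client = Client(token=os.environ["DIGITALOCEAN_TOKEN"])

# The slugs below are placeholders for illustration; confirm the GPU Droplet
# size/image slugs and supported regions for your account before creating.
req = {
    "name": "ml-training-01",
    "region": "tor1",            # assumed region with GPU Droplet capacity
    "size": "gpu-h100x1-80gb",   # assumed single-H100 GPU Droplet size slug
    "image": "gpu-h100x1-base",  # assumed GPU-ready base image slug
    "ssh_keys": [],              # add your SSH key fingerprints or IDs here
}

resp = client.droplets.create(body=req)
print("Created Droplet ID:", resp["droplet"]["id"])
```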
*Performance benchmarks available at amd.com and nvidia.com.