Today, we’re excited to introduce GPU Droplets—DigitalOcean’s signature virtual machines, now with the extra power of NVIDIA H100 GPUs. With GPU Droplets, developers can effortlessly experiment, train models, and scale AI projects, without complexity or large upfront investments.
Now available to every DigitalOcean user, GPU Droplets are powered by NVIDIA H100 GPUs, among the most powerful data center GPUs available today, featuring fourth-generation Tensor Cores and a Transformer Engine built for high-speed AI workloads. GPU Droplets offer on-demand access to these machines, enabling developers, startups, and innovators to train AI models, process large datasets, and handle complex neural networks with ease.
With flexible options ranging from single-GPU configurations to setups with 8 GPUs, users can scale computing power based on their project needs without the burden of large upfront hardware investments. The virtualized hardware stack allows for seamless scaling, so you can easily spin up GPU instances and scale down when no longer needed—optimizing both performance and costs.
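Because configurations range from one to eight GPUs, it helps to write code that adapts to whatever the Droplet exposes. The sketch below is a minimal illustration using the pre-installed PyTorch stack; the tensor sizes are arbitrary placeholders, not a DigitalOcean-specific recipe.

```python
# Minimal sketch: detect however many GPUs the Droplet exposes and run a
# small matrix multiply on each one to confirm it is usable. Tensor sizes
# are placeholders; swap in your own workload.
import torch

def run_on_all_gpus():
    if not torch.cuda.is_available():
        raise RuntimeError("No CUDA-capable GPU visible to PyTorch")
    for idx in range(torch.cuda.device_count()):
        device = torch.device(f"cuda:{idx}")
        x = torch.randn(4096, 4096, device=device, dtype=torch.float16)
        y = x @ x  # one matmul per GPU
        torch.cuda.synchronize(device)
        print(f"{torch.cuda.get_device_name(idx)} (cuda:{idx}): ok, "
              f"result shape {tuple(y.shape)}")

if __name__ == "__main__":
    run_on_all_gpus()
```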
Each GPU Droplet includes two high-performance local disks: a boot disk to store the OS, applications, and AI/ML frameworks, and a scratch disk for staging data during training. These resources come pre-integrated with the GPU Droplet, simplifying the process for users by eliminating the need to manage networking and storage separately, allowing them to focus on training their models efficiently.
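As a quick illustration of using the scratch disk, the sketch below checks free space and creates a staging directory before a training run. The mount point shown is hypothetical; check the actual path of the scratch volume on your Droplet (for example with df -h) and adjust accordingly.

```python
# Sketch: stage training data on the scratch disk before a run.
# The mount point below is hypothetical -- replace it with the real
# path of your Droplet's scratch volume.
import shutil
from pathlib import Path

SCRATCH = Path("/mnt/scratch")  # hypothetical mount point, adjust to yours

def prepare_scratch_dir(name: str) -> Path:
    if not SCRATCH.exists():
        raise FileNotFoundError(f"Scratch disk not found at {SCRATCH}")
    free_gb = shutil.disk_usage(SCRATCH).free / 1e9
    print(f"Scratch disk free space: {free_gb:.1f} GB")
    staging = SCRATCH / name
    staging.mkdir(parents=True, exist_ok=True)
    return staging

if __name__ == "__main__":
    data_dir = prepare_scratch_dir("training-data")
    print("Stage your dataset into:", data_dir)
```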
Whether you are running Large Language Models with Ollama, generating images with Stable Diffusion, or rendering high-quality images in your favorite graphics software, GPU Droplets cater to everyone, from those just getting started with AI to those running production workloads serving thousands of users.
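As a taste of the Ollama use case, the sketch below queries a local Ollama server from Python. It assumes Ollama is already installed and serving on its default port (11434) and that a model such as llama3 has been pulled; neither is part of the GPU Droplet image, so treat this as a starting point rather than a turnkey setup.

```python
# Sketch: query a local Ollama server from Python. Assumes Ollama is
# running on its default port (11434) and the "llama3" model has already
# been pulled -- adjust the model name to whatever you actually use.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("In one sentence, what is a GPU Droplet?"))
```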
See how you can create animated GIFs in the video below, and check out our YouTube channel for videos on how to create containerized RAG pipelines, with more demos coming soon!
GPU Droplets make cutting-edge AI capabilities accessible to everyone by reducing the high costs and complexity typically associated with larger cloud providers. GPU Droplets come pre-installed with a range of Python and deep learning software packages, such as PyTorch and CUDA, making it easy to bring your own work onto GPU Droplets with minimal setup. Our transparent pricing model supports startups and developers by making it affordable to experiment, scale, and grow AI projects.
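To show how little setup is involved with the pre-installed stack, here is a minimal training-step sketch in PyTorch. The model and batch are toy placeholders standing in for your own workload, not anything DigitalOcean-specific.

```python
# Minimal training-step sketch using the pre-installed PyTorch + CUDA stack.
# The model and data below are toy placeholders for your own workload.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    print("Training on:", torch.cuda.get_device_name(0))

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch: replace with your own DataLoader.
inputs = torch.randn(64, 512, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print("Step complete, loss =", loss.item())
```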
With GPU Droplets, users can:
- Experiment with and train AI/ML models on dedicated H100 hardware
- Process large datasets and run inference for LLM and image-generation workloads
- Scale GPU capacity up or down on demand, without large upfront hardware investments
DigitalOcean users like Story.com are already leveraging GPU Droplets to run intensive workloads, saying:
“Story.com’s GenAI workflow demands heavy computational power, and DigitalOcean’s H100 nodes have been a game-changer for us. As a startup, we needed a reliable solution that could handle our intensive workloads, and DO delivered with exceptional stability and performance. From seamless onboarding to rock-solid infrastructure, every part of the process has been smooth. The support team is incredibly responsive and quick to meet our requirements, making it an invaluable part of our growth.” - Deep Mehta, Co-Founder and CTO, Story.com
Dive into the future of AI infrastructure, and spin up a GPU Droplet today!
Whether you’re developing chatbots, training large language models, or analyzing big data, our virtual machines powered by NVIDIA H100 GPUs make advanced AI simple to use and cost-effective. Visit our product documentation for a step-by-step guide on spinning up a GPU Droplet. Interested in learning more about custom solutions, larger GPU allocations, or reserving instances for higher discounts? Contact our sales team and learn how DigitalOcean’s fabric, with speeds up to 400 Gbps, can help you run applications requiring high levels of RAM and compute power across multiple nodes of 8-GPU instances.
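For that multi-node case, frameworks such as PyTorch handle the cross-node communication for you. The sketch below is a generic torchrun-style setup, not a DigitalOcean-specific recipe; the endpoint address, node counts, and model are placeholders you would replace with your own values.

```python
# Generic multi-node sketch: initialize PyTorch's distributed backend and
# wrap a model in DistributedDataParallel. Launch with torchrun on each node:
#   torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d \
#            --rdzv_endpoint=<head-node-ip>:29500 train.py
# The endpoint, node counts, and model are placeholders, not DO-specific values.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")     # NCCL for GPU-to-GPU traffic
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
    loss = model(x).sum()
    loss.backward()                             # gradients sync across all ranks
    if dist.get_rank() == 0:
        print("world size:", dist.get_world_size())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```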
GPU Droplets are now available in TOR1 and NYC2, with more data centers coming soon.
Bratin Saha, Chief Product & Technology Officer