The cloud built for Artificial Intelligence

Train and fine-tune models in the cloud. Build the next ChatGPT

Purpose-built machines for large-scale, GPU-accelerated workloads

On-demand access to powerful, low-cost virtual machines, backed by a large GPU catalog for both training and inference.

Read the docs

From research to production

Start something new or scale up an existing project on affordable cloud GPUs. Replace the high upfront cost of managing your own servers with predictable hourly pricing.

Low-cost GPUs with per-second billing

Save up to 70% on compute costs

Spend significantly less on your GPU compute compared to the major public clouds or buying your own servers.

Predictable costs

Scale when you need, stop paying when you don’t. On-demand pricing means you only pay for what you use.

No commitments

Easily change instance types at any time so you always have the right mix of cost and performance. Cancel anytime.

Go from signup to training a model in seconds

Preloaded with ML frameworks

Choose the “ML in a Box” template, which comes preinstalled with all the major ML frameworks and CUDA® drivers.

Latest NVIDIA GPUs

Choose from the largest GPU catalog in the world. Leverage the latest NVIDIA GPUs, including Ampere A100s in configurations of up to 8 GPUs per machine.

Root access, connect with SSH

Bring your SSH key and connect directly to your VM with full root access.
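A minimal sketch of that flow, assuming OpenSSH is installed locally; the key path and the IP address (203.0.113.10) are placeholders, not values fixed by the platform:

```shell
# Create a key pair if you don't already have one (the path is a suggestion)
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/paperspace_key" -q
# Add the public key to your machine, then connect as root using the
# public IP shown in the console (203.0.113.10 is a placeholder):
#   ssh -i ~/.ssh/paperspace_key root@203.0.113.10
```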

Limitless computing power on demand

Simple management interface & API

Easily launch a large cluster of compute nodes with zero DevOps required. Track real-time utilization across your team. Full API access.
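As a sketch only: the route and header below illustrate the general shape of an authenticated call to the Paperspace REST API, but the exact endpoints, parameters, and authentication details are assumptions here; consult the API reference for the real ones.

```python
import urllib.request

# Hypothetical route and placeholder API key -- check the Paperspace API
# docs for actual endpoints; this only shows the shape of the request.
API_URL = "https://api.paperspace.io/machines/getMachines"
req = urllib.request.Request(
    API_URL,
    headers={"X-Api-Key": "YOUR_API_KEY"},
)
# urllib.request.urlopen(req) would send the request; it is omitted here so
# the sketch stays runnable without credentials.
print(req.full_url)
```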

Lightning fast networking

Each instance is connected to a 10 Gbps backend network with 1 Gbps internet connectivity.

The latest state-of-the-art infrastructure

With one of the largest catalogs of GPUs in the world, you always have access to the best hardware available.

Story.com’s GenAI workflow demands heavy computational power, and DigitalOcean’s H100 nodes have been a game-changer for us. As a startup, we needed a reliable solution that could handle our intensive workloads, and DO delivered with exceptional stability and performance. From seamless onboarding to rock-solid infrastructure, every part of the process has been smooth. The support team is incredibly responsive and quick to meet our requirements, making it an invaluable part of our growth.

Deep Mehta

Co-Founder and CTO, Story.com

FAQs

Where can I find more information on GPUs?

You can choose from a host of GPU options—including A100, A4000, and more—to power your apps. If you’re looking for H100s to run your workloads, check out GPU Droplets.

If you want to stay on legacy Machines, please see this page.

How much do AI Machines cost?

All legacy Paperspace resources are billed on a per-hour basis. Non-GPU resources such as storage and public IP addresses have a monthly maximum charge; once a resource reaches that maximum, it incurs no further charges for the rest of the billing cycle. Read the docs for more information on legacy Paperspace pricing.
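The capped per-hour model above can be sketched in a few lines. The rates here are made up for illustration; actual Paperspace prices vary by resource and are listed in the pricing docs.

```python
# Hypothetical rates -- actual prices vary by resource; this only
# illustrates how the monthly maximum works for non-GPU resources.
HOURLY_RATE = 0.01   # e.g. a public IP billed per hour (assumed)
MONTHLY_MAX = 3.00   # cap after which no further charges accrue (assumed)

def monthly_charge(hours_used: float) -> float:
    """Per-hour billing, capped at the resource's monthly maximum."""
    return round(min(hours_used * HOURLY_RATE, MONTHLY_MAX), 2)

print(monthly_charge(100))  # 1.0 -- under the cap, billed per hour
print(monthly_charge(720))  # 3.0 -- a full month hits the monthly maximum
```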

Sign up

Sign up for Machines through the legacy Paperspace console.

Sign up