DigitalOcean is thrilled to announce the availability of NVIDIA H100 Tensor Core GPU instances and key enhancements to the Paperspace platform, helping startups, growing digital businesses, and ISVs continuously deliver impactful AI experiences to their customers. The explosive growth of generative AI has triggered a race among companies to bring powerful AI/ML innovations to market quickly. AI-native startups and fast-growing digital companies who are training or running inference on foundation models and require access to state-of-the-art infrastructure can now leverage superior performance with competitive cost savings on Paperspace by DigitalOcean.
NVIDIA H100 GPUs have accelerated AI/ML adoption with order-of-magnitude performance gains unimaginable a few years ago. Equipped with NVIDIA’s new Transformer Engine built on 4th Gen Tensor Cores, H100 GPUs power some of the most significant innovations in the AI/ML space, such as large language models and synthetic media models.
Paperspace now offers these powerful GPUs as both on-demand and reserved instances.
Sensational AI performance: Powered by the new NVIDIA Transformer Engine and 4th Gen Tensor Cores, H100 GPUs deliver up to 9x faster AI training and up to 30x faster AI inference on large language models compared to the previous-generation NVIDIA A100 Tensor Core GPUs.
Scale with ease: Multi-node H100 GPU deployment (8x GPUs) enables scaling of GPU power to handle large and complex models. Blazing-fast 3.2TBps NVIDIA NVLink interconnect between these GPUs makes this multi-node GPU setup operate as a massive compute block.
Spin up in seconds: Create an H100 GPU instance in just a few seconds. Our ML-in-a-box configuration provides a holistic compute solution that combines GPUs, Ubuntu Linux images, private networking, SSD-based storage, public IPs, and snapshots.
Starts as low as $2.24/hr per chip: Paperspace provides both on-demand and guaranteed instances of H100 GPUs. Per-second billing and unlimited bandwidth help you keep costs down.
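To illustrate how per-second billing adds up, here is a small sketch. The $2.24/hr per-chip rate comes from this post; the 8-GPU count and 90-minute run duration are hypothetical examples, not quoted workloads.

```python
# Sketch: estimate the cost of a training run under per-second billing.
# The $2.24/hr per-chip rate is from this post; the GPU count and run
# duration below are made-up examples.
HOURLY_RATE_PER_CHIP = 2.24  # USD/hr, H100 on-demand starting price
SECONDS_PER_HOUR = 3600

def estimate_cost(num_gpus: int, duration_seconds: int,
                  hourly_rate: float = HOURLY_RATE_PER_CHIP) -> float:
    """USD cost of running `num_gpus` chips for `duration_seconds`,
    billed per second rather than rounded up to whole hours."""
    per_second_rate = hourly_rate / SECONDS_PER_HOUR
    return num_gpus * duration_seconds * per_second_rate

# A 90-minute run on an 8x H100 instance:
cost = estimate_cost(num_gpus=8, duration_seconds=90 * 60)
print(f"${cost:.2f}")  # prints $26.88
```

Because billing is per second, a 90-minute job costs exactly 1.5 hours of the hourly rate, rather than being rounded up to 2 full hours.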
24/7 Reliability: Paperspace’s platform is monitored 24/7 so customers can maintain absolute focus on training their models and not on infrastructure. When you go into production, our extensive customer support options will help you stay on top of high-traffic usage.
Here’s what’s inside Paperspace H100 instances:
- GPU memory (GB)
- CPU RAM (GB)
- NVIDIA NVLink support
- GPU interconnect speeds
Our ML-in-a-box configuration enables users to implement everything they require for a powerful user-facing AI/ML app. If your model is already in production, run inference on H100 GPUs on Paperspace to deliver delightful AI experiences to your customers.
Paperspace customers love the performance advantage H100 GPUs bring to their AI/ML models:
“Training our next-generation text-to-video model with millions of video inputs on NVIDIA H100 GPUs on Paperspace took us just 3 days, enabling us to get a newer version of our model much faster than before. We also appreciate Paperspace’s stability and excellent customer support, which has enabled our business to stay ahead of the AI curve.” - Naeem Ahmed, Founder, Moonvalley AI
Spinning up an NVIDIA H100 GPU instance on Paperspace takes just a few clicks. Read our docs to get started with NVIDIA H100 GPUs on Paperspace!
Paperspace’s pricing for NVIDIA H100 GPUs is designed to be flexible. Our transparent, per-second pricing model, combined with zero data ingress and egress fees, is perfect for startups and growing digital businesses who want flexibility and predictable pricing while leveraging the power of high-performing GPUs. Learn more about Paperspace pricing here!
| Offering | Configurations | Get started |
| --- | --- | --- |
| Guaranteed instances: reserve H100x8 with 3.2TBps interconnect speeds for a specific period of time | H100x8 with 3.2TBps interconnect speeds | Click here to get started |
| On-demand: access H100 GPUs from the Paperspace console, with no upfront time commitment | H100x1 (no interconnect) and H100x8 with 3.2TBps interconnect speeds | Click here to get started |
In addition to computing power, Gradient Deployments provide AI/ML businesses the ability to deploy models at lightning speed. Recent enhancements include:
Simplified container registry validation: With a complete redesign of the container registry experience, it’s much easier for users to link their container registries to Paperspace. Simply select the container registry vendor (Docker Hub, Azure CR, Google CR, or GitHub CR), fill in the namespace and a username/password or access token, and we will prefill the other values required to connect the registry to Paperspace. This streamlines management of existing registries and adds a new way to bring containers into Gradient Deployments, helping ensure deployments start successfully.
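The fields above can be pictured as a small config record. This is only an illustrative sketch of the values the linking flow collects; the dict shape and key names are assumptions, not Paperspace's actual API schema.

```python
# Illustrative shape of the fields collected when linking a container
# registry to Paperspace. The key names here are assumptions for the
# sketch, not Paperspace's real API schema.
def registry_link_config(vendor: str, namespace: str,
                         username: str, access_token: str) -> dict:
    """Validate the vendor and bundle the user-supplied fields."""
    supported = {"Docker Hub", "Azure CR", "Google CR", "GitHub CR"}
    if vendor not in supported:
        raise ValueError(f"unsupported vendor: {vendor}")
    return {
        "vendor": vendor,
        "namespace": namespace,
        "username": username,
        "accessToken": access_token,
        # Remaining connection values (e.g. the registry URL) are
        # prefilled by Paperspace based on the chosen vendor.
    }

cfg = registry_link_config("GitHub CR", "my-org", "my-user", "TOKEN")
```

The point of the redesign is that only these user-specific fields are required; vendor-specific details are filled in automatically.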
Enhanced security for deployment endpoints: We have fortified the deployment endpoints for Paperspace to provide enhanced security for Gradient Deployments. When creating a deployment, you can choose whether to set the endpoint to public or protected. By selecting protected status, you can easily manage permissions and have more control over who can access the endpoints. You also have the option to switch between public and protected endpoints as needed. Check out our docs to learn more.
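A protected endpoint requires the caller to present credentials. The sketch below shows one common pattern, a bearer token in the Authorization header; the URL is a placeholder and the exact auth scheme is an assumption here, so check the Paperspace docs for what your deployment expects.

```python
# Sketch: preparing an authenticated call to a protected deployment
# endpoint. The bearer-token scheme and the URL below are assumptions
# for illustration, not confirmed Paperspace specifics.
def build_request_headers(api_token: str) -> dict:
    """Construct headers for a request to a protected endpoint."""
    return {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }

headers = build_request_headers("YOUR_API_TOKEN")
# Example usage with the `requests` library (endpoint URL is a placeholder):
# requests.post("https://<your-deployment-endpoint>/infer",
#               json={"inputs": ["..."]}, headers=headers)
```

Switching an endpoint from public to protected means unauthenticated calls are rejected, while clients holding a valid token continue to work unchanged.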
We’re excited about the new customer offerings on the Paperspace platform and remain committed to delivering excellent experiences for AI/ML businesses. Click here to get started or contact our sales team to learn more about how Paperspace can help your business grow.