ERR_CERT_COMMON_NAME_INVALID on local dev environment using signed url to upload directly from web

I have a project using DigitalOcean Spaces to upload and request files from a server. I've got a Node.js server that uses the S3 SDK to generate a presigned URL, which is then used in a JavaScript fetch to upload directly to Spaces. This is to reduce load on our server and cut the running costs of proxying uploads through it.

This is the code making the upload request (written in TypeScript):

async (uploadUrl: string, file: File) => {
    const response = await fetch(uploadUrl, {
      method: 'PUT',
      body: file,
      headers: {
        'Content-Type': file.type,
      },
    })
    return response
}

This responds with an ERR_CERT_COMMON_NAME_INVALID error. I'm using Browsersync for a local dev server and imagine this is a case of setting up CORS properly to allow localhost - I'm aware of the security concerns with doing that, and I will restrict CORS as soon as development on this feature is complete. I've added a CORS rule to allow localhost:3000 but I'm still getting this issue, so I'd appreciate any pointers - ideally there would be a way of setting up a dev mode on an obscure bucket that relaxes these rules temporarily.

As I understand it, I shouldn't need to edit the ACL: while the bucket is private, the presigned URL is being generated by a server with an API key and secret that have full access. Am I wrong about that?

Here's the relevant part of my server-side code in case it helps (it's Node.js):

import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({
      endpoint: process.env.SPACES_ENDPOINT,
      region: process.env.SPACES_REGION,
      forcePathStyle: false,
      credentials: {
        accessKeyId: process.env.SPACES_ACCESS_KEY,
        secretAccessKey: process.env.SPACES_SECRET,
      },
})

const getPresignedPutURL = (key: string, metadata: Record<string, string>) => {
    const command = new PutObjectCommand({
      Bucket: process.env.SPACES_BUCKET,
      Key: key,
      Metadata: { ...metadata },
    })

    return getSignedUrl(client, command, { expiresIn: 60 * 60 * 12 })
}


Accepted Answer

Oh wow - after days of messing around on and off with this, it turns out I made a typo when naming my bucket, and every other error was a red herring… I eventually tried to hit just the domain directly and got a "bucket doesn't exist" error that keyed me onto it.

Damn. Thanks for your help anyway haha.

The error was a red herring - the issue was the URL being generated. For SPACES_ENDPOINT, I was including my bucket name in the subdomain (which is what the DigitalOcean settings page spits out as the 'Origin Endpoint', hence the confusion). Presumably the Spaces SSL certificate doesn't cover the double subdomain that ended up in the URL built by the client.
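To make the fix concrete, here is a small sketch of the difference. The region and bucket name below are placeholders, and the cert explanation is the same presumption as above: a wildcard certificate for `*.nyc3.digitaloceanspaces.com` only covers one subdomain level, so a doubled bucket subdomain fails validation.

```typescript
// Hypothetical values for illustration only.
const region = 'nyc3'
const bucket = 'my-bucket'

// Wrong for SPACES_ENDPOINT: the "Origin Endpoint" shown in the Spaces
// settings page already contains the bucket as a subdomain.
const originEndpoint = `https://${bucket}.${region}.digitaloceanspaces.com`

// Right for SPACES_ENDPOINT: the regional endpoint only. The S3 client
// prepends the bucket itself when forcePathStyle is false.
const regionalEndpoint = `https://${region}.digitaloceanspaces.com`

// With the origin endpoint configured, the client builds a host with the
// bucket doubled, which the wildcard certificate does not cover:
const badHost = `${bucket}.` + originEndpoint.replace('https://', '')
// → 'my-bucket.my-bucket.nyc3.digitaloceanspaces.com'
```

That doubled host is what surfaces in the browser as ERR_CERT_COMMON_NAME_INVALID rather than a more descriptive error.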

I’m now getting CORS errors instead but they’re at least coming from the correct URL…

Access to fetch at '' from origin 'https://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Site Moderator
December 14, 2023


The ERR_CERT_COMMON_NAME_INVALID error typically indicates an issue with the SSL certificate. It seems like your development environment is using HTTPS, and the SSL certificate is not valid for the specified domain or subdomain.

Regarding CORS, you've already mentioned that you added a CORS rule to allow localhost:3000. Ensure that the rule is correctly configured in your DigitalOcean Spaces settings: the CORS configuration should include the headers and methods your requests actually use.

The rule can also look like this:

Create a new configuration with these values:

-   Origin: `*`
-   Allowed Methods: `GET`, `PUT` (the presigned upload itself is a `PUT`)
-   Allowed Headers: Click "Add header" and then enter: `*`
-   Save options
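The same rule set can be expressed as a plain S3 CORS configuration object, the shape accepted by `PutBucketCorsCommand` in `@aws-sdk/client-s3`, if you'd rather apply it from code than the panel. The origin and max-age values here are assumptions for local development:

```typescript
// Sketch only: restrict AllowedOrigins before going to production.
const corsConfiguration = {
  CORSRules: [
    {
      AllowedOrigins: ['https://localhost:3000'], // or '*' while debugging
      AllowedMethods: ['GET', 'PUT'],             // PUT is required for the presigned upload
      AllowedHeaders: ['*'],                      // covers the Content-Type header the fetch sends
      MaxAgeSeconds: 3600,                        // cache preflight responses for an hour
    },
  ],
}
```

Whichever way you set it, the browser's preflight `OPTIONS` request must see your origin and the `PUT` method in the response, or the upload will be blocked.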


As for the ACL, you are correct in assuming that you shouldn't need to modify it. The presigned URL is generated with your server's API key and secret, which should have the required permissions. The ACL of the bucket itself does not affect the ability to use a presigned URL generated with appropriate credentials.

