Question

Docker is running an internal network, but on a Droplet we get CORS errors

Posted October 15, 2021
Nginx · Node.js · Docker

I am using a droplet as a dev environment for a dockerized application (node, nginx, postgresql, graphql, react). Our docker app sets up an internal network so that different components can communicate with each other. The docker container builds and runs great, but internal calls from the frontend to the backend are getting blocked in the droplet based on CORS errors.

We can run this thing literally everywhere else with no CORS errors - are there droplet-specific settings I need to create?

1 comment
  • ugh, there’s also an issue with the dockerized instance of the database being built under user ‘systemd-coredump’ rather than the user that everything else is built under.


1 answer

Hello,

There are no Droplet-specific settings. This would happen if the API were running on a different port, so that the request origin would not match. Can you share the exact CORS error that you are getting?

You can add an Access-Control-Allow-Origin header to your Node.js app so that it accepts the requests:

app.use(function(req, res, next) {
    // Allow requests from any origin (tighten this to your frontend's URL in production)
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    // Header values must be strings; note that browsers ignore credentials with a wildcard origin
    res.setHeader('Access-Control-Allow-Credentials', 'true');
    next();
});
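As a side note, the same thing is often done with the cors middleware package instead of setting the headers by hand; a minimal sketch, assuming an Express app like the one above and that you want to restrict requests to your frontend's origin (the URL below is a placeholder):

// npm install cors
const cors = require('cors');

// "app" is the same Express app as above.
// Allowing a specific origin (rather than "*") is also what lets
// Access-Control-Allow-Credentials actually work in browsers.
app.use(cors({
    origin: 'http://your-frontend.example.com',
    methods: ['GET', 'POST', 'PUT', 'DELETE'],
    credentials: true
}));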

Let me know how it goes.
Best,
Bobby

  • Thank you Bobby!

    Our CORS error looks like this:

    Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8080/v1/graphql. (Reason: CORS request did not succeed)

    the backend component we’re trying to connect to is not a Node app, it’s a Hasura instance. It’s exposed on port 8080 via our YAML file, which also puts it on an internal vpcbr network:

    
    services:
      hasura:
        image: hasura/graphql-engine:v1.3.3
        container_name: colrc-v2-hasura
        ports:
          - "8080:8080"
        depends_on:
          postgres:
            condition: service_healthy
        restart: always
        env_file:
          - ./.env
        user: "${UID}:${GID}"
        networks:
          vpcbr:
            ipv4_address: 10.5.0.2

    networks:
      vpcbr:
        driver: bridge
        ipam:
          config:
            - subnet: 10.5.0.0/16
              gateway: 10.5.0.1
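    If it really were Hasura rejecting the origin, my understanding is that the graphql-engine handles CORS through its own environment variables rather than anything in Node or nginx; the kind of line to check for would be something like this in the ./.env file referenced above (the URL is just a placeholder for the frontend’s origin, and by default Hasura allows all origins):

    # ./.env read by the hasura service via env_file
    # comma-separated list of origins allowed to hit /v1/graphql; default is "*"
    HASURA_GRAPHQL_CORS_DOMAIN=http://your-frontend.example.com:3000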
    

    I’ve gotta think the problem is related to the group/owner weirdness that occurs when we docker-up the system, and the database component gets built like this:

    drwx------ 19 systemd-coredump root   4096 Nov 23 19:10 db_data
    
    

    rather than all our other components, that up like this:

    drwxr-xr-x  4 myusername      myusername 4096 Oct 15 23:16 file_data
    
    

    In spite of the fact that I’m bringing everything up with the UID and GID set in the parameters of docker-compose up:

     sudo UID=${UID} GID=${GID} docker-compose up
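    (I’m honestly not sure both of those variables survive that invocation - bash sets UID but not GID, so ${GID} can expand to nothing before sudo even runs. A more explicit equivalent would be:)

     # resolve the numeric uid/gid explicitly instead of relying on shell-provided variables
     sudo UID="$(id -u)" GID="$(id -g)" docker-compose up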
    
  • Bobby, thank you - just found our error and fixed it. It was none of the things I was worried about - instead it came from the fact we weren’t passing in the right environment variable in the frontend, so localhost wasn’t really there.

    Rookie mistake. Thanks again for your help!
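
    In case it helps anyone hitting the same symptom: the fix boils down to making sure the frontend build is actually given the backend URL instead of assuming localhost. For a Create React App-style build it is roughly the following (the variable name and address are placeholders; only REACT_APP_-prefixed variables reach the browser there):

    # frontend .env read at build time
    REACT_APP_GRAPHQL_URL=http://<droplet-ip>:8080/v1/graphql

    and then the client reads process.env.REACT_APP_GRAPHQL_URL rather than a hard-coded localhost URL.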