
I have a game where people can draw and guess, similar to scribble.io. I'm using Nginx for load balancing, hosted on AWS, with a backend written in Node.js.

The problem I'm encountering is that my backend endpoint/route, /allRooms, is not being hit when I try to create a room. This lets players bypass the check that ensures a room ID isn't already taken. As a result, I've ended up with two administrators in the same room, which is not the intended scenario.

I can confirm through the response headers that all players with the same room ID are coming to the same room. I am attaching all the relevant code for your review.

Join.jsx - from here, the player will try to enter the room

  const handleCreateRoom = async () => {
    if (!userName || !room) {
      toast.error("Please enter your name and room ID!", { autoClose: 1000 });
      return;
    }
    try {
      const response = await axios.get(`${backendLink}/allRooms`);
      const existingRooms = response.data.rooms;
      console.log("existingRooms :: " , existingRooms);
      console.log("My response :: " , response);
      if (existingRooms.includes(room)) {
        toast.error("Room ID already taken! Please choose a different one.", {
          autoClose: 1500,
        });
        return;
      }
      console.log("I am going to navigate");
      navigate(`/room?roomID=${room}&name=${userName}`);
      sessionStorage.setItem("role", "admin");
    } catch (err) {
      console.error("Error checking room existence:", err);
      toast.error("Error checking room existence. Please try again.", {
        autoClose: 1500,
      });
      return;
    }
  };

index.js - first the endpoint and then the socket logic

const roomManager = new RoomManager();
const chatManager = new ChatManager(io, logger);

app.get("/allRooms" , (req, response)=>{
    try{
        const rooms = roomManager.showRooms();
        console.log("ALL ROOMS IN BACKEND :: " , rooms);
        response.status(200).json({rooms});
    }
    catch(error){
        console.error("Error fetching rooms:", error);
        response.status(500).json({message: "Failed to fetch rooms"});
    }
})

roomManager.setIO(io);
roomManager.setLogger(logger);
roomManager.setActiveRoomsGauge(activeRooms);

io.on("connection", (socket) => {
    logger.info(`New user connected: ${socket.id}`);
    activeUsersGauge.inc();

    socket.on("join-room", (info) => {
        try {
            console.log("new user ::", info);
            console.log("Room while i join", roomManager.showRooms());
            roomManager.joinRoom(socket, info);
        } catch (error) {
            logger.error(`Error in join-room event: ${error.message}`, { stack: error.stack });
            socket.emit("error", { message: "Failed to join room" });
        }
    });
    // ... (remaining socket handlers omitted)

The problem I detected

In EC2:

ALL ROOMS IN BACKEND ::  []
new user :: { room: 'singh', name: '✨✨✨✨✨✨✨', role: 'admin', ready: false }
Room while i join []  // first user's logs end here; now the second's
new user :: { room: 'singh', name: '🎶🎶🎶🎶🎶🎶', role: 'admin', ready: false }
Room while i join [ 'singh' ]
[ec2-user@ip-172-31-12-150 ~]$ 

In the FrontEnd, the rooms array is empty, even though it is the same server where the room exists, and the request still passes through axios with a 200 status code (I checked this in the network tab):

existingRooms ::  []

It is clear that the ALL ROOMS IN BACKEND :: .. line was never printed for the second user, who attempted to create a room that already exists. This indicates that the /allRooms endpoint was never hit.

My nginx config file

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream backend {
    hash $arg_roomID consistent;
    server 43.205.**.**:8000;
    server 13.232.**.**:8000;
}

server {
    listen 80;
    server_name doodlebackend.me;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name doodlebackend.me;

    ssl_certificate /etc/letsencrypt/live/doodlebackend.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/doodlebackend.me/privkey.pem;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header X-Backend-Server $upstream_addr always;
    }

    location /socket.io/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
        add_header X-Backend-Server $upstream_addr always;
    }
}

Why is this happening?

1 Answer

Your /allRooms route is not consistently routed, because your NGINX load balancer hashes on $arg_roomID, but the /allRooms request carries no roomID query parameter. With an empty hash key, the request is not pinned to the room's server, so:

  • /allRooms might go to Server A

  • The user's socket connection (which does carry roomID) is routed to Server B

This leads to inconsistent app state, because your roomManager is in-memory, so a room exists only on the instance it was created on.
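The split-state problem can be reproduced in isolation. This RoomManager is a minimal stand-in (an assumption — the real class may differ), but the effect is the same for any per-process in-memory store:

```javascript
// Minimal sketch of the split-state problem. RoomManager here is a
// stand-in (assumption) for the real class.
class RoomManager {
  constructor() {
    this.rooms = new Map(); // in-memory: lives and dies with this process
  }
  createRoom(id) {
    this.rooms.set(id, { players: [] });
  }
  showRooms() {
    return [...this.rooms.keys()];
  }
}

// Each EC2 instance runs its own process, so each has its own Map.
const serverA = new RoomManager();
const serverB = new RoomManager();

serverA.createRoom("singh"); // first admin's socket was hashed to this instance

console.log(serverA.showRooms()); // [ 'singh' ]
console.log(serverB.showRooms()); // [] -- what /allRooms returns if it hits B
```

The second user's /allRooms landing on the "wrong" instance sees the empty Map, so the existence check passes even though the room is live on the other box.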

From your logs:


ALL ROOMS IN BACKEND ::  []
new user :: { room: 'singh', name: '✨✨✨✨✨✨✨', role: 'admin', ready: false }
Room while i join []

^ First user — /allRooms was called on this instance, returned [], so creation was allowed.


new user :: { room: 'singh', name: '🎶🎶🎶🎶🎶🎶', role: 'admin', ready: false }
Room while i join [ 'singh' ]

^ Second user — /allRooms was NOT logged, but the room already existed. So the check failed, and now you have two admins.

So clearly /allRooms went to a different instance than where the room was created.

Suggested Fix:

Update Join.jsx:

const response = await axios.get(`${backendLink}/allRooms?roomID=${room}`);

Now NGINX hashes on roomID for this call too, so the existence check and the socket connection consistently land on the same backend instance.
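Every room-scoped HTTP request should carry that same key. A hypothetical helper (roomUrl is not from the original code) keeps this in one place and adds encodeURIComponent, which guards against room IDs containing special characters:

```javascript
// Hypothetical helper (not from the original code): build room-scoped URLs
// that always carry the roomID key NGINX hashes on. encodeURIComponent
// protects against room IDs with reserved URL characters.
const roomUrl = (backendLink, path, room) =>
  `${backendLink}${path}?roomID=${encodeURIComponent(room)}`;

console.log(roomUrl("https://doodlebackend.me", "/allRooms", "singh"));
// → https://doodlebackend.me/allRooms?roomID=singh
```

In handleCreateRoom the axios call would then be `axios.get(roomUrl(backendLink, "/allRooms", room))`, and any future room-scoped endpoint can reuse the same pattern.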

ALSO:

Add this to your NGINX config to log which backend handled the request:

log_format custom '$remote_addr - $host [$time_local] '
                  '"$request" $status '
                  'Backend=$upstream_addr';

access_log /var/log/nginx/access.log custom;

Now you’ll see which server served which request.
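Note that log_format is only valid at the http level of the config, so it can't go inside your server block. A placement sketch, assuming the stock /etc/nginx/nginx.conf layout:

```nginx
# Placement sketch (assumes the stock /etc/nginx/nginx.conf layout).
# log_format must sit in the http context; access_log may also go inside
# a server or location block for per-site logs.
http {
    log_format custom '$remote_addr - $host [$time_local] '
                      '"$request" $status '
                      'Backend=$upstream_addr';
    access_log /var/log/nginx/access.log custom;

    # ... map, upstream, and server blocks from above ...
}
```

After editing, `nginx -t` validates the config and `nginx -s reload` (or `systemctl reload nginx`) applies it without dropping connections.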


1 Comment

Thank you so much, I should have seen this. You were a great help!
