
Firstly, I am fairly new to Docker and NGINX, and I'm trying to containerize my MERN app. My goal is to dockerize the frontend (React), the backend (Node.js), and nginx, and run them with docker-compose. So far I have dockerized everything and it all runs without error. But I cannot access the backend through the proxy pass: it returns a 404.

In my nginx config, localhost ("/") serves my React app, and "/apipoint" is proxied to the backend via proxy_pass. But in the browser I cannot reach anything at "localhost/apipoint", even though I can access the backend directly at "localhost:PORT".

My folder structure


├── Backend
│   └── Dockerfile
├── Frontend
│   ├── Dockerfile
│   └── default.conf
└── docker-compose.yml

default.conf (nginx)

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _;

    location /apipoint {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_pass http://nodejsserver:8082;
    }

    location / {
        root /var/www/html;
    }
}

Dockerfile (frontend)

# pull the Node.js Docker image
FROM node:alpine as builder

# create the directory inside the container
WORKDIR ./Frontend

# copy the package.json files from local machine to the workdir in container
COPY package*.json ./

# run npm install in our local machine
RUN npm install --force

# copy the generated modules and all other files to the container
COPY . ./

# the command that starts our app
RUN npm run build

#stage 2

FROM nginx

# WORKDIR /usr/share/nginx/html
COPY default.conf /etc/nginx/conf.d/default.conf


# Copies static resources from builder stage  --from=builder
COPY --from=builder ./Frontend/build /var/www/html

EXPOSE 80

RUN chown nginx.nginx /var/www/html/ -R

Dockerfile (backend)

# pull the Node.js Docker image
FROM node:alpine

# create the directory inside the container
WORKDIR ./Backend

# copy the package.json files from local machine to the workdir in container
COPY package*.json ./

# run npm install in our local machine
RUN npm install --force

# copy the generated modules and all other files to the container
COPY . .

# our app is running on port 8082 within the container, so we need to expose it
EXPOSE 8082

# the command that starts our app
CMD ["node", "server.js"]

Docker-compose.yml

version: "3.8"
services:
    nodeserver:
        build:
            context: ./Backend
        container_name: nodejsserver
        hostname: nodejsserver
        networks:
            - app-network
        ports:
            - "8082:8082"
    frontend:
        build:
            context: ./Frontend
        container_name: nginx
        hostname: nginx
        networks:
            - app-network
        ports:
            - "80:80"
networks:
    app-network:
        external: true

Is there something wrong in my docker-compose file ? or the nginx config ? or am I doing something wrong somewhere else? I would really appreciate any kind of suggestion and solution.

Thank you.

1 Answer

Skimming through your Dockerfiles, a few conceptual items on the Docker side jump out (in no particular order).

In both Dockerfiles you have the lines

# run npm install in our local machine
RUN npm install --force

# copy the generated modules and all other files to the container
COPY . .

The RUN command in a Dockerfile does not run on your local machine (in this case, it does not install npm dependencies locally) - it runs the command inside the image being built. After the first step above, there is nothing new on your local machine to COPY over (just whatever already existed in the directory). There is also no "container" yet - the Dockerfile is used to build an image, from which containers can later be created.
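To illustrate the point about where RUN executes, here is the backend Dockerfile with its comments corrected to describe what actually happens during the image build (a sketch, not a drop-in replacement; the /app working directory is my choice, not from the original):

```dockerfile
# pull the Node.js base image
FROM node:alpine

# create and switch to the app directory inside the image
WORKDIR /app

# copy only the package manifests first, so the dependency layer
# is cached as long as package*.json does not change
COPY package*.json ./

# install dependencies INSIDE the image being built,
# not on the local machine
RUN npm install --force

# copy the application source from the build context into the image
# (node_modules from the step above already exists in the image)
COPY . .

EXPOSE 8082
CMD ["node", "server.js"]
```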

In the same Dockerfile, let's look at the line

COPY package*.json ./

I think you mean something like

COPY package.json /

In your docker-compose.yml file, both of your services have hostnames - you can remove these. Since your services run together on the same network, they can reach each other via their respective service names. Adding hostnames only confuses matters.
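As a sketch of that point, the compose file can drop container_name and hostname entirely; other services on the network can then resolve the backend by its service name (here nodeserver, taken from the compose file above):

```yaml
services:
    nodeserver:            # reachable from other services as "nodeserver"
        build:
            context: ./Backend
        ports:
            - "8082:8082"
    frontend:
        build:
            context: ./Frontend
        ports:
            - "80:80"
```

With this layout the nginx config would point at the service name, e.g. `proxy_pass http://nodeserver:8082;`. (Without an explicit top-level networks section, compose creates a default network that both services join automatically.)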

Since you say you're fairly new to Docker and nginx, I'd simplify your setup by creating a distinct nginx service (so you would have three services in total in your docker-compose file: frontend, backend, and nginx proxy).

I would steer away from authoring multi-stage Dockerfiles (your frontend) for now - one layer / piece of functionality per Docker image / service. This will make it easier to isolate and debug which facet (Docker vs. nginx) is causing you trouble.

In terms of dockerized nginx proxies - you can use the official image, or a convenience image like jwilder's popular proxy that simplifies the nginx interface a bit.
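One nginx detail worth double-checking in configs like the one above: whether proxy_pass is written with or without a URI part changes which path the backend sees, and this is a common source of 404s. A minimal illustration (assuming the backend is reachable as "nodeserver" on the compose network; the route names are hypothetical):

```nginx
location /apipoint/ {
    # with a URI part ("/"), the matched prefix is replaced:
    # GET /apipoint/users  ->  backend receives GET /users
    proxy_pass http://nodeserver:8082/;
}

location /other/ {
    # without a URI part, the original path is forwarded unchanged:
    # GET /other/users  ->  backend receives GET /other/users
    proxy_pass http://nodeserver:8082;
}
```

If the backend's Express routes are defined without the /apipoint prefix, the first form is the one that avoids a backend-side 404.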


1 Comment

Thank you. I will be trying out your suggested logics and changes.
