
I have tried to create a Docker image of my backend API, but I am getting errors. I googled the issue, and everyone who had the same problem had to add node_modules to their .dockerignore file.

I already did that, but I still get the same error.

I am adding my file info here.

Dockerfile

FROM node:alpine
WORKDIR /usr/src/app
COPY package*.json .
#COPY yarn.lock .
RUN apk add --no-cache yarn --repository="http://dl-cdn.alpinelinux.org/alpine/edge/community"
#RUN yarn install --frozen-lockfile
RUN yarn install
RUN yarn
COPY . .
CMD ["yarn", "dev"]

.dockerignore

/node_modules
.env
docker-compose.yml

docker-compose.yml

version: "3.9"

services:
  mongo_db:
    container_name: mongodb_container
    image: mongo:latest
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo_db:/data/db

  #EET service
  eetapi:
    container_name: eetapi_container
    build: .
    volumes:
      - .:/usr/src/app
    ports:
      - "3000:3000"
    environment:
      SITE_URL: http://localhost
      PORT: 3000
      MONGO_URL: mongodb://mongodb_container:27017/easyetapi
      JWT_SECRET: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      SENTRY_DSN: https://[email protected]/xxxxxxx
      MAILGUN_DOMAIN: mg.myeetdomain.tld
      MAILGUN_API_KEY: xxxxxxxxxxxxxxx-xxxxxxxxxxx-xxxxxxxx
      NODE_ENV: production
    depends_on:
      - mongo_db
volumes:
  mongo_db: {}

The Error

Error Screenshot

Please help me out.

Thank You

6 Answers


The volumes: block overwrites everything in the image with the current directory on the host, including the node_modules tree installed in the Dockerfile. If you have a macOS or Windows host but a Linux container, replacing the node_modules tree will cause the error you are seeing.

You should delete the volumes: block so that you run the code and library tree that are built into the image.

Since the bind mount overwrites literally everything the Dockerfile does, it negates any benefit you get from building the Docker image. Effectively you are just running an unmodified node image with bind-mounted host content, and you will get the same effect with a much simpler setup if you run Node on the host without involving Docker. (You could still benefit from running the database in a container.)
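Applied to the compose file from the question, the app service would look like this with the volumes: block removed:

  #EET service
  eetapi:
    container_name: eetapi_container
    build: .
    # no volumes: bind mount -- run the code and node_modules baked into the image
    ports:
      - "3000:3000"
    # environment: and depends_on: stay as they were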


4 Comments

Thank you so much! It worked. But if I don't map my local machine's app directory with volumes:, will the container pick up changes automatically while I develop (I will use nodemon)? Please let me know. Thank you!
Docker is designed as an isolation system, and the program running in a container can't usually access the code you're live-editing on your host. I'd recommend using Node without Docker for actual development.
@DavidMaze You mentioned that in dev mode it's advisable not to use Docker containers, but what if you're running a distributed system with other services that the main service (e.g. the API service) needs? In my case I have 1. a custom-made selfie-verification AI model service and 2. an emailing service, both of which the main API service depends on. In dev, do I have to run those outside of Docker too?
If you have dependencies like this in images already, it's fine to run those images as dependencies. (A setup I frequently use is to run a database in a container while I have local tools for the code I'm writing.) This is a case where avoiding injecting code with volumes: makes a difference: imagine another team maintains that verification service and you're just trying to run it; you can just run their image without having to separately check out their source code, provided they've tested their image this way.
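A minimal sketch of that hybrid setup, reusing the names from the question: only the database runs under Compose, while the app runs with nodemon on the host.

version: "3.9"

services:
  mongo_db:
    image: mongo:latest
    ports:
      - "27017:27017"   # published so the host-side app can reach it
    volumes:
      - mongo_db:/data/db
volumes:
  mongo_db: {}

Then start the app on the host, pointing it at the published port, e.g. MONGO_URL=mongodb://localhost:27017/easyetapi yarn dev.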

I ended up with a different solution. In my case the problem occurred because node_modules was created on my machine, which is Windows, while I was building the Docker image on Alpine. So I switched to bcryptjs, which is platform independent.

bcrypt: a native binding to the C++ bcrypt library. It requires compilation and contains bindings to the underlying system's C library, so it may perform better thanks to its native implementation. It also means that if I compile the library on Windows, it will not work on Linux.

bcryptjs: a pure JavaScript implementation of the bcrypt algorithm. It has no native bindings and relies entirely on JavaScript. While it may be slower than bcrypt, especially in CPU-intensive operations, it is easier to install and use across different systems because it requires no native compilation.

So I installed bcryptjs and the build succeeded. Again, it is just a choice: if your project does heavy password hashing, you should go for bcrypt for better performance.
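A sketch of the swap, assuming bcryptjs is used as a drop-in replacement (its API largely mirrors bcrypt's, but verify the specific calls your code uses):

# replace the native module with the pure-JS implementation
yarn remove bcrypt
yarn add bcryptjs

Then update the imports, e.g. const bcrypt = require('bcryptjs') instead of require('bcrypt').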

Comments


Delete the existing node_modules folder and rebuild Docker images.

Comments


I solved this problem by adding an anonymous volume entry like - '/app/node_modules' under volumes: in docker-compose.yml. It ensures the node_modules directory built into the image stays available inside the container instead of being hidden by the bind mount.
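Using the paths from the question, that would look something like this (the anonymous volume path must match the container path of the bind mount, /usr/src/app here):

  eetapi:
    build: .
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules   # anonymous volume shields the image's node_modules from the bind mount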

Comments


All you need to do is copy all your code inside a src/ folder, then mount only that folder (i.e. src/) between the host and the container. That way you only need to rebuild the image when you add a new package to package.json.

  #EET service
  eetapi:
    container_name: eetapi_container
    build: .
    volumes:
      - ./src/:/usr/src/app/src/
    ports:
      - "3000:3000"
    ...

Comments


After several hours of debugging why this suddenly started happening, I discovered that under the services section of my docker-compose.yml file, my service named apis-service had an incorrect volume binding.

I had a volume entry that is supposed to tell Docker not to override the node_modules directory inside the container.

The incorrect setup that produced the error

version: "3.8"
services:
  # NGINX Service
  nginx:
    image: nginx:stable-alpine
    ports:
      - "3000:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  # Api Gateway Service
  apis-service:
    build:
      context: ./api-service
      dockerfile: Dockerfile

    volumes:
      - type: bind
        source: ./api-service
        target: /apis-service
      - /api-service/node_modules # This line here caused the issue because the service name which is my working dir on docker container didn't match what I specified here.
    env_file:
      - ./.env.local

    command: yarn dev

I discovered that /api-service/node_modules is incorrect because my Dockerfile's WORKDIR is /apis-service, not /api-service.

So I changed it to point to the correct directory inside the container, and everything started working again.

The correct setup that produced the desired outcome:


version: "3.8"
services:
  # NGINX Service
  nginx:
    image: nginx:stable-alpine
    ports:
      - "3000:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  # Api Gateway Service
  apis-service:
    build:
      context: ./api-service
      dockerfile: Dockerfile

    volumes:
      - type: bind
        source: ./api-service
        target: /apis-service
      - /apis-service/node_modules # The correct line that fix the issue.
    env_file:
      - ./.env.local

    command: yarn dev
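For reference, a hypothetical sketch of the matching Dockerfile side, showing why the WORKDIR and the anonymous volume path have to agree:

FROM node:alpine
# WORKDIR must match the /apis-service/node_modules volume path in docker-compose.yml
WORKDIR /apis-service
COPY package*.json ./
RUN yarn install
COPY . .
CMD ["yarn", "dev"]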



Comments
