
I have a .NET Core base image in my Dockerfile, and I need to call some Python scripts from the .NET Core project internally, using a Python interpreter. There are multiple options for adding a Python interpreter: create a separate Python container, or install Python in the current .NET Core image with something like the following in the Dockerfile:

RUN apt-get update \
  && apt-get install -y python3-pip python3-dev \
  && cd /usr/local/bin \
  && ln -s /usr/bin/python3 python \
  && pip3 install --upgrade pip

But running the above commands increases the image size by 400 MB.

I have pulled the Alpine Python image and run it in a separate container, and it consumes only 45 MB. So what is the best way to set up Python alongside .NET Core? Is creating a separate Python container the better approach, and if so, how can I call a Python script using an interpreter that is installed in a separate container?

P.S.: I am very new to Docker.

2 Answers


Well, I don't recommend using an Alpine base image for Python; here's why: https://pythonspeed.com/articles/alpine-docker-python/

You can pay the extra 400 MB in your Ubuntu-based container, or use a debian:slim base image, which weighs less than Ubuntu. It may even end up weighing less than Alpine once you install some extra libraries for Python.

The best way is to have separate run environments for different languages in separate containers, so put your Python scripts into their own container.
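For example, a minimal sketch of such an image, assuming the scripts need only the interpreter itself (the Debian tag and the `scripts/` paths are illustrative, not from the answer):

```dockerfile
# Sketch: a small Debian-based image that carries only python3.
FROM debian:bullseye-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*
COPY scripts/ /scripts/
ENTRYPOINT ["python3"]
CMD ["/scripts/your-script.py"]
```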


2 Comments

Thank you for the suggestion. I don't have a Python application; I just need a Python command-line interpreter to execute a few Python scripts from the .NET Core project, nothing else. So in that case I would like to install Python with as small a footprint as possible.
So in this case you just need to use any image except Alpine.

It's not a good approach to create a separate container for Python, because if that container goes down due to some issue, it will be difficult for you to execute your Python scripts.

Approach 1

Another way is to copy a Python tar file into the container and untar it at a specified location in the entrypoint script. This also reduces the image size.

  1. Create a directory:

    RUN mkdir -p /python/interpreter/tar

  2. Copy the tar file to that path:

    COPY <workspace>/<tar file name> /python/interpreter/tar/

  3. Untar it in the entrypoint script:

    tar -xvf /python/interpreter/tar/<tar file name> -C /python/interpreter/
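Put together, the entrypoint logic from the steps above might look like this sketch (the helper name `unpack_python` is illustrative, and the tar file name stays a placeholder as in the answer):

```shell
#!/bin/sh
# Sketch of the entrypoint untar step from Approach 1.

unpack_python() {
    # $1 = path to the interpreter tarball, $2 = install prefix
    mkdir -p "$2"
    tar -xf "$1" -C "$2"
}

# At container start (paths match the layout above):
#   unpack_python "/python/interpreter/tar/<tar file name>" /python/interpreter
#   export PATH=/python/interpreter/bin:$PATH
```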

Approach 2

You can run a Python script by using the official Python Docker image directly:

docker run -it --rm --name my-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python your-script.py

Approach 3

Use an Alpine base image, like below:

    FROM alpine:3.7

This has a virtual image size of 37 MB. For comparison, a Debian/Ubuntu base can also be kept small with `--no-install-recommends` and by cleaning the apt cache:

    FROM ubuntu:18.04
    RUN apt-get update \
        && apt-get install -y --no-install-recommends mysql-client \
        && rm -rf /var/lib/apt/lists/*

This has a virtual image size of 15 MB.
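A sketch of what Approach 3 looks like with Python added via Alpine's package manager (`apk add --no-cache python3` is the standard Alpine way to install it; the entrypoint is illustrative):

```dockerfile
# Sketch: Alpine base with python3 from apk.
FROM alpine:3.7
RUN apk add --no-cache python3
ENTRYPOINT ["python3"]
```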

6 Comments

The Alpine image is not suggested for Python: pythonspeed.com/articles/alpine-docker-python
Agreed that using Alpine can make Python Docker builds slower. I just included the different approaches, which is why I suggested this one last.
I just need to execute multiple simple Python scripts. In that case, can I use Alpine, given the small requirements and the smaller image size?
Yes, you can; it's up to you. Better to go with the 1st approach, as you'll have more control over it.
By the way, on Alpine a Python installation can weigh more than on debian:slim.
