
I have two Docker containers running on an Ubuntu 16.04 machine: one container runs a MySQL server, and the other holds a dockerized Python script set up as a cron job that loads data into MySQL every minute. How can I connect the two so the Python script loads data into the MySQL container? I have an error showing up; here are my relevant commands:

The MySQL container runs without issue:

docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=yourPassword --name icarus -d mysql_docker_image

CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS                    PORTS                               NAMES
927e50ca0c7d        mysql_docker_image                "docker-entrypoint.s…"   About an hour ago   Up About an hour          0.0.0.0:3306->3306/tcp, 33060/tcp   icarus
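
As a sanity check from the Ubuntu host itself (not from inside the other container), the published port means localhost:3306 does reach MySQL. A minimal sketch, assuming pymysql is installed on the host:

import pymysql

#from the Ubuntu host, the -p 3306:3306 mapping means localhost:3306
#reaches the MySQL container; from inside another container it does not
conn = pymysql.connect(host='localhost', port=3306, user='root', password='yourPassword')
with conn.cursor() as cur:
    cur.execute('SELECT VERSION()')
    print(cur.fetchone())
conn.close()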

The second container holds cron and the Python script:

#run the container without issue
sudo docker run -t -i -d docker-cron

#exec into it to check logs
sudo docker exec -i -t container_id /bin/bash

#check logs
root@b149b5e7306d:/# cat /var/log/cron.log

Error:

I have the following error showing up, which I believe has to do with the wrong host address:

Caught this error: OperationalError('(pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'localhost\' ([Errno 99] Cannot assign requested address)")',)

Python Script:

from traffic.data import opensky
from sqlalchemy import create_engine
#from sqlalchemy_utils import database_exists, create_database
import sqlalchemy
import sys
import gc


#connection and host information
host = 'localhost'
db = 'icarus'
engine = create_engine('mysql+pymysql://root:password@' + host + ':3306/' + db) #create engine connection
version = sys.version_info[0]

#functions to upload data
def upload(df, table_name):
    df.to_sql(table_name, con=engine, index=False, if_exists='append')
    engine.dispose()
    print('SUCCESSFULLY LOADED DATA INTO STAGING...')

#pull data from api
sv = opensky.api_states()
final_df = sv.data
#quick column clean up
print(final_df.head())
final_df = final_df.rename(columns={'timestamp': 'time_stamp'})


#insert data to staging
try:
    upload(final_df, 'flights_stg')
except Exception as error:
    print('Caught this error: ' + repr(error))
del final_df
gc.collect()

I'm assuming the error comes from using 'localhost' as my address? How would I go about resolving something like this?

More information:

MySQL Dockerfile:

FROM mysql
COPY init.sql /docker-entrypoint-initdb.d

Python Dockerfile:

FROM ubuntu:latest

WORKDIR /usr/src/app

#apt-get install -y build-essential -y  python python-dev python-pip python-virtualenv libmysqlclient-dev curl&& \

RUN \
  apt-get update && \
  apt-get install -y build-essential git python3.6 python3-pip libproj-dev proj-data proj-bin libgeos++-dev libmysqlclient-dev python-mysqldb curl && \
  rm -rf /var/lib/apt/lists/*

COPY requirements.txt ./
RUN pip3 install --upgrade pip && \
    pip3 install --no-cache-dir -r requirements.txt

RUN pip3 install --upgrade setuptools
RUN pip3 install git+https://github.com/xoolive/traffic

COPY . .

# Install cron
RUN apt-get update
RUN apt-get install -y cron

# Add crontab file in the cron directory
ADD crontab /etc/cron.d/simple-cron

# Add shell script and grant execution rights
ADD script.sh /script.sh
RUN chmod +x /script.sh

# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/simple-cron

# Create the log file to be able to run tail
RUN touch /var/log/cron.log

# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

In the Docker documentation, I recommend reading Networking in Compose, even if you're not using Compose, and then Use bridge networks. You need to create a "user-defined bridge" network and then launch both containers attached to it with --net; they will then be able to reach each other using their --name as DNS names. localhost in Docker is almost always "this container". Commented Mar 31, 2020 at 0:13
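
To see the name-based resolution that comment describes, a quick check from inside the python container is enough. This is a minimal sketch, assuming both containers are attached to the same user-defined bridge network and the MySQL container keeps the --name icarus from the question:

import socket

#on a shared user-defined bridge network, Docker's embedded DNS resolves the
#other container's --name, so this prints the MySQL container's IP address
#instead of raising socket.gaierror
print(socket.gethostbyname('icarus'))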

2 Answers


Can you share your Dockerfile or compose file for the MySQL container? Yes, the problem is related to using localhost as the host. You must use the Docker service name as the host; in Docker, the service name works as DNS. For example, if your docker-compose looks like this:

services:
  mydb:
    image: mysql:5.7
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: root

then you must use mydb instead of localhost as the host.
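
For instance, the create_engine call from the question would then point at the service name rather than localhost. A minimal sketch, assuming the compose file above (so the root password is root), the pymysql driver already used in the question, and the icarus database from the question:

from sqlalchemy import create_engine

#'mydb' is the compose service name, which Docker's embedded DNS resolves
#to the MySQL container on the compose network
host = 'mydb'
db = 'icarus'
engine = create_engine('mysql+pymysql://root:root@' + host + ':3306/' + db)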

1 Comment

Hey Anar. I added my Dockerfiles to the question above so you can see what I am doing. In case it helps, my repo is here: github.com/datafaust/docker_mysql_icarus.

Looks like a user-defined bridge network was the way to go here, as recommended. I solved the issue with:

docker network create my-net

docker create --name mysql \
  --network my-net \
  --publish 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=password \
  mysql:latest


docker create --name docker-cron \
  --network my-net \
  docker-cron:latest

Then docker start each of them; using the --name as the host worked perfectly.
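
With that setup, the only change the Python script needs is the host value. A sketch, assuming the --name mysql and MYSQL_ROOT_PASSWORD=password used in the docker create commands above; DB_HOST is a hypothetical environment variable added here only so the hostname isn't hardcoded:

import os
from sqlalchemy import create_engine

#DB_HOST is a hypothetical variable; it defaults to the MySQL container's
#--name, which Docker's embedded DNS resolves on the shared my-net network
host = os.environ.get('DB_HOST', 'mysql')
db = 'icarus'
engine = create_engine('mysql+pymysql://root:password@' + host + ':3306/' + db)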
