
This code works, but it is very slow. I would also like to use the sqlalchemy module, because the rest of the script uses it instead of mysql.connector. Is there any advantage to using sqlalchemy, or should I continue with this ...

import mysql.connector  # needed for mysql.connector.connect and mysql.connector.Error

for emp_id in mylist:
    try:
        connection = mysql.connector.connect(host='x.x.x.x', port='3306', database='xxx', user='root', password='xxx')

        cursor = connection.cursor(prepared=True)
        sql_fetch_blob_query = """SELECT col1, col2, Photo from tbl where ProfileID = %s"""

        cursor.execute(sql_fetch_blob_query, (emp_id, ))
        record = cursor.fetchall()
        for row in record:
            image = row[2]
            file_name = 'myimages4'+'/'+str(row[0])+ '_' + str(row[1]) + '/' + 'simage' + str(emp_id) + '.jpg'
            write_file(image, file_name)

    except mysql.connector.Error as error :
        connection.rollback()
        print("Failed to read BLOB data from MySQL table {}".format(error))

    finally:
        if(connection.is_connected()):
            cursor.close()
            connection.close()            
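
For reference, the SQLAlchemy version I would switch to looks roughly like this (a sketch only, assuming SQLAlchemy 1.4+ with the mysql-connector driver; the connection URL just mirrors the credentials above, and mylist / write_file are the same as in the snippet):

from sqlalchemy import create_engine, text

# illustrative URL built from the same host/user/password/database as above
engine = create_engine("mysql+mysqlconnector://root:xxx@x.x.x.x:3306/xxx")

with engine.connect() as conn:
    for emp_id in mylist:
        rows = conn.execute(
            text("SELECT col1, col2, Photo FROM tbl WHERE ProfileID = :emp_id"),
            {"emp_id": emp_id},
        )
        for row in rows:
            file_name = 'myimages4/{}_{}/simage{}.jpg'.format(row[0], row[1], emp_id)
            write_file(row[2], file_name)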

1 Answer


Do you really need to set up a new MySQL connection and obtain a cursor on each iteration? If not, opening the connection once at the beginning will significantly speed up your code.

connection = mysql.connector.connect(host='x.x.x.x', port='3306', database='xxx', user='root', password='xxx', charset="utf8")
cursor = connection.cursor(prepared=True)

# the query text never changes, so build it once
sql_fetch_blob_query = """SELECT col1, col2, Photo FROM tbl WHERE ProfileID = %s"""

for emp_id in mylist:
    try:
        cursor.execute(sql_fetch_blob_query, (emp_id, ))
        record = cursor.fetchall()
        for row in record:
            image = row[2]
            file_name = 'myimages4' + '/' + str(row[0]) + '_' + str(row[1]) + '/' + 'simage' + str(emp_id) + '.jpg'
            write_file(image, file_name)

    except mysql.connector.Error as error:
        connection.rollback()
        print("Failed to read BLOB data from MySQL table {}".format(error))

# close the connection once, after the whole loop, not inside it
if connection.is_connected():
    cursor.close()
    connection.close()

UPD: Actually, you don't even need to make N queries to the database, because all the data can be obtained in a single query with a WHERE ProfileID IN (.., ..) clause. Take a look at this small snippet, which solves a pretty much identical task:

# build one '%s' placeholder per ID, then bind the IDs as query parameters
transaction_ids = [c['transaction_id'] for c in checkouts]
format_strings = ','.join(['%s'] * len(transaction_ids))
dm_cursor.execute("SELECT ac_transaction_id, status FROM transactions_mapping WHERE ac_transaction_id IN (%s)" % format_strings, tuple(transaction_ids))
payments = dm_cursor.fetchall()

Please use it to solve your problem.
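
Applied to your table, it would look roughly like this (a sketch only, untested against your schema; ProfileID is selected as well so each row still carries the ID used in its file name):

# one query for the whole list, reusing the single connection/cursor from above
placeholders = ','.join(['%s'] * len(mylist))
query = ("SELECT ProfileID, col1, col2, Photo FROM tbl "
         "WHERE ProfileID IN (%s)" % placeholders)
cursor.execute(query, tuple(mylist))
for emp_id, col1, col2, image in cursor.fetchall():
    file_name = 'myimages4/{}_{}/simage{}.jpg'.format(col1, col2, emp_id)
    write_file(image, file_name)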


6 Comments

I tried that before posting the question, but got this error... AttributeError: 'NoneType' object has no attribute 'charset'
Can you try adding charset="utf8" to the connection params? I have updated the code.
charset="utf-8" worked, but utf8 did not. Thanks for your help. Is there any other suggestion?
Okay, can you try pymysql.connect(....) instead of mysql.connector.connect(...)?
There are tens of thousands of IDs in the list. I would need to change the "max_allowed_packet" variable for the "IN" clause, and that is not possible.
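Regarding the last comment: if tens of thousands of IDs make a single IN (...) clause too large, a middle ground is to run the query in fixed-size batches rather than one query per ID. A minimal sketch, reusing the single connection and cursor from the answer; CHUNK_SIZE is an illustrative value:

CHUNK_SIZE = 1000  # illustrative; keep each batch well under max_allowed_packet

def chunks(seq, size):
    # yield successive slices of the ID list
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

for batch in chunks(mylist, CHUNK_SIZE):
    placeholders = ','.join(['%s'] * len(batch))
    cursor.execute(
        "SELECT ProfileID, col1, col2, Photo FROM tbl "
        "WHERE ProfileID IN (%s)" % placeholders,
        tuple(batch),
    )
    for emp_id, col1, col2, image in cursor.fetchall():
        write_file(image, 'myimages4/{}_{}/simage{}.jpg'.format(col1, col2, emp_id))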
