
Let's say I have three servers A, B & C. Server C will have the celery task code, and I need to execute them from servers A & B.

From the Celery documentation, I see that there's a tasks.py file which is run by a Celery worker:

from celery import Celery

app = Celery('tasks', broker='pyamqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y

And then we have another Python file (let's say client.py) that calls these tasks:

from tasks import add

add.delay(4, 4)

Here, client.py depends on tasks.py, since it imports the task from it. If these two files are to run on separate servers, they need to be decoupled so that the tasks can be called without importing the code. I am not able to figure out how to achieve that. How can it be done?

1 Answer


In general you do not do that: you deploy the same code (containing the tasks) to both producers (clients) and consumers (workers). However, Celery is a cool piece of software, and it does let you schedule a task without distributing the code to the producer side. For that you use the send_task() method. You have to configure the producer with the same parameters as your workers (the same broker, naturally, and the same serialization settings), and you must know the task's registered name and its parameters in order to schedule its execution correctly.


1 Comment

I was able to call the task using the following format celery.send_task('tasks.add', (2, 2)) without importing the task module into the client code. Thank you.
