
We have Python 3.6.1 set up with Django, Celery, and RabbitMQ on Ubuntu 14.04. Right now I'm using the Django debug server (this is for dev, and Apache isn't working yet). My current problem is that the Celery workers launched from Python immediately die -- the processes show as defunct. If I run the same command in a terminal window, the worker is created and picks up any task waiting in the queue.

Here's the command:

celery worker --app=myapp --loglevel=info --concurrency=1 --maxtasksperchild=20 -n celery_1 -Q celery

The same behavior occurs for every queue being set up.

In the terminal, we see the output myapp.settings - INFO - Loading... followed by output describing the queue and listing the tasks. When running from Python, the last thing we see is the Loading... line.

In the code, we do have a check to be sure we are not running the celery command as root.
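That root check could look something like this minimal sketch (the `is_root` function name is ours, not from the original code):

```python
import os

def is_root() -> bool:
    """Return True when the current process runs with root privileges."""
    return os.geteuid() == 0

# Before spawning a worker, refuse to proceed as root, e.g.:
# if is_root():
#     raise RuntimeError('Refusing to launch a Celery worker as root.')
```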

These are the Celery settings from our settings.py file:

import importlib
import logging
import os

CELERY_ACCEPT_CONTENT = ['json', 'pickle']
CELERY_TASK_SERIALIZER = 'pickle'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IMPORTS = ('api.tasks',)
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_CONCURRENCY = 1
BROKER_POOL_LIMIT = 120  # Note: I tried this set to None but it didn't seem to make any difference
CELERYD_LOG_COLOR = False
CELERY_LOG_FORMAT = '%(asctime)s - %(processName)s - %(levelname)s - %(message)s'
CELERYD_HIJACK_ROOT_LOGGER = False
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(psconf.BASE_DIR, 'myapp_static/')
BROKER_URL = psconf.MQ_URI
CELERY_RESULT_BACKEND = 'rpc'
CELERY_RESULT_PERSISTENT = True
CELERY_ROUTES = {}
for entry in os.scandir(psconf.PLUGIN_PATH):
    if not entry.is_dir() or entry.name == '__pycache__':
        continue
    plugin_dir = entry.name
    settings_file = f'{plugin_dir}.settings'
    try:
        plugin_tasks = importlib.import_module(settings_file)
        queue_name = plugin_tasks.QUEUENAME
    except ModuleNotFoundError as e:
        logging.warning(e)
    except AttributeError:
        logging.debug(f'The plugin {plugin_dir} will use the general worker queue.')
    else:
        CELERY_ROUTES[f'{plugin_dir}.tasks.run'] = {'queue': queue_name}
        logging.debug(f'The plugin {plugin_dir} will use the {queue_name} queue.')
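The effect of that loop can be illustrated with a minimal, self-contained simulation (the plugin and queue names here are hypothetical):

```python
# Each plugin package is expected to ship a settings module exposing a
# QUEUENAME attribute; plugins without one fall back to the general
# worker queue. Names below are illustrative only.
plugin_settings = {
    'reports': {'QUEUENAME': 'reports_queue'},  # dedicated queue
    'cleanup': {},                              # no QUEUENAME attribute
}

CELERY_ROUTES = {}
for plugin, settings in plugin_settings.items():
    queue_name = settings.get('QUEUENAME')
    if queue_name:
        # Route the plugin's run task to its dedicated queue.
        CELERY_ROUTES[f'{plugin}.tasks.run'] = {'queue': queue_name}
```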

Here is the part that kicks off the worker:

    class BackgroundProcess(subprocess.Popen):
        def __init__(self, args, **kwargs):
            super().__init__(args, shell=True, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE,
                             universal_newlines=True, **kwargs)

    class CeleryWorker(BackgroundProcess):
        def __init__(self, n, q):
            self.name = n
            self.worker_queue = q
            cmd = (f'celery worker --app=myapp --loglevel=info '
                   f'--concurrency=1 --maxtasksperchild=20 '
                   f'-n {self.name} -Q {self.worker_queue}')
            super().__init__(cmd, cwd=str(psconf.BASE_DIR))
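One pitfall worth noting with this pattern: when a child process is started with stdout/stderr set to PIPE and nothing reads those pipes, the OS pipe buffer can fill and block the child. A common workaround is to drain each pipe on a background thread, which also suits a GUI that displays the output. A minimal sketch (the `StreamReader` class is ours, not from the original code):

```python
import subprocess
import threading

class StreamReader(threading.Thread):
    """Drain a subprocess pipe on a background thread so the child
    never blocks on a full pipe buffer."""
    def __init__(self, stream, callback):
        super().__init__(daemon=True)
        self.stream = stream
        self.callback = callback

    def run(self):
        # Deliver each line of output to the callback as it arrives.
        for line in self.stream:
            self.callback(line.rstrip('\n'))
        self.stream.close()

# Example: run a short command and collect its output.
lines = []
proc = subprocess.Popen(
    ['echo', 'hello'],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
reader = StreamReader(proc.stdout, lines.append)
reader.start()
proc.wait()
reader.join()
```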

Any suggestions as to how to get this working from Python are appreciated. I'm new to RabbitMQ/Celery.

  • You mentioned the celery workers get launched from Python and immediately die... But I don't see any Python code that is actually launching the celery worker. Could you please include that or describe this in more detail? Commented Aug 24, 2018 at 21:00
  • @sytech - I've edited the post to include code snippets that show how the worker is being launched. Commented Aug 27, 2018 at 14:02
  • And where and how is this "CeleryWorker" used? But anyway: there's no reason your Django code should launch Celery workers, and given how both the built-in dev server AND WSGI connectors tend to launch and stop Django subprocesses, there are pretty good reasons you DON'T want your Django app to launch Celery workers. Commented Aug 27, 2018 at 14:40
  • We have a Manager GUI that kicks off the workers on startup. We take the stdout and stderr and display them in the GUI. This all works like a charm under Windows 10. I'm trying to get it all to behave in Linux and that is where the pain started. Commented Aug 27, 2018 at 14:59

1 Answer


Just in case someone else needs this... It turns out the problem was that the shell script which kicks off this whole app is now being launched with sudo, and even though I thought I was checking so we wouldn't launch the Celery worker with sudo, I'd missed something and we were trying to launch as root. That is a no-no. I'm now explicitly using 'sudo -u ' and the workers start properly.
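For reference, dropping to an unprivileged account might look like this (the "appuser" account name is a placeholder; substitute whatever user your app runs under):

```shell
# Launch the worker as an unprivileged user instead of root.
sudo -u appuser celery worker --app=myapp --loglevel=info \
    --concurrency=1 --maxtasksperchild=20 -n celery_1 -Q celery
```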
