I am able to execute my task without a problem using
scrape_adhoc_reporting([store], [types], inventory)
This is a problem, though, because the task can easily take an hour, so I tried to make it asynchronous. I tried both of the following:
scrape_adhoc_reporting.apply_async(args=[[store], [types], inventory])
scrape_adhoc_reporting.delay([store], [types], inventory)
Neither of these methods worked. The view redirects as it should, but the task never gets executed, and there are no errors in the error log. Any insight into what I am doing wrong?
Edit: After looking around a bit more, I see people talking about registering a task. Is this something I need to do?
I ran into the same issue and just solved it. MattH is right: this is due to workers not running.
I'm using Django (1.5), Celery (3.0+), and django-celery on Windows. To get Celery Beat working, I followed this tutorial: http://mrtn.me/blog/2012/07/04/django-on-windows-run-celery-as-a-windows-service/ since on Windows, Beat can only be launched as a service.
However, as in your case, my tasks were being queued but never executed. This came from a bug in the packaged version of django-windows-tools (the one from pip).
I fixed the issue by downloading the latest version of django-windows-tools from GitHub (https://github.com/antoinemartin/django-windows-tools).
If you want the task to run remotely, you need a worker process running with that task loaded, and a routing system configured so the task request gets from the caller to the worker.
Have a look at the Celery documentation on workers and tasks.
The plain function call you started with just executes the task locally, in your web process.
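For illustration, a minimal registered task plus a running worker looks something like this (the module name, broker URL, and argument names here are assumptions for the sketch, not taken from your project):

# tasks.py -- minimal sketch
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')  # example broker URL

@app.task
def scrape_adhoc_reporting(stores, types, inventory):
    ...  # the long-running scraping work goes here

# Start a worker that can import this module, e.g.:
#   celery -A tasks worker --loglevel=info

Once the worker is running, scrape_adhoc_reporting.delay(...) gets picked up and executed by it instead of silently sitting in the queue.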
When using asynchronous Celery tasks on Windows, you normally get an error that can be fixed by setting one environment variable. With Django, in the file celery.py you should have:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')  # add this line for Windows compatibility
This fixes the problem on Windows and does not cause incompatibility on other systems.
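For context, a complete celery.py with that line would look roughly like this ('main' is the project package assumed from the settings path above; the rest follows the standard Django/Celery layout):

# main/celery.py -- sketch; 'main' is the assumed project package
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')  # Windows compatibility

app = Celery('main')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()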
I'm running two Django projects on one VM, with two separate Celery processes running under supervisor for them. Now, suddenly, tasks from one project are clashing with the other process: I'm getting an unregistered-task error. I don't know why I'm suddenly facing this issue; it was working earlier, and nothing changed in the configuration.
The attached image is for reference: the task is registered in the other service, so how is it suddenly running here? I've tried many solutions but haven't been able to find the root cause.
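This symptom usually means both workers are consuming from the same default queue on a shared broker. A sketch of keeping the projects apart (broker URLs and queue names here are illustrative, not your config; older Celery 3.x uses BROKER_URL / CELERY_DEFAULT_QUEUE instead):

# settings.py for project A
CELERY_BROKER_URL = 'redis://localhost:6379/0'   # own Redis database...
CELERY_TASK_DEFAULT_QUEUE = 'project_a'          # ...and own queue name

# settings.py for project B
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_TASK_DEFAULT_QUEUE = 'project_b'

# Pin each worker to its own queue in the supervisor commands:
#   celery -A project_a worker -Q project_a
#   celery -A project_b worker -Q project_b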
I have a Django web application that executes tasks via Celery. It runs with a combination of Apache, uWSGI, and Redis. For some reason, one of the tasks is being executed by the uWSGI server and the other by the Python interpreter. This causes permission issues, because uWSGI doesn't run as the same user as Python does.
What could cause the tasks to be run by different programs? Is there a setting somewhere?
Turns out I needed to call the task with .delay() to get the Celery daemon to execute the task instead of uWSGI.
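In other words (a sketch; the task and argument names are made up):

from myapp.tasks import send_report  # hypothetical task

user_id = 42  # example argument

# Runs synchronously, inside whatever process calls it (here, the uWSGI worker):
send_report(user_id)

# Publishes a message to the broker; a Celery worker process picks it up and runs it:
send_report.delay(user_id)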
In our Django project (Django 2.1.5), every time we run the project we have to pass '--noreload' in addition to the runserver command, otherwise the project returns this error:
ValueError: signal only works in main thread
We are using Django signals to communicate between the apps created in Django, and WebSockets in threading async mode to connect to the other services involved in the project. This becomes a problem when we try to deploy the project with Jenkins, and we are using Nginx as the web server to host the application. Is there any way to get rid of '--noreload' and run the application normally?
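From what we can tell, the ValueError comes from signal.signal() being called outside the main thread: runserver's autoreloader runs the application in a child thread, which is why --noreload makes the error go away. A minimal guard (a sketch, not our actual code) looks like:

import signal
import threading

def install_handlers():
    # signal.signal() raises ValueError off the main thread, so only
    # register OS signal handlers when we really are the main thread.
    if threading.current_thread() is threading.main_thread():
        signal.signal(signal.SIGTERM, lambda signum, frame: None)  # placeholder handler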
We are not sure if it's because of the same problem referred to above, but we also have a problem when trying to migrate changes to the models in Django; it always returns
No changes detected
After a quick internet search, we got the migrations to run by specifying the app names, and that worked, yet the terminal hangs after migrating and has to be terminated manually.
Is there a possible solution to this? We would also like to know where we are going wrong.
I am trying to find a way to run my Node.js app in the background. I did a lot of research and I am aware of the usual options (node-windows, forever, nssm, ...).
During this, it occurred to me to create my OWN service wrapper in C++ which executes the script (on Windows) as a child process.
Hence my question: is this possible, and what are the options for communicating with the node.exe that is executing my script? On Google I find tons of articles about Node's "child_process" module, but nothing where node.exe itself is the child process.
BTW: in one of the answers here on SO I found a solution using sc.exe, but when I install node.exe with the script, it gets terminated because it does not respond to the SCM commands. Did this ever work?
Thanks a lot in advance.
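To make the communication part concrete, here is the pattern I have in mind, prototyped in Python for brevity (paths are placeholders); the same redirected-stdio approach would apply from a C++ wrapper via CreateProcess:

import subprocess

# Launch node.exe as a child process with its stdio redirected to pipes.
proc = subprocess.Popen(
    ['node.exe', 'app.js'],  # placeholder paths
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

proc.stdin.write('ping\n')     # read in Node via process.stdin
proc.stdin.flush()
print(proc.stdout.readline())  # whatever the script writes to process.stdout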
You can make the process run in the background using pm2:
pm2 start app.js --watch
This will start the process and will also watch for changes to the file. More about the watch flag
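If you also need the process to survive reboots, pm2 can persist the current process list with pm2 save and restore it at boot via pm2 startup (on Windows this relies on a community helper such as pm2-windows-startup rather than pm2 startup itself).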
I've followed all the docs I could find as closely as I could, even setting up an example project using Celery's official Django example (celery/examples/django), but tasks still aren't showing up in django-admin:
https://github.com/mikeumus/django-celery-example
I've left a comment on a corresponding django-celery GitHub issue here:
https://github.com/celery/django-celery/issues/335
I've spent 3 days trying to get tasks to show up so I can use all the awesome code in django-celery's admin.py for things like scheduling tasks, but I've had no luck. I'm running the camera event daemon and don't know what I'm missing. Someone in #celery IRC said django-celery is working for them, so I must just be missing something. I'm really excited to get my isolated django-celery GitHub project working, so people can just clone it when using the module and not miss anything in the checklist of little things required to get it working.
Django version 1.7.1
Celery 3.1.18
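For what it's worth, my understanding from the docs is that the admin's task monitor is populated by the snapshot camera, which in turn needs the workers to emit events. The relevant pieces would be something like this (setting names per Celery 3.1; treat it as a sketch of my understanding, not verified config):

# settings.py (Celery 3.1-era setting names)
CELERY_SEND_EVENTS = True            # workers emit task events
CELERY_SEND_TASK_SENT_EVENT = True   # also emit an event when a task is published

# Run the worker with events enabled and the camera alongside it:
#   ./manage.py celery -A proj worker -l info -E
#   ./manage.py celerycam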
Here's the output of ./manage.py celery -A proj worker -l info --loglevel=DEBUG and of the event camera, in a gist:
https://gist.github.com/mikeumus/133631d3fa66ad53280c
Any advice is greatly appreciated. Anyone can access the Ubuntu environment I'm working in via Cloud9 here: https://ide.c9.io/mikeumus/celery-django-example
If you need write access, just ask for it here first.
Thanks,
Mike
I have the same problem.
As a workaround you can use Flower, which is a monitoring tool for Celery:
http://flower.readthedocs.io/en/latest/
Using Flower you can see more information than is provided by the Django admin site.
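Getting it running is typically just pip install flower, then something like celery -A proj flower (substituting your own app name); by default the dashboard is served at http://localhost:5555.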