My Ruby on Rails application uses foreman (https://github.com/ddollar/foreman) to manage the processes that need to be started when the app is run.
My Procfile looks like this:
precompile: bundle exec rake assets:precompile
web: bundle exec puma -e $PUMA_ENV
worker: bundle exec rake jobs:work
search: bundle exec rake sunspot:solr:run
Running $ foreman start worked as expected until I added the first line (the precompile task).
When I run $ foreman start my output looks like this:
$ foreman start -e .env.dev
10:30:20 precompile.1 | started with pid 7309
10:30:20 web.1 | started with pid 7310
10:30:20 worker.1 | started with pid 7311
10:30:20 search.1 | started with pid 7312
10:30:22 web.1 | [7310] Puma starting in cluster mode...
10:30:22 web.1 | [7310] * Version 2.8.2 (ruby 2.1.0-p0), codename: Sir Edmund Percival Hillary
10:30:22 web.1 | [7310] * Min threads: 4, max threads: 16
10:30:22 web.1 | [7310] * Environment: development
10:30:22 web.1 | [7310] * Process workers: 2
10:30:22 web.1 | [7310] * Phased restart available
10:30:22 web.1 | [7310] * Listening on tcp://0.0.0.0:3000
10:30:22 web.1 | [7310] Use Ctrl-C to stop
10:30:23 web.1 | [7313] + Gemfile in context: /Users/username/rails_projects/lcms/Gemfile
10:30:23 web.1 | [7314] + Gemfile in context: /Users/username/rails_projects/lcms/Gemfile
10:30:30 web.1 | [7310] - Worker 1 (pid: 7314) booted, phase: 0
10:30:30 worker.1 | [Worker(host:MacBook-Pro.local pid:7311)] Starting job worker
10:30:30 web.1 | [7310] - Worker 0 (pid: 7313) booted, phase: 0
10:30:32 precompile.1 | exited with code 0
10:30:32 system | sending SIGTERM to all processes
SIGTERM received
10:30:32 web.1 | [7310] - Gracefully shutting down workers...
10:30:32 worker.1 | [Worker(host:MacBook-Pro.local pid:7311)] Exiting...
10:30:32 search.1 | exited with code 143
10:30:32 web.1 | [7310] - Goodbye!
10:30:32 web.1 | exited with code 0
10:30:33 worker.1 | exited with code 0
I don't know how to get more details on the problem. I've added $stdout.sync = true to my config/environments/development.rb, and the output is the same as without it.
I've also tried both appending and prepending RAILS_ENV=development and RAILS_ENV=production to the precompile task.
How can I get my foreman/Procfile setup to precompile assets successfully, and then continue with the other tasks that start the app?
I've decided that my best option is to use a syntax that will execute my rake assets tasks prior to booting Puma and only if precompiling is successful.
So, running the commands in order with && between them achieves the result I want:
web: bundle exec rake assets:clean RAILS_ENV=$FOREMAN_ENV && bundle exec rake assets:precompile RAILS_ENV=$FOREMAN_ENV && bundle exec puma -e $FOREMAN_ENV
worker: bundle exec rake jobs:work
search: bundle exec rake sunspot:solr:run
The problem is that as soon as one of the processes managed by foreman exits, they all do. I assume this is by design, and it makes sense: the application is the combination of its running services.
If you want to run one-off tasks, you can use foreman run.
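For example, a minimal sketch using the .env.dev file and rake task from the question, with the precompile line dropped from the Procfile:
# run the one-off task first, using the same environment file
foreman run -e .env.dev bundle exec rake assets:precompile
# then start the long-running processes
foreman start -e .env.dev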
Try adding a sleep to every process after the first line:
web: sleep 1; bundle exec puma -e $PUMA_ENV
worker: sleep 1; bundle exec rake jobs:work
search: sleep 1; bundle exec rake sunspot:solr:run
If it works, remove the sleep 1; entries one by one to see which process causes the problem.
I've been struggling with deploying an app on Dokku since yesterday. I've been able to deploy two others on the same PaaS platform, but for some reason this one is giving me trouble.
Right now, I can't even make sense of these logs.
11:30:52 rake.1 | started with pid 12
11:30:52 console.1 | started with pid 14
11:30:52 web.1 | started with pid 16
11:30:52 worker.1 | started with pid 18
11:31:30 worker.1 | [Worker(host:134474ed9b8c pid:18)] Starting job worker
11:31:30 worker.1 | 2015-09-21T11:31:30+0000:[Worker(host:134474ed9b8c pid:18)] Starting job worker
11:31:31 worker.1 | Delayed::Backend::ActiveRecord::Job Load (9.8ms) UPDATE "delayed_jobs" SET locked_at = '2015-09-21 11:31:31.090080', locked_by = 'host:134474ed9b8c pid:18' WHERE id IN (SELECT "delayed_jobs"."id" FROM "delayed_jobs" WHERE ((run_at <= '2015-09-21 11:31:30.694648' AND (locked_at IS NULL OR locked_at < '2015-09-21 07:31:30.694715') OR locked_by = 'host:134474ed9b8c pid:18') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1 FOR UPDATE) RETURNING *
11:31:32 console.1 | Loading production environment (Rails 4.2.0)
11:31:33 web.1 | [2015-09-21 11:31:33] INFO WEBrick 1.3.1
11:31:33 web.1 | [2015-09-21 11:31:33] INFO ruby 2.0.0 (2015-04-13) [x86_64-linux]
11:31:33 web.1 | [2015-09-21 11:31:33] INFO WEBrick::HTTPServer#start: pid=20 port=5200
11:31:33 rake.1 | Abort testing: Your Rails environment is running in production mode!
11:31:33 console.1 | Switch to inspect mode.
11:31:33 console.1 |
11:31:33 console.1 | exited with code 0
11:31:33 system | sending SIGTERM to all processes
11:31:33 worker.1 | [Worker(host:134474ed9b8c pid:18)] Exiting...
11:31:33 worker.1 | 2015-09-21T11:31:33+0000: [Worker(host:134474ed9b8c pid:18)] Exiting...
11:31:33 rake.1 | exited with code 1
11:31:33 web.1 | terminated by SIGTERM
11:31:36 worker.1 | SQL (1.6ms) UPDATE "delayed_jobs" SET "locked_by" = NULL, "locked_at" = NULL WHERE "delayed_jobs"."locked_by" = $1 [["locked_by", "host:134474ed9b8c pid:18"]]
11:31:36 worker.1 | exited with code 0
I would really appreciate it if anyone could help me catch what I'm doing wrong. Thanks.
In development mode on localhost (OS X) I am starting my services with
foreman start
My Procfile is:
postgresql: postgres -D vendor/postgresql
redis: redis-server vendor/redis/db/redis.conf
redis-slave: redis-server vendor/redis-slave/db/redis.conf
sidekiq: sidekiq -C config/sidekiq.yml -q devise,1 -q default -q mailers
sidekiq_web: thin -R sidekiq.ru start -p 9292
mail: mailcatcher -f
web: bundle exec unicorn -p 3001 -c ./config/unicorn.rb
rails: bundle exec rails s unicorn
Everything starts fine, but Sidekiq cannot connect to the running Redis instance, so it exits with code 0.
However, if I start each process in a separate window, everything runs fine... What could be wrong with my foreman setup?
$ foreman start
01:19:21 postgresql.1 | started with pid 48166
01:19:21 redis.1 | started with pid 48167
01:19:21 redis-slave.1 | started with pid 48168
01:19:21 sidekiq.1 | started with pid 48169
01:19:21 sidekiq_web.1 | started with pid 48170
01:19:21 mail.1 | started with pid 48171
01:19:21 web.1 | started with pid 48172
01:19:21 rails.1 | started with pid 48173
01:19:21 redis-slave.1 | 48168:S 16 Aug 01:19:21.220 * Increased maximum number of open files to 10032 (it was originally set to 2560).
01:19:21 redis-slave.1 | _._
01:19:21 redis-slave.1 | _.-``__ ''-._
01:19:21 redis-slave.1 | _.-`` `. `_. ''-._ Redis 3.0.1 (00000000/0) 64 bit
01:19:21 redis-slave.1 | .-`` .-```. ```\/ _.,_ ''-._
01:19:21 redis-slave.1 | ( ' , .-` | `, ) Running in standalone mode
01:19:21 redis-slave.1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6380
01:19:21 redis-slave.1 | | `-._ `._ / _.-' | PID: 48168
01:19:21 redis-slave.1 | `-._ `-._ `-./ _.-' _.-'
01:19:21 redis-slave.1 | |`-._`-._ `-.__.-' _.-'_.-'|
01:19:21 redis-slave.1 | | `-._`-._ _.-'_.-' | http://redis.io
01:19:21 redis-slave.1 | `-._ `-._`-.__.-'_.-' _.-'
01:19:21 redis-slave.1 | |`-._`-._ `-.__.-' _.-'_.-'|
01:19:21 redis-slave.1 | | `-._`-._ _.-'_.-' |
01:19:21 redis-slave.1 | `-._ `-._`-.__.-'_.-' _.-'
01:19:21 redis-slave.1 | `-._ `-.__.-' _.-'
01:19:21 redis-slave.1 | `-._ _.-'
01:19:21 redis-slave.1 | `-.__.-'
01:19:21 redis-slave.1 |
01:19:21 redis-slave.1 | 48168:S 16 Aug 01:19:21.248 # Server started, Redis version 3.0.1
01:19:21 redis-slave.1 | 48168:S 16 Aug 01:19:21.275 * DB loaded from disk: 0.028 seconds
01:19:21 redis-slave.1 | 48168:S 16 Aug 01:19:21.275 * The server is now ready to accept connections on port 6380
01:19:21 postgresql.1 | LOG: database system was shut down at 2015-08-16 01:12:29 CEST
01:19:21 postgresql.1 | LOG: database system is ready to accept connections
01:19:21 postgresql.1 | LOG: autovacuum launcher started
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.254 * Connecting to MASTER localhost:6379
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.255 * MASTER <-> SLAVE sync started
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.255 * Non blocking connect for SYNC fired the event.
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.256 * Master replied to PING, replication can continue...
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.256 * Partial resynchronization not possible (no cached master)
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.268 * Full resync from master: 337a934fccebc3e0ad1627e6f06f0e061b0515e2:1
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.374 * MASTER <-> SLAVE sync: receiving 18 bytes from master
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.374 * MASTER <-> SLAVE sync: Flushing old data
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.374 * MASTER <-> SLAVE sync: Loading DB in memory
01:19:22 redis-slave.1 | 48168:S 16 Aug 01:19:22.378 * MASTER <-> SLAVE sync: Finished with success
01:19:23 sidekiq_web.1 | /Users/yves/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/thin-1.6.3/lib/thin/backends/base.rb:103: warning: epoll is not supported on this platform
01:19:23 sidekiq.1 | exited with code 0
01:19:23 system | sending SIGTERM to all processes
01:19:23 postgresql.1 | LOG: received smart shutdown request
01:19:23 redis-slave.1 | 48168:signal-handler (1439680763) Received SIGTERM scheduling shutdown...
01:19:23 postgresql.1 | LOG: autovacuum launcher shutting down
01:19:23 postgresql.1 | LOG: shutting down
01:19:23 redis.1 | exited with code 0
01:19:23 redis-slave.1 | 48168:S 16 Aug 01:19:23.505 # Connection with master lost.
01:19:23 redis-slave.1 | 48168:S 16 Aug 01:19:23.505 * Caching the disconnected master state.
01:19:23 web.1 | terminated by SIGTERM
01:19:23 mail.1 | terminated by SIGTERM
01:19:23 rails.1 | terminated by SIGTERM
01:19:23 sidekiq_web.1 | terminated by SIGTERM
01:19:23 postgresql.1 | LOG: database system is shut down
01:19:23 postgresql.1 | exited with code 0
01:19:23 redis-slave.1 | 48168:S 16 Aug 01:19:23.575 # User requested shutdown...
01:19:23 redis-slave.1 | 48168:S 16 Aug 01:19:23.575 # Redis is now ready to exit, bye bye...
01:19:23 redis-slave.1 | exited with code 0
yves@MacMini: recommence $ ps aux | grep redis
yves 48330 0.0 0.0 2441988 672 s000 S+ 1:26AM 0:00.01 grep redis
yves@MacMini: recommence $ foreman start
01:26:26 postgresql.1 | started with pid 48353
01:26:26 redis.1 | started with pid 48354
01:26:26 redis-slave.1 | started with pid 48355
01:26:26 sidekiq.1 | started with pid 48356
01:26:26 sidekiq_web.1 | started with pid 48357
01:26:26 mail.1 | started with pid 48358
01:26:26 web.1 | started with pid 48359
01:26:26 rails.1 | started with pid 48360
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.464 * Increased maximum number of open files to 10032 (it was originally set to 2560).
01:26:26 redis-slave.1 | _._
01:26:26 redis-slave.1 | _.-``__ ''-._
01:26:26 redis-slave.1 | _.-`` `. `_. ''-._ Redis 3.0.1 (00000000/0) 64 bit
01:26:26 redis-slave.1 | .-`` .-```. ```\/ _.,_ ''-._
01:26:26 redis-slave.1 | ( ' , .-` | `, ) Running in standalone mode
01:26:26 redis-slave.1 | |`-._`-...-` __...-.``-._|'` _.-'| Port: 6380
01:26:26 redis-slave.1 | | `-._ `._ / _.-' | PID: 48355
01:26:26 redis-slave.1 | `-._ `-._ `-./ _.-' _.-'
01:26:26 redis-slave.1 | |`-._`-._ `-.__.-' _.-'_.-'|
01:26:26 redis-slave.1 | | `-._`-._ _.-'_.-' | http://redis.io
01:26:26 redis-slave.1 | `-._ `-._`-.__.-'_.-' _.-'
01:26:26 redis-slave.1 | |`-._`-._ `-.__.-' _.-'_.-'|
01:26:26 redis-slave.1 | | `-._`-._ _.-'_.-' |
01:26:26 redis-slave.1 | `-._ `-._`-.__.-'_.-' _.-'
01:26:26 redis-slave.1 | `-._ `-.__.-' _.-'
01:26:26 redis-slave.1 | `-._ _.-'
01:26:26 redis-slave.1 | `-.__.-'
01:26:26 redis-slave.1 |
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.467 # Server started, Redis version 3.0.1
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.467 * DB loaded from disk: 0.000 seconds
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.467 * The server is now ready to accept connections on port 6380
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.469 * Connecting to MASTER localhost:6379
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.470 * MASTER <-> SLAVE sync started
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.470 * Non blocking connect for SYNC fired the event.
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.470 * Master replied to PING, replication can continue...
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.471 * Partial resynchronization not possible (no cached master)
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.471 * Full resync from master: f55a717fbd36511c5581e081e154c8e557499754:1
01:26:26 postgresql.1 | LOG: database system was shut down at 2015-08-16 01:19:23 CEST
01:26:26 postgresql.1 | LOG: autovacuum launcher started
01:26:26 postgresql.1 | LOG: database system is ready to accept connections
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.557 * MASTER <-> SLAVE sync: receiving 18 bytes from master
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.558 * MASTER <-> SLAVE sync: Flushing old data
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.558 * MASTER <-> SLAVE sync: Loading DB in memory
01:26:26 redis-slave.1 | 48355:S 16 Aug 01:26:26.558 * MASTER <-> SLAVE sync: Finished with success
01:26:27 sidekiq_web.1 | /Users/yves/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/thin-1.6.3/lib/thin/backends/base.rb:103: warning: epoll is not supported on this platform
01:26:27 sidekiq.1 | exited with code 0
01:26:27 system | sending SIGTERM to all processes
01:26:27 postgresql.1 | LOG: received smart shutdown request
01:26:27 redis-slave.1 | 48355:signal-handler (1439681187) Received SIGTERM scheduling shutdown...
01:26:27 postgresql.1 | LOG: autovacuum launcher shutting down
01:26:27 postgresql.1 | LOG: shutting down
01:26:27 postgresql.1 | LOG: database system is shut down
01:26:27 redis.1 | exited with code 0
01:26:27 redis-slave.1 | 48355:S 16 Aug 01:26:27.991 # Connection with master lost.
01:26:27 redis-slave.1 | 48355:S 16 Aug 01:26:27.991 * Caching the disconnected master state.
01:26:27 redis-slave.1 | 48355:S 16 Aug 01:26:27.997 # User requested shutdown...
01:26:27 redis-slave.1 | 48355:S 16 Aug 01:26:27.997 # Redis is now ready to exit, bye bye...
01:26:27 redis-slave.1 | exited with code 0
01:26:27 postgresql.1 | exited with code 0
01:26:28 mail.1 | terminated by SIGTERM
01:26:28 sidekiq_web.1 | terminated by SIGTERM
01:26:28 rails.1 | terminated by SIGTERM
01:26:28 web.1 | terminated by SIGTERM
The sidekiq.log shows the error:
WARN: Unresolved specs during Gem::Specification.reset:
minitest (~> 5.1)
WARN: Clearing out unresolved specs.
Please report a bug if this causes problems.
2015-08-15T23:19:29.882Z 48183 TID-oxy6u8hqw INFO: Booting Sidekiq 3.4.2 with redis options {:url=>"redis://localhost:6379/0", :driver=>:hiredis}
2015-08-15T23:19:32.130Z 48183 TID-oxy6u8hqw INFO: Running in ruby 2.2.2p95 (2015-04-13 revision 50295) [x86_64-darwin14]
2015-08-15T23:19:32.130Z 48183 TID-oxy6u8hqw INFO: See LICENSE and the LGPL-3.0 for licensing details.
2015-08-15T23:19:32.130Z 48183 TID-oxy6u8hqw INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org/pro
2015-08-15T23:19:32.235Z 48183 TID-oxy6qw1tk ERROR: Error fetching message: Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)
2015-08-15T23:19:32.235Z 48183 TID-oxy6qw1tk ERROR: /Users/yves/.rbenv/versions/2.2.2/lib/ruby/gems/2.2.0/gems/redis-3.2.1/lib/redis/client.rb:331:in `rescue in establish_connection'
Try "127.0.0.1" or "::1" instead of 'localhost'. IPv6 vs IPv4 can be tricky at times.
I have this in a Django view:
import logging
from django.conf import settings

fmt = getattr(settings, 'LOG_FORMAT', None)
lvl = getattr(settings, 'LOG_LEVEL', logging.INFO)
logging.basicConfig(format=fmt, level=lvl)

@api_view(['GET', 'POST'])
def index(request):
    if request.GET.get("request_id"):
        logging.info("standard CMH request...")
        barf()
    # etc
In my work environment (DEBUG=True) this prints on the console when I access the relevant page over REST from a client app:
mgregory$ foreman start
19:59:11 web.1 | started with pid 37371
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Starting gunicorn 18.0
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Listening at: http://0.0.0.0:5000 (37371)
19:59:11 web.1 | 2014-10-10 19:59:11 [37371] [INFO] Using worker: sync
19:59:11 web.1 | 2014-10-10 19:59:11 [37374] [INFO] Booting worker with pid: 37374
19:59:18 web.1 | standard CMH request...
What is actually happening is that barf() is throwing an exception, because it's not defined. But this isn't appearing in the console log.
How can I get all exceptions to appear in the console log, in DEBUG=True environment?
Supplementary: is there any reason why I wouldn't want the same behaviour in the production environment, with DEBUG=False? My production environment does not email me errors, so I'd love to have these exceptions in the log.
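One hedged sketch of a way to surface these exceptions on the console (this assumes Django's standard LOGGING dictConfig in settings.py; the handler name here is illustrative):
# settings.py -- route unhandled request exceptions (the django.request
# logger) to stderr regardless of DEBUG
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['console'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}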
It seems as though messages are not getting put onto the queue properly.
I'm using Django with Celery and Kombu to make use of Django's own database as a Broker Backend. All I need is a very simple Pub/Sub setup. It will eventually deploy to Heroku, so I'm using foreman to run locally. Here is the relevant code and info:
pip freeze
Django==1.4.2
celery==3.0.15
django-celery==3.0.11
kombu==2.5.6
Procfile
web: source bin/activate; python manage.py run_gunicorn -b 0.0.0.0:$PORT -w 4; python manage.py syncdb
celeryd: python manage.py celeryd -E -B --loglevel=INFO
settings.py
# Celery configuration
import djcelery
CELERY_IMPORTS = ("api.tasks",)
BROKER_URL = "django://localhost//"
djcelery.setup_loader()
put_message
with Connection(settings.BROKER_URL) as conn:
    queue = conn.SimpleQueue('celery')
    queue.put(id)
    queue.close()
api/tasks.py
@task()
def process_next_task():
    with Connection(settings.BROKER_URL) as conn:
        queue = conn.SimpleQueue('celery')
        message = queue.get(block=True, timeout=1)
        id = int(message.payload)
        try:
            Model.objects.get(id=id)
        except Model.DoesNotExist:
            message.reject()
        else:
            # Do stuff here
            message.ack()
        queue.close()
In the terminal, foreman start works just fine and shows this:
started with pid 31835
17:08:22 celeryd.1 | started with pid 31836
17:08:22 web.1 | /usr/local/foreman/bin/foreman-runner: line 41: exec: source: not found
17:08:22 web.1 | 2013-02-14 17:08:22 [31838] [INFO] Starting gunicorn 0.16.1
17:08:22 web.1 | 2013-02-14 17:08:22 [31838] [INFO] Listening at: http://0.0.0.0:5000 (31838)
17:08:22 web.1 | 2013-02-14 17:08:22 [31838] [INFO] Using worker: sync
17:08:22 web.1 | 2013-02-14 17:08:22 [31843] [INFO] Booting worker with pid: 31843
17:08:22 web.1 | 2013-02-14 17:08:22 [31844] [INFO] Booting worker with pid: 31844
17:08:22 web.1 | 2013-02-14 17:08:22 [31845] [INFO] Booting worker with pid: 31845
17:08:22 web.1 | 2013-02-14 17:08:22 [31846] [INFO] Booting worker with pid: 31846
17:08:22 celeryd.1 | [2013-02-14 17:08:22,858: INFO/Beat] Celerybeat: Starting...
17:08:22 celeryd.1 | [2013-02-14 17:08:22,870: WARNING/MainProcess] celery#myhost.local ready.
17:08:22 celeryd.1 | [2013-02-14 17:08:22,873: INFO/MainProcess] consumer: Connected to django://localhost//.
17:08:42 celeryd.1 | [2013-02-14 17:08:42,926: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
17:08:42 celeryd.1 | The full contents of the message body was: body: 25 (2b) {content_type:u'application/json' content_encoding:u'utf-8' delivery_info:{u'priority': 0, u'routing_key': u'celery', u'exchange': u'celery'}}
Those last two lines are not shown immediately, but get displayed when my API receives a POST request that runs the code in the put_message section above. I've experimented with using Kombu's full-blown Producer and Consumer classes, with the same result.
Kombu's SimpleQueue example: http://kombu.readthedocs.org/en/latest/userguide/examples.html#hello-world-example
Celery Docs: http://docs.celeryproject.org/en/latest/index.html
Any ideas?
EDITED
Changing to --loglevel=DEBUG in the Procfile changes the terminal output to the following:
08:54:33 celeryd.1 | started with pid 555
08:54:33 web.1 | started with pid 554
08:54:33 web.1 | /usr/local/foreman/bin/foreman-runner: line 41: exec: source: not found
08:54:36 web.1 | 2013-02-15 08:54:36 [557] [INFO] Starting gunicorn 0.16.1
08:54:36 web.1 | 2013-02-15 08:54:36 [557] [INFO] Listening at: http://0.0.0.0:5000 (557)
08:54:36 web.1 | 2013-02-15 08:54:36 [557] [INFO] Using worker: sync
08:54:36 web.1 | 2013-02-15 08:54:36 [564] [INFO] Booting worker with pid: 564
08:54:36 web.1 | 2013-02-15 08:54:36 [565] [INFO] Booting worker with pid: 565
08:54:36 web.1 | 2013-02-15 08:54:36 [566] [INFO] Booting worker with pid: 566
08:54:36 web.1 | 2013-02-15 08:54:36 [567] [INFO] Booting worker with pid: 567
08:54:37 celeryd.1 | [2013-02-15 08:54:37,480: DEBUG/MainProcess] [Worker] Loading modules.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,484: DEBUG/MainProcess] [Worker] Claiming components.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,484: DEBUG/MainProcess] [Worker] Building boot step graph.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,484: DEBUG/MainProcess] [Worker] New boot order: {ev, queues, beat, pool, mediator, autoreloader, timers, state-db, autoscaler, consumer}
08:54:37 celeryd.1 | [2013-02-15 08:54:37,489: DEBUG/MainProcess] Starting celery.beat._Process...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,490: DEBUG/MainProcess] celery.beat._Process OK!
08:54:37 celeryd.1 | [2013-02-15 08:54:37,491: DEBUG/MainProcess] Starting celery.concurrency.processes.TaskPool...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,491: INFO/Beat] Celerybeat: Starting...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,506: DEBUG/MainProcess] celery.concurrency.processes.TaskPool OK!
08:54:37 celeryd.1 | [2013-02-15 08:54:37,507: DEBUG/MainProcess] Starting celery.worker.mediator.Mediator...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,507: DEBUG/MainProcess] celery.worker.mediator.Mediator OK!
08:54:37 celeryd.1 | [2013-02-15 08:54:37,507: DEBUG/MainProcess] Starting celery.worker.consumer.BlockingConsumer...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,508: WARNING/MainProcess] celery#myhost.local ready.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,508: DEBUG/MainProcess] consumer: Re-establishing connection to the broker...
08:54:37 celeryd.1 | [2013-02-15 08:54:37,510: INFO/MainProcess] consumer: Connected to django://localhost//.
08:54:37 celeryd.1 | [2013-02-15 08:54:37,628: DEBUG/Beat] Current schedule:
08:54:37 celeryd.1 | <Entry: celery.backend_cleanup celery.backend_cleanup() {<crontab: * 4 * * * (m/h/d/dM/MY)>}
08:54:37 celeryd.1 | [2013-02-15 08:54:37,629: DEBUG/Beat] Celerybeat: Ticking with max interval->5.00 minutes
08:54:37 celeryd.1 | [2013-02-15 08:54:37,658: DEBUG/Beat] Celerybeat: Waking up in 5.00 minutes.
08:54:38 celeryd.1 | [2013-02-15 08:54:38,110: DEBUG/MainProcess] consumer: basic.qos: prefetch_count->16
08:54:38 celeryd.1 | [2013-02-15 08:54:38,126: DEBUG/MainProcess] consumer: Ready to accept tasks!
08:55:08 celeryd.1 | [2013-02-15 08:55:08,184: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
08:55:08 celeryd.1 | The full contents of the message body was: body: 26 (2b) {content_type:u'application/json' content_encoding:u'utf-8' delivery_info:{u'priority': 0, u'routing_key': u'celery', u'exchange': u'celery'}}
The problem was twofold:
1. The message format was wrong. It needs to be a dictionary, according to the documentation at http://docs.celeryproject.org/en/latest/internals/protocol.html which @asksol provided, following the example at the bottom of that page.
Example Message
{"id": "4cc7438e-afd4-4f8f-a2f3-f46567e7ca77",
"task": "celery.task.PingTask",
"args": [],
"kwargs": {},
"retries": 0,
"eta": "2009-11-17T12:30:56.527191"}
put_message
with Connection(settings.BROKER_URL) as conn:
    queue = conn.SimpleQueue('celery')
    message = {
        'task': 'process-next-task',
        'id': str(uuid.uuid4()),
        'args': [id],
        'kwargs': {},
        'retries': 0,
        'eta': str(datetime.datetime.now())
    }
    queue.put(message)
    queue.close()
2. The Procfile process is a consumer that runs the task, so there's no need to set up a consumer within the task. I just needed to use the parameters that I sent in when I published the message.
api/tasks.py
@task(serializer='json', name='process-next-task')
def process_next_task(id):
    try:
        Model.objects.get(id=int(id))
    except Model.DoesNotExist:
        pass
    else:
        # Do stuff here
This is not a solution to this question, but a note about a similar issue when using Celery 4.0.2.
The output looks like:
[2017-02-09 17:45:12,136: WARNING/MainProcess] Received and deleted unknown message. Wrong destination?!?
The full contents of the message body was: body: [[], {}, {u'errbacks': None, u'callbacks': None, u'chord': None, u'chain': None}] (77b)
{content_type:'application/json' content_encoding:'utf-8'
delivery_info:{'consumer_tag': 'None4', 'redelivered': False, 'routing_key': 'test2', 'delivery_tag': 1L, 'exchange': ''} headers={'\xe5\xca.\xdb\x00\x00\x00\x00\x00': None, 'P&5\x07\x00': None, 'T\nKB\x00\x00\x00': '3f6295b3-c85c-4188-b424-d186da7e2edb', 'N\xfd\x17=\x00\x00': 'gen23043#hy-ts-bf-01', '\xcfb\xddR': 'py', '9*\xa8': None, '\xb7/b\x84\x00\x00\x00': 0, '\xe0\x0b\xfa\x89\x00\x00\x00': None, '\xdfR\xc4x\x00\x00\x00\x00\x00': [None, None], 'T3\x1d ': 'celeryserver.tasks.test', '\xae\xbf': '3f6295b3-c85c-4188-b424-d186da7e2edb', '\x11s\x1f\xd8\x00\x00\x00\x00': '()', 'UL\xa1\xfc\x00\x00\x00\x00\x00\x00': '{}'}}
Solution:
https://github.com/celery/celery/issues/3675
# call this command many times, until it says it's not installed
pip uninstall librabbitmq
Thanks to https://github.com/ask
Apparently the librabbitmq issue is related to the new default task protocol in Celery 4.x. You can switch back to the previous protocol version either by putting CELERY_TASK_PROTOCOL = 1 in your settings if you're using Django, or by setting app.conf.task_protocol = 1 in celeryconf.py.
Then you'll be able to queue a task from within another task.
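For example, a sketch of both options (app here stands for your Celery application instance, and celeryconf.py is the module mentioned above):
# Django settings.py: fall back to task protocol 1
CELERY_TASK_PROTOCOL = 1

# or, in celeryconf.py where the Celery app is created
app.conf.task_protocol = 1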
My setup: Celery 3.1.25, Django 1.11.
Add the celery exchange in settings.py:
CELERY_QUEUES = {
    "celery": {"exchange": "celery",
               "routing_key": "celery"}
}
Or declare the queue explicitly:
from kombu import Exchange, Queue

# declare the queue
ch = settings.CELERY_APP.connection().channel()
ex = Exchange("implicit", channel=ch)
q = Queue(name="implicit", routing_key="implicit", channel=ch, exchange=ex)
q.declare()  # <-- here

producer = ch.Producer(routing_key=q.routing_key, exchange=q.exchange)

# publish
producer.publish("text")
Or you can use the second version, from the Kombu docs:
from kombu import Exchange, Queue

task_queue = Queue('tasks', Exchange('tasks'), routing_key='tasks')

producer.publish(
    {'hello': 'world'},
    retry=True,
    exchange=task_queue.exchange,
    routing_key=task_queue.routing_key,
    declare=[task_queue],  # <-- declares exchange, queue and binds.
)