This is my first time using Celery and Redis, so there's probably something obvious that I'm not picking up from the documentation or from searching through others' questions on here. Whenever I try to run a worker, my connection keeps timing out with:
ResponseError: unknown command 'WATCH'
[2013-06-12 18:25:23,059: ERROR/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Here's my requirements.txt:
South==0.7.6
amqp==1.0.11
anyjson==0.3.3
billiard==2.7.3.28
boto==2.9.4
celery==3.0.19
celery-with-redis==3.0
dj-database-url==0.2.1
django-admin-bootstrapped==0.3.2
django-celery==3.0.17
django-jsonfield==0.9.4
django-stripe-payments==2.0b20
mimeparse==0.1.3
oauthlib==0.4.0
paramiko==1.10.1
psycopg2==2.5
pycrypto==2.6
python-dateutil==2.1
python-openid==2.2.5
pytz==2013b
redis==2.7.5
requests==1.2.0
requests-oauthlib==0.3.1
six==1.3.0
stripe==1.7.9
wsgiref==0.1.2
settings.py
import djcelery
djcelery.setup_loader()
INSTALLED_APPS = (
    ...
    'djcelery',
    ...
)
CACHES = {
    "default": {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": "127.0.0.1:6379:1",
        "OPTIONS": {
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
        }
    }
}
BROKER_URL = 'redis://localhost:6379/0'
When I start my Redis server and run
./manage.py celeryd -B
my connection just keeps timing out with:
Traceback (most recent call last):
File "/venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 395, in start
self.consume_messages()
File "/venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 407, in consume_messages
with self.hub as hub:
File "/venv/lib/python2.7/site-packages/celery/worker/hub.py", line 198, in __enter__
self.init()
File "/venv/lib/python2.7/site-packages/celery/worker/hub.py", line 146, in init
callback(self)
File "/venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 401, in on_poll_init
self.connection.transport.on_poll_init(hub.poller)
File "/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 749, in on_poll_init
self.cycle.on_poll_init(poller)
File "/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 266, in on_poll_init
num=channel.unacked_restore_limit,
File "/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 159, in restore_visible
self.restore_by_tag(tag, client)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 94, in Mutex
pipe.watch(name)
File "/venv/lib/python2.7/site-packages/redis/client.py", line 1941, in watch
return self.execute_command('WATCH', *names)
File "/venv/lib/python2.7/site-packages/redis/client.py", line 1760, in execute_command
return self.immediate_execute_command(*args, **kwargs)
File "/venv/lib/python2.7/site-packages/redis/client.py", line 1779, in immediate_execute_command
return self.parse_response(conn, command_name, **options)
File "/venv/lib/python2.7/site-packages/redis/client.py", line 1883, in parse_response
self, connection, command_name, **options)
File "/venv/lib/python2.7/site-packages/redis/client.py", line 388, in parse_response
response = connection.read_response()
File "/venv/lib/python2.7/site-packages/redis/connection.py", line 309, in read_response
raise response
ResponseError: unknown command 'WATCH'
[2013-06-12 18:25:23,059: ERROR/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
redis:
[1197] 12 Jun 18:50:09 * Server started, Redis version 1.3.14
[1197] 12 Jun 18:50:09 * DB loaded from disk: 0 seconds
[1197] 12 Jun 18:50:09 * The server is now ready to accept connections on port 6379
[1197] 12 Jun 18:50:09 - Accepted 127.0.0.1:53061
[1197] 12 Jun 18:50:09 - DB 0: 2 keys (0 volatile) in 4 slots HT.
[1197] 12 Jun 18:50:09 - 1 clients connected (0 slaves), 1076976 bytes in use
[1197] 12 Jun 18:50:09 - Accepted 127.0.0.1:53062
[1197] 12 Jun 18:50:09 - Accepted 127.0.0.1:53063
[1197] 12 Jun 18:50:09 - Client closed connection
[1197] 12 Jun 18:50:09 - Accepted 127.0.0.1:53064
[1197] 12 Jun 18:50:09 - Client closed connection
[1197] 12 Jun 18:50:09 - Accepted 127.0.0.1:53065
[1197] 12 Jun 18:50:09 - Client closed connection
[1197] 12 Jun 18:50:09 - Accepted 127.0.0.1:53066
[1197] 12 Jun 18:50:09 - Client closed connection
etc etc.
Any guidance on where I should be looking or what the possible culprits might be? Thanks.
Your Redis server is too old (1.3.14) to be used with Celery. From this error you can see that Celery is trying to use the WATCH command, which was introduced in Redis 2.2.
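A quick way to confirm which server version the worker is actually talking to is to ask Redis from the same virtualenv. This is a minimal sketch, assuming the redis-py client from requirements.txt and the localhost broker from BROKER_URL:

import redis

# connect to the same instance BROKER_URL points at
client = redis.StrictRedis(host='localhost', port=6379, db=0)

# kombu's Redis transport uses WATCH, which needs Redis >= 2.2
print(client.info()['redis_version'])

If that prints 1.3.14, upgrade the Redis server itself; the redis==2.7.5 entry in requirements.txt is only the client library and is not the problem.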
Related
I have installed an Open edX platform stack using Tutor and installed the Open edX "Codejail" plugin using the link below:
pip install git+https://github.com/edunext/tutor-contrib-codejail
https://github.com/eduNEXT/tutor-contrib-codejail
I am facing a problem with codejail when importing the Python matplotlib library.
Importing the same library inside the codejail container works fine; the only problem is importing it through an Open edX problem block (Advanced > Blank problem).
I have already installed Codejail and matplotlib in Docker.
I have to run this code, which gives an error:
<problem>
<script type="loncapa/python">
import matplotlib
</script>
</problem>
import os works fine,
but I get an error with
import matplotlib
Details of the current stack:
open edx version : openedx-mfe:14.0.1
code jail version : codejailservice:14.1.0
Please see the error message below:
cannot create LoncapaProblem block-v1:VUP+Math101+2022+type@problem+block@3319c4e42da64a74b0e40f048e3f2599: Error while executing script code: Couldn't execute jailed code (status code: 1). stdout: b'', stderr:

Traceback (most recent call last):
  File "jailed_code", line 19, in <module>
    exec(code, g_dict)
  File "<string>", line 66, in <module>
  File "/sandbox/venv/lib/python3.8/site-packages/matplotlib/__init__.py", line 921, in <module>
    dict.update(rcParams, rc_params_in_file(matplotlib_fname()))
  File "/sandbox/venv/lib/python3.8/site-packages/matplotlib/__init__.py", line 602, in matplotlib_fname
    for fname in gen_candidates():
  File "/sandbox/venv/lib/python3.8/site-packages/matplotlib/__init__.py", line 599, in gen_candidates
    yield os.path.join(get_configdir(), 'matplotlibrc')
  File "/sandbox/venv/lib/python3.8/site-packages/matplotlib/__init__.py", line 239, in wrapper
    ret = func(**kwargs)
  File "/sandbox/venv/lib/python3.8/site-packages/matplotlib/__init__.py", line 502, in get_configdir
    return get_config_or_cache_dir(_get_xdg_config_dir())
  File "/sandbox/venv/lib/python3.8/site-packages/matplotlib/__init__.py", line 474, in get_config_or_cache_dir
    tempfile.mkdtemp(prefix="matplotlib-")
  File "/opt/pyenv/versions/3.8.6_sandbox/lib/python3.8/tempfile.py", line 347, in mkdtemp
    prefix, suffix, dir, output_type = sanitize_params(prefix, suffix, dir)
  File "/opt/pyenv/versions/3.8.6_sandbox/lib/python3.8/tempfile.py", line 117, in sanitize_params
    dir = gettempdir()
  File "/opt/pyenv/versions/3.8.6_sandbox/lib/python3.8/tempfile.py", line 286, in gettempdir
    tempdir = get_default_tempdir()
  File "/opt/pyenv/versions/3.8.6_sandbox/lib/python3.8/tempfile.py", line 218, in _get_default_tempdir
    raise FileNotFoundError(_errno.ENOENT,
FileNotFoundError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/tmp/codejail-lbfd69da']

For more information check Codejail Service logs.
Codejail service logs are as follows:
{"log":"[pid: 6|app: 0|req: 20/39] 172.18.0.10 () {36 vars in 483 bytes} [Tue Nov 22 11:24:59 2022] POST /api/v0/code-exec =\u003e generated 1978 bytes in 742 msecs (HTTP/1.1 200) 2 headers in 73 bytes (1 switches on core 0)\n","stream":"stderr","time":"2022-11-22T11:25:00.151315626Z"} {"log":"2022-11-22 11:26:23,304 INFO 9 [codejailservice.app] code_exec_service.py:52 - Running problem_id:53fbaa04859f41989ab967c15a12c013 jailed code for course_id:course-v1:VUP+Math101+2022 ...\n","stream":"stderr","time":"2022-11-22T11:26:23.30489438Z"} {"log":"2022-11-22 11:26:23,343 INFO 9 [codejailservice.app] code_exec_service.py:73 - Jailed code was executed in 0.03849988000001758 seconds.\n","stream":"stderr","time":"2022-11-22T11:26:23.343618965Z"} {"log":"[pid: 9|app: 0|req: 20/40] 172.18.0.10 () {36 vars in 483 bytes} [Tue Nov 22 11:26:23 2022] POST /api/v0/code-exec =\u003e generated 73 bytes in 40 msecs (HTTP/1.1 200) 2 headers in 71 bytes (1 switches on core 0)\n","stream":"stderr","time":"2022-11-22T11:26:23.344178308Z"} {"log":"2022-11-23 04:15:24,786 INFO 6 [codejailservice.app] code_exec_service.py:52 - Running problem_id:3319c4e42da64a74b0e40f048e3f2599 jailed code for course_id:course-v1:VUP+Math101+2022 ...\n","stream":"stderr","time":"2022-11-23T04:15:24.786287416Z"} {"log":"2022-11-23 04:15:25,582 ERROR 6 [codejailservice.app] code_exec_service.py:70 - Error found while executing jailed code.\n","stream":"stderr","time":"2022-11-23T04:15:25.582527974Z"} {"log":"[pid: 6|app: 0|req: 21/41] 172.18.0.10 () {36 vars in 483 bytes} [Wed Nov 23 04:15:24 2022] POST /api/v0/code-exec =\u003e generated 1978 bytes in 798 msecs (HTTP/1.1 200) 2 headers in 73 bytes (1 switches on core 0)\n","stream":"stderr","time":"2022-11-23T04:15:25.583132326Z"} {"log":"2022-11-23 06:00:15,150 INFO 9 [codejailservice.app] code_exec_service.py:52 - Running problem_id:3319c4e42da64a74b0e40f048e3f2599 jailed code for course_id:course-v1:VUP+Math101+2022 ...\n","stream":"stderr","time":"2022-11-23T06:00:15.15073834Z"} {"log":"2022-11-23 06:00:15,891 ERROR 9 [codejailservice.app] code_exec_service.py:70 - Error found while executing jailed code.\n","stream":"stderr","time":"2022-11-23T06:00:15.8916806Z"} {"log":"[pid: 9|app: 0|req: 21/42] 172.18.0.10 () {36 vars in 483 bytes} [Wed Nov 23 06:00:15 2022] POST /api/v0/code-exec =\u003e generated 1978 bytes in 742 msecs (HTTP/1.1 200) 2 headers in 73 bytes (1 switches on core 0)\n","stream":"stderr","time":"2022-11-23T06:00:15.892225441Z"}
I'm trying to spawn a few background Celery beat processes using docker-compose, but they are not working anymore. My configuration:
docker-compose-dev.yml
worker-periodic:
  image: dev3_web
  restart: always
  volumes:
    - ./services/web:/usr/src/app
    - ./services/web/celery_logs:/usr/src/app/celery_logs
  command: celery beat -A celery_worker.celery --schedule=/tmp/celerybeat-schedule --loglevel=DEBUG --pidfile=/tmp/celerybeat.pid
  environment:
    - CELERY_BROKER=redis://redis:6379/0
    - CELERY_RESULT_BACKEND=redis://redis:6379/0
    - FLASK_ENV=development
    - APP_SETTINGS=project.config.DevelopmentConfig
    - DATABASE_URL=postgres://postgres:postgres@web-db:5432/web_dev
    - DATABASE_TEST_URL=postgres://postgres:postgres@web-db:5432/web_test
    - SECRET_KEY=my_precious
  depends_on:
    - web
    - redis
    - web-db
  links:
    - redis:redis
    - web-db:web-db
After I bring the containers up, I run $ docker ps and get this (note how dev3_worker-periodic_1 has only been up a few seconds, while the others have been up for minutes):
697322a621d5 dev3_web "celery worker -A ce…" 24 hours ago Up 5 minutes dev3_worker-analysis_1
d8e414aa4e5b dev3_web "celery worker -A ce…" 24 hours ago Up 5 minutes dev3_worker-learning_1
ae327266132c dev3_web "flower -A celery_wo…" 24 hours ago Up 5 minutes 0.0.0.0:5555->5555/tcp dev3_monitor_1
3ccb79e01412 dev3_web "celery beat -A cele…" 24 hours ago Up 14 seconds dev3_worker-periodic_1
a50e1276f692 dev3_web "celery worker -A ce…" 24 hours ago Up 5 minutes dev3_worker-scraping_1
All Celery workers work when endpoints are called; the exception is the Celery beat (periodically automated) process. When I bring the containers up, my logs complain in celery_logs/worker_analysis.log:
[2019-11-16 23:29:20,880: DEBUG/MainProcess] pidbox received method hello(from_node='celery#d8e414aa4e5b', revoked={}) [reply_to:{'exchange': 'reply.celery.pidbox', 'routing_key': '85f4128f-2f75-3996-8375-2a19aa58d5d4'} ticket:0daa0dc4-fa09-438d-9003-ccbd39f259dd]
[2019-11-16 23:29:20,907: INFO/MainProcess] sync with celery#d8e414aa4e5b
[2019-11-16 23:29:21,018: ERROR/MainProcess] Control command error: OperationalError("\nCannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.\nProbably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.\n",)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/lib/python3.6/site-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.InconsistencyError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/celery/worker/pidbox.py", line 46, in on_message
self.node.handle_message(body, message)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 145, in handle_message
return self.dispatch(**body)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 115, in dispatch
ticket=ticket)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 151, in reply
serializer=self.mailbox.serializer)
File "/usr/lib/python3.6/site-packages/kombu/pidbox.py", line 285, in _publish_reply
**opts
File "/usr/lib/python3.6/site-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 551, in _ensured
errback and errback(exc, 0)
File "/usr/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 444, in _reraise_as_library_errors
sys.exc_info()[2])
File "/usr/lib/python3.6/site-packages/vine/five.py", line 194, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 439, in _reraise_as_library_errors
yield
File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 518, in _ensured
return fun(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 605, in basic_publish
message, exchange, routing_key, **kwargs
File "/usr/lib/python3.6/site-packages/kombu/transport/virtual/exchange.py", line 70, in deliver
for queue in _lookup(exchange, routing_key):
File "/usr/lib/python3.6/site-packages/kombu/transport/redis.py", line 877, in _lookup
exchange, redis_key))
kombu.exceptions.OperationalError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
This is how Celery is configured:
web/project/config.py:
class DevelopmentConfig(BaseConfig):
    # CELERY
    INSTALLED_APPS = ['routes']
    # celery config
    CELERYD_CONCURRENCY = 4
    # Add a one-minute timeout to all Celery tasks.
    CELERYD_TASK_SOFT_TIME_LIMIT = 60
    CELERY_ENABLE_UTC = False
    CELERY_TIMEZONE = 'America/Sao_Paulo'
    CELERY_BROKER_URL = os.environ.get('CELERY_BROKER')
    CELERY_RESULT_BACKEND = os.environ.get('CELERY_RESULT_BACKEND')
    CELERY_IMPORTS = ('project.api.routes.background',)
    # periodic tasks
    CELERYBEAT_SCHEDULE = {
        'playlist_generator_with_audio_features': {
            'task': 'project.api.routes.background.playlist_generator_with_audio_features',
            # Every minute
            'schedule': crontab(minute=59),
            'args': [('user_id'),]
        },
        'cache_user_tracks_with_analysis': {
            'task': 'project.api.routes.background.cache_user_tracks_with_analysis',
            # Every hour
            'schedule': crontab(minute=0, hour='*/1'),
            'args': ('user_id', 'token')
        },
    }
This is an example task at project/api/routes/background.py, on my Flask server:
@celery.task(queue='analysis', default_retry_delay=30, max_retries=3, soft_time_limit=1000)
def cache_user_tracks_with_analysis(user_id, token):
    # business logic
    return {'Status': 'Task completed!',
            'features': results}
In my requirements.txt, kombu is not declared explicitly, and I have:
celery==4.2.1
redis==3.2.0
What am I missing?
This is an open celery/kombu issue: https://github.com/celery/kombu/issues/1063
Explicitly downgrading to kombu==4.5.0 fixed the error for me.
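For example, pinning it explicitly next to the versions already in the question's requirements.txt (a sketch, adjust the pins to your own setup), then rebuilding the images so the containers pick up the downgraded package:

celery==4.2.1
redis==3.2.0
kombu==4.5.0  # pinned explicitly, per the kombu issue above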
I'm trying to build an app in Django that uses Django Channels. I'm deploying on Elastic Beanstalk. My understanding is that the application load balancer supports websockets and I can route websocket traffic to the appropriate port. The websockets work on my localhost:8000. I'm using the free tier Redis Labs for my channel layer.
I followed this tutorial.
https://blog.mangoforbreakfast.com/2017/02/13/django-channels-on-aws-elastic-beanstalk-using-an-alb/#comment-43
Following this tutorial, it appears I can get Daphne and the workers running on the correct ports. I SSHed into the EC2 instance to get the following output.
$ sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf status
Daphne RUNNING pid 4240, uptime 0:05:51
Worker:Worker_00 RUNNING pid 4242, uptime 0:05:51
Worker:Worker_01 RUNNING pid 4243, uptime 0:05:51
Worker:Worker_02 RUNNING pid 4244, uptime 0:05:51
Worker:Worker_03 RUNNING pid 4245, uptime 0:05:51
httpd RUNNING pid 4248, uptime 0:05:51
My daphne.out.log and workers.out.log look fine.
$ cat daphne.out.log
2017-07-20 21:41:43,693 INFO Starting server at tcp:port=5000:interface=0.0.0.0, channel layer mysite.asgi:channel_layer.
2017-07-20 21:41:43,693 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2017-07-20 21:41:43,694 INFO Using busy-loop synchronous mode on channel layer
2017-07-20 21:41:43,694 INFO Listening on endpoint tcp:port=5000:interface=0.0.0.0
$ cat workers.out.log
2017-07-20 21:41:44,114 - INFO - runworker - Using single-threaded worker.
2017-07-20 21:41:44,120 - INFO - runworker - Using single-threaded worker.
2017-07-20 21:41:44,121 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-07-20 21:41:44,121 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-07-20 21:41:44,126 - INFO - runworker - Using single-threaded worker.
2017-07-20 21:41:44,126 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-07-20 21:41:44,126 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-07-20 21:41:44,127 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-07-20 21:41:44,127 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
2017-07-20 21:41:44,133 - INFO - runworker - Using single-threaded worker.
2017-07-20 21:41:44,136 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisChannelLayer)
2017-07-20 21:41:44,136 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
From here I updated the security group settings, configured the load balancer, and tested my webpage. Instead of /ws/ as the path for rerouting, I used /table/* because those are the pages that need websockets.
If I use /ws/ I get this error.
WebSocket connection to 'ws://example.com/table/1/' failed: Error during WebSocket handshake: Unexpected response code: 404
If I use /table/* I get this error.
WebSocket connection to 'ws://example.com/table/1/' failed: Error during WebSocket handshake: Unexpected response code: 504
When I changed the security group settings of my load balancer to also allow TCP at port 5000, the worker log changes. With the /ws/ path rule in my load balancer, I now get this in the worker log:
...
2017-07-20 21:41:44,136 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconnect, websocket.receive
Not Found: /ws/
Not Found: /ws/
Not Found: /ws/
Not Found: /ws/
Not Found: /ws/
Not Found: /ws/
If I use the /table/* path, I get a very long error in my log.
2017-07-21 01:22:05,270 - ERROR - worker - Error processing message with consumer deal.consumers.ws_connect:
Traceback (most recent call last):
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/contrib/sessions/backends/base.py", line 193, in _get_session
return self._session_cache
AttributeError: 'SessionStore' object has no attribute '_session_cache'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
connection = Database.connect(**conn_params)
File "/opt/python/run/venv/local/lib64/python3.4/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/python/run/venv/local/lib/python3.4/site-packages/channels/worker.py", line 119, in run
consumer(message, **kwargs)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/channels/sessions.py", line 220, in inner
result = func(message, *args, **kwargs)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/channels/auth.py", line 71, in inner
message.user = auth.get_user(fake_request)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/contrib/auth/__init__.py", line 167, in get_user
user_id = _get_user_session_key(request)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/contrib/auth/__init__.py", line 59, in _get_user_session_key
return get_user_model()._meta.pk.to_python(request.session[SESSION_KEY])
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/contrib/sessions/backends/base.py", line 48, in __getitem__
return self._session[key]
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/contrib/sessions/backends/base.py", line 198, in _get_session
self._session_cache = self.load()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/contrib/sessions/backends/db.py", line 33, in load
expire_date__gt=timezone.now()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/models/manager.py", line 122, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/models/query.py", line 381, in get
num = len(clone)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/models/query.py", line 240, in __len__
self._fetch_all()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/models/query.py", line 1074, in _fetch_all
self._result_cache = list(self.iterator())
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/models/query.py", line 52, in __iter__
results = compiler.execute_sql()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 846, in execute_sql
cursor = self.connection.cursor()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/base/base.py", line 231, in cursor
cursor = self.make_debug_cursor(self._cursor())
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/base/base.py", line 204, in _cursor
self.ensure_connection()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/utils.py", line 95, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/base/base.py", line 199, in ensure_connection
self.connect()
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/base/base.py", line 171, in connect
self.connection = self.get_new_connection(conn_params)
File "/opt/python/run/venv/local/lib/python3.4/site-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
connection = Database.connect(**conn_params)
File "/opt/python/run/venv/local/lib64/python3.4/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
Unsurprisingly, when I try to use the websockets I get this error:
WebSocket is already in CLOSING or CLOSED state.
Any idea what I'm doing wrong?
OK, so the problem was that when Daphne runs it doesn't see the environment variables. I added the following to my supervisord.conf file and it worked:
[program:Daphne]
environment=PATH="/opt/python/run/venv/bin"
command=/bin/bash -c "source /opt/python/current/env && /opt/python/run/venv/bin/daphne -b 0.0.0.0 -p 5000 mysite.asgi:channel_layer"
directory=/opt/python/current/app
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/daphne.out.log
[program:Worker]
environment=PATH="/opt/python/run/venv/bin"
command=/bin/bash -c "source /opt/python/current/env && python manage.py runworker"
directory=/opt/python/current/app
process_name=%(program_name)s_%(process_num)02d
numprocs=4
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/tmp/workers.out.log
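After editing supervisord.conf, the running supervisord still has to pick up the change; assuming the same setup as in the question, something like sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread followed by the same command with update (or simply redeploying the environment) should restart Daphne and the workers with the new environment.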
I'm trying to use Celery on my Beanstalk environment (this is the final piece in order to complete the technology stack of my project :P).
This is what I've done so far:
Since RabbitMQ is the best broker for Celery and Amazon does not provide a dedicated service, I created a custom AMI based on Ubuntu 13 64-bit, then:
installed RabbitMQ
removed the default user guest/guest
created a custom user
created a custom virtual host
installed admin plugins
tested my configuration using the HTTP API in order to confirm that my RabbitMQ server is up and running.
So far so good! Then in my beanstalk .config file I added a couple of commands for celery:
04_celery_periodic_tasks:
command: "celery worker --app=com.cygora --loglevel=info --beat --autoreload -n period_tasks_worker.%h"
leader_only: true
05_celery_standard_worker:
command: "celery worker --app=com.cygora --loglevel=info --autoreload -n worker_1.%h"
Once I deployed my app, I didn't find any errors related to Celery (so I'm assuming it's all OK from the Python/Django side)... but as soon as I use a feature of my site that requires sending a message to RabbitMQ via Celery, I get a timeout exception:
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 111, in establish_connection
[Thu Feb 20 22:01:24 2014] [error] conn = self.Connection(**opts)
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
[Thu Feb 20 22:01:24 2014] [error] self.transport = create_transport(host, connect_timeout, ssl)
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/amqp/transport.py", line 274, in create_transport
[Thu Feb 20 22:01:24 2014] [error] return TCPTransport(host, connect_timeout)
[Thu Feb 20 22:01:24 2014] [error] File "/opt/python/run/venv/lib/python2.7/site-packages/amqp/transport.py", line 89, in __init__
[Thu Feb 20 22:01:24 2014] [error] raise socket.error(last_err)
[Thu Feb 20 22:01:24 2014] [error] error: timed out
I specified the broker URL in settings as:
BROKER_URL = "amqp://myuser:mypassword@myelasticip:5672/myvirtualhost"
What am I missing, or what did I do wrong? Why can't the socket connection be established?
I forgot I had asked this question... anyway, I solved it. It was just a matter of opening the right TCP ports for RabbitMQ:
22
15672
5672
I also changed the way I run Celery, using supervisor + django-supervisor in order to daemonize it properly :)
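For reference, those ports are 22 (SSH), 15672 (the RabbitMQ management UI) and 5672 (AMQP, the one Celery actually connects to). If you want to verify reachability from the Beanstalk instance before involving Celery, a rough sketch like this can help (host and port are the placeholders from BROKER_URL, not real values):

import socket

# try a plain TCP connection to the broker's AMQP port
sock = socket.create_connection(('myelasticip', 5672), timeout=5)
print('port 5672 is reachable')
sock.close()

If that raises a timeout, the security group (or a firewall on the RabbitMQ instance) is still blocking the port.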
I am trying to follow the steps in this guide: http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
Before I even get to the nginx part, I am trying to make sure that uWSGI works correctly.
My folder structure is srv/www/domain/projectdatabank/
The projectdatabank folder contains my manage.py file.
My wsgi.py file looks like this:
import os
import sys
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "databank.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
I go into the projectdatabank folder and run the following command:
uwsgi --http :8000 --wsgi-file projectdatabank/databank/wsgi.py
When I go to the web page, I get this error:
compiled with version: 4.4.7 20120313 (Red Hat 4.4.7-3) on 06 July 2013 00:16:13
os: Linux-3.8.4-linode50 #1 SMP Mon Mar 25 15:50:29 EDT 2013
nodename:
machine: i686
clock source: unix
pcre jit disabled
detected number of CPU cores: 8
current working directory: /srv/www/databankinfo.com
detected binary path: /usr/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 1024
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
uWSGI http bound on :8000 fd 4
spawned uWSGI http 1 (pid: 10091)
uwsgi socket 0 bound to TCP address 127.0.0.1:47129 (port auto-assigned) fd 3
Python version: 2.6.6 (r266:84292, Feb 21 2013, 23:54:59) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x8cf8598
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 64000 bytes (62 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0x8cf8598 pid: 10090 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 10090, cores: 1)
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/django/core/handlers/wsgi.py", line 236, in __call__
self.load_middleware()
File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 45, in load_middleware
for middleware_path in settings.MIDDLEWARE_CLASSES:
File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 53, in __getattr__
self._setup(name)
File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 48, in _setup
self._wrapped = Settings(settings_module)
File "/usr/lib/python2.6/site-packages/django/conf/__init__.py", line 134, in __init__
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'databank.settings' (Is it on sys.path?): No module named databank.settings
[pid: 10090|app: 0|req: 1/1] 66.56.35.151 () {38 vars in 669 bytes} [Tue Jul 9 17:34:52 2013] GET / => generated 0 bytes in 1 msecs (HTTP/1.1 500) 0 headers in 0 bytes (0 switches on core 0)
However, I know that settings.py exists in the same directory as wsgi.py.
You need to provide an additional argument to your uwsgi call:
--chdir /path/to/your/project/
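For example, something like this (a sketch using the paths from the question; the exact project root may differ):

uwsgi --http :8000 --chdir /srv/www/domain/projectdatabank --wsgi-file databank/wsgi.py

With --chdir pointing at the directory that contains the databank package (the same folder as manage.py), databank.settings becomes importable. Adding that directory to sys.path in wsgi.py, or passing --pythonpath, would achieve the same thing.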