I'm trying to get Django working with django-celery. I created my project in a virtualenv, using the template https://github.com/xenith/django-base-template and the following packages:
Django==1.6.5
celery==3.1.11
django-celery==3.1.10
My Celery settings in settings/local.py:
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://django:pas****@10.0.1.17:5672/myvhost'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
CELERY_RESULT_BACKEND='djcelery.backends.database:DatabaseBackend'
CELERY_ALWAYS_EAGER = True
I have one periodic task, "Update_All_Feeds". When I start celery beat, everything seems to work fine and the task is executed every 10 seconds.
python manage.py celery beat
celery beat v3.1.11 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://django@10.0.1.17:5672/myvhost
. loader -> djcelery.loaders.DjangoLoader
. scheduler -> djcelery.schedulers.DatabaseScheduler
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
[2014-05-31 20:53:16,544: INFO/MainProcess] beat: Starting...
[2014-05-31 20:53:16,544: INFO/MainProcess] Writing entries...
[2014-05-31 20:53:16,669: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:19,031: WARNING/MainProcess] /home/phosting/python/django/polskifeed/local/lib/python2.7/site-packages/django/db/models/fields/__init__.py:903: RuntimeWarning: DateTimeField FeedItem.pub_date received a naive datetime (2014-05-31 19:21:49) while time zone support is active.
RuntimeWarning)
[2014-05-31 20:53:19,081: INFO/MainProcess] Writing entries...
[2014-05-31 20:53:26,675: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:36,682: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:46,688: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
[2014-05-31 20:53:56,695: INFO/MainProcess] Scheduler: Sending due task Update_All_Feeds (update_all_feeds)
But starting this with celeryd does nothing:
python manage.py celeryd -l DEBUG
-------------- celery@czterykaty v3.1.11 (Cipater)
---- **** -----
--- * *** * -- Linux-2.6.32-bpo.5-xen-amd64-x86_64-with-debian-7.0
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: default:0x13bb510 (djcelery.loaders.DjangoLoader)
- ** ---------- .> transport: amqp://django@10.0.1.17:5672/myvhost
- ** ---------- .> results: djcelery.backends.database:DatabaseBackend
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. update_all_feeds
. update_feeds
[2014-05-31 20:58:55,353: DEBUG/MainProcess] | Worker: Starting Hub
[2014-05-31 20:58:55,354: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,354: DEBUG/MainProcess] | Worker: Starting Pool
[2014-05-31 20:58:55,672: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,675: DEBUG/MainProcess] | Worker: Starting Consumer
[2014-05-31 20:58:55,676: DEBUG/MainProcess] | Consumer: Starting Connection
[2014-05-31 20:58:55,739: DEBUG/MainProcess] Start from server, version: 0.9, properties: {u'information': u'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright (C) 2007-2014 GoPivotal, Inc.', u'capabilities': {u'exchange_exchange_bindings': True, u'connection.blocked': True, u'authentication_failure_close': True, u'basic.nack': True, u'per_consumer_qos': True, u'consumer_priorities': True, u'consumer_cancel_notify': True, u'publisher_confirms': True}, u'cluster_name': u'rabbit@czterykaty.luser.nl', u'platform': u'Erlang/OTP', u'version': u'3.3.1'}, mechanisms: [u'PLAIN', u'AMQPLAIN'], locales: [u'en_US']
[2014-05-31 20:58:55,741: DEBUG/MainProcess] Open OK!
[2014-05-31 20:58:55,744: INFO/MainProcess] Connected to amqp://django@10.0.1.17:5672/myvhost
[2014-05-31 20:58:55,744: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,744: DEBUG/MainProcess] | Consumer: Starting Events
[2014-05-31 20:58:55,791: DEBUG/MainProcess] Start from server, version: 0.9, properties: {u'information': u'Licensed under the MPL. See http://www.rabbitmq.com/', u'product': u'RabbitMQ', u'copyright': u'Copyright (C) 2007-2014 GoPivotal, Inc.', u'capabilities': {u'exchange_exchange_bindings': True, u'connection.blocked': True, u'authentication_failure_close': True, u'basic.nack': True, u'per_consumer_qos': True, u'consumer_priorities': True, u'consumer_cancel_notify': True, u'publisher_confirms': True}, u'cluster_name': u'rabbit@czterykaty.luser.nl', u'platform': u'Erlang/OTP', u'version': u'3.3.1'}, mechanisms: [u'PLAIN', u'AMQPLAIN'], locales: [u'en_US']
[2014-05-31 20:58:55,795: DEBUG/MainProcess] Open OK!
[2014-05-31 20:58:55,797: DEBUG/MainProcess] using channel_id: 1
[2014-05-31 20:58:55,800: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:55,802: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:55,802: DEBUG/MainProcess] | Consumer: Starting Mingle
[2014-05-31 20:58:55,803: INFO/MainProcess] mingle: searching for neighbors
[2014-05-31 20:58:55,805: DEBUG/MainProcess] using channel_id: 1
[2014-05-31 20:58:55,807: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:57,266: INFO/MainProcess] mingle: all alone
[2014-05-31 20:58:57,266: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,267: DEBUG/MainProcess] | Consumer: Starting Gossip
[2014-05-31 20:58:57,268: DEBUG/MainProcess] using channel_id: 2
[2014-05-31 20:58:57,270: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:57,282: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,282: DEBUG/MainProcess] | Consumer: Starting Heart
[2014-05-31 20:58:57,285: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,286: DEBUG/MainProcess] | Consumer: Starting Tasks
[2014-05-31 20:58:57,299: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,299: DEBUG/MainProcess] | Consumer: Starting Control
[2014-05-31 20:58:57,300: DEBUG/MainProcess] using channel_id: 3
[2014-05-31 20:58:57,303: DEBUG/MainProcess] Channel open
[2014-05-31 20:58:57,311: DEBUG/MainProcess] ^-- substep ok
[2014-05-31 20:58:57,311: DEBUG/MainProcess] | Consumer: Starting event loop
[2014-05-31 20:58:57,315: WARNING/MainProcess] celery@czterykaty ready.
[2014-05-31 20:58:57,316: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2014-05-31 20:58:57,317: DEBUG/MainProcess] basic.qos: prefetch_count->16
Periodic tasks are set up through the djcelery admin interface, and the tasks list is empty. This is my first experience with Celery and django-celery, so I'm not sure what is wrong.
I managed to get this working. I adjusted my settings in settings/local.py as follows:
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://django:django123@10.0.1.17:5672/myvhost'
CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
I started celeryd with the -E and -B flags:
python manage.py celeryd -l INFO -E -B
And for monitoring events, it was necessary to start celerycam:
python manage.py celerycam
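One difference between the failing and the working settings is worth noting: the working configuration above no longer sets CELERY_ALWAYS_EAGER. A short annotated fragment of the removed line:

```python
# settings/local.py (from the original, non-working configuration)
# With this flag set, .delay()/.apply_async() execute the task synchronously
# in the calling process and never publish it to the broker -- so celery beat
# appears to run the tasks itself while a separate worker receives nothing.
CELERY_ALWAYS_EAGER = True  # dropped in the working configuration
```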
I want to insert environment variables from an .env file into my containerized Django application, so I can use them to securely populate Django's settings.py.
However, on docker-compose up I receive a UserWarning which apparently originates in the django-environ package (and breaks my code):
/usr/local/lib/python3.9/site-packages/environ/environ.py:628: UserWarning: /app/djangoDocker/.env doesn't exist - if you're not configuring your environment separately, create one. web | warnings.warn(
The output breaks off at that point and (although all the containers claim to be running) I can neither stop them from that console (zsh, Ctrl+C) nor access the website locally. What am I missing? I'd really appreciate any useful input.
Dockerfile (located in the project root):
# pull official base image
FROM python:3.9.5
# set environment variables, grab via os.environ
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
# set work directory
WORKDIR /app
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# add entrypoint script
COPY ./entrypoint.sh ./app
# run entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
# copy project
COPY . /app
docker-compose.yml (located in the root; I've tried both env_file and environment, as in the comments):
version: '3'
services:
  web:
    build: .
    container_name: web
    command: gunicorn djangoDocker.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:80"
    env_file:
      - .env
    # environment:
    #   BASE_URL: ${BASE_URL}
    #   SECRET_KEY: ${SECRET_KEY}
    #   ALLOWED_HOSTS: ${ALLOWED_HOSTS}
    #   DEBUG: ${DEBUG}
    #   SQL_ENGINE: ${SQL_ENGINE}
    #   SQL_DATABASE: ${SQL_DATABASE}
    #   SQL_USER: ${SQL_USER}
    #   SQL_PASSWORD: ${SQL_PASSWORD}
    #   SQL_HOST: ${SQL_HOST}
    #   SQL_PORT: ${SQL_PORT}
    #   EMAIL_HOST_USER: ${EMAIL_HOST_USER}
    #   EMAIL_HOST_PASSWORD: ${EMAIL_HOST_PASSWORD}
    #   TEMPLATE_DIR: ${TEMPLATE_DIR}
    depends_on:
      - pgdb
  pgdb:
    image: postgres
    container_name: pgdb
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - .:/app
    links:
      - web:web
    ports:
      - "80:80"
    depends_on:
      - web
volumes:
  pgdata:
.env (also located in the root):
BASE_URL=localhost
SECRET_KEY=mySecretKey
ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0
DEBUG=True
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=pgdb
SQL_PORT=5432
EMAIL_HOST_USER=my@mail.com
EMAIL_HOST_PASSWORD=myMailPassword
TEMPLATE_DIR=frontend/templates/frontend/
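Whichever loader ends up reading the .env file above, its values reach Python as plain strings, so settings.py has to cast them explicitly. A minimal sketch (variable names taken from the file above, environment simulated for illustration):

```python
import os

# Simulated container environment (values copied from the .env above)
os.environ["DEBUG"] = "True"
os.environ["ALLOWED_HOSTS"] = "localhost,127.0.0.1,0.0.0.0"

# Cast the string values into the types Django expects
DEBUG = os.environ.get("DEBUG", "False").lower() in ("true", "1")
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "").split(",")

print(DEBUG)          # True
print(ALLOWED_HOSTS)  # ['localhost', '127.0.0.1', '0.0.0.0']
```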
Terminal output after running docker-compose up in the root:
pgdb is up-to-date
Recreating web ... done
Recreating djangodocker_nginx_1 ... done
Attaching to pgdb, web, djangodocker_nginx_1
nginx_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx_1 | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
web | [2021-06-04 14:58:09 +0000] [1] [INFO] Starting gunicorn 20.0.4
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
web | [2021-06-04 14:58:09 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
web | [2021-06-04 14:58:09 +0000] [1] [INFO] Using worker: sync
web | [2021-06-04 14:58:09 +0000] [8] [INFO] Booting worker with pid: 8
nginx_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: using the "epoll" event method
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: nginx/1.20.1
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: built by gcc 10.2.1 20201203 (Alpine 10.2.1_pre1)
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: OS: Linux 4.19.121-linuxkit
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker processes
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 23
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 24
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 25
nginx_1 | 2021/06/04 14:58:09 [notice] 1#1: start worker process 26
pgdb |
pgdb | PostgreSQL Database directory appears to contain a database; Skipping initialization
pgdb |
pgdb | 2021-06-04 14:34:00.119 UTC [1] LOG: starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
pgdb | 2021-06-04 14:34:00.120 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
pgdb | 2021-06-04 14:34:00.120 UTC [1] LOG: listening on IPv6 address "::", port 5432
pgdb | 2021-06-04 14:34:00.125 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
pgdb | 2021-06-04 14:34:00.134 UTC [27] LOG: database system was shut down at 2021-06-04 14:21:19 UTC
pgdb | 2021-06-04 14:34:00.151 UTC [1] LOG: database system is ready to accept connections
web | /usr/local/lib/python3.9/site-packages/environ/environ.py:628: UserWarning: /app/djangoDocker/.env doesn't exist - if you're not configuring your environment separately, create one.
web | warnings.warn(
requirements.txt
Django==3.2
gunicorn==20.0.4
djoser==2.1.0
django-environ
psycopg2-binary~=2.8.0
django-cors-headers==3.5.0
django-templated-mail==1.1.1
djangorestframework==3.12.2
djangorestframework-simplejwt==4.7.0
Let me know in case any further information is required.
To this day I do not know what caused the error, but in case anyone else has the same problem: switching to python-decouple instead of django-environ fixed it. Of course you have to adapt everything in settings.py accordingly, e.g. add from decouple import config and DEBUG = config('DEBUG', default=False, cast=bool).
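For illustration, the settings.py changes the answer describes might look like the sketch below (a fragment, not a full settings module; variable names are taken from the .env file above, and python-decouple is assumed to be in requirements.txt):

```python
# settings.py -- python-decouple reads values from .env or the environment
from decouple import config, Csv

SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)          # "True" -> True
ALLOWED_HOSTS = config('ALLOWED_HOSTS', default='', cast=Csv())  # comma-split
```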
I'm trying to set up Celery to talk to AWS SQS. Celery logs the output below, and after that the container goes down and starts up again in a loop. Log:
trim...
[2020-08-05 09:32:35,715: DEBUG/MainProcess] Changing event name from creating-client-class.iot-data to creating-client-class.iot-data-plane
[2020-08-05 09:32:35,798: DEBUG/MainProcess] Changing event name from before-call.apigateway to before-call.api-gateway
[2020-08-05 09:32:35,799: DEBUG/MainProcess] Changing event name from request-created.machinelearning.Predict to request-created.machine-learning.Predict
[2020-08-05 09:32:35,801: DEBUG/MainProcess] Changing event name from before-parameter-build.autoscaling.CreateLaunchConfiguration to before-parameter-build.auto-scaling.CreateLaunchConfiguration
[2020-08-05 09:32:35,801: DEBUG/MainProcess] Changing event name from before-parameter-build.route53 to before-parameter-build.route-53
[2020-08-05 09:32:35,802: DEBUG/MainProcess] Changing event name from request-created.cloudsearchdomain.Search to request-created.cloudsearch-domain.Search
[2020-08-05 09:32:35,802: DEBUG/MainProcess] Changing event name from docs.*.autoscaling.CreateLaunchConfiguration.complete-section to docs.*.auto-scaling.CreateLaunchConfiguration.complete-section
[2020-08-05 09:32:35,805: DEBUG/MainProcess] Changing event name from before-parameter-build.logs.CreateExportTask to before-parameter-build.cloudwatch-logs.CreateExportTask
[2020-08-05 09:32:35,805: DEBUG/MainProcess] Changing event name from docs.*.logs.CreateExportTask.complete-section to docs.*.cloudwatch-logs.CreateExportTask.complete-section
[2020-08-05 09:32:35,806: DEBUG/MainProcess] Changing event name from before-parameter-build.cloudsearchdomain.Search to before-parameter-build.cloudsearch-domain.Search
[2020-08-05 09:32:35,806: DEBUG/MainProcess] Changing event name from docs.*.cloudsearchdomain.Search.complete-section to docs.*.cloudsearch-domain.Search.complete-section
[2020-08-05 09:32:35,806: DEBUG/MainProcess] Setting config variable for region to {'eu-west-1'}
[2020-08-05 09:32:35,808: DEBUG/MainProcess] Loading JSON file: /usr/local/lib/python3.8/site-packages/botocore/data/endpoints.json
[2020-08-05 09:32:35,815: DEBUG/MainProcess] Event choose-service-name: calling handler <function handle_service_name_alias at 0x7fed0d144940>
[2020-08-05 09:32:35,899: DEBUG/MainProcess] Loading JSON file: /usr/local/lib/python3.8/site-packages/botocore/data/sqs/2012-11-05/service-2.json
[2020-08-05 09:32:35,902: DEBUG/MainProcess] Event creating-client-class.sqs: calling handler <function add_generate_presigned_url at 0x7fed096053a0>
[2020-08-05 09:32:36,907: DEBUG/MainProcess] removing tasks from inqueue until task handler finished
-------------- celery@ip-10-10-12-215.eu-central-1.compute.internal v4.4.6 (cliffs)
--- ***** -----
-- ******* ---- Linux-4.14.177-139.254.amzn2.x86_64-x86_64-with-glibc2.2.5 2020-08-05 09:32:35
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: app_name:0x7fed0d6f18e0
- ** ---------- .> transport: sqs://aws_access_key:**@localhost//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. app_name.tasks.check_new_users
. app_name.tasks.send_mail_for_new_user
. app_name.tasks.test_task
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
Settings in Django are as follows (variables are taken from SSM):
AWS_REGION = os.getenv("AWS_REGION")
BROKER_URL = os.getenv("BROKER_URL", "amqp://app_name-rabbitmq//")
BROKER_TRANSPORT = 'sqs'
BROKER_TRANSPORT_OPTIONS = {
    "region": {AWS_REGION},
    "polling_interval": 60,
    "queue_name_prefix": "prod-"
}
CELERY_BROKER_URL = BROKER_URL
CELERY_BROKER_TRANSPORT_OPTIONS = BROKER_TRANSPORT_OPTIONS
CELERY_ACCEPT_CONTENT = ['json', 'yaml']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ENABLE_REMOTE_CONTROL = False
CELERY_SEND_EVENTS = False
CELERY_RESULT_BACKEND = None
RESULT_BACKEND = None
CELERY_IMPORTS = ("app_name.tasks",)
CELERYBEAT_SCHEDULE = {
    "check_new_users": {"task": "tasks.app_name.check_new_users", "schedule": crontab(hour="9,17", minute=0)},
}
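One detail worth checking against the log line `Setting config variable for region to {'eu-west-1'}` above: in the settings, `"region": {AWS_REGION}` is a Python set literal, not a string. (Separately, note the schedule entry names the task tasks.app_name.check_new_users while the worker registers app_name.tasks.check_new_users.) A minimal demonstration of the set-literal pitfall:

```python
AWS_REGION = "eu-west-1"

# {AWS_REGION} inside the dict literal creates a one-element *set*,
# which is what the worker log echoes back as {'eu-west-1'}
options = {
    "region": {AWS_REGION},
    "polling_interval": 60,
    "queue_name_prefix": "prod-",
}
print(type(options["region"]).__name__)  # set

# The transport expects the plain string instead:
options["region"] = AWS_REGION
print(options["region"])                 # eu-west-1
```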
Has anyone had a situation like this and could help?
The IAM role is SQS full access for the time being.
EDIT: If any other detail is needed, please let me know.
I'm having trouble starting a Celery worker on an Elastic Beanstalk instance. After a couple of seconds, it just quits unexpectedly with no error. The instance should have enough RAM. I'm attaching the output of the worker with log level debug (sensitive information replaced by **). Any guidance would be super helpful. Thanks.
(venv) [ec2-user@ip-** app]$ celery -A app worker -l debug
[12/Mar/2018 10:18:29] DEBUG [raven.contrib.django.client.DjangoClient:265] Configuring Raven for host: <raven.conf.remote.RemoteConfig object at 0x7f556309e940>
-------------- celery@ip-** v4.1.0 (latentcall)
---- **** -----
--- * *** * -- Linux-4.9.75-25.55.amzn1.x86_64-x86_64-with-glibc2.3.4 2018-03-12 10:18:29
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: app:0x7f556998edd8
- ** ---------- .> transport: sqs://**:**#localhost//
- ** ---------- .> results:
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. common.tasks.send_templated_email
. orders.tasks.import_orders_from_all_companies
[2018-03-12 10:18:29,783: DEBUG/MainProcess] Setting config variable for region to 'eu-central-1'
[2018-03-12 10:18:29,784: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,784: DEBUG/MainProcess] Loading variable config_file from defaults.
[2018-03-12 10:18:29,784: DEBUG/MainProcess] Loading variable credentials_file from defaults.
[2018-03-12 10:18:29,784: DEBUG/MainProcess] Loading variable data_path from defaults.
[2018-03-12 10:18:29,785: DEBUG/MainProcess] Loading variable region from instance vars with value 'eu-central-1'.
[2018-03-12 10:18:29,785: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,785: DEBUG/MainProcess] Loading variable ca_bundle from defaults.
[2018-03-12 10:18:29,786: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,786: DEBUG/MainProcess] Loading variable api_versions from defaults.
[2018-03-12 10:18:29,786: DEBUG/MainProcess] Loading JSON file: /opt/python/run/venv/local/lib/python3.6/site-packages/botocore/data/endpoints.json
[2018-03-12 10:18:29,790: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,790: DEBUG/MainProcess] Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f55633fe048>
[2018-03-12 10:18:29,793: DEBUG/MainProcess] Loading JSON file: /opt/python/run/venv/local/lib/python3.6/site-packages/botocore/data/sqs/2012-11-05/service-2.json
[2018-03-12 10:18:29,796: DEBUG/MainProcess] Event creating-client-class.sqs: calling handler <function add_generate_presigned_url at 0x7f5563448d90>
[2018-03-12 10:18:29,797: DEBUG/MainProcess] The s3 config key is not a dictionary type, ignoring its value of: None
[2018-03-12 10:18:29,802: DEBUG/MainProcess] Setting sqs timeout as (60, 60)
[2018-03-12 10:18:29,802: DEBUG/MainProcess] Loading JSON file: /opt/python/run/venv/local/lib/python3.6/site-packages/botocore/data/_retry.json
[2018-03-12 10:18:29,803: DEBUG/MainProcess] Registering retry handlers for service: sqs
[2018-03-12 10:18:29,803: DEBUG/MainProcess] Event before-parameter-build.sqs.ListQueues: calling handler <function generate_idempotent_uuid at 0x7f5563406378>
[2018-03-12 10:18:29,804: DEBUG/MainProcess] Making request for OperationModel(name=ListQueues) (verify_ssl=True) with params: {'url_path': '/', 'query_string': '', 'method': 'POST', 'headers': {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'Boto3/1.5.16 Python/3.6.2 Linux/4.9.75-25.55.amzn1.x86_64 Botocore/1.8.30'}, 'body': {'Action': 'ListQueues', 'Version': '2012-11-05', 'QueueNamePrefix': ''}, 'url': 'https://eu-central-1.queue.amazonaws.com/', 'context': {'client_region': 'eu-central-1', 'client_config': <botocore.config.Config object at 0x7f5562069f28>, 'has_streaming_input': False, 'auth_type': None}}
[2018-03-12 10:18:29,804: DEBUG/MainProcess] Event request-created.sqs.ListQueues: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f5562069cf8>>
[2018-03-12 10:18:29,804: DEBUG/MainProcess] Event choose-signer.sqs.ListQueues: calling handler <function set_operation_specific_signer at 0x7f5563406268>
[2018-03-12 10:18:29,805: DEBUG/MainProcess] Calculating signature using v4 auth.
[2018-03-12 10:18:29,805: DEBUG/MainProcess] CanonicalRequest:
POST
/
content-type:application/x-www-form-urlencoded; charset=utf-8
host:eu-central-1.queue.amazonaws.com
x-amz-date:20180312T091829Z
content-type;host;x-amz-date
**
[2018-03-12 10:18:29,805: DEBUG/MainProcess] StringToSign:
AWS4-HMAC-SHA256
20180312T091829Z
20180312/eu-central-1/sqs/aws4_request
**
[2018-03-12 10:18:29,806: DEBUG/MainProcess] Signature:
**
[2018-03-12 10:18:29,806: DEBUG/MainProcess] Sending http request: <PreparedRequest [POST]>
[2018-03-12 10:18:29,807: INFO/MainProcess] Starting new HTTPS connection (1): eu-central-1.queue.amazonaws.com
[2018-03-12 10:18:29,839: DEBUG/MainProcess] "POST / HTTP/1.1" 200 409
[2018-03-12 10:18:29,840: DEBUG/MainProcess] Response headers: {'server': 'Server', 'date': 'Mon, 12 Mar 2018 09:18:29 GMT', 'content-type': 'text/xml', 'content-length': '409', 'connection': 'keep-alive', 'x-amzn-requestid': '**'}
[2018-03-12 10:18:29,840: DEBUG/MainProcess] Response body:
b'<?xml version="1.0"?><ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><ListQueuesResult><QueueUrl>**</QueueUrl><QueueUrl>**</QueueUrl></ListQueuesResult><ResponseMetadata><RequestId>**</RequestId></ResponseMetadata></ListQueuesResponse>'
[2018-03-12 10:18:29,841: DEBUG/MainProcess] Event needs-retry.sqs.ListQueues: calling handler <botocore.retryhandler.RetryHandler object at 0x7f556201c390>
[2018-03-12 10:18:29,841: DEBUG/MainProcess] No retry needed.
[2018-03-12 10:18:29,841: INFO/MainProcess] Connected to sqs://**:**@localhost//
[2018-03-12 10:18:29,850: DEBUG/MainProcess] Setting config variable for region to 'eu-central-1'
[2018-03-12 10:18:29,850: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,851: DEBUG/MainProcess] Loading variable config_file from defaults.
[2018-03-12 10:18:29,851: DEBUG/MainProcess] Loading variable credentials_file from defaults.
[2018-03-12 10:18:29,851: DEBUG/MainProcess] Loading variable data_path from defaults.
[2018-03-12 10:18:29,852: DEBUG/MainProcess] Loading variable region from instance vars with value 'eu-central-1'.
[2018-03-12 10:18:29,852: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,852: DEBUG/MainProcess] Loading variable ca_bundle from defaults.
[2018-03-12 10:18:29,852: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,852: DEBUG/MainProcess] Loading variable api_versions from defaults.
[2018-03-12 10:18:29,853: DEBUG/MainProcess] Loading JSON file: /opt/python/run/venv/local/lib/python3.6/site-packages/botocore/data/endpoints.json
[2018-03-12 10:18:29,857: DEBUG/MainProcess] Loading variable profile from defaults.
[2018-03-12 10:18:29,857: DEBUG/MainProcess] Event choose-service-name: calling handler <function handle_service_name_alias at 0x7f55633fe048>
[2018-03-12 10:18:29,861: DEBUG/MainProcess] Loading JSON file: /opt/python/run/venv/local/lib/python3.6/site-packages/botocore/data/sqs/2012-11-05/service-2.json
[2018-03-12 10:18:29,863: DEBUG/MainProcess] Event creating-client-class.sqs: calling handler <function add_generate_presigned_url at 0x7f5563448d90>
[2018-03-12 10:18:29,863: DEBUG/MainProcess] The s3 config key is not a dictionary type, ignoring its value of: None
[2018-03-12 10:18:29,865: DEBUG/MainProcess] Setting sqs timeout as (60, 60)
[2018-03-12 10:18:29,865: DEBUG/MainProcess] Loading JSON file: /opt/python/run/venv/local/lib/python3.6/site-packages/botocore/data/_retry.json
[2018-03-12 10:18:29,866: DEBUG/MainProcess] Registering retry handlers for service: sqs
[2018-03-12 10:18:29,866: DEBUG/MainProcess] Event before-parameter-build.sqs.ListQueues: calling handler <function generate_idempotent_uuid at 0x7f5563406378>
[2018-03-12 10:18:29,866: DEBUG/MainProcess] Making request for OperationModel(name=ListQueues) (verify_ssl=True) with params: {'url_path': '/', 'query_string': '', 'method': 'POST', 'headers': {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'Boto3/1.5.16 Python/3.6.2 Linux/4.9.75-25.55.amzn1.x86_64 Botocore/1.8.30'}, 'body': {'Action': 'ListQueues', 'Version': '2012-11-05', 'QueueNamePrefix': ''}, 'url': 'https://eu-central-1.queue.amazonaws.com/', 'context': {'client_region': 'eu-central-1', 'client_config': <botocore.config.Config object at 0x7f5561acfbe0>, 'has_streaming_input': False, 'auth_type': None}}
[2018-03-12 10:18:29,867: DEBUG/MainProcess] Event request-created.sqs.ListQueues: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f5561acfb38>>
[2018-03-12 10:18:29,867: DEBUG/MainProcess] Event choose-signer.sqs.ListQueues: calling handler <function set_operation_specific_signer at 0x7f5563406268>
[2018-03-12 10:18:29,867: DEBUG/MainProcess] Calculating signature using v4 auth.
[2018-03-12 10:18:29,868: DEBUG/MainProcess] CanonicalRequest:
POST
/
content-type:application/x-www-form-urlencoded; charset=utf-8
host:eu-central-1.queue.amazonaws.com
x-amz-date:20180312T091829Z
content-type;host;x-amz-date
**
[2018-03-12 10:18:29,868: DEBUG/MainProcess] StringToSign:
AWS4-HMAC-SHA256
20180312T091829Z
20180312/eu-central-1/sqs/aws4_request
**
[2018-03-12 10:18:29,868: DEBUG/MainProcess] Signature:
**
[2018-03-12 10:18:29,869: DEBUG/MainProcess] Sending http request: <PreparedRequest [POST]>
[2018-03-12 10:18:29,869: INFO/MainProcess] Starting new HTTPS connection (1): eu-central-1.queue.amazonaws.com
[2018-03-12 10:18:29,895: DEBUG/MainProcess] "POST / HTTP/1.1" 200 409
[2018-03-12 10:18:29,895: DEBUG/MainProcess] Response headers: {'server': 'Server', 'date': 'Mon, 12 Mar 2018 09:18:29 GMT', 'content-type': 'text/xml', 'content-length': '409', 'connection': 'keep-alive', 'x-amzn-requestid': '5b132733-22e0-5567-8762-74136ac526ec'}
[2018-03-12 10:18:29,896: DEBUG/MainProcess] Response body:
b'<?xml version="1.0"?><ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><ListQueuesResult><QueueUrl>**</QueueUrl><QueueUrl>**</QueueUrl></ListQueuesResult><ResponseMetadata><RequestId>5b132733-22e0-5567-8762-74136ac526ec</RequestId></ResponseMetadata></ListQueuesResponse>'
[2018-03-12 10:18:29,896: DEBUG/MainProcess] Event needs-retry.sqs.ListQueues: calling handler <botocore.retryhandler.RetryHandler object at 0x7f5561a763c8>
[2018-03-12 10:18:29,896: DEBUG/MainProcess] No retry needed.
[2018-03-12 10:18:29,899: DEBUG/MainProcess] Event before-parameter-build.sqs.GetQueueAttributes: calling handler <function generate_idempotent_uuid at 0x7f5563406378>
[2018-03-12 10:18:29,900: DEBUG/MainProcess] Making request for OperationModel(name=GetQueueAttributes) (verify_ssl=True) with params: {'url_path': '/', 'query_string': '', 'method': 'POST', 'headers': {'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'Boto3/1.5.16 Python/3.6.2 Linux/4.9.75-25.55.amzn1.x86_64 Botocore/1.8.30'}, 'body': {'Action': 'GetQueueAttributes', 'Version': '2012-11-05', 'QueueUrl': '**', 'AttributeName.1': 'ApproximateNumberOfMessages'}, 'url': 'https://eu-central-1.queue.amazonaws.com/', 'context': {'client_region': 'eu-central-1', 'client_config': <botocore.config.Config object at 0x7f5562069f28>, 'has_streaming_input': False, 'auth_type': None}}
[2018-03-12 10:18:29,900: DEBUG/MainProcess] Event request-created.sqs.GetQueueAttributes: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7f5562069cf8>>
[2018-03-12 10:18:29,900: DEBUG/MainProcess] Event choose-signer.sqs.GetQueueAttributes: calling handler <function set_operation_specific_signer at 0x7f5563406268>
[2018-03-12 10:18:29,901: DEBUG/MainProcess] Calculating signature using v4 auth.
[2018-03-12 10:18:29,901: DEBUG/MainProcess] CanonicalRequest:
POST
/
content-type:application/x-www-form-urlencoded; charset=utf-8
host:eu-central-1.queue.amazonaws.com
x-amz-date:20180312T091829Z
content-type;host;x-amz-date
**
[2018-03-12 10:18:29,901: DEBUG/MainProcess] StringToSign:
AWS4-HMAC-SHA256
20180312T091829Z
20180312/eu-central-1/sqs/aws4_request
**
[2018-03-12 10:18:29,901: DEBUG/MainProcess] Signature:
9fb0d1ad68b5d25bf148cc11857b1e1083418557229ca2c47e8b525b54880b74
[2018-03-12 10:18:29,902: DEBUG/MainProcess] Sending http request: <PreparedRequest [POST]>
[2018-03-12 10:18:29,910: DEBUG/MainProcess] "POST / HTTP/1.1" 200 357
[2018-03-12 10:18:29,911: DEBUG/MainProcess] Response headers: {'server': 'Server', 'date': 'Mon, 12 Mar 2018 09:18:29 GMT', 'content-type': 'text/xml', 'content-length': '357', 'connection': 'keep-alive', 'x-amzn-requestid': '9aa38f1d-25e5-576d-a9a9-dc3d6dc029a0'}
[2018-03-12 10:18:29,911: DEBUG/MainProcess] Response body:
b'<?xml version="1.0"?><GetQueueAttributesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><GetQueueAttributesResult><Attribute><Name>ApproximateNumberOfMessages</Name><Value>0</Value></Attribute></GetQueueAttributesResult><ResponseMetadata><RequestId>**</RequestId></ResponseMetadata></GetQueueAttributesResponse>'
[2018-03-12 10:18:29,912: DEBUG/MainProcess] Event needs-retry.sqs.GetQueueAttributes: calling handler <botocore.retryhandler.RetryHandler object at 0x7f556201c390>
[2018-03-12 10:18:29,912: DEBUG/MainProcess] No retry needed.
[2018-03-12 10:18:29,921: DEBUG/MainProcess] Canceling task consumer...
[2018-03-12 10:18:30,926: DEBUG/MainProcess] Canceling task consumer...
[2018-03-12 10:18:30,926: DEBUG/MainProcess] Closing consumer channel...
[2018-03-12 10:18:30,926: DEBUG/MainProcess] removing tasks from inqueue until task handler finished
I have solved the issue. The Amazon instance required PyCurl and some additional packages to connect properly to SQS.
This assumes you already have a script file to run the celery daemon (run_supervised_celeryd.sh).
Here is my EB config file:
packages:
  yum:
    libjpeg-turbo-devel: []
    libpng-devel: []
    libcurl-devel: []

container_commands:
  01_migrate:
    command: "django-admin.py migrate --noinput"
    leader_only: true
  02_collectstatic:
    command: "python manage.py collectstatic --noinput"
  03_pycurl:
    command: 'source /opt/python/run/venv/bin/activate && pip3 install /usr/local/share/pycurl-7.43.0.tar.gz --global-option="--with-nss" --upgrade'
  04_celery_tasks_run:
    command: "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh"
    leader_only: true

files:
  "/usr/local/share/pycurl-7.43.0.tar.gz":
    mode: "000644"
    owner: root
    group: root
    source: https://pypi.python.org/packages/source/p/pycurl/pycurl-7.43.0.tar.gz
Also add the environment variable PYCURL_SSL_LIBRARY="nss".
All of the settings listed above solved the issue.
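As a quick post-deploy sanity check, pycurl's version string names the SSL backend it was compiled against, so you can confirm the NSS build actually took effect. A small sketch of that check (the version strings below are illustrative; on the instance you would pass `pycurl.version`):

```python
def ssl_backend(version_string):
    """Return the SSL library named in a pycurl/libcurl version string.

    pycurl.version looks like 'PycURL/7.43.0 libcurl/7.51.0 NSS/3.21 ...';
    one token names the SSL library the build linked against.
    """
    for token in version_string.split():
        name = token.split('/')[0]
        if name in ('NSS', 'OpenSSL', 'GnuTLS'):
            return name
    return None
```

On the EB instance, `ssl_backend(pycurl.version)` should come back as 'NSS' after the rebuild.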
I am using celery 3.1.9 with SQS. I have a worker, common_w, running as a daemon; it consumes a common queue on SQS.
The worker unexpectedly stops processing tasks, with no exceptions and no errors.
Last log lines with the -l DEBUG option:
[2014-09-03 21:01:14,766: DEBUG/MainProcess] Method: GET
[2014-09-03 21:01:14,767: DEBUG/MainProcess] Path: /684818426251/dev_common_w_ip-10-84-163-209-celery-pidbox
[2014-09-03 21:01:14,767: DEBUG/MainProcess] Data:
[2014-09-03 21:01:14,767: DEBUG/MainProcess] Headers: {}
[2014-09-03 21:01:14,767: DEBUG/MainProcess] Host: eu-west-1.queue.amazonaws.com
[2014-09-03 21:01:14,767: DEBUG/MainProcess] Port: 443
[2014-09-03 21:01:14,767: DEBUG/MainProcess] Params: {'Action': 'ReceiveMessage', 'Version': '2012-11-05', 'MaxNumberOfMessages': 10}
[2014-09-03 21:01:14,767: DEBUG/MainProcess] Token: None
[2014-09-03 21:01:14,767: DEBUG/MainProcess] CanonicalRequest:
GET
/684818426251/dev_common_w_ip-10-84-163-209-celery-pidbox
Action=ReceiveMessage&MaxNumberOfMessages=10&Version=2012-11-05
host:eu-west-1.queue.amazonaws.com
x-amz-date:20140903T170114Z
host;x-amz-date
e3b0c44298fc1c149afbf4c899sdfasf32wefwef49b934ca495991b7852b855
[2014-09-03 21:01:14,768: DEBUG/MainProcess] StringToSign:
AWS4-HMAC-SHA256
20140903T170114Z
20140903/eu-west-1/sqs/aws4_request
9a9761b49ba9a06e469bwkfj48u83yghkhejwejlr8fce8eb078ac8c4c9ffd9e
[2014-09-03 21:01:14,768: DEBUG/MainProcess] Signature:
2de3c082bc6f01f5d5ecd66b6r89283ryuu8j8rrdaf0c40eba6cc0ceb62df6e
[2014-09-03 21:01:14,824: DEBUG/MainProcess] <?xml version="1.0"?><ReceiveMessageResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><ReceiveMessageResult/><ResponseMetadata><RequestId>4712119a-38ca-51b4-bfd1-5d1f8fu8uc4</RequestId></ResponseMetadata></ReceiveMessageResponse>
[2014-09-03 21:01:14,824: INFO/MainProcess] Received task: skazka.sender.tasks.wait_action_worker[f84c52fe-8748-4c81-b718-f23f23fasdgbg34g]
[2014-09-03 21:01:14,824: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x21666e0> (args:('skazka.sender.tasks.wait_action_worker', 'f84c52fe-8748-4c81-b718-f23f23fasdgbg34g', (1967L,), {}, {'utc': True, u'is_eager': False, 'chord': None, u'group': None, 'args': (1967L,), 'retries': 0, u'delivery_info': {u'priority': 0, u'redelivered': None, u'routing_key': u'common', u'exchange': u'common'}, 'expires': None, u'hostname': 'common_w#ip-10-84-163-209', 'task': 'skazka.sender.tasks.wait_action_worker', 'callbacks': None, u'correlation_id': u'f84c52fe-8748-4c81-b718-f23f23fasdgbg34g', 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, u'reply_to': u'07a91182-23a6-3afb-b3cc-70a2fa3fw333', 'id': 'f84c52fe-8748-4c81-b718-f23f23fasdgbg34g', u'headers': {}}) kwargs:{})
UPDATE:
strace says:
futex(0x7ffff79b3e00, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
Then I set this in the config:
CELERYD_FORCE_EXECV = True
So far it works fine...
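For reference, a minimal sketch of the workaround in a celery config module (the broker URL and queue name below are placeholders, not the real settings from the question):

```python
# celeryconfig.py -- minimal sketch; values are placeholders.
BROKER_URL = 'sqs://'            # SQS transport, credentials taken from the environment
CELERY_DEFAULT_QUEUE = 'common'

# Workaround for workers silently hanging in a futex after fork: force
# each prefork pool process to exec a fresh Python interpreter on start.
CELERYD_FORCE_EXECV = True
```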
I've been trying to configure django-ses to send outgoing emails and am not sure what is wrong here. I am receiving a strange error message.
settings.py
# Email Configuration using Amazon SES Services
EMAIL_BACKEND = 'django_ses.SESBackend'
# These are optional -- if they're set as environment variables they won't
# need to be set here as well
AWS_SES_ACCESS_KEY_ID = 'xxxxxxx'
AWS_SES_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxx'
# Additionally, you can specify an optional region, like so:
AWS_SES_REGION_NAME = 'us-east-1'
AWS_SES_REGION_ENDPOINT = 'email-smtp.us-east-1.amazonaws.com'
In my design, I insert all emails into a table and then use a celery task to go through all pending emails and fire them.
Here is my tasks.py:
@task(name='common_lib.send_notification', ignore_result=True)
@transaction.commit_manually
def fire_pending_email():
    try:
        Notification = get_model('common_lib', 'Notification')
        NotificationEmail = get_model('common_lib', 'NotificationEmail')
        pending_notifications = Notification.objects.values_list('id', flat=True).filter(status=Notification.STATUS_PENDING)
        for email in NotificationEmail.objects.filter(notification__in=pending_notifications):
            msg = EmailMultiAlternatives(email.subject, email.text_body, 'noreply@xx.com.xx', [email.send_to, ])
            if email.html_body:
                msg.attach_alternative(email.html_body, "text/html")
            msg.send()
        transaction.commit()
        return 'Successful'
    except Exception as e:
        transaction.rollback()
        logging.error(str(e))
    finally:
        pass
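The selection logic in the task can be sketched independently of Django, which makes it easy to unit-test; the dict-based stand-ins below are illustrative substitutes for the Notification and NotificationEmail querysets, not the real models:

```python
STATUS_PENDING = 'P'

def pending_notification_ids(notifications):
    # Stand-in for Notification.objects.values_list('id', flat=True)
    #                                  .filter(status=STATUS_PENDING)
    return [n['id'] for n in notifications if n['status'] == STATUS_PENDING]

def emails_to_fire(emails, notifications):
    # Stand-in for NotificationEmail.objects.filter(notification__in=pending)
    pending = set(pending_notification_ids(notifications))
    return [e for e in emails if e['notification_id'] in pending]
```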
Yet in the celery debug console I am seeing the following error:
[2012-11-13 11:45:28,061: INFO/MainProcess] Got task from broker: common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384]
[2012-11-13 11:45:28,069: DEBUG/MainProcess] Mediator: Running callback for task: common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384]
[2012-11-13 11:45:28,069: DEBUG/MainProcess] TaskPool: Apply <function trace_task_ret at 0x9f38a3c> (args:('common_lib.send_notification', '4dc71dee-fc7c-4ddc-a02c-4097c73e4384', [], {}, {'retries': 0, 'is_eager': False, 'task': 'common_lib.send_notification', 'group': None, 'eta': None, 'delivery_info': {'priority': None, 'routing_key': u'celery', 'exchange': u'celery'}, 'args': [], 'expires': None, 'callbacks': None, 'errbacks': None, 'hostname': 'ubuntu', 'kwargs': {}, 'id': '4dc71dee-fc7c-4ddc-a02c-4097c73e4384', 'utc': True}) kwargs:{})
[2012-11-13 11:45:28,077: DEBUG/MainProcess] Task accepted: common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384] pid:8256
[2012-11-13 11:45:28,097: DEBUG/MainProcess] (0.001) SELECT `common_lib_notification_email`.`id`, `common_lib_notification_email`.`notification_id`, `common_lib_notification_email`.`send_to`, `common_lib_notification_email`.`template`, `common_lib_notification_email`.`subject`, `common_lib_notification_email`.`html_body`, `common_lib_notification_email`.`text_body` FROM `common_lib_notification_email` WHERE `common_lib_notification_email`.`notification_id` IN (SELECT U0.`id` FROM `common_lib_notification` U0 WHERE U0.`status` = 'P' ); args=(u'P',)
[2012-11-13 11:45:28,103: DEBUG/MainProcess] Method: POST
[2012-11-13 11:45:28,107: DEBUG/MainProcess] Path: /
[2012-11-13 11:45:28,107: DEBUG/MainProcess] Data: Action=GetSendQuota
[2012-11-13 11:45:28,107: DEBUG/MainProcess] Headers: {'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}
[2012-11-13 11:45:28,109: DEBUG/MainProcess] Host: email-smtp.us-east-1.amazonaws.com
[2012-11-13 11:45:28,109: DEBUG/MainProcess] establishing HTTPS connection: host=email-smtp.us-east-1.amazonaws.com, kwargs={}
[2012-11-13 11:45:28,109: DEBUG/MainProcess] Token: None
[2012-11-13 11:45:28,702: DEBUG/MainProcess] wrapping ssl socket; CA certificate file=/home/mo/projects/garageenv/local/lib/python2.7/site-packages/boto/cacerts/cacerts.txt
[2012-11-13 11:45:29,385: DEBUG/MainProcess] validating server certificate: hostname=email-smtp.us-east-1.amazonaws.com, certificate hosts=[u'email-smtp.us-east-1.amazonaws.com']
[2012-11-13 11:45:39,618: ERROR/MainProcess] <unknown>:1:0: syntax error
[2012-11-13 11:45:39,619: INFO/MainProcess] Task common_lib.send_notification[4dc71dee-fc7c-4ddc-a02c-4097c73e4384] succeeded in 11.5491399765s: None
UPDATE
when I changed the setting to
AWS_SES_REGION_ENDPOINT = 'email.us-east-1.amazonaws.com'
I got a different error, as below
[2012-11-13 13:24:05,907: DEBUG/MainProcess] Method: POST
[2012-11-13 13:24:05,916: DEBUG/MainProcess] Path: /
[2012-11-13 13:24:05,917: DEBUG/MainProcess] Data: Action=GetSendQuota
[2012-11-13 13:24:05,917: DEBUG/MainProcess] Headers: {'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}
[2012-11-13 13:24:05,918: DEBUG/MainProcess] Host: email.us-east-1.amazonaws.com
[2012-11-13 13:24:05,918: DEBUG/MainProcess] establishing HTTPS connection: host=email.us-east-1.amazonaws.com, kwargs={}
[2012-11-13 13:24:05,919: DEBUG/MainProcess] Token: None
[2012-11-13 13:24:06,511: DEBUG/MainProcess] wrapping ssl socket; CA certificate file=/home/mo/projects/garageenv/local/lib/python2.7/site-packages/boto/cacerts/cacerts.txt
[2012-11-13 13:24:06,952: DEBUG/MainProcess] validating server certificate: hostname=email.us-east-1.amazonaws.com, certificate hosts=['email.us-east-1.amazonaws.com', 'email.amazonaws.com']
[2012-11-13 13:24:07,177: ERROR/MainProcess] 403 Forbidden
[2012-11-13 13:24:07,178: ERROR/MainProcess] <ErrorResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
<Error>
<Type>Sender</Type>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message>
</Error>
<RequestId>41c15592-2d7c-11e2-a590-f33d1568f3ea</RequestId>
</ErrorResponse>
[2012-11-13 13:24:07,180: ERROR/MainProcess] BotoServerError: 403 Forbidden
<ErrorResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
<Error>
<Type>Sender</Type>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.</Message>
</Error>
<RequestId>41c15592-2d7c-11e2-a590-f33d1568f3ea</RequestId>
</ErrorResponse>
[2012-11-13 13:24:07,184: INFO/MainProcess] Task common_lib.send_notification[3b6a049e-d5cb-45f4-842b-633d816a132e] succeeded in 1.31089687347s: None
Can you try using this:
AWS_SES_REGION_ENDPOINT = 'email.us-east-1.amazonaws.com'
and not the SMTP server setting from AWS's dashboard? (You used AWS_SES_REGION_ENDPOINT = 'email-smtp.us-east-1.amazonaws.com', as mentioned above.)
Once you updated this, you got a new error, as shown in your question's update. This confirms that you now have the correct AWS_SES_REGION_ENDPOINT setting.
The reason you are getting this new error is most likely that you are confusing the access keys and giving Amazon the wrong set of credentials; see the detailed comments here: https://github.com/boto/boto/issues/476#issuecomment-7679158
Follow the solution prescribed in that comment and you should be fine, I think.
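To make the distinction concrete: django-ses talks to the SES HTTP API, so it needs IAM access keys, not the separate SMTP username/password that the SES console generates for the SMTP endpoint. A settings sketch with placeholder values:

```python
# settings.py fragment -- all values are placeholders.
# Use IAM access keys here (the API credential pair), NOT the SMTP
# username/password shown on the SES console's SMTP settings page.
AWS_SES_ACCESS_KEY_ID = 'AKIAXXXXXXXXXXXXXXXX'
AWS_SES_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
AWS_SES_REGION_NAME = 'us-east-1'
AWS_SES_REGION_ENDPOINT = 'email.us-east-1.amazonaws.com'  # API endpoint, not email-smtp.*
```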