Celery SchedulingError: an integer is required - django

I'm using Celery on Heroku with Redis as my broker. I've tried RabbitMQ as a broker as well, but keep getting the following error when trying to run a scheduled task:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/celery/beat.py", line 203, in maybe_due
result = self.apply_async(entry, publisher=publisher)
File "/app/.heroku/python/lib/python2.7/site-packages/celery/beat.py", line 259, in apply_async
entry, exc=exc)), sys.exc_info()[2])
File "/app/.heroku/python/lib/python2.7/site-packages/celery/beat.py", line 251, in apply_async
**entry.options)
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/task.py", line 555, in apply_async
**dict(self._get_exec_options(), **options)
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/base.py", line 347, in send_task
with self.producer_or_acquire(producer) as P:
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/base.py", line 402, in producer_or_acquire
producer, self.amqp.producer_pool.acquire, block=True,
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/amqp.py", line 492, in producer_pool
self.app.pool,
File "/app/.heroku/python/lib/python2.7/site-packages/celery/app/base.py", line 608, in pool
self._pool = self.connection().Pool(limit=limit)
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 612, in Pool
return ConnectionPool(self, limit, preload)
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 987, in __init__
preload=preload)
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 833, in __init__
self.setup()
File "/app/.heroku/python/lib/python2.7/site-packages/kombu/connection.py", line 1011, in setup
for i in range(self.limit):
SchedulingError: Couldn't apply scheduled task my_task: an integer is required
This is how my task is written:
@app.task(ignore_result=True)
def my_task():
    do_something()
Any ideas what's going on?

It just occurred to me what was going on. In my settings file, I had the following line:
BROKER_POOL_LIMIT = os.environ.get('BROKER_POOL_LIMIT', 1)
I should have forced that to be an integer:
BROKER_POOL_LIMIT = int(os.environ.get('BROKER_POOL_LIMIT', 1))
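More generally, since os.environ.get always returns a string (or the default), it can help to coerce every numeric setting read from the environment in one place. A minimal sketch using only the standard library (the helper name env_int is my own):
import os

def env_int(name, default):
    # Coerce an environment variable to int, falling back to the
    # default when the variable is unset.
    return int(os.environ.get(name, default))

BROKER_POOL_LIMIT = env_int('BROKER_POOL_LIMIT', 1)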

Related

textract_python_table_parser.py command prompt lacking credentials

I'm trying to get AWS Textract's table-export example working, as suggested in this link.
I'm a complete newbie with AWS and the command prompt, so I'm trying to do exactly as they suggest. I'm running it from Python using this piece of code:
import os
k=os.system("python textract_python_table_parser.py my_pdf_file_path.pdf")
print(k)
The code runs and I get Image loaded my_pdf_file_path.pdf, but at some point it fails on credentials:
Traceback (most recent call last):
File "/Users/santanna_santanna/PycharmProjects/KlooksExplore/PDFWork/textract_python_table_parser.py", line 108, in <module>
main(file_name)
File "/Users/santanna_santanna/PycharmProjects/KlooksExplore/PDFWork/textract_python_table_parser.py", line 94, in main
table_csv = get_table_csv_results(file_name)
File "/Users/santanna_santanna/PycharmProjects/KlooksExplore/PDFWork/textract_python_table_parser.py", line 53, in get_table_csv_results
response = client.analyze_document(Document={'Bytes': bytes_test}, FeatureTypes=['TABLES'])
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/client.py", line 622, in _make_api_call
operation_model, request_dict, request_context)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/client.py", line 641, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/endpoint.py", line 132, in _send_request
request = self.create_request(request_dict, operation_model)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/endpoint.py", line 116, in create_request
operation_name=operation_model.name)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/signers.py", line 160, in sign
auth.add_auth(request)
File "/Users/santanna_santanna/anaconda3/lib/python3.6/site-packages/botocore/auth.py", line 357, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I'm aware I didn't pass any credentials, so the error is natural, but where should I pass them, and what is the right syntax for doing that from Python? Amazon's example doesn't say anything about it.
It depends on where you run your code. For example:
local computer: use the aws configure CLI to set your credentials
EC2 instance: use an instance role
Lambda function: use the Lambda execution role
If you would rather keep everything in Python, see the sketch below.
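As a sketch of the Python route: boto3 (which the Textract example script uses under the hood) accepts credentials directly when you build the client, and also picks up the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables. The key values and region below are placeholders:
import boto3

# Placeholder credentials for illustration only; prefer aws configure,
# environment variables, or an IAM role in practice.
client = boto3.client(
    'textract',
    region_name='us-east-1',
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)
response = client.analyze_document(
    Document={'Bytes': document_bytes},  # document_bytes: the bytes the example script loads
    FeatureTypes=['TABLES'],
)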

Adjust node configuration breaks sqlproxy and scheduler

Today I tried to change the node type of the cluster backing a Cloud Composer environment and to switch to the Ubuntu image instead of COS. I did so by adding a second node pool to the GKE cluster, then deleting the first one so that all workloads migrated.
This produces the following errors in the airflow-sqlproxy logs:
couldn't connect to "XXXXX:europe-west1:XXXXX": ensure that the Cloud SQL API is enabled for your project (https://console.cloud.google.com/flows/enableapi?apiid=sqladmin). Error during createEphemeral for XXXXX:europe-west1:XXXXX: googleapi: Error 403: Insufficient Permission, insufficientPermissions
The scheduler fails to start at all and emits the following stack trace:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python2.7/site-packages/airflow/bin/cli.py", line 826, in scheduler
job.run()
File "/usr/local/lib/python2.7/site-packages/airflow/jobs.py", line 192, in run
session.commit()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 943, in commit
self.transaction.commit()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 467, in commit
self._prepare_impl()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 447, in _prepare_impl
self.session.flush()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2254, in flush
self._flush(objects)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2380, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2344, in _flush
flush_context.execute()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 391, in execute
rec.execute(self)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 556, in execute
uow
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 156, in save_obj
base_mapper, states, uowtransaction
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 286, in _organize_states_for_save(states):
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 1252, in _connections_for_states
connection = uowtransaction.transaction.connection(base_mapper)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 294, in connection
return self._connection_for_bind(bind, execution_options)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 398, in _connection_for_bind
conn = self._parent._connection_for_bind(bind, execution_options)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 409, in _connection_for_bind
conn = bind.contextual_connect()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2123, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2162, in _wrap_pool_connect
e, dialect, self)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception_noconnection
exc_info
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2158, in _wrap_pool_connect
return fn()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 403, in connect
return _ConnectionFairy._checkout(self)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 788, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 532, in checkout
rec = pool._do_get()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1193, in _do_get
self._dec_overflow()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 1190, in _do_get
return self._create_connection()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 350, in _create_connection
return _ConnectionRecord(self)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 477, in __init__
self.__connect(first_connect_check=True)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/pool.py", line 671, in __connect
connection = pool._invoke_creator(self)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 106, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 410, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/usr/local/lib/python2.7/site-packages/MySQLdb/__init__.py", line 86, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 204, in __init__
super(Connection, self).__init__(*args, **kwargs2)
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 0") (Background on this error at: http://sqlalche.me/e/e3q8)
It seems that the connection to the backing SQL database is now broken. It's still the same cluster, but the nodes are different. Are there any additional configurations I must update?
The SQL proxy relies on the credentials of the service account used to create the Composer environment. If you did not change any settings, this should be the Compute Engine default service account.
You should verify that the new node pool and your previous Composer node pool share the same service account.
Additionally, you should verify that the new pool has sufficient scopes; you are likely missing the SQL admin scope. See https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create and the sketch below.
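For example, when recreating the node pool you can pass the scopes explicitly with gcloud's --scopes flag. A hedged sketch (the pool name, cluster name, and exact scope list are placeholders for your setup; the SQL admin scope is shown alongside the usual GKE defaults):
gcloud container node-pools create new-pool \
    --cluster=my-composer-cluster \
    --scopes=https://www.googleapis.com/auth/sqlservice.admin,https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring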

celerybeat send task fails with ssl error

I am using celerybeat to send periodic tasks to a rabbitmq queue. It works as expected for some time, and then sending celery.backend_cleanup fails with an SSL error.
[2016-10-24 04:00:02,309: DEBUG/MainProcess] Channel open
[2016-10-24 04:00:02,443: ERROR/MainProcess] Message Error: Couldn't apply scheduled task celery.backend_cleanup: [Errno 1] _ssl.c:1309: error:1409F07F:SSL routines:SSL3_WRITE_PENDING:bad write retry
[' File "/local/mnt/apps/ipcat/venvs/django16/bin/celery", line 9, in <module>\n load_entry_point(\'celery-ipcat==3.1.23\', \'console_scripts\', \'celery\')()\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/__main__.py", line 30, in main\n main()\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/celery.py", line 81, in main\n cmd.execute_from_commandline(argv)\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/celery.py", line 793, in execute_from_commandline\n super(CeleryCommand, self).execute_from_commandline(argv)))\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/base.py", line 311, in execute_from_commandline\n return self.handle_argv(self.prog_name, argv[1:])\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/celery.py", line 785, in handle_argv\n return self.execute(command, argv)\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/celery.py", line 717, in execute\n ).run_from_argv(self.prog_name, argv[1:], command=argv[0])\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/base.py", line 315, in run_from_argv\n sys.argv if argv is None else argv, command)\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/base.py", line 377, in handle_argv\n return self(*args, **options)\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/base.py", line 274, in __call__\n ret = self.run(*args, **kwargs)\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/bin/beat.py", line 79, in run\n return beat().run()\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/apps/beat.py", line 83, in run\n self.start_scheduler()\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/apps/beat.py", line 112, in start_scheduler\n beat.start()\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/beat.py", line 479, in start\n interval = self.scheduler.tick()\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/beat.py", line 221, in tick\n next_time_to_run = self.maybe_due(entry, self.publisher)\n', ' File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/beat.py", line 207, in maybe_due\n exc, traceback.format_stack(), exc_info=True)\n']
Traceback (most recent call last):
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/beat.py", line 204, in maybe_due
result = self.apply_async(entry, publisher=publisher)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/beat.py", line 260, in apply_async
entry, exc=exc)), sys.exc_info()[2])
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/beat.py", line 252, in apply_async
**entry.options)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/app/task.py", line 565, in apply_async
**dict(self._get_exec_options(), **options)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/app/base.py", line 354, in send_task
reply_to=reply_to or self.oid, **options
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/celery/app/amqp.py", line 310, in publish_task
**kwargs
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/messaging.py", line 172, in publish
routing_key, mandatory, immediate, exchange, declare)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/connection.py", line 436, in _ensured
return fun(*args, **kwargs)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/messaging.py", line 184, in _publish
[maybe_declare(entity) for entity in declare]
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/messaging.py", line 111, in maybe_declare
return maybe_declare(entity, self.channel, retry, **retry_policy)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/common.py", line 113, in maybe_declare
return _maybe_declare(entity, declared, ident, channel)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/common.py", line 120, in _maybe_declare
entity.declare()
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/entity.py", line 521, in declare
self.exchange.declare(nowait)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/kombu/entity.py", line 174, in declare
nowait=nowait, passive=passive,
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/amqp/channel.py", line 615, in exchange_declare
self._send_method((40, 10), args)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/amqp/abstract_channel.py", line 56, in _send_method
self.channel_id, method_sig, args, content,
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/amqp/method_framing.py", line 221, in write_method
write_frame(1, channel, payload)
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/amqp/transport.py", line 182, in write_frame
frame_type, channel, size, payload, 0xce,
File "/local/mnt/apps/ipcat/venvs/django16/lib/python2.7/site-packages/amqp/transport.py", line 254, in _write
n = write(s)
File "/usr/lib64/python2.7/ssl.py", line 172, in write
return self._sslobj.write(data)
SchedulingError: Couldn't apply scheduled task celery.backend_cleanup: [Errno 1] _ssl.c:1309: error:1409F07F:SSL routines:SSL3_WRITE_PENDING:bad write retry
[2016-10-24 04:00:02,743: DEBUG/MainProcess] beat: Waking up in 5.00 minutes.
After this failure, every subsequent task send fails with errors like:
[2016-10-24 12:30:03,026: DEBUG/MainProcess] beat: Synchronizing schedule...
[2016-10-24 12:30:03,253: ERROR/MainProcess] Message Error: Couldn't apply scheduled task aggregate_hw_errata_sign_off_metrics_schedule: 'NoneType' object has no attribute 'do_handshake'
I could disable the celery.backend_cleanup job using CELERY_TASK_RESULT_EXPIRES = None, but I am afraid it might fail on the other configured tasks as well.
Any help or guidance is appreciated.
I think I found the problem. I was creating a connection myself and passing it to the Exchange and Queue instances. I modified the code to let Queue and Exchange establish their own connection to the broker; so far there have been no issues since this change.
Thanks
Old code:
from kombu import Connection, Queue, Exchange

conn = Connection(
    hostname=settings.BROKER_HOST,
    port=settings.BROKER_PORT,
    userid=settings.BROKER_USER,
    password=settings.BROKER_PASSWORD,
    virtual_host=settings.BROKER_VHOST,
    connect_timeout=settings.BROKER_CONNECTION_TIMEOUT,
    ssl=settings.BROKER_USE_SSL,
    transport='pyamqp')
conn.connect()
channel = conn.channel()

# The exchange and queue were pinned to this explicit channel/connection:
exchange = Exchange('my.exchange', type='direct', passive=True, channel=channel)
CELERY_QUEUES = (
    Queue('my.tasks', exchange=exchange, routing_key='my.metrics', channel=conn, passive=True),
)
New Code (fix):
from kombu import Queue, Exchange

# No explicit channel or connection: kombu binds these lazily at use time.
exchange = Exchange('my.exchange', type='direct', passive=True)
CELERY_QUEUES = (
    Queue('my.tasks', exchange=exchange, routing_key='my.metrics', passive=True),
)
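For comparison, the broker connection itself can then stay entirely in the Celery settings, so no hand-made Connection object is needed anywhere. A minimal sketch in the Celery 3.x settings style this project already uses (the URL and vhost are placeholders; 5671 is the conventional AMQPS port):
BROKER_URL = 'amqp://user:password@broker-host:5671/my_vhost'
BROKER_USE_SSL = True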

eb labs download not working in some AWS regions

I am new to dealing with Parse Server and hosting on AWS, but I have noticed that the "eb labs download" command in the terminal works when my Parse Server environment is in N. Virginia, yet comes back with a whole list of errors when the server environment was initially created in Oregon. The errors pertain to HTTP headers. Anyone know why this is happening? Thanks in advance! The error is below:
Downloading application version...
Traceback (most recent call last):
File "/usr/local/bin/eb", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/site-packages/ebcli/core/ebcore.py", line 150, in main
app.run()
File "/usr/local/lib/python2.7/site-packages/cement/core/foundation.py", line 797, in run
return_val = self.controller._dispatch()
File "/usr/local/lib/python2.7/site-packages/cement/core/controller.py", line 472, in _dispatch
return func()
File "/usr/local/lib/python2.7/site-packages/cement/core/controller.py", line 472, in _dispatch
return func()
File "/usr/local/lib/python2.7/site-packages/cement/core/controller.py", line 478, in _dispatch
return func()
File "/usr/local/lib/python2.7/site-packages/ebcli/core/abstractcontroller.py", line 57, in default
self.do_command()
File "/usr/local/lib/python2.7/site-packages/ebcli/labs/download.py", line 36, in do_command
download_source_bundle(app_name, env_name)
File "/usr/local/lib/python2.7/site-packages/ebcli/labs/download.py", line 49, in download_source_bundle
data = s3.get_object(bucket_name, key_name)
File "/usr/local/lib/python2.7/site-packages/ebcli/lib/s3.py", line 68, in get_object
Key=key)
File "/usr/local/lib/python2.7/site-packages/ebcli/lib/s3.py", line 34, in _make_api_call
return aws.make_api_call('s3', operation_name, **operation_options)
File "/usr/local/lib/python2.7/site-packages/ebcli/lib/aws.py", line 218, in make_api_call
response_data = operation(**operation_options)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/client.py", line 251, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/client.py", line 526, in _make_api_call
operation_model, request_dict)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/endpoint.py", line 170, in _send_request
success_response, exception):
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/endpoint.py", line 249, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/utils.py", line 868, in redirect_from_error
new_region = self.get_bucket_region(bucket, response)
File "/Users/Home/Library/Python/2.7/lib/python/site-packages/botocore/utils.py", line 913, in get_bucket_region
response_headers = service_response['ResponseMetadata']['HTTPHeaders']
KeyError: 'HTTPHeaders'
Generally the EB CLI works in a single region at a time. If there is a specific region you want to use, you can specify it with the --region flag:
eb labs download --region us-west-2
Otherwise, it is usually best practice to keep your AWS stack resources in a single region.
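If you always work in one region, you can also persist it in the EB CLI's saved configuration instead of repeating the flag. A sketch of the relevant fragment of .elasticbeanstalk/config.yml (assuming the file already exists from eb init; the region value is a placeholder):
global:
  default_region: us-west-2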

aws no credentials error

I am trying to set up the dynamic thumbnail service thumbor, and to support S3 as storage I need to set up this community-powered pip library for AWS.
It's working well in my local environment, but when I try to host it on one of our servers I get NoCredentialsError. I am assuming this is because of different versions of botocore (the latest one versus the one installed by the pip library). Here is the error log:
File "/usr/local/lib/python2.7/dist-packages/botocore/session.py", line 774, in get_component
# client config from the session
File "/usr/local/lib/python2.7/dist-packages/botocore/session.py", line 174, in <lambda>
self._components.lazy_register_component(
File "/usr/local/lib/python2.7/dist-packages/botocore/session.py", line 453, in get_data
- agent_version is the value of the `user_agent_version`
File "/usr/local/lib/python2.7/dist-packages/botocore/loaders.py", line 119, in _wrapper
data = func(self, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/loaders.py", line 364, in load_data
DataNotFoundError: Unable to load data for: _endpoints
2016-04-24 12:14:34 tornado.application:ERROR Future exception was never retrieved: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 230, in wrapper
yielded = next(result)
File "/usr/local/lib/python2.7/dist-packages/thumbor/handlers/imaging.py", line 31, in check_image
exists = yield gen.maybe_future(self.context.modules.storage.exists(kw['image'][:self.context.config.MAX_ID_LENGTH]))
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 455, in wrapper
future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 443, in wrapper
result = f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tc_aws/aws/storage.py", line 107, in exists
self.storage.get(file_abspath, callback=return_data)
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 455, in wrapper
future.result()
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 215, in result
raise_exc_info(self._exc_info)
File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 443, in wrapper
result = f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tc_aws/aws/bucket.py", line 44, in get
Key=self._clean_key(path),
File "/usr/local/lib/python2.7/dist-packages/tornado_botocore/base.py", line 97, in call
return self._make_api_call(operation_name=self.operation, api_params=kwargs, callback=callback)
File "/usr/local/lib/python2.7/dist-packages/tornado_botocore/base.py", line 60, in _make_api_call
operation_model=operation_model, request_dict=request_dict, callback=callback)
File "/usr/local/lib/python2.7/dist-packages/tornado_botocore/base.py", line 54, in _make_request
request_dict=request_dict, operation_model=operation_model, callback=callback)
File "/usr/local/lib/python2.7/dist-packages/tornado_botocore/base.py", line 32, in _send_request
request = self.endpoint.create_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 126, in create_request
operation_name=operation_model.name)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 124, in sign
signer.add_auth(request=request)
File "/usr/local/lib/python2.7/dist-packages/botocore/auth.py", line 626, in add_auth
raise NoCredentialsError
NoCredentialsError: Unable to locate credentials
Could it be fixed by changing the order in which I install the libraries? The pip library removes the existing, newer version of botocore and installs an older one.
EDIT:
I am running the processes with supervisor, and it seems the process can't access the AWS credentials.
EDIT 2:
The issue got resolved with proper configuration of supervisor: the user for the process started by supervisor did not have access to the config file.
The issue got resolved with proper configuration of supervisor. The user for the subprocess started by supervisor did not have access to the AWS config file, so it worked in the local environment, or when starting the process separately, but not under supervisor.
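For reference, a hedged sketch of the kind of supervisord program section that avoids this: either run the process as the user that owns ~/.aws/credentials, or inject the keys through the environment (the program name, user, and key values are placeholders):
[program:thumbor]
command=thumbor --port=8888
; Run as the user whose home directory holds the AWS config/credentials files.
user=ubuntu
; Alternatively, export the credentials directly so botocore can find them:
environment=AWS_ACCESS_KEY_ID="AKIA_PLACEHOLDER",AWS_SECRET_ACCESS_KEY="SECRET_PLACEHOLDER"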