botocore.exceptions.ProfileNotFound - Pass AWS credentials to docker image - amazon-web-services

This seems to be an issue many people have faced but the solutions I tried haven't solved it:
I have a Python app that I dockerized and want to push to an EC2 container. However, once dockerized, the app has trouble (locally) accessing my AWS credentials:
santeau_session = boto3.Session(profile_name='Santeau')
db = santeau_session.resource('dynamodb', region_name='us-west-2')
MainPage = db.Table('mp')
When I try to pass them this way: docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro ks/mz
I get:
Traceback (most recent call last):
  File "./main.py", line 17, in <module>
    santeau_session = boto3.Session(profile_name='Santeau')
  File "/usr/local/lib/python3.8/site-packages/boto3/session.py", line 80, in __init__
    self._setup_loader()
  File "/usr/local/lib/python3.8/site-packages/boto3/session.py", line 120, in _setup_loader
    self._loader = self._session.get_component('data_loader')
  File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 698, in get_component
    return self._components.get_component(name)
  File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 937, in get_component
    self._components[name] = factory()
  File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 158, in <lambda>
    lambda: create_loader(self.get_config_variable('data_path')))
  File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 251, in get_config_variable
    return self.get_component('config_store').get_config_variable(
  File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 313, in get_config_variable
    return provider.provide()
  File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 410, in provide
    value = provider.provide()
  File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 471, in provide
    scoped_config = self._session.get_scoped_config()
  File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 351, in get_scoped_config
    raise ProfileNotFound(profile=profile_name)
botocore.exceptions.ProfileNotFound: The config profile (Santeau) could not be found
My credentials file looks (kind of) like this, and the app correctly connects when not run with docker:
aws_access_key_id = ------------------
aws_secret_access_key = ------------------
[Santeau]
aws_access_key_id = ------------------
aws_secret_access_key = ------------------
Why does it work undockerized but not dockerized, and how can I solve this?

My guess is that your Docker container isn't running as the user, or with the home directory, you're expecting. I noticed that you hard-coded /home/app/.aws/credentials.
You should log in to your container and discover which user it's running as and where its home is. You could run aws configure and then find where the credentials file was stored.
Many containers run as root, so your command would look something like this: docker run -v ~/.aws/:/root/.aws:ro your_image
Edit: Alternatively, you can set the AWS_SHARED_CREDENTIALS_FILE environment variable to point directly at your file's location. Here's more information: https://boto3.amazonaws.com/v1/documentation/api/1.9.42/guide/configuration.html

Related

trying to use cookiecutter-django, getting errors and does not create anything

I'm trying to get a Django project started using cookiecutter-django and can't seem to get it to generate anything.
I'm using Python 3.6, Django 2.0.5, and cookiecutter 1.6.0 (I created a virtualenv and entered a new, blank directory),
so I enter this command:
cookiecutter https://github.com/pydanny/cookiecutter-django
and get this error traceback:
Traceback (most recent call last):
  File "c:\python\python36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\python\python36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\Python\python36\Scripts\cookiecutter.exe\__main__.py", line 9, in <module>
  File "c:\python\python36\lib\site-packages\click\core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "c:\python\python36\lib\site-packages\click\core.py", line 697, in main
    rv = self.invoke(ctx)
  File "c:\python\python36\lib\site-packages\click\core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\python\python36\lib\site-packages\click\core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "c:\python\python36\lib\site-packages\cookiecutter\cli.py", line 120, in main
    password=os.environ.get('COOKIECUTTER_REPO_PASSWORD')
  File "c:\python\python36\lib\site-packages\cookiecutter\main.py", line 63, in cookiecutter
    password=password
  File "c:\python\python36\lib\site-packages\cookiecutter\repository.py", line 103, in determine_repo_dir
    no_input=no_input,
  File "c:\python\python36\lib\site-packages\cookiecutter\vcs.py", line 99, in clone
    stderr=subprocess.STDOUT,
  File "c:\python\python36\lib\subprocess.py", line 336, in check_output
    **kwargs).stdout
  File "c:\python\python36\lib\subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['git', 'clone', 'https://github.com/pydanny/cookiecutter-django']' returned non-zero exit status 128.
OK, I figured out how to get this to work.
I used GitHub Desktop: from the cookiecutter-django repository, right-click and open it in Git Shell. This opens a PowerShell window. CD to the directory the project will be placed in, then run:
cookiecutter https://github.com/pydanny/cookiecutter-django
and it works.
I'm not sure exactly why this works when regular CMD and elevated CMD do not, but this was the only way I could get it to work.
This is a permission issue with GitHub due to the need to set up SSH keys. (By the way, I'm using Ubuntu 12.)
https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/ - first, create a key on your machine using the instructions in the link. Once you have your SSH key, proceed to step 2. (Step 2 is indicated in the first link as the last step.)
https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account - add the generated SSH key to your GitHub account.

aws credentials error while using dynamic inventory

When I try to run an Ansible playbook, I get an AWS credential authentication error. I ran aws configure and also tried creating the credentials file manually, but I still get the same error, even though I am able to execute aws commands.
ansible 2.4.0.0
config file = /home/centos/infrastructure/ansible.cfg
configured module search path = [u'/home/centos/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
[DEPRECATION WARNING]: DEFAULT_SUDO_USER option, In favor of become which is a generic framework . This feature will be removed in
version 2.8. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: * Failed to parse /home/centos/infrastructure/production/ec2.py with script plugin: Inventory script (/home/centos/infrastructure/production/ec2.py) had an execution error:
Traceback (most recent call last):
  File "/home/centos/infrastructure/production/ec2.py", line 1600, in <module>
    Ec2Inventory()
  File "/home/centos/infrastructure/production/ec2.py", line 193, in __init__
    self.do_api_calls_update_cache()
  File "/home/centos/infrastructure/production/ec2.py", line 525, in do_api_calls_update_cache
    self.get_instances_by_region(region)
  File "/home/centos/infrastructure/production/ec2.py", line 579, in get_instances_by_region
    conn = self.connect(region)
  File "/home/centos/infrastructure/production/ec2.py", line 543, in connect
    conn = self.connect_to_aws(ec2, region)
  File "/home/centos/infrastructure/production/ec2.py", line 568, in connect_to_aws
    conn = module.connect_to_region(region, **connect_args)
  File "/usr/lib/python2.7/site-packages/boto/ec2/__init__.py", line 66, in connect_to_region
    return region.connect(**kw_params)
  File "/usr/lib/python2.7/site-packages/boto/regioninfo.py", line 188, in connect
    return self.connection_cls(region=self, **kw_params)
  File "/usr/lib/python2.7/site-packages/boto/ec2/connection.py", line 102, in __init__
    profile_name=profile_name)
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1057, in __init__
    profile_name=profile_name)
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 568, in __init__
    host, config, self.provider, self._required_auth_capability())
  File "/usr/lib/python2.7/site-packages/boto/auth.py", line 882, in get_auth_handler
    'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['QuerySignatureV2AuthHandler'] Check your credentials
[WARNING]: * Failed to parse /home/centos/infrastructure/production/ec2.py with ini plugin: /home/centos/infrastructure/production/ec2.py:3: Error parsing host definition ''''': No closing quotation
One of the easiest ways to use AWS credentials with Ansible is to create a credentials file in .aws/ in your home directory and place the access key and secret access key in there (you can create multiple sets of credentials), e.g.:
cat ~/.aws/credentials
[profile1]
aws_access_key_id = XXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxx
[default]
aws_access_key_id = XXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxx
Then you execute ansible-playbook like this:
AWS_PROFILE=profile1 ansible-playbook -i ec2.py playbook.yml
AWS_PROFILE is an environment variable that you can set by doing
export AWS_PROFILE=profile1
Note that you also need an environment variable with a default AWS region for example:
export AWS_EC2_REGION=ap-southeast-2
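If you drive ansible-playbook from a wrapper script rather than an interactive shell, the same two variables can be injected per invocation. Here is a small Python sketch; `aws_env` is an illustrative helper, and the inventory/playbook names are the ones from the command above:

```python
import os
import subprocess  # used by the commented-out invocation below

def aws_env(profile, region):
    """Copy the current environment and set the AWS profile/region variables."""
    env = dict(os.environ)
    env["AWS_PROFILE"] = profile
    env["AWS_EC2_REGION"] = region
    return env

# Equivalent to: AWS_PROFILE=profile1 ansible-playbook -i ec2.py playbook.yml
# subprocess.run(["ansible-playbook", "-i", "ec2.py", "playbook.yml"],
#                env=aws_env("profile1", "ap-southeast-2"))
```

Passing a fresh env dict per call avoids leaking the profile choice into the parent shell.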

AWS polly sample example in python?

This is the first time I am trying AWS services. I have to integrate AWS Polly with Asterisk for text-to-speech.
Here is the example code I wrote to convert text to speech:
from boto3 import client
import boto3
import StringIO
from contextlib import closing

polly = client("polly", 'us-east-1')
response = polly.synthesize_speech(
    Text="Good Morning. My Name is Rajesh. I am Testing Polly AWS Service For Voice Application.",
    OutputFormat="mp3",
    VoiceId="Raveena")
print(response)
if "AudioStream" in response:
    with closing(response["AudioStream"]) as stream:
        data = stream.read()
        fo = open("pollytest.mp3", "w+")
        fo.write(data)
        fo.close()
I am getting the following error:
Traceback (most recent call last):
File "pollytest.py", line 11, in <module>
VoiceId="Raveena")
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 530, in _make_api_call
operation_model, request_dict)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 166, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 150, in create_request
operation_name=operation_model.name)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 147, in sign
auth.add_auth(request)
File "/usr/local/lib/python2.7/dist-packages/botocore/auth.py", line 316, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I want to provide credentials directly in this script so that I can use it in an Asterisk system application.
UPDATE:
I created a file ~/.aws/credentials with the content below:
[default]
aws_access_key_id=XXXXXXXX
aws_secret_access_key=YYYYYYYYYYY
Now it works fine for my current login user, but it does not work for the Asterisk PBX.
Your code runs perfectly fine for me!
The last line is saying:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
So, it is unable to authenticate against AWS.
If you are running this code on an Amazon EC2 instance, the simplest method is to assign an IAM Role to the instance when it is launched (it can't be added later). This will automatically assign credentials that can be used by applications running on the instance -- no code changes required.
Alternatively, you could obtain an Access Key and Secret Key from IAM for your IAM User and store those credentials in a local file via the aws configure command.
It is bad practice to put credentials in source code, since they may become compromised.
See:
IAM Roles for Amazon EC2
Best Practices for Managing AWS Access Keys
Please note, Asterisk PBX usually runs under the asterisk user.
So you have to put the authentication in place for that user, not root.
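The reason is that boto looks the credentials file up relative to the home directory of whatever user the process runs as. A stdlib sketch of that lookup (`credentials_path` is an illustrative helper, not a boto function):

```python
import os

def credentials_path(home=None):
    """Path where boto/boto3 look for the shared credentials file, given a HOME."""
    base = home if home is not None else os.path.expanduser("~")
    return os.path.join(base, ".aws", "credentials")

# Your login shell and the Asterisk service resolve "~" differently,
# e.g. /root/.aws/credentials vs /var/lib/asterisk/.aws/credentials.
print(credentials_path())
```

So a credentials file that works at your prompt may simply not exist at the path the service's user resolves.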

"xml.sax._exceptions.SAXReaderNotAvailable: No parsers found" when run in jenkins

So I'm working towards having automated staging deployments via Jenkins and Ansible. Part of this is using a script called ec2.py from ansible in order to dynamically retrieve a list of matching servers to deploy to.
SSH-ing into the Jenkins server and running the script as the jenkins user, the script runs as expected. However, running the script from within Jenkins leads to the following error:
ERROR: Inventory script (ec2/ec2.py) had an execution error: Traceback (most recent call last):
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 1262, in <module>
Ec2Inventory()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 159, in __init__
self.do_api_calls_update_cache()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 386, in do_api_calls_update_cache
self.get_instances_by_region(region)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 417, in get_instances_by_region
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/connection.py", line 1181, in get_list
xml.sax.parseString(body, h)
File "/usr/lib/python2.7/xml/sax/__init__.py", line 43, in parseString
parser = make_parser()
File "/usr/lib/python2.7/xml/sax/__init__.py", line 93, in make_parser
raise SAXReaderNotAvailable("No parsers found", None)
xml.sax._exceptions.SAXReaderNotAvailable: No parsers found
I don't know too much about python, so I'm not sure how to debug this issue further.
So it turns out the issue had to do with Jenkins overwriting the default LD_LIBRARY_PATH variable. By unsetting that variable before running Python, I was able to make the Python app work!
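A one-line diagnostic dropped into the Jenkins job can confirm this: xml.sax.make_parser() raises SAXReaderNotAvailable when no backend (normally expat) can be loaded, which is what a broken LD_LIBRARY_PATH causes. A sketch that distinguishes the two environments:

```python
import xml.sax

def sax_available():
    """Return True if a SAX parser backend (normally expat) can be created."""
    try:
        xml.sax.make_parser()
        return True
    except xml.sax.SAXReaderNotAvailable:
        return False

print(sax_available())  # False in the broken Jenkins environment
```

Running this both from an SSH session and from a Jenkins build step makes the environment difference visible before ec2.py ever runs.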

cannot run eb push to send new version to elastic beanstalk

I have just set up a new project; the Elastic Beanstalk environment is running OK with the sample application. This was all set up with the EB CLI.
When I try to do eb push with my new application I get the following:
Traceback (most recent call last):
  File ".git/AWSDevTools/aws.elasticbeanstalk.push", line 57, in
    dev_tools.push_changes(opts.get("env"), opts.get("commit"))
  File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 196, in push_changes
    self.create_application_version(env, commit, version_label)
  File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 184, in create_application_version
    self.upload_file(bucket_name, archived_file)
  File "/Users/Mark/workspace/edu/gc/.git/AWSDevTools/aws/dev_tools.py", line 145, in upload_file
    key.set_contents_from_filename(archived_file)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 1315, in set_contents_from_filename
    encrypt_key=encrypt_key)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 1246, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 725, in send_file
    chunked_transfer=chunked_transfer, size=size)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 914, in _send_file_internal
    query_args=query_args
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/connection.py", line 633, in make_request
    retry_handler=retry_handler
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/connection.py", line 1046, in make_request
    retry_handler=retry_handler)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/connection.py", line 919, in _mexe
    request.body, request.headers)
  File "/Library/Python/2.7/site-packages/boto-2.28.0-py2.7.egg/boto/s3/key.py", line 815, in sender
    http_conn.send(chunk)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 805, in send
    self.sock.sendall(data)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 229, in sendall
    v = self.send(data[count:])
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 198, in send
    v = self._sslobj.write(data)
socket.error: [Errno 32] Broken pipe
Cannot run aws.push for local repository HEAD:
I have another Elastic Beanstalk app that is running, and when I run eb push in that directory it works fine, so I don't think it's anything to do with Ruby or other dependencies not being installed. I also made changes and made another commit with a very simple message to make sure that wasn't causing the problem, and still no joy.
The difference between the app that can be pushed and this one is the AWS account: the user credentials for the Elastic Beanstalk app that won't push are admin credentials.
This link solved my problem; it was a result of using two different accounts:
http://www.acnenomor.com/207992p1/unable-to-deploy-to-aws-elastic-beanstalk-using-git