When I try to run an Ansible playbook, I get an AWS credential authentication error. I ran aws configure and also tried creating the credentials file manually, but I still get the same error, even though I am able to execute aws CLI commands.
ansible 2.4.0.0
config file = /home/centos/infrastructure/ansible.cfg
configured module search path = [u'/home/centos/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
[DEPRECATION WARNING]: DEFAULT_SUDO_USER option, In favor of become which is a generic framework . This feature will be removed in
version 2.8. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: * Failed to parse /home/centos/infrastructure/production/ec2.py with script plugin: Inventory script
(/home/centos/infrastructure/production/ec2.py) had an execution error:
Traceback (most recent call last):
  File "/home/centos/infrastructure/production/ec2.py", line 1600, in <module>
    Ec2Inventory()
  File "/home/centos/infrastructure/production/ec2.py", line 193, in __init__
    self.do_api_calls_update_cache()
  File "/home/centos/infrastructure/production/ec2.py", line 525, in do_api_calls_update_cache
    self.get_instances_by_region(region)
  File "/home/centos/infrastructure/production/ec2.py", line 579, in get_instances_by_region
    conn = self.connect(region)
  File "/home/centos/infrastructure/production/ec2.py", line 543, in connect
    conn = self.connect_to_aws(ec2, region)
  File "/home/centos/infrastructure/production/ec2.py", line 568, in connect_to_aws
    conn = module.connect_to_region(region, **connect_args)
  File "/usr/lib/python2.7/site-packages/boto/ec2/__init__.py", line 66, in connect_to_region
    return region.connect(**kw_params)
  File "/usr/lib/python2.7/site-packages/boto/regioninfo.py", line 188, in connect
    return self.connection_cls(region=self, **kw_params)
  File "/usr/lib/python2.7/site-packages/boto/ec2/connection.py", line 102, in __init__
    profile_name=profile_name)
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1057, in __init__
    profile_name=profile_name)
  File "/usr/lib/python2.7/site-packages/boto/connection.py", line 568, in __init__
    host, config, self.provider, self._required_auth_capability())
  File "/usr/lib/python2.7/site-packages/boto/auth.py", line 882, in get_auth_handler
    'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['QuerySignatureV2AuthHandler'] Check your credentials
[WARNING]: * Failed to parse /home/centos/infrastructure/production/ec2.py with ini plugin:
/home/centos/infrastructure/production/ec2.py:3: Error parsing host definition ''''': No closing quotation
One of the easiest ways to use AWS credentials with Ansible is to create a credentials file under ~/.aws/ in your home directory and place the access key and secret access key there (you can create multiple sets of credentials), e.g.:
cat ~/.aws/credentials
[profile1]
aws_access_key_id = XXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxx
[default]
aws_access_key_id = XXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxx
Then you execute ansible-playbook like this:
AWS_PROFILE=profile1 ansible-playbook -i ec2.py playbook.yml
AWS_PROFILE is an environment variable that you can set by doing
export AWS_PROFILE=profile1
Note that you also need an environment variable with a default AWS region, for example:
export AWS_EC2_REGION=ap-southeast-2
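Before running the playbook again, you can confirm that boto (which the ec2.py inventory script relies on) actually sees those credentials. A minimal sketch, assuming the profile and region from the examples above:
import boto.ec2

# Sanity check (a sketch): with the credentials file and AWS_PROFILE in place,
# boto should be able to connect and list instances in the region the
# inventory script will query.
conn = boto.ec2.connect_to_region("ap-southeast-2", profile_name="profile1")
print(conn.get_only_instances())
If this prints an instance list (or an empty list) rather than NoAuthHandlerFound, the inventory script should authenticate as well.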
This seems to be an issue many people have faced, but the solutions I tried haven't solved it.
I have a Python app that I dockerized and want to push to an EC2 container; however, once dockerized, the app has issues (even locally) accessing my AWS credentials:
santeau_session = boto3.Session(profile_name='Santeau')
db = santeau_session.resource('dynamodb', region_name='us-west-2')
MainPage = db.Table('mp')
When trying to pass them in this way: docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro ks/mz
I get:
Traceback (most recent call last):
File "./main.py", line 17, in <module>
santeau_session = boto3.Session(profile_name='Santeau')
File "/usr/local/lib/python3.8/site-packages/boto3/session.py", line 80, in __init__
self._setup_loader()
File "/usr/local/lib/python3.8/site-packages/boto3/session.py", line 120, in _setup_loader
self._loader = self._session.get_component('data_loader')
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 698, in get_component
return self._components.get_component(name)
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 937, in get_component
self._components[name] = factory()
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 158, in <lambda>
lambda: create_loader(self.get_config_variable('data_path')))
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 251, in get_config_variable
return self.get_component('config_store').get_config_variable(
File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 313, in get_config_variable
return provider.provide()
File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 410, in provide
value = provider.provide()
File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 471, in provide
scoped_config = self._session.get_scoped_config()
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 351, in get_scoped_config
raise ProfileNotFound(profile=profile_name)
botocore.exceptions.ProfileNotFound: The config profile (Santeau) could not be found
My credentials file looks (kind of) like this, and the app correctly connects when not run with docker:
aws_access_key_id = ------------------
aws_secret_access_key = ------------------
[Santeau]
aws_access_key_id = ------------------
aws_secret_access_key = ------------------
Why does it work undockerized but not dockerized, and how can I solve this?
My guess is that your Docker container isn't running as the user, or with the home directory, you're expecting. I noticed that you hard-coded /home/app/.aws/credentials.
You should log in to your container and discover which user it's running as and where its home directory is. You could run aws configure and then find where the credentials files were stored.
Many containers run as root, so your command would look something like this: docker run -v ~/.aws/:/root/.aws:ro your_image
Edit: Alternatively, you can set the AWS_SHARED_CREDENTIALS_FILE environment variable to your file's location directly. Here's more information: https://boto3.amazonaws.com/v1/documentation/api/1.9.42/guide/configuration.html
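For example, a minimal sketch that points botocore at the file mounted into the container (reusing the /home/app path and the profile name from the question):
import os
import boto3

# Point botocore at the credentials file mounted into the container instead of
# relying on whatever $HOME resolves to for the container's user.
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "/home/app/.aws/credentials"
session = boto3.Session(profile_name="Santeau")
print(session.available_profiles)  # should include 'Santeau' if the file is readable
You can also set the variable when starting the container (e.g. with docker run -e) instead of in code.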
My boto3 setup works when the credentials are provided through ~/.aws/credentials; however, I would like to pass them as environment variables so it works through Docker.
This is the content of ~/.aws/credentials:
[default]
aws_access_key_id = ABCD123
aws_secret_access_key = BCDSA123
[testing]
source_profile = default
role_arn = arn:aws:iam::123:role/access-db
Everything works, but if I instead set the environment variables in an aws.env file
export AWS_ACCESS_KEY_ID="ABCD123"
export AWS_SECRET_ACCESS_KEY="BCDSA123"
export AWS_ROLE_SESSION_NAME="default"
export AWS_PROFILE="testing"
export AWS_DEFAULT_PROFILE="testing"
export AWS_ROLE_ARN="arn:aws:iam::123:role/access-db"
I get the following error
root@64813e0cc755:/rate_prediction# python test.py
Traceback (most recent call last):
File "test.py", line 4, in <module>
boto3.setup_default_session(profile_name = 'testing')
File "/usr/local/lib/python3.7/site-packages/boto3/__init__.py", line 34, in setup_default_session
DEFAULT_SESSION = Session(**kwargs)
File "/usr/local/lib/python3.7/site-packages/boto3/session.py", line 80, in __init__
self._setup_loader()
File "/usr/local/lib/python3.7/site-packages/boto3/session.py", line 120, in _setup_loader
self._loader = self._session.get_component('data_loader')
File "/usr/local/lib/python3.7/site-packages/botocore/session.py", line 685, in get_component
return self._components.get_component(name)
File "/usr/local/lib/python3.7/site-packages/botocore/session.py", line 924, in get_component
self._components[name] = factory()
File "/usr/local/lib/python3.7/site-packages/botocore/session.py", line 158, in <lambda>
lambda: create_loader(self.get_config_variable('data_path')))
File "/usr/local/lib/python3.7/site-packages/botocore/session.py", line 241, in get_config_variable
logical_name)
File "/usr/local/lib/python3.7/site-packages/botocore/configprovider.py", line 293, in get_config_variable
return provider.provide()
File "/usr/local/lib/python3.7/site-packages/botocore/configprovider.py", line 390, in provide
value = provider.provide()
File "/usr/local/lib/python3.7/site-packages/botocore/configprovider.py", line 451, in provide
scoped_config = self._session.get_scoped_config()
File "/usr/local/lib/python3.7/site-packages/botocore/session.py", line 340, in get_scoped_config
raise ProfileNotFound(profile=profile_name)
botocore.exceptions.ProfileNotFound: The config profile (testing) could not be found
and here is my python script
import boto3
import os
boto3.setup_default_session(profile_name = 'testing')
s3 = boto3.resource('s3')
s3_client = boto3.client('s3')
my_bucket = s3.Bucket('ds-models-testing')
which I am expecting to work. Again, it seems there is something missing from my environment variables, and boto is searching for a config file: botocore.exceptions.ProfileNotFound: The config profile (testing) could not be found
You need to unset the AWS_PROFILE env var.
You don't have a config file. The config and credentials files should be in the ~/.aws folder.
.aws/credentials
[default]
aws_access_key_id = ABCD123
aws_secret_access_key = BCDSA123
and in the config file .aws/config:
[profile testing]
source_profile = default
role_arn = arn:aws:iam::123:role/access-db
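A quick check that the split worked, as a sketch (it assumes both files are in ~/.aws of the user running Python inside the container):
import boto3

# With credentials in ~/.aws/credentials and the 'testing' profile defined in
# ~/.aws/config, the profile should now resolve; the STS call will also attempt
# to assume the configured role.
session = boto3.Session(profile_name="testing")
print(session.available_profiles)  # expect ['default', 'testing']
print(session.client("sts").get_caller_identity()["Arn"])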
First, I think you should remove this line if you are trying to read from environment variables:
boto3.setup_default_session(profile_name = 'testing')
Documentation says
The order in which Boto3 searches for credentials is:
1. Passing credentials as parameters in the boto.client() method
2. Passing credentials as parameters when creating a Session object
3. Environment variables
4. Shared credential file (~/.aws/credentials)
5. AWS config file (~/.aws/config)
6. Assume Role provider
7. Boto2 config file (/etc/boto.cfg and ~/.boto)
8. Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
Reference: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
So, as you suspect, most likely the environment variables are not getting set as you expect. Try adding code to log the value of an environment variable that you are setting via the aws.env file, just to be sure it is actually set and readable as an environment variable:
os.environ.get('AWS_ACCESS_KEY_ID', 'It is not set!')
Assuming it is actually reading the environment variables you set in aws.env, try exporting only these two environment variables in aws.env and removing the others:
export AWS_ACCESS_KEY_ID="ABCD123"
export AWS_SECRET_ACCESS_KEY="BCDSA123"
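With only those two variables set, a stripped-down version of your script (the same calls minus the profile lookup, as a sketch) should fall back to the environment variables at step 3 of the list above:
import boto3

# No profile name is passed, so boto3 falls back to AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY from the environment instead of looking for a
# config profile named 'testing'.
s3 = boto3.resource("s3")
my_bucket = s3.Bucket("ds-models-testing")
print([obj.key for obj in my_bucket.objects.limit(5)])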
This is my first time trying AWS services. I have to integrate AWS Polly with Asterisk for text-to-speech.
Here is the example code I wrote to convert text to speech:
from boto3 import client
import boto3
import StringIO
from contextlib import closing

polly = client("polly", 'us-east-1')
response = polly.synthesize_speech(
    Text="Good Morning. My Name is Rajesh. I am Testing Polly AWS Service For Voice Application.",
    OutputFormat="mp3",
    VoiceId="Raveena")
print(response)

if "AudioStream" in response:
    with closing(response["AudioStream"]) as stream:
        data = stream.read()
        fo = open("pollytest.mp3", "w+")
        fo.write(data)
        fo.close()
I am getting the following error:
Traceback (most recent call last):
File "pollytest.py", line 11, in <module>
VoiceId="Raveena")
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/client.py", line 530, in _make_api_call
operation_model, request_dict)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 166, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python2.7/dist-packages/botocore/endpoint.py", line 150, in create_request
operation_name=operation_model.name)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 90, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python2.7/dist-packages/botocore/signers.py", line 147, in sign
auth.add_auth(request)
File "/usr/local/lib/python2.7/dist-packages/botocore/auth.py", line 316, in add_auth
raise NoCredentialsError
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I want to provide credentials directly in this script so that I can use it in the Asterisk system application.
UPDATE:
I created the file ~/.aws/credentials with the content below:
[default]
aws_access_key_id=XXXXXXXX
aws_secret_access_key=YYYYYYYYYYY
Now it works fine for my current login user, but for the Asterisk PBX it is not working.
Your code runs perfectly fine for me!
The last line is saying:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
So, it is unable to authenticate against AWS.
If you are running this code on an Amazon EC2 instance, the simplest method is to assign an IAM Role to the instance when it is launched (it can't be added later). This will automatically provide credentials that can be used by applications running on the instance -- no code changes required.
Alternatively, you could obtain an Access Key and Secret Key from IAM for your IAM User and store those credentials in a local file via the aws configure command.
It is bad practice to put credentials in source code, since they may become compromised.
See:
IAM Roles for Amazon EC2
Best Practices for Managing AWS Access Keys
Please note that the Asterisk PBX usually runs under the asterisk user.
So you have to set up authentication for that user, not for root.
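As a sketch of what that can look like in the script itself (the file path below is only a placeholder for wherever the asterisk user can read credentials from):
import os
from boto3 import client

# Point boto3 at a credentials file the asterisk user can read, so the lookup
# no longer depends on that user's home directory.
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "/etc/asterisk/aws_credentials"  # placeholder path
polly = client("polly", "us-east-1")

# Passing the keys explicitly also works, but keeping them in source code is
# discouraged, as noted above:
# polly = client("polly", "us-east-1",
#                aws_access_key_id="XXXXXXXX",
#                aws_secret_access_key="YYYYYYYYYYY")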
I would like to ask whether it is currently possible to use the spark-ec2 script (https://spark.apache.org/docs/latest/ec2-scripts.html) with credentials that consist not only of aws_access_key_id and aws_secret_access_key, but also contain an aws_security_token.
When I try to run the script, I get the following error message:
ERROR:boto:Caught exception reading instance data
Traceback (most recent call last):
File "/Users/zikes/opensource/spark/ec2/lib/boto-2.34.0/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 64] Host is down>
ERROR:boto:Unable to read instance data, giving up
No handler was ready to authenticate. 1 handlers were checked. ['QuerySignatureV2AuthHandler'] Check your credentials
Does anyone have an idea of what could possibly be wrong? Is aws_security_token the problem?
It seems to me to be more of a boto problem than a Spark problem.
I have tried both:
1) setting credentials in ~/.aws/credentials and ~/.aws/config
2) setting credentials with these commands:
export aws_access_key_id=<my_aws_access_key>
export aws_secret_access_key=<my_aws_seecret_key>
export aws_security_token=<my_aws_security_token>
My launch command is:
./spark-ec2 -k my_key -i my_key.pem --additional-tags "mytag:tag1,mytag2:tag2" --instance-profile-name "profile1" -s 1 launch test
You can set up your credentials and config using the aws configure command.
I had the same issue, but in my case my AWS_SECRET_ACCESS_KEY had a slash. I regenerated the key until there was no slash, and it worked.
The problem was that I did not use a profile called default; after renaming it, everything worked well.
With the release of AMI 3.3.0, AWS supports Hue as an installable "app" in EMR, like Hive/Pig. Using the EMR web UI, creating a cluster with Hue works fine for me; however, when adding a Hue installation bootstrap action via Boto, I get a non-deterministic error (it periodically crashes). I have tested 4 times with identical configuration, and the crash rate is 50%.
In Boto, I add an additional bootstrap action, as is done automatically when creating a cluster from the web UI when Hue is enabled:
BootstrapAction('Install Hue', 's3://elasticmapreduce/libs/hue/install-hue', [])
The cluster then terminates with a:
Terminated with errors: On the master instance (i-c6b7582a),
bootstrap action 2 returned a non-zero return code
And in the bootstrap action logs:
Existing lock /var/run/yum.pid: another copy is running as pid 2007.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 22 M RSS (305 MB VSZ)
Started: Tue Nov 11 21:00:12 2014 - 00:19 ago
State : Sleeping, pid: 2007
Another app is currently holding the yum lock; waiting for it to exit...
Tons of those, and finally a large stacktrace:
Trying other mirror.
http://packages.ap-southeast-2.amazonaws.com/2014.09/main/20140901f63e/x86_64/repodata/repomd.xml?instance_id=i-c6b7582a&region=us-east-1: [Errno 12] Timeout on http://packages.ap-southeast-2.amazonaws.com/2014.09/main/20140901f63e/x86_64/repodata/repomd.xml?instance_id=i-c6b7582a&region=us-east-1: (28, 'Connection timed out after 10000 milliseconds')
Trying other mirror.
Traceback (most recent call last):
File "/usr/bin/yum", line 29, in <module>
yummain.user_main(sys.argv[1:], exit_code=True)
File "/usr/share/yum-cli/yummain.py", line 355, in user_main
errcode = main(args)
File "/usr/share/yum-cli/yummain.py", line 174, in main
result, resultmsgs = base.doCommands()
File "/usr/share/yum-cli/cli.py", line 572, in doCommands
return self.yum_cli_commands[self.basecmd].doCommand(self, self.basecmd, self.extcmds)
File "/usr/share/yum-cli/yumcommands.py", line 432, in doCommand
return base.installPkgs(extcmds, basecmd=basecmd)
File "/usr/share/yum-cli/cli.py", line 968, in installPkgs
txmbrs = self.install(pattern=arg)
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 4721, in install
mypkgs = self.pkgSack.returnPackages(patterns=pats,
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 1069, in <lambda>
pkgSack = property(fget=lambda self: self._getSacks(),
File "/usr/lib/python2.6/site-packages/yum/__init__.py", line 774, in _getSacks
self.repos.populateSack(which=repos)
File "/usr/lib/python2.6/site-packages/yum/repos.py", line 383, in populateSack
sack.populate(repo, mdtype, callback, cacheonly)
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 250, in populate
if self._check_db_version(repo, mydbtype):
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 342, in _check_db_version
return repo._check_db_version(mdtype)
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1520, in _check_db_version
repoXML = self.repoXML
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1706, in <lambda>
repoXML = property(fget=lambda self: self._getRepoXML(),
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1702, in _getRepoXML
self._loadRepoXML(text=self.ui_id)
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1693, in _loadRepoXML
return self._groupLoadRepoXML(text, self._mdpolicy2mdtypes())
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1667, in _groupLoadRepoXML
if self._commonLoadRepoXML(text):
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1495, in _commonLoadRepoXML
self._revertOldRepoXML()
File "/usr/lib/python2.6/site-packages/yum/yumRepo.py", line 1345, in _revertOldRepoXML
os.rename(old_data['old_local'], old_data['local'])
OSError: [Errno 2] No such file or directory
In contrast, the bootstrap log shows a single line on success:
Warning: RPMDB altered outside of yum.
Here is an example of installing and running Hue on EMR AMI 3.3:
import boto.emr
from boto.emr.emrobject import InstanceGroup
from boto.emr.bootstrap_action import BootstrapAction
from boto.emr.step import ScriptRunnerStep

conn = boto.emr.EmrConnection()
jobid = conn.run_jobflow(name="Hue Example",
                         ami_version="3.3.0",
                         log_uri="s3n://your-log-path-here",
                         instance_groups=get_instance_groups(),
                         bootstrap_actions=get_bootstrap_actions(),
                         ec2_keyname="your-ec2-key-name",
                         steps=get_startup_steps())

def get_bootstrap_actions():
    # Bootstrap action that installs Hue on the cluster.
    install_hue_action = BootstrapAction("Install Hue",
                                         "s3n://us-east-1.elasticmapreduce/libs/hue/install-hue",
                                         bootstrap_action_args=None)
    return [install_hue_action]

def get_startup_steps():
    # Step that starts Hue once the cluster is up.
    runHueStep = ScriptRunnerStep(name="Run Hue",
                                  step_args=["s3n://us-east-1.elasticmapreduce/libs/hue/run-hue"])
    return [runHueStep]

def get_instance_groups():
    # This is just an example. An actual implementation will have core and task
    # instance groups as well. Choose your instance type, count, and bid price
    # wisely, as it can get too expensive too quickly.
    spotInstanceGroup = InstanceGroup()
    spotInstanceGroup.name = "Spot Instance Group Master"
    spotInstanceGroup.bidprice = "0.20"
    spotInstanceGroup.num_instances = 1
    spotInstanceGroup.market = "SPOT"
    spotInstanceGroup.type = "c3.2xlarge"
    spotInstanceGroup.role = "MASTER"