I would like to ask whether it is currently possible to use the spark-ec2 script (https://spark.apache.org/docs/latest/ec2-scripts.html) with credentials that consist not only of aws_access_key_id and aws_secret_access_key, but also include an aws_security_token.
When I try to run the script I get the following error message:
ERROR:boto:Caught exception reading instance data
Traceback (most recent call last):
File "/Users/zikes/opensource/spark/ec2/lib/boto-2.34.0/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 64] Host is down>
ERROR:boto:Unable to read instance data, giving up
No handler was ready to authenticate. 1 handlers were checked. ['QuerySignatureV2AuthHandler'] Check your credentials
Does anyone have an idea what could possibly be wrong? Is aws_security_token the problem?
It seems to me to be more of a boto problem than a Spark problem.
I have tried both:
1) setting credentials in ~/.aws/credentials and ~/.aws/config
2) setting credentials via these commands:
export aws_access_key_id=<my_aws_access_key>
export aws_secret_access_key=<my_aws_secret_key>
export aws_security_token=<my_aws_security_token>
My launch command is:
./spark-ec2 -k my_key -i my_key.pem --additional-tags "mytag:tag1,mytag2:tag2" --instance-profile-name "profile1" -s 1 launch test
You can set up your credentials and config using the command aws configure.
I had the same issue, but in my case my AWS_SECRET_ACCESS_KEY contained a slash. I regenerated the key until there was no slash in it, and then it worked.
The problem was that I did not use a profile called default; after renaming it, everything worked well.
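For what it's worth, boto (which spark-ec2 uses under the hood) conventionally reads the upper-case environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SECURITY_TOKEN rather than the lower-case names, and it also accepts a session token passed explicitly. A minimal sketch for checking whether the token itself is accepted (the region and placeholder values are just for illustration):
import boto.ec2
# Open an EC2 connection with the temporary credentials passed explicitly;
# an authentication failure here points at the credentials rather than at spark-ec2.
conn = boto.ec2.connect_to_region(
    'us-east-1',
    aws_access_key_id='<my_aws_access_key>',
    aws_secret_access_key='<my_aws_secret_key>',
    security_token='<my_aws_security_token>',
)
print(conn.get_all_reservations())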
This seems to be an issue many people have faced, but the solutions I tried haven't solved it:
I have a Python app that I dockerized and want to push to an EC2 container; however, once dockerized, the app has trouble (even locally) accessing my AWS credentials:
santeau_session = boto3.Session(profile_name='Santeau')
db = santeau_session.resource('dynamodb', region_name='us-west-2')
MainPage = db.Table('mp')
When trying to pass them this way:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro ks/mz
I get:
Traceback (most recent call last):
File "./main.py", line 17, in <module>
santeau_session = boto3.Session(profile_name='Santeau')
File "/usr/local/lib/python3.8/site-packages/boto3/session.py", line 80, in __init__
self._setup_loader()
File "/usr/local/lib/python3.8/site-packages/boto3/session.py", line 120, in _setup_loader
self._loader = self._session.get_component('data_loader')
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 698, in get_component
return self._components.get_component(name)
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 937, in get_component
self._components[name] = factory()
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 158, in <lambda>
lambda: create_loader(self.get_config_variable('data_path')))
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 251, in get_config_variable
return self.get_component('config_store').get_config_variable(
File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 313, in get_config_variable
return provider.provide()
File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 410, in provide
value = provider.provide()
File "/usr/local/lib/python3.8/site-packages/botocore/configprovider.py", line 471, in provide
scoped_config = self._session.get_scoped_config()
File "/usr/local/lib/python3.8/site-packages/botocore/session.py", line 351, in get_scoped_config
raise ProfileNotFound(profile=profile_name)
botocore.exceptions.ProfileNotFound: The config profile (Santeau) could not be found
My credentials file looks (kind of) like this, and the app correctly connects when not run with docker:
aws_access_key_id = ------------------
aws_secret_access_key = ------------------
[Santeau]
aws_access_key_id = ------------------
aws_secret_access_key = ------------------
Why does it work undockerized but not dockerized, and how can I solve this?
My guess is that your Docker container isn't running as the user, or with the home directory, that you're expecting. I noticed that you hard-coded /home/app/.aws/credentials.
You should log in to your container and find out which user it is running as and where its home directory is. You could run aws configure inside the container and then see where the credentials file ends up.
Many containers run as root, so your command would look something like this: docker run -v ~/.aws/:/root/.aws:ro your_image
Edit: Alternatively, you can set the AWS_SHARED_CREDENTIALS_FILE environment variable to your file's location directly. Here's more information: https://boto3.amazonaws.com/v1/documentation/api/1.9.42/guide/configuration.html
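A small diagnostic you can run inside the container (a sketch, assuming boto3 is installed in the image) shows where botocore is actually looking and which profiles it can see; if 'Santeau' is not listed, the mounted credentials file is not where it is expected:
import os
import boto3
# Where boto3 will look for the shared credentials file, and which profiles it finds there.
print("HOME =", os.environ.get("HOME"))
print("AWS_SHARED_CREDENTIALS_FILE =", os.environ.get("AWS_SHARED_CREDENTIALS_FILE"))
print("available profiles =", boto3.session.Session().available_profiles)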
I'm trying to use s3fs in Python to connect to an S3 bucket. The associated credentials are saved in a profile called 'pete' in ~/.aws/credentials:
[default]
aws_access_key_id=****
aws_secret_access_key=****
[pete]
aws_access_key_id=****
aws_secret_access_key=****
This seems to work in AWS CLI (on Windows):
$>aws s3 ls s3://my-bucket/ --profile pete
PRE other-test-folder/
PRE test-folder/
But I get a permission-denied error when I use what should be equivalent code with the s3fs package in Python:
import s3fs
import requests
s3 = s3fs.core.S3FileSystem(profile = 'pete')
s3.ls('my-bucket')
I get this error:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 504, in _lsdir
async for i in it:
File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\paginate.py", line 32, in __anext__
response = await self._make_request(current_kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\aiobotocore\client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<ipython-input-9-4627a44a7ac3>", line 5, in <module>
s3.ls('ma-baseball')
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 993, in ls
files = maybe_sync(self._ls, self, path, refresh=refresh)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 97, in maybe_sync
return sync(loop, func, *args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 68, in sync
raise exc.with_traceback(tb)
File "C:\ProgramData\Anaconda3\lib\site-packages\fsspec\asyn.py", line 52, in f
result[0] = await future
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 676, in _ls
return await self._lsdir(path, refresh)
File "C:\ProgramData\Anaconda3\lib\site-packages\s3fs\core.py", line 527, in _lsdir
raise translate_boto_error(e) from e
PermissionError: Access Denied
I have to assume it's not a config issue within S3, because I can access S3 through the CLI. So something must be off in my s3fs code, but I can't find much documentation on profiles in s3fs to figure out what's going on. Any help is of course appreciated.
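One way to narrow this down, independent of s3fs (a hedged sketch using boto3 directly), is to resolve the 'pete' profile yourself and check which identity and permissions it actually gives you:
import boto3
# Resolve the same profile outside s3fs and confirm which identity it maps to.
session = boto3.Session(profile_name='pete')
print(session.client('sts').get_caller_identity())
# If this call is also denied, the problem is the credentials/policy, not s3fs.
print(session.client('s3').list_objects_v2(Bucket='my-bucket', MaxKeys=1))
If the STS identity differs from what aws sts get-caller-identity --profile pete reports, then s3fs is not picking up the profile you think it is.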
When I try to run an Ansible playbook I get an AWS credential authentication error. I ran aws configure and also tried creating the credentials file manually, but I still get the same error, even though I am able to execute aws commands.
ansible 2.4.0.0
config file = /home/centos/infrastructure/ansible.cfg
configured module search path = [u'/home/centos/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
[DEPRECATION WARNING]: DEFAULT_SUDO_USER option, In favor of become which is a generic framework . This feature will be removed in
version 2.8. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: * Failed to parse /home/centos/infrastructure/production/ec2.py with script plugin: Inventory script
(/home/centos/infrastructure/production/ec2.py) had an execution error: Traceback (most recent call last): File
"/home/centos/infrastructure/production/ec2.py", line 1600, in <module> Ec2Inventory() File
"/home/centos/infrastructure/production/ec2.py", line 193, in __init__ self.do_api_calls_update_cache() File
"/home/centos/infrastructure/production/ec2.py", line 525, in do_api_calls_update_cache self.get_instances_by_region(region)
File "/home/centos/infrastructure/production/ec2.py", line 579, in get_instances_by_region conn = self.connect(region) File
"/home/centos/infrastructure/production/ec2.py", line 543, in connect conn = self.connect_to_aws(ec2, region) File
"/home/centos/infrastructure/production/ec2.py", line 568, in connect_to_aws conn = module.connect_to_region(region,
**connect_args) File "/usr/lib/python2.7/site-packages/boto/ec2/__init__.py", line 66, in connect_to_region return
region.connect(**kw_params) File "/usr/lib/python2.7/site-packages/boto/regioninfo.py", line 188, in connect return
self.connection_cls(region=self, **kw_params) File "/usr/lib/python2.7/site-packages/boto/ec2/connection.py", line 102, in __init__
profile_name=profile_name) File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1057, in __init__
profile_name=profile_name) File "/usr/lib/python2.7/site-packages/boto/connection.py", line 568, in __init__ host, config,
self.provider, self._required_auth_capability()) File "/usr/lib/python2.7/site-packages/boto/auth.py", line 882, in get_auth_handler
'Check your credentials' % (len(names), str(names))) boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1
handlers were checked. ['QuerySignatureV2AuthHandler'] Check your credentials
[WARNING]: * Failed to parse /home/centos/infrastructure/production/ec2.py with ini plugin:
/home/centos/infrastructure/production/ec2.py:3: Error parsing host definition ''''': No closing quotation
One of the easiest ways to use AWS credentials with Ansible is to create a credentials file in .aws/ in your home directory and place the access key and secret access key there (you can create multiple sets of credentials), e.g.:
cat ~/.aws/credentials
[profile1]
aws_access_key_id = XXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxx
[default]
aws_access_key_id = XXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxx
Then you execute ansible-playbook like this:
AWS_PROFILE=profile1 ansible-playbook -i ec2.py playbook.yml
AWS_PROFILE is an environment variable that you can also set by running
export AWS_PROFILE=profile1
Note that you also need an environment variable with a default AWS region, for example:
export AWS_EC2_REGION=ap-southeast-2
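Since ec2.py uses boto 2 under the hood, a quick sanity check (a sketch; the region and profile name are just the ones from this example) is to confirm that boto 2 itself can authenticate with that profile:
import boto.ec2
# Raises NoAuthHandlerFound, exactly like the inventory script does,
# if boto 2 cannot resolve credentials for this profile.
conn = boto.ec2.connect_to_region('ap-southeast-2', profile_name='profile1')
print(conn.get_all_regions())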
I ssh to my EC2 instance. I can run these commands and they work perfectly:
aws sqs list-queues
aws s3 ls
I have a small Python script that pulls data from a database, formats it as XML, and then uploads the file to S3. This upload fails with this error:
Traceback (most recent call last):
File "./data_test/data_analytics/lexisnexis/async2.py", line 289, in <module>
insert_parallel(engine, qy, Create_Temp.profile_id, nworkers)
File "./data_test/data_analytics/lexisnexis/async2.py", line 241, in insert_parallel
s3upload(bucketname, keyname, f)
File "./data_test/data_analytics/lexisnexis/async2.py", line 89, in s3upload
bucket = conn.get_bucket(bucketname)
File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 506, in get_bucket
return self.head_bucket(bucket_name, headers=headers)
File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 525, in head_bucket
response = self.make_request('HEAD', bucket_name, headers=headers)
File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 668, in make_request
retry_handler=retry_handler
File "/usr/lib/python2.7/dist-packages/boto/connection.py", line 1071, in make_request
retry_handler=retry_handler)
File "/usr/lib/python2.7/dist-packages/boto/connection.py", line 1030, in _mexe
raise ex
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
How can my script be failing like this when the aws cli works?
To be clear, I'm running the Python script as the same user, from the same EC2 instance, as I run the aws cli commands.
aws --version
aws-cli/1.11.176 Python/2.7.12 Linux/4.9.43-17.38.amzn1.x86_64 botocore/1.7.34
The last line of your error message tells you the problem:
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
Your issue could be one of the following:
1) There is an error with the certificate of the server that you are connecting to.
2) The certificate chain is incomplete for the server that you are connecting to.
3) You are missing "cacert.pem". Do a Google search on "cacert.pem". This is a common problem and there is a lot of information on downloading and installing this file.
Certificate verification in Python
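As a starting point, you can check which CA bundle your Python build is using; boto 2 can also be pointed at an explicit bundle through its config file (a sketch; the cacert.pem path is a placeholder):
import ssl
# Shows the default CA certificate file/path compiled into this Python build.
print(ssl.get_default_verify_paths())
# boto 2 can be told to use a specific CA bundle via ~/.boto:
# [Boto]
# ca_certificates_file = /path/to/cacert.pem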
I created a new config file:
$ sudo vi ~/.boto
There I pasted my credentials (as described in the boto documentation on Read the Docs):
[Credentials]
aws_access_key_id = YOURACCESSKEY
aws_secret_access_key = YOURSECRETKEY
I'm trying to check the connection:
import boto
boto.set_stream_logger('boto')
s3 = boto.connect_s3("us-east-1")
and this is the output I get:
2014-11-26 14:05:49,532 boto [DEBUG]:Using access key provided by client.
2014-11-26 14:05:49,532 boto [DEBUG]:Retrieving credentials from metadata server.
2014-11-26 14:05:50,539 boto [ERROR]:Caught exception reading instance data
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
URLError: <urlopen error timed out>
2014-11-26 14:05:50,540 boto [ERROR]:Unable to read instance data, giving up
Traceback (most recent call last):
File "/Users/user/PycharmProjects/project/untitled.py", line 8, in <module>
s3 = boto.connect_s3("us-east-1")
File "/Library/Python/2.7/site-packages/boto/__init__.py", line 141, in connect_s3
return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 190, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/Library/Python/2.7/site-packages/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/Library/Python/2.7/site-packages/boto/auth.py", line 975, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
Why doesn't it find the credentials?
Did I do something wrong?
Your issue is:
The string 'us-east-1' you provide as the first argument is treated as the aws_access_key_id.
What you want is:
First, create a connection; note that the connection itself carries no region or location information.
conn = boto.connect_s3('your_access_key', 'your_secret_key')
Then, when you want to do something with a bucket, pass the region info as an argument.
from boto.s3.connection import Location
conn.create_bucket('mybucket', location=Location.USWest)
or:
conn.create_bucket('mybucket', location='us-west-1')
By default, the location is the empty string which is interpreted as the US Classic Region, the original S3 region. However, by specifying another location at the time the bucket is created, you can instruct S3 to create the bucket in that location.
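If you prefer to target a region explicitly at connection time, a short sketch (still reading the keys from your ~/.boto file) would be:
import boto.s3
# Credentials are read from ~/.boto; only the region is given explicitly.
conn = boto.s3.connect_to_region('us-east-1')
print(conn.get_all_buckets())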