Trying to create AWS spot datafeed, getting error: InaccessibleStorageLocation

I am following the 'documentation' here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-data-feeds.html
With the goal of creating an ec2 spot instance price datafeed.
I use this command:
aws ec2 create-spot-datafeed-subscription --region us-east-1 --bucket mybucketname-spot-instance-price-data-feed
And get this response:
An error occurred (InaccessibleStorageLocation) when calling the CreateSpotDatafeedSubscription operation: The specified bucket does not exist or does not have enough permissions
The bucket exists, I am able to upload files into it.
I don't have any idea what to do - there's a blizzard of AWS options for giving permissions and the documentation makes only vague statements, nothing concrete about what might need to be done.
Can anyone suggest please what I can do to get past this error? thanks!

What worked for me for this issue was enabling ACLs. I initially had ACLs disabled and was running into the same permissions issue. I updated the S3 bucket to enable ACLs, the message went away, and I was able to create and see the spot pricing feed.
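For anyone who wants to do the same from the CLI rather than the console, a rough sketch (the bucket name is the one from the question; "enabling ACLs" here means setting the bucket's Object Ownership to something other than "Bucket owner enforced"):
aws s3api put-bucket-ownership-controls --bucket mybucketname-spot-instance-price-data-feed --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'
and then retry the original command:
aws ec2 create-spot-datafeed-subscription --region us-east-1 --bucket mybucketname-spot-instance-price-data-feed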

Related

Amazon S3 Access Denied when calling aws lambda publish-layer-version CLI

I tried to run the aws lambda publish-layer-version command line in my local console using my personal AWS credentials, but I got an Amazon S3 Access Denied error for the bucket in which the zip layer is stored.
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip
An error occurred (AccessDeniedException) when calling the PublishLayerVersion operation: Your access has been denied by S3, please make sure your request credentials have permission to GetObject for {URI of layer in my S3 bucket}. S3 Error Code: AccessDenied. S3 Error Message: Access Denied
When I run the aws s3 cp command on the same bucket, it all works perfectly fine:
aws s3 cp s3://bucket_name/layers/libs.zip libs.zip
So I assume that the aws lambda command line is using another role than the one used when I run the aws s3 cp command line? Or maybe it uses another mechanism that I just don't know about. But I couldn't find anything about it in the AWS documentation.
I've just read that AWS can return a 403 when it can't find the file. So maybe it could be an issue with the command syntax?
Thank you for your help.
For your call to publish-layer-version you may need to specify the --content parameter with 3 parts:
S3Bucket=string,S3Key=string,S3ObjectVersion=string
It looks like you are missing S3ObjectVersion. I don't know how AWS evaluates and applies the parts of that parameter, but it could be attempting to do something more since the version is not specified, and hence giving you that error. Or it could be returning an error code that is not quite right and is misleading. Try adding S3ObjectVersion and let me know what you get.
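For example, a sketch using the bucket and key from your question (the version id is a placeholder you would look up first, e.g. with head-object on a versioned bucket):
aws s3api head-object --bucket bucket_name --key layers/libs.zip --query VersionId
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip,S3ObjectVersion=YOUR_VERSION_ID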
Otherwise, AWS permission evaluation can be complex. AWS publishes a policy evaluation flow diagram that is a good one to follow to track down permissions issues, but I suspect that AccessDenied is a bit of a red herring in this case.
Your Lambda does not have privileges (S3:GetObject).
Try running aws sts get-caller-identity. This will give you the IAM role your command line is using.
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy and make sure s3:GetObject is listed.
Also, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
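A quick way to check all of this from the CLI (sketch; the role name is the wizard default mentioned above and the bucket/key are the ones from the question):
aws sts get-caller-identity
aws iam list-attached-role-policies --role-name oneClick_lambda_s3_exec_role
aws iam list-role-policies --role-name oneClick_lambda_s3_exec_role
aws s3api head-object --bucket bucket_name --key layers/libs.zip
If the head-object call fails the same way, the problem is the object or its permissions rather than the publish-layer-version call itself.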

Spark credential chain ordering - S3 Exception Forbidden

I'm running Spark 2.4 on an EC2 instance. I am assuming an IAM role and setting the key/secret key/token in the sparkSession.sparkContext.hadoopConfiguration, along with the credentials provider as "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider".
When I try to read a dataset from s3 (using s3a, which is also set in the hadoop config), I get an error that says
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 7376FE009AD36330, AWS Error Code: null, AWS Error Message: Forbidden
read command:
val myData = sparkSession.read.parquet("s3a://myBucket/myKey")
I've repeatedly checked the S3 path and it's correct. My assumed IAM role has the right privileges on the S3 bucket. The only thing I can figure at this point is that Spark has some sort of hidden credential chain ordering and, even though I have set the credentials in the Hadoop config, it is still grabbing credentials from somewhere else (my instance profile?). But I have no way to diagnose that.
Any help is appreciated. Happy to provide any more details.
spark-submit will pick up your env vars and set them as the fs.s3a access, secret, and session keys, overwriting any you've already set.
If you only want to use the IAM credentials, just set fs.s3a.aws.credentials.provider to com.amazonaws.auth.InstanceProfileCredentialsProvider; it'll be the only one used
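For example, passed through spark-submit (sketch; any spark.hadoop.* setting is copied into the Hadoop configuration, and the rest of the submit arguments are placeholders):
spark-submit --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider ...rest-of-your-submit-args...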
Further Reading: Troubleshooting S3A

Permissions for creating and attaching an EBS Volume to an EC2Resource in AWS Data Pipeline

I need more local disk than is available to EC2Resources in an AWS Data Pipeline. The simplest solution seems to be to create and attach an EBS volume.
I have added the ec2:CreateVolume and ec2:AttachVolume permissions to both DataPipelineDefaultRole and DataPipelineDefaultResourceRole.
I have also tried setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for an IAM role with the same permissions in the shell, but alas no luck.
Is there some other permission needed, is it not using the roles it says it uses or is this not possible at all?
The Data Pipeline ShellCommandActivity has a script URI pointing to a shell script that executes this command:
aws ec2 create-volume --availability-zone eu-west-1b --size 100 --volume-type gp2 --region eu-west-1 --tag-specifications 'ResourceType=volume,Tags=[{Key=purpose,Value=unzip_file}]'
The error I get is:
An error occurred (UnauthorizedOperation) when calling the CreateVolume operation: You are not authorized to perform this operation.
I had completely ignored the encrypted authorization message, thinking it was just some internal AWS thing. Your comment made me take a second look, kdgregory. It turns out the reference to CreateVolume was somewhat of a red herring.
Decrypting the message, I see that it fails with "action":"ec2:CreateTags", meaning the role lacks permission to create tags. I added that permission and it works now.
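For anyone else hitting this, the encoded blob in an UnauthorizedOperation error can be decoded with the CLI, provided your credentials have sts:DecodeAuthorizationMessage:
aws sts decode-authorization-message --encoded-message <the-encoded-message-from-the-error>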

AWS - S3 - Create bucket which is already existing - through CLI

Through the AWS Console, if you try to create a bucket that already exists, the console will not allow you to create it again.
But through the CLI it will allow you to "create" it again: when you execute the make bucket command against the existing bucket, it just shows a success message.
It's really confusing that the CLI doesn't show an error; the two interfaces behave differently.
Any idea why this is the behavior and why the CLI doesn't throw an error for the same operation?
In a distributed system, when you ask to create a resource, most of the time it will upsert. Throwing an error back is a costly process.
If you want to check whether the bucket exists and whether you have appropriate privileges, use the following command.
aws s3api head-bucket --bucket my-bucket
Documentation:
http://docs.aws.amazon.com/cli/latest/reference/s3api/head-bucket.html
This operation is useful to determine if a bucket exists and you have permission to access it.
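In a script you can simply branch on the exit status, for example (rough sketch):
if aws s3api head-bucket --bucket my-bucket 2>/dev/null; then echo "bucket exists and is accessible"; else echo "bucket missing or no access"; fi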
Hope it helps.

Static content for AWS EC2 with IAM role

Reading through the many resources on how to utilize temporary AWS credentials in a launched EC2 instance, I can't seem to get an extremely simple POC running.
Desired:
Launch an EC2 instance
SSH in
Pull a piece of static content from a private S3 bucket
Steps:
Create an IAM role
Spin up a new EC2 instance with the above IAM role specified; SSH in
Set the credentials using aws configure and the details that (successfully) populated in http://169.254.169.254/latest/meta-data/iam/security-credentials/iam-role-name
Attempt to use the AWS CLI directly to access the file
IAM role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::bucket-name/file.png"
        }
    ]
}
When I use the AWS CLI to access the file, this error is thrown:
A client error (Forbidden) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
Which step did I miss?
For future reference, the issue was in how I was calling the AWS CLI; previously I was running:
aws configure
...and supplying the details found in the auto-generated role profile.
Once I simply allowed it to find its own temporary credentials and just specified the only other required parameter manually (region):
aws s3 cp s3://bucket-name/file.png file.png --region us-east-1
...the file pulled fine. Hopefully this'll help out someone in the future!
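One way to sanity-check this setup (sketch): confirm that the CLI is not still preferring the static keys you pasted in with aws configure, and that the metadata service is actually serving role credentials:
aws configure list
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
If aws configure list shows the access key coming from shared-credentials-file, those keys take precedence over the instance profile, which is exactly the situation described above.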
Hope this might help some other Googler that lands here.
The
A client error (403) occurred when calling the HeadObject operation: Forbidden
error can also be caused if your system clock is too far off. I was 12 hours in the past and got this error. Set the clock to within a minute of the true time, and the error went away.
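On a Linux instance, something along these lines usually sorts it out (sketch; the exact tooling varies by distro):
date -u
sudo ntpdate pool.ntp.org
(or, on systemd-based systems, sudo timedatectl set-ntp true)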
According to Granting Access to a Single S3 Bucket Using Amazon IAM, the IAM policy may need to be applied to two resources:
The bucket proper (e.g. "arn:aws:s3:::4ormat-knowledge-base")
All the objects inside the bucket (e.g. "arn:aws:s3:::4ormat-knowledge-base/*")
Yet another tripwire. Damn!
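A sketch of what that looks like applied from the CLI (role name, policy name, and bucket are placeholders; use put-user-policy instead if the policy hangs off a user):
aws iam put-role-policy --role-name my-ec2-role --policy-name s3-bucket-access --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::bucket-name" },
    { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::bucket-name/*" }
  ]
}'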
I just got this error because I had an old version of awscli:
Broken:
$ aws --version
aws-cli/1.2.9 Python/3.4.0 Linux/3.13.0-36-generic
Works:
$ aws --version
aws-cli/1.5.4 Python/3.4.0 Linux/3.13.0-36-generic
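Depending on how the CLI was originally installed, upgrading is usually just (sketch):
pip install --upgrade awscli
(or use your distro's package manager or the bundled installer instead)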
You also get this error if the key doesn't exist in the bucket.
Double-check the key -- I had a script that was adding an extra slash at the beginning of the key when it POSTed items into the bucket. So this:
aws s3 cp --region us-east-1 s3://bucketname/path/to/file /tmp/filename
failed with "A client error (Forbidden) occurred when calling the HeadObject operation: Forbidden."
But this:
aws s3 cp --region us-east-1 s3://bucketname//path/to/file /tmp/filename
worked just fine. Not a permissions issue at all, just boneheaded key creation.
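An easy way to see what the keys actually look like (sketch):
aws s3api list-objects-v2 --bucket bucketname --query 'Contents[].Key'
This prints the stored keys verbatim, so a leading or doubled slash shows up immediately.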
I had this error because I didn't attach a policy to my IAM user.
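For example, attaching the AWS-managed read-only S3 policy to a user from the CLI (sketch; the user name is a placeholder):
aws iam attach-user-policy --user-name my-user --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess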
tl;dr: wildcard file globbing worked better in s3cmd for me.
As cool as aws-cli is, for my one-time S3 file manipulation issue that didn't immediately work as I hoped and thought it might, I ended up installing and using s3cmd.
Whatever syntax and behind-the-scenes work I conceptually imagined, s3cmd was more intuitive and accommodating to my baked-in preconceptions.
Maybe it isn't the answer you came here for, but it worked for me.