Reading through the many resources on how to use temporary AWS credentials in a launched EC2 instance, I can't seem to get an extremely simple POC running.
Desired:
Launch an EC2 instance
SSH in
Pull a piece of static content from a private S3 bucket
Steps:
Create an IAM role
Spin up a new EC2 instance with the above IAM role specified; SSH in
Set the credentials using aws configure and the details that (successfully) populated in http://169.254.169.254/latest/meta-data/iam/security-credentials/iam-role-name
Attempt to use the AWS CLI directly to access the file
IAM role policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::bucket-name/file.png"
        }
    ]
}
When I use the AWS CLI to access the file, this error is thrown:
A client error (Forbidden) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
Which step did I miss?
For future reference, the issue was in how I was calling the AWS CLI; previously I was running:
aws configure
...and supplying the details found in the auto-generated role profile.
Once I simply allowed it to find its own temporary credentials and just specified the only other required parameter manually (region):
aws s3 cp s3://bucket-name/file.png file.png --region us-east-1
...the file pulled fine. Hopefully this'll help out someone in the future!
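For anyone doing the same from Python: boto3 resolves the instance-profile credentials from the metadata service in exactly the same way, so no aws configure is needed there either. A minimal sketch, using the bucket and key from the question:

import boto3

# No keys configured anywhere: on an EC2 instance with an attached IAM role,
# boto3 falls back to the instance-profile credentials from the metadata service.
s3 = boto3.client("s3", region_name="us-east-1")
s3.download_file("bucket-name", "file.png", "file.png")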
Hope this might help some other Googler that lands here.
The following error can also be caused if your system clock is too far off:
A client error (403) occurred when calling the HeadObject operation: Forbidden
I was 12 hours in the past and got this error. Set the clock to within a minute of the true time, and the error went away.
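If you suspect clock skew, a quick way to measure it (before fixing NTP) is to compare your local UTC time against the Date header an AWS endpoint returns. This is just a rough diagnostic sketch in Python, not an official check:

import urllib.request, urllib.error
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# Grab the Date header from an AWS endpoint (even an error or redirect response carries one).
try:
    headers = urllib.request.urlopen("https://s3.amazonaws.com").headers
except urllib.error.HTTPError as e:
    headers = e.headers
server_time = parsedate_to_datetime(headers["Date"])
skew = abs((datetime.now(timezone.utc) - server_time).total_seconds())
print(f"local clock is off by roughly {skew:.0f} seconds")  # SigV4 breaks past ~15 minutes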
According to Granting Access to a Single S3 Bucket Using Amazon IAM, the IAM policy may need to be applied to two resources:
The bucket proper (e.g. "arn:aws:s3:::4ormat-knowledge-base")
All the objects inside the bucket (e.g. "arn:aws:s3:::4ormat-knowledge-base/*")
Yet another tripwire. Damn!
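For reference, here is a sketch of what that two-resource policy could look like when attached with boto3. The split of bucket-level vs object-level actions (s3:ListBucket vs s3:GetObject) is my reading of the linked article, and the role and policy names are placeholders:

import json
import boto3

# Hypothetical inline policy covering both the bucket itself and the objects inside it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:ListBucket"],
         "Resource": "arn:aws:s3:::4ormat-knowledge-base"},
        {"Effect": "Allow", "Action": ["s3:GetObject"],
         "Resource": "arn:aws:s3:::4ormat-knowledge-base/*"},
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="iam-role-name",              # placeholder: the role attached to the instance
    PolicyName="s3-single-bucket-access",  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)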
I just got this error because I had an old version of awscli:
Broken:
$ aws --version
aws-cli/1.2.9 Python/3.4.0 Linux/3.13.0-36-generic
Works:
$ aws --version
aws-cli/1.5.4 Python/3.4.0 Linux/3.13.0-36-generic
You also get this error if the key doesn't exist in the bucket.
Double-check the key -- I had a script that was adding an extra slash at the beginning of the key when it POSTed items into the bucket. So this:
aws s3 cp --region us-east-1 s3://bucketname/path/to/file /tmp/filename
failed with "A client error (Forbidden) occurred when calling the HeadObject operation: Forbidden."
But this:
aws s3 cp --region us-east-1 s3://bucketname//path/to/file /tmp/filename
worked just fine. Not a permissions issue at all, just boneheaded key creation.
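If you want to see exactly which key names actually landed in the bucket, a quick listing makes a stray leading slash obvious. A small sketch with placeholder names:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
# List a few keys to spot oddities such as a leading "/".
resp = s3.list_objects_v2(Bucket="bucketname", Prefix="", MaxKeys=20)
for obj in resp.get("Contents", []):
    print(repr(obj["Key"]))  # repr() makes "/path/to/file" vs "path/to/file" easy to see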
I had this error because I didn't attach a policy to my IAM user.
tl;dr: wildcard file globbing worked better in s3cmd for me.
As cool as aws-cli is, for my one-time S3 file manipulation issue that didn't immediately work as I hoped it might, I ended up installing and using s3cmd.
Whatever syntax and behind-the-scenes behavior I had conceptually imagined, s3cmd was more intuitive and accommodating to my baked-in preconceptions.
Maybe it isn't the answer you came here for, but it worked for me.
Related
I am following the 'documentation' here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-data-feeds.html
with the goal of creating an EC2 spot instance price data feed.
I use this command:
aws ec2 create-spot-datafeed-subscription --region us-east-1 --bucket mybucketname-spot-instance-price-data-feed
And get this response:
An error occurred (InaccessibleStorageLocation) when calling the CreateSpotDatafeedSubscription operation: The specified bucket does not exist or does not have enough permissions
The bucket exists, I am able to upload files into it.
I don't have any idea what to do - there's a blizzard of AWS options for giving permissions and the documentation makes only vague statements, nothing concrete about what might need to be done.
Can anyone please suggest what I can do to get past this error? Thanks!
What worked for me for this issue was enabling ACLs. I initially had ACLs disabled and was running into the same permissions issue. I updated the S3 bucket to enable ACLs, and then the message went away and I was able to create and see the spot pricing data feed.
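In case it helps anyone scripting this: re-enabling ACLs boils down to switching the bucket's object-ownership setting back to one that allows them. A rough boto3 sketch, using the bucket name from the question (the ownership setting and prefix are assumptions):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
# "ObjectWriter" (or "BucketOwnerPreferred") re-enables ACLs; "BucketOwnerEnforced" disables them.
s3.put_bucket_ownership_controls(
    Bucket="mybucketname-spot-instance-price-data-feed",
    OwnershipControls={"Rules": [{"ObjectOwnership": "ObjectWriter"}]},
)

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_spot_datafeed_subscription(
    Bucket="mybucketname-spot-instance-price-data-feed",
    Prefix="spot-feed/",  # assumed prefix, optional
)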
I tried to run the aws lambda publish-layer-version command line in my local console using my personal AWS credentials, but I got an Amazon S3 Access Denied error for the bucket in which the zip layer is stored.
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip
An error occurred (AccessDeniedException) when calling the PublishLayerVersion operation: Your access has been denied by S3, please make sure your request credentials have permission to GetObject for {URI of layer in my S3 bucket}. S3 Error Code: AccessDenied. S3 Error Message: Access Denied
When I run the aws s3 cp command against the same bucket, it all works perfectly fine:
aws s3 cp s3://bucket_name/layers/libs.zip libs.zip
So I assume that the aws lambda command line is using a different role than the one used when I run the aws s3 cp command line? Or maybe it uses another mechanism that I just don't know about. But I couldn't find anything about it in the AWS documentation.
I've just read that AWS can return a 403 if it couldn't find the file. So maybe it could be an issue with the command syntax?
Thank you for your help.
For your call to publish-layer-version you may need to specify the --content parameter with 3 parts:
S3Bucket=string,S3Key=string,S3ObjectVersion=string
It looks like you are missing S3ObjectVersion. I don't know what the AWS behavior is for evaluating and applying the parts of that parameter, but it could be attempting to do something more since the version is not specified and hence giving you that error. Or it could be returning an error code that is not quite right and is misleading. Try adding S3ObjectVersion and let me know what you get.
Otherwise, AWS permission evaluation can be complex. The AWS policy evaluation flow diagram is a good one to follow to track down permissions issues, but I suspect that AccessDenied is a bit of a red herring in this case.
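If you want to try that S3ObjectVersion suggestion from Python rather than the CLI, the equivalent boto3 call looks roughly like this; the version id is a placeholder you would read from the bucket's object versions, and it only applies on a versioned bucket:

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")
response = lambda_client.publish_layer_version(
    LayerName="layer_name",
    Content={
        "S3Bucket": "bucket_name",
        "S3Key": "layers/libs.zip",
        "S3ObjectVersion": "example-version-id",  # placeholder: requires a versioned bucket
    },
)
print(response["LayerVersionArn"])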
Your Lambda does not have privileges (S3:GetObject).
Try running aws sts get-caller-identity. This will show you which IAM identity (user or role) your command line is using.
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy and make sure s3:GetObject is listed.
Also, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
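A quick way to check both points at once (which identity the SDK resolves to, and whether that identity can read the layer zip) is something like the following sketch, using the bucket and key from the question:

import boto3
from botocore.exceptions import ClientError

# Which credentials is the SDK/CLI actually resolving? (user or role)
print(boto3.client("sts").get_caller_identity()["Arn"])

# Can those same credentials read the layer archive?
s3 = boto3.client("s3")
try:
    s3.head_object(Bucket="bucket_name", Key="layers/libs.zip")
    print("HeadObject/GetObject works")
except ClientError as e:
    print("S3 says:", e.response["Error"]["Code"])  # a 403 also shows up when the key does not exist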
I have a Glue ETL job, created by CloudFormation. This job extracts data from RDS Aurora and writes to S3.
When I run this job, I get the error below.
The job has an IAM service role.
This service role:
allows the Glue and RDS services to assume it,
has the arn:aws:iam::aws:policy/AmazonS3FullAccess and arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole managed policies attached, and
allows the full range of rds:*, kms:*, and s3:* actions on the corresponding RDS, KMS, and S3 resources.
I have the same error whether the S3 bucket is encrypted with AES256 or aws:kms.
I get the same error whether the job has a Security Configuration or not.
I have a job doing exactly the same thing that I created manually and can run successfully without a Security Configuration.
What am I missing? Here's the full error log:
"/mnt/yarn/usercache/root/appcache/application_1...5_0002/container_15...45_0002_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o145.pyWriteDynamicFrame.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 2.0 failed 4 times, most recent failure: Lost task 3.3 in stage 2.0 (TID 30, ip-10-....us-west-2.compute.internal, executor 1): com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: F...49), S3 Extended Request ID: eo...wXZw=
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1588
Unfortunately the error doesn't tell us much except that it's failing during the write of your DynamicFrame.
There are only a handful of possible reasons for the 403; you can check whether you have covered them all:
Bucket policy rules on the destination bucket.
The IAM role needs permissions (although you mention having s3:*).
If this is cross-account, then there is more to check with regard to things like allow policies on the bucket and user (in general, a trust for the canonical account ID is simplest).
I don't know how complicated your policy documents might be for the role and bucket, but remember that an explicit Deny statement takes precedence over an Allow.
If the issue is KMS-related, I would check to ensure the subnet you select for the Glue connection has a route to reach the KMS endpoints (you can add a VPC endpoint for KMS).
Make sure the issue is not with the temporary directory that is also configured for your job, or with intermediate write operations that are not your final output.
Check that your account is the "object owner" of the location you are writing to (normally an issue when reading/writing data between accounts).
If none of the above works, you can shed some more light on your setup, perhaps the code for the write operation.
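One way to narrow several of those checks down without re-running the Glue job is the IAM policy simulator. A rough sketch; the role ARN, bucket and actions below are placeholders, and note the simulator does not fully account for bucket policies:

import boto3

iam = boto3.client("iam")
# Simulate the Glue service role against the destination; ARNs below are placeholders.
result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/my-glue-service-role",
    ActionNames=["s3:PutObject", "s3:GetObject"],
    ResourceArns=["arn:aws:s3:::my-target-bucket/output/*"],
)
for r in result["EvaluationResults"]:
    print(r["EvalActionName"], "->", r["EvalDecision"])  # allowed / explicitDeny / implicitDeny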
In addition to Lydon's answer, error 403 is also received if your Data Source location is the same as the Data Target (both defined when creating a Job in Glue). Change either of these if they are identical and the issue will be resolved.
You should add a Security Configuration (available under the Security tab on the Glue console), providing an S3 encryption mode of either SSE-KMS or SSE-S3.
Then select that security configuration while creating your job under Advanced Properties.
Also verify your IAM role and S3 bucket policy.
It will work.
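If you'd rather script it than click through the console, a security configuration can also be created with boto3 along these lines (the name is an example; swap in SSE-KMS plus a KmsKeyArn if that's what you use):

import boto3

glue = boto3.client("glue", region_name="us-west-2")
glue.create_security_configuration(
    Name="example-sse-s3-config",  # example name
    EncryptionConfiguration={
        "S3Encryption": [{"S3EncryptionMode": "SSE-S3"}],  # or "SSE-KMS" plus a KmsKeyArn
    },
)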
How are you granting the iam:PassRole permission for the Glue role?
{
    "Sid": "AllowAccessToRoleOnly",
    "Effect": "Allow",
    "Action": [
        "iam:PassRole",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListRolePolicies",
        "iam:ListAttachedRolePolicies"
    ],
    "Resource": "arn:aws:iam::*:role/<role>"
}
Usually we create roles using <project>-<role>-<env>, e.g. xyz-glue-dev, where the project name is xyz and the env is dev. In that case we use "Resource": "arn:aws:iam::*:role/xyz-*-dev".
For me it was two things.
The access policy for the bucket should be given correctly, i.e. bucket/*; here I was missing the /* part.
A VPC endpoint must be created for Glue to access S3 (see the sketch below): https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html
After these two settings, my glue job ran successfully. Hope this helps.
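The second point can be scripted too. A rough boto3 sketch for creating the S3 gateway endpoint, with the VPC and route table ids as placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
# Gateway endpoint so Glue workers inside the VPC can reach S3 without internet access.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
)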
Make sure you have given the right policies.
I was facing the same issue, thought I had the role configured well.
But after I erased the role and followed this step, it worked ;]
I need more local disk than is available to EC2Resources in an AWS Data Pipeline. The simplest solution seems to be to create and attach an EBS volume.
I have added the EC2:CreateVolume and EC2:AttachVolume permissions to both DataPipelineDefaultRole and DataPipelineDefaultResourceRole.
I have also tried setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for an IAM role with the same permissions in the shell, but alas no luck.
Is there some other permission needed, is it not using the roles it says it uses, or is this not possible at all?
The Data Pipeline ShellCommandActivity has a script URI pointing to a shell script that executes this command:
aws ec2 create-volume --availability-zone eu-west-1b --size 100 --volume-type gp2 --region eu-west-1 --tag-specifications 'ResourceType=volume,Tags=[{Key=purpose,Value=unzip_file}]'
The error I get is:
An error occurred (UnauthorizedOperation) when calling the CreateVolume operation: You are not authorized to perform this operation.
I had completely ignored the encrypted authorization message, thinking it was just some internal AWS thing. Your comment made me take a second look, kdgregory. Turns out the reference to the CreateVolume was somewhat of a red herring.
Decoding the message, I see that it fails with "action":"ec2:CreateTags", meaning it lacks the permission to create tags. I added this permission and it works now.
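For anyone else who skips past that encrypted blob: it can be decoded with STS (your identity needs sts:DecodeAuthorizationMessage). A small sketch; the encoded message is whatever the UnauthorizedOperation error printed:

import json
import boto3

sts = boto3.client("sts")
# Paste the "Encoded authorization failure message" from the UnauthorizedOperation error here.
decoded = sts.decode_authorization_message(EncodedMessage="<encoded-message-from-the-error>")
detail = json.loads(decoded["DecodedMessage"])
print(json.dumps(detail, indent=2))  # look for the denied "action", e.g. "ec2:CreateTags" here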
By googling, I found this tutorial on accessing S3 from an EC2 instance without a credential file. I followed its instructions and got the desired instance, as confirmed in the AWS web console.
However, I don't want to do it manually using the web console every time. How can I create such EC2 instances using boto3?
I tried
import boto3

s = boto3.Session(profile_name='dev', region_name='us-east-1')
ec2 = s.resource('ec2')
rc = ec2.create_instances(
    ImageId='ami-0e297018',
    InstanceType='t2.nano',
    MinCount=1,
    MaxCount=1,
    KeyName='my-key',
    IamInstanceProfile={'Name': 'harness-worker'},
)
where harness-worker is the IAM role with access to S3, but nothing else.
It is also the role used in the first approach, following the AWS web console tutorial.
Then I got an error saying:
ClientError: An error occurred (UnauthorizedOperation) when calling the RunInstances operation: You are not authorized to perform this operation.
Did I do something obviously wrong?
The dev profile has AmazonEC2FullAccess. Without the line IamInstanceProfile={'Name': 'harness-worker'}, create_instances is able to create the instance.
To assign an IAM instance profile to an instance, AmazonEC2FullAccess is not sufficient. In addition, you need the following permission to pass the role to the instance.
See: Granting an IAM User Permission to Pass an IAM Role to an Instance
{
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "*"
}
First, you can give full IAM access to your dev profile and confirm that it works. Then remove full IAM access, grant only iam:PassRole, and try again.
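Once it works, if you want something tighter than Resource: *, the PassRole grant can be attached to the dev user and scoped to just the harness-worker role. A sketch with a placeholder user name and account id:

import json
import boto3

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="dev-user",                    # placeholder: whoever the 'dev' profile maps to
    PolicyName="pass-harness-worker-role",  # placeholder policy name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::123456789012:role/harness-worker",  # placeholder account id
        }],
    }),
)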
This has nothing to do with the role you are trying to assign to the new EC2 instance. The Python script you are running doesn't have the RunInstances permission.