I tried to run the aws lambda publish-layer-version command in my local console using my personal AWS credentials, but I got an Amazon S3 Access Denied error for the bucket in which the layer zip is stored.
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip
An error occurred (AccessDeniedException) when calling the PublishLayerVersion operation: Your access has been denied by S3, please make sure your request credentials have permission to GetObject for {URI of layer in my S3 bucket}. S3 Error Code: AccessDenied. S3 Error Message: Access Denied
When I run the aws s3 cp command against the same bucket, it works perfectly fine:
aws s3 cp s3://bucket_name/layers/libs.zip libs.zip
So I assume that the aws lambda command line is using a different role than the one used when I run the aws s3 cp command line? Or maybe it uses another mechanism that I just don't know about. But I couldn't find anything about it in the AWS documentation.
I've just read that AWS can return a 403 when it can't find the file. So maybe it could be an issue with the command syntax?
Thank you for your help.
For your call to publish-layer-version you may need to specify the --content parameter with 3 parts:
S3Bucket=string,S3Key=string,S3ObjectVersion=string
It looks like you are missing S3ObjectVersion. I don't know how AWS evaluates and applies the parts of that parameter, but it could be attempting to do something more because the version is not specified, and hence giving you that error. Or it could be returning an error code that is not quite right and is misleading. Try adding S3ObjectVersion and let me know what you get.
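For example, assuming the bucket has versioning enabled, you could look up the object's version ID and pass it explicitly (the version-id value below is a placeholder you'd take from the first command's output):

aws s3api list-object-versions --bucket bucket_name --prefix layers/libs.zip
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip,S3ObjectVersion=<version-id>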
Otherwise, AWS permission evaluation can be complex. The AWS policy evaluation flowchart is a good one to follow to track down permission issues, but I suspect that AccessDenied is a bit of a red herring in this case.
Your Lambda does not have privileges (s3:GetObject).
Try running aws sts get-caller-identity. This will give you the IAM identity (user or role) your command line is using.
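For reference, the output looks something like this (the account number and names here are placeholders):

{
    "UserId": "AIDASAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user"
}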
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy. It will look something like the policy below. Make sure s3:GetObject is listed.
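As a rough sketch (the bucket name is a placeholder), a minimal statement granting s3:GetObject looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}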
Also, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
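One way to check, assuming the bucket and key from the question, is a direct head-object call, which fails loudly if the object is missing or unreadable:

aws s3api head-object --bucket bucket_name --key layers/libs.zip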
Related
I successfully authenticate with two-factor authentication, but when using aws s3 ls I keep getting
An error occurred (InvalidToken) when calling the ListBuckets operation: The provided token is malformed or otherwise invalid.
And I do have admin rights.
The issue was that I wasn't passing the --region in, e.g. aws s3 --region us-gov-west-1 ls. I suppose this could be set with an environment variable too. That error message is a candidate for improvement.
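For example, the default region can be set through the environment so you don't have to pass the flag on every command:

export AWS_DEFAULT_REGION=us-gov-west-1
aws s3 ls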
This error also occurs when the AWS CLI reads the aws_session_token and aws_security_token declared in the ~/.aws/credentials file, which might be associated with a previously used account. Removing both and leaving just the access key and secret associated with the account where the bucket lives will force the CLI to establish the connection.
Delete the ~/.aws/credentials file from your user account and reconfigure the AWS CLI.
If you were already associated with another account, there is a high chance of this type of error.
Run aws configure
You may leave the access key and access key ID blank if you have an IAM role attached
Set a value for 'region'
Now you will be able to successfully run 'aws s3 ls'
Else run 'aws s3 ls --region <region-name>'
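For reference, the aws configure prompts look like this (the region and output values here are just examples):

$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-east-1
Default output format [None]: json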
If you are using AWS Single Sign-On, you can pass --profile <profile_name> and it should solve the issue.
In the ~/.aws/credentials file, remove the session token and it will work.
I have a problem with removing an S3 bucket that is stuck in an error state in the console.
I tried to remove it via aws cli but the result is of course:
aws s3 rb s3://hierarchy --force
fatal error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I can't change anything (bucket policy, etc.) on this bucket.
I should mention that I'm an administrator and have all privileges on the AWS account.
Google does not help me. I would like to know whether it is possible to remove a bucket like this.
It is possible that there is a Bucket Policy containing a Deny statement that is preventing your access.
Find somebody who has access to the root credentials; they should be able to delete it.
If that fails, contact AWS Support.
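As a sketch, if root (or otherwise permitted) credentials are available, you could inspect and remove a blocking bucket policy before retrying the delete:

aws s3api get-bucket-policy --bucket hierarchy
aws s3api delete-bucket-policy --bucket hierarchy
aws s3 rb s3://hierarchy --force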
I'm trying to add MFA-deletion to my S3 bucket with the AWS-cli with the following command:
aws s3api put-bucket-versioning --bucket <my-bucket-name> --versioning-configuration '{"MFADelete":"Enabled","Status":"Enabled"}' --mfa 'arn:aws:iam::<code-found-at-iam-page>:mfa/root-account-mfa-device <my-google-authenticator-code>'
but the response I get is this:
An error occurred (InvalidRequest) when calling the PutBucketVersioning operation: DevPay and Mfa are mutually exclusive authorization methods.
which makes no sense, as I have never used DevPay. The IAM role for my instance has S3FullAccess attached, so that shouldn't be a problem either.
Any suggestions on what the problem might be would be appreciated.
I submitted a case to AWS and they answer with this:
That error response typically gets returned when the API cannot perform the MFA Delete task due to the request being made with non-root credentials. The only way to turn on MFA Delete is to use the credentials from the root user of the account.
Simple solution!
I just got the same error when attempting to perform this using the AWS CloudShell, although I was using the root user (and confirmed I was root using the CloudShell). The same command worked when run from the local CLI.
To enable/disable MFA delete on an S3 bucket, you must configure your AWS command line with the root access keys.
Check the Prerequisites part
https://aws.amazon.com/premiumsupport/knowledge-center/s3-undelete-configuration/
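Once the call succeeds, you can verify the configuration with get-bucket-versioning; the expected output is something like:

aws s3api get-bucket-versioning --bucket <my-bucket-name>

{
    "Status": "Enabled",
    "MFADelete": "Enabled"
}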
Can anyone explain this behaviour:
When I try to download a file from S3, I get the following error:
An error occurred (403) when calling the HeadObject operation: Forbidden.
Command line used:
aws s3 cp s3://bucket/raw_logs/my_file.log .
However, when I use the S3 console website, I'm able to download the file without issues.
The access key used by the command line is correct. I verified this, and other AWS operations via the command line work fine. The access key is tied to the same user account I use in the AWS console.
So I assume you're sure about the IAM policy of your user and that the file exists in your bucket.
If you have set a default region in your configuration but the bucket was not created in that region (yes, S3 buckets are created in a specific region), the CLI will not find it. Make sure to add the region flag to the CLI:
aws s3 cp s3://bucket/raw_logs/my_file.log . --region <region of the bucket>
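If you don't know the bucket's region, you can look it up (a null LocationConstraint means us-east-1):

aws s3api get-bucket-location --bucket bucket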
Other notes:
Make sure to upgrade to the latest version of the AWS CLI.
This can also be caused by an unsynchronized system clock. I don't know the internals, but for some requests the CLI compares the system clock against S3, so if you're out of sync it might cause issues.
I had a similar issue due to having two-factor authentication enabled on my account. Check out how to configure 2FA for the aws cli here: https://aws.amazon.com/premiumsupport/knowledge-center/authenticate-mfa-cli/
I have the AWS CLI installed on an EC2 instance, and I configured it by running aws configure and giving it my AWSAccessKeyId and AWSSecretKey. If I run the command aws s3 ls, it returns the name of my S3 bucket (call it "mybucket").
But, if I then try aws s3 cp localfolder/ s3://mybucket/ --recursive I get an error that looks like
A client error (AccessDenied) occurred when calling the CreateMultipartUpload operation: Anonymous users cannot initiate multipart uploads. Please authenticate.
I thought that by running aws configure and giving it my root key, I was effectively giving the AWS CLI everything it needs to authenticate? Is there something I am missing regarding copying to an S3 bucket as opposed to listing them?
Thought I would add a very similar issue that I had, where I could list buckets but could not write to a given bucket, returning the error:
An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied
If the bucket uses server-side encryption you'll need to add the --sse flag to be able to write to this bucket.
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
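For example, the upload from the question would become something like this (AES256 here is an assumption; match the value to the bucket's encryption configuration):

aws s3 cp localfolder/ s3://mybucket/ --recursive --sse AES256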
Root access keys and secret keys have full control and full privileges to interact with AWS. Please try running aws configure again to recheck the settings and try again.
PS: using root access keys is highly discouraged. Please consider creating an IAM user (which can take admin privileges, like root) and using its keys instead.
If you have the environment variables AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and AWS_REGION set, the AWS CLI gives them higher precedence than the credentials you specify with aws configure.
So, in my case, bash command unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY solved the problem.
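To see which overriding variables are set in the current shell, and to clear them (AWS_SESSION_TOKEN is included below because it can also shadow the credentials file):

env | grep ^AWS_
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN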