How to fix expired token in AWS s3 copy command? - amazon-web-services

I need to run the command aws s3 cp <filename> <bucketname> from an EC2 RHEL instance to copy a file from the instance to an S3 bucket.
When I run this command, I receive this error: An error occurred (ExpiredToken) when calling the PutObject operation: The provided token has expired
I also found that this same error occurs when trying to run many other CLI commands from the instance.
I do not want to change my IAM role, because the command was previously working perfectly fine and IAM policy changes must go through an approval process. I have double-checked the IAM role the instance is assuming, and it still contains the correct configuration allowing PutObject on the correct resources.
What can I do to allow AWS CLI commands to work again in my instance?

AWS API tokens are time-sensitive, and VMs in the cloud tend to suffer from clock drift.
Check that the time is accurate on the RHEL instance, and use NTP servers to make sure any drift is regularly corrected.
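For example, on a RHEL instance using chrony (a sketch; the package and service names may differ on your setup), you could check and correct the clock with:
timedatectl status
sudo yum install -y chrony
sudo systemctl enable --now chronyd
chronyc tracking
sudo chronyc makestep
Once the clock is back within a few minutes of real time, the expired-token errors from the CLI should stop.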

Related

How to get AWS policy needed to run a specific CLI command?

I am new to AWS. I am trying to import an OVA to an AMI and use it for an EC2 instance as described here:
One of the commands it asks you to run is
aws ec2 describe-import-image-tasks --import-task-ids import-ami-1234567890abcdef0
When I do this I get
An error occurred (UnauthorizedOperation) when calling the DescribeImportImageTasks operation: You are not authorized to perform this operation.
I believe this means I need to add the appropriate role (with a policy allowing describe-import-image-tasks) to my CLI user.
In the IAM console, I see a search feature to filter policies for a role which I will assign to my user. However, it doesn't seem to have any results for describe-import-image-tasks.
Is there an easy way to determine which policies are needed to run an AWS Cli command?
There is not an easy way. The CLI commands usually (but not always) map to a single IAM action that you need permission to perform. In your case, it appears you need the ec2:DescribeImportImageTasks permission, as listed here.
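If you end up writing the policy yourself rather than finding a managed one, a minimal inline policy sketch for just this action might look like the following (attach it to the user or role your CLI is using; in practice the VM import walkthrough will likely need additional ec2 import permissions as well):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeImportImageTasks",
      "Resource": "*"
    }
  ]
}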

Amazon S3 Access Denied when calling aws lambda publish-layer-version CLI

I tried to run aws lambda publish-layer-version command line in my local console using my personal aws credentials, but I've got an Amazon S3 Access Denied error for the bucket in which the zip layer is stored.
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip
An error occurred (AccessDeniedException) when calling the PublishLayerVersion operation: Your access has been denied by S3, please make sure your request credentials have permission to GetObject for {URI of layer in my S3 bucket}. S3 Error Code: AccessDenied. S3 Error Message: Access Denied
When I run the aws s3 cp command against the same bucket, it works perfectly fine:
aws s3 cp s3://bucket_name/layers/libs.zip libs.zip
So I assume that the aws lambda command line is using a different role than the one used when I run the aws s3 cp command line? Or maybe it uses another mechanism that I just don't know about. But I couldn't find anything about it in the AWS documentation.
I've just read that AWS can return a 403 when it can't find the file, so maybe it's an issue with the command syntax?
Thank you for your help.
For your call to publish-layer-version you may need to specify the --content parameter with 3 parts:
S3Bucket=string,S3Key=string,S3ObjectVersion=string
It looks like you are missing S3ObjectVersion. I don't know exactly how AWS evaluates and applies the parts of that parameter, but since the version is not specified it may be attempting to do something more and hence giving you that error, or it may simply be returning a misleading error code. Try adding S3ObjectVersion and let me know what you get.
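For example (the object version ID here is a placeholder; you can look it up with aws s3api list-object-versions):
aws lambda publish-layer-version --layer-name layer_name --content S3Bucket=bucket_name,S3Key=layers/libs.zip,S3ObjectVersion=<version-id>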
Otherwise, AWS permission evaluation can be complex. The AWS policy evaluation flow diagram is a good one to follow to track down permissions issues, but I suspect that AccessDenied is a bit of a red herring in this case.
Your Lambda does not have privileges (S3:GetObject).
Try running aws sts get-caller-identity. This will give you the IAM role your command line is using.
Go to the IAM dashboard and check the role associated with your Lambda execution. If you used the AWS wizard, it automatically creates a role called oneClick_lambda_s3_exec_role. Click on Show Policy and make sure s3:GetObject is listed.
Also, AWS returns 403 (access denied) when the file does not exist. Be sure the target file is in the S3 bucket.
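A quick way to check all of this from the command line (a sketch; the role-name placeholder is whatever get-caller-identity reports, and the bucket/key are from the question):
aws sts get-caller-identity
aws iam list-attached-role-policies --role-name <role-name-from-caller-identity>
aws s3api head-object --bucket bucket_name --key layers/libs.zip
If head-object fails with 403 under the same credentials, the problem is the caller's s3:GetObject permission or a missing object, rather than anything specific to publish-layer-version.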

AWS Batch job getting Access Denied on S3 despite user role

I am deploying my first batch job on AWS. When I run my docker image in an EC2 instance, the script called by the job runs fine. I have assigned an IAM role to this instance to allow S3 access.
But when I run the same script as a job on AWS Batch, it fails due to Access Denied errors on S3 access. This is despite the fact that in the Job Definition, I assign an IAM role (created for Elastic Container Service Task) that has full S3 access.
If I launch my batch job with a command that does not access S3, it runs fine.
Since using an IAM role for the job definition seems not to be sufficient, how then do I grant S3 permissions within a Batch Job on AWS?
EDIT
So if I just run aws s3 ls interlinked as my job, that also runs properly. What does not work is running the R script:
library(aws.s3)
get_bucket("mybucket")[[1]]
Which fails with Access Denied.
So it seems the issue is either with the aws.s3 package or, more likely, my use of it.
The problem turned out to be that I had IAM Roles specified for both my compute environment (more restrictive) and my jobs (less restrictive).
In this scenario (where role-based credentials are desired), the aws.s3 R package uses aws.signature and aws.ec2metadata to pull temporary credentials from the role. It pulls the compute environment role (which is an EC2 role), but not the job role.
My solution was just to grant the required S3 permissions to my compute environment's role.
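For reference, attaching a broad read policy to the compute environment's instance role from the CLI looks something like this (the role name is a placeholder; in practice you would scope the policy down to the buckets the job actually needs):
aws iam attach-role-policy --role-name <compute-environment-instance-role> --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess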

DevPay and Mfa are mutually exclusive authorization methods

I'm trying to add MFA-deletion to my S3 bucket with the AWS-cli with the following command:
aws s3api put-bucket-versioning --bucket <my-bucket-name> --versioning-configuration '{"MFADelete":"Enabled","Status":"Enabled"}' --mfa 'arn:aws:iam::<code-found-at-iam-page>:mfa/root-account-mfa-device <my-google-authenticator-code>'
but the response I get is this:
An error occurred (InvalidRequest) when calling the
PutBucketVersioning operation: DevPay and Mfa are mutually exclusive
authorization methods.
which makes no sense as I have never used DevPay. My security group for the instance has S3FullAccess enabled so that shouldn't be a problem either.
Any suggestions on what the problem might be would be appreciated.
I submitted a case to AWS and they answer with this:
That error response typically gets returned when the API cannot perform the MFA Delete task due to the request being made with non-root credentials. The only way to turn on MFA Delete is to use the credentials from the root user of the account.
Simple solution!
I just got the same error when attempting to perform this using the AWS CloudShell, although I was using the root user (and confirmed I was root using the CloudShell). The same command worked when run from the local CLI.
To enable/disable MFA Delete on an S3 bucket, you must configure your AWS command line with the root user's access keys.
Check the Prerequisites section of this article:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-undelete-configuration/
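A rough sketch of the whole procedure, assuming you have created an access key for the root user (the profile name, account ID, and MFA code are placeholders):
aws configure --profile root-user
aws s3api put-bucket-versioning --profile root-user --bucket <my-bucket-name> --versioning-configuration '{"MFADelete":"Enabled","Status":"Enabled"}' --mfa 'arn:aws:iam::<account-id>:mfa/root-account-mfa-device <current-mfa-code>'
Remember to deactivate or delete the root access key once you are done.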

S3 download works from console, but not from commandline

Can anyone explain this behaviour:
When I try to download a file from S3, I get the following error:
An error occurred (403) when calling the HeadObject operation: Forbidden.
Commandline used:
aws s3 cp s3://bucket/raw_logs/my_file.log .
However, when I use the S3 console website, I'm able to download the file without issues.
The access key used by the commandline is correct. I verified this, and other AWS operations via commandline work fine. The access key is tied to the same user account I use in the AWS console.
So I assume you're sure about the IAM policy of your user and that the file exists in your bucket.
If you have set a default region in your configuration but the bucket was not created in that region (yes, S3 buckets are created in a specific region), the CLI will not find it. Make sure to add the region flag:
aws s3 cp s3://bucket/raw_logs/my_file.log . --region <region of the bucket>
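If you are unsure which region the bucket was created in, you can usually look it up first:
aws s3api get-bucket-location --bucket bucket
(an empty or null LocationConstraint in the response means us-east-1)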
Other notes:
make sure to upgrade to the latest version of the AWS CLI
this can also be caused by an unsynchronized system clock; the CLI signs requests using the system time, and for some commands it is compared against S3's time, so if your clock is too far out of sync it can cause issues
I had a similar issue due to having two-factor authentication enabled on my account. Check out how to configure MFA for the AWS CLI here: https://aws.amazon.com/premiumsupport/knowledge-center/authenticate-mfa-cli/
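If MFA is the cause, a minimal sketch of getting temporary credentials for the CLI (the ARN and token code are placeholders) is:
aws sts get-session-token --serial-number arn:aws:iam::<account-id>:mfa/<user-name> --token-code <6-digit-code>
export AWS_ACCESS_KEY_ID=<AccessKeyId from the response>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey from the response>
export AWS_SESSION_TOKEN=<SessionToken from the response>
After exporting those three values, retry the aws s3 cp command.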