I want to get my S3 bucket creation dates using s3api, but it is not showing the creation date that is shown in the AWS console.
When I tried with the CLI, the output was like this:
C:\Users\hero>aws s3api list-buckets
{
"Buckets": [
{
"CreationDate": "2018-09-12T11:32:04.000Z",
"Name": "campaign-app-api-prod-serverlessdeploymentbucket-"
},
{
"CreationDate": "2018-09-12T10:06:44.000Z",
"Name": "s3-api-log-events"
}
]
}
In the console, the creation dates shown are different.
Why am I getting different dates from s3api? Is my interpretation of CreationDate wrong?
Any help is appreciated.
Thanks
The Date Created field displayed in the web console reflects the actual creation date registered in us-east-1, while the AWS CLI and SDKs display a creation date that depends on the specified region (or the default region set in your configuration).
When using an endpoint other than us-east-1, the CreationDate you receive is actually the last modified time according to the bucket's last replication time in this region. This date can change when making changes to your bucket, such as editing its bucket policy.
So, to get the CreationDate values that the S3 console shows, you need to specify the us-east-1 region.
Try it like this in the AWS CLI:
aws s3api list-buckets --region "us-east-1"
Check out this GitHub issue.
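If you prefer boto3, here is a minimal sketch of the same idea, pinning the client to us-east-1 to mirror the --region flag above (bucket contents and output will of course depend on your account):
import boto3

# Pin the client to us-east-1 so CreationDate matches what the console shows,
# per the behaviour described above.
s3 = boto3.client('s3', region_name='us-east-1')

for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'], bucket['CreationDate'])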
This Python script:
import boto3

client = boto3.client('s3')
response = client.list_buckets()
print(response['Buckets'])  # shows the same CreationDate values as the CLI
returns the same dates as the AWS CLI and s3cmd. Therefore, it is not a bug in the CLI/s3cmd. Instead, it is different information coming from the Amazon S3 API call. So, I'm not sure where the console gets the 'correct' dates.
If there is a bug anywhere, it would be in the ListBuckets API call to AWS. This is best raised with AWS Support.
I'm working on a system that receives new findings from Amazon GuardDuty. Most access in our organization is delegated to IAM roles instead of directly to users, so the findings usually result from the actions of assumed roles, and the actor identity of the GuardDuty finding looks something like this:
"resource": {
"accessKeyDetails": {
"accessKeyId": "ASIALXUWSRBXSAQZECAY",
"principalId": "AROACDRML13PHK3X7J1UL:129545928468",
"userName": "my-permitted-role",
"userType": "AssumedRole"
},
"resourceType": "AccessKey"
},
I know that the accessKeyId is created when a security principal performs the sts:AssumeRole action. But I can't tell who assumed the role in the first place! If it was an IAM user, I want to know the username. Is there a way to programmatically map temporary AWS STS keys (starting with ASIA...) back to the original user?
Ideally I'm looking for a method that runs in less than 30 seconds so I can use it as part of my security event pipeline to enrich GuardDuty findings with the missing information.
I've already looked at aws-cli and found aws cloudtrail lookup-events but it lacks the ability to narrow the query to a specific accessKeyId so it takes a loooong time to run. I've explored the CloudTrail console but it's only about as capable as aws-cli here. I tried saving my CloudTrail logs to S3 and running an Athena query, but that was pretty slow too.
This seems like it would be a common requirement. Is there something obvious that I'm missing?
Actually, aws-cli can perform a lookup on the session! Just make sure to specify ResourceName as the attribute key in the lookup attributes.
$ aws cloudtrail lookup-events \
--lookup-attributes 'AttributeKey=ResourceName,AttributeValue=ASIALXUWSRBXSAQZECAY' \
--query 'Events[*].Username'
[
"the.user#example.com"
]
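If you want this inside your enrichment pipeline rather than the CLI, the equivalent boto3 call looks roughly like this (the access key ID is just the example value from the question):
import boto3

cloudtrail = boto3.client('cloudtrail')

# Look up CloudTrail events that reference the temporary access key as a resource.
response = cloudtrail.lookup_events(
    LookupAttributes=[{
        'AttributeKey': 'ResourceName',
        'AttributeValue': 'ASIALXUWSRBXSAQZECAY',
    }]
)

# Each matching event carries the identity that originally assumed the role.
print({event.get('Username') for event in response['Events']})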
When I use:
aws dynamodb list-tables
I am getting:
{
"TableNames": []
}
I used the default region, the same one I set when running aws configure.
I also tried with a specific region name.
When I check in the AWS console I also don't see any DynamoDB tables, but I am able to access the table programmatically, and I can add and modify items as well.
But there is no result when I use aws dynamodb list-tables, and no tables when I check in the console.
This is clearly a result of the commands looking in the wrong place.
DynamoDB tables are stored in an account within a region. So, if there is definitely a table but none are showing, then the credentials being used either belong to a different AWS Account or the command is being sent to the wrong region.
You can specify a region like this:
aws dynamodb list-tables --region ap-southeast-2
If you are able to access the table programmatically, then make sure the same credentials being used by your program are also being used for the AWS CLI.
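As a quick way to check which account and region your code is actually hitting, a small boto3 sketch like this can help (the profile and region names are placeholders; use whatever your application uses):
import boto3

# Use the same profile/region your application uses.
session = boto3.Session(profile_name='default', region_name='ap-southeast-2')

# Confirms which AWS account the credentials belong to.
print(session.client('sts').get_caller_identity()['Account'])

# Lists the tables visible in that account and region.
print(session.client('dynamodb').list_tables()['TableNames'])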
You need to specify the endpoint in the command for it to work. Since the DynamoDB above is being used programmatically (by a web app) against a local endpoint, the CLI has to be pointed at the same endpoint.
This command will work:
aws dynamodb list-tables --endpoint-url http://localhost:8080 --region us-west-2
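For completeness, if the application is talking to a local DynamoDB endpoint like that, the boto3 side has to point at the same endpoint too; a minimal sketch, with the port taken from the command above:
import boto3

# Without endpoint_url, boto3 (and the CLI) talk to the regional AWS endpoint,
# which is why tables created locally never show up there.
dynamodb = boto3.client('dynamodb',
                        endpoint_url='http://localhost:8080',
                        region_name='us-west-2')

print(dynamodb.list_tables()['TableNames'])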
Check the region you set up in AWS configuration vs what is displayed at the top of the AWS console. I had my app configured to us-east-2 but the AWS console had us-east-1 as the default. I was able to view my table once the correct region was selected in the AWS console.
It's possible to enable object logging on an S3 bucket to CloudTrail using the following guide, but this is through the console.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html
I've been trying to figure out a way to do this via the CLI, since I want to do this for many buckets, but I haven't had much luck. I've set up a new CloudTrail trail on my account and would like to map it to S3 buckets to do object logging. Is there a CLI command for this?
# This is to grant s3 log bucket access (no link to cloudtrail here)
aws s3api put-bucket-logging
It looks like you'll need to use the CloudTrail put_event_selectors() command:
DataResources
CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda functions.
(dict): The Amazon S3 buckets or AWS Lambda functions that you specify in your event selectors for your trail to log data events.
Do a search for object-level in the documentation page.
Disclaimer: The comment by puji in the accepted answer works. This is an expansion of that answer with the resources.
Here is the AWS documentation on how to do this through the AWS CLI
https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/put-event-selectors.html
The specific CLI command you are interested in is the following, from the above documentation. The original documentation lists two objects in the same bucket; I have modified it to cover all the objects in two buckets.
aws cloudtrail put-event-selectors --trail-name TrailName --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::mybucket1/","arn:aws:s3:::mybucket2/"]}]}]'
If you want all the S3 buckets in your AWS account covered, you can use arn:aws:s3::: instead of a list of bucket ARNs, like the following.
aws cloudtrail put-event-selectors --trail-name TrailName2 --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::"]}]}]'
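If you are scripting this across many trails, the same call is available in boto3 as put_event_selectors; a rough sketch (the trail name is a placeholder):
import boto3

cloudtrail = boto3.client('cloudtrail')

# Enable object-level (data event) logging for all S3 buckets on an existing trail.
cloudtrail.put_event_selectors(
    TrailName='TrailName2',
    EventSelectors=[{
        'ReadWriteType': 'All',
        'IncludeManagementEvents': True,
        'DataResources': [{
            'Type': 'AWS::S3::Object',
            'Values': ['arn:aws:s3:::'],  # all buckets; use specific bucket ARNs to narrow it
        }],
    }],
)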
Is it possible to get the ARN of an S3 bucket via the AWS command line?
I have looked through the documentation for aws s3api ... and aws s3 ... and have not found a way to do this.
It's always arn:PARTITION:s3:::NAME-OF-YOUR-BUCKET. If you know the name of the bucket and in which partition it's located, you know the ARN. No need to 'get' it from anywhere.
The PARTITION will be aws, aws-us-gov, or aws-cn depending on whether you're in general AWS, GovCloud, or China respectively.
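Since the format is fixed, you can simply build the ARN yourself; a trivial sketch (bucket name is a placeholder):
def bucket_arn(bucket_name, partition='aws'):
    # S3 bucket ARNs have no region or account-id component.
    return f'arn:{partition}:s3:::{bucket_name}'

print(bucket_arn('my-bucket'))            # arn:aws:s3:::my-bucket
print(bucket_arn('my-bucket', 'aws-cn'))  # arn:aws-cn:s3:::my-bucket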
You can also get your S3 bucket ARN by selecting the bucket with the tick mark in the S3 management console, which pops up a sidebar with an option to copy your S3 bucket ARN.
AWS articles spell out the ARN format but never say where to go to see it. Highlighting my S3 bucket and seeing that Copy Bucket ARN option kept me sane for a few more hours.
Reading through the many resources on how to use temporary AWS credentials in a launched EC2 instance, I can't seem to get an extremely simple POC running.
Desired:
Launch an EC2 instance
SSH in
Pull a piece of static content from a private S3 bucket
Steps:
Create an IAM role
Spin up a new EC2 instance with the above IAM role specified; SSH in
Set the credentials using aws configure and the details that (successfully) populated in http://169.254.169.254/latest/meta-data/iam/security-credentials/iam-role-name
Attempt to use the AWS CLI directly to access the file
IAM role policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::bucket-name/file.png"
}
]
}
When I use the AWS CLI to access the file, this error is thrown:
A client error (Forbidden) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
Which step did I miss?
For future reference, the issue was in how I was calling the AWS CLI; previously I was running:
aws configure
...and supplying the details found in the auto-generated role profile.
Once I simply allowed it to find its own temporary credentials and just specified the only other required parameter manually (region):
aws s3 cp s3://bucket-name/file.png file.png --region us-east-1
...the file pulled fine. Hopefully this'll help out someone in the future!
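The same applies if you use boto3 on the instance: don't feed it the temporary keys by hand, just let the default credential chain pick up the instance-profile credentials. A minimal sketch, using the bucket, key and region from this example:
import boto3

# No explicit credentials: boto3 falls back to the instance-profile credentials
# from the metadata service and refreshes them automatically as they expire.
s3 = boto3.client('s3', region_name='us-east-1')
s3.download_file('bucket-name', 'file.png', 'file.png')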
Hope this might help some other Googler that lands here.
The
A client error (403) occurred when calling the HeadObject operation: Forbidden
error can also be caused if your system clock is too far off. I was 12 hours in the past and got this error. Set the clock to within a minute of the true time, and the error went away.
According to Granting Access to a Single S3 Bucket Using Amazon IAM, the IAM policy may need to be applied to two resources:
The bucket proper (e.g. "arn:aws:s3:::4ormat-knowledge-base")
All the objects inside the bucket (e.g. "arn:aws:s3:::4ormat-knowledge-base/*")
Yet another tripwire. Damn!
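In practice that means the policy ends up with two Resource entries. Here is a sketch of attaching such an inline policy with boto3 (role, policy and bucket names are placeholders from this thread; adjust the actions to what you actually need):
import boto3
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:ListBucket"],
         "Resource": "arn:aws:s3:::bucket-name"},    # the bucket itself
        {"Effect": "Allow",
         "Action": ["s3:GetObject"],
         "Resource": "arn:aws:s3:::bucket-name/*"},  # every object inside it
    ],
}

# Attach the inline policy to the instance role.
boto3.client('iam').put_role_policy(
    RoleName='iam-role-name',
    PolicyName='allow-s3-read',
    PolicyDocument=json.dumps(policy),
)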
I just got this error because I had an old version of awscli:
Broken:
$ aws --version
aws-cli/1.2.9 Python/3.4.0 Linux/3.13.0-36-generic
Works:
$ aws --version
aws-cli/1.5.4 Python/3.4.0 Linux/3.13.0-36-generic
You also get this error if the key doesn't exist in the bucket.
Double-check the key -- I had a script that was adding an extra slash at the beginning of the key when it POSTed items into the bucket. So this:
aws s3 cp --region us-east-1 s3://bucketname/path/to/file /tmp/filename
failed with "A client error (Forbidden) occurred when calling the HeadObject operation: Forbidden."
But this:
aws s3 cp --region us-east-1 s3://bucketname//path/to/file /tmp/filename
worked just fine. Not a permissions issue at all, just boneheaded key creation.
I had this error because I didn't attach a policy to my IAM user.
tl;dr: wildcard file globbing worked better in s3cmd for me.
As cool as aws-cli is, for my one-time S3 file manipulation issue that didn't immediately work as I hoped it might, I ended up installing and using s3cmd.
Whatever syntax and behind-the-scenes work I conceptually imagined, s3cmd was more intuitive and accommodating to my baked-in preconceptions.
Maybe it isn't the answer you came here for, but it worked for me.