AWS CloudTrail without configuring a trail - amazon-web-services

I am new to AWS CloudTrail. I have gone through a number of AWS docs and I am unable to figure out how to read the last 7 days of CloudTrail logs programmatically, without configuring a trail and without getting charged.
I want to write a Java program that reads audit logs from AWS and processes them. I know we can create a trail and read the logs it delivers to an S3 bucket from a program, but I don't know how to read the last 7 days of logs using the AWS SDK API, the same way we can on the AWS console (where the last 7 days of audit logs can be read free of charge).
We could do this with the cloudtrail-processing-library, but the properties/configuration file for this library requires an SQS URL as an argument, which I don't have and don't know how to obtain.
Please assist me so that I can write the Java program.
Regards,
Sachin

You can use the lookupEvents API in CloudTrail to get the list of events (any create/update/delete operations).
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/cloudtrail/AWSCloudTrail.html#lookupEvents-com.amazonaws.services.cloudtrail.model.LookupEventsRequest
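For example, a minimal sketch using the AWS SDK for Java v1 (the client linked above) might look like the following. It assumes default credentials and region are configured, and it pages through the last 7 days of event history (the same data the console shows) without requiring a trail:

```java
import com.amazonaws.services.cloudtrail.AWSCloudTrail;
import com.amazonaws.services.cloudtrail.AWSCloudTrailClientBuilder;
import com.amazonaws.services.cloudtrail.model.Event;
import com.amazonaws.services.cloudtrail.model.LookupEventsRequest;
import com.amazonaws.services.cloudtrail.model.LookupEventsResult;

import java.util.Date;

public class CloudTrailEventHistory {
    public static void main(String[] args) {
        // Uses the default credential provider chain and region configuration.
        AWSCloudTrail cloudTrail = AWSCloudTrailClientBuilder.defaultClient();

        // Look up events from the last 7 days of event history.
        Date endTime = new Date();
        Date startTime = new Date(endTime.getTime() - 7L * 24 * 60 * 60 * 1000);

        LookupEventsRequest request = new LookupEventsRequest()
                .withStartTime(startTime)
                .withEndTime(endTime);

        String nextToken = null;
        do {
            request.setNextToken(nextToken);
            LookupEventsResult result = cloudTrail.lookupEvents(request);
            for (Event event : result.getEvents()) {
                System.out.println(event.getEventTime() + " " + event.getEventName()
                        + " by " + event.getUsername());
            }
            nextToken = result.getNextToken();
        } while (nextToken != null);
    }
}
```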

The logs are stored in an S3 bucket, and you can use Amazon Athena to process and query them if you want, so you don't have to write a Java program. If you do write one, that program will need IAM permissions to read from the S3 bucket that stores the logs.
AWS Athena
How to find your CloudTrail logs
Java code examples on S3 bucket objects
Java CloudTrail SDK reference
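If you do go the trail-plus-S3 route, a rough sketch of reading the delivered log files with the AWS SDK for Java v1 might look like this. The bucket name and key prefix are placeholders, each CloudTrail log file is a gzipped JSON object, and pagination of the listing is omitted for brevity:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

public class ReadCloudTrailLogsFromS3 {
    public static void main(String[] args) throws IOException {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Placeholder bucket/prefix: trails deliver logs under
        // AWSLogs/<account-id>/CloudTrail/<region>/<yyyy>/<mm>/<dd>/
        String bucket = "my-cloudtrail-bucket";
        String prefix = "AWSLogs/";

        ListObjectsV2Result listing = s3.listObjectsV2(new ListObjectsV2Request()
                .withBucketName(bucket)
                .withPrefix(prefix));

        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            // Each CloudTrail log file is a gzipped JSON document.
            try (S3Object object = s3.getObject(bucket, summary.getKey());
                 BufferedReader reader = new BufferedReader(new InputStreamReader(
                         new GZIPInputStream(object.getObjectContent())))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line); // hand off to your JSON parser here
                }
            }
        }
    }
}
```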

Related

Cloudtrail logs is turned off for your account

I'm trying to trigger an AWS Step Function whenever a new file is uploaded to an S3 bucket. I'm using CloudWatch rules to do this, but I'm getting this warning.
I tried to follow the AWS documentation at "https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html#tutorial-cloudwatch-events-s3-step-1", but the state machine was not invoked.
Can anyone tell me what exactly I'm doing wrong?
EDIT
I created this trail, and the region is Ohio.
I found the issue: we need to enable data events as well in order for object-level S3 API calls to be recorded. This was not mentioned in the AWS document above.
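For reference, S3 data events can also be enabled on an existing trail programmatically. Below is a minimal sketch with the AWS SDK for Java v1; the trail name and bucket ARN are placeholders:

```java
import com.amazonaws.services.cloudtrail.AWSCloudTrail;
import com.amazonaws.services.cloudtrail.AWSCloudTrailClientBuilder;
import com.amazonaws.services.cloudtrail.model.DataResource;
import com.amazonaws.services.cloudtrail.model.EventSelector;
import com.amazonaws.services.cloudtrail.model.PutEventSelectorsRequest;
import com.amazonaws.services.cloudtrail.model.ReadWriteType;

public class EnableS3DataEvents {
    public static void main(String[] args) {
        AWSCloudTrail cloudTrail = AWSCloudTrailClientBuilder.defaultClient();

        // Log object-level (data) events for every object in one bucket.
        DataResource s3Objects = new DataResource()
                .withType("AWS::S3::Object")
                .withValues("arn:aws:s3:::my-upload-bucket/"); // placeholder bucket ARN

        EventSelector selector = new EventSelector()
                .withReadWriteType(ReadWriteType.All)
                .withIncludeManagementEvents(true)
                .withDataResources(s3Objects);

        cloudTrail.putEventSelectors(new PutEventSelectorsRequest()
                .withTrailName("my-trail")          // placeholder trail name
                .withEventSelectors(selector));
    }
}
```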

Creating Amazon Kinesis Data Generator Stack in a Different Region

I'm trying to create a CloudFormation stack provided by AWS here. When I click the Create a Cognito User with CloudFormation button, it directs me to the AWS console CloudFormation page in us-west-2 (Oregon); from there it's pretty much self-explanatory. The problem is, the company that I'm working for only allows work in us-west-1 (N. California). I have looked over the CloudFormation template itself and I can't find any region being mentioned. I have also asked this question in the AWS developer forum but no one has responded, and I'm wondering if anyone here knows how to create that particular stack in any region other than us-west-2 (Oregon)? Thanks!
I found a workaround for that. I faced the same problem: my company policy was set to not use us-west-2, so I couldn't use the CloudFormation JSON script provided by Amazon Kinesis Data Generator.
What I did was:
1. Download the CloudFormation JSON script provided by Amazon Kinesis Data Generator to your local machine. The download link can be found on the Amazon Kinesis Data Generator Help page.
2. Download the source code. The source code download link can also be found on the Amazon Kinesis Data Generator Help page.
3. In your AWS account, go to S3 and create an S3 bucket in the region you are allowed to use. Name it whatever you want.
4. Upload the source code downloaded in step 2 to the bucket created in step 3 (see the sketch after this list if you prefer to script this).
5. Edit the CloudFormation JSON script downloaded in step 1. Inside the script, change the bucket name referenced by the Lambda function to the name of the bucket created in step 3.
6. Go to CloudFormation and create the stack by uploading your edited script.
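If you prefer to script step 4 rather than use the console, a minimal upload sketch with the AWS SDK for Java v1 could look like this (the region, bucket name, and file path are placeholders):

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.io.File;

public class UploadKdgSource {
    public static void main(String[] args) {
        // Build the client against the region your company allows (placeholder: us-west-1).
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_WEST_1)
                .build();

        // Placeholder bucket name and local path to the downloaded source archive.
        s3.putObject("my-kdg-source-bucket",
                "amazon-kinesis-data-generator.zip",
                new File("/path/to/amazon-kinesis-data-generator.zip"));
    }
}
```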
One thing to keep in mind with this workaround: if AWSLabs changes the source code, or a newer version of the source code is released, you will have to manually check for it and update the copy in your bucket.
I hope it was clear.
I have created a JMeter plugin to publish data records to a Kinesis Data Stream.
https://github.com/JoseLuisSR/awsmeter
It works very well, and you don't need to use any additional AWS services to publish events to Kinesis, unlike Kinesis Data Generator, where you could pay additional charges for services like Cognito, CloudFormation, and Lambda that are needed to build and deploy KDG.
You just need an AWS IAM user with programmatic access; then download JMeter and install the awsmeter plugin.
If you have questions or comments let me know.
Thanks.
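If you would rather publish records from plain Java without JMeter or KDG, a minimal sketch with the AWS SDK for Java v1 might look like this (the stream name, partition key, and payload are placeholders):

```java
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.PutRecordRequest;
import com.amazonaws.services.kinesis.model.PutRecordResult;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class KinesisPublisher {
    public static void main(String[] args) {
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

        String payload = "{\"sensorId\": 1, \"temperature\": 21.5}";

        PutRecordRequest request = new PutRecordRequest()
                .withStreamName("my-data-stream")   // placeholder stream name
                .withPartitionKey("sensor-1")       // controls shard routing
                .withData(ByteBuffer.wrap(payload.getBytes(StandardCharsets.UTF_8)));

        PutRecordResult result = kinesis.putRecord(request);
        System.out.println("Stored in shard " + result.getShardId()
                + " at sequence number " + result.getSequenceNumber());
    }
}
```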

How to get notifications from S3 Server Access Logging to CloudWatch?

I used Terraform to create a new S3 bucket that automatically receives logs from three different existing S3 buckets. As a next step, I want to make the most of these logs by getting various notifications, e.g. if someone creates/deletes/modifies an S3 bucket, the relevant users get notified about the event. At the moment all the logs are a mess and the filenames of the logs are meaningless. I've been messing around with CloudWatch and CloudTrail, but I'm not sure what the right way to do this is. Can someone help me? Many thanks.

Use AWS CloudTrail to collect application logs

Is it possible to use CloudTrail to receive custom logs like application logs, access logs, and security logs?
And how long does CloudTrail keep the logs?
You might be thinking of CloudWatch Logs, which does capture, search, and groom custom logs from EC2 instances. The retention (grooming) rules are configurable.
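As an illustration of that route, here is a minimal sketch of shipping a custom application log line to CloudWatch Logs with the AWS SDK for Java v1. The log group and stream names are placeholders, and in practice you would more likely use the CloudWatch agent or a logging-framework appender:

```java
import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.CreateLogGroupRequest;
import com.amazonaws.services.logs.model.CreateLogStreamRequest;
import com.amazonaws.services.logs.model.InputLogEvent;
import com.amazonaws.services.logs.model.PutLogEventsRequest;

public class ShipCustomLog {
    public static void main(String[] args) {
        AWSLogs logs = AWSLogsClientBuilder.defaultClient();

        // Placeholder names; these calls fail if the group/stream already exist,
        // so real code would catch ResourceAlreadyExistsException.
        String group = "my-app-logs";
        String stream = "instance-1";
        logs.createLogGroup(new CreateLogGroupRequest(group));
        logs.createLogStream(new CreateLogStreamRequest(group, stream));

        // Push one application log event; the first put to a new stream needs no sequence token.
        logs.putLogEvents(new PutLogEventsRequest()
                .withLogGroupName(group)
                .withLogStreamName(stream)
                .withLogEvents(new InputLogEvent()
                        .withTimestamp(System.currentTimeMillis())
                        .withMessage("application started")));
    }
}
```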
No. CloudTrail is for AWS API activity only. It keeps the last 7 days of API activity for supported services, and the event list only includes create, modify, and delete API calls. You can optionally save the logs to an S3 bucket to keep historical API activity.
You could send VPC Flow Logs, CloudTrail logs, and AWS Config logs to CloudWatch. You can also set up an S3 bucket with lifecycle policies enabled to retain logs for as long as you need. Refer to this.
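A rough sketch of the lifecycle part with the AWS SDK for Java v1, assuming a placeholder log bucket and an example policy that transitions log objects to Glacier after 90 days (with no expiration rule, so the logs are kept):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Rule;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Transition;
import com.amazonaws.services.s3.model.StorageClass;
import com.amazonaws.services.s3.model.lifecycle.LifecycleFilter;
import com.amazonaws.services.s3.model.lifecycle.LifecyclePrefixPredicate;

public class LogBucketLifecycle {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Move log objects to Glacier after 90 days; no expiration rule, so they are retained.
        Rule archiveLogs = new Rule()
                .withId("archive-logs")
                .withFilter(new LifecycleFilter(new LifecyclePrefixPredicate("AWSLogs/")))
                .addTransition(new Transition()
                        .withDays(90)
                        .withStorageClass(StorageClass.Glacier))
                .withStatus(BucketLifecycleConfiguration.ENABLED);

        s3.setBucketLifecycleConfiguration("my-log-bucket", // placeholder bucket name
                new BucketLifecycleConfiguration().withRules(archiveLogs));
    }
}
```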

AWS S3 bucket logs vs AWS CloudTrail

What's the difference between AWS S3 logs and AWS CloudTrail?
In the CloudTrail documentation I saw this:
CloudTrail adds another dimension to the monitoring capabilities
already offered by AWS. It does not change or replace logging features
you might already be using.
CloudTrail tracks API access for infrastructure-changing events; in S3 this means creating, deleting, and modifying buckets (S3 CloudTrail docs). It is very focused on API methods that modify buckets.
S3 Server Access Logging provides web server-style logging of access to the objects in an S3 bucket. This logging is granular to the object, includes read-only operations, and includes non-API access like static web site browsing.
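For completeness, server access logging is enabled per bucket. A minimal sketch with the AWS SDK for Java v1 follows; the source and target bucket names are placeholders, and the target bucket must already grant the S3 log delivery group write access:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketLoggingConfiguration;
import com.amazonaws.services.s3.model.SetBucketLoggingConfigurationRequest;

public class EnableServerAccessLogging {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Deliver access logs for "my-data-bucket" to "my-log-bucket" under the given prefix.
        BucketLoggingConfiguration loggingConfig =
                new BucketLoggingConfiguration("my-log-bucket", "access-logs/my-data-bucket/");

        s3.setBucketLoggingConfiguration(
                new SetBucketLoggingConfigurationRequest("my-data-bucket", loggingConfig));
    }
}
```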
AWS has added one more feature since this question was asked, namely CloudTrail Data Events.
Currently there are 3 features available:
1. CloudTrail: logs almost all API calls at the bucket level (Ref)
2. CloudTrail Data Events: logs almost all API calls at the object level (Ref)
3. S3 server access logs: logs almost all access calls to S3 objects, delivered on a best-effort basis (Ref)
Now, 2 and 3 seem like similar functionalities, but they have some differences that may prompt users to use one, the other, or both (as in our case). Below are the differences I could find:
The two work at different levels of granularity. For example, CloudTrail data events can be configured for all S3 buckets in the AWS account or just for some prefix in an S3 bucket, whereas S3 server access logs are configured at the individual bucket level.
S3 server access logs seem to give more comprehensive information, such as BucketOwner, HTTPStatus, ErrorCode, etc. (full list).
Information which is not available in CloudTrail logs but is available in server access logs (reference):
- Fields for object size, total time, turn-around time, and HTTP referer in log records
- Lifecycle transitions, expirations, and restores
- Logging of keys in a batch delete operation
- Authentication failures
CloudTrail does not deliver logs for requests that fail authentication (in which the provided credentials are not valid). However, it does include logs for requests in which authorization fails (AccessDenied) and requests that are made by anonymous users.
If a request is made by a different AWS account, you will see the CloudTrail log in your account only if the bucket owner owns or has full access to the object in the request. If that is not the case, the logs will only be seen in the requester's account. The logs for the same request will, however, be delivered to the server access logs of your account without any additional requirements.
AWS Support recommends making decisions using CloudTrail logs, and if you also need the additional information that is not available in CloudTrail logs, you can then use server access logs.
There are two reasons to use CloudTrail Logs over S3 Server Access Logs:
1. You are interested in bucket-level activity logging. CloudTrail has that; S3 logs do not.
2. You have a log analysis setup that involves CloudWatch log streams. The basic S3 logs just store log events as files in some S3 bucket, and from there it's up to you to process them (though most log analytics services can do this for you).
Bottom line: use CloudTrail, which costs extra, if you have a specific scenario that requires it. Otherwise, the "standard" S3 Server Access Logs are good enough.
From the CloudTrail developer guide (https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html):
Using CloudTrail Logs with Amazon S3 Server Access Logs and CloudWatch Logs
You can use AWS CloudTrail logs together with server access logs for Amazon S3. CloudTrail logs provide you with detailed API tracking for Amazon S3 bucket-level and object-level operations, while server access logs for Amazon S3 provide you visibility into object-level operations on your data in Amazon S3. For more information about server access logs, see Amazon S3 Server Access Logging.
You can also use CloudTrail logs together with CloudWatch for Amazon S3. CloudTrail integration with CloudWatch logs delivers S3 bucket-level API activity captured by CloudTrail to a CloudWatch log stream in the CloudWatch log group you specify. You can create CloudWatch alarms for monitoring specific API activity and receive email notifications when the specific API activity occurs. For more information about CloudWatch alarms for monitoring specific API activity, see the AWS CloudTrail User Guide. For more information about using CloudWatch with Amazon S3, see Monitoring Metrics with Amazon CloudWatch.
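As a sketch of that integration, assuming CloudTrail is already delivering to a CloudWatch Logs group (the group name, filter pattern, and SNS topic ARN below are placeholders), you could create a metric filter and an alarm with the AWS SDK for Java v1 roughly like this:

```java
import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.ComparisonOperator;
import com.amazonaws.services.cloudwatch.model.PutMetricAlarmRequest;
import com.amazonaws.services.cloudwatch.model.Statistic;
import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.MetricTransformation;
import com.amazonaws.services.logs.model.PutMetricFilterRequest;

public class AlarmOnBucketDeletion {
    public static void main(String[] args) {
        AWSLogs logs = AWSLogsClientBuilder.defaultClient();
        AmazonCloudWatch cloudWatch = AmazonCloudWatchClientBuilder.defaultClient();

        // Count DeleteBucket calls appearing in the CloudTrail log group (placeholder names).
        logs.putMetricFilter(new PutMetricFilterRequest()
                .withLogGroupName("CloudTrail/DefaultLogGroup")
                .withFilterName("S3DeleteBucketFilter")
                .withFilterPattern("{ ($.eventSource = \"s3.amazonaws.com\") && ($.eventName = \"DeleteBucket\") }")
                .withMetricTransformations(new MetricTransformation()
                        .withMetricNamespace("CloudTrailMetrics")
                        .withMetricName("S3DeleteBucketCount")
                        .withMetricValue("1")));

        // Alarm (and notify an SNS topic) whenever the metric is >= 1 in a 5-minute period.
        cloudWatch.putMetricAlarm(new PutMetricAlarmRequest()
                .withAlarmName("S3BucketDeletedAlarm")
                .withNamespace("CloudTrailMetrics")
                .withMetricName("S3DeleteBucketCount")
                .withStatistic(Statistic.Sum)
                .withPeriod(300)
                .withEvaluationPeriods(1)
                .withThreshold(1.0)
                .withComparisonOperator(ComparisonOperator.GreaterThanOrEqualToThreshold)
                .withAlarmActions("arn:aws:sns:us-east-1:123456789012:placeholder-topic"));
    }
}
```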
AWS CloudTrail is an AWS service for logging all account activity on different AWS resources. It also tracks things like IAM console logins. Once the CloudTrail service is enabled, you can go to the CloudTrail console, see all the activity, and apply filters. While enabling it, you can also choose to log these activities and send the data to Amazon CloudWatch, where you can apply filters and create alarms to notify you when a certain kind of activity happens.
S3 logging enables logging of basic activity on your S3 buckets/objects.
CloudTrail logs API calls made against your AWS account.
These CloudTrail logs are stored in an Amazon S3 bucket.
The two offer different services.
The definition you have shared from the CloudTrail doc:
CloudTrail adds another dimension to the monitoring capabilities already offered by AWS. It does not change or replace logging features you might already be using.
It means you might have already activated some of the other logging features offered by other AWS services, like ELB logging, etc.
But when you enable CloudTrail monitoring, you need not worry about your previous logging functionalities, as they will still be active.
You will receive logs from all the services.
So enabling CloudTrail logging does not change or replace the logging features you might already be using.
Hope it helps. :)