Find the terminated EC2 instance count - amazon-web-services

Is there a way to find the number of EC2 instances that were launched in the last one to six months, across all regions (both running and terminated)?
From a similar question (below) I can only get the current status (running|stopped|terminated), not anything from past months:
How to see all running Amazon EC2 instances across all regions?
Please advise. This is purely for audit purposes.
Thanks in advance.

AWS CloudTrail makes it easier to ensure compliance with internal policies and regulatory standards by providing a history of activity in your AWS account.
AWS has an Event history view if you have CloudTrail enabled. Please go through this AWS page for clear instructions.
If you'd like to use the AWS CLI instead, this documentation provides all the details.
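If you'd rather script it, here is a minimal boto3 sketch (the names and the look-back window are illustrative) that pages through Event history for RunInstances in every region. Keep in mind that Event history only retains roughly the last 90 days, so for older months you will need a trail, as the next answer describes.
import boto3
from datetime import datetime, timedelta

# How far back to look; Event history itself only keeps ~90 days.
start_time = datetime.utcnow() - timedelta(days=90)

regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]

for region in regions:
    cloudtrail = boto3.client("cloudtrail", region_name=region)
    pages = cloudtrail.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
        StartTime=start_time,
    )
    for page in pages:
        for event in page["Events"]:
            print(region, event["EventTime"], event["EventId"])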

I would recommend a combination of CloudTrail logs stored in S3 and Athena to run the query. The problem with CloudTrail Event history alone is its 90-day retention window, and your requirement reaches back as far as six months.
To deliver log files to an S3 bucket, CloudTrail must have the required permissions, and it cannot be configured as a Requester Pays bucket. CloudTrail automatically attaches the required permissions to a bucket when you create an Amazon S3 bucket as part of creating or updating a trail in the CloudTrail console.
To set up Athena, you can configure it through the CloudTrail console:
Open the CloudTrail console at https://console.aws.amazon.com/cloudtrail/
In the navigation pane, choose Event history.
Choose Create Athena table.
For Storage location, use the down arrow to select the Amazon S3 bucket where log files are stored for the trail to query.
Choose Create table. The table is created with a default name that includes the name of the Amazon S3 bucket.
Then you can run a query similar to this in Athena:
SELECT eventname,
       useridentity.principalid,
       awsregion,
       eventtime
FROM cloudtrail_logs
WHERE eventtime >= '2021-02-01T00:00:00Z'
  AND eventtime < '2021-08-30T00:00:00Z'
  AND eventname = 'RunInstances'
(If you also want to see when instances were terminated over the same window, add TerminateInstances to the eventname filter.)
References
Create S3 Bucket Policy for CloudTrail
Query CloudTrail logs with Athena
Athena Search CloudTrail Logs

Related

AWS Amplify send logs to Splunk

I'd like to send Amplify monitoring data (access logs, metrics) to Splunk; that would be the best-case scenario. To begin with, it would be OK if I could at least store them in another service like S3, or, even better, link them with CloudWatch, as I haven't found whether those logs are taken from CloudWatch log groups.
My question is whether there's a way to get those metrics outside of the Amplify service.
There's a way you can send CloudWatch logs to your third-party apps.
Two major steps:
Export CloudWatch logs to S3.
Configure a Lambda function on the S3 bucket and write your logic to read and send the logs to the third-party app every time a file is written to the bucket (a sketch follows the quoted steps below).
CloudWatch allows you to export logs to S3.
From AWS docs:
To export data to Amazon S3 using the CloudWatch console:
Sign in as the IAM user that you created in Step 2: Create an IAM user with full access to Amazon S3 and CloudWatch Logs.
Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
In the navigation pane, choose Log groups.
On the Log Groups screen, choose the name of the log group.
Choose Actions, Export data to Amazon S3.
On the Export data to Amazon S3 screen, under Define data export, set the time range for the data to export using From and To.
If your log group has multiple log streams, you can provide a log stream prefix to limit the log group data to a specific stream. Choose Advanced, and then for Stream prefix, enter the log stream prefix.
Under Choose S3 bucket, choose the account associated with the Amazon S3 bucket.
For S3 bucket name, choose an Amazon S3 bucket.
For S3 Bucket prefix, enter the randomly generated string that you specified in the bucket policy.
Choose Export to export your log data to Amazon S3.
To view the status of the log data that you exported to Amazon S3, choose Actions and then View all exports to Amazon S3.
Once you have exported logs into S3, you can set up a simple S3 Lambda trigger to read and send these logs to the third-party application (Splunk in this case) using its APIs.
This way you also keep the logs in S3 for future use.
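As a rough sketch of that trigger, not a drop-in implementation: the Splunk HTTP Event Collector URL and token below are placeholders, and this assumes you forward each exported line through HEC. CloudWatch export files are gzip-compressed.
import gzip
import json
import urllib.request
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

# Placeholders: point these at your own Splunk HTTP Event Collector.
SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def handler(event, context):
    # Invoked by an S3 ObjectCreated trigger on the export bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        text = gzip.decompress(body).decode("utf-8")  # export files are gzipped
        for line in text.splitlines():
            req = urllib.request.Request(
                SPLUNK_HEC_URL,
                data=json.dumps({"event": line}).encode("utf-8"),
                headers={"Authorization": "Splunk " + SPLUNK_HEC_TOKEN},
            )
            urllib.request.urlopen(req)
In practice you'd batch lines into fewer HEC calls and add retries; one request per line is just the simplest thing that works.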

How to get the list of users who are accessing objects in S3 buckets?

Scenario:
My client has 80+ S3 buckets and 1000+ applications running in their AWS account. I want to get the list of IAM users/roles who are accessing the objects in all the S3 buckets.
Method 1: Initially I tried to fetch it from CloudTrail Event history, but no luck: Event history was not showing the object-level activity.
Method 2: I created a CloudTrail trail to log the activities. But it captures all the management-level activity happening throughout the account, which makes it hard to find the S3 logs alone (as mentioned, there are 80+ buckets and 1000+ applications in the account).
Method 3: S3 server access logs: if I enable this option, it creates a log entry for every action on the objects (that is, when I read a log file, that read creates yet another log, so the log count keeps multiplying).
If anyone has an effective way to find the list of IAM users/roles who are accessing S3 bucket objects, please help.
Thanks in advance.
For each bucket, configure object-level logging.
Once that is complete, you can use the CloudTrail API to filter events and extract IAM identities making the requests.
aws cloudtrail lookup-events --lookup-attributes AttributeKey=ResourceType,AttributeValue=AWS::S3::Object --query 'Events[*].Username'
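An equivalent boto3 sketch is below; it also digs the caller's type and ARN out of the raw event record, so roles aren't lumped together by Username. One caveat: lookup-events covers roughly the last 90 days and is geared toward management events, so if your object-level (data) events don't show up there, query the trail's log files in S3 instead (for example with Athena, as described in the first answer above).
import json
import boto3

cloudtrail = boto3.client("cloudtrail")

identities = set()
pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{"AttributeKey": "ResourceType", "AttributeValue": "AWS::S3::Object"}]
)
for page in pages:
    for event in page["Events"]:
        # CloudTrailEvent is the raw event record as a JSON string.
        identity = json.loads(event["CloudTrailEvent"]).get("userIdentity", {})
        identities.add((identity.get("type"), identity.get("arn")))

for item in sorted(identities, key=str):
    print(item)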

Get the cost or bandwidth contributed by a file to the total AWS bill

I am using AWS S3 to serve assets for my website. Even though I have added a Cache-Control metadata header to all my assets, my daily overall bandwidth usage almost doubled in the past month.
I am sure that traffic on my website has not increased dramatically enough to account for the increase in S3 bandwidth usage.
Is there a way to find out how much a single file is contributing to the total bill, in terms of bandwidth or cost?
I am routing all my traffic through Cloudflare, so it should be protected against DDoS attacks.
I expect the bandwidth of my S3 bucket to go down, or at least to find a valid reason that explains why bandwidth almost doubled when there's no increase in daily traffic.
You need to enable server access logging on your content bucket. Once you do this, all bucket accesses will be written to log files stored in a (different) S3 bucket.
You can analyze these log files with a custom program (a rough sketch follows below) or AWS Athena, which lets you write SQL queries against structured data.
I would focus on the remote IP address of the requester, to understand what proportion of requests are served via Cloudflare versus people going directly to your bucket.
If you find that Cloudflare is constantly reloading content from the bucket, you'll need to give some thought to cache-control headers, either as metadata on the objects in S3 or in your Cloudflare configuration.
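As a rough starting point, here is a Python sketch that tallies requests per remote IP across the access-log objects. The bucket and prefix are placeholders, and the field indexing is naive: a production parser should handle the quoted and bracketed fields of the access-log format more carefully.
import boto3
from collections import Counter

s3 = boto3.client("s3")

LOG_BUCKET = "my-access-log-bucket"  # placeholder: bucket receiving the access logs
LOG_PREFIX = "logs/"                 # placeholder: target prefix configured for logging

requests_by_ip = Counter()
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=LOG_BUCKET, Prefix=LOG_PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=LOG_BUCKET, Key=obj["Key"])["Body"].read().decode("utf-8")
        for line in body.splitlines():
            fields = line.split(" ")
            # Naive split: 0 owner, 1 bucket, 2-3 bracketed timestamp, 4 remote IP.
            if len(fields) > 4:
                requests_by_ip[fields[4]] += 1

for ip, count in requests_by_ip.most_common(20):
    print(ip, count)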
From: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html
To enable CloudTrail data events logging for objects in an S3 bucket:
Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
In the Bucket name list, choose the name of the bucket that you want.
Choose Properties.
Choose Object-level logging.
Choose an existing CloudTrail trail in the drop-down menu. The trail you select must be in the same AWS Region as your bucket, so the drop-down list contains only trails that are in the same Region as the bucket or trails that were created for all Regions.
If you need to create a trail, choose the CloudTrail console link to go to the CloudTrail console. For information about how to create trails in the CloudTrail console, see Creating a Trail with the Console in the AWS CloudTrail User Guide.
Under Events, select Read to specify that you want CloudTrail to log Amazon S3 read APIs such as GetObject. Select Write to log Amazon S3 write APIs such as PutObject. Select both Read and Write to log both read and write object APIs. For a list of supported data events that CloudTrail logs for Amazon S3 objects, see Amazon S3 Object-Level Actions Tracked by CloudTrail Logging in the Amazon Simple Storage Service Developer Guide.
Choose Create to enable object-level logging for the bucket.
To disable object-level logging for the bucket, you must go to the CloudTrail console and remove the bucket name from the trail's Data events.
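The console steps above map to a single API call. Here is a boto3 sketch using PutEventSelectors; the trail and bucket names are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="my-trail",  # hypothetical trail, same Region as the bucket
    EventSelectors=[
        {
            "ReadWriteType": "All",           # log both read (GetObject) and write (PutObject) APIs
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # The trailing slash scopes logging to all objects in this bucket.
                    "Values": ["arn:aws:s3:::my-content-bucket/"],
                }
            ],
        }
    ],
)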

How to track AWS resources created by an IAM user and store a record in a database?

I have created some IAM users in my AWS account with permission to launch instances.
Now I want to track and store their instance launch activity, like the time and instance ID, in MySQL or any other database.
Is there any way to achieve this? Any suggestion will be appreciated.
All activities of an IAM user can be monitored using AWS CloudTrail, which logs all the events.
The CloudTrail log is stored in an S3 bucket. You can use the storage trigger option in AWS Lambda to watch for a particular log, in this case the log for new EC2 instance creation.
In the Lambda function you need to add the code that takes that log information and stores it in the MySQL database you have set up.
Refer to this post: https://docs.aws.amazon.com/lambda/latest/dg/with-cloudtrail.html
You can also try creating a CloudWatch rule for EC2 instance creation and have it trigger an AWS Lambda function that inserts the data into the DB (a sketch follows below).
Here is a sample of a CloudWatch-based scheduler; you have to set up a specific trigger as per your need though:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
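To make the second option concrete, here is a rough Lambda sketch, assuming a CloudWatch Events / EventBridge rule that matches RunInstances API calls via CloudTrail. The environment variable names, the instance_launches table, and the pymysql driver are my own choices, not prescribed by AWS.
import os
import pymysql  # assumption: MySQL driver bundled with the function; any client works

# Connection details supplied via environment variables (names are illustrative).
conn = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
)

def handler(event, context):
    # Invoked by a rule matching {"source": ["aws.ec2"],
    #   "detail-type": ["AWS API Call via CloudTrail"],
    #   "detail": {"eventName": ["RunInstances"]}}.
    detail = event["detail"]
    user_arn = detail.get("userIdentity", {}).get("arn")
    launched_at = detail["eventTime"]
    items = detail.get("responseElements", {}).get("instancesSet", {}).get("items", [])
    with conn.cursor() as cursor:
        for item in items:
            cursor.execute(
                "INSERT INTO instance_launches (instance_id, launched_by, launched_at) "
                "VALUES (%s, %s, %s)",
                (item["instanceId"], user_arn, launched_at),
            )
    conn.commit()
The connection is opened outside the handler so warm invocations can reuse it.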
You should use AWS CloudTrail:
CloudTrail is enabled on your AWS account when you create it. When activity occurs in your AWS account, that activity is recorded in a CloudTrail event. You can easily view recent events in the CloudTrail console by going to Event history. For an ongoing record of activity and events in your AWS account, create a trail.

AWS console: see what action has been done by which user

On the AWS console, is there any history of user actions? I would like to see which of our users last modified a property of an S3 bucket, for example.
For this you can do a few things.
Set up AWS CloudTrail to audit user actions on S3.
Enable logging for the S3 bucket and store the logs either in a bucket in the same account or in a different account (better if you need more security).
Enable versioning on the S3 buckets, so past versions remain and you can revert changes.
The best way to collect all user actions in AWS is CloudTrail. Using CloudTrail you can also create trails that include S3 object-level operation events (a sketch follows below).
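For the S3 example in the question, here is a quick boto3 sketch ("my-bucket" is a placeholder) that filters Event history by resource name to see recent management actions against the bucket and who made them. This covers roughly the last 90 days.
import boto3

cloudtrail = boto3.client("cloudtrail")

response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "ResourceName", "AttributeValue": "my-bucket"}],
    MaxResults=20,
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))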