Creating SIEM Dashboard for AWS logs using ELK Stack - amazon-web-services

We are collecting AWS logs in an ELK stack SIEM (Open Distro for Elasticsearch) and using Kibana for visualization. Can someone please advise what types of logs or security events require continuous monitoring and immediate alert notification?
What are the important things we need to keep on the main dashboard (e.g. how many users logged in, which account is used the most)?
What types of events require alerts (e.g. 10 wrong password attempts, an S3 bucket write after office hours)?
How do we identify when an AWS account has been hacked or an attacker has stolen data?
Thanks

In Open Distro (nowadays OpenSearch) this needs to be done on your own in the Alerting section.
The easiest way to solve this is to use the free version of the original Elasticsearch, which provides a detection engine within the Security app in Kibana.
This detection engine comes with a number of AWS-specific rules that check, for example, for hacked accounts.
In version 8 you find this under Elastic Security -> Alerts -> (Manage) Rules -> Import Elastic Prebuilt rules.
You can access this version of Elasticsearch via the AWS Marketplace.
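If you stay on Open Distro / OpenSearch, a monitor in the Alerting plugin is the building block for the kinds of events mentioned above (e.g. repeated wrong-password attempts). Below is a minimal sketch of such a monitor created over the REST API; the index pattern cloudtrail-*, the field names eventName/errorMessage, the credentials and the threshold are assumptions that need to match your own CloudTrail ingestion pipeline.

```python
# Minimal sketch: create an Open Distro / OpenSearch Alerting monitor that fires
# when CloudTrail records more than 10 failed console logins in 10 minutes.
# Assumptions: CloudTrail events are indexed into "cloudtrail-*" with fields
# "eventName" and "errorMessage", and the cluster is reachable at ES_URL.
import requests

ES_URL = "https://localhost:9200"   # adjust to your cluster endpoint
AUTH = ("admin", "admin")           # adjust credentials

monitor = {
    "type": "monitor",
    "name": "failed-console-logins",
    "enabled": True,
    "schedule": {"period": {"interval": 10, "unit": "MINUTES"}},
    "inputs": [{
        "search": {
            "indices": ["cloudtrail-*"],
            "query": {
                "size": 0,
                "query": {
                    "bool": {
                        "filter": [
                            # Field names/mappings are assumptions; adjust to your index template.
                            {"term": {"eventName": "ConsoleLogin"}},
                            {"match": {"errorMessage": "Failed authentication"}},
                            {"range": {"@timestamp": {"gte": "now-10m"}}}
                        ]
                    }
                }
            }
        }
    }],
    "triggers": [{
        "name": "too-many-failed-logins",
        "severity": "1",
        "condition": {
            "script": {
                "source": "ctx.results[0].hits.total.value > 10",
                "lang": "painless"
            }
        },
        "actions": []   # attach a destination (e.g. SNS, Slack) here
    }]
}

# Open Distro uses the _opendistro prefix; newer OpenSearch releases use _plugins.
resp = requests.post(
    f"{ES_URL}/_opendistro/_alerting/monitors",
    json=monitor,
    auth=AUTH,
    verify=False,   # self-signed demo certs; use proper TLS in production
)
print(resp.status_code, resp.json())
```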

Related

Alert: Behavior:EC2/NetworkPortUnusual use port:80 to AWS S3 Webpage

The other day, I received the following alert in GuardDuty.
Behavior:EC2/NetworkPortUnusual
port:80
Target:3.5.154.156
The EC2 that was the target of the alert was not being used for anything in particular. (However, it had been started up.)
There was no communication using port 80 until now.
Also, the IP address of the Target appears to belong to AWS S3.
The only recent change is that I recently deleted the EC2 InstanceProfile, so there is currently no InstanceProfile attached to anything.
Do you know why this EC2 suddenly tried to use port 80 to communicate with the S3 page?
I looked at CloudTrail, etc., and found nothing suspicious.
(If there are any other items I should check, please let me know.)
Thank you.
We have experienced similar alerts, and after tedious debugging we found that the SSM Agent is responsible for this kind of GuardDuty finding.
SSM Agent communications with AWS managed S3 buckets
"In the course of performing various Systems Manager operations, AWS Systems Manager Agent (SSM Agent) accesses a number of Amazon Simple Storage Service (Amazon S3) buckets. These S3 buckets are publicly accessible, and by default, SSM Agent connects to them using HTTP calls."
I suggest reviewing the CloudTrail logs and looking for the "UpdateInstanceInformation" event (this is how we eventually found it).
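For reference, a minimal sketch of that CloudTrail lookup with boto3 (the region and time window are placeholders to adjust):

```python
# Look up recent "UpdateInstanceInformation" events in CloudTrail; this is how
# SSM Agent activity typically shows up in the trail.
import boto3
from datetime import datetime, timedelta, timezone

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")  # adjust region

now = datetime.now(timezone.utc)
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "UpdateInstanceInformation"}
    ],
    StartTime=now - timedelta(days=1),
    EndTime=now,
)

for e in events["Events"]:
    # The affected instance shows up in the event's resources, confirming SSM Agent calls.
    print(e["EventTime"], e.get("Username"),
          [r.get("ResourceName") for r in e.get("Resources", [])])
```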

I can't find and disable AWS resources

My free AWS tier is going to expire in 8 days. I removed every EC2 resource and Elastic IP associated with it, because that is what I recall initializing and experimenting with. I deleted all the roles I created because, as I understand it, roles permit AWS to perform actions for AWS services. And yet, when I go to the billing page, it shows I have these three services in current usage.
Screenshot: https://i.stack.imgur.com/RvKZc.png
I used the script as recommended by AWS documentation to check for all instances and it shows "no resources found".
Link for script: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-listec2resources.html
I tried searching for each service using the dashboard and didn't get anywhere. I found an S3 bucket that I don't remember creating, but I deleted it anyway, and I still get the same output.
Any help is much appreciated.
OK, I was able to get in touch with AWS Support via live chat, and they informed me that the services shown in my billing were usage generated before the services were terminated. AWS Support was much faster than I expected.
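For anyone in the same situation who wants to double-check programmatically before contacting support, here is a minimal sketch (assuming boto3 and configured credentials) that scans every region for non-terminated instances and remaining Elastic IPs:

```python
# Scan all regions for EC2 instances that are not terminated and for Elastic IPs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    client = boto3.client("ec2", region_name=region)
    instances = [
        i["InstanceId"]
        for res in client.describe_instances()["Reservations"]
        for i in res["Instances"]
        if i["State"]["Name"] != "terminated"
    ]
    addresses = [a.get("PublicIp") for a in client.describe_addresses()["Addresses"]]
    if instances or addresses:
        print(region, "instances:", instances, "elastic IPs:", addresses)
```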

AWS - Log aggregation and visualization

We have a couple of applications running on AWS. Currently we are redirecting all our logs to a single bucket. However, for ease of access for users, I am thinking of installing the ELK stack on an EC2 instance.
I would like to check whether there is an alternative where I don't have to maintain this stack myself.
Scaling won't be an issue, as this is only for logs generated by the applications running on AWS, so no ingestion or processing is required; they are mostly log4j logs.
You can go for either the managed Elasticsearch service available in AWS or set up your own on an EC2 instance.
It usually comes down to the price involved and the amount of time you have for setting up and maintaining your own installation.
With your own setup you can do far more configuration than the managed service allows, and it also helps in reducing the cost.
You can find more info on this blog.
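If you go with the managed service, the ingestion side can stay very small. Below is a minimal sketch of pushing the log4j lines that already land in the S3 bucket into a managed Elasticsearch/OpenSearch domain via the _bulk API; the bucket name, domain endpoint and index name are placeholders, and the domain is assumed to accept the request (access policy or request signing is not shown).

```python
# Read log4j log objects from the existing S3 bucket and bulk-index each line
# into a managed Elasticsearch/OpenSearch domain. All names are placeholders.
import json
import boto3
import requests

BUCKET = "my-app-log-bucket"                                   # placeholder
ES_ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"   # placeholder
INDEX = "app-logs"

s3 = boto3.client("s3")

def index_object(key):
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read().decode("utf-8")
    # Build a newline-delimited _bulk payload: one action line + one document per log line.
    bulk = ""
    for line in body.splitlines():
        if not line.strip():
            continue
        bulk += json.dumps({"index": {"_index": INDEX}}) + "\n"
        bulk += json.dumps({"message": line, "s3_key": key}) + "\n"
    resp = requests.post(
        f"{ES_ENDPOINT}/_bulk",
        data=bulk,
        headers={"Content-Type": "application/x-ndjson"},
    )
    resp.raise_for_status()

for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    index_object(obj["Key"])
```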

AWS limits monitoring with Nagios

I tried searching for this topic on Google and after many failed attempts I decided to post this as questions here.
What I want to achieve: Monitoring my aws limits using Nagios.
As I understand it, the AWS CLI can be used to get the limits of only a few AWS services; for more in-depth cost management and service limit management one has to opt for Trusted Advisor. Unfortunately, it's quite expensive.
So I was wondering if there's a much simpler way with Nagios in which I could get notified if any of the AWS services for my account is hitting a limit?
What kind of service limit notification strategy is used by organizations (that can't afford a Trusted Advisor subscription) that use AWS?
You're right: only a few services expose their limit (and current usage) through the CLI or API. I don't like it either :) We have three options here:
Create a parser that grabs the information from the AWS Console (there is example code here: https://forrestbrazeal.com/2015/07/20/adventures-in-aws-automating-service-limit-checks/).
Buy Trusted Advisor (by the way, you can get a Trusted Advisor report with an API call).
Try using awslimitchecker, because someone has already tried to solve this problem (a minimal Nagios-style usage sketch follows below).
https://awslimitchecker.readthedocs.io/en/latest/
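As a sketch of how that could be wired into Nagios, the plugin below uses awslimitchecker's Python API (AwsLimitChecker.check_thresholds(), per the project's documentation) and maps crossed thresholds to the usual Nagios exit codes; treat it as a starting point rather than a finished check.

```python
#!/usr/bin/env python
# Minimal Nagios-style check built on awslimitchecker's Python API.
# Exit codes follow the Nagios convention: 0 = OK, 1 = WARNING, 2 = CRITICAL.
import sys
from awslimitchecker.checker import AwsLimitChecker

def main():
    checker = AwsLimitChecker()
    # check_thresholds() returns only the limits whose warning/critical
    # thresholds were crossed, keyed by service name and limit name.
    crossed = checker.check_thresholds()

    warnings, criticals = [], []
    for service, limits in crossed.items():
        for limit_name, limit in limits.items():
            if limit.get_criticals():
                criticals.append(f"{service}/{limit_name}")
            elif limit.get_warnings():
                warnings.append(f"{service}/{limit_name}")

    if criticals:
        print("CRITICAL: limits near/at maximum: " + ", ".join(criticals))
        sys.exit(2)
    if warnings:
        print("WARNING: limits approaching maximum: " + ", ".join(warnings))
        sys.exit(1)
    print("OK: no AWS service limits near their thresholds")
    sys.exit(0)

if __name__ == "__main__":
    main()
```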

Easier way to access ElasticBeanstalk EC2 Log files

I am running a Jersey service on Tomcat via Elastic Beanstalk (EBS) with a load balancer. I am finding it very cumbersome to get the EC2 instances' catalina log files from S3. Currently I need to determine the EC2 instance(s), then work my way to each of the S3 locations and download the files before I can diagnose anything.
The log snapshot doesn't help: due to the volume of requests coming in, it doesn't hold enough information, and by the time I get the snapshot, the entries I need have "rolled" off it.
Two questions:
1) Is there an easier approach to log files via AWS (increasing the time before rotation, which I don't believe is supported as of now; scripts; etc.)?
2) Is there any software or script to access all the logs under the load balancer? I basically want to say "give me all logs for this EBS environment" and have it fetch all logs for that day from all servers behind that load balancer (up or down). The clincher is "down": the problem becomes more complex when the load balancer takes down an instance right when the issue occurs.
Thanks!
As an immediate solution to your problem, you can follow the approach suggested in this answer: modify the logrotate configuration via ebextensions to rotate at a bigger log size.
Then snapshot logs should work for you.
Let me know if you need more clarification on this approach.
AWS released CloudWatch Logs just last week, which enables you to monitor and troubleshoot your systems and applications using your existing system, application, and custom log files:
You can send your existing system, application, and custom log files to CloudWatch Logs and monitor these logs in near real-time. [...] you can store your logs using highly durable, low-cost storage for later access.
See the introductory blog post Store and Monitor OS & Application Log Files with Amazon CloudWatch for an illustrated walkthrough, which already touches on using Elastic Beanstalk with CloudWatch Logs; this is further detailed in Using AWS Elastic Beanstalk with Amazon CloudWatch Logs.
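Once the instances stream their logs to CloudWatch Logs, the "give me all logs for this environment for the day" request becomes a single API call, even for instances the load balancer has since terminated. Below is a minimal sketch with boto3; the log group name and region are placeholders that depend on how your Beanstalk environment names its log groups.

```python
# Pull a day's worth of log events for one Elastic Beanstalk log group from
# CloudWatch Logs. The log group name below is a placeholder; Beanstalk creates
# groups per environment and log file once log streaming is enabled.
import boto3
from datetime import datetime, timedelta, timezone

LOG_GROUP = "/aws/elasticbeanstalk/my-env/var/log/tomcat/catalina.out"  # placeholder

logs = boto3.client("logs", region_name="us-east-1")  # adjust region
start = int((datetime.now(timezone.utc) - timedelta(days=1)).timestamp() * 1000)

paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(logGroupName=LOG_GROUP, startTime=start):
    for event in page["events"]:
        # Events from every instance's stream come back interleaved, including
        # streams from instances that have since been taken out of service.
        print(event["logStreamName"], event["message"].rstrip())
```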