AWS Config query to track changes

I am implementing AWS Config and trying to figure out how to run a query that will tell us if there are any changes to security groups or firewalls. I've set up an SNS topic and played with some existing rules such as ec2-security-group-attached-to-eni, but I didn't find any preexisting rules that alert the team when a security group changes. I did not find much online and would appreciate any guidance.

I know this is old so I assume you found a solution?
https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls.html#cis-3.10-remediation
Security Hub has a solution for this (CIS control 3.10), and you do not actually need to enable Security Hub to set up the alarm. It is built from CloudTrail, a CloudWatch metric filter and alarm, and then SNS.
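For completeness, here is a hedged boto3 sketch of that chain, assuming CloudTrail already delivers to a CloudWatch Logs group and an SNS topic already exists (the log group name, filter/alarm names, and topic ARN below are placeholders). The filter pattern is the one the CIS benchmark prescribes for security group changes:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DefaultLogGroup"  # placeholder CloudTrail log group
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"  # placeholder

# CIS 3.10 pattern: match every API call that changes a security group.
PATTERN = (
    "{ ($.eventName = AuthorizeSecurityGroupIngress) || "
    "($.eventName = AuthorizeSecurityGroupEgress) || "
    "($.eventName = RevokeSecurityGroupIngress) || "
    "($.eventName = RevokeSecurityGroupEgress) || "
    "($.eventName = CreateSecurityGroup) || "
    "($.eventName = DeleteSecurityGroup) }"
)

# Turn matching CloudTrail events into a custom metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="SecurityGroupChanges",
    filterPattern=PATTERN,
    metricTransformations=[{
        "metricName": "SecurityGroupEventCount",
        "metricNamespace": "CISBenchmark",
        "metricValue": "1",
    }],
)

# Alarm on any occurrence within a 5-minute window and notify via SNS.
cloudwatch.put_metric_alarm(
    AlarmName="SecurityGroupChanges",
    MetricName="SecurityGroupEventCount",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[TOPIC_ARN],
)
```

Anyone subscribed to the topic (email, chat webhook, etc.) then gets notified on every security group change.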

Related

How to list the AWS services I'm paying for

I'm stuck with AWS billing. I can't tell which services are turned on, or why the number of billed services keeps increasing over time while I keep getting charged. Please help me.
I have deleted the S3 buckets I created and removed two EFS file systems from the AWS console, but I am still being charged.
You can use aws-nuke to remove resources in your account. It is simple to use and supports many services.
Have you checked the AWS Cost Explorer dashboard? Look there to see which services are being charged today, and delete the ones you are not using.
If you don't want any AWS services at all, you can close your AWS account. :P
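If you prefer to script the check instead of clicking through Cost Explorer, here is a minimal boto3 sketch that lists this month's cost per service (it assumes your credentials are configured and Cost Explorer is enabled for the account):

```python
import boto3
from datetime import date

# Cost Explorer is a global service; its API endpoint lives in us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

today = date.today()
start = today.replace(day=1).isoformat()  # first day of the current month
end = today.isoformat()                   # End is exclusive and must be after Start

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print every service that accrued any cost this month.
for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:.2f}")
```

Anything that shows up in this list but that you believe you deleted is worth investigating, since some resources (snapshots, elastic IPs, etc.) outlive the consoles they were created from.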

Creating a SIEM dashboard for AWS logs using the ELK Stack

We are collecting AWS logs in an ELK-stack SIEM (Open Distro for Elasticsearch) and use Kibana for visualization. Can someone please advise what types of logs or security events require continuous monitoring and immediate alert notification?
What are the important things to keep on the main dashboard (e.g. how many users logged in, which account is used most)?
What types of events require alerts (e.g. ten failed password attempts, S3 bucket writes after office hours)?
How do we identify when an AWS account is compromised or an attacker has stolen data?
Thanks
In Open Distro (these days OpenSearch) this needs to be done on your own in the Alerting section.
The easiest way to solve this is to use the free version of the original Elasticsearch, which provides a detection engine within the Security app in Kibana.
This detection engine comes with a number of AWS-specific rules that check, for example, for compromised accounts.
In version 8 you find this under Elastic Security -> Alerts -> (Manage) Rules -> Import Elastic Prebuilt rules.
You can access this version of Elasticsearch via the AWS Marketplace.
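If you stay on Open Distro/OpenSearch and build the alerts yourself, the query behind a typical rule is straightforward. A hedged sketch of the "repeated failed console logins" case (the index name cloudtrail-*, the credentials, and the field names are all assumptions; the exact mappings depend on how your ingest pipeline indexes CloudTrail events):

```python
import requests

# Assumed endpoint and index; adjust to your cluster and ingest pipeline.
ES_URL = "https://localhost:9200"
INDEX = "cloudtrail-*"

# Count failed console logins in the last 15 minutes -- the kind of
# condition you would wire into an alerting monitor.
query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"eventName": "ConsoleLogin"}},
                {"term": {"responseElements.ConsoleLogin": "Failure"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
}

resp = requests.post(
    f"{ES_URL}/{INDEX}/_search",
    json=query,
    auth=("admin", "admin"),  # placeholder credentials
    verify=False,             # demo only; use proper TLS verification in production
)
failures = resp.json()["hits"]["total"]["value"]
if failures >= 10:
    print(f"ALERT: {failures} failed console logins in the last 15 minutes")
```

The same query body can be pasted into an alerting monitor so the cluster evaluates it on a schedule instead of you polling it.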

Send AWS EC2 metrics to AWS Elasticsearch Service Domain for monitoring in Kibana

I am stuck on one point. I have created an EC2 Linux instance in AWS.
Now I want to send the EC2 metrics to a managed Elasticsearch domain for monitoring in Kibana. I went through the CloudWatch console and can see the instance's metrics, but I don't understand how to connect them to the Elasticsearch domain I created.
Can anyone please help me with this situation?
There is no built-in mechanism for extracting or streaming metric data points in real time. You have to develop a custom solution for that, for example a Lambda function that is invoked every minute and reads data points using get_metric_data. The Lambda would then inject the points into your ES domain.
To invoke a Lambda function periodically, e.g. every minute, you would set up a CloudWatch Events rule with a schedule expression. The Lambda function also needs permissions granted to interact with CloudWatch metrics.
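A minimal sketch of such a Lambda handler, under a few stated assumptions: the requests library is bundled with the function, ES_ENDPOINT and INSTANCE_ID are environment variable names I made up, and the domain's access policy lets the Lambda write without request signing (real domains typically require SigV4 signing or a suitable access policy):

```python
import os
from datetime import datetime, timedelta

import boto3
import requests  # must be bundled in the deployment package or a layer

cloudwatch = boto3.client("cloudwatch")

ES_ENDPOINT = os.environ["ES_ENDPOINT"]  # e.g. https://my-domain.es.amazonaws.com
INSTANCE_ID = os.environ["INSTANCE_ID"]  # the EC2 instance to monitor


def handler(event, context):
    now = datetime.utcnow()
    # Pull the last 5 minutes of CPU utilization at 1-minute resolution.
    resp = cloudwatch.get_metric_data(
        MetricDataQueries=[{
            "Id": "cpu",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "CPUUtilization",
                    "Dimensions": [{"Name": "InstanceId", "Value": INSTANCE_ID}],
                },
                "Period": 60,
                "Stat": "Average",
            },
        }],
        StartTime=now - timedelta(minutes=5),
        EndTime=now,
    )

    # Index one document per data point.
    result = resp["MetricDataResults"][0]
    for ts, value in zip(result["Timestamps"], result["Values"]):
        doc = {
            "@timestamp": ts.isoformat(),
            "instance_id": INSTANCE_ID,
            "metric": "CPUUtilization",
            "value": value,
        }
        requests.post(f"{ES_ENDPOINT}/ec2-metrics/_doc", json=doc, timeout=10)
```

The execution role needs cloudwatch:GetMetricData, and a scheduled rule (rate(1 minute)) drives the invocations.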
Welcome to SO :)
An alternative to the solution suggested by Marcin is to install Metricbeat on the EC2 instance and configure its config file to ship metrics to your managed AWS ES domain.
This is pretty simple and you should be able to set it up fairly quickly.

How to setup email notifications for AWS operational issues

Yesterday our infrastructure started throwing lots of connection errors. We started debugging, and the more we looked, the more perplexing the issue appeared, until someone noticed that the bell icon (Alerts) in the AWS console had an orange dot on it.
Behold! There were lots of AWS operational issues in our region that AWS was fixing.
To avoid this situation in the future, I wanted to subscribe to these 'Alerts' so we get an email notification.
Does anyone know how to set up an email alert for AWS operational issues in the specified region?
Much to my astonishment, there was no obvious way to set this up.
The easiest way is to subscribe to the RSS feed of the AWS Service Health Dashboard.
If you want something customized, check out the AWS Personal Health Dashboard. It shows your AWS services and whether they are experiencing issues.
This AWS documentation provides a really comprehensive guide on how to set up alerts. Check out the aws-health-tools GitHub repository for fully functional examples.
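The pattern in aws-health-tools essentially boils down to an EventBridge (CloudWatch Events) rule that matches AWS Health events and targets an SNS topic your team subscribes to by email. A minimal boto3 sketch of that idea (the topic ARN and rule name are placeholders, and the topic's access policy must allow events.amazonaws.com to publish):

```python
import json
import boto3

events = boto3.client("events")

# Placeholder: an existing SNS topic with email subscriptions.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:aws-health-alerts"

# Match every AWS Health event delivered in this region.
events.put_rule(
    Name="aws-health-to-email",
    EventPattern=json.dumps({"source": ["aws.health"]}),
    State="ENABLED",
)

events.put_targets(
    Rule="aws-health-to-email",
    Targets=[{"Id": "sns", "Arn": TOPIC_ARN}],
)
```

Health events are emitted per region, so repeat the rule in every region you run workloads in.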

AWS Glue ETL job from AWS Redshift to S3 fails

I am trying out the AWS Glue service to ETL some data from Redshift to S3. The crawler runs successfully and creates the metadata table in the Data Catalog; however, when I run the ETL job (generated by AWS) it fails after around 20 minutes saying "Resource unavailable".
I cannot see any AWS Glue logs or error logs in CloudWatch. When I try to view them it says "Log stream not found. The log stream jr_xxxxxxxxxx could not be found. Check if it was correctly created and retry."
I would appreciate it if you could provide any guidance to resolve this issue.
So basically, the job you add to Glue will only run if there is not too much traffic in the region your Glue is in. If there are no resources available, you need to either manually re-run the job or subscribe to Glue job events through CloudWatch and SNS so you are notified.
There are also parameters you can pass to the job, such as MaxRetries and Timeout.
If you get "Resource unavailable", it won't trigger a retry, because the job did not fail; it never even started. But if you set the timeout to, say, 60 minutes, it will trigger an error after that time, decrement your retry pool, and re-launch the job.
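For reference, both knobs can be set when the job is defined. A hedged boto3 sketch (the job name, IAM role, and script location are placeholders):

```python
import boto3

glue = boto3.client("glue")

# Placeholders: substitute your own role ARN and script location.
glue.create_job(
    Name="redshift-to-s3",
    Role="arn:aws:iam::123456789012:role/MyGlueServiceRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/redshift_to_s3.py",
        "PythonVersion": "3",
    },
    MaxRetries=1,  # re-run once if the job itself fails
    Timeout=60,    # minutes; force an error (and a retry) after this long
)
```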
The closest thing I see to Glue documentation on this is here:
"If you encounter errors in AWS Glue, use the following solutions to help you find the source of the problems and fix them. Note: the AWS Glue GitHub repository contains additional troubleshooting guidance in AWS Glue Frequently Asked Questions.
Error: Resource Unavailable. If AWS Glue returns a resource unavailable message, you can view error messages or logs to help you learn more about the issue. The following tasks describe general methods for troubleshooting.
• A custom DNS configuration without reverse lookup can cause AWS Glue to fail. Check your DNS configuration. If you are using Amazon Route 53 or Microsoft Active Directory, make sure that there are forward and reverse lookups. For more information, see Setting Up DNS in Your VPC.
• For any connections and development endpoints that you use, check that your cluster has not run out of elastic network interfaces."
I have recently struggled with "Resource unavailable" thrown by a Glue job.
I was also not able to make a direct connection in Glue using RDS; it said "no suitable security group found".
I faced this issue while trying to connect with AWS RDS and Redshift.
The problem was with the security group that Redshift was using: it needs a self-referencing inbound rule.
For those who don't know what a self-referencing inbound rule is, follow these steps (a scripted version follows below):
1) Go to the security group you are using (VPC -> Security Groups).
2) In the Inbound Rules tab, select Edit Inbound Rules.
3) Add a rule:
   a) Type: All Traffic
   b) Protocol: All
   c) Port Range: All
   d) Source: Custom; in the field, type the beginning of your security group's ID and select the group itself.
   e) Save it.
That's it!
If this rule was missing from your security group's inbound rules, try creating the connection again; it should succeed, and the job should work this time.
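The same rule can be added from code. A minimal boto3 sketch (the security group ID is a placeholder; IpProtocol="-1" means all traffic on all ports):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder: the security group attached to your Redshift cluster.
SG_ID = "sg-0123456789abcdef0"

# Self-referencing inbound rule: allow all traffic from members of the
# same security group, which Glue connections require.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "-1",  # all protocols, all ports
        "UserIdGroupPairs": [{"GroupId": SG_ID}],
    }],
)
```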