Send a Notification after a new Database creation on AWS - amazon-web-services

I want to get a notification when a new database is created on AWS Aurora by another application, but all the available notifications are at the cluster or instance level.
Any help would be appreciated.

Currently there is no built-in metric like "number of databases". That also makes sense from a security standpoint: for AWS to count your databases, it would need your database credentials.
What I would suggest (without incurring large costs) is to write a simple AWS Lambda function that queries your database every x minutes and writes the number of databases to a CloudWatch custom metric. As soon as that number changes, an alarm on the metric can trigger an SNS notification. A sketch follows the setup steps below.
Possible setup:
Create an AWS Lambda function
Provide the credentials for your Aurora cluster as environment variables
During initialization, the Lambda connects to the database
Query the number of databases
Run put_metric to store this number as a CloudWatch metric
Attach an SNS topic to a CloudWatch alarm on that metric, which sends a notification on every change
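A minimal sketch of such a Lambda, assuming an Aurora MySQL cluster, the pymysql client packaged with the function, and hypothetical environment variable and namespace names (adjust all of these to your setup):

```python
import os
import boto3
import pymysql  # bundle with the deployment package or a Lambda layer

# Connection is created outside the handler so it is reused across warm invocations.
# DB_HOST, DB_USER, DB_PASSWORD are assumed environment variable names.
conn = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    connect_timeout=5,
)

cloudwatch = boto3.client("cloudwatch")

def lambda_handler(event, context):
    with conn.cursor() as cur:
        # Count schemas, excluding the built-in system schemas.
        cur.execute(
            "SELECT COUNT(*) FROM information_schema.schemata "
            "WHERE schema_name NOT IN ('mysql', 'information_schema', "
            "'performance_schema', 'sys')"
        )
        db_count = cur.fetchone()[0]

    # Publish the count as a custom metric; a CloudWatch alarm on this metric
    # can then notify an SNS topic whenever the value changes.
    cloudwatch.put_metric_data(
        Namespace="Custom/Aurora",  # assumed namespace
        MetricData=[{
            "MetricName": "DatabaseCount",
            "Value": db_count,
            "Unit": "Count",
        }],
    )
    return {"database_count": db_count}
```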

Related

How to stop services from running in AWS after the budget is reached?

I was wondering: is it possible to set up a budget for a user? E.g., if I'm part of an organisation, can I monitor and be notified about only the resources I created?
My understanding is that if I set up a budget, I'll only be notified when the budget is reached, but it will not stop resources from running further and generating costs. Is this correct, and can it be changed?
AWS does not keep track of "only resources I created". Resources are associated with an AWS Account, not an AWS User. You would need to tag all relevant resources with the user who created the resource to be able to identify such 'owners' of resources.
You can create an Alarm based on a budget, and the Alarm could trigger an AWS Lambda function. You could then write code for the Lambda function that turns off / deletes resources based upon their tags.
Please note that some services can be stopped to save money and later restarted (eg Amazon EC2 instances, Amazon RDS databases), while some resources can only be deleted to stop the charges (eg NAT Gateway, storage in Amazon S3).
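A rough sketch of such a Lambda, assuming the relevant resources are EC2 instances and a hypothetical `owner` tag identifies them (the tag key/value and scope are illustrative only):

```python
import boto3

ec2 = boto3.client("ec2")

# Assumed tag key/value identifying the resources to shut down; adjust to your tagging scheme.
TAG_KEY = "owner"
TAG_VALUE = "my-user"

def lambda_handler(event, context):
    # Find running instances tagged with the given owner.
    response = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        # Stop (not terminate) so the instances can be restarted later.
        ec2.stop_instances(InstanceIds=instance_ids)

    return {"stopped": instance_ids}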

I need to create alerts based on the results returned by queries in Amazon Athena

I need to create alerts based on the results returned by queries in Amazon Athena. I don't see how I can do that now.
For example -
Schedule a query to be executed once an hour (I am not aware of a way to do this now)
Based on the results of the query (for example I would be checking the number of transactions the last hour), I might need to send an alert to someone that something may be wrong (number of transactions is too low).
I know this is different but I would do something similar, in SQL Server, using a SQL Server Agent job.
There is no in-built capability to run Amazon Athena queries on a schedule and send notifications. However, you could configure this using AWS services.
I would recommend:
Create an Amazon SNS topic that will receive notifications
Subscribe recipients to the SNS topic (eg via email, SMS)
Create an Amazon CloudWatch Event that triggers on a cron schedule
Configure the Event to trigger an AWS Lambda function
Write code for the AWS Lambda function to:
Run an Amazon Athena query
Compare the result to desired values
If the result is outside desired values, send a message to the Amazon SNS Topic
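A hedged sketch of what that Lambda function could look like; the database, S3 output location, query, threshold, and topic ARN are placeholders:

```python
import time
import boto3

athena = boto3.client("athena")
sns = boto3.client("sns")

# Placeholder values -- replace with your own database, S3 output location,
# SNS topic ARN and threshold.
DATABASE = "my_database"
OUTPUT_LOCATION = "s3://my-athena-results/"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:transaction-alerts"
MIN_TRANSACTIONS = 100

QUERY = """
    SELECT COUNT(*) FROM transactions
    WHERE created_at > now() - interval '1' hour
"""

def lambda_handler(event, context):
    # Start the Athena query and poll until it finishes.
    execution = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = execution["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    # The first row of the result set is the header; the value is in the second row.
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    count = int(rows[1]["Data"][0]["VarCharValue"])

    # Alert if the result is outside the desired range.
    if count < MIN_TRANSACTIONS:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Low transaction count",
            Message=f"Only {count} transactions in the last hour.",
        )
    return {"transaction_count": count}
```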

How to set up a CloudWatch SQL monitor?

I have a view on a PostgreSQL RDS instance that lists any ongoing deadlocks. Ideally, there are no deadlocks in the database, causing the view to show nothing, but on rare occasions, there are.
How would I set up an alarm in CloudWatch to query this view and raise an alarm if any records are returned?
I found a useful script on GitHub specifically for this:
A Serverless MySQL RDS Data Collection script to push Custom Metrics to CloudWatch on AWS
Basically, there are two main ways to publish custom metrics to CloudWatch:
Via the API
You can run it on a schedule on an EC2 instance (AWS example) or as a Lambda function (a good guide with code examples)
With the CloudWatch agent
Here is a good example: Monitor your Microsoft SQL Server using custom metrics with Amazon CloudWatch and AWS Systems Manager.
Finally, set up CloudWatch alarms with metric math and appropriate thresholds.
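For the alarm itself, a sketch using boto3; the metric namespace, metric name and SNS topic ARN are assumed placeholders that must match whatever custom metric your script publishes:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder names -- they must match the custom metric your script publishes.
cloudwatch.put_metric_alarm(
    AlarmName="rds-deadlocks-detected",
    Namespace="Custom/RDS",
    MetricName="DeadlockCount",
    Statistic="Maximum",
    Period=300,                       # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",  # no data means no deadlocks were reported
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:deadlock-alerts"],
)
```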
It is not possible to configure Amazon CloudWatch to look inside an Amazon RDS database.
You will need some code running somewhere that regularly runs a query on the database and sends a custom metric to Amazon CloudWatch.
For example, you could trigger an AWS Lambda function, or use cron on an Amazon EC2 instance to trigger a script.
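A minimal sketch of such a function, assuming psycopg2 is packaged with the Lambda and the deadlock view is called ongoing_deadlocks (both are assumptions):

```python
import os
import boto3
import psycopg2  # provide via a Lambda layer or the deployment package

cloudwatch = boto3.client("cloudwatch")

def lambda_handler(event, context):
    # DB_HOST, DB_NAME, DB_USER, DB_PASSWORD are assumed environment variables.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        connect_timeout=5,
    )
    with conn, conn.cursor() as cur:
        # "ongoing_deadlocks" is a placeholder for your deadlock view.
        cur.execute("SELECT COUNT(*) FROM ongoing_deadlocks")
        deadlocks = cur.fetchone()[0]
    conn.close()

    # Publish the count; an alarm on this metric (like the one sketched above)
    # fires when the value is greater than zero.
    cloudwatch.put_metric_data(
        Namespace="Custom/RDS",
        MetricData=[{
            "MetricName": "DeadlockCount",
            "Value": deadlocks,
            "Unit": "Count",
        }],
    )
    return {"deadlocks": deadlocks}
```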

How to handle AWS IoT streaming data in a relational database

General information: I am designing a solution for an IoT problem in which data continuously streams from a PLC (programmable logic controller). The PLC has different tags that represent telemetry data, and data streams continuously from these tags. Each device also has alarm tags whose value is 0 or 1, where 1 means there is an equipment failure.
Problem statement: I have to read the alarm tags and raise a ticket if any alarm tag value is 1. I also have to stream these alerts to a dashboard and maintain the ticket history, so that an operator can update the ticket status.
My solution: I am using AWS IoT and landing the data in DynamoDB. I then use a DynamoDB stream to check whether a new item was added to the alarm table; if so, it triggers a Lambda function (implemented in Java) that opens a new ticket in a relational database using Hibernate.
Problem with my approach: The AWS IoT data streams into the alarm table at a very fast rate, which opens a lot of connections before they can be closed, and that is taking my relational database down.
Please let me know if there is a better design approach I could adopt.
Use Amazon Kinesis Analytics to process streaming data. DynamoDB isn't suitable for this.
Read more here.
The image below illustrates the same idea.
Just a proposal:
From the Lambda, do not contact RDS directly.
Instead, push all alarms into AWS SQS.
Then have another Lambda, scheduled every minute using CloudWatch Rules, pick up all items from SQS and insert them into RDS in one go.
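A rough sketch of that scheduled consumer; the queue URL environment variable, table and column names are made up for illustration, and the database client (pymysql here) depends on your RDS engine:

```python
import os
import boto3
import pymysql  # or your RDS engine's client

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["ALARM_QUEUE_URL"]  # assumed environment variable

def lambda_handler(event, context):
    messages = []
    # Drain up to a few batches per run; receive_message returns at most 10 messages.
    for _ in range(10):
        response = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10)
        batch = response.get("Messages", [])
        if not batch:
            break
        messages.extend(batch)

    if not messages:
        return {"inserted": 0}

    # One connection, one bulk insert, then close -- instead of one connection per alarm.
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO tickets (payload) VALUES (%s)",  # placeholder table/column
            [(m["Body"],) for m in messages],
        )
    conn.commit()
    conn.close()

    # Delete the processed messages in batches of 10.
    for i in range(0, len(messages), 10):
        sqs.delete_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[
                {"Id": str(j), "ReceiptHandle": m["ReceiptHandle"]}
                for j, m in enumerate(messages[i:i + 10])
            ],
        )
    return {"inserted": len(messages)}
```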
I agree with raevilman's design of not letting Lambda contact RDS directly.
Creating a new ticket is not the only task your Lambda function is doing; you are also streaming these alerts to a dashboard. Depending on the streaming rate and the RDS limitations, you may want to split these tasks into multiple queues.
Generic solution: push each alarm to a fanout exchange, which in turn pushes the alarm to one or more queues as required. You can then batch the alarms and perform multiple writes together without going through the connect/disconnect cycle repeatedly.
AWS-specific solution: I haven't used SQS much, so I can't comment on its architecture in detail, but you can create an SNS topic and publish these alarms to it. You can then have SQS queues subscribed to this topic, which in turn are used for ticketing and the dashboard independently of each other.
Here again, from the ticketing queue, you can poll messages in batches using Lambda or your own scheduler and process the tickets (with a frequency depending on how time-critical the alarms are).
You may want to read this tutorial to get some pointers.
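A minimal sketch of wiring the SNS-to-SQS fanout described above; the topic and queue names are illustrative, and in practice you would also attach a queue policy allowing SNS to deliver to each queue:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Illustrative names.
topic_arn = sns.create_topic(Name="equipment-alarms")["TopicArn"]

for queue_name in ("ticketing-queue", "dashboard-queue"):
    queue_url = sqs.create_queue(QueueName=queue_name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Each subscribed queue receives a copy of every alarm published to the topic.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Producers (e.g. the DynamoDB-stream Lambda) then publish each alarm once:
sns.publish(TopicArn=topic_arn, Message='{"device": "plc-1", "alarm": 1}')
```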
You can control the Lambda function's concurrency. This will reduce the number of Lambda instances that get spun up in response to the DynamoDB events, thereby reducing the connections to RDS.
https://aws.amazon.com/blogs/compute/managing-aws-lambda-function-concurrency/
Of course, this will throttle the DynamoDB events.
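For example, reserved concurrency can be set with a single API call; the function name and limit below are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap this function at 5 concurrent executions, so at most 5 RDS connections
# are opened by it at any time. Name and limit are placeholders.
lambda_client.put_function_concurrency(
    FunctionName="ticket-creator",
    ReservedConcurrentExecutions=5,
)
```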

How to use a query (or table modification) in Redshift as an event to trigger AWS SNS

Basically, I want to send a notification to AWS SNS when my target table in a Redshift cluster is modified (for example, when I load new data from S3 into this table).
But when I went through the Redshift documentation, the events are all at a higher level (cluster, security group, etc.) and are more concerned with system management, monitoring and errors.
So I was wondering: is there a way to use a specific Redshift table as an event source to trigger AWS SNS?