Simple Notification for row insertion in AWS Redshift

I have created a custom error log table in Redshift; rows are inserted into it whenever an error occurs in my stored procedures.
Is there a way to get a notification, for example via SNS, whenever a new row is inserted into that error table?

There is no capability in Amazon Redshift to trigger events on row insertion.
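One common workaround (a sketch only, not part of the original answer) is to poll the error table on a schedule, for example with an EventBridge-scheduled Lambda that queries via the Redshift Data API and publishes anything new to SNS. All identifiers below (cluster, database, table, topic, and the "since" value passed in the event) are hypothetical:

    import time
    import boto3

    redshift = boto3.client("redshift-data")
    sns = boto3.client("sns")

    def handler(event, context):
        # Ask Redshift for rows inserted since the last poll.
        stmt = redshift.execute_statement(
            ClusterIdentifier="my-cluster",
            Database="dev",
            DbUser="awsuser",
            Sql="SELECT error_message, created_at FROM error_log "
                "WHERE created_at > :since",
            Parameters=[{"name": "since", "value": event["since"]}],
        )
        # The Data API is asynchronous, so wait for the query to finish.
        while True:
            desc = redshift.describe_statement(Id=stmt["Id"])
            if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
                break
            time.sleep(1)
        if desc["Status"] != "FINISHED":
            raise RuntimeError(desc.get("Error"))
        # Publish one SNS notification per new error row.
        for row in redshift.get_statement_result(Id=stmt["Id"])["Records"]:
            sns.publish(
                TopicArn="arn:aws:sns:us-east-1:123456789012:redshift-errors",
                Message=row[0]["stringValue"],
            )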

Related

EventBridge target to Redshift to insert event data

Steps followed to insert event data into the Redshift database:
Created a Redshift cluster in a VPC, and created the database and schema.
Created an EventBridge rule with Redshift as the target.
SQL query used: insert into dev.schema (a,b) VALUES (event.data.a,event.data.b)
But when the event fires for the particular name, the data is not inserted into Redshift.
Are there any specific SQL query methods available to push the event data into the Redshift database schema?
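The thread has no answer, but one likely cause is that the Redshift Data API target runs the configured SQL string verbatim, so placeholders like event.data.a are not substituted with fields from the event. A common alternative (a sketch under that assumption; all identifiers are hypothetical) is to point the rule at a Lambda target and have it run a parameterized INSERT through the Redshift Data API:

    import boto3

    client = boto3.client("redshift-data")

    def handler(event, context):
        # For a custom EventBridge event, the payload sits under event["detail"];
        # adjust the path below to match your actual event shape.
        detail = event["detail"]
        client.execute_statement(
            ClusterIdentifier="my-cluster",     # hypothetical identifiers
            Database="dev",
            DbUser="awsuser",
            Sql="INSERT INTO my_schema.my_table (a, b) VALUES (:a, :b)",
            Parameters=[
                {"name": "a", "value": str(detail["a"])},
                {"name": "b", "value": str(detail["b"])},
            ],
        )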

AWS DynamoDB Trigger

I have one DynamoDB table. Any data inserted into the table triggers a Lambda function. When I write data to the DynamoDB table in a loop, the trigger sometimes does not fire for one or two rows.
What is the solution for the trigger not firing when inserting in a loop?
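There is no answer in the thread, but a likely explanation: DynamoDB Streams delivers records to Lambda in batches, so several inserts from a tight loop can arrive in a single invocation rather than one invocation per row, which looks like a missing trigger if the handler only reads the first record. A minimal handler sketch that walks the whole batch:

    def handler(event, context):
        # One invocation can carry many stream records; process them all
        # instead of assuming one invocation per inserted row.
        for record in event["Records"]:
            if record["eventName"] == "INSERT":
                new_image = record["dynamodb"]["NewImage"]  # DynamoDB JSON
                print("inserted:", new_image)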

Why does DynamoDB stream trigger but the entry is not in DynamoDB

I am using DynamoDB as my database and have DynamoDB Streams set up to do some extra things when a row is saved to DynamoDB.
My problem is that when I write to the database, the stream is triggered, but when the Lambda triggered by the stream tries to fetch the data, the item is not present in the DynamoDB table.
I am really not sure why this is happening. Is it something that happens when the table gets big and we add a lot of data to DynamoDB at the same time?
I have a combination of partition key and sort key: the partition key is the userId and the sort key is the timestamp. Could it be something due to that?
Could it be because a lot of different services try to write to DynamoDB at the same time?
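One way to sidestep this race entirely (a sketch, not from the original thread): when the stream's view type includes NEW_IMAGE, the stream record already carries the written item, so the Lambda can read it from the event instead of re-querying the table, where a default, eventually consistent read may not see the write yet:

    from boto3.dynamodb.types import TypeDeserializer

    deserializer = TypeDeserializer()

    def handler(event, context):
        for record in event["Records"]:
            image = record["dynamodb"].get("NewImage")
            if image is None:
                continue  # e.g. REMOVE events carry no NewImage
            # Convert DynamoDB JSON ({"S": "..."}, {"N": "..."}) to plain values.
            item = {k: deserializer.deserialize(v) for k, v in image.items()}
            print(item["userId"], item["timestamp"])  # keys from the question

If the table must be re-read anyway, passing ConsistentRead=True to GetItem is another option.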

How to send an AWS IoT MQTT message to a DynamoDB table?

I want to take the MQTT message and put it into a DynamoDB table. I created an IoT rule and action for that, but a different value is inserted into the DynamoDB table. How can I get the MQTT message value into the DynamoDB table?
If you could provide more information on where and what wrong data you are getting, I can help you out with the first part of the question.
How can I keep MQTT messages in DynamoDB?
You can set up a trigger with a Lambda function: whenever a message arrives, the rule triggers the function, which stores the data in DynamoDB.
UPDATE
You can get more information here. It is explained in detail and solves your exact problem.
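A minimal sketch of the Lambda approach described above, assuming an IoT rule like SELECT * FROM 'my/topic' with a Lambda action; the table name and payload fields are hypothetical:

    import json
    import boto3

    table = boto3.resource("dynamodb").Table("mqtt_messages")  # hypothetical

    def handler(event, context):
        # With SELECT * the Lambda receives the decoded MQTT payload as `event`.
        table.put_item(Item={
            "deviceId": event["deviceId"],         # hypothetical payload fields
            "timestamp": str(event["timestamp"]),
            "payload": json.dumps(event),          # raw message kept as a string
        })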

Copying only new records from AWS DynamoDB to AWS Redshift

I see there are tons of examples and documentation for copying data from DynamoDB to Redshift, but we are looking at an incremental copy process where only the new rows are copied from DynamoDB to Redshift. We will run this copy process every day, so there is no need to wipe the entire Redshift table each day. Does anybody have any experience or thoughts on this topic?
DynamoDB has a feature (currently in preview) called Streams:
Amazon DynamoDB Streams maintains a time-ordered sequence of item-level changes in any DynamoDB table in a log for a duration of 24 hours. Using the Streams APIs, developers can query the updates, receive the item-level data before and after the changes, and use it to build creative extensions to their applications built on top of DynamoDB.
This feature will allow you to process new updates as they come in and do what you want with them, rather than design an exporting system on top of DynamoDB.
You can see more information about how the processing works in the Reading and Processing DynamoDB Streams documentation.
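For completeness, a sketch of what reading a stream directly through the Streams API looks like (boto3 names; TRIM_HORIZON starts from the oldest retained record; shard pagination is omitted):

    import boto3

    streams = boto3.client("dynamodbstreams")

    def read_new_items(stream_arn):
        # Walk every shard of a DynamoDB stream and yield inserted items.
        shards = streams.describe_stream(
            StreamArn=stream_arn)["StreamDescription"]["Shards"]
        for shard in shards:
            iterator = streams.get_shard_iterator(
                StreamArn=stream_arn,
                ShardId=shard["ShardId"],
                ShardIteratorType="TRIM_HORIZON",
            )["ShardIterator"]
            while iterator:
                page = streams.get_records(ShardIterator=iterator)
                for record in page["Records"]:
                    if record["eventName"] == "INSERT":
                        yield record["dynamodb"]["NewImage"]
                iterator = page.get("NextShardIterator")
                if not page["Records"]:
                    break  # avoid spinning on an open shard with no new data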
Redshift's COPY from DynamoDB can only copy the entire table. There are several ways to work around this:
Using an AWS EMR cluster and Hive - if you set up an EMR cluster, you can use Hive tables to run queries against the DynamoDB data and move the results to S3. That data can then easily be moved into Redshift.
You can store your DynamoDB data based on access patterns (see http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.TimeSeriesDataAccessPatterns). If you store the data this way, the per-period DynamoDB tables can be dropped after they are copied to Redshift.
This can be solved with a secondary DynamoDB table that tracks only the keys that were changed since the last backup. This table has to be updated wherever the initial DynamoDB table is updated (add, update, delete). At the end of the backup process you can delete the tracked keys, either all at once or one by one as each row is backed up.
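One way to keep such a tracking table up to date (a variation on the answer's application-level updates, sketched here with a stream-triggered Lambda; the table and key names are hypothetical):

    import boto3

    tracking = boto3.resource("dynamodb").Table("changed_keys")  # hypothetical

    def handler(event, context):
        # Record the primary key of every item changed in the main table.
        for record in event["Records"]:
            keys = record["dynamodb"]["Keys"]  # DynamoDB JSON format
            tracking.put_item(Item={
                "pk": keys["pk"]["S"],         # hypothetical key attributes
                "sk": keys["sk"]["S"],
            })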
If your DynamoDB table has either
timestamps as an attribute, or
a binary flag which conveys data freshness as an attribute,
then you can write a Hive query to export only the current day's (or otherwise fresh) data to S3, and then 'KEEP_EXISTING' copy this incremental S3 data to Redshift.
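A sketch of the final COPY step (hypothetical names throughout), assuming the Hive query exported the fresh rows as JSON files under a per-day S3 prefix; because COPY appends rather than replaces, loading only that prefix gives the daily incremental load the question asks for:

    import boto3

    client = boto3.client("redshift-data")

    def copy_increment(day):  # e.g. "2024-06-01"
        client.execute_statement(
            ClusterIdentifier="my-cluster",
            Database="dev",
            DbUser="awsuser",
            Sql=(
                "COPY analytics.events "
                f"FROM 's3://my-bucket/dynamodb-export/{day}/' "
                "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
                "FORMAT AS JSON 'auto'"
            ),
        )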