AWS DynamoDB to S3 Hourly Backup - amazon-web-services

I have gone through a couple of Stack Overflow questions regarding hourly backups from DynamoDB to S3, where the best solution turned out to be to enable a DynamoDB Stream, subscribe a Lambda function to it, and push to S3.
I am trying to understand whether pushing directly from Lambda to S3 is fine, or whether I should go from Lambda to Kinesis Firehose and then to S3. Can someone share what the advantage is of introducing Firehose in between? We already trigger the Lambda only after a specific batch window, which implies we are buffering there anyway.
Thanks in advance.

Firehose gives you the ability to convert and compress your data. In addition, you can directly attach a Glue metadata table, so you can query your data with Athena.
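For reference, a minimal boto3 sketch of a delivery stream with format conversion enabled; the stream name, bucket, IAM role and Glue database/table below are placeholders, not anything from the original answer:

```python
# Hedged sketch: create a Firehose delivery stream that converts incoming JSON
# to Parquet using a Glue table's schema, so Athena can query the output.
# All names/ARNs are hypothetical.
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="ddb-backup-stream",           # placeholder name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
        "BucketARN": "arn:aws:s3:::my-backup-bucket",
        "BufferingHints": {"SizeInMBs": 128, "IntervalInSeconds": 900},
        "CompressionFormat": "UNCOMPRESSED",           # Parquet handles its own compression
        "DataFormatConversionConfiguration": {
            "Enabled": True,
            "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
            "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
            "SchemaConfiguration": {                   # the Glue table Athena will query
                "DatabaseName": "backups",
                "TableName": "ddb_items",
                "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
                "Region": "us-east-1",
            },
        },
    },
)
```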

You can write a Lambda function that reads a DynamoDB table, gets a result set, encodes the data into some format (e.g., JSON), then places that JSON into an Amazon S3 bucket. You can use scheduled events to fire off the Lambda function on a regular schedule.
Here is an AWS tutorial that shows you how to use scheduled events to invoke a Lambda function:
Creating scheduled events to invoke Lambda functions
This AWS tutorial also shows you how to read data from an Amazon DynamoDB table from a Lambda function.
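A minimal sketch of that kind of scheduled Lambda, assuming a hypothetical table named "orders" and bucket named "my-hourly-backups":

```python
# Hedged sketch: scheduled Lambda that scans a DynamoDB table and writes the
# result set as one JSON object to S3. Table and bucket names are placeholders.
import json
from datetime import datetime, timezone

import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")


def handler(event, context):
    # Paginate through the whole table; fine for small tables, consider an
    # export or parallel scan for large ones.
    items = []
    paginator = dynamodb.get_paginator("scan")
    for page in paginator.paginate(TableName="orders"):
        items.extend(page["Items"])

    # Items come back in DynamoDB's typed JSON form ({"S": ...}, {"N": ...});
    # binary ("B") attributes would need extra handling before json.dumps.
    key = datetime.now(timezone.utc).strftime("backups/%Y/%m/%d/%H.json")
    s3.put_object(
        Bucket="my-hourly-backups",
        Key=key,
        Body=json.dumps(items).encode("utf-8"),
    )
    return {"items_backed_up": len(items), "key": key}
```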

Related

Automatically ingest Data from S3 into Amazon Timestream?

What is the easiest way to automatically ingest CSV data from an S3 bucket into a Timestream database?
I have an S3 bucket which is continuously generating CSV files inside a folder structure. I want to save these files inside a Timestream database so I can visualize them inside my Grafana instance.
I already tried to do that via a Glue crawler, but that won't work for me. Is there any workaround or tutorial on how to solve this task?
I do this using a Lambda function, an SNS topic and an SQS queue.
New files in my bucket trigger a notification on an SNS topic.
The notification gets added to an SQS queue.
The Lambda function consumes the queue, recovers the bucket and key of the new S3 object, downloads the CSV file, does some processing and ingests the data into Timestream. The Lambda is implemented in Python.
This has been working OK, with the caveat that large files may not ingest fully within the Lambda 15-minute limit. Timestream is not super fast. It gets better by using multi-valued records, as well as by using the "common attributes" feature of the Timestream client in boto3.
(It should be noted that the Lambda can be triggered directly by the S3 bucket, if one prefers. Using a queue allows a bit more flexibility, such as being able to manually add files to the queue for reprocessing.)
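A rough sketch of a Lambda along those lines, assuming the S3 -> SNS -> SQS chain described above, a CSV layout of timestamp,sensor,value, and a Timestream database/table named metrics/readings (all hypothetical):

```python
# Hedged sketch: consume SQS messages carrying S3 notifications, download each
# CSV, and batch records into Timestream using CommonAttributes.
import csv
import io
import json

import boto3

s3 = boto3.client("s3")
timestream = boto3.client("timestream-write")


def handler(event, context):
    for message in event["Records"]:                   # SQS batch
        envelope = json.loads(message["body"])         # SNS envelope from the topic
        notification = json.loads(envelope["Message"])  # the actual S3 event
        for record in notification.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
            rows = list(csv.DictReader(io.StringIO(body)))

            # CommonAttributes keeps the per-record payloads small.
            common = {"MeasureValueType": "DOUBLE", "TimeUnit": "SECONDS"}
            records = [
                {
                    "Dimensions": [{"Name": "sensor", "Value": row["sensor"]}],
                    "MeasureName": "value",
                    "MeasureValue": row["value"],
                    "Time": row["timestamp"],
                }
                for row in rows
            ]

            # Timestream accepts at most 100 records per write_records call.
            for i in range(0, len(records), 100):
                timestream.write_records(
                    DatabaseName="metrics",
                    TableName="readings",
                    Records=records[i : i + 100],
                    CommonAttributes=common,
                )
```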

Is it possible to use bigquery in aws lambda?

I want to write a Python function that sends data to BigQuery every time a put event occurs in my S3 bucket, but I'm new to AWS. Is it possible to integrate BigQuery with a Lambda function? Or can someone give me another way to stream my DynamoDB data to BigQuery? Thank you. My language is Python.
N.B.: I used DynamoDB Streams with Firehose to send my data to S3; now I want to retrieve my data from S3 every time a put event occurs and send it into BigQuery.
There are already plenty of resources online about "how to trigger a Lambda after a put object on S3".
But here are a few links to get you set up:
You will need to set up an EventBridge rule (CloudWatch Events is legacy) to trigger your Lambda when some action happens on your S3 bucket:
https://aws.amazon.com/fr/blogs/compute/using-dynamic-amazon-s3-event-handling-with-amazon-eventbridge/
You can use the boto3 Python library to write AWS Lambdas:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
You can check the BigQuery Python SDK from GCP to communicate with your BQ database: https://googleapis.dev/python/bigquery/latest/index.html
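A hedged sketch of such a Lambda, assuming the S3 objects are newline-delimited JSON (as Firehose typically writes them), that the google-cloud-bigquery package is bundled with the deployment and GCP credentials are available to the function, and that my-project.my_dataset.my_table is a placeholder table:

```python
# Hedged sketch: Lambda triggered by an S3 put event reads the new object and
# streams its rows into a BigQuery table.
import json

import boto3
from google.cloud import bigquery  # must be bundled with the deployment package

s3 = boto3.client("s3")
bq = bigquery.Client()  # requires GCP credentials to be available to the function


def handler(event, context):
    for record in event["Records"]:                    # S3 put notifications
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = [json.loads(line) for line in body.splitlines() if line.strip()]

        # Streaming insert; for large files a load job (load_table_from_json)
        # is usually the better fit.
        errors = bq.insert_rows_json("my-project.my_dataset.my_table", rows)
        if errors:
            raise RuntimeError(f"BigQuery insert errors: {errors}")
```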

buffer s3 object inputs

Does anyone know whether, other than Kinesis Firehose, there is any other AWS service that can catch the S3 ingest event? I am trying to do some analysis on VPC flow logs; the current setup is CloudWatch Logs -> Kinesis Firehose -> S3 -> Athena.
The problem is that Kinesis Firehose can only buffer up to 128 MB, which is too small for me.
Events from Amazon S3 can go to:
AWS Lambda functions
Amazon SNS topic
Amazon SQS queue
So, you could send the messages to an SQS queue and then have a regular process (every hour?) that retrieves many messages and writes them to a single file.
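A rough sketch of that hourly drain-and-consolidate job, assuming the queue receives S3 event notifications for gzipped flow-log objects; the queue URL and bucket names are placeholders:

```python
# Hedged sketch: scheduled Lambda drains the SQS queue, downloads each notified
# object, and writes one consolidated gzipped file per run.
import gzip
import json
from datetime import datetime, timezone

import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/flowlog-objects"
OUTPUT_BUCKET = "flowlogs-consolidated"


def handler(event, context):
    chunks, handles = [], []
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=1)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            for rec in json.loads(msg["Body"]).get("Records", []):
                bucket = rec["s3"]["bucket"]["name"]
                key = rec["s3"]["object"]["key"]
                raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
                chunks.append(gzip.decompress(raw))
            handles.append({"Id": msg["MessageId"], "ReceiptHandle": msg["ReceiptHandle"]})

    if chunks:
        out_key = datetime.now(timezone.utc).strftime("hourly/%Y/%m/%d/%H.log.gz")
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=out_key,
                      Body=gzip.compress(b"".join(chunks)))
        # Only delete the messages after the consolidated file is written;
        # SQS allows at most 10 entries per delete_message_batch call.
        for i in range(0, len(handles), 10):
            sqs.delete_message_batch(QueueUrl=QUEUE_URL, Entries=handles[i:i + 10])
```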
Alternatively, you could keep your current setup but use Amazon Athena on a regular basis to combine multiple files by using CREATE TABLE AS. This would select from the existing files and store the output in a new table (with a new location). You could even use it to transform the files into a format that is easier to query in Athena (e.g. Snappy-compressed Parquet). The hard part is to only include each input file once in this concatenation process (possibly using SymlinkTextInputFormat).
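A hedged sketch of kicking off such a CTAS query from boto3; the database, table names and S3 locations are placeholders, and the "only include each file once" bookkeeping is left out:

```python
# Hedged sketch: run an Athena CTAS query that rewrites raw flow logs into
# Snappy-compressed Parquet at a new S3 location.
import boto3

athena = boto3.client("athena")

ctas = """
CREATE TABLE vpc_flow_logs_compacted
WITH (
    format = 'PARQUET',
    parquet_compression = 'SNAPPY',
    external_location = 's3://flowlogs-consolidated/compacted/run-0001/'
) AS
SELECT * FROM vpc_flow_logs_raw
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "flowlogs"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},
)
```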

AWS DynamoDB to S3

I want to move (export) data from DynamoDB to S3.
I have seen this tutorial, but I'm not sure if the extracted DynamoDB data will be deleted or will coexist in DynamoDB and S3 at the same time.
What I expect is that the data from DynamoDB will be deleted and stored in S3 (after being stored in DynamoDB for X time).
The main purpose of the project could be similar to this.
Is there any way to do this without having to develop a Lambda function?
To summarize, I have found these 2 different ways:
DynamoDB -> Data Pipeline -> S3 (is the DynamoDB data deleted?)
DynamoDB -> DynamoDB TTL + DynamoDB Stream -> Lambda -> Firehose -> S3 (this appears to be more difficult)
Is this post currently valid for this purpose?
What would be the simplest and fastest way?
In your first option, by default, data is not removed from DynamoDB. You could design the pipeline to make this work, but I think that is not the best solution.
In your second option, you must evaluate the solution based on your expected data volume:
If the data volume that will expire under the TTL definition is not very large, you can use Lambda to persist removed data into S3 without Firehose. You can design a simple Lambda function to be triggered by the DynamoDB Stream and persist each stream event as an S3 object. You can even trigger another Lambda function to consolidate the objects into a single file at the end of the day, week or month. But again, it depends on your expected volume.
If you have a lot of data being expired at the same time and you must perform transformations on this data, the best solution is to use Firehose. Firehose can transform, encrypt and compact your data before sending it to S3. If the volume of data is too big, using functions at the end of the day, week or month may not be feasible, so it's better to perform all these procedures before persisting it.
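A simple sketch of that "persist each stream event as an S3 object" Lambda, assuming the stream is configured with OLD_IMAGE (or NEW_AND_OLD_IMAGES) and an archive bucket named ddb-archive (hypothetical):

```python
# Hedged sketch: archive TTL-expired items from a DynamoDB Stream into S3,
# one object per stream record.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "ddb-archive"  # placeholder bucket name


def handler(event, context):
    for record in event["Records"]:
        # TTL deletions arrive as REMOVE events (performed by the DynamoDB
        # service principal, visible in the record's userIdentity field).
        if record["eventName"] != "REMOVE":
            continue
        old_image = record["dynamodb"].get("OldImage")
        if not old_image:
            continue
        key = f"expired/{record['dynamodb']['SequenceNumber']}.json"
        s3.put_object(Bucket=BUCKET, Key=key,
                      Body=json.dumps(old_image).encode("utf-8"))
```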
You can use AWS Data Pipeline to dump a DynamoDB table to S3, and the DynamoDB data will not be deleted.

DynamoDB Streams to S3

I am using Data Pipeline (DP) for daily backups of DynamoDB; however, I would like to do incremental backups of the data that is missed by DP runs (updates between DP runs). To accomplish that, I would like to use DynamoDB Streams + Lambda + S3 to bring real-time DynamoDB updates to S3. I understand how DynamoDB Streams work, however, I am struggling with creating a Lambda function that writes to S3 and, say, rolls a file every hour.
Has anyone tried it?
It's an hour's job, dude. What you need to do is:
Enable the DynamoDB update stream and attach the AWS-provided Lambda function
https://github.com/awslabs/lambda-streams-to-firehose
Enable a Firehose stream and use the above function to stream records out to Firehose.
Configure Firehose to dump the records to S3.
Done.
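For illustration, a minimal Python sketch of that glue step (the awslabs function linked above is the production-ready version); the Firehose stream name is a placeholder:

```python
# Hedged sketch: forward DynamoDB Stream records to a Firehose delivery stream,
# which then buffers and dumps them to S3.
import json

import boto3

firehose = boto3.client("firehose")
STREAM_NAME = "ddb-stream-to-s3"  # placeholder delivery stream name


def handler(event, context):
    records = [
        {"Data": (json.dumps(record["dynamodb"]) + "\n").encode("utf-8")}
        for record in event["Records"]
    ]
    # put_record_batch accepts at most 500 records per call.
    for i in range(0, len(records), 500):
        firehose.put_record_batch(
            DeliveryStreamName=STREAM_NAME,
            Records=records[i : i + 500],
        )
```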