Lambda read DynamoDB and send to ML Endpoint - amazon-web-services

Background:
I have a DynamoDB table with columns "TimeStamp | Data1 | Data2". I
also have an ML endpoint in SageMaker which needs Data1 and Data2 to
generate one output value (a score).
Question:
My goal is to write a Lambda function (Java or Python) that reads
the latest row in the DynamoDB table, sends it to the endpoint, and
receives the score.
What I have tried:
I have only found guides where you export the whole DynamoDB table
to S3 and send it to the endpoint via Data Pipeline.
This is not how I want it to work!

I am an engineer on the SageMaker team.
As I understand it, you would like to use a Lambda function to
(1) listen to DynamoDB table updates, and
(2) invoke a SageMaker endpoint for real-time predictions.
For (1), DynamoDB Streams would be a wonderful place to start. Here is a tutorial for processing DynamoDB streams with a Lambda function:
For (2), here is a step-by-step tutorial for invoking a SageMaker endpoint for predictions from inside a Lambda function.
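If it helps, here is a minimal Python sketch of the whole flow. The table name, endpoint name, the CSV payload format, and the assumption that TimeStamp is the sort key under a known partition key ("PK"/"sensor-1" below) are all placeholders you would replace with your own:

import boto3

dynamodb = boto3.client("dynamodb")
runtime = boto3.client("sagemaker-runtime")

# Placeholder names -- replace with your own table, key schema and endpoint.
TABLE_NAME = "MyTable"
ENDPOINT_NAME = "my-endpoint"

def handler(event, context):
    # Fetch the newest item. This assumes TimeStamp is the sort key under a
    # known partition key; with a different key schema you would adjust the
    # query (or fall back to a scan).
    result = dynamodb.query(
        TableName=TABLE_NAME,
        KeyConditionExpression="PK = :pk",
        ExpressionAttributeValues={":pk": {"S": "sensor-1"}},
        ScanIndexForward=False,  # newest TimeStamp first
        Limit=1,
    )
    item = result["Items"][0]
    data1 = item["Data1"]["N"]
    data2 = item["Data2"]["N"]

    # Send the two values to the SageMaker endpoint. CSV is just an example
    # content type -- use whatever format your model container expects.
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="text/csv",
        Body=f"{data1},{data2}",
    )
    score = response["Body"].read().decode("utf-8")
    return {"score": score}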
Hope this helps.

Related

Transfer Data from AWS Time Stream to DynamoDB

I am working in the IoT space with two databases: AWS Timestream and AWS DynamoDB.
My sensor data is coming into Timestream via AWS IoT Core and MQTT. I set up a rule that transfers the incoming data directly into Timestream.
What I need to do now is to run some operations on the data and save the results of these operations into DynamoDB.
I know DynamoDB has a feature called DynamoDB Streams. Is there a solution like Streams in Timestream as well? Or does anybody have an idea how I can automatically transfer the results of the operations from Timestream to DynamoDB?
Timestream does not have Change Data Capture capabilities.
The best thing to do is to write the data into DynamoDB from wherever you are doing your operations on Timestream. For example, if you are using AWS Glue to analyze your Timestream data, you can sink the results directly from Glue using the DynamoDB sink.
Timestream has the concept of a Scheduled Query. When a query has run, you can be notified via an SNS topic. You could connect a Lambda function to that SNS topic to retrieve the query result and store it in DynamoDB.
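As a rough illustration of that last option (the DynamoDB table name, key schema and Timestream query below are made up), a Lambda subscribed to the SNS topic could re-query Timestream and copy the rows into DynamoDB with boto3:

import boto3

timestream = boto3.client("timestream-query")
dynamodb = boto3.resource("dynamodb")

# Placeholder names -- replace with your own DynamoDB table and Timestream query.
results_table = dynamodb.Table("SensorAggregates")
QUERY = 'SELECT device_id, avg(measure_value::double) AS avg_value ' \
        'FROM "mydb"."mytable" WHERE time > ago(1h) GROUP BY device_id'

def handler(event, context):
    # The SNS message tells us a scheduled query run has finished; here we
    # simply pull the aggregated data out of Timestream and copy it across.
    response = timestream.query(QueryString=QUERY)
    columns = [col["Name"] for col in response["ColumnInfo"]]
    with results_table.batch_writer() as batch:
        for row in response["Rows"]:
            # Timestream returns scalar values as strings, which DynamoDB accepts.
            item = {name: datum.get("ScalarValue")
                    for name, datum in zip(columns, row["Data"])}
            batch.put_item(Item=item)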

Is it possible to use bigquery in aws lambda?

I want to write a Python function that sends data to BigQuery every time a put event occurs in my S3 bucket, but I'm new to AWS. Is it possible to integrate BigQuery with a Lambda function? Or can someone give me another way to stream my DynamoDB data to BigQuery? Thank you, my language is Python.
N.B.: I used DynamoDB Streams with Firehose to send my data to S3; now I want to retrieve the data from S3 every time a put event occurs and send it to BigQuery.
There are already plenty of resources online about "how to trigger a Lambda after a put object on an S3 bucket".
But here are a few links to get you set up:
You will need to set up an EventBridge rule (CloudWatch Events is the legacy name) to trigger your Lambda when some action happens on your S3 bucket:
https://aws.amazon.com/fr/blogs/compute/using-dynamic-amazon-s3-event-handling-with-amazon-eventbridge/
You can use the boto3 Python SDK to write AWS Lambda functions:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
You can check the BigQuery Python SDK by GCP to communicate with your BQ database: https://googleapis.dev/python/bigquery/latest/index.html
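Putting those pieces together, a minimal sketch of such a Lambda could look like the following. The table ID and the newline-delimited JSON assumption are placeholders, it assumes the classic S3 event notification payload (the EventBridge event shape differs slightly), and it assumes GCP credentials are available to the function, e.g. via a service-account key referenced by GOOGLE_APPLICATION_CREDENTIALS:

import json
import boto3
from google.cloud import bigquery  # BigQuery Python SDK

s3 = boto3.client("s3")
bq = bigquery.Client()

# Placeholder target table -- replace with your own project/dataset/table.
TABLE_ID = "my-project.my_dataset.my_table"

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Read the newly written object; assumed to be newline-delimited JSON.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        rows = [json.loads(line) for line in body.splitlines() if line]

        # Stream the rows into BigQuery.
        errors = bq.insert_rows_json(TABLE_ID, rows)
        if errors:
            print("BigQuery insert errors:", errors)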

Incremental exports from Amazon DynamoDB to Amazon S3

We need to run an analysis of the data in Amazon DynamoDB. Since doing it in DDB isn't an option due to DDB's limitations with analytics, based on the recommendations I am leaning towards DDB -> S3 -> Athena.
It is a data-heavy application with data streaming in from AWS IoT devices, and it is also a multi-tenant application. Now, to sync data from DDB to Amazon S3, it will probably be a couple of times a day. How do we set up incremental exports for this purpose?
There is an Athena connector that lets you query the data in your DynamoDB table directly using SQL:
https://docs.aws.amazon.com/athena/latest/ug/athena-prebuilt-data-connectors-dynamodb.html
https://dev.to/jdonboch/finally-dynamodb-support-in-aws-quicksight-sort-of-2lbl
Another solution for this use case is to write an AWS Step Functions workflow that, when invoked, reads data from an Amazon DynamoDB table, formats the data the way you want, and places it into an Amazon S3 bucket (an example that shows a similar use case will be available soon):
That example is the reverse (there the source is an Amazon S3 bucket and the target is an Amazon DynamoDB table), but you can build the workflow so the target is an Amazon S3 bucket. Because it's a workflow, you can use a Lambda function that is scheduled to fire a few times a day based on a CRON expression. The job of this Lambda function is to invoke the workflow using the Step Functions API.
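The scheduled Lambda itself can stay tiny. A sketch using boto3 and the Step Functions API (the state machine ARN is a placeholder):

import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN -- replace with the ARN of your export workflow.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:DdbToS3Export"

def handler(event, context):
    # Fired a few times a day by an EventBridge cron rule; all it does is
    # kick off the Step Functions workflow.
    response = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"requestedAt": event.get("time", "")}),
    )
    return {"executionArn": response["executionArn"]}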

Which is the correct way to insert data from dynamodb to elasticsearch via Logstash or lambda?

I want to insert data from multiple tables in AWS DynamoDB as a single document in Elasticsearch. Currently, I have set a trigger on one of my tables in DynamoDB that calls a Lambda function which inserts the document into Elasticsearch. But recently I read this article and got a bit confused about which is the correct approach: Logstash or a Lambda function?

Visualize DynamoDB data in AWS Quicksight

I am looking for an AWS-centric solution (avoiding 3rd party stuff if possible) for visualizing data that is in a very simple DynamoDB table.
We use AWS QuickSight for many other reports and dashboards for our clients, so the goal is to have the visualizations made available there.
I was very surprised to see that DynamoDB is not a supported source for QuickSight, although many other things are, like S3, Athena, Redshift, RDS, etc.
Does anyone have any experience for creating a solution for this?
I am thinking that I will just create a job that will dump the DynamoDB table to S3 every so often and then use the S3 or Athena integrations with Quicksight to read/display it. It would be nice to have a simple solution for more live data.
!!UPDATE!!
As of 2021, we can finally use Athena data connectors to expose DynamoDB data in QuickSight without any custom scripts or duplicate data.
That being said, I would like to caveat this by saying that just because it can be done, you may need to ask yourself whether this is really a good solution for your workload. DynamoDB isn't the best fit for data-warehousing use cases, and performing large scans on tables can end up being slow/costly. If your dataset is very large and this is a real production use case, it would probably be best to still go with an ETL workflow and move the DynamoDB data to a more appropriate data store.
But if you are still interested in seeing DynamoDB data live in QuickSight without any additional ETL processes to move/transform the data: I wrote a detailed blog post with step-by-step instructions, but in general, here is the process:
Step 1: Ensure you have an Athena workgroup that uses the new Athena engine version 2; if not, create one.
Step 2: In Athena, under Data Sources, create a new data source, select "Query a data source" and then "Amazon DynamoDB".
Step 3: On the next part of the wizard, click "Configure new AWS Lambda function" to deploy the prebuilt AthenaDynamoDBConnector.
Step 4: Once the AthenaDynamoDBConnector is deployed, select the name of the function you deployed in the data source creation wizard in Athena, give your DynamoDB data a catalog name like "dynamodb", and click "Connect".
You now should be able to query DynamoDB data in Athena, but there are a few more steps to get things working in QuickSight.
Step 5: Go to the IAM console and find the QuickSight service role (i.e. aws-quicksight-service-role-v0).
Step 6: Attach the AWS managed "AWSLambdaRole" policy to the QuickSight role, since QuickSight now needs permission to invoke your data connector.
Step 7: Go to the QuickSight console and add a new Athena data source that uses the engine version 2 workgroup you created in Step 1.
Step 8: You should now be able to create a data set with that Athena engine version 2 workgroup data source and choose the Athena catalog name you gave the DynamoDB connector in Step 4.
Bingo bango, you should now be able to directly query or cache DynamoDB data in Quicksight without needing to create custom code or jobs that duplicate your data to another data source.
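For example, the connector's catalog can then be queried like any other Athena catalog, either in the query editor or programmatically. A small sketch with boto3, where "dynamodb" is the catalog name from Step 4 and the table, workgroup and output location are placeholders:

import boto3

athena = boto3.client("athena")

# "dynamodb" is the catalog name given to the connector in Step 4;
# the table, workgroup and output location are placeholders.
response = athena.start_query_execution(
    QueryString='SELECT * FROM "default"."my_ddb_table" LIMIT 10',
    QueryExecutionContext={"Catalog": "dynamodb", "Database": "default"},
    WorkGroup="engine-v2-workgroup",
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])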
As of March 2020, Amazon is making available a beta feature called Athena DynamoDB Connector.
Unfortunately, it's only beta/preview; you can get it set up in Athena, but I don't see a way to use these new Athena catalogs in QuickSight.
Hopefully once this feature is GA, it can be easily imported into Quicksight and I can update the answer with the good news.
Instructions on setting up a DynamoDB connector
There are many new data sources that AWS is making available in beta for automating the connections to Athena.
You can set these up via the console by:
Step 1: Navigate to the "Data Sources" menu in the AWS Athena console.
Step 2: Click the "Configure Data Source" button.
Step 3: Choose the "Query a data source" radio button.
Step 4: Select the "Amazon DynamoDB" option that appears.
Step 5: Click the "Configure new function" option. You'll need to specify a bucket to put "spilled" data into and provide a name for the new DynamoDB catalog.
Step 6: Once the app is deployed from Step 5, select the Lambda name (the name of the catalog you entered in Step 5) in the Athena data source form from Step 4 and also provide that same catalog name.
Step 7: Create the data connector.
Now you can go to the Athena query editor, select the catalog you just created, and see a list of all DynamoDB tables for your region under the default Athena database in the new catalog, which you can now query as part of Athena.
We want DynamoDB support in Quicksight!
The simplest way I could find is below:
1 - Create a Glue crawler which takes the DynamoDB table as a data source and writes documents to a Glue table. (Let's say Table X.)
2 - Create a Glue job which takes Table X as a data source and writes the records into an S3 bucket in Parquet format. (Let's say s3://table-x-parquets; a minimal job script is sketched after these steps.)
3 - Create a Glue crawler which takes s3://table-x-parquets as a data source and creates a new Glue table from it. (Let's say Table Y.)
Now you can execute Athena queries against Table Y and also use it as a data set in QuickSight.
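For step 2, the Glue job script could look roughly like the following sketch (the database, table and bucket names are placeholders for whatever your crawler produced):

from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# "Table X" as crawled into the Glue Data Catalog -- names are placeholders.
frame = glue_context.create_dynamic_frame.from_catalog(
    database="my_database",
    table_name="table_x",
)

# Write the rows out as Parquet to the bucket from step 2.
glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://table-x-parquets/"},
    format="parquet",
)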
I'd also like to see a native integration between DynamoDB and QuickSight, so I will be watching this thread as well.
But there is at least one option that's closer to what you want. You could enable Streams on your DynamoDB table and then set up a trigger so that a Lambda function runs when changes are made to DynamoDB.
Then you could take action only on the specific DynamoDB events you care about ('INSERT', 'MODIFY', 'REMOVE') and dump the new/modified record to S3. That would be pretty close to real-time data, as it would trigger immediately upon update.
I did something similar in the past but instead of dumping data to S3 I was updating another DynamoDB table. It would be pretty simple to switch the example to S3 instead. See below.
const AWS = require('aws-sdk');

exports.handler = async (event, context, callback) => {
    console.log("Event:", event);

    const dynamo = new AWS.DynamoDB();

    // Fetch the list of customer IDs once per invocation.
    const customerResponse = await dynamo.scan({
        TableName: 'Customers',
        ProjectionExpression: 'CustomerId'
    }).promise().catch(err => console.log(err));
    console.log(customerResponse);

    let customers = customerResponse.Items.map(item => item.CustomerId.S);
    console.log(customers);

    // Walk the stream records and fan out each newly inserted item to every customer.
    for (let i = 0; i < event.Records.length; i++) {
        if (event.Records[i].eventName === 'INSERT') {
            if (event.Records[i].dynamodb.NewImage) {
                console.log(event.Records[i].dynamodb.NewImage);
                for (let j = 0; j < customers.length; j++) {
                    await dynamo.putItem({
                        Item: {
                            ...event.Records[i].dynamodb.NewImage,
                            CustomerId: { S: customers[j] }
                        },
                        TableName: 'Rules'
                    }).promise().catch(err => console.log(err));
                }
            }
        }
    }
};
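For the S3 variant described above, the handler shrinks to something like this (shown in Python here; the bucket name and object key scheme are placeholders):

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-dynamodb-export-bucket"  # placeholder bucket name

def handler(event, context):
    for record in event["Records"]:
        # Only act on the stream events you care about (INSERT / MODIFY / REMOVE).
        if record["eventName"] == "INSERT" and "NewImage" in record["dynamodb"]:
            key = f"items/{record['dynamodb']['SequenceNumber']}.json"
            s3.put_object(
                Bucket=BUCKET,
                Key=key,
                Body=json.dumps(record["dynamodb"]["NewImage"]),
            )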
Possible solutions are explained in other answers. Just wanted to discuss another point.
BI tools such as QuickSight are designed to be used on top of analytical data stores such as Redshift, S3, etc. DynamoDB is not a very suitable data store for analytics purposes. Row-by-row operations such as "put" or "get" are very efficient, but bulk operations such as "scan" are expensive. If you are constantly doing scans during the day, your DynamoDB costs might grow fast.
A possible way is to cache the data in SPICE (QuickSight's in-memory cache), but a better way is to unload the data into a better-suited store such as S3 or Redshift. A couple of solutions are given in other answers.
Would love to see DynamoDB integration with QuickSight. Using DynamoDB Streams to dump to S3 doesn't work, because DynamoDB Streams send out events instead of updating records; hence, if you read from this S3 bucket, you'll have two instances of the same item: one before the update and one after the update.
One solution that I see now is to dump data from DynamoDB to an S3 bucket periodically using Data Pipeline and use Athena and QuickSight on that S3 bucket.
A second solution is to use a DynamoDB stream to send data to Elasticsearch using a Lambda function (sketched below). Elasticsearch has a companion visualization tool called Kibana, which has pretty cool visualizations. Obviously this is going to increase your cost, because now you are storing your data in two places.
Also make sure that you transform your data so that each Elasticsearch document contains the most granular data you need, since Kibana visualizations will aggregate everything within a document.
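As an illustration of that second solution, the stream-handling Lambda could look roughly like this. The endpoint, index and key attribute are placeholders, and for brevity it skips request signing; a real setup would normally sign requests with SigV4 or rely on the domain's fine-grained access control:

import json
import os
import requests  # bundled with the Lambda deployment package

# Placeholder Elasticsearch endpoint and index.
ES_URL = os.environ.get("ES_URL", "https://my-es-domain.us-east-1.es.amazonaws.com")
INDEX = "sensor-data"

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            image = record["dynamodb"]["NewImage"]
            # Flatten the DynamoDB attribute-value wrappers into a plain document
            # so Kibana sees granular fields instead of nested type descriptors.
            doc = {attr: list(value.values())[0] for attr, value in image.items()}
            doc_id = record["dynamodb"]["Keys"]["TimeStamp"]["S"]  # assumed key attribute
            requests.put(f"{ES_URL}/{INDEX}/_doc/{doc_id}", json=doc, timeout=10)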