How can I display a sensor value from AWS DynamoDB in my front-end?

I am building an Android IoT app using AWS Amplify and DynamoDB. I have an ESP32 that publishes its sensor data to an MQTT topic, and I created an IoT rule that stores the values in separate DynamoDB columns.
The code below sorts the data in descending order by timestamp and queries with a limit of 1. It works the first time, but it keeps returning the same item even after the table has been updated or the item has been removed. I have read about DynamoDB Streams, Lambda, and IoT Device Shadow, but I am new to AWS and unsure which service or approach works best without being too complex.
This is my code to retrieve data:
public void readById() {
    // Query the local DataStore, sorted by ID descending, and return only the first item.
    Amplify.DataStore.query(
            myModel.class,
            Where.sorted(myModel.ID.descending())
                    .paginated(Page.startingAt(0).withLimit(1)),
            items -> {
                while (items.hasNext()) {
                    myModel item = items.next();
                    retrievedId = item.getId();
                    Log.i("Amplify", "Id " + item.getId());
                }
            },
            failure -> Log.e("Amplify", "Could not query DataStore", failure)
    );
}
Any input is appreciated!
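For reference, here is a minimal, untested sketch of one alternative: instead of repeatedly re-querying the local store, DataStore's observe API pushes each change as it syncs. It assumes the same generated myModel class and imports as the snippet above; the method name observeChanges is just a placeholder.

public void observeChanges() {
    // Hypothetical sketch: react to DataStore changes as they sync, instead of re-querying.
    Amplify.DataStore.observe(
            myModel.class,
            cancelable -> Log.i("Amplify", "Observation started"),
            change -> {
                myModel item = change.item();
                Log.i("Amplify", change.type() + " received for id " + item.getId());
            },
            failure -> Log.e("Amplify", "Observation failed", failure),
            () -> Log.i("Amplify", "Observation complete")
    );
}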

Related

How to delete all items that satisfy a certain condition in AWS Amplify?

I am developing a web app using AWS Amplify.
I want to delete multiple items that satisfy a certain condition, using a query like the following:
mutation delete {
  deletePostTag(condition: {title: {eq: "Hello"}}) {
    id
  }
}
However, when I run the above query in the AWS AppSync console, it complains that the input field is missing, and unfortunately input only accepts an id.
It seems that the resolver generated by the Amplify CLI does not support deleting multiple items at once.
Do I have to implement a custom resolver?
You can delete multiple items in a batch; an example is below.
Schema:
type Mutation {
  batchDelete(ids: [ID]): [Post]
}
Query:
mutation delete {
  batchDelete(ids: [1, 2]) {
    id
  }
}
I'm not 100% sure whether conditions are supported here, but hopefully you can test it. If, as I suspect, they are not supported, simply issue a query with those same conditions to retrieve the matching items and then supply the resulting array of item IDs to batchDelete.

Triggering a Lambda function with multiple required sources

I'm currently running a CloudFormation stack with a number of elements to process video, including a call to Rekognition. I have most of it working but have a question about properly storing information as I go... so I can, in the end, write the Rekognition data for a video to a DynamoDB table.
Below are the relevant parts of the stack, most of which sit inside a Step Function that passes this input event along:
sample_event = {
    "guid": "1234",
    "video": "video.mp4",
    "bucket": "my-bucket"
}
Current setup:
1. Write sample_event to a DynamoDB table, keyed by the primary key 'guid', and pass sample_event along to the next step.
2. Rekognition-Trigger Lambda: runs start_label_detection() on 'video.mp4' in 'my-bucket' and sets the notification channel to an SNS topic.
3. Rekognition-Collect Lambda: sits outside the Step Function and is triggered by the SNS topic (several minutes later, for example); it collects the JobId from the SNS message and runs get_label_detection() with that JobId.
The above is working fine. I want to add step 4:
4. Write the Rekognition response to my DynamoDB table, for the entry at "guid" = "1234", so my DynamoDB item is updated to:
{ "guid": "1234", "video": "video.mp4", "bucket": "my-bucket", "rek_data": {"Labels": [...]} }
So it seems to me that I essentially can't pass any other data through Rekognition besides the SNS topic. It also seems that, in the second Lambda, I shouldn't be querying by a non-primary-key attribute such as the JobId.
Is there a way to set up the second Lambda function so that it is triggered by two (and only the correct two) SNS topics, such as one that sends the 'guid' and one that sends the Rekognition data?
Or would it be more efficient to use two DynamoDB tables, one to temporarily store the JobId and guid for later reference? Or is there a better way to do all of this?
Thanks!
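For illustration only, here is a rough Java sketch of the DynamoDB update that step 4 describes, i.e. attaching the Rekognition response to the existing item keyed by guid. One possible way to get the guid into the SNS-triggered Lambda is Rekognition's JobTag parameter on the start call, which is echoed back in the completion notification; that choice, the table name "videos", and the rekJson variable are all assumptions, not part of the original stack.

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.UpdateItemRequest;

// Hypothetical sketch of step 4: attach the Rekognition response to the existing item by guid.
// "videos" is a placeholder table name; rekJson is the serialized get_label_detection() response
// collected by the Rekognition-Collect Lambda.
public class RekognitionResultWriter {

    private final AmazonDynamoDB dynamoDb = AmazonDynamoDBClientBuilder.defaultClient();

    public void writeResult(String guid, String rekJson) {
        Map<String, AttributeValue> key = new HashMap<>();
        key.put("guid", new AttributeValue().withS(guid));

        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":d", new AttributeValue().withS(rekJson));

        // Update only the rek_data attribute; the rest of the item stays as written in step 1.
        dynamoDb.updateItem(new UpdateItemRequest()
                .withTableName("videos")
                .withKey(key)
                .withUpdateExpression("SET rek_data = :d")
                .withExpressionAttributeValues(values));
    }
}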

AWS DynamoDB read newly inserted record

I am new to AWS and its services, so for hands-on practice I prepared a DynamoDB use case: whenever a record is inserted into DynamoDB, that record should be moved to S3 for further processing. I wrote the code snippet below in Java using the KCL (Kinesis Client Library).
public static void main(String... args) {
    KinesisClientLibConfiguration workerConfig = createKCLConfiguration();
    StreamsRecordProcessorFactory recordProcessorFactory = new StreamsRecordProcessorFactory();
    System.out.println("Creating worker");
    Worker worker = createKCLCWorker(workerConfig, recordProcessorFactory);
    System.out.println("Starting worker");
    worker.run();
}
public class StreamsRecordProcessorFactory implements IRecordProcessorFactory {
    public IRecordProcessor createProcessor() {
        return new StreamRecordsProcessor();
    }
}
The processRecord method in the StreamRecordsProcessor class:
private void processRecord(Record record) {
    if (record instanceof RecordAdapter) {
        com.amazonaws.services.dynamodbv2.model.Record streamRecord =
                ((RecordAdapter) record).getInternalObject();
        // Only handle INSERT events and print the new image of the inserted item.
        if ("INSERT".equals(streamRecord.getEventName())) {
            Map<String, AttributeValue> attributes =
                    streamRecord.getDynamodb().getNewImage();
            System.out.println(attributes);
            System.out.println("New item name: " + attributes.get("name").getS());
        }
    }
}
From my local environment, I am able to see the record whenever records are added in DynamoDB, but I have a few questions:
How can I deploy this project to AWS?
What is the procedure, and what configuration is required on the AWS side?
Please share your thoughts.
You should be able to use AWS Lambda as the integration point: a Lambda function reads records from the DynamoDB stream and pushes them into a Kinesis Data Firehose delivery stream, which ultimately deposits them in S3. Here is an AWS blog article that can serve as a high-level guide for doing this. It describes the AWS components you can use to build this, and additional research on each component will help you put the pieces together.
Give that a try; if you get stuck anywhere, please add a comment and I'll respond in due time.
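To make that pipeline concrete, here is a minimal, hypothetical Java sketch of a Lambda handler triggered by the DynamoDB stream that forwards each newly inserted item to a Kinesis Data Firehose delivery stream bound for S3. The delivery stream name "my-delivery-stream" and the naive toString() serialization are assumptions for illustration only.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClientBuilder;
import com.amazonaws.services.kinesisfirehose.model.PutRecordRequest;
import com.amazonaws.services.kinesisfirehose.model.Record;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;

// Hypothetical sketch: triggered by the DynamoDB stream, forwards each INSERT's new image
// to a Firehose delivery stream ("my-delivery-stream" is a placeholder) that lands in S3.
public class StreamToFirehoseHandler implements RequestHandler<DynamodbEvent, Void> {

    private final AmazonKinesisFirehose firehose = AmazonKinesisFirehoseClientBuilder.defaultClient();

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbEvent.DynamodbStreamRecord record : event.getRecords()) {
            if (!"INSERT".equals(record.getEventName())) {
                continue; // only forward newly inserted items
            }
            // Serialize the new image; a real implementation would map AttributeValues to clean JSON.
            String payload = record.getDynamodb().getNewImage().toString() + "\n";
            firehose.putRecord(new PutRecordRequest()
                    .withDeliveryStreamName("my-delivery-stream")
                    .withRecord(new Record().withData(
                            ByteBuffer.wrap(payload.getBytes(StandardCharsets.UTF_8)))));
        }
        return null;
    }
}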

How to send data from AWS IoT to AWS DynamoDB v2 using IoT Rules

I want to send individual data values received from AWS IoT into their respective columns in AWS DynamoDB.
My devices send this payload:
{
  "state": {
    "desired": {
      "DeviceId": "Device101",
      "DateTime": now,
      "Room1 Temperature": m_t,
      "Room2 Temperature": b_t
    },
    "reported": {
      "Item": {
        "DeviceId": "Device101",
        "DateTime": now,
        "Room1 Temperature": m_t,
        "Room2 Temperature": b_t
      }
    }
  }
}
I am receiving this payload as a shadow update on this shadow topic:
$aws/things/shadow/update
I have created a sample DynamoDB table and linked it to an AWS IoT rule, so that whenever data arrives on the above-mentioned topic, the rule is triggered by this SQL query:
SELECT * FROM '$aws/things/shadow/update'
The data is reflected in my shadow update, but it is not forwarded to the DynamoDB table. What is the problem?
Any help would be appreciated. Thanks.
First, enabling CloudWatch Logs should help you debug this issue.
Generally, these types of silent failures indicate that you have not formatted your data properly for insertion into DynamoDB.
Things to check:
Your SELECT statement won't work, as it needs to pull in the content you want to insert. In your case, this would be either SELECT desired.* or SELECT reported.Item.*
The primary partition key must be one of the keys you pull in, or your DynamoDB insert will fail. What is your primary partition key? Make sure it is included in the keys selected by your SELECT statement.
Make sure the data type of the primary partition key matches the type you are passing in; for example, if your primary partition key is DeviceId, it should be a string type (not, for example, an integer).
CloudWatch Logs will provide much more detailed information to assist you. See http://docs.aws.amazon.com/iot/latest/developerguide/cloud-watch-logs.html for information on how to set this up.
You can also enable it from the IoT console by selecting Settings (the cog) in the left-hand navigation and updating the settings under Logging.
For me, this post finally led to a working result after many hours of misleading information:
https://forums.aws.amazon.com/thread.jspa?messageID=931485

AWS IoT to DynamoDB - returns encrypted MQTT messages

My IoT device produces the following data:
12-09-2017 12:05:26PeopleIn: 2
and this gets streamed to AWS IoT when I subscribe to the topic. When I export a message as CSV, I get the following:
format,payload,qos,timestamp,topic
string,12-09-2017 12:05:26PeopleIn: 2,0,1505190320098,TestSite
I am trying to configure AWS DynamoDB to store the above data in a table. In the AWS IoT rule settings I have provided the following:
Attribute = *
topicFilter = # (I have also tried with TestSite)
Selected Action = Insert a message into DynamoDB table
I created a DynamoDB table with the following settings:
Primary Partition Key = timestamp (Number)
I have not set any sort key. Upon setting up the rule, the data gets populated into DynamoDB. However, the timestamp is stored as a Unix timestamp and the data is stored in an encrypted format. The following is the output that I am getting in AWS DynamoDB:
timestamp data_raw
1505198126899 MTItMDktMjAxNyAxMjowNToyNlBlb3BsZUluOiAy
Not sure what I am missing. What I ideally want is the timestamp + payload. The desired output would be:
timestamp data
1505198126899 12-09-2017 12:05:26PeopleIn: 2
Can someone help?