I am working with AWS IoT. I have created a Thing and I use MQTT to view the updates reported by the Thing Shadow on this topic:
$aws/things/thing_name/shadow/update
This is a sample result:
{"state": {
"desired": null,
"reported": {
"ext_addr": "0x124b0013a4c55d",
"last_reported": "22:20:35 2018-10-30",
"objects": {
"temperature": {
"0": {
"oid": "temperature",
"sensorValue": 33,
"units": "Cels",
"minMeaValue": 33,
"maxMeaValue": 33
}
}
}
}
I want to store "last_reported" and "objects" in separate columns in DynamoDB, using a Rule to invoke a Lambda function. However, I am stuck at the programming step of the Lambda function.
The table should have items like:
sensor_id = ${topic(3)}
last_reported = SELECT state.reported.last_reported FROM '$aws/things/thing_name/shadow/update'
data = SELECT state.reported.objects FROM '$aws/things/thing_name/shadow/update'
Thanks in advance.
Although you can use a Lambda rule to store IoT data in DynamoDB, AWS IoT includes a direct-to-DynamoDB rule action, which I think will be much easier for you if you do not have much programming experience.
Create a DynamoDBv2 action with the following definition:
{
  "rule": {
    "ruleDisabled": false,
    "sql": "SELECT state.reported, topic(3) AS sensor_id FROM '$aws/things/+/shadow/update/accepted'",
    "description": "A test DynamoDBv2 rule",
    "actions": [{
      "dynamoDBv2": {
        "roleArn": "arn:aws:iam::123456789012:role/aws_iot_dynamoDBv2",
        "putItem": {
          "tableName": "my_ddb_table"
        }
      }
    }]
  }
}
Where:
arn:aws:iam::123456789012:role/aws_iot_dynamoDBv2 is an IAM role that is allowed to put items into your DynamoDB table, and
my_ddb_table is the name of the table to save the data to.
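If you do want the Lambda route the question asks about, here is a minimal Node.js sketch, not a definitive implementation. It assumes a rule such as SELECT state.reported.*, topic(3) AS sensor_id FROM '$aws/things/+/shadow/update/accepted' invoking the function, a hypothetical table named sensor_data with a sensor_id partition key, and a runtime that still bundles the aws-sdk v2 client:
// Sketch only: the table and attribute names below are assumptions, not from the question.
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  // event carries the fields selected by the rule SQL,
  // e.g. { sensor_id, last_reported, objects, ext_addr }.
  await ddb.put({
    TableName: 'sensor_data',
    Item: {
      sensor_id: event.sensor_id,          // topic(3)
      last_reported: event.last_reported,  // state.reported.last_reported
      data: event.objects                  // state.reported.objects, stored as a map
    }
  }).promise();
};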
Here is my DynamoDB structure.
{"books": [
{
"name": "Hello World 1",
"id": "1234"
},
{
"name": "Hello World 2",
"id": "5678"
}
]}
I want to set a ConditionExpression to check whether the id already exists before adding new items to the books array. Here is my ConditionExpression (I am using API Gateway to access DynamoDB):
"ConditionExpression": "NOT contains(#lu.books.id,:id)",
"ExpressionAttributeValues": {":id": {
"S": "$input.path('$.id')"
}
}
Result when I test the API: whether or not the id already exists, the items are added to the array.
Any suggestion on how to do it? Thanks!
Unfortunately, you can't. However, there is a workaround.
Store the books in separate rows. For example
PK SK
BOOK_LU#<ID> BOOK_NAME#<book name>#BOOK_ID#<BOOK_ID>
Now you can use the attribute_not_exists condition function so the PutItem is rejected when that book is already present:
"ConditionExpression": "attribute_not_exists(PK)"
(attribute_not_exists takes an attribute name, so no ExpressionAttributeValues entry is needed for it.)
The con is that if you were previously fetching the list as part of another object, you will have to change that.
The pro is that you can now work with the books easily, and you won't hit the maximum item size limit if the list of books grows large.
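For illustration, here is a minimal Node.js sketch of that pattern (assuming the aws-sdk v2 DocumentClient and a hypothetical table name my_table; this is a plain SDK call, not the API Gateway mapping template itself):
const AWS = require('aws-sdk');
const ddb = new AWS.DynamoDB.DocumentClient();

// One row per book; the put is rejected if an item with this exact key already exists.
async function addBook(book) {
  await ddb.put({
    TableName: 'my_table', // hypothetical table name
    Item: {
      PK: `BOOK_LU#${book.id}`,
      SK: `BOOK_NAME#${book.name}#BOOK_ID#${book.id}`,
      name: book.name,
      id: book.id
    },
    // Throws ConditionalCheckFailedException when the key is already present.
    ConditionExpression: 'attribute_not_exists(PK)'
  }).promise();
}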
I am trying to write a "BatchPutItem" custom resolver so that I can create multiple items (not more than 25 at a time); it should accept a list of arguments and then perform a batch operation.
Here is the code I have in the custom resolver:
#set($pdata = [])
#foreach($item in ${ctx.args.input})
$util.qr($item.put("createdAt", $util.time.nowISO8601()))
$util.qr($item.put("updatedAt", $util.time.nowISO8601()))
$util.qr($item.put("__typename", "UserNF"))
$util.qr($item.put("id", $util.defaultIfNullOrBlank($item.id, $util.autoId())))
$util.qr($pdata.add($util.dynamodb.toMapValues($item)))
#end
{
"version" : "2018-05-29",
"operation" : "BatchPutItem",
"tables" : {
"Usertable1-staging": $utils.toJson($pdata)
}
}
Response in the query console section:
{
  "data": {
    "createBatchUNF": null
  },
  "errors": [
    {
      "path": [
        "createBatchUserNewsFeed"
      ],
      "data": null,
      "errorType": "MappingTemplate",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "Unsupported operation 'BatchPutItem'. Datasource Versioning only supports the following operations (TransactGetItems,PutItem,BatchGetItem,Scan,Query,GetItem,DeleteItem,UpdateItem,Sync)"
    }
  ]
}
And the query is:
mutation MyMutation {
  createBatchUNF(input: [{seen: false, userNFUserId: "userID", userNFPId: "pID", userNFPOwnerId: "ownerID"}]) {
    items {
      id
      seen
    }
  }
}
Conflict detection is also turned off.
When I check the CloudWatch logs I see the same error.
I was able to solve this issue by disabling DataStore for the entire API.
When we create a new project/backend app using the Amplify console, the Data tab asks us to enable DataStore so that we can start modelling our GraphQL API. That is what enables versioning for the entire API, and versioning prevents batch writes from executing.
So in order to solve this issue, we can follow these steps:
[Note: I am using @aws-amplify/cli]
$ amplify update api
? Please select from one of the below mentioned services: GraphQL
? Select from the options below: Disable DataStore for entire API
And then run amplify push.
This will fix the issue.
I have two tables, cuts and holds. In response to an event in my application I want to be able to move the values from a hold entry with a given id across to the cuts table. The naïve way is to do an INSERT and then a DELETE.
How do I run multiple SQL statements in an AppSync resolver to achieve that result? I have tried the following (replacing sql with statements and turning it into an array) without success.
{
  "version" : "2017-02-28",
  "operation": "Invoke",
  #set($id = $util.autoId())
  "payload": {
    "statements": [
      "INSERT INTO cuts (id, rollId, length, reason, notes, orderId) SELECT '$id', rollId, length, reason, notes, orderId FROM holds WHERE id=:ID",
      "DELETE FROM holds WHERE id=:ID"
    ],
    "variableMapping": {
      ":ID": "$context.arguments.id"
    },
    "responseSQL": "SELECT * FROM cuts WHERE id = '$id'"
  }
}
If you're using the "AWS AppSync Using Amazon Aurora as a Data Source via AWS Lambda" sample found here, https://github.com/aws-samples/aws-appsync-rds-aurora-sample, you won't be able to send multiple statements in the sql field.
If you are using the AWS AppSync integration with the Aurora Serverless Data API, you can pass up to 2 statements in a statements array such as in the example below:
{
"version": "2018-05-29",
"statements": [
"select * from Pets WHERE id='$ctx.args.input.id'",
"delete from Pets WHERE id='$ctx.args.input.id'"
]
}
If you're using the "AWS AppSync Using Amazon Aurora as a Data Source via AWS Lambda" sample found here, https://github.com/aws-samples/aws-appsync-rds-aurora-sample, you will be able to do it as follows.
In the resolver, add extra fields such as "sql0" and "sql1" alongside the existing "sql" field (you can name them whatever you want):
{
  "version" : "2017-02-28",
  "operation": "Invoke",
  #set($id = $util.autoId())
  "payload": {
    "sql": "INSERT INTO cuts (id, rollId, length, reason, notes, orderId) SELECT '$id', rollId, length, reason, notes, orderId FROM holds WHERE id=:ID",
    "sql0": "DELETE FROM holds WHERE id=:ID",
    "variableMapping": {
      ":ID": "$context.arguments.id"
    },
    "responseSQL": "SELECT * FROM cuts WHERE id = '$id'"
  }
}
In your lambda, add the following piece of code:
if (event.sql0) {
    const inputSQL0 = populateAndSanitizeSQL(event.sql0, event.variableMapping, connection);
    await executeSQL(connection, inputSQL0);
}
if (event.sql1) {
    const inputSQL1 = populateAndSanitizeSQL(event.sql1, event.variableMapping, connection);
    await executeSQL(connection, inputSQL1);
}
With this approach, you can send as many SQL statements to your Lambda as you want, and the Lambda will execute them.
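As a rough sketch (my own generalization, not part of the sample), the numbered fields could be replaced with an array of statements executed in a loop, reusing the sample's populateAndSanitizeSQL and executeSQL helpers:
// Hypothetical generalization: run any number of statements passed as event.statements.
if (Array.isArray(event.statements)) {
    for (const statement of event.statements) {
        const sql = populateAndSanitizeSQL(statement, event.variableMapping, connection);
        await executeSQL(connection, sql);
    }
}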
I have a bunch of IoT devices (ESP32) which publish a JSON object to things/THING_NAME/log for general debugging (to be extended to other topics with values in the future).
Here is the IoT rule, which kind of works:
{
  "sql": "SELECT *, parse_time(\"yyyy-mm-dd'T'hh:mm:ss\", timestamp()) AS timestamp, topic(2) AS deviceId FROM 'things/+/stdout'",
  "ruleDisabled": false,
  "awsIotSqlVersion": "2016-03-23",
  "actions": [
    {
      "elasticsearch": {
        "roleArn": "arn:aws:iam::xxx:role/iot-es-action-role",
        "endpoint": "https://xxxx.eu-west-1.es.amazonaws.com",
        "index": "devices",
        "type": "device",
        "id": "${newuuid()}"
      }
    }
  ]
}
I'm not sure how to set @timestamp inside Elasticsearch to allow time-based searches.
Maybe I'm going about this all wrong, but it almost works!
Elasticsearch can recognize date strings matching dynamic_date_formats.
The following format is automatically mapped as a date field in AWS Elasticsearch 7.1:
SELECT *, parse_time("yyyy/MM/dd HH:mm:ss", timestamp()) AS timestamp FROM 'events/job/#'
This approach does not require creating a preconfigured index, which is important for dynamically created indices, e.g. with daily rotation for logs:
devices-${parse_time("yyyy.MM.dd", timestamp(), "UTC")}
(this expression would go into the elasticsearch action's "index" field in place of the static "devices" value).
According to the elastic.co documentation, the default value for dynamic_date_formats is:
[ "strict_date_optional_time","yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"]
@timestamp is just a convention, as the @ prefix is the default prefix for Logstash-generated fields. Because you are not using Logstash as a middleman between IoT and Elasticsearch, you don't have a default mapping for @timestamp.
But basically, it is just a name, so call it what you want; the only thing that matters is that you declare it as a date field in the mappings section of the Elasticsearch index.
If for some reason you still need it to be called @timestamp, you can either SELECT it with that name right away in the AS clause (there might be an issue with IoT's SQL restrictions, I'm not sure):
SELECT *, parse_time("yyyy-mm-dd'T'hh:mm:ss", timestamp()) AS @timestamp, topic(2) AS deviceId FROM 'things/+/stdout'
Or you can use the copy_to functionality when declaring your mapping:
PUT devices
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "copy_to": "@timestamp"
      },
      "@timestamp": {
        "type": "date"
      }
    }
  }
}
I would like to ask how to correctly put data from S3 into an ES domain. I've created and configured a new ES domain, a bucket, and the Lambda function (from this example: https://github.com/awslabs/amazon-elasticsearch-lambda-samples). All of them are created in the same region.
To test it, I placed a new JSON file in my bucket; the Lambda function detected it and showed results like:
{
  "Records": [
    {
      "bucket": {
        "name": "test",
        "...."
      },
      "object": {
        "key": "test.json",
        "size": 22,
        "eTag": "",
        "sequencer": ""
      },
      ....
    }
  ]
}
2016-04-08T07:34:xxxxxxx 0 All 26 log records added to ES.
After that, I tried to search for something in ES, but it doesn't show me any new indices; I've checked this via the URL:
https://search-xxxx.us-west-2.es.amazonaws.com/_aliases
What am I doing wrong?
Cheers :)
The index must be created beforehand.
The Lambda function from the code you reference, https://github.com/awslabs/amazon-elasticsearch-lambda-samples/blob/master/src/s3_lambda_es.js, points to an existing index:
/* Globals */
var esDomain = {
endpoint: 'my-search-endpoint.amazonaws.com',
region: 'my-region',
index: 'logs',
doctype: 'apache'
};
Whatever values you put there must match an index that exists in your domain.
If you left the index as logs, you must create the logs index in Elasticsearch first:
curl -XPUT 'https://search-xxxx.us-west-2.es.amazonaws.com/logs'
Just make sure the index name is aligned between what you create and what your Lambda function uses.