AWS IoT - Dynamo Insert record failed

I am trying to update DynamoDB. I send JSON data from a Raspberry Pi or an MQTT client, but when I look at CloudWatch I see the error message below.
EVENT:DynamoActionFailure TOPICNAME:iotbutton/test CLIENTID:MQTT_FX_Client MESSAGE:Dynamo Insert record failed. The error received was Attribute name must not be null or empty. Message arrived on: iotbutton/test, Action: dynamo, Table: myTable_IoT, HashKeyField: SerialNumber, HashKeyValue: ABCDEFG12345, RangeKeyField: Some(ClickType), RangeKeyValue: SINGLE
I am following the AWS IoT tutorial (http://docs.aws.amazon.com/iot/latest/developerguide/iot-dg.pdf), specifically the section "Creating a DynamoDB Rule".
The data I send to the IoT platform is:
{
  "serialNumber" : "ABCDEFG12345",
  "clickType" : "SINGLE",
  "batteryVoltage" : "5v USB"
}
topic: iotbutton/ABCDEFG12345
Has anyone come across this error and found a solution?
Thanks, regards.

This is the message the CloudWatch logs showed when I tried doing this:
{
  "timestamp": "2019-01-28 21:26:16.363",
  "logLevel": "ERROR",
  "traceId": "9e3ff9b0-fcdf-d8ae-e8a8-4b7a24902405",
  "accountId": "xxx",
  "status": "Failure",
  "eventType": "RuleExecution",
  "clientId": "basicPubSub",
  "topicName": "xxx/r117",
  "ruleName": "devCompDynamoDB",
  "ruleAction": "DynamoAction",
  "resources": {
    "ItemRangeKeyValue": "SINGLE",
    "IsPayloadJSON": "true",
    "ItemHashKeyField": "SerialNumber",
    "Operation": "Insert",
    "ItemRangeKeyField": "ClickType",
    "Table": "TestIoTDataTable",
    "ItemHashKeyValue": "ABCDEFG12345"
  },
  "principalId": "xx",
  "details": "Attribute name must not be null or empty"
}
To fix it, I edited the DynamoDB rule in the AWS IoT web console and added a payload column name in the "Write message data to this column" field.
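For reference, a minimal sketch of how the corrected rule could be created programmatically (Node.js, AWS SDK v2); the key point is the non-empty payloadField. The rule name, role ARN, and SQL here are hypothetical placeholders, not values from the setup above:
var AWS = require('aws-sdk');
var iot = new AWS.Iot({ region: 'us-east-1' });

iot.createTopicRule({
    ruleName: 'myTableIoTRule', // hypothetical name
    topicRulePayload: {
        sql: "SELECT * FROM 'iotbutton/+'",
        actions: [{
            dynamoDB: {
                tableName: 'myTable_IoT',
                roleArn: 'arn:aws:iam::123456789012:role/iot-dynamo-role', // placeholder
                hashKeyField: 'SerialNumber',
                hashKeyValue: '${serialNumber}',
                rangeKeyField: 'ClickType',
                rangeKeyValue: '${clickType}',
                payloadField: 'payload' // the missing piece: must not be empty
            }
        }]
    }
}, function (err) {
    if (err) console.error(err);
});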

Related

Where is the GCP Cloud Scheduler HTTP body?

I am trying to set up a cron job on GCP Cloud Scheduler, using the HTTP target with the GET method.
I am trying to post messages to a Discord channel, but first I need to GET the body that my server's webhook sends back to me. The cron job runs successfully, but I cannot find the body of what the webhook returned anywhere in GCP Cloud Scheduler. I have checked the logs as well; they do not contain the body. Here is what the log has:
{
  "insertId": "a06j1cfzy21xe",
  "jsonPayload": {
    "targetType": "HTTP",
    "jobName": "projects/website-274422/locations/us-central1/jobs/discord_sec_bot",
    "url": "https://discordapp.com/api/webhooks/<redacted>/<redacted>",
    "#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
  },
  "httpRequest": {
    "status": 200
  },
  "resource": {
    "type": "cloud_scheduler_job",
    "labels": {
      "project_id": "website-274422",
      "job_id": "discord_sec_bot",
      "location": "us-central1"
    }
  },
  "timestamp": "2020-08-10T21:42:13.290867117Z",
  "severity": "INFO",
  "logName": "projects/website-274422/logs/cloudscheduler.googleapis.com%2Fexecutions",
  "receiveTimestamp": "2020-08-10T21:42:13.290867117Z"
}
Could anyone tell me where I could find what my GET request received?
Although it's not mentioned directly in the documentation, I don't think it's possible to see this. I am not sure what you want to do, but if you need to pass any information into the logs, you can use the response status. I did a quick test with my Cloud Function, which randomly returned a response status between 200 and 204.
For each job execution I get two different log entries. The second one contains the following field with the random status:
httpRequest: {
  status: 201
}
As far as I can tell, this is the only way to get anything returned by the endpoint into the logs, so you can use the status code to encode some information.
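For illustration, a minimal sketch of the kind of test function described above, assuming an HTTP-triggered Cloud Function in Node.js (the function name is hypothetical):
// Hedged sketch of the test described above: an HTTP Cloud Function that
// returns a random status between 200 and 204. Only this status code
// makes it into the Cloud Scheduler execution logs.
exports.discordSecBot = (req, res) => { // hypothetical function name
  const status = 200 + Math.floor(Math.random() * 5); // 200..204
  res.status(status).send('status: ' + status);
};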

HIVE_INVALID_METADATA in Amazon Athena

How can I work around the following error in Amazon Athena?
HIVE_INVALID_METADATA: com.facebook.presto.hive.DataCatalogException: Error: : expected at the position 8 of 'struct<x-amz-request-id:string,action:string,label:string,category:string,when:string>' but '-' is found. (Service: null; Status Code: 0; Error Code: null; Request ID: null)
When looking at position 8 in the database table, generated by AWS Glue and connected to Athena, I can see that it has a column named attributes with a corresponding struct data type:
struct <
  x-amz-request-id:string,
  action:string,
  label:string,
  category:string,
  when:string
>
My guess is that the error occurs because the attributes field is not always populated (cf. the _session.start event below) and does not always contain all fields (e.g. the DocumentHandling event below does not contain the attributes.x-amz-request-id field). What is the appropriate way to address this problem? Can I make a column optional in Glue? Can (should?) Glue fill the struct with empty strings? Other options?
Background: I have the following backend structure:
Amazon PinPoint Analytics collects metrics from my application.
The PinPoint event stream has been configured to forward the events to an Amazon Kinesis Firehose delivery stream.
Kinesis Firehose writes the data to S3.
AWS Glue crawls the S3 data.
Athena is used to write queries against the databases and tables generated by AWS Glue.
I can see PinPoint events successfully being added to JSON files in S3, e.g.
First event in a file:
{
  "event_type": "_session.start",
  "event_timestamp": 1524835188519,
  "arrival_timestamp": 1524835192884,
  "event_version": "3.1",
  "application": {
    "app_id": "[an app id]",
    "cognito_identity_pool_id": "[a pool id]",
    "sdk": {
      "name": "Mozilla",
      "version": "5.0"
    }
  },
  "client": {
    "client_id": "[a client id]",
    "cognito_id": "[a cognito id]"
  },
  "device": {
    "locale": {
      "code": "en_GB",
      "country": "GB",
      "language": "en"
    },
    "make": "generic web browser",
    "model": "Unknown",
    "platform": {
      "name": "macos",
      "version": "10.12.6"
    }
  },
  "session": {
    "session_id": "[a session id]",
    "start_timestamp": 1524835188519
  },
  "attributes": {},
  "client_context": {
    "custom": {
      "legacy_identifier": "50ebf77917c74f9590c0c0abbe5522d2"
    }
  },
  "awsAccountId": "672057540201"
}
Second event in the same file:
{
  "event_type": "DocumentHandling",
  "event_timestamp": 1524835194932,
  "arrival_timestamp": 1524835200692,
  "event_version": "3.1",
  "application": {
    "app_id": "[an app id]",
    "cognito_identity_pool_id": "[a pool id]",
    "sdk": {
      "name": "Mozilla",
      "version": "5.0"
    }
  },
  "client": {
    "client_id": "[a client id]",
    "cognito_id": "[a cognito id]"
  },
  "device": {
    "locale": {
      "code": "en_GB",
      "country": "GB",
      "language": "en"
    },
    "make": "generic web browser",
    "model": "Unknown",
    "platform": {
      "name": "macos",
      "version": "10.12.6"
    }
  },
  "session": {},
  "attributes": {
    "action": "Button-click",
    "label": "FavoriteStar",
    "category": "Navigation"
  },
  "metrics": {
    "details": 40.0
  },
  "client_context": {
    "custom": {
      "legacy_identifier": "50ebf77917c74f9590c0c0abbe5522d2"
    }
  },
  "awsAccountId": "[aws account id]"
}
Next, AWS Glue has generated a database and a table. Specifically, I see that there is a column named attributes that has the type
struct <
  x-amz-request-id:string,
  action:string,
  label:string,
  category:string,
  when:string
>
However, when I attempt to Preview table from Athena, i.e. execute the query
SELECT * FROM "pinpoint-test"."pinpoint_testfirehose" limit 10;
I get the error message described earlier.
As a side note, I have tried to remove the attributes field (by editing the database table in Glue), but that results in an internal error when executing the SQL query from Athena.
This is a known limitation:
"Athena table and database names cannot contain special characters, other than underscore (_)."
Source: http://docs.aws.amazon.com/athena/latest/ug/known-limitations.html
Use backticks (`) when the table name contains a hyphen (-).
Example:
SELECT * FROM `pinpoint-test`.`pinpoint_testfirehose` limit 10;
Make sure you select the "default" database in the left pane.
I believe the problem is your struct element name, x-amz-request-id: specifically, the "-" characters in the name.
I'm currently dealing with a similar issue, since the elements in my struct have "::" in their names.
Sample data:
some_key: {
  "system::date": date,
  "system::nps_rating": 0
}
Glue derived the following struct schema (it tried to escape the colons with backslashes):
struct <
  system\:\:date:String,
  system\:\:nps_rating:Int
>
But that still gives me an error in Athena.
I don't have a good solution for this other than changing the struct to STRING and trying to process the data that way.
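To illustrate that workaround, here is a hedged sketch of an Athena query, assuming the attributes column has been redefined as a plain string in the Glue table (table names are taken from the question; the redefinition itself is an assumption):
-- Sketch only: assumes "attributes" was re-declared as string in Glue,
-- so Presto's JSON functions can pull individual fields out of the raw text.
SELECT
  json_extract_scalar(attributes, '$.action') AS action,
  json_extract_scalar(attributes, '$["x-amz-request-id"]') AS request_id
FROM `pinpoint-test`.`pinpoint_testfirehose`
LIMIT 10;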

How to search by nested property in AWS IoT Rules using AWS IoT query language

I'm trying to get AWS IoT rules to trigger my actions, but the documentation is pretty poor. For some reason the documentation assumes the JSON payloads will only be nested one level deep.
Example of my JSON payload:
"state": {
"reported": {
"movement": "yes"
}
}
}
The query I'm using inside the rule:
SELECT * FROM '$aws/things/thing-name/shadow/update/accepted' WHERE state.reported.movement="yes"
The documentation I'm using (http://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-where.html) only covers flat JSON objects. I tried state.reported.movement, reported.movement, and just movement, and it looks like none of them work.
Okay, the answer to my question is: AWS IoT supports nested properties in WHERE statements. For some reason it didn't work when I first created the rule; maybe the SNS topic delivered the messages with some delay.
The state of my thing, arriving on the AWS IoT accepted topic, looks like this:
{
  "state": {
    "reported": {
      "system": "armed",
      "movement": "yes"
    }
  },
  "metadata": {
    "reported": {
      "system": {
        "timestamp": 1509207282
      },
      "movement": {
        "timestamp": 1509207282
      }
    }
  },
  "version": 618,
  "timestamp": 1509207282,
  "clientToken": "xxxxxxxx"
}
And to query that JSON I'm using the following query:
SELECT * FROM '$aws/things/myThing/shadow/update/accepted' WHERE state.reported.movement="yes" and state.reported.system="armed"

How to send CloudWatch log details via email?

In brief, I am trying to send CloudTrail logs to a CloudWatch log group, scan them for certain events, and finally send email alerts when there is a concerning event.
I am following this official documentation, which also has sample CloudFormation templates: http://docs.aws.amazon.com/awscloudtrail/latest/userguide/use-cloudformation-template-to-create-cloudwatch-alarms.html
Using the CloudFormation templates above, I have been able to send the email alerts. However, the alerts are very basic; they do not include key information such as which user initiated the event, when it occurred, etc.
Logically, AWS::Logs::MetricFilter should pass the value to AWS::CloudWatch::Alarm, which would then send the information. I have looked at the documentation for both the MetricFilter and Alarm resources. Dimensions come closest to what I want, but I am not yet able to read the information from the logs.
I would have thought this is a common use case and there would be documentation. Am I missing something glaringly obvious here? Has anyone here solved this issue?
AWS::Logs::MetricFilter block:
"AuthorizationFailuresMetricFilter": {
"Type": "AWS::Logs::MetricFilter",
"Properties": {
"LogGroupName": { "Ref" : "LogGroupName" },
"FilterPattern": "{ ($.errorCode = \"*UnauthorizedOperation\") || ($.errorCode = \"AccessDenied*\") }",
"MetricTransformations": [
{
"MetricNamespace": "CloudTrailMetrics",
"MetricName": "AuthorizationFailureCount",
"MetricValue": "1"
}
]
}
},
AWS::CloudWatch::Alarm block:
"AuthorizationFailuresAlarm": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmName" : "CloudTrailAuthorizationFailures",
"AlarmDescription" : "Alarms when an unauthorized API call is made.",
"AlarmActions" : [{ "Ref" : "AlarmNotificationTopic" }],
"Dimensions": [
{
"Name": "errorCode",
"Value": ""
},
{
"Name": "userIdentity",
"Value": ""
}
],
"MetricName" : "AuthorizationFailureCount",
"Namespace" : "CloudTrailMetrics",
"ComparisonOperator" : "GreaterThanOrEqualToThreshold",
"EvaluationPeriods" : "1",
"Period" : "300",
"Statistic" : "Sum",
"Threshold" : "1"
}
},
This is not possible.
Amazon CloudWatch Logs will accept information from AWS CloudTrail and, upon finding messages that match a pre-defined filter, will increment a metric count.
An Amazon CloudWatch alarm can then be triggered when the metric exceeds a certain threshold. However, there is no direct connection between the incoming data that generated the metrics and the alarm that triggers based upon the threshold.
Think of it like a turnstile counting people who enter a subway. The turnstile counts the number of people, but does not retain information about the people who passed through. In the same way, the CloudWatch alarm counts the events but does not have any information about the events that were counted.

Alexa Skill ARN - The remote endpoint could not be called, or the response it returned was invalid

I've created a simple Lambda function to call a webpage. This works fine when I test it from the Lambda console's function page; however, when I try to create a skill to call this function, I end up with a "The remote endpoint could not be called, or the response it returned was invalid." error.
Lambda Function
var http = require('http');
exports.handler = function(event, context) {
    console.log('start request to ' + event.url);
    http.get(event.url, function(res) {
        console.log("Got response: " + res.statusCode);
        context.succeed();
    }).on('error', function(e) {
        console.log("Got error: " + e.message);
        context.done(null, 'FAILURE');
    });
    console.log('end request to ' + event.url);
};
The Test Event code looks like this:
{
  "url": "http://mywebsite.co.uk"
}
and I've added a trigger for the "Alexa Skills Kit".
The ARN for this function is showing as:
arn:aws:lambda:us-east-1:052516835015:function:CustomFunction
Alexa Skill (Developer Portal)
I've then created a skill with a simple Intent:
{
  "intents": [
    {
      "intent": "CustomFunction"
    }
  ]
}
and created an Utterance as:
CustomFunction execute my custom function
In the Configuration section for my skill I have selected the "AWS Lambda ARN (Amazon Resource Name)" option and entered the ARN into the box for North America.
In the Test -> Service Simulator section, I've added "execute my custom function" as the Text and this changes the Lambda Request to show:
{
  "session": {
    "sessionId": "SessionId.a3e8aee0-acae-4de5-85df-XXXXXXXXX",
    "application": {
      "applicationId": "amzn1.ask.skill.XXXXXXXXX"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.XXXXXXXXX"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.445267bd-2b4a-45ef-8566-XXXXXXXXX",
    "locale": "en-GB",
    "timestamp": "2016-11-27T22:54:07Z",
    "intent": {
      "name": "RunWOL",
      "slots": {}
    }
  },
  "version": "1.0"
}
but when I run the test I get the following error:
The remote endpoint could not be called, or the response it returned was invalid.
Does anyone have any ideas on why the skill can't connect to the function?
Thanks
The Service Simulator built into the Amazon Alexa Developer Console has known issues. Try copying the JSON generated by the Simulator and pasting it into your Lambda function's test event. To access Lambda's test events, first find the blue Test button; next to that button, open the Actions drop-down menu -> Configure Test Event -> paste the provided JSON into the code area -> Save and Test. Lambda's built-in testing features are much more reliable than Alexa's.
If this does not solve the problem, Lambda's test event returns a complete stack trace and error codes. It becomes much easier to troubleshoot when every error isn't "The remote endpoint could not be called, or the response it returned was invalid."
{
  "session": {
    "sessionId": "SessionId.a3e8aee0-acae-4de5-85df-XXXXXXXXX",
    "application": {
      "applicationId": "amzn1.ask.skill.XXXXXXXXX"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.XXXXXXXXX"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.445267bd-2b4a-45ef-8566-XXXXXXXXX",
    "locale": "en-GB",
    "timestamp": "2016-11-27T22:54:07Z",
    "intent": {
      "name": "RunWOL",
      "slots": {}
    }
  },
  "version": "1.0"
}
While uploading the .zip, do not compress the folder itself into the .zip.
Instead, go into the folder, select package.json, index.js, and node_modules, compress those together, and then upload the resulting .zip.
This error message is very broad and may imply a lot of different issues. I was getting this error, and in my case it was a timeout issue. How long does the website you are pinging take to respond? The timeout doesn't seem to be properly documented; see my original question here: Troubleshooting Amazon's Alexa Skill Kit (ASK) Lambda interaction
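If a slow endpoint turns out to be the cause, one possible mitigation (a sketch against the question's handler, not a confirmed fix) is to cap the outbound request so the function fails fast instead of letting Alexa give up first:
var http = require('http');

exports.handler = function(event, context) {
    // Sketch only: abort the outbound call after 3 seconds so the skill
    // responds before Alexa's own (undocumented) timeout is reached.
    var req = http.get(event.url, function(res) {
        console.log("Got response: " + res.statusCode);
        context.succeed();
    });
    req.setTimeout(3000, function() {
        req.abort(); // surfaces as an 'error' event below
    });
    req.on('error', function(e) {
        console.log("Got error: " + e.message);
        context.done(null, 'FAILURE');
    });
};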