Lambda function not working upon Alexa Skill invocation - amazon-web-services

I've just created my first (custom) skill. I've set the function up in Lambda by uploading a zip file containing my index.js and all the necessary code, including node_modules and the base Alexa skill that mine is a child of (as per the tutorials). I made sure I zipped up the files and sub-folders, not the folder itself (as I can see this is a common cause of similar errors), but when I create the skill and test it in the web harness with a sample utterance I get:
remote endpoint could not be called, or the response it returned was
invalid.
I'm not sure how to debug this as there's nothing logged in CloudWatch.
I can see in the Lambda request that my slot value is translated/parsed successfully and the intent name is correct.
In AWS Lambda I can invoke the function successfully both with a LaunchRequest and another named intent. From the developer console though, I get nothing.
I've tried copying the JSON from the lambda test (that works) to the developer portal and I get the same error. Here is a sample of the JSON I'm putting in the dev portal (that works in Lambda)
{
  "session": {
    "new": true,
    "sessionId": "session1234",
    "attributes": {},
    "user": {
      "userId": null
    },
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.149e75a3-9a64-4224-8bcq-30666e8fd464"
    }
  },
  "version": "1.0",
  "request": {
    "type": "LaunchRequest",
    "requestId": "request5678"
  }
}

The first step in pursuing this problem is probably to test your Lambda separately from your skill configuration.
When looking at your Lambda function in the AWS console, note the 'Test' button at the top; next to it there is a drop-down with an option to configure a test event. If you select it you will find preset test events for Alexa. Choose 'Alexa Start Session', then the 'Save and test' button.
This will give you more detailed feedback about the execution of your lambda.
If your Lambda works fine here then the problem probably lies in your skill configuration, so I would go back through whatever tutorial and documentation you were using to configure your skill and make sure you did it right.
When you write that the Lambda request looks fine, I assume you are talking about the service simulator, so that's a good start, but there could still be a problem on the configuration tab.

We built a tool for local skill development and testing.
BST Tools
Requests and responses from Alexa will be sent directly to your local server, so that you can quickly code and debug without having to do any deployments. I have found this to be very useful for our own development.
Let me know if you have any questions.
It's open source: https://github.com/bespoken/bst

Related

Amazon EventBridge won't send events via 2 rules nor 2 targets within one rule

Within Amazon Eventbridge, I'm listening for Transcribe events such as the following:
{
  "source": ["aws.transcribe"],
  "detail-type": ["Transcribe Job State Change"],
  "detail": {
    "TranscriptionJobStatus": ["FAILED", "COMPLETED"]
  }
}
I need to send this event to development (via Ngrok), to staging, and to production, each time with a query parameter indicating which environment triggered the transcription.
Having worked on this simple use case for a full day, it simply seems bugged:
The first rule, target, and connection that I set up works fine.
Adding additional targets to this rule does not work.
Brand-new rules I add to receive and handle the events do not work.
After deleting everything and rebuilding, again only the first rule, target, and connection works (even if it points to a different environment).
So for example, I've had dev but not staging working, ripped it all down, and then rebuilt, and ended up with staging but not dev working.
What on earth is going on?
We fixed it after considerable trial and error.
It's not clear what the root cause is. Likely issues were:
When a role is created for your connections the first time, it is created with the correct permissions; roles created afterwards may not be. Check all IAM permissions with a fine-tooth comb.
AWS events only fire when the triggering system and the rules are in the same region (e.g. us-east-1).
To help debug, export the CloudFormation templates for the setups that work and those that don't, and diff them for differences.
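To rule out a mistake in the event pattern itself before tearing rules down again, it can help to sanity-check the pattern locally. A hedged sketch of a matcher that only handles the exact-match array form used in the pattern above (EventBridge's full pattern language has many more operators, which this does not implement):

```javascript
// Minimal local check: does an event satisfy a pattern where each leaf is
// an array of allowed exact values? Recurses into nested objects ("detail").
function matchesPattern(event, pattern) {
  return Object.keys(pattern).every((key) => {
    const expected = pattern[key];
    const actual = event[key];
    if (Array.isArray(expected)) {
      return expected.includes(actual); // value must equal one listed option
    }
    return typeof actual === 'object' && actual !== null &&
           matchesPattern(actual, expected);
  });
}

const pattern = {
  source: ['aws.transcribe'],
  'detail-type': ['Transcribe Job State Change'],
  detail: { TranscriptionJobStatus: ['FAILED', 'COMPLETED'] }
};

const sampleEvent = {
  source: 'aws.transcribe',
  'detail-type': 'Transcribe Job State Change',
  detail: { TranscriptionJobStatus: 'COMPLETED' }
};

console.log(matchesPattern(sampleEvent, pattern)); // true
```

If a real event captured from CloudWatch fails this check, the pattern is the problem; if it passes but the rule still doesn't fire, look at targets, IAM roles, and regions instead.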
The rest of this answer is simply going to be advice to anyone considering using AWS EventBridge: run.

Debugging "read time out" for AWS lambda function in Alexa Skill

I am using an AWS lambda function to serve my NodeJS codebase for an Alexa Skill.
The skill makes external API calls to a custom API as well as the Amazon GameOn API, and it also uses URLs that serve audio files and images from an S3 bucket.
The issue I am having is intermittent, and is affecting about 20% of users. At random points of the skill, the user request will produce an invalid response from the skill, with the following error:
{
  "Request": {
    "type": "System.ExceptionEncountered",
    "requestId": "amzn1.echo-api.request.ab35c3f1-b8e6-4478-945c-16f644359556",
    "timestamp": "2020-05-16T19:54:24Z",
    "locale": "en-US",
    "error": {
      "type": "INVALID_RESPONSE",
      "message": "Read timed out for requestId amzn1.echo-api.request.323b1fbb-b4e8-4cdf-8f31-30c9b67e4a5d"
    },
    "cause": {
      "requestId": "amzn1.echo-api.request.323b1fbb-b4e8-4cdf-8f31-30c9b67e4a5d"
    }
  },
I have looked up this issue, I believe it's something wrong with the lambda function configuration but can't figure out where!
I've tried increasing the Memory the function uses (now 256MB).
It should be noted that the function timeout is 8000ms, since this is the max time you are allowed for an Alexa response.
What causes this Read timeout issue, and what measures can I take to debug and resolve it?
Take a look at AWS X-Ray. By using it with your Lambda you should be able to identify the source of these timeouts.
This link should help you understand how to apply it.
We found that this was occurring when the skill was trying to access a resource stored on our Azure website.
The CPU and memory allocation for the Azure site was too low, and it would fail when facing a large number of requests.
To fix it, we upgraded the plan the App Service was running on.
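A general mitigation for this class of failure: give every external call its own deadline well under Alexa's 8-second limit, so a slow upstream produces a controlled fallback instead of a read timeout. A sketch, where fetchGameState is a hypothetical stand-in for any of the skill's external calls:

```javascript
// Race a promise against a deadline; whichever settles first wins.
function withTimeout(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('upstream timed out')), ms);
  });
  return Promise.race([promise, deadline])
    .finally(() => clearTimeout(timer)); // don't leave a timer keeping the Lambda alive
}

// Cap the external call at 3 seconds and fall back gracefully.
async function fetchGameStateSafely(fetchGameState) {
  try {
    return await withTimeout(fetchGameState(), 3000);
  } catch (err) {
    console.log('External call failed:', err.message); // visible in CloudWatch
    return null; // caller can respond with a retry prompt instead of timing out
  }
}
```

With this in place, a struggling upstream shows up as a logged error and a "please try again" response, which is much easier to diagnose than an INVALID_RESPONSE from the Alexa service.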

Is there a way to write a Git check status back to GitHub when there is a failure in AWS CodePipeline

I am setting up AWS CodePipeline based on the developer's Git branch.
When the developer commits code, the pipeline is triggered via a webhook. The idea is that when the pipeline fails and the developer then opens a pull request, the reviewer should know that this is a bad branch: he should be able to see the Git status of the branch showing that there is a failure.
Earlier I used a build tool called Codeship, which has a GitHub app that does this. I have gone through the GitHub API
https://developer.github.com/v3/checks/runs/#create-a-check-run
but I'm not sure where to start.
To send a notification when a stage fails, follow these steps:
Based on the CloudWatch events emitted by CodePipeline [0], trigger a Lambda function [1].
The Lambda function can call the "list-pipeline-executions" API [2] and from that fetch all the required values, like the commit ID, status message, etc. [3].
Once the values are fetched, you can publish them to SNS from the same Lambda. The following posts show how to publish to SNS from a Lambda [4][5].
[0] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-cloudwatch-sns-notifications.html
[1] https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
[2] https://docs.aws.amazon.com/cli/latest/reference/codepipeline/list-pipeline-executions.html
[3] https://docs.aws.amazon.com/cli/latest/reference/codepipeline/index.html
[4] https://gist.github.com/jeremypruitt/ab70d78b815eae84e037
[5] Can you publish a message to an SNS topic using an AWS Lambda function backed by node.js?
I have done the following to write the status back to the Git repo:
I made use of the GitHub status API:
https://developer.github.com/v3/repos/statuses/
and wrote a Lambda function to do a POST request with these details:
state, target_url, description, context
{
  "state": "success",
  "target_url": "https://example.com/build/status",
  "description": "The build succeeded!",
  "context": "continuous-integration/jenkins"
}
(target_url is the build tool URL, e.g. CodeBuild or Jenkins.)
The response should contain
"url": "https://api.github.com/repos//-/statuses/6dcb09b5b57875f334f61aebed695e2e4193db5e",
All these details can be obtained from the CloudWatch event for the pipeline, using the event detail fields:
event.detail.state, event.detail.pipeline, event.region, event['detail']['execution-id'],
data.pipelineExecution.artifactRevisions[0].revisionUrl
Cheers.
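The translation step from CloudWatch event to status payload can be sketched like this. Field names follow CodePipeline's pipeline-execution state-change event; the target URL format is an assumption for illustration, and the actual authenticated POST to api.github.com is omitted:

```javascript
// Build the GitHub status API request body from a CodePipeline
// CloudWatch event. Posting it (https.request with a token) is left out.
function buildGithubStatus(event) {
  // CodePipeline execution states mapped to GitHub commit-status states.
  const stateMap = {
    STARTED: 'pending',
    SUCCEEDED: 'success',
    FAILED: 'failure'
  };
  return {
    state: stateMap[event.detail.state] || 'error',
    // Assumed console deep-link format, for illustration only.
    target_url: 'https://' + event.region +
      '.console.aws.amazon.com/codesuite/codepipeline/pipelines/' +
      event.detail.pipeline + '/executions/' + event.detail['execution-id'],
    description: 'CodePipeline: ' + event.detail.pipeline + ' ' + event.detail.state,
    context: 'continuous-integration/codepipeline'
  };
}

const sampleEvent = {
  region: 'us-east-1',
  detail: { pipeline: 'my-pipeline', state: 'FAILED', 'execution-id': 'abc-123' }
};

console.log(buildGithubStatus(sampleEvent).state); // "failure"
```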

Cognito User Migration Trigger - Exception during user migration - Exception Location

We're using a Lambda function to respond to the 'User Migration' trigger in AWS Cognito. When something like a syntax error occurs, you can see it in the CloudWatch logs. However, the "Exception during user migration" errors seen on the login page are nowhere to be found in the CloudWatch logs.
Where are we supposed to look for these? I can't find anything in the documentation and assumed they would go to CloudWatch.
I can't test it in the Lambda interface because one of the parameters being passed into the Lambda function will have a function nested within the object, and I can't create a test JSON setup that has that. There's also no pre-built test trigger for user migration.
Any ideas as to why I can't see this in CloudWatch, or where the exceptions would be shown, would be greatly appreciated.
Unfortunately Cognito doesn't expose any logs (or metrics, for that matter!).
The closest you can get is to view the Lambda's logs in CloudWatch. If you log your response and watch your Lambda's error metric, you should mostly be able to debug issues internal to the Lambda.
This does leave a few edge cases:
You won't see anything if the Lambda can't be invoked (this would only happen under heavy concurrent load, either on that single Lambda or on all Lambdas across your account)
If you return a bad response the lambda will succeed but the trigger action will fail and Cognito will give you a fairly generic message. At this point you're at the mercy of AWS' documentation to work out what's wrong (which can be a bit hit and miss- although StackOverflow always helps!).
You can find an example payload for the lambda in the trigger documentation:
{
  "userName": "THE USERNAME",
  "request": {
    "password": "THE PASSWORD"
  },
  "response": {
    // it is your responsibility to fill this bit in and return the completed object back:
    "userAttributes": {
      "string": "string",
      ...
    },
    "finalUserStatus": "string",
    "messageAction": "string",
    "desiredDeliveryMediums": [ "string", ... ],
    "forceAliasCreation": boolean
  }
}
n.b. As an aside, which you might already know: Lambda payloads always have to be JSON, which cannot store functions. So you should always be able to derive a test payload to use in the console.

How to use logger to log an error into stackdriver?

I have a VM micro instance running on google compute cloud and I want to log an error message to stackdriver. This page https://cloud.google.com/logging/docs/agent/installation shows this example
logger "Some test message"
which works great for normal messages, but I want stackdriver to recognize some messages as errors, so that they would show up here https://console.cloud.google.com/errors, which would allow me to get email notifications.
I'm aware that the gcloud tool has a beta logging solution, but I'm hoping to avoid installing the extra components it requires.
You'll want to read over the docs about formatting at https://cloud.google.com/error-reporting/docs/formatting-error-messages
Something like:
{
  "message": "Some test message",
  "context": {
    "reportLocation": {
      "functionName": "my_function"
    }
  },
  "serviceContext": {
    "service": "my service"
  }
}
You'll need the message to be the jsonPayload of the log entry, not the textPayload. I believe the agent will automatically recognize JSON messages, but if there are also non-JSON messages it may fall back to using text in all cases. In that case, using a dedicated log for the errors should help.
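One way to guarantee the entry lands as jsonPayload with the fields Error Reporting expects is to build the line programmatically and hand the single-line JSON string to the logging agent. A sketch in Node.js (the function name in reportLocation is an assumed placeholder):

```javascript
// Format an Error as a one-line JSON payload in the shape Error Reporting
// recognizes. A stack trace in "message" (or a reportLocation in "context")
// is what lets the service treat the entry as an error and group it.
function formatErrorEntry(err, service) {
  return JSON.stringify({
    message: err.stack || String(err),
    serviceContext: { service: service },
    context: {
      reportLocation: { functionName: 'my_function' } // assumed placeholder
    }
  });
}

const line = formatErrorEntry(new Error('Some test message'), 'my-service');
console.log(line); // pipe this single line to the agent, e.g. via logger(1)
```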
You may also be interested in the docs on how messages are grouped together: https://cloud.google.com/error-reporting/docs/grouping