AWS API Gateway: Execution failed due to configuration error: No match for output mapping and no default output mapping configured

In AWS API Gateway, I have a GET method that invokes a lambda function.
When I test the method in the API Gateway dashboard, the Lambda function executes successfully, but API Gateway does not map the context.success() call to a 200 result, despite the default mapping being set to yes.
Instead I get this error:
Execution failed due to configuration error: No match for output mapping and no default output mapping configured
This is my Integration Response setup and my Method Response setup (screenshots not reproduced here).
Basically, I would expect API Gateway to recognize the successful Lambda execution and map it by default to a 200 response, but that doesn't happen.
Does anyone know why this isn't working?

I had the same issue while deploying an API with the Serverless Framework. You can simply follow the steps below, which resolved my issue (a scripted equivalent is sketched after the list):
1. Navigate to AWS API Gateway.
2. Find your API and click on the method (POST, GET, ANY, etc.).
3. Click on Method Response.
4. Add a method response with status 200.
5. Save it and test.
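For reference, the same fix can be scripted with boto3. This is only a minimal sketch; the REST API ID, resource ID, and method below are hypothetical placeholders you would replace with your own:

import boto3

apigw = boto3.client("apigateway")
API_ID, RESOURCE_ID = "abc123", "def456"  # hypothetical placeholders

# Declare a 200 method response so API Gateway has something to map to
apigw.put_method_response(
    restApiId=API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    statusCode="200",
)

# With no selectionPattern, this becomes the default integration response,
# so a successful Lambda invocation maps to the 200 declared above.
apigw.put_integration_response(
    restApiId=API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    statusCode="200",
)

Remember that the API still needs to be redeployed before the change is visible on a stage.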

I had a similar issue and got it resolved by adding a 200 method response.

This is a 'check-the-basics' type of answer. In my scenario, CORS and the bug mentioned above were not at issue, but the error message in the title is exactly what I saw and what led me to this thread.
Instead (as an API Gateway newbie), I had failed to redeploy the API. Once I did that, everything worked.
As a bonus, for Terraform 0.12 users, the magic you need and should use is the triggers argument on your aws_api_gateway_deployment resource. This will automatically redeploy for you when other related API Gateway resources change. See the Terraform documentation for details.
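If you just need to force a redeploy by hand outside of Terraform, a one-off boto3 call is enough; a minimal sketch, where the REST API ID and stage name are hypothetical placeholders:

import boto3

apigw = boto3.client("apigateway")

# Redeploy the API so console/IaC changes actually take effect on the stage.
# "abc123" and "prod" stand in for your REST API ID and stage name.
apigw.create_deployment(restApiId="abc123", stageName="prod")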

There was an issue when saving the default integration response mapping, which has since been resolved. The bug caused requests to API methods that were saved incorrectly to return a 500 error; the CloudWatch logs should contain:
Execution failed due to configuration error:
No match for output mapping and no default output mapping configured.
Since 'Enable CORS' saves the default integration response, this issue also appeared in your scenario.
For more information, please refer to the AWS forums entry: https://forums.aws.amazon.com/thread.jspa?threadID=221197&tstart=0
Best,
Jurgen

What worked for me (a scripted equivalent is sketched below):
1. In the API Gateway console, created the OPTIONS method manually.
2. In the Method Response section under the created OPTIONS method, added 200 OK.
3. Selected the OPTIONS method and enabled CORS from the menu.
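For completeness, here is a rough scripted equivalent of those steps using boto3. This is a sketch with hypothetical IDs, using the usual MOCK integration so the CORS preflight is answered without invoking a backend; the allowed methods and headers are assumptions you would tailor:

import boto3

apigw = boto3.client("apigateway")
API_ID, RESOURCE_ID = "abc123", "def456"  # hypothetical placeholders

# OPTIONS method with no auth, backed by a MOCK integration
apigw.put_method(restApiId=API_ID, resourceId=RESOURCE_ID,
                 httpMethod="OPTIONS", authorizationType="NONE")
apigw.put_integration(restApiId=API_ID, resourceId=RESOURCE_ID,
                      httpMethod="OPTIONS", type="MOCK",
                      requestTemplates={"application/json": '{"statusCode": 200}'})

# Declare the 200 response and the CORS headers it may carry
apigw.put_method_response(
    restApiId=API_ID, resourceId=RESOURCE_ID, httpMethod="OPTIONS",
    statusCode="200",
    responseParameters={
        "method.response.header.Access-Control-Allow-Origin": False,
        "method.response.header.Access-Control-Allow-Methods": False,
        "method.response.header.Access-Control-Allow-Headers": False,
    })

# Map the MOCK output to 200 and fill in the header values
apigw.put_integration_response(
    restApiId=API_ID, resourceId=RESOURCE_ID, httpMethod="OPTIONS",
    statusCode="200",
    responseParameters={
        "method.response.header.Access-Control-Allow-Origin": "'*'",
        "method.response.header.Access-Control-Allow-Methods": "'GET,OPTIONS'",
        "method.response.header.Access-Control-Allow-Headers": "'Content-Type'",
    })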

I found the problem:
Amazon had added a new button in the API Gateway resource configuration titled 'Enable CORS'. I had clicked this earlier; however, once enabled, there doesn't seem to be a way to disable it.
Enabling CORS using this button (instead of doing it manually, which is what I ended up doing) seems to cause an internal server error even on a successful Lambda execution.
SOLUTION: I deleted the resource and created it again without clicking 'Enable CORS' this time, and everything worked fine.
This seems to be a bug in that feature, but perhaps I just don't understand it well enough. Comment if you have any further information.
Thanks.

Check the box which says "Use Lambda Proxy integration".
This works fine for me. For reference, my lambda function looks like this...
import json

def lambda_handler(event, context):
    # Get params from the proxy-integration event
    param1 = event['queryStringParameters']['param1']
    param2 = event['queryStringParameters']['param2']
    # Do some stuff with the params to get the body
    body = some_fn(param1, param2)
    # Return a proxy-integration response object
    response_object = {}
    response_object['statusCode'] = 200
    response_object['headers'] = {}
    response_object['headers']['Content-Type'] = 'application/json'
    response_object['body'] = json.dumps(body)
    return response_object
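A quick local sanity check of the handler might look like this; some_fn here is just a hypothetical stand-in for whatever work the real handler does:

# Hypothetical stand-in for the real work done by some_fn
def some_fn(a, b):
    return {"param1": a, "param2": b}

# Fake proxy-integration event with made-up query parameters
fake_event = {"queryStringParameters": {"param1": "a", "param2": "b"}}
print(lambda_handler(fake_event, None))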

I'll just drop this here because I faced the same issue today; in my case the cause was that we were appending a / at the end of the endpoint. So, for example, if this is the definition:
{
  "openapi": "3.0.1",
  "info": {
    "title": "some api",
    "version": "2021-04-23T23:59:37Z"
  },
  "servers": [
    {
      "url": "https://api-gw.domain.com"
    }
  ],
  "paths": {
    "/api/{version}/intelligence/topic": {
      "get": {
        "parameters": [
          {
            "name": "username",
            "in": "query",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "version",
            "in": "path",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "x-api-key",
            "in": "header",
            "required": true,
            "schema": {
              "type": "string"
            }
          },
          {
            "name": "X-AWS-Cognito-Group-ID",
            "in": "header",
            "schema": {
              "type": "string"
            }
          }
        ],
        ...
Remove any / at the end of the endpoint: /api/{version}/intelligence/topic. Also do the same in the uri of the x-amazon-apigateway-integration section of the Swagger + API Gateway extensions JSON.

Make sure your ARN for the role is indeed a role (and not, e.g., the policy).

Getting authorizer context from Step Function executed from API Gateway

I'm trying to get my API Gateway api to:
Run an authorizer
Pass authorizer context to a Step Function execution
Respond to client with Step Function output
I already have #1 and #3 done, but passing the response of the attached authorizer lambda to the step function is proving to be impossible.
I found this page and this page with reference sheets on what interpolation values you can use for your parameter mapping (Create Integration -> Step Function: StartSyncExecution -> Advanced Settings -> Input), but any time I try to use anything related to $context, like $context.authorizer.email, API Gateway just responds with an HTTP 400 and gives me this CloudWatch output:
"Unable to resolve property Input from source {\"lambdaName\": \"arn:aws:lambda:us-east-1:xxxxxxx\", \"reqBody\": $request.body.Input, \"authContext\": $context.apiId }. Please make sure that the request to API Gateway contains all the necessary fields specified in request parameters."
These are the JSON objects I've tried using for the Input text box; all of them either give me errors when trying to save or throw an HTTP 400 and log the above error when I visit the route:
{"lambdaName": "xxx", "reqBody": $request.body.Input, "authContext": $context.authorizer.email }
{"lambdaName": "xxx", "reqBody": $request.body.Input, "authContext": "$context.authorizer.email" }
{"lambdaName": "xxx", "reqBody": $request.body.Input, "authContext": $context.apiId }
{"lambdaName": "xxx", "reqBody": $request.body.Input, "authContext": $context }
{"lambdaName": "xxx", "reqBody": $request.body.Input, "authContext": $event.requestContext.authorizer.email }
It seems like the only way to have authorization code work with Step Functions is to wrap my step function called by API Gateway in another step function that authorizes the request and then invokes the endpoint step function. I've researched this for hours and I'm not getting anywhere. Any help at all is appreciated.
I ended up solving this by using API Gateway v1 and a REST API instead of an HTTP API. For some reason, v2's Input field currently doesn't work for anything other than $request.body.Input. From there, I hooked up all of my endpoints to a step function that runs the authorization lambda on the Authorization header of the request.
I have a step function that lets me chain together step function and lambda actions, so for most requests I just chain the authorizer lambda and the endpoint's action (which can be a lambda or another step function); a sketch of that chaining follows.
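A minimal sketch of that chaining in Amazon States Language; the state names and function ARNs are hypothetical, and a real definition would also need a failure path for rejected authorizations:

{
  "StartAt": "Authorize",
  "States": {
    "Authorize": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111111111111:function:authorizer",
      "ResultPath": "$.auth",
      "Next": "Endpoint"
    },
    "Endpoint": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111111111111:function:endpoint",
      "End": true
    }
  }
}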
The main takeaway here is that if you're using API Gateway and Step Functions, passing custom-formatted input into your step function isn't easy without using v1 of API Gateway (a REST API, not an HTTP API). Hopefully this will be fixed in the future.
Another solution could be to use the Mapping Template in API Gateway Integration Request.
Example: Consider the response from the Lambda Authorizer as this:
{
  principalId: 'myuser',
  context: {
    customKey: 'CustomValue'
  },
  policyDocument: {
    Version: '2012-10-17',
    Statement: [
      {
        Action: ['execute-api:Invoke'],
        Effect: 'Allow',
        Resource:
          'arn:aws:execute-api:_region_:111111111111:0a0a0a0a0a/default/POST/my-endpoint'
      }
    ]
  }
}
In the Mapping Template for application/json, map whatever property you want into the input field (quote-escaped). Each property sent by the Lambda Authorizer in the context field will be available as $context.authorizer.property.
One possible mapping template could be:
{
  "input": "{\"origin\":\"$input.json('$.requestvalue').replaceAll('\"','')\", \"customPropertyFromLambdaAuthorizer\" : \"$context.authorizer.customKey\" }",
  "name": "DemoStateMachineRequest",
  "stateMachineArn": "arn:aws:states:_region_:11111111111:stateMachine:MyStateMachine"
}
Reference: $context.authorizer.property

Which casing of property names is considered the "most correct" in a Google Cloud Pub/Sub Push Message?

If you use a "Push" subscription to a Google Cloud Pub/Sub, you'll be registering an HTTPS endpoint that receives messages from Google's managed service. This is great if you wish to avoid dependencies on Google Cloud's SDKs and instead trigger your asynchronous services via a traditional web request. However, the intended casing of the properties of the payload is not clear, and since I'm using Push subscriptions I don't have a SDK to defer to for deserialization.
If you look at this documentation, you see references to message_id using snake_case (update 9/18/18: as stated in Kamal's answer, the documentation has since been updated, as it was incorrect), e.g.:
{
  "message": {
    "attributes": {
      "key": "value"
    },
    "data": "SGVsbG8gQ2xvdWQgUHViL1N1YiEgSGVyZSBpcyBteSBtZXNzYWdlIQ==",
    "message_id": "136969346945",
    "publish_time": "2014-10-02T15:01:23.045123456Z"
  },
  "subscription": "projects/myproject/subscriptions/mysubscription"
}
If you look at this documentation, you see references to messageId using camelCase, e.g.:
{
  "message": {
    "attributes": {
      "key": "value"
    },
    "data": "SGVsbG8gQ2xvdWQgUHViL1N1YiEgSGVyZSBpcyBteSBtZXNzYWdlIQ==",
    "messageId": "136969346945",
    "publishTime": "2014-10-02T15:01:23.045123456Z"
  },
  "subscription": "projects/myproject/subscriptions/mysubscription"
}
If you subscribe to the topics and log the output, you actually get both formats, e.g.:
{
  "message": {
    "attributes": {
      "key": "value"
    },
    "data": "SGVsbG8gQ2xvdWQgUHViL1N1YiEgSGVyZSBpcyBteSBtZXNzYWdlIQ==",
    "messageId": "136969346945",
    "message_id": "136969346945",
    "publishTime": "2014-10-02T15:01:23.045123456Z",
    "publish_time": "2014-10-02T15:01:23.045123456Z"
  },
  "subscription": "projects/myproject/subscriptions/mysubscription"
}
An ideal response would answer both of these questions:
Why are there two formats?
Is one more correct or authoritative?
The officially correct names for the variables are camelCase (messageId), based on the Google JSON style guide. In the early phases of Cloud Pub/Sub, snake_case was used for message_id and publish_time, but this was changed later to conform to the style standards. The snake_case fields were kept alongside the camelCase ones to ensure that push endpoints depending on the original format did not break. The first documentation link you point to apparently was not updated at the time; it will be fixed shortly.
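If you're deserializing push payloads by hand, it may be safest to prefer the official camelCase names and fall back to the legacy snake_case ones; a minimal sketch:

import base64
import json

def parse_push_payload(raw_body: bytes) -> dict:
    payload = json.loads(raw_body)
    message = payload["message"]
    return {
        # Prefer the official camelCase names, fall back to legacy snake_case
        "message_id": message.get("messageId", message.get("message_id")),
        "publish_time": message.get("publishTime", message.get("publish_time")),
        "attributes": message.get("attributes", {}),
        # The data field is base64-encoded by Pub/Sub
        "data": base64.b64decode(message.get("data", "")),
        "subscription": payload.get("subscription"),
    }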

GCP Stackdriver for OnPrem

Based on Stackdriver, I want to send notifications to my Centreon monitoring system (based on Nagios) for workflow reasons. Do you have any idea how to do so?
Thank you
Stackdriver alerting allows webhook notifications, so you can run a server to forward the notifications anywhere you need to (including Centreon), and point the Stackdriver alerting notification channel to that server.
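A minimal sketch of such a forwarding server using only the Python standard library; the port is arbitrary, the "incident" key is the usual shape of a Stackdriver webhook body, and the actual translation into a Centreon call (described below) is left as a stub:

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class StackdriverWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the Stackdriver alert payload
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length))
        # Translate/forward the alert here, e.g. into a Centreon submit call
        print("received alert:", alert.get("incident", alert))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), StackdriverWebhook).serve_forever()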
There are two ways to send external information into the Centreon queue without a traditional passive agent mode.
First, you can use the Centreon DSM (Dynamic Services Management) addon.
It is interesting because you don't have to register a dedicated and already known service in your configuration to match the notification.
With Centreon DSM, Centreon can receive events such as SNMP traps resulting from the detection of a problem and assign the event dynamically to a slot defined in Centreon, like a tray event.
A resource has a set number of “slots” on which alerts will be assigned (stored). While this event has not been taken into account by human action, it will remain visible in the Centreon web frontend. When the event is acknowledged, the slot becomes available for new events.
The event must be transmitted to the server via an SNMP Trap.
All the configuration is made through Centreon web interface after the module installation.
Complete explanations, screenshots, and tips are described on the online documentation: https://documentation.centreon.com/docs/centreon-dsm/en/latest/user.html
Secondly, the Centreon developers added a Centreon REST API you can use to submit information to the monitoring engine.
This approach is easier than the SNMP trap route.
In that case, you have to create both the host and service objects before any API use.
To send a status, use the following URL with the POST method:
api.domain.tld/centreon/api/index.php?action=submit&object=centreon_submit_results
Headers:
key                  value
Content-Type         application/json
centreon-auth-token  the value of authToken you got in the authentication response
Example of a service body submit: the body is JSON with the parameters described above, formatted as below:
{
  "results": [
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "Memory",
      "status": "2",
      "output": "The service is in CRITICAL state",
      "perfdata": "perf=20"
    },
    {
      "updatetime": "1528884076",
      "host": "Centreon-Central",
      "service": "fake-service",
      "status": "1",
      "output": "The service is in WARNING state",
      "perfdata": "perf=10"
    }
  ]
}
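Submitting that body from Python might look like the following sketch; the domain, the token value, and the exact URL are placeholders, and the token itself comes from Centreon's authentication endpoint:

import requests  # third-party: pip install requests

CENTREON_URL = ("https://api.domain.tld/centreon/api/index.php"
                "?action=submit&object=centreon_submit_results")

body = {
    "results": [
        {
            "updatetime": "1528884076",
            "host": "Centreon-Central",
            "service": "Memory",
            "status": "2",
            "output": "The service is in CRITICAL state",
            "perfdata": "perf=20",
        }
    ]
}

# json= sets the Content-Type: application/json header for us
resp = requests.post(
    CENTREON_URL,
    json=body,
    headers={"centreon-auth-token": "REPLACE_WITH_AUTH_TOKEN"},
)
print(resp.status_code, resp.json())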
Example of the response body: the response body is JSON with an HTTP return code and a message for each submit:
{
  "results": [
    {
      "code": 202,
      "message": "The status send to the engine"
    },
    {
      "code": 404,
      "message": "The service is not present."
    }
  ]
}
More information is available in the online documentation: https://documentation.centreon.com/docs/centreon/en/19.04/api/api_rest/index.html
The Centreon REST API also allows you to get real-time statuses for hosts and services and to manage object configuration.

Cognito User Migration Trigger - Exception during user migration - Exception Location

We're using a Lambda function to respond to the 'User Migration' trigger in AWS Cognito. When something like a syntax error occurs, you can see it in the CloudWatch logs. However, the "Exception during user migration" errors seen on the login page are nowhere to be found in the CloudWatch logs.
Where are we supposed to look for these? I can't find anything in the documentation and assumed they would have gone to CloudWatch.
I can't test it in the Lambda interface because one of the parameters being passed into the Lambda function has a function nested within the object, and I can't create a test JSON setup that reproduces that. There's also no pre-built test trigger for user migration.
Any ideas as to why I can't see this in CloudWatch, or where the exceptions are shown, would be greatly appreciated.
Unfortunately, Cognito doesn't expose any logs (or metrics, for that matter!).
The closest you can get is to view the Lambda's logs in CloudWatch. If you log your response and watch your Lambda's error metric, then you should mostly be able to debug issues internal to the Lambda.
This does leave a few edge cases:
You won't see anything if the Lambda can't be invoked (this would only happen under heavy concurrent load, either on that single Lambda or on all Lambdas across your account).
If you return a bad response, the Lambda will succeed but the trigger action will fail, and Cognito will give you a fairly generic message. At this point you're at the mercy of AWS's documentation to work out what's wrong (which can be a bit hit and miss, although Stack Overflow always helps!).
You can find an example payload for the lambda in the trigger documentation:
{
  "userName": "THE USERNAME",
  "request": {
    "password": "THE PASSWORD"
  },
  "response": {
    // it is your responsibility to fill this bit in and return the completed object back:
    "userAttributes": {
      "string": "string",
      ...
    },
    "finalUserStatus": "string",
    "messageAction": "string",
    "desiredDeliveryMediums": [ "string", ... ],
    "forceAliasCreation": boolean
  }
}
N.B. As an aside (which you might already know): Lambda payloads always have to be JSON, and JSON does not store functions. So you should always be able to derive a test payload to use in the console.
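To make failures visible, it can help to log the incoming event and catch your own exceptions inside the handler, so everything surfaces in the Lambda's own CloudWatch stream. A minimal sketch of a migration handler along those lines; the legacy-store lookup and the attribute values are hypothetical:

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Log the raw event so failures surface in this Lambda's CloudWatch stream
    logger.info("migration event: %s", json.dumps(event))
    try:
        if event["triggerSource"] == "UserMigration_Authentication":
            # Hypothetical: validate the password against your legacy user store here
            event["response"]["userAttributes"] = {
                "email": event["userName"],
                "email_verified": "true",
            }
            event["response"]["finalUserStatus"] = "CONFIRMED"
            event["response"]["messageAction"] = "SUPPRESS"
    except Exception:
        logger.exception("migration failed")  # logged before Cognito sees the error
        raise
    return event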

Lambda function not working upon Alexa Skill invocation

I've just created my first (custom) skill. I've set the function up in Lambda by uploading a zip file containing my index.js and all the necessary code, including node_modules and the base Alexa skill that mine is a child of (as per the tutorials). I made sure I zipped up the files and subfolders, not the folder itself (as I can see this is a common cause of similar errors), but when I create the skill and test in the web harness with a sample utterance I get:
remote endpoint could not be called, or the response it returned was
invalid.
I'm not sure how to debug this, as there's nothing logged in CloudWatch.
I can see in the Lambda request that my slot value is translated/parsed successfully and the intent name is correct.
In AWS Lambda I can invoke the function successfully with both a LaunchRequest and another named intent. From the developer console, though, I get nothing.
I've tried copying the JSON from the Lambda test (that works) to the developer portal and I get the same error. Here is a sample of the JSON I'm putting in the dev portal (that works in Lambda):
{
  "session": {
    "new": true,
    "sessionId": "session1234",
    "attributes": {},
    "user": {
      "userId": null
    },
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.149e75a3-9a64-4224-8bcq-30666e8fd464"
    }
  },
  "version": "1.0",
  "request": {
    "type": "LaunchRequest",
    "requestId": "request5678"
  }
}
The first step in pursuing this problem is probably to test your Lambda separately from your skill configuration.
When looking at your Lambda function in the AWS console, note the 'Test' button at the top; next to it there is a dropdown with an option to configure a test event. If you select that option, you will find that there are preset test events for Alexa. Choose 'Alexa Start Session' and then choose the 'Save and test' button.
This will give you more detailed feedback about the execution of your Lambda.
If your Lambda works fine here, then the problem probably lies in your skill configuration, so I would go back through whatever tutorial and documentation you used to configure your skill and make sure you did it right.
When you write that the Lambda request looks fine, I assume you are talking about the service simulator, so that's a good start, but there could still be a problem on the configuration tab.
We built a tool for local skill development and testing: BST Tools.
Requests and responses from Alexa will be sent directly to your local server, so you can quickly code and debug without having to do any deployments. I have found this to be very useful for our own development.
Let me know if you have any questions.
It's open source: https://github.com/bespoken/bst