InputTransformer YAML not resolving for event rule

I am aiming to send data in the following format to the target:
{
  "headers": {
    "Authorization": "Bearer expectedToken"
  },
  "body": {
    "id": "abc",
    "status": "ANY",
    "preferences": [ [Object] ]
  }
}
but I am struggling with the input transformer in YAML:
inputTransformer:
  inputPathsMap:
    expectedToken: '$detail.metadata.someToken'
  inputTemplate: '{"headers": {"Authorization": <expectedToken>}}'
and I am getting this error:
Received response status [FAILED] from custom resource. Message returned: Event pattern is not valid. Reason: "expectedToken" must be an object or an array
at [Source: (String)"{"inputTransformer":{"inputPathsMap":{"expectedToken":"$detail.metadata.someToken"},"inputTemplate":"{"headers": {"Authorization": }}"},"
The trigger is an APIGW with a mapping template that appends an Auth token as part of the metadata, but the target expects it as a header. Is this a viable approach? How can this be resolved to the expected format?
Later Edit: Data being sent from APIGW:
{
  detail: {
    body: {
      id: 'abc',
      extraInfo: 'Postman_15:07',
      preferences: [Array]
    },
    metadata: {
      service: 'my-service',
      status: 'ANY',
      someToken: 'Bearer expectedToken'
    }
  }
}

Could you share an example (scrubbed of personal info) of your event payload, i.e. the JSON that has the detail and metadata sub-fields?
You could also try:
inputTransformer:
  inputPathsMap:
    expectedToken: '$detail.metadata.someToken'
  inputTemplate: '{"headers": {"Authorization": "Bearer <expectedToken>"}}'

Try it out with an input-path like $.detail.metadata.someToken
If this also doesn't work then, as already asked, give us an example of the event that arrives at EventBridge itself. If you don't know the event payload, you can set up an event rule that forwards the event to a CloudWatch log group so that you can check the logs. (Tip: set up the CloudWatch target via the AWS console; afaik there are some issues using CloudFormation for this.)

Thank you for the suggestions. It looks like the issue was with the indentation of the inputTransformer section under eventbridge. That error message was by no means helpful. The final format, matching what the target Lambda was expecting, was:
inputTransformer:
  inputPathsMap:
    expectedToken: '$detail.metadata.someToken'
    data: '$.detail.body'
  inputTemplate: '{"headers": {"Authorization": <expectedToken>}, "body": <data>}'
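For reference, given the sample event from the later edit above, this transformer should hand the target roughly the following payload (a sketch, assuming the someToken value shown there already carries the "Bearer " prefix):
{
  "headers": {
    "Authorization": "Bearer expectedToken"
  },
  "body": {
    "id": "abc",
    "extraInfo": "Postman_15:07",
    "preferences": [ ... ]
  }
}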

Related

Adding allow all to 'Access-Control-Allow-Origin' header in AWS Gateway response using CDK

I am using CDK to create API endpoints. I would like to set 'Access-Control-Allow-Origin' to allow all origins in the response headers. Here is what I have tried:
api.addGatewayResponse('4xx-error-response', {
  type: ResponseType.DEFAULT_4XX,
  statusCode: '400',
  responseHeaders: {
    'Access-Control-Allow-Origin': `*`
  },
  templates: {
    'application/json': '{ "message": "Access denied", "statusCode": "403", "type": "$context.error.responseType" }'
  }
});
When I try to deploy this, I get the following error:
Resource handler returned message: "Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: *]
Question: How do I add a gateway response like in the below screenshot using CDK?
Try this way:
const resource = api.root.addResource('MyResource', { /* params... */ });
resource.addCorsPreflight({
  allowOrigins: Cors.ALL_ORIGINS, // equivalent to ['*']
  allowCredentials: true, // optional, for credentials
});
According to the AWS CORS docs you need to set more than one header, so I would just use the method that does that for me.
The CDK test directories are a good source of information, next to the official docs; you may find some useful examples there, e.g. in the ApiGateway ones.
In the response headers, instead of * for Access-Control-Allow-Origin you'll have to use '*' (with single quotes), like this:
api.addGatewayResponse('invalid-endpoint-error-response', {
  type: ResponseType.MISSING_AUTHENTICATION_TOKEN,
  statusCode: '500',
  responseHeaders: {
    'Access-Control-Allow-Origin': "'*'",
  },
  templates: {
    'application/json': '{ "message": $context.error.messageString, "statusCode": "488", "type": "$context.error.responseType" }'
  }
});
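Combining both answers, a minimal sketch (assuming CDK v2 import paths and the same api construct; the response name and header values are illustrative) that sets the usual trio of CORS headers on the default 4XX response could look like this:
import { ResponseType } from 'aws-cdk-lib/aws-apigateway';

api.addGatewayResponse('default-4xx-cors', {
  type: ResponseType.DEFAULT_4XX,
  responseHeaders: {
    // static values are mapping expressions, hence the inner single quotes
    'Access-Control-Allow-Origin': "'*'",
    'Access-Control-Allow-Headers': "'*'",
    'Access-Control-Allow-Methods': "'*'",
  },
});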

AWS API Gateway -> Lambda -> Github Pages

I am trying to point a domain to a GitHub Pages site.
I am very new to working with domains and AWS services so I am finding it difficult to troubleshoot issues.
I have created an AWS ApiGateway that points to a lambda function which I would like to use to serve the content from Github pages, but currently, it is giving me the error:
{"message":"Internal Server Error"}
So when trying to fix this issue, I found instructions to make it log additional debug information (instructions found at: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-troubleshooting-lambda.html).
This is telling me that my configuration for the Lambda function is incorrect:
The response from the Lambda function doesn't match the format that API Gateway expects. Lambda body contains the wrong type for field "headers"
I don't know what is expected, so I don't know what needs to be changed... My entire Lambda function is configured as:
exports.handler = async (event, context, callback) => {
  let domain = 'https://github-org-name.github.io/my-repo-with-gh-pages/';
  return {
    statusCode: '301',
    statusDescription: 'Moved Permanently',
    headers: {
      'location': [{
        key: 'Location',
        value: domain,
      }],
      'cache-control': [{
        key: 'Cache-Control',
        value: "max-age=3600"
      }]
    },
  }
};
I am completely new to using AWS services, so I don't know if anything else needs to be configured. Any help is appreciated.
The values in your headers dict must be strings, e.g.:
{
  "cookies": ["cookie1", "cookie2"],
  "isBase64Encoded": true|false,
  "statusCode": httpStatusCode,
  "headers": { "headername": "headervalue", ... },
  "body": "Hello from Lambda!"
}
See the bottom of this page:
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html
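Applied to the handler above, a minimal sketch (keeping the same hypothetical GitHub Pages URL) that returns the redirect with plain string header values would be:
exports.handler = async () => {
  const domain = 'https://github-org-name.github.io/my-repo-with-gh-pages/';
  return {
    statusCode: 301,
    headers: {
      // plain strings, not the CloudFront-style { key, value } arrays
      'Location': domain,
      'Cache-Control': 'max-age=3600'
    },
    body: ''
  };
};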

Issue with EventBridge rule for aws.events

I want to send to CloudWatch logs ALL the events sent to a custom event bus.
I created the custom event bus: my-event-bus
I created the CloudWatch log group
I created the event bus policy so everyone within my account can put an event into my-event-bus
I created a rule for that custom bus
This is the rule:
MyRuleForBus:
  Type: AWS::Events::Rule
  Properties:
    Description: Testing rule
    EventBusName:
    Name: testing-rule-for-my-event-bus
    EventPattern:
      source:
        - aws.events
    State: ENABLED
    Targets:
      - Arn: arn:aws:logs:us-east-1:MY_ACCOUNT_ID:log-group:my-event-bus-log-group
        Id: 'my-bus'
When I try to put an event
aws events put-events --entries file://put-events.json
I receive the following error
{
  "FailedEntryCount": 1,
  "Entries": [
    {
      "ErrorCode": "NotAuthorizedForSourceException",
      "ErrorMessage": "Not authorized for the source."
    }
  ]
}
This is the content of put-events.json
[
  {
    "Source": "aws.events",
    "EventBusName": "my-event-bus",
    "Detail": "{ \"key1\": \"value3\", \"key2\": \"value4\" }",
    "Resources": [
      "resource1",
      "resource2"
    ],
    "DetailType": "myDetailType"
  }
]
But if I change the source to something else, for example 'hello', in both the rule and the event, it works.
What am I doing wrong?
I want to make it work with aws.events so all the events sent to this bus end up in CloudWatch (the target).
aws.events belongs to AWS, not to you, so you can't define it as the source of your events. Only AWS can do that.
You need to use your own custom name for the source of your events, e.g. myapp.events.
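For example (a sketch reusing the entry above with the suggested myapp.events name; the rule's event pattern has to reference the same custom source):
[
  {
    "Source": "myapp.events",
    "EventBusName": "my-event-bus",
    "Detail": "{ \"key1\": \"value3\", \"key2\": \"value4\" }",
    "DetailType": "myDetailType"
  }
]

EventPattern:
  source:
    - myapp.events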

AWS Lambda working without errors but I cannot return a value

I am not able to make my Lambda ever return a value, even though the code runs fine right up to the return statement. My return statement and the statement above it look like this:
print(score)
return {
    "StatusCode": 200,
    "headers": {'dummy': 'dummy'},
    "body": str(score)
}
Following is my serverless.yml:
service: test-deploy
plugins:
  - serverless-python-requirements
provider:
  name: aws
  runtime: python3.6
  region: ap-south-1
  deploymentBucket:
    name: smecornerdep-package
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource:
        - "arn:aws:s3:::smecornerdep/*"
custom:
  pythonRequirements:
    slim: True
functions:
  app-on-book-only:
    name: app-on-book-only
    description: deploy trained lightgbm on aws lambda using serverless
    handler: run_model_l.get_predictions
    events:
      - http: POST /engine
And I am hitting the endpoint with a POST from my command line like so:
curl -X POST -H "Content-Type: application/json" --data @sample_common_3.json https://76pmb6z1ya.execute-api.ap-south-1.amazonaws.com/dev/engine
In my AWS Lambda logs I can see the output of print(score) right above the return statement, computed accurately. There are no errors in the AWS Lambda logs. However, in my terminal I always get {"message": "Internal server error"} returned by the curl command.
I am a data scientist and fairly new to DevOps concepts. Please help me understand my mistake and suggest a solution.
Thank you for reading
I know what's wrong with your code. You are using the wrong key in the return dictionary:
StatusCode should be statusCode
return {
    "statusCode": 200,
    "headers": {'dummy': 'dummy'},
    "body": str(score)
}
When you call a Lambda with POST or GET, the return value needs specific fields, because the Lambda is executed behind a proxy integration.
First, you have one bad key: "StatusCode" -> "statusCode".
But you also need this:
return {
    "statusCode": 200,
    "body": json.dumps(score),
    "headers": {
        "Access-Control-Allow-Headers": "*",
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "*",
        "dummy": "dummy"
    },
    "isBase64Encoded": False
}
Without this, if you call your API Gateway or Lambda from the browser or another client (not Postman or the command line), your function will not work :(

Can't send message from lambda to aws sqs and no error is returned from aws-sdk

I am trying to send a message from my lambda function to a sqs queue that is already created. When I run the code, it literally stops the execution and no feedback is provided by aws-sdk.
I also have a function to read from the queue when I insert the messages manually, and I use the same code to create the session, which I believe can be used in both situations.
Then I tried to use the code provided by amazon but the outcome was the same.
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/sqs-example-receive-message.html
The only difference in my code is how I create the session. Like I mentioned above, that's the same way I read messages when they are inserted manually into the queue. In that function everything seems perfect.
func sendToOrderQueue(rID string, c Course) error {
    log.Println(1)
    var err error
    sess := session.Must(session.New(&aws.Config{
        Region: aws.String("eu-central-1"),
    }), err)
    svc := sqs.New(sess)
    log.Println(2)
    url := "https://sqs.eu-central-1.amazonaws.com/XXXXXX/myqueue"
    log.Println(3)
    result, err := svc.SendMessage(&sqs.SendMessageInput{
        DelaySeconds: aws.Int64(10),
        MessageAttributes: map[string]*sqs.MessageAttributeValue{
            "Title": &sqs.MessageAttributeValue{
                DataType:    aws.String("String"),
                StringValue: aws.String("The Whistler"),
            },
            "Author": &sqs.MessageAttributeValue{
                DataType:    aws.String("String"),
                StringValue: aws.String("John Grisham"),
            },
            "WeeksOn": &sqs.MessageAttributeValue{
                DataType:    aws.String("Number"),
                StringValue: aws.String("6"),
            },
        },
        MessageBody: aws.String("Information about current NY Times fiction bestseller for week of 12/11/2016."),
        QueueUrl:    &url,
    })
    log.Println(4)
    if err != nil {
        log.Println("Error", err)
        return err
    }
    log.Println(5, *result.MessageId, err)
    return err
}
Also, my serverless.yaml
service: client
frameworkVersion: ">=1.28.0 <2.0.0"
provider:
  name: aws
  runtime: go1.x
  vpc: ${file(../security.yaml):vpc}
package:
  exclude:
    - ./**
  include:
    - ./bin/**
functions:
  postFunction:
    handler: bin/post
    environment:
      REDIS_URL: ${file(../env.yaml):environment.REDIS_URL}
      HASH_KEY: ${file(../env.yaml):environment.HASH_KEY}
    events:
      - http:
          path: /func
          method: post
          cors: ${file(../cors.yaml):cors}
Checking the CloudWatch logs, the execution prints 1, 2, 3 and nothing else. No 4, no Error, and no 5.
What am I doing wrong here?
I had the same issue.
Look in the CloudWatch Lambda logs. There is an error like:
Task timed out after 10.01 seconds
This is the Lambda's timeout. You get no errors regarding SQS because the Lambda's timeout is smaller than the default timeout of the http.Client inside svc.SendMessage (SendMessage is just a POST request to the AWS API), so the Lambda is terminated before it gets any response from SQS. In my case it was 10 seconds for the Lambda and 30 seconds for the HTTP request. Add LogLevel to aws.Config like this:
&aws.Config{
    LogLevel: aws.LogLevel(aws.LogDebug),
}
and you will see the HTTP request in the CloudWatch logs. Also, you can set the Lambda's timeout to 2-3 times the http.Client timeout and you will see the retries in the logs (3 retries by default).
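For instance (a sketch against the serverless.yaml above; the 90-second value is an assumption, roughly 3x a 30-second http.Client timeout), raising the function timeout so the SDK error can surface might look like:
functions:
  postFunction:
    handler: bin/post
    timeout: 90  # assumption: long enough for the SDK's retries/errors to show up in the logs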
It looks like the Lambda can't resolve the SQS host, or something like that, because I get the same errors when the VPC is configured wrong.
UPD: Fixed the problem. If you use a VPC with your Lambda, it needs special configuration to access SQS from the VPC. Take a look here: https://docs.aws.amazon.com/en_us/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-sending-messages-from-vpc.html
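A minimal sketch of that configuration (an assumption on my side: an interface VPC endpoint for SQS declared in the serverless resources section; the VPC, subnet, and security group IDs are placeholders you'd replace with the ones your Lambda uses):
resources:
  Resources:
    SqsVpcEndpoint:
      Type: AWS::EC2::VPCEndpoint
      Properties:
        ServiceName: com.amazonaws.eu-central-1.sqs  # match the queue's region
        VpcEndpointType: Interface
        VpcId: vpc-xxxxxxxx          # placeholder
        SubnetIds:
          - subnet-xxxxxxxx          # placeholder: the Lambda's subnets
        SecurityGroupIds:
          - sg-xxxxxxxx              # placeholder: must allow HTTPS (443) from the Lambda
        PrivateDnsEnabled: true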