I am running a lambda function written in Go using Serverless and I want to pass a couple of parameters to it when it's invoked.
Here's the struct I created to receive the request:
type RequestStruct struct {
    StartAt int `json:"startAt"`
    EndAt   int `json:"endAt"`
}
And in the handler I'm trying to print out the values:
func Handler(ctx context.Context, request RequestStruct) (Response, error) {
    fmt.Printf("Request: %v", request)
I tried invoking it using the --raw option, like this:
serverless invoke -f orders --raw -d '{"startAt":1533513600,"endAt":1534118399}'
and then I tried wrapping it in double quotes instead:
serverless invoke -f orders --raw -d "{startAt:1533513600,endAt:1534118399}"
serverless invoke -f orders --raw -d "{\"startAt\":1533513600,\"endAt\":1534118399}"
I received a marshal error with all three:
{
"errorMessage": "json: cannot unmarshal string into Go value of type main.RequestStruct",
"errorType": "UnmarshalTypeError"
}
I'm not sure what I am doing wrong, and I can't find any examples for this online; there's only this serverless doc about how to do the invoke and this AWS doc about how to handle the event in Go.
Update
I tried invoking the event from the AWS Console and it worked, so odds are the issue is in the serverless invoke command.
I found a way around this by putting my JSON in a file rather than in the command itself. This doesn't solve the issue described in the question, but it is a way to invoke the function with JSON.
I added an events/startAndEnd.json file that contains my JSON data:
{
"startAt":1533513600,
"endAt":1534118399
}
And referenced that file in the invoke command: serverless invoke -f orders --path events/startAndEnd.json
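A likely explanation for the original error (a sketch, based on the documented behavior of the --raw flag): --raw tells serverless to pass the -d data as a raw string instead of parsing it as JSON, so the Go runtime receives a JSON string and fails to unmarshal it into the struct. Dropping --raw should let the object come through. A quick local sanity check of the payload (assumes python3 is available; the function name is from the question):

```shell
payload='{"startAt":1533513600,"endAt":1534118399}'
# sanity-check locally that the payload is valid JSON before invoking
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"
# then invoke WITHOUT --raw so the data is parsed as JSON (uncomment to run):
# serverless invoke -f orders -d "$payload"
```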
In case you hit this issue when running the command via npm: I also had a similar error when invoking it with:
"invoke": "serverless invoke --function myfunction --data \"{ \"Records\": []}\"",
Changing the outer double quotes around the data to single quotes made it start working:
"invoke": "serverless invoke --function myfunction --data '{ \"Records\": []}'",
Related
Normally, for testing lambda locally, I use
sam local invoke WebhookFunction -e test.json
in test.json
{"body":"test"}
This value is passed to event:
def lambda_handler(event, context):
Now I want to do the equivalent thing with curl. I tried this:
curl -X POST -H "Content-Type: application/json" -d '{"body":"test"}'
However, I think {"body":"test"} is not being passed to event correctly.
I guess I need to set something more.
Can anyone help me?
This won't work unless you have a Lambda RIE (Runtime Interface Emulator) running as a proxy for the Lambda Runtime API locally.
Depending on the language you've written your lambda in, you need to build a docker image and run it locally.
Finally, you can do this:
$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
This command invokes the Lambda function running in the container image and returns a response.
You can use one of the AWS base images for Lambda to build the container image for your function code.
Choose your lambda language and follow the instructions here.
Finally, test your lambda container locally with RIE.
There's a really nice blog post that walks you through the entire process here.
I am trying to follow along with the AWS Getting Started with Lambda tutorial, but I am having issues actually invoking my function using the CLI.
I came across THIS step and got two errors:
An error occurred (InvalidRequestContentException) when calling the
Invoke operation: Could not parse request body into json: Could not
parse payload into json: Unexpected character ((CTRL-CHAR, code 145)):
expected a valid value (JSON String, Number, Array, Object or token
'null', 'true' or 'false') at [Source: (byte[])"��j[�"; line: 1,
column: 2]
and
An error occurred (ResourceNotFoundException) when calling the
GetLogEvents operation: The specified log group does not exist.
I assume the first error is caused by the first command :
aws lambda invoke --function-name my-function --payload '{"key": "value"}' out.json
and the second error accordingly by:
aws logs get-log-events --log-group-name /aws/lambda/my-function --log-stream-name $(cat out) --limit 5
I am more concerned about the first error.
I tried to solve this by looking at the documentation for invoking a Lambda function using the CLI. The most basic example was:
aws lambda invoke --function-name my-function --payload '{ "key": "value" }' response.json
Using this, I get the same error:
Could not parse payload into json: Unexpected character ((CTRL-CHAR, code 145)):
I have asked about this in the AWS Dev Forums, but have not gotten any answer.
There were a few topics about similar errors on Stack Overflow; however, they pointed out a specific character that was missing for the payload to be valid JSON.
According to Google, "CTRL-CHAR" sometimes indicates a line break in your JSON, but there are none in this example. As far as I can tell, the payload is valid JSON.
According to the CLI Documentation, you can also use other data types as payload. So I tried just passing a list:
aws lambda invoke --function-name my-func2 --payload '[2, 3, 4, 5]' out.json
I got the error:
Could not parse request body into json: Could not parse payload into json: Unrecognized token 'Û': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
Just in case anyone else gets stuck at the same point while doing the official Lambda tutorial:
I solved the issue by adding:
--cli-binary-format raw-in-base64-out
as a parameter.
According to the CLI 2 AWS DOCS, this has to do with encoding changes from CLI 1 to CLI 2.
The setting can also be added to the AWS config file, so you don't have to pass it manually every time.
However, I am not sure why the Lambda tutorial doesn't mention this, since it assumes you are using CLI 2 and even guides you through the steps of the installation...
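Some background on why the flag helps (my reading of the CLI v2 docs, so treat it as a hedged explanation): AWS CLI v2 treats binary parameters such as --payload as base64 by default, so it tries to base64-decode the literal JSON and ends up with garbage bytes, which matches the CTRL-CHAR in the error. A small local illustration (assumes GNU base64):

```shell
# Plain JSON is not valid base64, so decoding it (what CLI v2 does by default)
# typically fails or yields garbage bytes:
printf '%s' '{"key": "value"}' | base64 --decode > /dev/null 2>&1 || echo "not valid base64"

# With --cli-binary-format raw-in-base64-out the CLI accepts the raw JSON.
# Alternatively, you could base64-encode the payload yourself and keep the default:
printf '%s' '{"key": "value"}' | base64
```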
For me, this is the way that worked on Windows:
aws lambda invoke --function-name Func2 --payload {\"key1\":\"val1\"} --cli-binary-format raw-in-base64-out out.json
As suggested by MrTony, I added the "--cli-binary-format raw-in-base64-out" argument.
I am following this tutorial in GCP to make my scraper run on a schedule:
https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
It seems like the flow works in this order:
1) Scheduler
2) PubSub
3) Function
4) Compute instance
but when I tried to check whether it is working, it keeps showing this error:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
error: |-
Error: function execution failed. Details:
Attribute 'label' missing from payload
but nowhere can I find how to fill the label into the payload, and I don't know what is happening here.
GCP tutorial sucks...
Can anybody help me with this?
P.S. When I do the npm test:
➜ scheduleinstance git:(master) npm test
> cloud-functions-schedule-instance@0.1.0 test /Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance
> mocha test/*.test.js --timeout=20000
functions_start_instance_pubsub
✓ startInstancePubSub: should accept JSON-formatted event payload with label (284ms)
Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:160:19)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async GoogleAuth.getClient (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:502:17)
at async GoogleAuth.authorizeRequest (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:543:24)
✓ startInstancePubSub: should fail with missing 'zone' attribute
✓ startInstancePubSub: should fail with missing 'label' attribute
✓ startInstancePubSub: should fail with empty event payload
functions_stop_instance_pubsub
✓ stopInstancePubSub: should accept JSON-formatted event payload with label
✓ stopInstancePubSub: should fail with missing 'zone' attribute
✓ stopInstancePubSub: should fail with missing 'label' attribute
✓ stopInstancePubSub: should fail with empty event payload
John Hanley from the comments above:
The error message comes from the code in index.js because you probably did not encode the payload correctly. This is an example of where you should not include pictures; you should copy and paste the actual error. The payload that you created is base64 and we cannot decode that from a picture. You should base64 encode something similar to {"zone":"us-west1-b", "label":"env=dev"}
Your payload decoded: {"zone":"us-west1-b","instance":"workday-instance"}. That does not match what the code expects. Look at the example in my comment again. Base64 encoding is very simple and there are many articles on the Internet.
Thanks to @JohnHanley, I solved the problem from my question. I am sharing the solution in case other people experience the same problem, since the Google tutorial was not user-friendly.
I was following the tutorial to schedule a compute instance so that my scraper can start and stop at given times.
[Scheduler tutorial of GCP]
https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
In this tutorial, the process works as follows:
1. The scheduler calls Pub/Sub
2. Pub/Sub sends a message to Cloud Functions
3. Cloud Functions starts or stops the compute instance
4. I turned on the compute engine at 23:50, used cron inside it to run my scraper at 00:00, and finally turned it off at 01:00.
I will skip all the non-problematic lines of the script and only deal with the part that blocked me for a few days.
After setting up the compute instances and Pub/Sub, you have to deploy the functions:
gcloud functions deploy startInstancePubSub \
--trigger-topic start-instance-event \
--runtime nodejs6
gcloud functions deploy stopInstancePubSub \
--trigger-topic stop-instance-event \
--runtime nodejs6
Here it says --runtime nodejs6, but you have to set it to nodejs8, since nodejs6 has been deprecated and this tutorial doesn't mention that.
After that, you have to test that the functions are callable:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
The --data parameter needs JSON data encoded into base64, as follows:
echo '{"zone":"us-west1-b", "instance":"workday-instance"}' | base64
eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg==
However, when I followed the instructions, it returned this error:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
error: |-
Error: function execution failed. Details:
Attribute 'label' missing from payload
Since I was not used to GCP, I had no idea what 'label' meant. But following the comments from @JohnHanley, I changed the line to
echo '{"zone":"asia-northeast2-a", "label":"env:dev", "instance":"workday-instance"}' | base64
eyJ6b25lIjoiYXNpYS1ub3J0aGVhc3QyLWEiLCAibGFiZWwiOiJlbnY6ZGV2IiwgImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg==
gcloud functions call stopInstancePubSub --data '{"data":"eyJ6b25lIjoiYXNpYS1ub3J0aGVhc3QyLWEiLCAibGFiZWwiOiJlbnY6ZGV2IiwgImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
And this worked like magic. Although I hadn't set any labels on the function, it worked anyway. But to be completely safe, I set the function's labels to "env:dev" to be sure.
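To avoid this class of error in the future, you can decode the payload locally before passing it to --data and confirm the attributes the function checks for are present. This is just a local sanity check, nothing GCP-specific:

```shell
# payload taken from the working call above; decode it and check the attributes
# ('zone', 'label', 'instance') are actually in the JSON
payload='eyJ6b25lIjoiYXNpYS1ub3J0aGVhc3QyLWEiLCAibGFiZWwiOiJlbnY6ZGV2IiwgImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=='
echo "$payload" | base64 --decode
```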
This also applies to the lines below:
gcloud beta scheduler jobs create pubsub startup-workday-instance \
--schedule '0 9 * * 1-5' \
--topic start-instance-event \
--message-body '{"zone":"us-west1-b","instance":"workday-instance"}' \
--time-zone 'America/Los_Angeles'
In this message-body, "label" is missing. I tested both the version with the label in the message and the version without it; although the version without the label reported success, it didn't actually work.
context
I want to build an API which accepts a file and text parameters together using multipart/form-data. AWS Lambda then performs operations on the file and returns some text. For example:
curl -X POST \
http://my-endpoint.com \
-F lang=eng \
-F config=text \
-F image=#/home/myfile.jpg
#lang and config are text, image is file. Text is returned
problem
I can build API Gateway + Lambda or API Gateway + S3 APIs, but I don't see how to combine them for the desired effect.
Edit: By "parallel" I mean that one API call starts this sequence:
POST -> save file in S3 -> read file in Lambda -> process using passed variables -> response
There are a few options here that I can think of.
You can make the lambda function handle the S3 actions for you instead of integrating directly between API Gateway and S3.
Alternatively, you may be able to use WebSockets to keep a connection open. The flow would be: connect to the API (WebSocket established) -> POST upload through API Gateway to S3 -> S3 put triggers Lambda -> Lambda processes and responds via the WebSocket.
The first approach may be more achievable.
I'm trying to set some custom API Gateway responses using aws cli. This is the command I'm using (only the related parameter):
aws apigateway put-gateway-response --response-parameters method.response.header.Access-Control-Allow-Origin='"'"'*'"'"'
The complete command is:
aws apigateway put-gateway-response --rest-api-id w1s3nc4dxd --response-type UNAUTHORIZED --status-code 401 --response-parameters method.response.header.Access-Control-Allow-Origin='"'"'*'"'"' --response-templates '{ "application/json": "{\"errorcode\":401,\"message\":$context.error.messageString}" }' --region eu-west-1
And it fails with:
An error occurred (BadRequestException) when calling the PutGatewayResponse operation: Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: method.response.header.Access-Control-Allow-Origin]
If the command is executed without that parameter, everything works properly. I have also tried the JSON format, with the same result:
--response-parameters '{"method.response.header.Access-Control-Allow-Origin":"'"'*'"'"}'
Any insight? Thanks in advance.
--- EDIT
Just for extra clarification: this fails with all kinds of response parameters; it is not only the Access-Control-Allow-Origin header.
I believe the format for Gateway Response parameters should be gatewayresponse.header.[name]. That will be the map key (destination), and the value (source) is either a static value (like you have) or a mapping expression to method.request.(path|querystring|header).[name] or stageVariables.[name] or context.[name]
Try using:
--response-parameters '{"gatewayresponse.header.Access-Control-Allow-Origin":"'"'*'"'"}'
It worked for me.
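The nested quoting in these commands is easy to get wrong, so it can help to print exactly what the shell will hand to the AWS CLI before running the real command. Note that the value ends up as '*' wrapped in single quotes, which is how API Gateway expects a static (non-mapping) value:

```shell
# Echo the --response-parameters argument to verify the quoting;
# the '"'"'*'"'"' sequence yields a literal '*' (star wrapped in single quotes):
printf '%s\n' '{"gatewayresponse.header.Access-Control-Allow-Origin":"'"'*'"'"}'
# prints: {"gatewayresponse.header.Access-Control-Allow-Origin":"'*'"}
```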