Test lambda locally with curl - amazon-web-services

Normally, to test a Lambda locally, I use
sam local invoke WebhookFunction -e test.json
with this in test.json:
{"body":"test"}
This value is passed to event:
def lambda_handler(event, context):
Now I want to do the equivalent thing with curl.
I tried this:
curl -X POST -H "Content-Type: application/json" -d '{"body":"test"}'
However, I think {"body":"test"} is not correctly passed to event.
I guess I need to set something more.
Can anyone help me?

This won't work unless you have the Lambda RIE (Runtime Interface Emulator) running as a proxy for the Lambda Runtime API locally.
Depending on the language your Lambda is written in, you need to build a Docker image and run it locally.
Finally, you can do this:
$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
This command invokes the Lambda function running in the container image and returns a response.
You can use one of the AWS base images for Lambda to build the container image for your function code: choose your Lambda language and follow the instructions here.
Then test your Lambda container locally with the RIE.
There's a really nice blog post that walks you through the entire process here.
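For the Python handler above, a minimal sketch might look like this (the image name is a placeholder; the AWS base images already bundle the RIE, which listens on port 8080 inside the container):
# Dockerfile, assuming the handler above lives in app.py
FROM public.ecr.aws/lambda/python:3.9
COPY app.py ${LAMBDA_TASK_ROOT}
CMD ["app.lambda_handler"]
Build and run it, then curl the invocations endpoint with your test payload:
$ docker build -t webhook-function .
$ docker run -p 9000:8080 webhook-function
$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"body":"test"}'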

Related

GCP Scheduler Error: function execution failed. Details: Attribute 'label' missing from payload

I am following this tutorial in GCP to make my scraper run on a schedule.
https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
It seems the flow works in this order:
1) Scheduler
2) PubSub
3) Function
4) Compute instance
but when I tried to check whether it is working, it kept showing this error:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
error: |-
Error: function execution failed. Details:
Attribute 'label' missing from payload
But nowhere can I find what to fill in for the label in the payload, and I don't know what is happening here.
The GCP tutorial sucks...
Can anybody help me with this?
P.S. When I run npm test:
➜ scheduleinstance git:(master) npm test
> cloud-functions-schedule-instance@0.1.0 test /Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance
> mocha test/*.test.js --timeout=20000
functions_start_instance_pubsub
✓ startInstancePubSub: should accept JSON-formatted event payload with label (284ms)
Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:160:19)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at async GoogleAuth.getClient (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:502:17)
at async GoogleAuth.authorizeRequest (/Users/yoonhoonsang/Desktop/nodejs-docs-samples/functions/scheduleinstance/node_modules/google-auth-library/build/src/auth/googleauth.js:543:24)
✓ startInstancePubSub: should fail with missing 'zone' attribute
✓ startInstancePubSub: should fail with missing 'label' attribute
✓ startInstancePubSub: should fail with empty event payload
functions_stop_instance_pubsub
✓ stopInstancePubSub: should accept JSON-formatted event payload with label
✓ stopInstancePubSub: should fail with missing 'zone' attribute
✓ stopInstancePubSub: should fail with missing 'label' attribute
✓ stopInstancePubSub: should fail with empty event payload
John Hanley, from the comments above:
"The error message comes from the code in index.js because you probably did not encode the payload correctly. This is an example where you should not include pictures; you should copy and paste the actual error. The payload that you created is base64 and we cannot decode that from a picture. You should base64 encode something similar to {"zone":"us-west1-b", "label":"env=dev"}.
Your payload decoded: {"zone":"us-west1-b","instance":"workday-instance"}. That does not match what the code expects. Look at the example in my comment again. Base64 encoding is very simple and there are many articles on the Internet."
Thanks to @JohnHanley, I solved the problem in my question. I am sharing the solution in case other people run into the same problem, since the Google tutorial was not user-friendly.
I was following the tutorial to schedule a compute instance so that my scraper can start and stop at given times.
[Scheduler tutorial of GCP]
https://cloud.google.com/scheduler/docs/start-and-stop-compute-engine-instances-on-a-schedule
In this tutorial, the process works as follows:
1. The Scheduler calls Pub/Sub.
2. Pub/Sub sends a message to the Cloud Function.
3. The Cloud Function starts or stops the compute instance.
4. I turned on the compute engine at 23:50, used cron inside the compute engine to run my scraper at 00:00, and finally turned off the compute engine at 1:00.
I will skip all the non-problematic parts of the script and only deal with the part that made me sick for a few days.
After setting up the compute instances and Pub/Sub, you have to deploy the functions:
gcloud functions deploy startInstancePubSub \
--trigger-topic start-instance-event \
--runtime nodejs6
gcloud functions deploy stopInstancePubSub \
--trigger-topic stop-instance-event \
--runtime nodejs6
Here it says --runtime nodejs6, but you have to set it to nodejs8, since nodejs6 has been deprecated and the tutorial doesn't mention that (corrected command shown below).
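For example, the corrected deploy for the stop function would be:
gcloud functions deploy stopInstancePubSub \
--trigger-topic stop-instance-event \
--runtime nodejs8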
After that, you have to test that the functions are callable:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
The --data parameter needs the JSON payload base64-encoded, like so:
echo '{"zone":"us-west1-b", "instance":"workday-instance"}' | base64
eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg==
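(To double-check what you encoded, you can decode it back:)
echo 'eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg==' | base64 --decode
which should print back the {"zone": ..., "instance": ...} JSON you started from.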
However, when I followed the instructions, it returned this error:
gcloud functions call stopInstancePubSub \
--data '{"data":"eyJ6b25lIjoidXMtd2VzdDEtYiIsImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
error: |-
Error: function execution failed. Details:
Attribute 'label' missing from payload
Since I was not used to GCP, I had no idea what 'label' meant. But following the comments from @JohnHanley, I changed the line to:
echo '{"zone":"asia-northeast2-a", "label":"env:dev", "instance":"workday-instance"}' | base64
eyJ6b25lIjoiYXNpYS1ub3J0aGVhc3QyLWEiLCAibGFiZWwiOiJlbnY6ZGV2IiwgImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg==
gcloud functions call stopInstancePubSub --data '{"data":"eyJ6b25lIjoiYXNpYS1ub3J0aGVhc3QyLWEiLCAibGFiZWwiOiJlbnY6ZGV2IiwgImluc3RhbmNlIjoid29ya2RheS1pbnN0YW5jZSJ9Cg=="}'
And this worked like magic. Although I hadn't set any labels on the functions, it worked anyway. But to be on the safe side, I set the functions' labels to "env:dev".
This actually extends to the lines below:
gcloud beta scheduler jobs create pubsub startup-workday-instance \
--schedule '0 9 * * 1-5' \
--topic start-instance-event \
--message-body '{"zone":"us-west1-b","instance":"workday-instance"}' \
--time-zone 'America/Los_Angeles'
In this message body, "label" is missing. I tested both a with-label and a without-label version of the message; it turned out that although the without-label version reported success, it actually did nothing.
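For reference, a corrected scheduler command with the label included would look like this (a sketch based on my working payload above; adjust the zone and label to your setup):
gcloud beta scheduler jobs create pubsub startup-workday-instance \
--schedule '0 9 * * 1-5' \
--topic start-instance-event \
--message-body '{"zone":"us-west1-b","label":"env:dev","instance":"workday-instance"}' \
--time-zone 'America/Los_Angeles'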

How to create API in AWS which accesses S3 and lambda in parallel?

context
I want to build an API which accepts a file and text parameters together using multipart/form-data. AWS Lambda then performs operations on the file and returns some text. For example:
curl -X POST \
http://my-endpoint.com \
-F lang=eng \
-F config=text \
-F image=@/home/myfile.jpg
# lang and config are text, image is a file. Text is returned.
problem
I can build API Gateway + Lambda or API Gateway + S3 APIs, but I'm not getting how to combine them for the desired effect.
Edit: By parallel I mean one API call starts this sequence:
POST -> save file in S3 -> read file in Lambda -> process using the passed variables -> response
There are a few options here that I can think of.
You can make the Lambda function handle the S3 actions for you instead of integrating directly between API Gateway and S3.
Alternatively, you may be able to use WebSockets to keep a connection open. The flow would be: connect to the API (WebSocket established) -> POST to API Gateway -> S3 put triggers Lambda -> Lambda processes and responds via the WebSocket.
The first approach may be more achievable.
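A rough sketch of the first approach in Python (assuming a Lambda proxy integration; the bucket name is a placeholder, and real multipart parsing of the lang/config/image fields is left out):
import base64
import boto3

s3 = boto3.client("s3")
BUCKET = "my-upload-bucket"  # placeholder bucket name

def lambda_handler(event, context):
    # API Gateway proxy integrations deliver binary bodies base64-encoded
    raw = event["body"]
    body = base64.b64decode(raw) if event.get("isBase64Encoded") else raw.encode()
    # A real handler would parse the multipart body here to split out
    # the lang/config fields and the image bytes before uploading.
    key = "uploads/image.jpg"
    s3.put_object(Bucket=BUCKET, Key=key, Body=body)
    # ...process the file, then return the resulting text...
    return {"statusCode": 200, "body": "processed " + key}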

How to schedule task to call gRPC method?

I have a .NET server running in Google Kubernetes Engine. It is configured to use gRPC through Google Cloud Endpoints. Now I need to schedule a task to call my gRPC method once per day.
The first thing I tried was to use Google Cloud Scheduler to call HTTP methods directly. For that I have:
Set up HTTP-to-gRPC transcoding on my server to call my gRPC method through HTTP.
Created and enabled an SSL certificate as described here.
Created a service account in the IAM & admin console with the Service Account Token Creator and Service Account User permissions.
Created a Cloud Scheduler job with my URL and an Auth header with an OIDC token for the service account created above.
Deployed a Google Cloud Endpoints configuration with the following parameters (among others):
authentication:
  providers:
  - id: google_service_account
    issuer: MY_SERVICE_ACCOUNT_EMAIL
    jwks_uri: https://www.googleapis.com/robot/v1/metadata/x509/MY_SERVICE_ACCOUNT_EMAIL
  rules:
  - selector: "*"
    requirements:
    - provider_id: google_service_account
After that, when I run the scheduler job, it returns "Failed". The logs show an ERROR with status UNKNOWN.
The second thing I tried was to use Google Cloud Scheduler to publish a message to a Pub/Sub topic with my server as the subscriber.
That was unsuccessful too, because I can't verify ownership of the Google Cloud Endpoints domain. I asked a related question here: How to verify ownership of Google Cloud Endpoints service URL?
Now the question: what is the best way to schedule a task that calls a gRPC method, given the following environment:
.NET server running on GKE
gRPC
Automated periodic calls of that task (I can call it manually, but that's not the point)
So you were able to make an HTTP call manually, but not automatically via Google Cloud Scheduler, is that correct?
If so, check whether the request reaches the Cloud Endpoints proxy in the Cloud Console Endpoints logs; that may give you some hints.
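One way to peek at those logs from the CLI might be something like this (a sketch; it assumes the standard Cloud Endpoints monitored resource type "api"):
gcloud logging read 'resource.type="api"' --limit=10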
Distributed scheduler
For more details, refer to the source code: Distributed scheduler
This application can be run on different hosts and offers functionality to schedule execution of an arbitrary command at a particular time or periodically.
There are two ways to communicate with the application: gRPC and REST. The remote interfaces are specified in the dsched.proto file.
The corresponding REST API can also be found there in the form of API annotations. We also provide generated Swagger files.
To specify task execution timing, we use the notation adopted by cron.
Scheduled tasks are stored in a file and loaded automatically during startup.
Building
Install gRPC
Install gRPC gateway
To parse crontab statements and schedule task execution, we use the gopkg.in/robfig/cron.v2 library, so it should be installed as well: go get -u gopkg.in/robfig/cron.v2. Documentation can be found here.
Get the dsched package: go get -u gitlab.com/andreynech/dsched
Now it is possible to run the standard go build command in the dscheduler and gateway directories to generate binaries for the scheduler and the REST/JSON API gateway. It might also be helpful to examine our CI configuration file to see how we set up the build environment.
Running
All the scheduling functionality is implemented by the dscheduler executable, so it can be run on system startup or on demand. As described by dscheduler --help, there are two command line parameters:
-i string - File name to store the task list (default "/var/run/dscheduler.db")
-p string - Endpoint to listen on (default ":50051")
If there is a need to offer the REST/JSON API, the gateway application located in the gateway directory should be run. It can reside on the same host as dscheduler, but typically it would be another host which is accessible over HTTP from outside and can at the same time talk to dscheduler running on the internal network. This setup was also the reason to split the scheduler and gateway into two executables. gateway is a mostly generated application and supports several command-line parameters, described by running gateway --help. An important parameter is -sched_endpoint string, which is the endpoint of the Scheduler service (default "localhost:50051"). It specifies the host name and port where dscheduler is listening for requests.
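For illustration, using the flags described above, a typical startup could look like this (scheduler-host is a placeholder for the machine running dscheduler):
$ ./dscheduler -i /var/run/dscheduler.db -p :50051
$ ./gateway -sched_endpoint scheduler-host:50051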
Scheduling tasks (testing)
There are three ways to control the scheduler server:
Using the Go client implemented in the cli/ directory
Using the Python client implemented in the py_cli directory
Using the REST/JSON API gateway and curl
The Go and Python clients have a similar set of command line parameters.
$ ./cli --help
Usage of cli:
-a string
The command to execute at time specified by -c parameter
-c string
Statement in crontab format describes when to execute the command
-e string
Host:port to connect (default "localhost:50051")
-l List scheduled tasks
-p Purge all scheduled tasks
-r int
Remove the task with specified id from schedule
-s Schedule task. -c and -a arguments are required in this case
They use the gRPC protocol to talk to the scheduler server. Here are several example invocations:
$ ./cli -l                                # list currently scheduled tasks
$ ./cli -s -c "@every 0h00m10s" -a "df"   # schedule the df command for execution every 10 seconds
$ ./cli -s -c "0 30 * * * *" -a "ls -l"   # schedule ls -l to run at minute 30 of every hour
$ ./cli -r 3                              # remove the task with ID 3
$ ./cli -p                                # remove all scheduled tasks
It is also possible to use curl to invoke dscheduler functionality over the REST/JSON API gateway. Assuming that the dscheduler and gateway applications are running, here are some invocations to list, add and remove scheduling entries from the same host (localhost):
curl 'http://localhost:8080/v1/scheduler/list'   # list currently scheduled tasks
curl -d '{"id":0, "cron":"@every 0h00m10s", "action":"ls"}' -X POST 'http://localhost:8080/v1/scheduler/add'   # schedule the ls command for execution every 10 seconds
curl -d '{"id":0, "cron":"0 30 * * * *", "action":"ls -l"}' -X POST 'http://localhost:8080/v1/scheduler/add'   # schedule ls -l to run at minute 30 of every hour
curl -d '{"id":2}' -X POST 'http://localhost:8080/v1/scheduler/remove'   # remove the task with ID 2
curl -X POST 'http://localhost:8080/v1/scheduler/removeall'   # remove all scheduled tasks
All changes are automatically saved to the file.
Thoughts on scheduler service discovery
In large deployment scenarios (hundreds of hosts), it might be challenging to find out all the IP addresses and ports where the scheduler service is started. It would be fairly easy to add support for Zeroconf (Bonjour/Avahi) technology to simplify service discovery. As an alternative, it might be possible to implement something similar to the CORBA Naming Service, where running services register themselves and the location of the naming service is well known. We decided to collect feedback before settling on a particular service discovery implementation, so your input is very welcome!

Using HyperLedger Fabric with C++ Application

So I am considering HyperLedger Fabric for use with an application I have written in C++. From my understanding, the interactions, i.e. posting and retrieving data, are all done in chaincode; in all of the examples I have seen, this is invoked using the CLI docker container.
I simply want to be able to store data produced by my application on a blockchain.
My question is how to invoke the chaincode externally; surely this is something that can be done. I saw that there was a REST SDK, but it is no longer supported, so I don't want to go near it, to be honest. What other options are available?
Thanks!
There are two official SDKs you can try out.
Fabric Java SDK
Node JS SDK
As correctly mentioned by @Ajaya Mandal, you can use the SDKs to automate the invoking process. For example, you can start the Node app as written in app.js of the balance transfer example and hit the API as shown in the ./testAPI.sh file.
echo "POST invoke chaincode on peers of Org1 and Org2"
echo
VALUES=$(curl -s -X POST \
http://localhost:4000/channels/mychannel/chaincodes/mycc \
-H "authorization: Bearer $ORG1_TOKEN" \
-H "content-type: application/json" \
-d "{
\"peers\": [\"peer0.org1.example.com\",\"peer0.org2.example.com\"],
\"fcn\":\"move\",
\"args\":[\"a\",\"b\",\"10\"]
}")
Here you can add your arguments and pass them as you wish. You can use this thread to see how to send an HTTP request from C++.
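For the C++ side, a minimal sketch using libcurl might look like this (the endpoint, token and JSON body are the placeholders from the testAPI.sh example above):
#include <curl/curl.h>
#include <string>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    // Same JSON body as the testAPI.sh example above
    std::string body = R"({"peers":["peer0.org1.example.com","peer0.org2.example.com"],)"
                       R"("fcn":"move","args":["a","b","10"]})";

    struct curl_slist *headers = nullptr;
    headers = curl_slist_append(headers, "authorization: Bearer ORG1_TOKEN"); // placeholder token
    headers = curl_slist_append(headers, "content-type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:4000/channels/mychannel/chaincodes/mycc");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    // The response body is written to stdout by default
    CURLcode res = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}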

Passing JSON when invoking function using serverless

I am running a Lambda function written in Go using Serverless, and I want to pass a couple of parameters to it when it's invoked.
Here's the struct I created to receive the request:
type RequestStruct struct {
StartAt int `json:"startAt"`
EndAt int `json:"endAt"`
}
And in the handler I'm trying to print out the values:
func Handler(ctx context.Context,request RequestStruct) (Response, error) {
fmt.Printf("Request: %v",request)
I tried invoking it using the --raw option:
serverless invoke -f orders --raw -d '{"startAt":1533513600,"endAt":1534118399}'
and I tried wrapping it in double quotes instead:
serverless invoke -f orders --raw -d "{startAt:1533513600,endAt:1534118399}"
serverless invoke -f orders --raw -d "{\"startAt\":1533513600,\"endAt\":1534118399}"
I received a marshal error with all three:
{
"errorMessage": "json: cannot unmarshal string into Go value of type main.RequestStruct",
"errorType": "UnmarshalTypeError"
}
I'm not sure what I am doing wrong, and I can't find any examples for this online; there's only this Serverless doc about how to invoke and this AWS doc about how to handle the event in Go.
Update
I tried invoking the event from the AWS Console and it worked, so odds are the issue is in the serverless invoke command.
I found a way around this by having my JSON in a file rather than in the command itself. This doesn't explain the issue I'm experiencing in the question, but it's a way to invoke the function with JSON.
I added an events/startAndEnd.json file that contains my JSON data:
{
"startAt":1533513600,
"endAt":1534118399
}
And referenced that file in the invoke command: serverless invoke -f orders --path events/startAndEnd.json
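For what it's worth, the Serverless docs describe --raw as passing the data as a raw string even when it is JSON, which matches the "cannot unmarshal string" error above. So a likely fix (an untested sketch) is simply dropping the flag:
serverless invoke -f orders -d '{"startAt":1533513600,"endAt":1534118399}'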
In case you hit this issue when running the command via npm: I also had a similar error when invoking it with:
"invoke": "serverless invoke --function myfunction --data \"{ \"Records\": []}\"",
By changing the outer double quotes around the data to single quotes, it suddenly started working:
"invoke": "serverless invoke --function myfunction --data '{ \"Records\": []}'",