So I am considering Hyperledger Fabric for use with an application I have written in C++. From my understanding, the interactions (i.e. posting and retrieving data) are all done in chaincode, and in all of the examples I have seen this is invoked through the CLI interface docker container.
I simply want to be able to store data produced by my application on a blockchain.
My question is: how do I invoke the chaincode externally? Surely this is something that can be done. I saw that there was a REST SDK, but it is no longer supported, so I don't want to go near it, to be honest. What other options are available?
Thanks!
There are two official SDKs you can try out:
Fabric Java SDK
Node.js SDK
As correctly mentioned by @Ajaya Mandal, you can use the SDKs to automate the invocation process. For example, you can start the Node app as written in app.js of the balance transfer example and hit the API as shown in the ./testAPI.sh file.
echo "POST invoke chaincode on peers of Org1 and Org2"
echo
VALUES=$(curl -s -X POST \
  http://localhost:4000/channels/mychannel/chaincodes/mycc \
  -H "authorization: Bearer $ORG1_TOKEN" \
  -H "content-type: application/json" \
  -d "{
    \"peers\": [\"peer0.org1.example.com\",\"peer0.org2.example.com\"],
    \"fcn\":\"move\",
    \"args\":[\"a\",\"b\",\"10\"]
  }")
Here you can add your own arguments and pass them as you wish. You can use this thread to see how to send an HTTP request from C++.
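If it helps, here is a minimal sketch of the same invoke call in Python; it assumes the balance transfer app is running on localhost:4000 and that ORG1_TOKEN holds a JWT obtained by enrolling a user, as testAPI.sh does. From C++ you would issue the same request with an HTTP client library such as libcurl.

import json
import urllib.request

# Hypothetical placeholder: enroll a user with the balance transfer app
# first and paste the returned JWT here.
ORG1_TOKEN = "..."

payload = {
    "peers": ["peer0.org1.example.com", "peer0.org2.example.com"],
    "fcn": "move",
    "args": ["a", "b", "10"],
}
req = urllib.request.Request(
    "http://localhost:4000/channels/mychannel/chaincodes/mycc",
    data=json.dumps(payload).encode(),
    headers={
        "authorization": f"Bearer {ORG1_TOKEN}",
        "content-type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())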
Normally, for testing lambda locally, I use
sam local invoke WebhookFunction -e test.json
In test.json:
{"body":"test"}
This variable is passed to event:
def lambda_handler(event, context):
Now I want to do the equivalent thing with curl.
I tried this.
curl -X POST -H "Content-Type: application/json" -d '{"body":"test"}'
However, I think {"body":"test"} is not being passed correctly to event.
I guess I need to set something more.
Can anyone help me?
This won't work unless you have the Lambda RIE (Runtime Interface Emulator) running locally as a proxy for the Lambda Runtime API.
Depending on the language your lambda is written in, you need to build a Docker image and run it locally.
Finally, you can do this:
$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
This command invokes the Lambda function running in the container image and returns a response.
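As a rough equivalent of sam local invoke -e test.json, you can post the contents of test.json to that endpoint. A small Python sketch, assuming the container is running with port 9000 mapped as above:

import json
import urllib.request

# Load the same event file used with `sam local invoke -e test.json`
with open("test.json") as f:
    event = json.load(f)

# Default invoke URL exposed by the Runtime Interface Emulator
url = "http://localhost:9000/2015-03-31/functions/function/invocations"
req = urllib.request.Request(
    url,
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # Prints whatever lambda_handler returned
    print(resp.read().decode())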
You can use one of the AWS base images for Lambda to build the container image for your function code.
Choose your lambda language and follow the instructions here.
Finally, test your lambda container locally with RIE.
There's a really nice blog post that walks you through the entire process here.
I'm using the official GCP PubSub emulator to test integration locally.
I'd like to send messages via classic curl/Postman tools, but it is getting complicated because this emulator requires encryption of incoming messages.
For instance, if we send it like this:
curl --location --request POST 'http://localhost:8091/v1/projects/my-project/topics/transactions:publish' \
--header 'Content-Type: application/json' \
--data-raw '{"messages":[{"data":"{\"foo\":\"baz\"}","attributes":{}}]}'
Then, I'm getting 400:
{
  "error": {
    "code": 400,
    "message": "Payload isn't valid for request.",
    "status": "INVALID_ARGUMENT"
  }
}
due to invalid incoming messages. It requires encryption, and if I sniff and replay an encrypted body, it works.
But it is cumbersome to encrypt messages when running it locally.
To disable encryption in GCP I could follow this guide, but it is not applicable to local emulator runs: there is no GCP environment, or I don't know how to do it.
Are there any options to disable emulator decryption? If not, where should I report this? There is no GitHub project for the emulator.
OK, it took me some time to understand, but I think you have mixed up two things: encryption and encoding.
The data value in Pub/Sub isn't expected to be encrypted, but encoded in base64. No encryption is required here. Base64 is a raw encoding that prevents data loss and sidesteps compatibility issues with character encodings, special characters, and binary data.
Note: on your local computer with the Pub/Sub emulator, the data is not encrypted at rest or in transit. On Google Cloud, with the Pub/Sub service, the data is encrypted in transit and at rest.
With curl you can use this command (on a Linux OS):
curl --location --request POST 'http://localhost:8091/v1/projects/my-project/topics/transactions:publish' \
--header 'Content-Type: application/json' \
--data-raw "{\"messages\":[{\"data\":\"$(echo "{\"foo\":\"baz\"}" | base64 -)\",\"attributes\":{}}]}"
Yes, the backslashes are annoying...
I don't know how to do it with Postman.
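If the shell escaping gets out of hand, a small Python sketch does the same base64 encoding without the quoting pain, assuming the emulator is listening on localhost:8091 as above:

import base64
import json
import urllib.request

# The message data must be base64-encoded (encoding, not encryption)
payload = json.dumps({"foo": "baz"}).encode()
body = json.dumps({
    "messages": [
        {"data": base64.b64encode(payload).decode(), "attributes": {}}
    ]
}).encode()

url = "http://localhost:8091/v1/projects/my-project/topics/transactions:publish"
req = urllib.request.Request(
    url, data=body, headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())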
Context
I want to build an API which accepts a file and text parameters together using multipart/form-data. AWS Lambda then performs operations on the file and returns some text. For example:
curl -X POST \
http://my-endpoint.com \
-F lang=eng \
-F config=text \
-F image=@/home/myfile.jpg
#lang and config are text, image is file. Text is returned
Problem
I can build API Gateway + Lambda or API Gateway + S3 APIs, but I'm not seeing how to combine them for the desired effect.
Edit: by parallel I mean that one API call starts this sequence:
POST -> save file in S3 -> read file in Lambda -> process using passed variables -> response
There are a few options here that I can think of.
You can make the lambda function handle the S3 actions for you instead of integrating directly between API Gateway and S3.
Alternatively, you may be able to use WebSockets to keep a connection open. The flow would be: connect to API (WebSocket established) -> POST to the API Gateway endpoint -> S3 put triggers Lambda -> Lambda processes and responds via the WebSocket.
The first approach may be more achievable.
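For that first approach, here is a rough sketch of what the handler could look like. It assumes an API Gateway proxy integration with binary media types enabled, so the multipart body arrives base64-encoded; the bucket name and field handling are illustrative assumptions, not a reference implementation.

import base64
import email
import boto3

s3 = boto3.client("s3")
BUCKET = "my-upload-bucket"  # hypothetical bucket name

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the multipart body base64-encoded
    headers = {k.lower(): v for k, v in event["headers"].items()}
    body = base64.b64decode(event["body"])

    # Reuse the stdlib MIME parser to split the multipart/form-data payload
    raw = (b"Content-Type: " + headers["content-type"].encode()
           + b"\r\n\r\n" + body)
    msg = email.message_from_bytes(raw)

    fields = {}
    for part in msg.get_payload():
        name = part.get_param("name", header="content-disposition")
        if part.get_filename():
            # The file part (e.g. "image"): store it in S3
            s3.put_object(Bucket=BUCKET, Key=part.get_filename(),
                          Body=part.get_payload(decode=True))
        else:
            # Plain text fields such as "lang" and "config"
            fields[name] = part.get_payload(decode=True).decode()

    # Process the file using the passed variables, then respond with text
    return {"statusCode": 200, "body": f"received fields {fields}"}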
I am using the coinpayments.net payment API for my project now.
Here is an example bitcoin transaction:
https://www.blockchain.com/btc/tx/25ecdc29903aa8f80efb51a6b41ac036a91fe441aefd0d26df383827b9578cae
Here the transaction ID is 25ecdc29903aa8f80efb51a6b41ac036a91fe441aefd0d26df383827b9578cae.
The sender address is bc1q3q2jw046t888slq9rrg6ypwfna7ellkxh0ytss.
I want to get this sender address programmatically from the TxID.
If you know of any solution or API, please let me know.
I have checked the coinpayments.net API, but somehow they don't provide the sender address in the webhook endpoint. So I am trying to find it via an external API or some other solution.
The purpose of this is that I want to send some BTC back to the sender every month without asking every customer for a withdrawal address.
You can use any scripting language with JSON support, like Perl, JavaScript, or Python.
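For instance, a minimal Python sketch against the blockchain.info transaction endpoint (the same one the jq command below queries):

import json
import urllib.request

txid = "25ecdc29903aa8f80efb51a6b41ac036a91fe441aefd0d26df383827b9578cae"
url = f"https://blockchain.info/tx/{txid}?format=json"

with urllib.request.urlopen(url) as resp:
    tx = json.load(resp)

# Each input's prev_out carries the address that funded the transaction
print(tx["inputs"][0]["prev_out"]["addr"])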
Or simply use a command-line tool like jq:
curl -s https://blockchain.info/tx/25ecdc29903aa8f80efb51a6b41ac036a91fe441aefd0d26df383827b9578cae\?format\=json | jq '.inputs[0]."prev_out".addr'
To get familiar with jq, use https://jqplay.org/
Another way is to install your own bitcoin node and fetch all the info directly from it.
I have a .NET server running in Google Kubernetes Engine. It is configured to use gRPC through Google Cloud Endpoints. Now I need to schedule a task to call my gRPC method once per day.
The first thing I tried was to use Google Cloud Scheduler to call HTTP methods directly. For that I have:
Set up HTTP-to-gRPC transcoding on my server to call my gRPC method through HTTP.
Created and enabled an SSL certificate as described here.
Created a service account in the IAM & admin console with the Service Account Token Creator and Service Account User permissions.
Created a Cloud Scheduler job with my URL and an Auth header set to an OIDC token for the service account created above.
Deployed a Google Cloud Endpoints configuration with the following parameters (among others):
authentication:
  providers:
  - id: google_service_account
    issuer: MY_SERVICE_ACCOUNT_EMAIL
    jwks_uri: https://www.googleapis.com/robot/v1/metadata/x509/MY_SERVICE_ACCOUNT_EMAIL
  rules:
  - selector: "*"
    requirements:
    - provider_id: google_service_account
After that, when I run the scheduler job it returns the result "Failed". In the logs it writes an ERROR with status UNKNOWN.
The second thing I tried was to use Google Cloud Scheduler to publish a message to a Pub/Sub topic with my server as the subscriber.
That was unsuccessful too, because I can't verify ownership of the Google Cloud Endpoints domain. I asked a related question here: How to verify ownership of Google Cloud Endpoints service URL?
Now the question: what is the best way to schedule a task that calls a gRPC method, given the following environment:
.NET server running on GKE
gRPC
Automated periodic invocation of that task (I can call it manually, but that defeats the purpose)
So you were able to make an HTTP call manually, but not automatically via Google Cloud Scheduler, is that correct?
If so, check whether the request reaches the Cloud Endpoints proxy in the Cloud Console endpoint logging; it may give you some hints.
Distributed scheduler
For more details, refer to the source code: Distributed scheduler
This application can be run on different hosts and offers functionality to schedule execution of an arbitrary command at a particular time or periodically.
There are two ways to communicate with the application: gRPC and REST. The remote interfaces are specified in the dsched.proto file.
The corresponding REST API can also be found there in the form of API annotations. We also provide generated Swagger files.
To specify task execution timing, we use the notation adopted by cron.
Scheduled tasks are stored in a file and loaded automatically during startup.
Building
Install gRPC
Install gRPC gateway
To parse crontab statements and schedule task execution, we use the gopkg.in/robfig/cron.v2 library, so it should also be installed: go get -u gopkg.in/robfig/cron.v2. Documentation can be found here.
Get the dsched package: go get -u gitlab.com/andreynech/dsched
Now it is possible to run the standard go build command in the dscheduler and gateway directories to generate binaries for the scheduler and the REST/JSON API gateway. It might also be helpful to examine our CI configuration file to see how we set up the build environment.
Running
All the scheduling functionality is implemented by the dscheduler executable, so it can be run on system startup or on demand. As described by dscheduler --help, there are two command line parameters:
-i string - File name to store task list (default "/var/run/dscheduler.db")
-p string - Endpoint to listen (default ":50051")
If there is a need to offer the REST/JSON API, the gateway application located in the gateway directory should be run. It can reside on the same host as dscheduler, but typically it would be another host which is accessible over HTTP from outside and can in turn talk to dscheduler running on the internal network. This setup was also the reason to split the scheduler and gateway into two executables. gateway is a mostly generated application and supports several command-line parameters, described by running gateway --help. An important parameter is -sched_endpoint string, which is the endpoint of the Scheduler service (default "localhost:50051"). It specifies the host name and port where dscheduler is listening for requests.
Scheduling tasks (testing)
There are three ways to control the scheduler server:
Using the Go client implemented in the cli/ directory
Using the Python client implemented in the py_cli directory
Using the REST/JSON API gateway and curl
The Go and Python clients have a similar set of command line parameters.
$ ./cli --help
Usage of cli:
-a string
The command to execute at time specified by -c parameter
-c string
Statement in crontab format describes when to execute the command
-e string
Host:port to connect (default "localhost:50051")
-l List scheduled tasks
-p Purge all scheduled tasks
-r int
Remove the task with specified id from schedule
-s Schedule task. -c and -a arguments are required in this case
They use the gRPC protocol to talk to the scheduler server. Here are several example invocations:
$ ./cli -l                               # list currently scheduled tasks
$ ./cli -s -c "@every 0h00m10s" -a "df"  # schedule the df command to run every 10 seconds
$ ./cli -s -c "0 30 * * * *" -a "ls -l"  # schedule the ls -l command to run every 30 minutes
$ ./cli -r 3                             # remove the task with ID 3
$ ./cli -p                               # remove all scheduled tasks
It is also possible to use curl to invoke dscheduler functionality over the REST/JSON API gateway. Assuming that the dscheduler and gateway applications are running, here are some invocations to list, add, and remove scheduling entries from the same host (localhost):
curl 'http://localhost:8080/v1/scheduler/list'    # list currently scheduled tasks
curl -d '{"id":0, "cron":"@every 0h00m10s", "action":"ls"}' -X POST 'http://localhost:8080/v1/scheduler/add'    # schedule the ls command to run every 10 seconds
curl -d '{"id":0, "cron":"0 30 * * * *", "action":"ls -l"}' -X POST 'http://localhost:8080/v1/scheduler/add'    # schedule ls -l to run every 30 minutes
curl -d '{"id":2}' -X POST 'http://localhost:8080/v1/scheduler/remove'    # remove the task with ID 2
curl -X POST 'http://localhost:8080/v1/scheduler/removeall'    # remove all scheduled tasks
All changes are automatically saved to the task list file.
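The same REST endpoints can also be scripted. A short Python sketch of the add and list calls shown above, assuming the gateway is listening on localhost:8080:

import json
import urllib.request

BASE = "http://localhost:8080/v1/scheduler"

def add_task(cron, action):
    # id 0 lets the scheduler assign a fresh task ID
    body = json.dumps({"id": 0, "cron": cron, "action": action}).encode()
    req = urllib.request.Request(
        BASE + "/add", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def list_tasks():
    with urllib.request.urlopen(BASE + "/list") as resp:
        return json.load(resp)

add_task("@every 0h00m10s", "ls")  # run ls every 10 seconds
print(list_tasks())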
Thoughts on scheduler service discovery
In large deployment scenarios (hundreds of hosts, say) it might be a challenging problem to find out all the IP addresses and ports where the scheduler service is started. It would be fairly easy to add support for Zeroconf (Bonjour/Avahi) technology to simplify service discovery. As an alternative, it might be possible to implement something similar to the CORBA Naming Service, where running services register themselves and the location of the naming service is well known. We decided to collect feedback before settling on a particular service discovery implementation, so your input is very welcome!