I am currently using Hyperledger Fabric together with Composer as a platform to test the secure transfer of assets or data while keeping their validity intact.
Right now I'm working on a use case where a node sends an asset to another node, but the asset first passes through a middle node, which makes some changes before it reaches the receiving node. Is there any way for me, or for the receiving node, to verify that this asset came from the first node?
E.g.: X intends to send a message to Z, but first sends the message to Y; Y simplifies the message and sends it to Z. Is there any way for Z to check that the message originated from X?
I think you could use the getCurrentParticipant() call from within your TP (transaction processor) function to verify the identity that submitted the transaction. Does that help?
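For illustration, a minimal sketch of what that check could look like inside a Composer transaction processor function (JavaScript, as Composer TP functions are; the transaction type, namespace, and the originator field on the asset are hypothetical, not part of your model):

/**
 * Hypothetical transaction that passes a message asset along.
 * @param {org.example.ForwardMessage} tx - the submitted transaction
 * @transaction
 */
async function forwardMessage(tx) {
  // getCurrentParticipant() resolves to the participant bound to the
  // identity that submitted this transaction.
  const submitter = getCurrentParticipant();

  // Stamp the originator on the asset the first time it is sent (by X),
  // so the final recipient (Z) can verify where it came from.
  if (!tx.message.originator) {
    tx.message.originator = submitter.getFullyQualifiedIdentifier();
  } else if (tx.message.originator !== submitter.getFullyQualifiedIdentifier()) {
    // A middle node (Y) may modify the message, but cannot claim to be X.
    console.log('Asset was originated by ' + tx.message.originator);
  }
  const registry = await getAssetRegistry('org.example.Message');
  await registry.update(tx.message);
}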
I am creating a custom skill that will query a custom device for a parameter. In this case, it is the voltage.
The device is a node in Node-RED, so it is not a physical device but a virtual one.
The device will be linked to my account. Here is what I think the workflow would be:
Hello Alexa, ask test app what is the motor voltage?
I get a session request that goes to my custom intent and executes the corresponding function on the Lambda server
----- here is the part that is fuzzy ----
Using some device ID, the Lambda server sends out a request to the virtual device
The node for the device gets this request (most likely some sort of JSON object), parses it, and sends back the requested parameter that is stored on the Node-RED server (for the sake of discussion, let's say it is a constant number on the server)
The Lambda server gets this response and forwards it to the Alexa service
Alexa - The voltage of the motor is twelve volts
So basically, how do I do this? Is this the correct workflow for Alexa, or is there a different one? What components (besides those needed for Alexa to run) will I need? I believe that I can get the ID of the device in the handler_interface.
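For what it's worth, the fuzzy part can be as simple as an HTTP round trip, assuming the Node-RED flow exposes an HTTP In endpoint for the device. A rough TypeScript sketch of the Lambda handler (the endpoint URL, device ID, intent name, and response shape are all assumptions):

// Hypothetical handler for a GetVoltageIntent (Node.js 18+ Lambda runtime,
// which provides a global fetch).
export const handler = async (_event: any) => {
  const deviceId = "motor-1"; // assumption: known via account/device linking
  // Ask the virtual device: a Node-RED "HTTP In" node answering this URL.
  const res = await fetch(`https://my-node-red-host/device/${deviceId}/voltage`);
  const { voltage } = (await res.json()) as { voltage: number };
  // Plain Alexa response envelope, spoken back by the Alexa service.
  return {
    version: "1.0",
    response: {
      outputSpeech: { type: "PlainText", text: `The voltage of the motor is ${voltage} volts` },
      shouldEndSession: true,
    },
  };
};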
Running a test node on GCP, using Docker 9.9.4, Ubuntu, a Postgres DB, and Infura. I had issues with public/private IPs, but once I cleared that up my node was up and running. I am now throwing the error below repeatedly, potentially due to the blockchain connection. How do I fix this?
[ERROR] HeadTracker: dropping head 26085153 with hash 0xf50e19099b7e343829935d70dd7d86c5bc0398286b7a4e4f32ac033ac60c3733 because queue is full. WARNING: Your node is overloaded and may start missing jobs. logger/default.go:155 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Errorf
This log output is related to an overload of your blockchain connection.
This notification is usually related to the use of public websocket connections and/or a free tier of a third-party NaaS (Node-as-a-Service) provider. To fix this connection issue you can either run your own full node or change the tier of the third-party NaaS provider. It is also recommended to use Chainlink version 0.10.8 or higher, as the HeadTracker has been revised there and performs more efficiently.
In regard to the question, let me try to give you a small technical overview, which may clarify the load a Chainlink node puts on its remote full node:
Your Chainlink node establishes a connection to a full node. The Chainlink node then initiates various subscriptions, which rely on the bidirectional communication that the websocket protocol enables. More precisely, this means that the Chainlink node is informed whenever a certain "state" of a subscription changes. The node interacts with the full node using JSON-RPC, relying on the following methods internally:
eth_getBlockByNumber, eth_getBalance, eth_getTransactionReceipt, eth_getTransactionCount, eth_getLogs, eth_subscribe, eth_unsubscribe, eth_sendRawTransaction and eth_call
https://ethereum.org/uk/developers/docs/apis/json-rpc/
Most of the Chainlink node's interactions are executed during the syncing process via the internal HeadTracker service. This service initiates a "newHeads" subscription in order to process every single incoming new block header.
During this syncing process it uses the JSON-RPC methods eth_getBlockByNumber and eth_getBalance to get all the necessary information from the block, so these two methods are executed for every block. The number of requests therefore depends on the average blocktime of the network the Chainlink node is connected to.
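For reference, the subscription and the per-block lookups are plain JSON-RPC payloads along these lines (the block number and address are placeholders):

{"jsonrpc": "2.0", "id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
{"jsonrpc": "2.0", "id": 2, "method": "eth_getBlockByNumber", "params": ["0x18e0a01", true]}
{"jsonrpc": "2.0", "id": 3, "method": "eth_getBalance", "params": ["0xYourNodeAddress", "latest"]}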
An example would be the Kovan Testnet:
The avg. blocktime here is 6.7 sec, which works out to roughly 86,400 / 6.7 ≈ 12,900 blocks per day; with two calls per block, that is approx. 25,800 daily requests for head tracking alone.
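As a quick sanity check of that figure (numbers taken from the paragraph above):

// Back-of-the-envelope request count for head tracking.
const secondsPerDay = 86_400;
const avgBlockTimeSec = 6.7;  // Kovan average blocktime from above
const callsPerBlock = 2;      // eth_getBlockByNumber + eth_getBalance
const blocksPerDay = secondsPerDay / avgBlockTimeSec;   // ≈ 12,896
console.log(Math.round(blocksPerDay * callsPerBlock));  // ≈ 25,791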
While fulfilling job requests, the node also uses the following methods: eth_getTransactionReceipt, eth_sendRawTransaction, eth_getLogs, eth_subscribe, eth_unsubscribe, eth_getTransactionCount and eth_call, which increases the total number significantly depending on the number of job requests.
It should also be noted that faster blockchains (e.g. Polygon) in particular generate a very high websocket payload, so you have to pay close attention to the quality of the full node connection, as many full nodes cannot handle such a high number of requests permanently.
I am trying to add a topic tree to the header file in order to access the topics every time I launch the MQTT broker. I am using the forward slash "/"
to get into the sub-branches, such as:
Car/Bus/Temp/Fan
Here, Car is the root node; it branches to Bus, which further branches to Temp and, similarly, Fan.
I want to create a topic tree as stated above, with multiple branches, in C++ (Qt Creator), such that it also updates the data whenever there is any change for a particular topic.
Also, as it creates the tree, it should prompt the user with an error message if the topic entered to extract data is incorrect.
Firstly, from the broker's point of view, topics only exist at the point a message is published. The broker checks the topic in the incoming message against the subscription topic patterns* for each connected client (and any ACLs that may be in place) before forwarding the message to those clients that have a match.
There is no concept of pre-populating a broker with a list of topics that will be used.
As for the client, it doesn't need to store the topic tree it wants to publish messages on. It just needs to store the string that represents that topic as this is what any MQTT client library you will be using to publish messages will take as an input.
So you can just use #define to create macros that represent the topics as strings (see the sketch after the footnote).
* clients subscribe to topic patterns, not specific topics, because they may contain wildcards
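To make that concrete, here is the idea sketched in TypeScript with the MQTT.js client (the question targets C++/Qt, where the constants would instead live in a header as #define macros or constexpr strings; the broker URL and payload here are placeholders):

import mqtt from "mqtt";

// The topic tree is just a set of strings; nothing is pre-created on the broker.
const TOPIC_TEMP = "Car/Bus/Temp";
const TOPIC_FAN = "Car/Bus/Temp/Fan";

const client = mqtt.connect("mqtt://localhost:1883"); // placeholder broker

client.on("connect", () => {
  // The topic "exists" the moment this message is published.
  client.publish(TOPIC_FAN, "42");
  // Subscribers use topic patterns, which may contain wildcards:
  client.subscribe("Car/Bus/#"); // matches Temp, Fan, and any deeper branch
});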
I still have some difficulty understanding how the PBFT consensus algorithm works in Hyperledger Fabric 0.6. Are there any papers which describe the PBFT algorithm in a blockchain environment?
Thank you very much for your answers!
While Hyperledger Fabric v0.6 has been deprecated for quite some time (we are working towards the release of v1.1 shortly, as I write this), we have preserved the archived repository, and the protocol specification contains all that you might want to know about how the system works.
It is really too long a description to add here.
Practical Byzantine Fault Tolerance (PBFT) is a protocol developed to
provide consensus in the presence of Byzantine faults.
The PBFT algorithm has three phases. These phases run in sequence to
achieve consensus: pre-prepare, prepare, and commit.
The protocol runs in rounds. In each round, an elected leader node,
called the primary node, handles the communication with the client,
and the protocol progresses through the three previously mentioned
phases. The participants in the PBFT protocol are called replicas;
one of the replicas acts as the primary in each round, and the rest
of the nodes act as backups.
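In the original PBFT paper, the primary for a view v is picked deterministically as p = v mod |R|, so leadership rotates; a one-line sketch makes this concrete:

// Deterministic primary selection: view number modulo replica count.
const primaryFor = (view: number, replicas: number): number => view % replicas;
console.log([0, 1, 2, 3, 4].map(v => primaryFor(v, 4))); // -> [0, 1, 2, 3, 0]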
Pre-prepare:
This is the first phase in the protocol, where the primary node, or
primary, receives a request from the client. The primary node assigns
a sequence number to the request. It then sends the pre-prepare
message with the request to all backup replicas. When the pre-prepare
message is received by the backup replicas, it checks a number of
things to ensure the validity of the message:
First, whether the digital signature is valid.
After this, whether the current view number is valid.
Then, that the sequence number of the operation's request message is valid.
Finally, if the digest/hash of the operation's request message is valid.
If all of these elements are valid, then the backup replica accepts
the message. After accepting the message, it updates its local state
and progresses toward the preparation phase.
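Those four checks can be summarized in a small sketch (the types and the stubbed signature/digest helpers are simplifications for illustration, not the Fabric v0.6 code):

interface PrePrepare { view: number; seq: number; digest: string; request: string }
interface ReplicaState { currentView: number; lowWatermark: number; highWatermark: number }

const digest = (s: string) => `h(${s})`;          // stand-in for a real hash
const signatureValid = (_m: PrePrepare) => true;  // stand-in for signature verification

function acceptPrePrepare(m: PrePrepare, st: ReplicaState): boolean {
  return signatureValid(m)                                    // 1. digital signature
    && m.view === st.currentView                              // 2. current view number
    && m.seq > st.lowWatermark && m.seq <= st.highWatermark   // 3. sequence number in window
    && m.digest === digest(m.request);                        // 4. digest of the request
}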
Prepare:
A prepare message is sent by each backup to all other replicas in the
system. Each backup then waits for at least 2F + 1 prepare messages
to be received from other replicas (F is the maximum number of faulty
nodes the system can tolerate). It also checks whether the prepare
messages contain the same view number, sequence number, and message
digest values. If all these checks pass, then the replica updates its
local state and progresses toward the commit phase.
Commit:
Each replica sends a commit message to all other replicas in the
network. As in the prepare phase, replicas wait for 2F + 1 commit
messages to arrive from other replicas, again checking the view
number, sequence number, and message digest values. If 2F + 1 valid
commit messages are received from other replicas, then the replica
executes the request, produces a result, and finally updates its
state to reflect the commit. If some earlier requests are still
queued up, the replica executes those first, in sequence-number
order, before processing the latest one. Finally, the replica sends
the result to the client in a reply message. The client accepts the
result only after receiving 2F + 1 reply messages containing the
same result.
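To make the 2F + 1 quorums concrete, here is a minimal, non-networked sketch of the per-replica counting logic (a simplified invention for illustration, not the Fabric v0.6 implementation):

type Phase = "prepare" | "commit";
interface Vote { phase: Phase; view: number; seq: number; digest: string; from: number }

// A replica advances a phase once it has seen 2F + 1 matching messages
// for the same (view, sequence number, digest).
class QuorumTracker {
  private seen = new Map<string, Set<number>>();
  constructor(private f: number) {}

  add(v: Vote): boolean {
    const key = `${v.phase}/${v.view}/${v.seq}/${v.digest}`;
    const senders = this.seen.get(key) ?? new Set<number>();
    senders.add(v.from);
    this.seen.set(key, senders);
    return senders.size >= 2 * this.f + 1; // quorum reached?
  }
}

// F = 1 tolerated fault (4 replicas): the third matching vote forms a quorum.
const q = new QuorumTracker(1);
console.log(q.add({ phase: "prepare", view: 0, seq: 5, digest: "d", from: 0 })); // false
console.log(q.add({ phase: "prepare", view: 0, seq: 5, digest: "d", from: 1 })); // false
console.log(q.add({ phase: "prepare", view: 0, seq: 5, digest: "d", from: 2 })); // true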
Using Akka 2.3.14, I'm trying to create an Akka cluster of various services. Until now, I have had all my "services" in one artifact that was clustered across multiple nodes, but now I am trying to break this artifact into multiple services that all exist on the same cluster.
So in breaking this up, we've designed it so that any node in the cluster will first try to connect to the seed nodes. If there is no seed node, it will check whether it is a candidate to run as a seed node (i.e. whether it is on a host that a seed node can be on), in which case it will grab an open seed-node port and become a seed node. So in this sense, any service in the cluster can become a seed node.
At least, that was the idea. Our API into this system, which runs as a separate service, implements a ClusterClient. The initialContacts are set to be the same as the seed nodes. The problem is that the only receptionist actors I can send a message to through the ClusterClient are the actors on the seed nodes.
Here is an example if it helps. Let's say I have a String Service and a Double Service, and the receptionist for each service is a StringActor and a DoubleActor respectively. Now let's say I have a Client Service which sends StringMessages and DoubleMessages to the StringActor and DoubleActor.
So for simplicity, let's say I have two nodes, server1 and server2; then:
seed-nodes = ["akka.tcp://system@server1:2773", "akka.tcp://system@server2:2773"]
My ClusterClient would be initialized like so:
system.actorOf(
  ClusterClient.props(
    Set(
      system.actorSelection("akka.tcp://system@server1:2773/user/receptionist"),
      system.actorSelection("akka.tcp://system@server2:2773/user/receptionist")
    )
  ),
  "clusterClient"
)
Here are the scenarios that are happening for me:
If the StringServices start up on both servers first, then DoubleMessages from the Client Service just disappear into the ether.
If the DoubleServices start up on both servers first, then StringMessages from the Client Service just disappear into the ether.
If the StringService starts up first on serverX and the DoubleService starts up first on serverY, then all StringMessages will be sent to serverX and all DoubleMessages will be sent to serverY, which is not as bad as the above case, but it means it's not really scaling.
This isn't what I expected; it's possible it's just a defect in my code, so I would like to know if this IS expected behavior or not. And if not, is there another Akka concept that could help me with this?
Arguably, I could just make one service type my entry point, like a RoutingService that could accept StringMessages or DoubleMessages, and then send that to the correct service. But if the Client Service can only send messages to the RoutingService instances that are in the initial contacts, then I can't dynamically scale the RoutingService because no matter how many nodes I add the Client Service can only send to the initial contacts.
I'm also thinking about subscribing to ClusterEvents in my Client Service and seeing if I can add and remove initial contacts from my cluster client as nodes are started up in the cluster, but I'm not sure if this is possible, and it feels like there should be a better solution.
This is what I found out upon more troubleshooting, in case it helps anyone else:
The ClusterClient will attempt to connect to the initial contacts in order, and then only sends its messages across that connection. If you are deploying different services on each node, you will have problems, because the messages sent from the ClusterClient will only go to the node that it made its connection to. In this way, you can think of the ClusterClient as a legitimate client: it will connect to a URL that you give it, and then continue to communicate with the server through that URL.
Reading the Distributed Workers example, I realized that my Frontend, or in this case my routing service, should actually be part of the cluster, rather than acting as a client. For this I used the DistributedPubSub method instead.