It's stated that the fn offchain_worker function is called by all nodes after every block import. Imagine that in fn offchain_worker we make an HTTP call to fetch some non-deterministic value from a remote server, and once we get the result we call pub fn onchain_callback to sign a transaction that includes that result in the blockchain state.
If off-chain workers are executed by all validators after each block import, would I end up with one new signed transaction per validator, each with a different result (remember, it is not deterministic)?
Example: my off-chain worker fetches a random number from a remote server and calls back, signing a new transaction with the result. If I have 10 validators in my network... questions:
1. Would I end up with 10 new transactions, each with a different random number?
2. Would it be executed only by the validators, or also by all the full nodes connected to the blockchain?
3. Is it possible to trigger off-chain workers only when a certain extrinsic is included in the block, instead of after every block import?
Yes, if the validators run with the default off-chain worker settings. If this is not desired, your OCW can pick a validator, or introduce a random delay and extra conditions between different runs. We do that for the im-online pallet in the Substrate repo and for off-chain phragmen elections.
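The "pick a validator" idea can be illustrated outside of Substrate. Below is a minimal Python sketch (the function and names are mine, not a Substrate API): every validator derives the same digest from the block hash, so they all agree on which single validator should submit the result for that block, without any coordination.

```python
import hashlib

def is_submitter(block_hash: str, validator_index: int, n_validators: int) -> bool:
    """Deterministically elect one validator per block to submit the result.

    All validators compute the same digest of the block hash, so they all
    agree on who the single submitter is, and only one signed transaction
    is produced per block instead of one per validator.
    """
    digest = int(hashlib.sha256(block_hash.encode()).hexdigest(), 16)
    return digest % n_validators == validator_index

# With 10 validators, exactly one of them elects itself for any given block.
submitters = [i for i in range(10) if is_submitter("0xabc123", i, 10)]
```

A different block hash may elect a different validator, which also spreads the submission load across the set over time.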
Other nodes can opt in with a CLI flag (and most likely extra keys to sign transactions), but you can also put a guard in your OCW code so it runs only when sp_io::offchain::is_validator() == true.
That has to be done manually for now. The off-chain worker has full state access, so it can inspect the events in frame_system and run only when a specific one is present. I believe there are some examples of that in the substrate-recipes repo.
More information here: Role of off-chain workers
If you are using AWS Connect as a contact center solution, one problem that often happens is a toll-free number receiving a higher call volume than your currently staffed agents can handle.
There are ways to handle this by setting queue limits, which can overflow calls to other queues or branch the call flow in other directions.
The solution I propose slows the calls into the queue using only the tools available in Connect, without depending on external services or Lambda.
The aim of this solution is to control the flow of calls into the agent queue based on two metrics: Agents staffed and Agents available.
Below is the contact flow structure; I will explain how each block works.
The first block, Set working queue, sets the queue the calls should go to.
The second block, Get queue metrics, retrieves the real-time queue metrics:
Retrieve metrics from a queue so you can make routing decisions. You can route contacts based on queue status, such as number of contacts in queue or agents available. Queue metrics are aggregated across all channels by default and are returned as attributes. The current queue is used by default. Learn more
The third block, Check contact attributes, is where I check the number of Agents staffed.
This is how the block looks on the inside.
I am retrieving the queue metrics to get the number of Agents staffed.
Now let's assume there are more than 100 agents staffed, which means more than 100 agents are logged in, in any status on their client.
The Check contact attributes block follows the first branch and goes to the second Check contact attributes block.
This is how the second Check contact attributes block looks on the inside.
Here, too, I am retrieving the queue metrics, this time to get the number of Agents available.
In this section I am checking how many agents are in the Available status to answer the call.
If the number of available agents is less than or equal to 20, the call flows back to the Get queue metrics block to run through the logic again.
Here it is important to use a Play prompt block to play an audio clip for about 5 seconds before looping back. If I do not insert this delay, the calls move through the loop faster than the contact flow can handle, and it stops working for any call hitting the flow.
If the number of available agents is greater than 20, the call flows into the Transfer to queue block after the Check staffing block.
The same logic applies to the other branches, with different thresholds depending on the number of agents staffed, as seen in the picture above.
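The branching above can be sketched as a plain function. This is a simplification of the contact-flow logic, not Connect code; the 100-staffed / 20-available thresholds come from the example, while the lower-staffing threshold of 5 is my own assumption standing in for the other branches in the picture.

```python
def route_call(agents_staffed: int, agents_available: int) -> str:
    """Decide where a call goes, mirroring the contact-flow branches.

    Returns "queue" when enough agents are available, otherwise "loop"
    (play a short prompt, re-fetch queue metrics, and check again).
    """
    if agents_staffed > 100:
        # Heavily staffed: only release calls when more than 20 agents are free.
        return "queue" if agents_available > 20 else "loop"
    # Lightly staffed: a proportionally smaller availability threshold
    # (an assumption -- the real flow has more branches for other staffing bands).
    return "queue" if agents_available > 5 else "loop"
```

The "loop" result corresponds to the Play prompt delay plus the jump back to Get queue metrics described above.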
With this solution I do not have to worry about how many agents are actually working at any time of day. The contact flow branches the calls based on Agents staffed and only sends calls in when at least X agents are available to answer.
Hope this solution helps anyone looking for a simple approach.
Here is my case:
When my server receives a request, it triggers distributed tasks, in my case many AWS Lambda functions (the peak count could be 3000).
I need to track each task's progress / status, i.e. pending, running, success, error.
My server could have many replicas.
I still want to know the tasks' progress / status even if any of my server replicas goes down.
My current design:
I chose AWS S3 as my helper.
When a task starts executing, it creates a marker file in a special folder on S3, e.g. a running folder.
When the task fails or succeeds, it moves the marker file from the running folder to a fail folder or a success folder.
I check the marker files on S3 to track the progress of the tasks.
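The marker-file design can be sketched with a tiny in-memory store standing in for the S3 bucket (in production these operations would be boto3 put_object / copy_object / delete_object calls; the folder names are the ones from the design above, and the class is mine):

```python
class MarkerStore:
    """In-memory stand-in for an S3 bucket holding task marker files."""

    def __init__(self):
        self.objects = set()  # keys like "running/task-42"

    def start(self, task_id: str) -> None:
        """A task creates its marker in the running folder when it begins."""
        self.objects.add("running/" + task_id)

    def finish(self, task_id: str, ok: bool) -> None:
        # "Move" = delete the old key and create a new one; S3 has no rename.
        self.objects.discard("running/" + task_id)
        self.objects.add(("success/" if ok else "fail/") + task_id)

    def progress(self) -> dict:
        """Count markers per folder to report overall task progress."""
        counts = {"running": 0, "success": 0, "fail": 0}
        for key in self.objects:
            counts[key.split("/", 1)[0]] += 1
        return counts

store = MarkerStore()
for i in range(5):
    store.start("task-%d" % i)
store.finish("task-0", ok=True)
store.finish("task-1", ok=False)
```

Note that each finished task costs at least two S3 requests (delete plus create), which is exactly where the request-rate concern below comes from.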
The problems:
There is a limit on concurrent access (request rate) to AWS S3, and my case is likely to exceed that limit some day.
Attempted solutions:
I have tried my best to reduce the number of requests to S3.
I don't want to track progress by storing data in my DB, because my DB is already under a heavy workload.
To be honest, it is kind of weird to use marker files on S3 to track the progress of tasks. However, it has worked so far.
Are there any recommendations?
Thanks in advance!
This sounds like a perfect application for persistent event queueing, specifically Kinesis. As each Lambda starts, it generates a "starting" event on Kinesis. When it succeeds or fails, it generates the appropriate event. You could even create progress events along the way if you want to see how far they have gotten.
Your server can then monitor the number of starting events against ending events (success or failure) until the two numbers are equal. It can query the error events to see which processes failed and why. All servers can query the same events without disrupting each other, and any server can go down and recover without losing data.
Make sure to put an origination key on events that are supposed to be grouped together, so they don't get mixed up with a subsequent event. Also, give each Lambda its own key so you can trace progress per Lambda. GUIDs are perfect for this.
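The counting logic this answer describes can be sketched as follows, with plain dicts standing in for Kinesis records (on the producer side these would be boto3 kinesis put_record calls; the field names are my own illustration):

```python
from collections import Counter

def summarize(events, origination_key):
    """Count start vs. terminal events for one request's Lambda fan-out.

    Each event carries the shared origination key (grouping one request's
    batch) and a per-Lambda id, as the answer suggests.
    """
    batch = [e for e in events if e["origination_key"] == origination_key]
    counts = Counter(e["type"] for e in batch)
    ended = counts["success"] + counts["failure"]
    return {
        "started": counts["starting"],
        "ended": ended,
        "done": counts["starting"] == ended,         # all Lambdas finished?
        "failed_lambdas": [e["lambda_id"] for e in batch
                           if e["type"] == "failure"],
    }

events = [
    {"origination_key": "req-1", "lambda_id": "a", "type": "starting"},
    {"origination_key": "req-1", "lambda_id": "b", "type": "starting"},
    {"origination_key": "req-1", "lambda_id": "a", "type": "success"},
    {"origination_key": "req-1", "lambda_id": "b", "type": "failure"},
]
summary = summarize(events, "req-1")
```

Because the events persist in the stream, any server replica can recompute this summary after a restart without losing track of in-flight tasks.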
I have a fleet of multiple worker hosts polling for the following tasks of my SWF:
Activity 1: Perform some business logic to create a large file.
Activity 2: Wait for some time (a human approval, timer, etc.)
Activity 3: Transmit the file using some protocol (governed by input parameters of the SWF).
Activity 4: Clean-up the local-generated file.
The file generated in Activity 1 needs to be used again in Activity 3, and then eventually discarded at the end of the workflow.
The system works fine if there is only one host polling for all tasks. However, with multiple workers, I cannot seem to ensure that Activity 1 and Activity 3 end up on the same host.
I would like to avoid doing the following:
Uploading the file to a central repository (say S3) in Activity 1 and downloading it in Activity 3; or
Merging Activity 1 and Activity 3 into a single activity.
I have the following questions:
Is it possible to control that subsequent activities be run on the same host as opposed to going to any random host in my fleet?
What are specific guidelines/best practices on re-using resources generated in different activities in a workflow?
Is it possible to control that subsequent activities be run on the same host as opposed to going to any random host in my fleet?
Yes, absolutely. The basic idea is that SWF task lists (the queues used to deliver activity tasks) are dynamic, so each host can have its own task list, and the workflow can specify a specific task list name when calling an activity. See the fileprocessing sample, which executes the download activity on any host from the pool, then converts the file and uploads the result on the same host as the first activity.
What are specific guidelines/best practices on re-using resources generated in different activities in a workflow?
The approach of caching the result in the worker process memory or on the local disk is considered the best practice. Sometimes using an external data store and fetching the result each time also makes sense.
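The dynamic-task-list idea can be sketched with dicts of queues standing in for SWF task lists (a simplification with made-up names; the real mechanics, including how the worker reports its host-specific list back to the workflow, are in the fileprocessing sample):

```python
from collections import defaultdict, deque
from typing import Optional

task_lists = defaultdict(deque)  # task list name -> pending activity tasks

def schedule(task_list: str, activity: str) -> None:
    """Workflow side: put an activity task on a named task list."""
    task_lists[task_list].append(activity)

def poll(task_list: str) -> Optional[str]:
    """Worker side: take the next task from a named task list, if any."""
    return task_lists[task_list].popleft() if task_lists[task_list] else None

# Activity 1 goes to a shared list; any host in the fleet may pick it up.
schedule("shared-list", "create_file")
winning_host = "host-7"  # whichever host happened to poll it first
first_task = poll("shared-list")

# The worker reports its own host-specific task list name back to the
# workflow, which pins the follow-up activities there, so the local file
# created in Activity 1 is still on disk for Activity 3.
schedule(winning_host + "-tasks", "transmit_file")
schedule(winning_host + "-tasks", "cleanup_file")
```

The trade-off is that a host-pinned task list stalls if that host dies, so the workflow should also set activity timeouts and be prepared to restart the chain from Activity 1 on another host.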
Let's say I have two chaincodes in Hyperledger Fabric, ChaincodeA and ChaincodeB.
Some events in ChaincodeA will have to change state in ChaincodeB, for example change its balance. If invokeChaincode() is used in ChaincodeA to invoke some logic in ChaincodeB, which calls putState() to change ChaincodeB's state, could any race condition happen when reaching consensus? What are the best practices for handling this?
While invoking a chaincode you do not change the state; you only simulate transaction execution based on the current state. Only once the transaction is placed into a block by the ordering service and reaches the peer, where it has to pass the VSCC and MVCC checks, is it eventually committed. MVCC takes care of possible race conditions. Transaction execution works as follows:
1. The client sends a transaction proposal to a peer.
2. The peer simulates the transaction, signs the results, and puts them into a signed transaction proposal.
3. The client repeats step 2 as required by the expected endorsement policies.
4. Once the client has collected enough endorsements, it sends them to the ordering service.
5. The ordering service cuts a block and orders all transactions.
6. The block is delivered to the peers.
7. Each peer validates and eventually commits the block.
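The MVCC check that resolves the race can be sketched as follows (a heavy simplification of Fabric's read-write set validation, with my own names: each transaction records the versions it read during simulation, and the committer rejects it if any of those keys has changed since):

```python
state = {}  # key -> (version, value), the committed world state

def simulate_read(key):
    """Simulation phase: read a key and record the version seen."""
    return state.get(key, (0, None))

def commit(read_set, write_set):
    """Validation phase: apply write_set only if every read version still matches."""
    for key, version in read_set.items():
        if state.get(key, (0, None))[0] != version:
            return False  # MVCC conflict: a newer version was committed first
    for key, value in write_set.items():
        state[key] = (state.get(key, (0, None))[0] + 1, value)
    return True

# Two transactions simulate against the same state (both read balance at v0)...
version, _balance = simulate_read("balance")
tx1 = ({"balance": version}, {"balance": 100})
tx2 = ({"balance": version}, {"balance": 200})
# ...but after ordering, only the first commits; the second is invalidated.
first = commit(*tx1)
second = commit(*tx2)
```

This is why the invalidated client must re-simulate and resubmit rather than having its stale write silently applied.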
As I understand it, the two chaincodes are deployed on two different channels, and chaincodeA wants to call a method of chaincodeB. Per the specification this is possible, but only for read operations.
https://godoc.org/github.com/hyperledger/fabric/core/chaincode/shim#ChaincodeStub.InvokeChaincode
Can you please share the code showing how you are calling chaincodeB from chaincodeA?
We have a very simple AppFabric setup with two clients; let's call them Server A and Server B. Server A is also the lead cache host, and both Server A and B have a local cache enabled. We'd like to be able to update an item from Server B and have that change propagate to the local cache of Server A within, say, 30 seconds.
As I understand it, there are two different ways of getting changes propagated to the clients:
Set a timeout on the client cache to evict items every X seconds. On the next request for an item, the client will fetch it from the cache host, since the local cache no longer has it.
Enable notifications and effectively subscribe to updates from the cache host.
If my requirement is to get updates to all clients within 30 seconds, then setting a timeout of less than 30 seconds on the local cache appears to be the only choice with option 1. Given the size of the cache, it would be inefficient to evict everything (99.99% of which probably hasn't changed in the last 30 seconds).
I think what we need is option 2, but I'm not sure I understand how it works. I've read all of the MSDN documentation (http://msdn.microsoft.com/en-us/library/ee808091.aspx) and looked at some examples, but it is still unclear to me whether it is really necessary to write custom code, or whether that is only needed for extra handling.
So my question is: is it necessary to add code to your existing application if you want updates propagated to all local caches via notifications, or is the callback feature just a bonus way of adding extra handling when a notification is pushed down? Can I just enable notifications, set the appropriate polling interval on the client, and things will just work?
It seems like the default behavior (when notifications are enabled) should be to pull down fresh items automatically at each polling interval.
I ran some tests and am happy to say that you do NOT need to write any code to ensure that all clients are kept in sync, if you set the following as a child element of the cluster config:
In the client config you need to set sync="NotificationBased" on the element.
The element in the client config tells the client how often it should check for new notifications on the server. In this case, every 15 seconds the client will check for notifications and pull down any items that have changed.
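For reference, the configuration this answer is describing is likely shaped roughly as follows. This is a sketch from memory of the AppFabric documentation, not copied from a working setup; treat the element and attribute names as assumptions to verify against your AppFabric version.

```xml
<!-- Cluster config (assumed): enable notifications on the named cache -->
<cache name="default" notificationsEnabled="true" />

<!-- Client config (assumed): notification-based local cache sync,
     with the client polling the host for notifications every 15 seconds -->
<localCache isEnabled="true" sync="NotificationBased"
            objectCount="10000" ttlValue="300" />
<clientNotification pollInterval="15" />
```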
I'm guessing the callback logic that you can add to your app is just in case you want to add your own special logic (like emailing the president every time an item changes in the cache).