What is the main difference between WriteSet, TransferSet and ContractResult in Ride4dApps? - blockchain

In Ride4dApps, a callable function returns a WriteSet, a TransferSet, or a ContractResult, but I still do not understand the main difference between them. And who pays the fees for this kind of dApp?

TransferSet: a key-value list that defines which outgoing
payments will be made upon your contract invocation.
WriteSet: a key-value list that defines what data will be stored
in the contract's account upon your contract invocation (for example, the
caller's address and balance). Basically, it is the list of data entries
that should be recorded so the dApp state can be read.
ContractResult: the combination of a WriteSet and a TransferSet.
The sender pays the fee in WAVES (1 + 4 * the cost of each script involved) to the miner of the invocation.
Example:
ContractResult(
    WriteSet([DataEntry(currentKey, amount)]),
    TransferSet([ContractTransfer(i.caller, amount, unit)])
)
Where:
DataEntry (key : String, value : String | Binary | Integer | Boolean).
i.caller is the caller address.

Related

Camunda: query processes that do not have a specific variable

In Camunda (7.12) I can query processes by variable value:
runtimeService.createProcessInstanceQuery()
    .variableValueEquals("someVar", "someValue")
    .list();
I can even query processes for null-value variables:
runtimeService.createProcessInstanceQuery()
    .variableValueEquals("someVar", null)
    .list();
But how can I query processes that do not have variable someVar?
If I understand correctly what you are looking for, it is pretty simple. The ProcessInstanceQuery class also has a method called variableValueNotEquals(String name, Object value), which allows you to select the processes whose variable does not match a given value. In the Camunda Java API docs it is stated as:
variableValueNotEquals(String name, Object value)
Only select process instances which have a global variable with the given name, but with a different value than the passed value.
Documentation page for your reference:
https://docs.camunda.org/javadoc/camunda-bpm-platform/7.12/?org/camunda/bpm/engine/RuntimeService.html
So I believe you can simply do:
runtimeService.createProcessInstanceQuery()
    .variableValueNotEquals("someVar", null)
    .list();
Let me know if that helps.
First, get the list of IDs of all process instances that have "someVar".
Second, get the list of all process instance IDs in Camunda.
Then take the IDs from the second list that are not contained in the first.
Here is a Kotlin sample, as it is shorter than the Java equivalent, but the concept is the same:
val idsOfProcessesWithVar = runtimeService.createVariableInstanceQuery()
        .variableName("someVar")
        .list()
        .map { it.processInstanceId }
val allProcessesIds = runtimeService.createProcessInstanceQuery().list().map { it.id }
allProcessesIds.minus(idsOfProcessesWithVar)

Why use amount.token to initialize Paid variable?

I am going through the Corda R3 training course and I am making headway, but when asked to create a Paid variable initialized to 0, the answer is:
package net.corda.training.state
import net.corda.core.contracts.Amount
import net.corda.core.contracts.ContractState
import net.corda.core.identity.Party
import java.util.*
/**
* This is where you'll add the definition of your state object. Look at the unit tests in [IOUStateTests] for
* instructions on how to complete the [IOUState] class.
*
* Remove the "val data: String = "data" property before starting the [IOUState] tasks.
*/
data class IOUState(val amount: Amount<Currency>,
                    val lender: Party,
                    val borrower: Party,
                    val paid: Amount<Currency> = Amount(0, amount.token)) :
        ContractState {
    override val participants: List<Party> get() = listOf()
}
Now I understand that we need to cast the value to type Amount, but why amount.token? I took the solution from:
https://github.com/corda/corda-training-solutions/blob/master/kotlin-source/src/main/kotlin/net/corda/training/state/IOUState.kt
Also, the task was to define it as Pounds, but I cannot figure out how to do so.
I find the reference for Pounds under:
https://docs.corda.net/api/kotlin/corda/net.corda.finance/kotlin.-int/index.html
I just do not understand how I would define the function.
Anyone have any pointers or suggestions for me? This code compiles and the tests pass, but I want to understand why... Thanks!
The token simply indicates what this is an amount of.
So when used here:
val paid: Amount<Currency> = Amount(0, amount.token)
You're taking whatever token was used for the amount parameter (e.g. POUNDS, DOLLARS, etc.) and setting the paid Amount to the same token type.
Take a look at how it's done in currencies.kt in Corda

How to sign a transaction in multichain?

I want to know the sequence of API or CLI commands needed to sign a transaction in MultiChain.
Here is the list of commands you have to call in order to sign a transaction in MultiChain.
First, in order to sign a transaction you need an address, so generate one on the MultiChain node by calling:
1. createkeypairs: it gives you three key-value pairs: a) a public key, b) a public address, c) a private key.
Then import this address into the node:
2. importaddress (address)
After importing the address, call createrawsendfrom, which gives you a hex string:
3. createrawsendfrom
After createrawsendfrom, call signrawtransaction and pass it the hex obtained from createrawsendfrom:
4. signrawtransaction (hex obtained from createrawsendfrom)
After signrawtransaction, call sendrawtransaction and pass it the hex obtained from signrawtransaction; on success it returns the txid, and the transaction can then be seen in the explorer:
5. sendrawtransaction (hex obtained from signrawtransaction)
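Run with multichain-cli, the sequence above looks roughly like this. This is a sketch, not a verbatim transcript: the chain name chain1, both addresses, the amount, and the abbreviated hex and key strings are all placeholders.

```shell
# 1. Generate a key pair; prints a public key, an address and a private key
#    (the key is NOT stored in the node's wallet)
multichain-cli chain1 createkeypairs

# 2. Make the node watch the generated address
multichain-cli chain1 importaddress 1ABCexampleaddress

# 3. Build an unsigned raw transaction from that address; prints a hex blob
multichain-cli chain1 createrawsendfrom 1ABCexampleaddress '{"1DEFexampleaddress":0.01}'

# 4. Sign the hex from step 3; if the wallet does not hold the private key,
#    the privkey from step 1 can be passed explicitly as shown
multichain-cli chain1 signrawtransaction 0100abcd... '[]' '["Vexampleprivkey"]'

# 5. Broadcast the signed hex from step 4; prints the txid
multichain-cli chain1 sendrawtransaction 0100dcba...
```

Note that steps 3 and 4 both print hex; step 5 must be given the hex returned by signrawtransaction, not the unsigned one.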

How to run Apache beam write to big query based on conditions

I am trying to read values from Google Pub/Sub and Google Cloud Storage and insert them into BigQuery based on a count condition, i.e., if a value already exists it should not be inserted, otherwise it can be inserted.
My code looks like this:
p_bq = beam.Pipeline(options=pipeline_options1)
logging.info('Start')

"""Pipeline starts. Create creates a PCollection from what we read from Cloud Storage"""
test = p_bq | beam.Create(data)

"""The pipeline then reads from Pub/Sub and combines the Pub/Sub data with the Cloud Storage data"""
BQ_data1 = p_bq | 'readFromPubSub' >> beam.io.ReadFromPubSub('mytopic') \
                | beam.Map(parse_pubsub, param=AsList(test))
where 'data' is the value from Cloud Storage and what is read from Pub/Sub is the value from Google Analytics. parse_pubsub returns two values: one is the dictionary and the other is a count (which states whether the value is present in the table):
count = comparebigquery(insert_record)
return (insert_record, count)
How can I provide a condition for the BigQuery insertion, given that the value is inside a PCollection?
New edit:
class Process(beam.DoFn):
    def process(self, element, trans):
        if element['id'] in trans:
            # Emit the element to the 'present' output.
            yield pvalue.TaggedOutput('present', element)
        else:
            # Emit the element to the 'absent' output.
            yield pvalue.TaggedOutput('absent', element)
test1 = p_bq | "TransProcess" >> beam.Create(trans)
where trans is the list
BQ_data2 = BQ_data1 | beam.ParDo(Process(), trans=AsList(test1)).with_outputs('present', 'absent')
present_value = BQ_data2.present
absent_value = BQ_data2.absent
Thank you in advance
You could use
beam.Filter(lambda_function)
after the beam.Map step to filter out elements for which the lambda_function returns False.
You can split the PCollection in a ParDo function using additional outputs, based on a condition.
Don't forget to provide the output tags to the ParDo via .with_outputs(), and when emitting an element to a specific output, use pvalue.TaggedOutput().
Then select the PCollection you need and write it to BigQuery.
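Stripped of the Beam machinery, the split performed by the DoFn above is just a partition by membership. Here is a minimal plain-Python sketch of the same branching (split_by_membership and the sample records are illustrative names, not Beam API):

```python
def split_by_membership(records, known_ids):
    """Partition records into 'present'/'absent' buckets, mirroring
    the two tagged outputs of the Beam DoFn."""
    outputs = {'present': [], 'absent': []}
    for record in records:
        tag = 'present' if record['id'] in known_ids else 'absent'
        outputs[tag].append(record)
    return outputs

records = [{'id': 1}, {'id': 2}, {'id': 3}]
buckets = split_by_membership(records, known_ids={2})
print(buckets['present'])  # [{'id': 2}]
print(buckets['absent'])   # [{'id': 1}, {'id': 3}]
```

In the pipeline, the 'present' bucket corresponds to BQ_data2.present and the 'absent' bucket to BQ_data2.absent; only the latter would be written to BigQuery.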

Theano scan function

Example taken from: http://deeplearning.net/software/theano/library/scan.html
k = T.iscalar("k")
A = T.vector("A")

# Symbolic description of the result
result, updates = theano.scan(fn=lambda prior_result, A: prior_result * A,
                              outputs_info=T.ones_like(A),
                              non_sequences=A,
                              n_steps=k)

# We only care about A**k, but scan has provided us with A**1 through A**k.
# Discard the values that we don't care about. Scan is smart enough to
# notice this and not waste memory saving them.
final_result = result[-1]

# compiled function that returns A**k
power = theano.function(inputs=[A, k], outputs=final_result, updates=updates)

print power(range(10), 2)
print power(range(10), 4)
What is prior_result? More accurately, where is prior_result defined?
I have the same question about a lot of the examples given on http://deeplearning.net/software/theano/library/scan.html
For example,
components, updates = theano.scan(fn=lambda coefficient, power, free_variable: coefficient * (free_variable ** power),
                                  outputs_info=None,
                                  sequences=[coefficients, theano.tensor.arange(max_coefficients_supported)],
                                  non_sequences=x)
Where are power and free_variable defined?
This is using a Python feature called a "lambda". A lambda is an unnamed, one-line Python function. It has this form:
lambda [params...]: expression
In your example it is:
lambda prior_result, A: prior_result * A
This is a function that takes prior_result and A as inputs. It is passed to the scan() function as the fn parameter. scan() calls it with two arguments: the first corresponds to what was provided in the outputs_info parameter (the result of the previous step), and the second to what was provided in the non_sequences parameter.
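To see why the first lambda parameter is the previous step's output, here is a plain-Python imitation of what scan does in the A**k example. scan_like and step are illustrative names; the real theano.scan builds a symbolic graph rather than looping eagerly like this:

```python
def scan_like(fn, outputs_info, non_sequences, n_steps):
    """Call fn repeatedly, feeding each call's result back in as
    the first argument (prior_result) of the next call."""
    results = []
    prior_result = outputs_info          # initial value, like T.ones_like(A)
    for _ in range(n_steps):
        prior_result = fn(prior_result, non_sequences)
        results.append(prior_result)
    return results                       # analogous to A**1 .. A**k

A = [1, 2, 3]
step = lambda prior_result, A: [p * a for p, a in zip(prior_result, A)]
result = scan_like(step, outputs_info=[1] * len(A), non_sequences=A, n_steps=3)
print(result[-1])  # [1, 8, 27], i.e. A**3 elementwise
```

So prior_result is never defined by you; it is the name the lambda gives to the value scan carries over from the previous iteration, seeded by outputs_info on the first step.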