How to sign a transaction in MultiChain?

I want to know the sequence of API or CLI commands needed to sign a transaction in MultiChain.

Here is the list of commands you have to call in order to sign a transaction in MultiChain.
First, in order to sign a transaction you need an address, so to generate one on a MultiChain node you have to call:
1.) createkeypairs - it will give you three key/value pairs: a) public key b) public address c) private key
Then you have to import this address into the node using:
2.) importaddress ( address )
After importing the address you have to call createrawsendfrom, which will give you a hex string:
3.) createrawsendfrom
After createrawsendfrom you have to call signrawtransaction and pass the hex obtained from createrawsendfrom:
4.) signrawtransaction ( hex obtained from createrawsendfrom )
After signrawtransaction, call sendrawtransaction and pass the hex obtained from signrawtransaction; it will report the transaction as successful and return the txid. Your transaction can be seen on an explorer after this.
5.) sendrawtransaction ( hex obtained from signrawtransaction )
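The same five-call sequence can be sketched over MultiChain's JSON-RPC interface. The sketch below only builds and prints the JSON-RPC payloads in order; the chain name "mychain" and the placeholder params ("ADDRESS", "HEX", ...) are illustrative assumptions, and in a real script each payload would be POSTed to the node's RPC port with the credentials from multichain.conf, feeding each result into the next call:

```python
import json

def rpc_payload(method, params):
    """Build a JSON-RPC payload as multichaind expects it."""
    return {"method": method, "params": params, "id": 1, "chain_name": "mychain"}

# The five-step signing sequence from the answer above, in order.
# Placeholder values stand in for the real outputs of the previous call.
steps = [
    rpc_payload("createkeypairs", []),
    rpc_payload("importaddress", ["ADDRESS", "", False]),           # address, label, rescan
    rpc_payload("createrawsendfrom", ["FROM_ADDRESS", {"TO_ADDRESS": 0.1}]),
    rpc_payload("signrawtransaction", ["HEX", [], ["PRIVKEY"]]),    # hex, prevtxs, privkeys
    rpc_payload("sendrawtransaction", ["SIGNED_HEX"]),
]

for s in steps:
    print(json.dumps(s))
```

The parameter shapes follow MultiChain's documented API, but check them against your node's version before relying on them.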

Related

Convert wallet address on Tron

I need to get the wallet address from a TRC-20 transaction on the Tron network.
I found out that this info can be taken from the input field. I use web3.js to parse this field and get an address like
0xee564858c4874cac2d1fff98c1eabba915f50b2f
But I need it to be like
TXhRBjb8GDnodu3VY6vgmNXcGnLpQhm9NW
I can't find info on how to convert it.
Since you have a hex address and need Base58, tronWeb.address.fromHex is the one you want. The relevant helpers:
tronWeb.address.toHex :
Converts a Base58 format address to hex.
const convertedAddress = tronWeb.address.toHex(givenAddress);
tronWeb.address.fromHex :
Converts a hex format address to Base58.
const convertedAddress = tronWeb.address.fromHex(givenAddress);
tronWeb.address.fromPrivateKey :
Derives the corresponding address from a private key.
const givenAddress = tronWeb.address.fromPrivateKey(privateKey);
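If you'd rather not pull in tronWeb for this, the conversion itself is just Base58Check: prepend the 0x41 mainnet prefix byte to the 20-byte hex address, append the first 4 bytes of a double SHA-256 checksum, and Base58-encode. A minimal standard-library-only sketch in Python (the helper names are my own):

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Plain Base58 encoding with the Bitcoin/Tron alphabet."""
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    # Each leading zero byte becomes a leading '1'
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def tron_from_hex(evm_hex: str) -> str:
    """Convert a 0x-prefixed 20-byte EVM-style address to Tron Base58Check."""
    hex_part = evm_hex[2:] if evm_hex.startswith("0x") else evm_hex
    raw = bytes.fromhex("41" + hex_part)  # 0x41 = Tron mainnet prefix byte
    checksum = hashlib.sha256(hashlib.sha256(raw).digest()).digest()[:4]
    return b58encode(raw + checksum)

print(tron_from_hex("0xee564858c4874cac2d1fff98c1eabba915f50b2f"))
```

Because of the 0x41 prefix, every mainnet result is 34 characters and starts with "T".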

Camunda: query processes that do not have a specific variable

In Camunda (7.12) I can query processes by variable value:
runtimeService.createProcessInstanceQuery()
.variableValueEquals("someVar", "someValue")
.list();
I can even query processes for null-value variables:
runtimeService.createProcessInstanceQuery()
.variableValueEquals("someVar", null)
.list();
But how can I query processes that do not have variable someVar?
If I understand correctly what you are looking for, it's pretty simple. The ProcessInstanceQuery class also has a method variableValueNotEquals(String name, Object value), which allows you to select the processes that do not match a variable. In the Camunda Java API docs it is stated as:
variableValueNotEquals(String name, Object value)
Only select process instances which have a global variable with the given name, but with a different value than the passed value.
Documentation page for your reference:
https://docs.camunda.org/javadoc/camunda-bpm-platform/7.12/?org/camunda/bpm/engine/RuntimeService.html
So I believe you can simply do:
runtimeService.createProcessInstanceQuery()
.variableValueNotEquals("someVar", null)
.list();
Let me know if that helps.
First, get the list of IDs of all process instances which have "someVar".
Second, get the list of all process instance IDs in Camunda.
Then take the IDs from the second list which are not contained in the first.
Here is a Kotlin sample, as it's shorter than the Java code, but the concept is the same:
val idsOfProcessesWithVar = runtimeService.createVariableInstanceQuery()
    .variableName("someVar")
    .list()
    .map { it.processInstanceId }
val allProcessesIds = runtimeService.createProcessInstanceQuery().list().map { it.id }
val idsWithoutVar = allProcessesIds.minus(idsOfProcessesWithVar)
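Stripped of the Camunda API, the set-difference idea above can be sketched in plain Python; the two ID lists are hypothetical stand-ins for the results of createVariableInstanceQuery() and createProcessInstanceQuery():

```python
# Hypothetical stand-ins for the two Camunda query results:
ids_of_processes_with_var = ["p1", "p3"]    # instances that have someVar
all_process_ids = ["p1", "p2", "p3", "p4"]  # every running instance

# Instances that do NOT have someVar = difference of the two lists,
# preserving the order of the full list
with_var = set(ids_of_processes_with_var)
ids_without_var = [pid for pid in all_process_ids if pid not in with_var]
print(ids_without_var)  # → ['p2', 'p4']
```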

What is the main difference between WriteSet, TransferSet and ContractResult in Ride4dApps?

In Ride4dApps, a callable function returns a WriteSet, a TransferSet or a ContractResult, but I still do not get the main difference between them. And who pays the fees for this kind of dApp?
TransferSet: a key/value list which defines what outgoing payments will be made upon your contract invocation.
WriteSet: a key/value list which defines what data will be stored in the contract's account upon your contract invocation (for example the caller address and balance). So basically it's a list of data entries that should be recorded to the dApp state.
ContractResult: the combination of a WriteSet and a TransferSet.
The sender pays fees in WAVES (1 + 4*(the cost of each script involved)) to the miner of the invocation.
Example:
ContractResult(
    WriteSet([DataEntry(currentKey, amount)]),
    TransferSet([ContractTransfer(i.caller, amount, unit)])
)
Where:
DataEntry (key : String, value : String | Binary | Integer | Boolean).
i.caller is the caller address.

How to run Apache beam write to big query based on conditions

I am trying to read values from Google Pub/Sub and Google Cloud Storage and put those values into BigQuery based on a count condition, i.e., if the value already exists it should not be inserted, otherwise it can be inserted.
My code looks like this:
p_bq = beam.Pipeline(options=pipeline_options1)
logging.info('Start')
# Pipeline starts. Create builds a PCollection from what we read from Cloud Storage
test = p_bq | beam.Create(data)
# The pipeline then reads from Pub/Sub and combines it with the Cloud Storage data
BQ_data1 = (p_bq
            | 'readFromPubSub' >> beam.io.ReadFromPubSub('mytopic')
            | beam.Map(parse_pubsub, param=AsList(test)))
where 'data' is the value from Google Cloud Storage and the Pub/Sub read is the value from Google Analytics. parse_pubsub returns two values: one is the dictionary and the other is a count (which states whether the value is present in the table or not):
count = comparebigquery(insert_record)
return (insert_record, count)
How can I provide a condition for the BigQuery insertion, since the value is in a PCollection?
New edit:
class Process(beam.DoFn):
    def process(self, element, trans):
        if element['id'] in trans:
            # Emit this element to the 'present' output.
            yield pvalue.TaggedOutput('present', element)
        else:
            # Emit this element to the 'absent' output.
            yield pvalue.TaggedOutput('absent', element)

test1 = p_bq | "TransProcess" >> beam.Create(trans)
where trans is the list
BQ_data2 = BQ_data1 | beam.ParDo(Process(), trans=AsList(test1)).with_outputs('present', 'absent')
present_value = BQ_data2.present
absent_value = BQ_data2.absent
Thank you in advance
You could use
beam.Filter(lambda_function)
after the beam.Map step to filter out elements that return False when passed to the lambda_function.
You can split the PCollection in a ParDo function using Additional-outputs based on a condition.
Don't forget to provide output tags to the ParDo function .with_outputs()
And when emitting an element of a PCollection to a specific output, use pvalue.TaggedOutput().
Then you select the PCollection you need and write it to BigQuery.
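The branching logic described above can be sketched outside Beam as well; this plain-Python helper mirrors what the tagged-output DoFn does, assuming each record is a dict with an 'id' field (as in the question's code):

```python
def split_present_absent(records, known_ids):
    """Partition records the way the tagged-output DoFn does:
    records whose id already exists go to 'present', the rest to 'absent'."""
    known = set(known_ids)
    present, absent = [], []
    for rec in records:
        (present if rec["id"] in known else absent).append(rec)
    return present, absent

records = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 3, "v": "c"}]
present, absent = split_present_absent(records, known_ids=[2])
print(absent)  # only these would be written to BigQuery
```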

Where condition in Geode

I am new to Geode.
I am adding data like below:
gfsh>put --key=('id':'11') --value=('firstname':'Amaresh','lastname':'Dhal') --region=region
Result : true
Key Class : java.lang.String
Key : ('id':'11')
Value Class : java.lang.String
Old Value : <NULL>
when I query like this:
gfsh>query --query="select * from /region"
Result : true
startCount : 0
endCount : 20
Rows : 9
Result
-----------------------------------------
('firstname':'A2','lastname':'D2')
HI
Amaresh
Amaresh
('firstname':'A1','lastname':'D1')
World
World
('firstname':'Amaresh','lastname':'Dhal')
Hello
NEXT_STEP_NAME : END
When I am trying to query like below I am not getting the value:
gfsh>query --query="select * from /region r where r.id='11'"
Result : true
startCount : 0
endCount : 20
Rows : 0
NEXT_STEP_NAME : END
Of course I can use the get command, but I want to use a where condition. Where am I going wrong? It gives no output.
Thanks
In Geode the key is not "just another column". In fact, the basic query syntax implicitly queries only the fields of the value. However, you can include the key in your query using this syntax:
select value.lastname, value.firstname from /region.entries where key.id=11
Also, it is fairly common practice to include the id field in your value class even though it is not strictly required.
What Randy said is exactly right: the "key" is not another column. The exact format of the query should be
gfsh>query --query="select * from /Address.entries where key=2"
What you are looking for here is getting all the "entries" on the region "Address" and then querying the key.
To check which one you want to query you can fire this query
gfsh>query --query="select * from /Address.entries"
You can always use the get command to fetch the data pertaining to a specific key.
get --key=<KEY_NAME> --region=<REGION_NAME>
Example:
get --key=1 --region=Address
Reference: https://gemfire.docs.pivotal.io/910/geode/tools_modules/gfsh/command-pages/get.html